Instruction: Helplessness/hopelessness, minimization and optimism predict survival in women with invasive ovarian cancer: a role for targeted support during initial treatment decision-making? Abstracts: abstract_id: PUBMED:26732767 Helplessness/hopelessness, minimization and optimism predict survival in women with invasive ovarian cancer: a role for targeted support during initial treatment decision-making? Purpose: Women with advanced ovarian cancer generally have a poor prognosis, but there is significant variability in survival despite similar disease characteristics and treatment regimens. The aim of this study was to determine whether psychosocial factors predict survival in women with ovarian cancer, controlling for potential confounders. Methods: The sample comprised 798 women with invasive ovarian cancer recruited into the Australian Ovarian Cancer Study and a subsequent quality of life study. Validated measures of depression, optimism, minimization, helplessness/hopelessness, and social support were completed every 3-6 months for up to 2 years. Four hundred nineteen women (52.5%) died over the follow-up period. Associations between time-varying psychosocial variables and survival were tested using adjusted Cox proportional hazard models. Results: There was a significant association between psychosocial variables measured prior to first progression and overall survival, with higher optimism (adjusted hazard ratio per 1 standard deviation (HR) = 0.80, 95% confidence interval (CI) 0.65-0.97), higher minimization (HR = 0.79, CI 0.66-0.94), and lower helplessness/hopelessness (HR = 1.40, CI 1.15-1.71) associated with longer survival. After disease progression, these variables were not associated with survival (optimism HR = 1.10, CI 0.95-1.27; minimization HR = 1.12, CI 0.95-1.31; and helplessness/hopelessness HR = 0.86, CI 0.74-1.00). Depression and social support were not associated with survival. Conclusions: In women with invasive ovarian cancer, psychosocial variables prior to disease progression appear to impact overall survival, suggesting a preventive rather than modifying role. Addressing psychosocial responses to cancer and their potential impact on treatment decision-making early in the disease trajectory may benefit survival and quality of life. abstract_id: PUBMED:12586585 Women's perceptions about treatment decision making for ovarian cancer. Objectives: To identify, in women with advanced epithelial ovarian cancer who had just undergone surgery, the extent to which they (1) perceived that they had treatment options, (2) understood the treatment-related risks and benefits, and (3) preferred to participate in the treatment decision-making process. Methods: This qualitative study included women who underwent initial surgery for stage 3 or 4 ovarian cancer and who had received less than two cycles of chemotherapy. In-depth, semistructured interviews were conducted with 21 patients between June 1999 and February 2001. The interviews were content-analyzed according to the themes that arose in the interviews. Results: Five themes were identified. (1) Knowledge of treatment benefits and risks. Women understood that the treatment had both survival and quality of life benefits. Women could clearly articulate the risks of chemotherapy. (2) Readiness to make a decision.
When making treatment decisions, women described being overwhelmed by the effects of concurrent drugs such as analgesics, the severity of the illness, the unexpected diagnosis of cancer and grief, and feeling pressured into a decision. (3) Perception of a treatment choice. Most women felt that they made their treatment decision; however, most women did not perceive that they had a treatment choice. Thus, treatment decision making is really a process of coming to terms with the disease and the recommended treatment. (4) Physician-patient relationship. All women suggested that their doctor knew the right treatment for them, and they felt confident in their cancer physician. (5) Social supports. Women described supports through decision-making processes that included individuals who advocated for them, faith, and past experience with the cancer system. Hindrances to decision making included people who were negative, the cancer label, and employers. Conclusions: Women with advanced epithelial ovarian cancer did not describe the treatment decision-making process as shared; rather, they described an interaction that was directed largely by the physician. These women attributed this form of decision-making to their advanced age, severity of illness, the immediate ramifications of treatment choices, and lack of advocacy for a different model of interaction. Thus, the onus is on the physician to ensure that there is an environment for shared decision-making in the event that the patient is interested in such an interaction. abstract_id: PUBMED:20101941 Women's decision-making needs related to treatment for recurrent ovarian cancer: a pilot study. The purpose of this pilot study was to describe the decision-making needs of women with ovarian cancer related to treatment of recurrent disease. Design: A retrospective, cross-sectional needs assessment was conducted. Data were collected using face-to-face interviews and analyzed using content analysis. Results: Thirteen women were recruited. Eleven women did not report difficulty in making the decision. Five women perceived that they had options. Seven had a passive role in the decision. When considering future decisions, nine women preferred a shared or autonomous role; seven wanted to be presented with options. The role of nurses in providing information was emphasized. Conclusions: The findings provide some beginning direction for an inter-professional decision support approach, as well as implications for future research. abstract_id: PUBMED:22993365 Treatment decision-making processes in the systemic treatment of ovarian cancer: review of the scientific evidence. Background: The systemic treatment of epithelial ovarian cancer (OC) is one of the cornerstones in the multimodal management of advanced OC in both primary and recurrent stages of this disease. In most situations various treatment options are available, but only limited data exist about the treatment decision-making process. Therefore, we conducted a review of the current literature regarding the decision-making process concerning systemic therapy in patients with advanced ovarian cancer. Materials And Methods: The electronic database MEDLINE (PubMed) was systematically reviewed for studies that evaluate the treatment decision-making processes in patients with advanced OC. Results: The PubMed database was searched in detail for all titles and abstracts of potentially relevant studies published between 1995 and 2011.
An initial search identified 15 potentially relevant studies, but only seven met all inclusion criteria. Factors that influence treatment decisions in patients with OC include not only rational arguments and medical reasons, but also individual attitudes, fears, existential questions, various projections resulting from the physician-patient relationship, and the social environment. The physician's personal experience with OC treatment seems to be an important factor, followed by previous personal experience with medical issues, and the fear of side-effects and future metastases. Family and self-support organisations also seem to play a significant role in the treatment decision-making process. Conclusion: This review underlines the need for more research activities to explore the treatment decision-making process to enable the best individual support for patients in treatment decision-making. It is a challenge for clinicians to determine the individual information needs of women with OC and to involve them during the decision-making process to the extent they wish. abstract_id: PUBMED:19319830 It's a choice to move forward: women's perceptions about treatment decision making in recurrent ovarian cancer. Objective: This research explores the treatment decision-making (TDM) experiences of women with recurrent ovarian cancer (ROC) with regard to treatment options; their understanding of risks and benefits of various treatment options; the decision-making role they want for themselves and for their oncologist; and the social context of the consultation as it pertains to the decision. Methods: We conducted semi-structured interviews with 26 women at the time of first recurrence. Through inductive data analysis, key themes were identified. Results: Many women describe self-identifying the cancer recurrence fairly quickly due to new symptoms. Many feel that the goal for treating their recurrence is to control rather than cure the cancer. They describe the subsequent process of diagnosis and TDM for ROC as quick and straightforward, with all women accepting the oncologists' treatment recommendation. They feel that the type and number of treatment options are limited. They have a strong desire for physician continuity in their care. Participants feel that their doctor's recommendations as well as their previous experience with ovarian cancer are strong factors influencing their current TDM process. Conclusions: Shared decision making is based on the simultaneous participation of both the physician and patient in TDM. When faced with ROC, women feel that their doctor's recommendation and their past experience with treatment and TDM are prominent factors influencing the current TDM process. abstract_id: PUBMED:12754781 Living with ovarian cancer: women's perspectives on treatment and treatment decision-making. In the year 2002, 2,500 women will have been diagnosed with ovarian cancer. Treatment for ovarian cancer is arduous, involving invasive surgery, chemotherapy, and/or radiation therapy. Studies have described the side effects of ovarian cancer treatment, but little has been written about women's perspectives on receiving that treatment. This work was undertaken to describe women's perceptions of living with ovarian cancer and their experiences with treatment. Eighteen women who had been diagnosed with ovarian cancer were interviewed in depth, and theme analysis was undertaken with their transcripts.
Three prominent themes emerged during the analysis: 1) initial treatment plans were overwhelming, 2) involvement in treatment decision-making was minimal, and 3) treatment had many side effects and complications. abstract_id: PUBMED:29644780 Support needs of couples with hereditary breast and ovarian cancer during reproductive decision making. Objective: Reproductive decision making for couples with hereditary breast and ovarian cancer (HBOC) is complex and can result in decisional conflict or regret. This study investigated couples' support needs and aimed to identify vulnerable couples. Ultimately, we should strive to develop a clear standard of care guideline regarding reproductive decision support. Methods: Mixed methods were used for data collection. A focus group study was conducted among 18 couples (N = 35) with HBOC who had made a reproductive decision after reproductive counselling. Subsequently, 129 similar couples (N = 258) were invited to complete a cross-sectional survey based on the focus group study. Results: Clinical and practical aspects of reproductive counselling were positively evaluated in the focus group study, although couples indicated a need for additional support with emotional and social concerns in which their relationship, social environment, and the way they picture their desired family were key elements. The survey was completed by 86 participants. Making a reproductive choice was experienced as (very) difficult by 43%, and 69% showed a need for additional support during decision making. Younger participants and those who opted for a natural pregnancy experienced more difficulty with reproductive decision making, and partners showed a higher need for psychological support than carriers. Conclusions: Couples with HBOC who need to make a reproductive decision have specific needs for guidance and support, of which the desired content and methods can vary. It is therefore important to identify vulnerable couples and to attune counselling to couples' needs. abstract_id: PUBMED:27841939 Novel Psychological Intervention for Decision Support in Women Considering Fertility Preservation Before Cancer Treatment. Decisions about fertility preservation in young adults with cancer are often made under conditions of high subjective stress and time pressure. In women, these decisions are further complicated by the invasiveness of fertility preservation procedures, concerns about health risks of these procedures, and financial barriers. This article describes the rationale for and development of a brief decision support and stress management intervention for women aged 18-40 who are considering fertility preservation before cancer treatment. Case examples from participants are provided to illustrate the potential applicability of the intervention to survivors in a variety of circumstances. abstract_id: PUBMED:32725550 Interventions that Facilitate Shared Decision-Making in Cancers with Active Surveillance as Treatment Option: a Systematic Review of Literature. Purpose Of Review: Medical decisions concerning active surveillance are complex, especially when evidence on superiority of one of the treatments is lacking. Decision aids have been developed to facilitate shared decision-making on whether to pursue an active surveillance strategy. However, it is unclear how these decision aids are designed and which outcomes are considered relevant. 
The purpose of this study is to systematically review all decision aids in the field of oncological active surveillance strategies and the outcomes used by authors to assess their efficacy. Recent Findings: A search was performed in Embase, Medline, Web of Science, Cochrane, PsycINFO Ovid and Google Scholar until June 2019. Eligible studies concerned interventions aiming to facilitate shared decision-making for patients confronted with several treatment alternatives, with active surveillance being one of the treatment alternatives. Twenty-three eligible articles were included. Twenty-one articles included patients with prostate cancer, one with thyroid cancer and one with ovarian cancer. Interventions mostly consisted of an interactive web-based decision aid format. After categorization of outcomes, seven main groups were identified: knowledge, involvement in decision-making, decisional conflict, treatment preference, decision regret, anxiety and health-related outcomes. Although active surveillance has been implemented for several malignancies, interventions that facilitate shared decision-making between active surveillance and other equally effective treatment alternatives are scarce. Future research should focus on developing interventions for malignancies like rectal cancer and oesophageal cancer as well. The efficacy of interventions is mostly assessed using short-term outcomes. abstract_id: PUBMED:32926832 Integrative Review of Reproductive Decision Making of Women Who Are BRCA Positive. Objective: To synthesize research findings about reproductive decision making among women who are BRCA positive. Data Sources: PubMed and CINAHL. Study Selection: Articles published in English between 2000 and June 28, 2020, about the reproductive decision making of women with a confirmed BRCA1 or BRCA2 mutation. Data Extraction: We extracted data about participants, study design, analysis, follow-up, and results. We used the modified Downs and Black checklist and Kennelly's qualitative data analysis to rate studies for quality and applicability. Data Synthesis: We included five of 257 screened articles in our synthesis. The total sample size of the five studies was 1,468 women. The most prevalent factors related to reproductive decision making were the impending decisions regarding childbearing and family choices, including decisions about biological children, preventive surgery, preimplantation genetic diagnosis and prenatal diagnosis to prevent further transmission of a BRCA mutation, and family planning. Conclusion: A lack of knowledge exists about the reproductive decision-making processes of women who are BRCA positive. A better understanding of this process would provide nurses and other clinicians with the knowledge needed to support these women through their reproductive life choices. Answer: A study of 798 women with invasive ovarian cancer, recruited through the Australian Ovarian Cancer Study and a subsequent quality of life study, found that certain psychosocial factors measured prior to disease progression were predictive of overall survival. Specifically, higher levels of optimism and minimization, as well as lower levels of helplessness/hopelessness, were associated with longer survival. These associations were significant when controlling for potential confounders, suggesting that these psychosocial variables may play a preventive role in the disease trajectory.
The study concluded that addressing these psychosocial responses to cancer early on could potentially benefit survival and quality of life, indicating a role for targeted support during initial treatment decision-making (PUBMED:26732767). This finding aligns with the broader context of treatment decision-making in ovarian cancer, where women's perceptions and psychosocial factors play a significant role. For instance, women with advanced epithelial ovarian cancer often do not perceive that they have a treatment choice and describe the decision-making process as being largely directed by their physician. This suggests that there is a need for an environment that supports shared decision-making, especially if the patient is interested in such an interaction (PUBMED:12586585). Additionally, women with recurrent ovarian cancer often prefer a shared or autonomous role in future decision-making and emphasize the role of nurses in providing information, highlighting the importance of support during the decision-making process (PUBMED:20101941). The review of scientific evidence on treatment decision-making processes in ovarian cancer further emphasizes the complexity of these decisions, which are influenced by individual attitudes, fears, existential questions, and the social environment. The physician's experience and the patient's social support play significant roles in this process (PUBMED:22993365). Moreover, women's perceptions about treatment decision-making in recurrent ovarian cancer indicate that their past experiences and physician's recommendations are influential factors (PUBMED:19319830). In conclusion, the evidence suggests that psychosocial factors such as helplessness/hopelessness, minimization, and optimism do predict survival in women with invasive ovarian cancer, and there is a role for targeted support during initial treatment decision-making to potentially improve outcomes. This support could address the emotional, social, and informational needs of patients, facilitating a shared decision-making process that respects their preferences and optimizes their psychosocial well-being.
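The survival findings in the record above are reported as hazard ratios per 1-standard-deviation change from adjusted Cox proportional hazards models. As a rough illustration of how such per-SD hazard ratios are produced, here is a minimal sketch using the lifelines Python library on synthetic data; every variable name and value is hypothetical, and the actual study additionally used time-varying covariates and adjusted for clinical confounders.

```python
# Minimal sketch: per-1-SD hazard ratios from a Cox model on synthetic data.
# Requires numpy, pandas, and lifelines; all names and values are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 798  # cohort size borrowed from the abstract, purely for flavor

df = pd.DataFrame({
    "optimism": rng.normal(50, 10, n),
    "helplessness": rng.normal(30, 8, n),
})
# Synthetic survival times loosely tied to the covariates.
hazard = np.exp(-0.02 * (df["optimism"] - 50) + 0.04 * (df["helplessness"] - 30))
df["months"] = rng.exponential(24.0 / hazard)
df["died"] = (rng.uniform(size=n) < 0.525).astype(int)  # ~52.5% events, as reported

# Standardize covariates so each coefficient is "per 1 SD", the scale the paper uses.
for col in ("optimism", "helplessness"):
    df[col] = (df[col] - df[col].mean()) / df[col].std()

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()  # the exp(coef) column is the hazard ratio per 1-SD change
```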
Instruction: Do we need a chart of quality for websites related to cosmetic surgery? Abstracts: abstract_id: PUBMED:12927882 Do we need a chart of quality for websites related to cosmetic surgery? The convergence of medicine and the Internet may be one of the most remarkable transformations in the health care business. Following the path led by the United States, the number of French websites related to cosmetic surgery is growing rapidly. In this study, we intend to assess the quality of French websites dedicated to cosmetic surgery against the good-quality criteria currently available and the recommendations suggested by the French Medical Association. Materials And Methods: We browsed the main French search engines on the Web, initially answering the following questions: how many webpages are available regarding cosmetic surgery, and what are the best-referenced websites? For each website, we surveyed the following data: author's name and qualification, date of creation and last update, sources of information, level of interactivity, and adherence to a chart of quality such as HON. Results: Eighty-five websites were surveyed and assessed. Forty-five French websites were active websites dedicated to cosmetic surgery. Websites are mainly hosted by private clinics (18 sites = 40%), with informative content. We found that no website adheres to any chart of quality, and the French Society for Plastic Surgery (SOF.C.P.R.E.) is never mentioned. Intrinsic quality criteria for websites (author's identification, last update, sources of information, confidentiality) are only partially present. Discussion: We recall the key statistics regarding the e-health business in the world, the various charts of quality available for medical websites, and the recommendations provided by the French Medical Association. We suggest that websites should be offered as a service (for information to the patient, for managing the office, for setting up medical records) rather than a poor personal webpage or a showcase. Conclusion: The quality of websites for cosmetic surgery is poor; however, as in the USA, the number of web surfers on medical sites is growing. The online presence of our speciality should evolve. To improve medical websites, collective awareness is required. We recommend using a specific chart of quality, with recommendations rather than constraints. abstract_id: PUBMED:33215109 Arthroplasty information on the internet: quality or quantity? Aims: Total joint replacement (TJR) is a high-cost, high-volume procedure that impacts patients' quality of life. Informed decisions are important for patients facing TJR. The quality of information provided by websites regarding TJR is highly variable. We aimed to measure the quality of TJR information online. Methods: We identified 10,800 websites using 18 TJR-related keywords (conditions and procedures) across the Australian, French, German and Spanish Google search engines. We used the Health on the Net (HON) toolbar to evaluate the first 150 websites downloaded for every keyword in each language. The quality of information on websites was inspected, accounting for differences by language and tertiles. We also undertook an analysis of English websites to explore types of website providers. Results: 'Total joint replacement' had the most results returned (150 million websites), and 9% of websites are HON-accredited. Differences in information quality were seen across search terms (p < 0.001) and tertiles (p < 0.001), but not between languages (p = 0.226).
A larger proportion of HON-accredited websites was seen for keywords in the condition and arthroplasty categories. The first tertile contained the highest number of HON-accredited websites for the majority of search terms. Government/educational bodies sponsored the majority of websites. Conclusion: Clinicians must consider the shortage of websites providing validated information, with disparities in both the number and quality of websites for TJR conditions and procedures. As such, the challenge for clinicians is to lead the design of reliable, accurate and ethical orthopaedic websites online and direct patients to them. This stands to reward both parties greatly. abstract_id: PUBMED:28500021 Thoracic Surgery Information on the Internet: A Multilingual Quality Assessment. Background: Previous data suggest that the quality of Internet information regarding surgical conditions and their treatments is variable. However, no comprehensive analysis of website quality exists for thoracic surgery. Objective: The aim of this study was to quantify website quality in a multilingual setting using an international standard for assessment. Methods: Health On the Net (HON) principles may be applied to websites using an automated toolbar function. We used the English, French, Spanish, and German Google search engines to identify 12,000 websites using keywords related to thoracic conditions and procedures. The first 150 websites returned by each keyword in each language were examined. We compared website quality to assess for tertile (is the quality better in the first, second, or third 50 websites returned) and language differences. A further analysis of the English site types was undertaken, performing a comparative analysis of website provider types. Results: Overall, there are a considerable number of websites devoted to thoracic surgery: "lung cancer" returned over 150 million websites. About 7.85% (940/11,967) of websites are HON-accredited, with differences by search term (P<.001) and tertiles (P<.001) of the first 150 websites, but not between languages. Oncological keywords regarding conditions and procedures were found to return a higher percentage of HON-accreditation. The percentage of HON-accredited sites was similar across all four languages (P=.77). In general, the first tertile contained a higher percentage of HON-accredited sites for every keyword. Conclusions: Clinicians should appreciate the lack of validation of the majority of thoracic websites, with discrepancies in the quality and number of websites across conditions and procedures. These differences appear similar regardless of language. An opportunity exists for clinicians to participate in the development of informative, ethical, and reliable health websites on the Internet and direct patients to them. abstract_id: PUBMED:34306881 How Comprehensive Are Canadian Plastic Surgery Fellowship Websites? Background: Online fellowship program websites are increasingly becoming the primary information resource used by prospective applicants. This study aimed to analyze the online content of Canadian plastic surgery fellowship program websites. Methods: The content of all accredited Canadian plastic surgery fellowship program websites was evaluated using 75-point criteria in the following ten domains: recruitment, faculty, residents/fellows, research and education, surgical program, clinical work, benefits and career planning, wellness, environment, and gender of faculty leadership.
Results: On average, fellowship program websites obtained a score of 29.9 (SD=12.6). Website scores did not differ by program location (P > .05) or by ranking (P > .05). Conclusions: Most Canadian plastic surgery fellowship program websites lacked content relevant to prospective applicants. More comprehensive fellowship program websites may be of benefit to prospective applicants and the programs. abstract_id: PUBMED:36755827 An Evaluation of the Content of Canadian Plastic Surgery Residency Websites. Background: Plastic surgery residency program websites are an important source of information to prospective applicants, especially given the ongoing COVID-19 pandemic and the resulting suspension of all visiting electives and in-person interviews. This study aimed to analyze the online content of Canadian plastic surgery residency program websites. Methods: The content of all accredited Canadian plastic surgery residency websites was evaluated using 77-point criteria in the following 10 domains: recruitment, faculty, residents, research and education, surgical program, clinical work, benefits and career planning, wellness, environment, and gender of faculty leadership. Results: All accredited Canadian plastic surgery residency programs (n = 13) were identified using the Canadian Resident Matching Service and had their dedicated program websites available for analysis. On average, residency program websites obtained a score of 33.5 (standard deviation = 13.7). The majority of programs did not score differently on the criteria by geographical distribution (P > .05) or by ranking (P > .05). Conclusions: Most Canadian plastic surgery residency program websites are lacking content relevant to prospective applicants. Addressing inadequacies in online content may support programs to inform and recruit strong applicants into residency programs. abstract_id: PUBMED:34652282 Critical Evaluation of the Efficiency of Colorectal Fellowship Websites: Cross-sectional Study. Background: Websites are an important source of information for fellowship applicants, as they can influence ongoing interest and potential program selection. Objective: This study aims to evaluate the current state of colorectal fellowship websites. Methods: This cross-sectional study evaluates the quantity and quality of information available on websites of colorectal fellowship programs verified by the Accreditation Council for Graduate Medical Education in 2019. Results: A total of 63 colorectal fellowships were included for evaluation. Websites were surveyed for content items that previous studies have found to be influential to program applicants. The 58 (91%) programs with a functional website were evaluated using an information index (calculated as a function of the availability of content items concerning education, application, personnel, and benefits) and an interactive index (calculated as a function of the accessibility and usability of the webpage). Programs had a median total score of 27.8 (IQR 21.5-34.5) of 79. The median score for the interactive index was 7.5 of 15 and for the information index was 20 of 64. The median scores for website application, education, personnel, and benefits or life considerations were 5, 5.5, 3.3, and 4 of 13, 24, 13, and 14, respectively. There was no difference in total score between programs in different geographical regions (P=.46). Conclusions: Currently, colorectal surgery fellowship program websites do not provide enough content for applicants to make informed decisions.
All training programs, regardless of specialty, should evaluate and improve their digital footprint to ensure their websites are accessible and provide the information desired by applicants. abstract_id: PUBMED:36673540 Growing Taller without Hormones? Dr. Consult Google-An Evaluation of Online Information Related to Limb Lengthening. Purpose: The aim of this study was to investigate the reliability, content and readability of the information available on the Internet related to limb lengthening surgeries, which have recently become increasingly popular. Methods: The three most commonly used browsers on the Internet were determined, and the search term "Limb Lengthening Surgery" was typed into each browser. The websites were categorized by their type, and their content and quality were evaluated using the DISCERN score, the Journal of American Medical Association (JAMA) benchmark and the Global Quality Score (GQS). The Flesch Kincaid Grade Level (FKGL) and the Flesch Reading Ease Score (FRES) were used to evaluate the readability. Each website was also assessed for the presence (or absence) of the Health on the Net (HON) code. Results: Websites in the academic category scored significantly higher than those in the medical and commercial categories. Mean FKGL and FRES scores, DISCERN score values, and JAMA, GQS and LLCS score values of websites with the HON code were significantly higher than those without. Conclusions: Online information related to limb lengthening was of low quality. Although some websites, especially academic resources, were of higher quality, the readability of their content is still about 2.5 grade levels above the sixth-grade reading level. abstract_id: PUBMED:33392371 Evaluation of plastic surgery resident aesthetic clinic websites. Background: The purpose of the present study was to evaluate the website pages of integrated plastic and reconstructive surgery resident aesthetic clinics in the United States. Methods: Website pages dedicated to resident aesthetic clinics in all integrated plastic and reconstructive surgery residency programs (n = 79) were accessed between October and November 2019 and evaluated across several criteria, including: number of pages, procedures offered, faculty participation, respective prices, patient photographs, patient testimonials, contact information, and specific recruitment incentives. Results: Seventy-nine integrated and 54 independent residency programs were identified, 31 of which had both an integrated and an independent residency program, for a total of 102 distinct programs. Out of these, only 11 programs (10.8%) had a webpage dedicated to their resident aesthetic clinic. Twelve other programs (13.7%) that did not have a dedicated webpage mentioned a resident aesthetic clinic elsewhere on their residency program website. For each of the eleven programs with space for the resident aesthetic clinic, there was exactly one dedicated webpage. None of the programs with dedicated webpages included photographs of before and after cases, nor procedures performed, nor a listing of resident aesthetic clinic prices. Three of the dedicated webpages included information about faculty participation. Out of the 24 ASAPS-endorsed aesthetic fellowship programs, 6 (25%) had a webpage dedicated to the fellow aesthetic clinic. Four of these programs' sites consisted of a single webpage, while one program had 6 webpages and one program had 8 webpages. Only one program's webpage included before and after pictures.
Four programs (16.7%) included information about faculty participation on the webpage. Five out of the 6 programs had a procedure list on the webpage. Discussion: Resident aesthetic clinic websites are an important tool in recruiting patients as well as medical students. Addressing the scarcity and quality of such websites may improve recruitment of patients and students to plastic and reconstructive surgery residency programs. Because these websites are a primary source of information for potential future residents and patients, plastic and reconstructive surgery programs need to maximize their content and utility. abstract_id: PUBMED:25356297 "What Motivates Her": Motivations for Considering Labial Reduction Surgery as Recounted on Women's Online Communities and Surgeons' Websites. Introduction: Increasing numbers of women are seeking labial reduction surgery. We studied the motivations for considering labial reduction surgery as recounted on women's online communities and surgery providers' websites. Aims: The study analyzed motivations for considering labial reduction surgery expressed by women on online communities, looked at the role of the women's age and nationality, compared findings with motivations indicated on the websites of an international sample of surgery providers, and identified similarities to and differences from what is known from extant studies. Methods: Quantitative content analysis of the posts of 78 American, British, and Dutch women on online communities, and of 40 international surgery providers' websites about labial reduction surgery, was conducted. Main Outcome Measures: Main outcome measures concerned the incidence and prominence of different motivational categories (functional/emotional and discomfort/enhancement related). Results: Emotional discomfort regarding self-appearance and social and sexual relationships was found to be the most frequent and most prominent motivation for considering labial reduction surgery on women's online communities, regardless of age and national background. Functional discomfort and desired emotional enhancement ranked second. Very few age or national differences were found. The surgeons' websites recognized functional discomfort more and elaborated upon emotional issues in sexual relationships less than members of the online communities. Conclusions: Feelings of emotional and psychosexual distress, in addition to functional distress, are a highly prevalent motivation among women considering labial reduction surgery. Emotional distress appears to be greater and more freely emphasized when women communicate on online communities, while functional issues appear to receive greater notice on surgery providers' websites. abstract_id: PUBMED:27888472 The Quality of Romanian Breast Cancer Websites: a Five-Year Longitudinal Assessment. The Internet has become an important source of overall health information and seems to be the second most common source of information used by patients in the process of decision-making before breast surgery. The goal of this study was to monitor Romanian breast cancer websites and their quality over a period of 5 years.
We evaluated a sample of 20 websites selected from Google's first search results pages, using specific rating scores for e-health quality, completeness, accuracy, and potential risk, in 2011 and 2016, respectively. Only 15 (75%) of the websites in the 2011 sample were accessible in 2016, and only two (10%) retained real-life visibility (Google PageRank < 20). The mean quality scores at baseline (2011) and follow-up (2016), respectively, were as follows: e-health quality 3.80 vs. 4.05; completeness 4.23 vs. 5.43; accuracy 5.74 vs. 6.35; and potential risk score 7.60 vs. 7.30. All quality scores were low or, at best, modest and did not improve significantly over the 5-year period. The results of the study draw attention to the need for programs aiming to improve the ability of breast cancer patients to screen online health resources, and to better regulate the medical Internet to safeguard the best interest of health information seekers. Answer: Based on the information provided in the abstracts, there is a clear indication that a chart of quality for websites related to cosmetic surgery could be beneficial. The study assessing French websites dedicated to cosmetic surgery found that no website adhered to any chart of quality, and intrinsic quality criteria for websites were only partially present (PUBMED:12927882). This suggests that the quality of information on these websites is poor, and there is a need for improvement. Moreover, other studies evaluating the quality of online information in different medical fields, such as arthroplasty (PUBMED:33215109), thoracic surgery (PUBMED:28500021), and plastic surgery residency programs (PUBMED:36755827), also highlight the variability and often poor quality of information available online. These studies emphasize the importance of validated and reliable information, suggesting that the development of a specific chart of quality could help improve the standard of online medical information. The use of the Health on the Net (HON) principles and accreditation as a measure of website quality is mentioned in several studies (PUBMED:33215109, PUBMED:28500021, PUBMED:36673540), indicating that adherence to recognized quality standards is already considered important in assessing the reliability of medical websites. In conclusion, the evidence from these studies supports the need for a chart of quality for websites related to cosmetic surgery to ensure that the information provided is accurate, reliable, and ethical, thereby improving the online presence and quality of information in this medical specialty.
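Several of the website-quality studies in this record (notably the limb-lengthening analysis, PUBMED:36673540) score readability with the Flesch Reading Ease and Flesch-Kincaid Grade Level formulas. The formulas themselves are standard and published; the sketch below computes both with a deliberately naive vowel-group syllable counter, so its output is approximate and is not a reimplementation of any study's tooling.

```python
# Flesch Reading Ease and Flesch-Kincaid Grade Level (standard published formulas).
# The syllable counter is a naive vowel-group heuristic, so scores are approximate.
import re

def count_syllables(word: str) -> int:
    # Count runs of vowels as syllables; always credit at least one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences             # words per sentence
    spw = syllables / max(1, len(words))     # syllables per word
    return {
        "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        "flesch_kincaid_grade": 0.39 * wps + 11.8 * spw - 15.59,
    }

sample = "Most patient-facing websites are written well above a sixth-grade level."
print(readability(sample))
```

Patient-education guidance commonly recommends writing at about a sixth-grade level, which is the benchmark the limb-lengthening study applies when it notes that even academic websites read roughly 2.5 grade levels higher.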
Instruction: Can mental health integration in a primary care setting improve quality and lower costs? Abstracts: abstract_id: PUBMED:16623603 Can mental health integration in a primary care setting improve quality and lower costs? A case study. Objective: To describe the successful implementation of an evidence-based, integrated quality improvement mental health program in a primary care setting. Summary: Intermountain Healthcare (IHC) has aligned resources around a conceptual framework that emphasizes clinic and community accountability, family and consumer health focused on recovery rather than disease, and enhanced decision making through partnerships and automation. The mental health integration system includes an integrated team led foremost by the patient and family, with vital defined roles for primary care providers, care managers, psychiatrists, advanced practice registered nurses, support staff, and the National Alliance for the Mentally Ill. Pharmacists have assumed training functions on the team and have the potential to play more vital roles. Conclusion: The IHC experience demonstrates that mental health services can be effectively integrated into everyday practice in a primary care setting. The clinical and financial burden can be decreased for the health care team, patients, and family. abstract_id: PUBMED:31342836 Improving PHQ9 Utilization Rates in a Primary Care-Mental Health Integration Setting. Introduction: Measurement-based care (MBC) uses standardized measurement to systematically monitor treatment response over time. Although MBC is underutilized in mental health settings, primary care-mental health integration (PC-MHI) settings are expected to provide MBC. Aims: Pre-intervention, rates of baseline and follow-up PHQ9 administration for veterans with a depressive disorder were 76% and 35%, respectively. This article describes a quality improvement (QI) process to increase Patient Health Questionnaire-9 (PHQ9) utilization rates within a PC-MHI setting, with the goal of improving provider PHQ9 utilization rates at baseline and within 4-week follow-up to 90%. Method: An educational intervention and weekly motivational enhancement sessions were implemented in 2017. Chart review data compared PHQ9 utilization rates from fall 2016 and 2017. Results: Following the intervention, provider PHQ9 utilization rates increased to 98% and 88% at baseline and follow-up. Conclusions: These findings demonstrate that a brief education-based intervention can increase clinician use of MBC within a PC-MHI setting. Meaningful use of MBC to inform treatment was not evaluated in this QI project and is an area for future investigation. abstract_id: PUBMED:36585707 Healthcare use and costs among individuals receiving mental health services for depression within primary care in Nepal. Background: Integrating mental health services into primary care is a key strategy for reducing the mental healthcare treatment gap in low- and middle-income countries. We examined healthcare use and costs over time among individuals with depression and subclinical depressive symptoms in Chitwan, Nepal, to understand the impact of integrated care on individual and health system resources.
Methods: Individuals diagnosed with depression at ten primary care facilities were randomized to receive a package of integrated care based on the Mental Health Gap Action Programme (treatment group; TG) or this package plus individual psychotherapy (TG + P); individuals with subclinical depressive symptoms received primary care as usual (UC). Primary outcomes were changes in use and health system costs of outpatient healthcare at 3- and 12-month follow-up. Secondary outcomes examined use and costs by type. We used Poisson and log-linear models for use and costs, respectively, with an interaction term between time point and study group, and with TG as reference. Results: The study included 192 primary care service users (TG = 60, TG + P = 60, UC = 72; 86% female, 24% formally employed, mean age 41.1). At baseline, outpatient visits were similar (-11%, p = 0.51) among TG + P and lower (-35%, p = 0.01) among UC compared to TG. Visits increased 2.30 times (p < 0.001) at 3 months among TG, with a 50% greater increase (p = 0.03) among TG + P, before returning to baseline levels among all groups at 12 months. Comparing TG + P to TG, costs were similar at baseline (-1%, p = 0.97) and cost changes did not significantly differ at three (-16%, p = 0.67) or 12 months (-45%, p = 0.13). Costs among UC were 54% lower than TG at baseline (p = 0.005), with no significant differences in cost changes over follow-up. Post hoc analysis indicated that individuals not receiving psychotherapy used less frequent, more costly healthcare. Conclusion: Delivering psychotherapy within integrated services for depression resulted in greater healthcare use without significantly greater costs to the health system or individual. Previous research in Chitwan demonstrated that psychotherapy determined treatment effectiveness for people with depression. While additional research is needed into service implementation costs, our findings provide further evidence supporting the inclusion of psychotherapy within mental healthcare integration in Nepal and similar contexts. abstract_id: PUBMED:18956504 Integration of medical and psychological treatment within the primary health care setting. Integrated care entails the provision of behavioral health services within the primary care setting and emphasizes a collaborative approach between mental health professionals and primary care providers (Kenkel, Deleon, Orabona Mantell, & Steep, 2005). Research was collected to highlight the history, development, and implementation of integrated care within primary care facilities. The authors performed a comprehensive literature review of collaborative care and summarized the program design of the site where they work. It is hypothesized that integration will improve patient access to health care, increase the rate of evidence-based practice, improve patient health and satisfaction, and reduce long-term costs. abstract_id: PUBMED:31381382 Veterans Health Administration Investments In Primary Care And Mental Health Integration Improved Care Access. Aiming to increase care access, the national Primary Care-Mental Health Integration (PC-MHI) initiative of the Veterans Health Administration (VHA) embedded specialists, care managers, or both in primary care clinics to collaboratively care for veterans with psychiatric illness. The initiative's effects on health care use and cost patterns were examined among 5.4 million primary care patients in 396 VHA clinics in 2013-16. The median rate of patients who saw a PC-MHI provider was 6.3 percent.
Each percentage-point increase in the proportion of clinic patients seen by these providers was associated with 11 percent more mental health and 40 percent more primary care visits, but also with 9 percent higher average total costs per patient per year. At the mean, 2.5 integrated care visits substituted for each specialty-based mental health visit that did not occur. PC-MHI was associated with improved access to outpatient care, albeit at increased total cost to the VHA. Successful implementation of integrated care necessitates significant investment and multidisciplinary partnership within health systems. abstract_id: PUBMED:30486900 Barriers and facilitators to the integration of mental health services into primary health care: a systematic review. Background: The objective of the review was to synthesize evidence of barriers and facilitators to the integration of mental health services into PHC from the existing literature. The structure of the review was guided by the SPIDER framework, which involves the following: Sample or population of interest-primary care providers (PCPs); Phenomenon of Interest-integration of mental health services into primary health care (PHC); Design-influenced robustness and analysis of the study; Evaluation-outcomes included subjective outcomes (views and attitudes); and Research type-qualitative, quantitative, and mixed methods studies. Methods: Studies that described mental health integration in PHC settings, involved primary care providers, and presented barriers/facilitators of mental health integration into PHC were included in the review. The sources of information included PubMed, PsycINFO, the Cochrane Central Register of Controlled Trials, the WHO website, and OpenGrey. Assessment of bias and quality was done using two separate tools: the Critical Appraisal Skills Program (CASP) qualitative checklist and the Effective Public Health Practice Project Quality Assessment Tool for Quantitative Studies. Results: Twenty studies met the inclusion criteria out of the 3353 search results. The most frequently reported barriers to integration of mental health services into PHC were (i) attitudes regarding program acceptability, appropriateness, and credibility; (ii) knowledge and skills; (iii) motivation to change; (iv) management and/or leadership; and (v) financial resources. In order to come up with an actionable approach to addressing the barriers, these factors were further analyzed using a behavior change theory. Discussion: We have shown that the integration of mental health services into PHC has been carried out in various countries. The analysis from this review provides evidence to inform policy on the existing barriers and facilitators to the implementation of the mental health integration policy option. Not all databases may have been exhausted. Systematic Review Registration: PROSPERO 2016 (Registration Number: CRD42016052000), published in BMC Systematic Reviews, August 2017. abstract_id: PUBMED:31792866 Time and Organizational Cost for Facilitating Implementation of Primary Care Mental Health Integration. Background: Integrating mental health services into primary care settings is complex and challenging. Although facilitation strategies have successfully supported implementation of primary care mental health integration and other complex innovations, we know little about the time required or its cost. Objective: To examine the time and organizational cost of facilitating implementation of primary care mental health integration.
Design: Descriptive analysis. Participants: One expert external facilitator and two internal regional facilitators who helped healthcare system stakeholders, e.g., leaders, managers, clinicians, and non-clinical staff, implement primary care mental health integration at eight clinics. Intervention: Implementation facilitation tailored to the needs and resources of the setting and its stakeholders. Main Measures: We documented facilitators' and stakeholders' time and types of activities using a structured spreadsheet collected from facilitators on a weekly basis. We obtained travel costs and salary information. We conducted descriptive analysis of the time data and estimated organizational cost. Key Results: The external facilitator devoted 263 h (0.09 FTE), including travel, across all 8 clinics over 28 months. Internal facilitator time varied across networks (1792 h versus 1169 h), as well as across clinics. Stakeholder participation time was similar across networks (1280.6 versus 1363.4 person-hours), but the number of stakeholders varied (133 versus 199 stakeholders). The organizational cost of providing implementation facilitation also varied across networks ($263,490 versus $258,127). Stakeholder participation accounted for 35% of the cost of facilitation activities in one network and 47% of the cost in the other. Conclusions: Although facilitation can improve implementation of primary care mental health integration, it requires substantial organizational investments that may vary by site and implementation effort. Furthermore, the cost of using an external expert to transfer facilitation skills and build capacity for implementation efforts appears to be minimal. abstract_id: PUBMED:34484875 Primary and Behavioral Health Care Integration Program: Impacts on Health Care Utilization, Cost, and Quality. This article describes an extension of the RAND Corporation's evaluation of the Substance Abuse and Mental Health Services Administration's Primary and Behavioral Health Care Integration (PBHCI) grants program. PBHCI grants are designed to improve the overall wellness and physical health status of people with serious mental illness or co-occurring substance use disorders by supporting the integration of primary care and preventive physical health (PH) services into community behavioral health centers where individuals already receive care. From 2010 to 2013, RAND conducted a program evaluation of PBHCI, describing the structure, process, and outcomes for the first three cohorts of grantee programs (awarded in 2009 and 2010). The current study extends previous work by investigating the impact of PBHCI on consumers' health care utilization, total costs of care to Medicaid, and quality of care in three states. The evidence suggests that PBHCI was successful in reducing frequent use of emergency room and inpatient services for physical health conditions, reducing costs of care, and improving follow-up after hospitalization for a mental illness. However, the evidence does not suggest that PBHCI had a consistent effect on the quality of preventive care and health monitoring for chronic physical conditions. These findings can guide the design of future cohorts of PBHCI clinics to build on the program's strengths in shifting emergency department and inpatient care to less costly and more effective settings, and to address the continuing challenge of integrating care between specialty behavioral health providers and general medical care providers.
abstract_id: PUBMED:29330238 Changing Patterns of Mental Health Care Use: The Role of Integrated Mental Health Services in Veteran Affairs Primary Care. Objective: Aiming to foster timely, high-quality mental health care for Veterans, VA's Primary Care-Mental Health Integration (PC-MHI) embeds mental health specialists in primary care and promotes care management for depression. PC-MHI and patient-centered medical home providers work together to provide the bulk of mental health care for primary care patients with low-to-moderate-complexity mental health conditions. This study examines whether increasing primary care clinic engagement in PC-MHI services is associated with changes in patient health care utilization and costs. Methods: We performed a retrospective longitudinal cohort study of primary care patients with identified mental health needs in 29 Southern California VA clinics from October 1, 2008 to September 30, 2013, using electronic administrative data (n = 66,638). We calculated clinic PC-MHI engagement as the proportion of patients receiving PC-MHI services among all primary care clinic patients in each year. Capitalizing on variation in PC-MHI engagement across clinics, our multivariable regression models predicted annual patient use of 1) non-primary-care-based mental health specialty (MHS) visits, 2) total mental health visits (i.e., the sum of MHS and PC-MHI visits), and 3) health care utilization and costs. We controlled for year- and clinic-fixed effects, other clinic interventions, and patient characteristics. Results: Median clinic PC-MHI engagement increased by 8.2 percentage points over 5 years. In any given year, treatment at a clinic with 1-percentage-point higher PC-MHI engagement was associated with 0.5% more total mental health visits (CI, 0.18% to 0.90%; P = .003) and 1.0% fewer MHS visits (CI, -1.6% to -0.3%; P = .002); this is a substitution rate, at the mean, of 1.5 PC-MHI visits for each MHS visit. There was no PC-MHI effect on other health care utilization and costs. Conclusions: As intended, greater clinic engagement in PC-MHI services seems to increase realized accessibility to mental health care for primary care patients, substituting PC-MHI for MHS visits, without increasing acute care use or total costs. Thus, PC-MHI services within primary care clinics may improve mental health care value at the patient population level. More research is needed to understand the relationship between clinic PC-MHI engagement and the clinical quality of mental health care. abstract_id: PUBMED:35138129 Lessons Learned From a New Reverse-Integration Model to Improve Primary Care Screening in Community Mental Health Settings. The authors sought to describe a reverse-integration intervention aimed at improving preventive health screening in a community mental health clinic. The intervention, CRANIUM (cardiometabolic risk assessment and treatment through a novel integration model for underserved populations with mental illness), integrated primary care services into a large urban community mental health setting. It was implemented in 2015 and included a patient-centered team, population-based care, an emphasis on screening, and evidence-based treatment. CRANIUM's strengths included provider acceptability, a patient-centered approach, sustained patient engagement, and economic feasibility. Challenges included underutilized staff, registry maintenance, and unanticipated screening barriers.
The CRANIUM reverse-integration model can be feasibly implemented and was acceptable to providers. Answer: Yes, mental health integration in a primary care setting can improve quality and lower costs. Intermountain Healthcare's experience with an integrated quality improvement mental health program in a primary care setting demonstrated that clinical and financial burdens can be decreased for the health care team, patients, and family (PUBMED:16623603). Additionally, the Veterans Health Administration's investments in Primary Care-Mental Health Integration (PC-MHI) improved care access, although it was associated with increased total cost to the VHA (PUBMED:31381382). Furthermore, the Primary and Behavioral Health Care Integration (PBHCI) program was successful in reducing the frequent use of emergency room and inpatient services for physical health conditions, reducing costs of care, and improving follow-up after hospitalization for a mental illness (PUBMED:34484875). Moreover, a study in Nepal found that delivering psychotherapy within integrated services for depression resulted in greater healthcare use without significantly greater costs to the health system or individual (PUBMED:36585707). Another study showed that greater clinic engagement in PC-MHI services increased accessibility to mental health care for primary care patients, substituting PC-MHI for mental health specialty visits, without increasing acute care use or total costs (PUBMED:29330238). Lastly, the CRANIUM reverse-integration model, which integrated primary care services into a community mental health setting, was found to be economically feasible and improved preventive health screening (PUBMED:35138129). These findings suggest that mental health integration into primary care settings can lead to improved quality of care and can be cost-effective, although the extent of cost savings may vary depending on the specific model and context of integration.
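The Nepal costing study in this record (PUBMED:36585707) models outpatient visit counts with Poisson regression that includes an interaction between time point and study group, with TG as the reference. The sketch below shows that general specification using the statsmodels Python library on synthetic data; the column names, group sizes, and rates are hypothetical, not the study's data.

```python
# Minimal sketch of a Poisson model with a study-group x time-point interaction,
# similar in spirit to the Nepal analysis; data and column names are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for group in ["TG", "TG+P", "UC"]:
    for time in ["baseline", "3mo", "12mo"]:
        for _ in range(60):  # 60 users per cell, loosely echoing the group sizes
            base = {"TG": 2.0, "TG+P": 1.8, "UC": 1.3}[group]  # mean visit count
            bump = 2.3 if time == "3mo" and group != "UC" else 1.0
            rows.append({"group": group, "time": time,
                         "visits": rng.poisson(base * bump)})
df = pd.DataFrame(rows)

# TG and baseline as reference levels, matching the abstract's description.
model = smf.glm(
    "visits ~ C(group, Treatment(reference='TG'))"
    " * C(time, Treatment(reference='baseline'))",
    data=df, family=sm.families.Poisson(),
).fit()
print(model.summary())  # interaction terms test differential change over time
```

The group-by-time interaction coefficients carry the abstract's headline contrasts, for example whether the 3-month rise in visits is larger in TG + P than in TG.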
Instruction: A comparison of colorectal cancer in Nigerian and North American patients: is the cancer biology different? Abstracts: abstract_id: PUBMED:24953266 A comparison of colorectal cancer in Nigerian and North American patients: is the cancer biology different? Background: Of the 24 million people predicted to have cancer by 2050, 70% will live in low- and middle-income countries (LMIC). As a result, cancer care is becoming a priority for health care systems in West Africa. This study compares the presentation and pattern of spread of colorectal cancer (CRC) observed in a hospital in West Africa with that of a North American referral center. Methods: Data on all adults presenting with CRC at a hospital in Nigeria (West Africa; 1990-2011) and all adults with stage III or IV CRC at a specialty hospital in New York City (New York, North America; 2005-2011) were examined retrospectively. Demographic data, stage of disease, site of metastasis, and survival were compared. Results: There were 160 patients identified in West Africa and 1,947 patients identified in North America. Nigerian patients were younger (52 vs 59 years; P < .01) and presented with a later stage of disease (58% stage IV vs 47%; P < .01). Site of disease presentation was different between West African and North American patients (P < .01); 2.2% of West African patients presented with liver metastases only compared with 48.1% of North American patients. Conversely, 61.3% of patients in West Africa presented with peritoneal metastases only compared with 5.4% in North America. Overall survival stratified by stage at presentation (III/IV) showed worse prognosis for patients in either stage subgroup in Nigeria than in North America. Conclusion: We found differences in the presentation, metastatic pattern, and outcomes of CRC in Nigeria (West Africa) when compared with New York City (North America). Late detection and differential tumor biology may drive the differences observed between the sites. Future studies on early CRC detection and on tumor biology in LMIC will be critical for understanding and treating CRC in this region. abstract_id: PUBMED:35139807 Colorectal cancer community engagement: a qualitative exploration of American Indian voices from North Dakota. Background: American Indians (AI) in North Dakota present with higher rates of advanced-stage disease for screening-detectable colorectal cancers and have lower overall baseline colorectal cancer screening rates than non-AIs. We sought to identify the perceived barriers to and facilitators of engagement with colorectal cancer prevention within North Dakota tribal communities. Methods: Twelve semi-structured interviews were carried out across four tribal reservation communities in the state of North Dakota with American Indian adults between the ages of 30 and 75 years. We utilized purposive sampling to ensure maximum variation in age, sex, and tribal community until data saturation was achieved. The interviews were transcribed, and thematic analysis was carried out to identify consistent themes rooted within the data. Ethical approval was gained for this project from all relevant institutional review boards. Results: Four main themes were identified as barriers to engagement with colorectal cancer prevention: colorectal cancer screening barriers, focus on other health problems, lack of colorectal cancer-tailored health promotion, and socio-cultural factors affecting colorectal cancer prevention.
Three main themes were identified as facilitators of engagement with colorectal cancer prevention: reasons for getting colorectal cancer screening, the role of culture, and getting out into the community. Conclusion: There is a need for more community-rooted, strengths-based approaches to colorectal cancer prevention activities in AI communities in North Dakota. Socio-cultural factors, such as the use of storytelling and traditional knowledge, have been demonstrated to be important considerations for colorectal cancer tribal community engagement and prevention planning in the state. abstract_id: PUBMED:19670692 Misclassification of American Indian race in state cancer data among non-federally recognized Indians in North Carolina. Background: Equitable distribution of health care resources relies on accurate morbidity and mortality data, classified by race. Accurate classification is a problem, particularly for non-federally recognized American Indians (AI) receiving care outside of the Indian Health Service. Methods: We identified counties in North Carolina that had the majority of AIs belonging to the 7 state-recognized, non-federally recognized tribes. We collaborated with the tribe in each county and compared the incident cases of cancer in the North Carolina Central Cancer Registry (NCCCR) to the tribal rolls. Data were analyzed to calculate what percentage of names on both lists were not correctly identified as AI in the NCCCR. We corrected the NCCCR classification and calculated the percentage misclassified, then recalculated the cancer incidence rates for 4 major cancers (prostate, female breast, lung, and colorectal). We compared the recalculated rates to the original rates. Results: There were 626 AIs on the tribal rolls; 112 (17.9%) were not identified as AI on the NCCCR list. Comparing 1996-2000 age-adjusted cancer incidence rates before and after reclassification, the increases in rates were 41% for prostate, 18% for female breast, 10% for lung, and 11% for colorectal cancers. There was less than a 2% increase in cancer rates for the combined 4 sites for Blacks and Whites before and after reclassification, and a 19% increase for AIs. Conclusions: The study estimated 18% misclassification of non-federally recognized AIs in cancer registration in North Carolina, and determined an underestimation of cancer rates in the population. The underestimation of cancer burden among AIs in North Carolina may affect resources allocated for prevention, screening, and treatment programs, as well as funding for research. abstract_id: PUBMED:16704943 Cancer incidence in Montana: rates for American Indians exceed those for whites. Objectives: Previous studies suggested that the cancer incidence rates in American Indians and Alaska Natives were lower than in other groups. The objective of this study was to compare the cancer incidence rates in American Indians and whites in Montana. Methods: Age-adjusted 6-year cancer incidence rates were calculated for American-Indian and white men and women in Montana to allow comparison of rates in 1991-1996 to those in 1997-2002. Results: The age-adjusted rates for American-Indian men were significantly higher than those for white men for all cancer sites (755+/-74 [95% confidence interval] per 100,000 vs 544+/-9 per 100,000), lung cancer (167+/-35 per 100,000 vs 83+/-4 per 100,000), and colorectal cancer (115+/-29 per 100,000 vs 61+/-4 per 100,000) from 1997 to 2002.
The adjusted rates for American-Indian women were significantly higher than those for white women for all cancer sites (526+/-47 per 100,000 vs 412+/-8 per 100,000) and lung cancer (120+/-24 per 100,000 vs 56+/-3 per 100,000) during this same time period. There was a significant increase in the age-adjusted rates for all cancer sites among white men and women but not for American-Indian men or women between 1991-1996 and 1997-2002. Conclusions: There is a significant disparity in the cancer incidence rates between American Indians and whites in Montana. Regional or state-level surveillance data will be needed to describe the changing patterns of cancer incidence in many native communities in the United States. abstract_id: PUBMED:10553387 Improving cancer incidence estimates for American Indians in Minnesota. Objectives: The purpose of this study was to estimate cancer incidence for American Indians in Minnesota. Methods: Indian Health Service enrollment data were linked to the Minnesota tumor registry to identify cancers among American Indians in Minnesota. Incidence rates for the 5 most common cancers in this population, estimated after the linkage, were compared with rates estimated before the linkage and with rates for the total population of Minnesota. Results: The linkage identified 302 cancer cases not previously identified as occurring among American Indians in Minnesota. Postlinkage estimates suggested that incidence rates for prostate and colorectal cancer are similar to those for the total population of Minnesota, but that rates of lung and cervical cancer are significantly higher. Breast cancer rates are slightly lower than those for the total population of Minnesota but more than twice as high as previous estimates for American Indians. Conclusions: The postlinkage estimates suggest different priorities for cancer education, prevention, and control than might be assumed from either prelinkage estimates or previously published data, and underscore the importance of using accurate and specific data for setting these priorities. abstract_id: PUBMED:29805137 Perspectives of American Indians in Eastern North Carolina on Socio-cultural Factors that Influence Colorectal Cancer Screening Decisions. American Indians (AI) have disproportionately high rates of colorectal cancer (CRC), but there is relatively little research focused on developing AI CRC screening interventions. We held six focus groups with AIs in the rural Eastern U.S. (n=74) and analyzed the qualitative data from them in order to identify individual and socio-cultural factors that influence AI CRC screening decisions. Fear and anxiety about screening emerged as a central theme of the focus groups. For example, participants told stories about how late-stage cancer diagnoses and deaths in the community contributed to their fears and cancer fatalism. Furthermore, CRC screening fears were reinforced by health communication norms that limited productive conversations about CRC screening. Our findings indicate that culturally adapted interventions, such as CRC screening decision aids, are needed to help AIs communicate and make informed decisions about CRC screening with support from family and health care providers. More research is needed on the influence of these interventions on CRC screening intentions, social norms, and shared decision-making. abstract_id: PUBMED:17939129 Annual report to the nation on the status of cancer, 1975-2004, featuring cancer in American Indians and Alaska Natives.
Background: The American Cancer Society, the Centers for Disease Control and Prevention, the National Cancer Institute, and the North American Association of Central Cancer Registries collaborate annually to provide updated information on cancer occurrence and trends in the U.S. The 2007 report features a comprehensive compilation of cancer information for American Indians and Alaska Natives (AI/AN). Methods: Cancer incidence data were available for up to 82% of the U.S. population. Cancer deaths were available for the entire U.S. population. Long-term (1975 through 2004) and fixed-interval (1995 through 2004) incidence and mortality trends were evaluated by annual percent change using regression analyses (2-sided P < .05). Cancer screening, risk factors, socioeconomic characteristics, incidence data, and stage were compiled for non-Hispanic whites (NHW) and AI/AN across 6 regions of the U.S. Results: Overall cancer death rates decreased by 2.1% per year from 2002 through 2004, nearly twice the annual decrease of 1.1% per year from 1993 through 2002. Among men and women, death rates declined for most cancers. Among women, lung cancer incidence rates no longer were increasing and death rates, although they still were increasing slightly, were increasing at a much slower rate than in the past. Breast cancer incidence rates in women decreased 3.5% per year from 2001 to 2004, the first decrease observed in 20 years. Colorectal cancer incidence and death rates and prostate cancer death rates declined, with colorectal cancer death rates dropping more sharply from 2002 through 2004. Overall, rates for AI/AN were lower than for NHW from 1999 through 2004 for most cancers, but they were higher for cancers of the stomach, liver, cervix, kidney, and gallbladder. Regional analyses, however, revealed high rates for AI/AN in the Northern and Southern Plains and Alaska. For cancers of the breast, colon and rectum, prostate, and cervix, AI/AN were less likely than NHW to be diagnosed at localized stages. Conclusions: For all races/ethnicities combined in the U.S., favorable trends in incidence and mortality were noted for lung and colorectal cancer in men and women and for breast cancer in women. For the AI/AN population, lower overall cancer incidence and death rates obscured important variations by geographic regions and less favorable healthcare access and socioeconomic status. Enhanced tobacco control and cancer screening, especially in the Northern and Southern Plains and Alaska, emerged as clear priorities. abstract_id: PUBMED:19477742 Masculinity and the body: how African American and White men experience cancer screening exams involving the rectum. Past research on prostate and colorectal cancer disparities finds that barriers to screening, such as embarrassment and offensiveness, are often reported. Yet none of this literature investigates why. This study uses masculinity and health theory to examine how men experience two common screenings: digital rectal exams (DREs) and colonoscopies. In-depth interviews were conducted with 64 African American and White men from diverse backgrounds, aged 40 to 64, from North Carolina. Regardless of race or education, men experienced DREs more negatively than colonoscopies because penetration with a finger was associated with a gay sexual act. Some men disliked colonoscopies, however, because they viewed any penetration as an affront to their masculinity.
Because beliefs did not differ by race, future research should focus on structural issues to examine why disparities persist with prostate and colorectal cancer. Recommendations are provided for educational programs and physicians to improve men's experiences with exams that involve the rectum. abstract_id: PUBMED:9726093 Irinotecan plus 5-FU and leucovorin in advanced colorectal cancer: North American trials. Both fluorouracil (5-FU) and irinotecan (CPT-11 [Camptosar]) have shown activity in metastatic colorectal cancer and are approved for its treatment in the United States. Preclinical experiments in cell cultures and human tumor xenografts have indicated potential synergy when irinotecan is combined with 5-FU and leucovorin. The synergy appears to be sequence-dependent and is optimal when irinotecan exposure precedes 5-FU exposure by at least 24 hours. Four North American trials have been reported in which the three drugs were used together in either simultaneous, sequential, or alternating schedules. All three schedules showed activity in patients with metastatic colorectal cancer. The concern that diarrhea, which can be a dose-limiting toxicity with both irinotecan and 5-FU, would prevent the two drugs from being combined in reasonable doses has not proven to be a clinical issue. Phase III trials comparing the combination of the three drugs in a variety of schedules against 5-FU plus leucovorin alone are currently under way or in the planning stages. abstract_id: PUBMED:34255951 Cancer Mortality and Research Outcomes in a Rural State. Background: North Dakota is a rural state with high rates of cancer. Determining how various demographic, geographic, and funding factors contributed to cancer incidence on a state and county level helps improve cancer prevention and control. Objectives: We examined cancer incidence rate trends by demographic (sex and ethnicity) and geographic (county, population, rural/frontier status) factors. We also examined cancer funding and research output by year. Methods: Cancer incidence rates were obtained from the North Dakota Cancer Registry and stratified by sex, ethnicity, and county. US cancer rates also were obtained for comparison. Generalized linear models were used to compare overall incidence rates and yearly trends. Results: Male melanoma incidence rates increased faster than the US average across years (P = 0.020). Incidence rates for prostate, lung, and colorectal cancer among American Indians/Alaska Natives (AI/AN) decreased faster than those for Whites across years (P < 0.001, P = 0.001, P < 0.001, respectively). Four counties (2 for breast cancer and 2 for prostate cancer) had differential trends compared to the North Dakota average across years (P = 0.011, P = 0.029; P = 0.046, P = 0.042). County-level lung cancer incidence rates were positively correlated with county population size, while rates for cervix/uteri were negatively correlated (P = 0.001, P = 0.023). Funding from the National Institutes of Health for North Dakota increased across years, as did the number of cancer papers published (P < 0.001, P < 0.001). Conclusions: Examining state and county data revealed several surprising trends and the need for a more fine-scale approach to cancer cause, control, and prevention. Answer: The study comparing colorectal cancer (CRC) in Nigerian and North American patients suggests that there may be differences in cancer biology between the two populations.
Nigerian patients were found to be younger at diagnosis and presented with a later stage of disease compared to North American patients. Specifically, 58% of Nigerian patients presented with stage IV disease versus 47% of North American patients. Additionally, the site of disease presentation differed significantly, with only 2.2% of West African patients presenting with liver metastases compared to 48.1% of North American patients. Conversely, a much higher percentage of patients in West Africa presented with peritoneal metastases only (61.3%) compared to North America (5.4%). Overall survival, when stratified by stage at presentation (III/IV), was worse for patients in Nigeria than in North America. These findings suggest that late detection and possibly differential tumor biology may be driving the observed differences between the two regions (PUBMED:24953266). The differences in presentation and outcomes of CRC in Nigerian patients compared to those in North America highlight the need for further studies on early CRC detection and tumor biology in low- and middle-income countries (LMICs). Understanding these differences is critical for developing effective strategies for CRC treatment and prevention in these regions. The study's conclusion points to the importance of future research in early detection and tumor biology to address the disparities in CRC presentation and outcomes between West Africa and North America.
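The metastatic-pattern contrast in PUBMED:24953266 can be illustrated with a simple contingency-table test. The sketch below reconstructs approximate counts from the reported percentages and totals; these are illustrative figures, not the study's actual table, and the abstract does not state which test the authors used:

```python
# Illustrative contingency-table re-test of the liver-only metastasis
# difference in PUBMED:24953266. Counts are reconstructed from the reported
# percentages (2.2% of 160 vs 48.1% of 1,947), so they approximate, rather
# than reproduce, the study's actual table.
from scipy.stats import chi2_contingency

n_wa, n_na = 160, 1947
liver_wa = round(0.022 * n_wa)   # ~4 West African patients
liver_na = round(0.481 * n_na)   # ~937 North American patients

table = [[liver_wa, n_wa - liver_wa],
         [liver_na, n_na - liver_na]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.1e}")  # p falls far below the reported .01 threshold
```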
Instruction: Unilateral auditory neuropathy spectrum disorder: retrocochlear lesion in disguise? Abstracts: abstract_id: PUBMED:25404198 Unilateral auditory neuropathy spectrum disorder: retrocochlear lesion in disguise? Objective: To investigate whether the aetiology for hearing impairment in neonates with unilateral auditory neuropathy spectrum disorder could be explained by structural abnormalities such as cochlear nerve aplasia, a cerebellopontine angle tumour or another identifiable lesion. Methods: In this prospective case series, 17 neonates were diagnosed with unilateral auditory neuropathy spectrum disorder on electrophysiological testing. Diagnostic audiology testing, including auditory brainstem response testing, was supplemented with computed tomography and/or magnetic resonance imaging. Results: Ten of the neonates (59 per cent) showed evidence for cochlear nerve aplasia. Of the remaining seven, four were shown to have another abnormality of the temporal bone on imaging. Only three neonates (18 per cent) were not diagnosed with cochlear nerve aplasia or another lesion. Three computed tomography scans were reported as normal, but subsequent magnetic resonance imaging revealed cochlear nerve aplasia. Conclusion: Auditory neuropathy spectrum disorder as a unilateral condition mandates further investigation for a definitive diagnosis. This series demonstrates that most neonates with unilateral auditory neuropathy spectrum disorder had pathology as visualised on computed tomography and/or magnetic resonance imaging scans. Magnetic resonance imaging is an appropriate first-line imaging modality. abstract_id: PUBMED:34785080 Vestibular and radiological characteristics of children affected by unilateral auditory neuropathy spectrum disorder. Objective: Auditory neuropathy spectrum disorders (ANSD) are defined by the association of a preserved outer hair cell function and an impaired auditory nerve neural response, and present mostly bilaterally. Unilateral ANSD is consequently seldom described, most frequently as isolated cases. This study aims to describe the audiological, vestibular and radiological characteristics of a population of children with unilateral ANSD. Material And Methods: We isolated 22 patients with unilateral ANSD, 12 boys and 10 girls aged 0 to 95 months, in a database of auditory evoked potentials. We reviewed the audiological, radiological and vestibular assessments. The audiological assessment included tympanometry, otoacoustic emission recording and auditory evoked potentials. Otolithic function was assessed by recording cervical vestibular evoked myogenic potentials. The canal function was determined by video head impulse test and/or caloric test. The radiological evaluation consisted of an MRI of the internal auditory canal. Results: Many patients with a type A tympanometry had no response to otoacoustic emission testing (53.8%), in the presence of a cochlear microphonic potential. Vestibular assessment was performed in 9 of the 22 patients. Four children had impaired otolithic and/or canal function. MRI evaluation of the inner ear was performed in 18 patients. Aplasia or hypoplasia of the cochlear nerve was found in 17 of them. MRI showed additional vestibular or brainstem abnormalities in 7 of the 18 children. All children with impaired vestibular function had vestibular or brainstem radiological alterations in addition to cochlear branch aplasia or hypoplasia.
Conclusions: Radiological and vestibular abnormalities are common in children with unilateral ANSD and suggest that a radiological and vestibular assessment is required. abstract_id: PUBMED:19927687 Late-onset unilateral auditory neuropathy/dysynchrony: a case study. Background: Auditory neuropathy/dysynchrony (AN/AD) typically develops early in life and is bilateral in nature. Purpose: Herein, we describe an unusual finding of late-onset unilateral AN/AD based on reported case history and audiometric findings. Research Design: A 64-year-old female presented with a complaint of a progressive unilateral hearing loss that had developed over the previous two to three years. She underwent an extensive behavioral/electrophysiological test battery. Results: Magnetic resonance imaging was negative for internal auditory canal mass or lesion. A unilateral notched loss centered at 1000 Hz and other findings were consistent with late-onset unilateral AN/AD: observable bilateral otoacoustic emissions and cochlear microphonics, absent middle-ear acoustic reflexes with stimulation on the affected side, abnormal auditory brain stem response on the affected side, and poorer speech recognition than would be predicted by the audiogram. Middle-latency and long-latency evoked responses were present bilaterally, although with lower amplitudes on the affected side. abstract_id: PUBMED:37645035 Language and health-related quality of life outcomes of children early-detected with unilateral and mild bilateral hearing loss. Introduction: We aimed to describe the language and health-related quality of life (HRQoL) outcomes of children early-identified with unilateral or mild bilateral permanent hearing loss. This was a cross-sectional community-based study of children with mild bilateral or unilateral permanent hearing loss (including unilateral auditory neuropathy spectrum disorder (ANSD)), drawn from a population-based databank in Victoria, Australia. Methods: Enrolment in this databank is independent of early intervention and amplification approaches. Language and caregiver-reported HRQoL outcomes are described by type and degree of loss at three timepoints across child development: at age 2 years (n = 255), 5-7 years (n = 173) and 9-12 years (n = 45). Results: Across all age groups, average language outcomes were poorer than population normative scores by between one half and two thirds of a standard deviation. Children with mild bilateral hearing loss demonstrated poorer average language outcomes than children with unilateral hearing loss, particularly at younger ages. Children with unilateral ANSD showed language outcomes comparable to their peers with unilateral profound hearing loss. Children had poorer HRQoL psychosocial scores compared to physical scores, without obvious patterns of outcomes linked to degree or type of hearing loss. Discussion: This study demonstrates that children with early-identified unilateral or mild bilateral hearing loss have average language and HRQoL outcomes poorer than population normative expectations from an early age. These outcomes are observed at later ages across childhood. These findings provide a contemporary description of language and quality of life outcomes for children identified but not targeted by universal newborn hearing screening and raise questions of how to provide better support for these populations of children and their families. abstract_id: PUBMED:22073929 Unilateral auditory neuropathy spectrum disorder.
Conclusions: The majority of the patients with unilateral auditory neuropathy spectrum disorder (UANSD) were pediatric and most showed a high degree of hearing loss at diagnosis. Abnormal auditory brainstem response (ABR) and preserved otoacoustic emissions (OAEs) and/or cochlear microphonics (CM) were important features to differentiate it from common sensorineural deafness and central nerve hearing loss. Objective: To identify the clinical characteristics of patients with UANSD. Methods: This was a retrospective study involving 14 patients diagnosed as having UANSD between 2004 and 2010 in the Chinese PLA Hospital. Results: In all, 50% of the cases were male (1:1 sex ratio) and the average age of onset was 4.1 years. Of the 14 affected ears with UANSD in these cases, 6 were left-sided, while 8 were right-sided. Of the 14 contralateral ears, 4 presented with sensorineural hearing loss, while the other 10 showed normal hearing. The degree of hearing loss in the 14 affected ears varied, including moderate in 1, moderately severe in 4, severe in 5, and profound in 4. ABRs were absent in the 14 affected ears, while OAEs and/or CM were present. abstract_id: PUBMED:32821455 Cochlear Implantation in a Case of Auditory Neuropathy Spectrum Disorder with CAPOS Syndrome. Auditory neuropathy spectrum disorder (ANSD) is a hearing disorder characterized by normal outer hair cell function but disrupted neural synchrony in the afferent auditory pathway. CAPOS (cerebellar ataxia, areflexia, pes cavus, optic atrophy, and sensorineural hearing loss) syndrome can manifest with ANSD, and this rare situation and its audiological rehabilitation outcomes have not been well documented in the literature. We aim to present a cochlear implant user with CAPOS syndrome and ANSD. A 14-year-old girl was diagnosed with ANSD and CAPOS syndrome. She received a unilateral cochlear implant (CI). Her hearing sensitivity and speech perception abilities have improved with the CI. Also, she has good music perception ability, measured with the Turkish version of the Clinical Assessment of Music Perception Test. After detailed audiological evaluation, a CI could be a good option for patients who have ANSD and CAPOS syndrome. abstract_id: PUBMED:36131687 How Craniofacial Anomalies Can Obscure the Diagnosis of Unilateral Auditory Neuropathy Spectrum Disorder: A Case Study. Comorbidities are common with craniofacial anomalies and can include auditory neuropathy spectrum disorder (ANSD). Identification of these comorbidities, such as a diagnosis of ANSD, has increased with expanded multi-disciplinary care teams and the use of objective auditory measures. Patients with craniofacial anomalies often have complex medical histories including extreme prematurity and jaundice, which may lead to the development of additional comorbidities. The 5-year-old boy who is the subject of this case study presented with noted developmental delays and was residing in foster care with an unknown family/medical history. Because he initially presented with middle ear dysfunction exacerbated by craniofacial anomalies, the underlying cause of his speech delays was missed. Behavioral audiometry was attempted with inaccurate results. Surgery was performed to correct the craniofacial anomalies and middle ear dysfunction, and was followed by auditory brainstem response (ABR) testing. The boy healed well without complications.
The ABR revealed ANSD, hearing aids were fitted accordingly, regular speech therapy was initiated, and the boy continues to make steady progress toward developmental milestones. The purpose of this case study was to highlight the course of identification and treatment of a complex case that may provide insight into determining a differential diagnosis of ANSD when initially obscured by craniofacial anomalies. abstract_id: PUBMED:27843504 Relationship Between Patients with Clinical Auditory Neuropathy Spectrum Disorder and Mutations in Gjb2 Gene. Auditory neuropathy is a condition in which there is dyssynchrony in nerve conduction along the auditory nerve fibers. There is no evidence about the relationship between patients with clinical auditory neuropathy spectrum disorder and mutations in the GJB2 gene. There are only two studies about this topic in the medical literature. Connexin 26 (GJB2 gene) mutations are common causes of genetic deafness in many populations and have also been reported in subjects with auditory neuropathy. Objective: To analyze the relationship between patients with a clinical diagnosis of auditory neuropathy spectrum disorder and mutations in the GJB2 gene. Patients And Methods: Study Design - Retrospective analysis and genetic evaluation. Setting - Tertiary referral center. Subjects - 40 patients with Auditory Neuropathy Spectrum Disorder. Intervention - Clinical information and genetic evaluation (GJB2 gene) were analyzed. Results: Biallelic mutations that accounted for hearing loss (HL) were found in three patients, all with the c.35delG mutation in a homozygous state. The splice site mutation IVS1+1G>A was detected in a heterozygous state in one individual. However, since the second mutant allele was not identified, it was not possible to establish its correlation with the phenotype. Conclusion: Mutations in the GJB2 gene were found in 7.5% of the patients with ANSD. We found no relationship between clinical auditory neuropathy spectrum disorder and mutations in the GJB2 gene (p>0.05). abstract_id: PUBMED:32290039 Auditory Neuropathy Spectrum Disorders: From Diagnosis to Treatment: Literature Review and Case Reports. Auditory neuropathy spectrum disorder (ANSD) refers to a range of hearing impairments characterized by deteriorated speech perception, despite relatively preserved pure-tone detection thresholds. Affected individuals usually present with abnormal auditory brainstem responses (ABRs), but normal otoacoustic emissions (OAEs). These electrophysiological characteristics have led to the hypothesis that ANSD may be caused by various dysfunctions at the cochlear inner hair cell (IHC) and spiral ganglion neuron (SGN) levels, while the activity of outer hair cells (OHCs) is preserved, resulting in discrepancies between pure-tone and speech comprehension thresholds. The exact prevalence of ANSD remains unknown; clinical findings show a large variability among subjects with hearing impairment ranging from mild to profound hearing loss. A wide range of prenatal and postnatal etiologies have been proposed. The study of genetics and of the implicated sites of lesion correlated with clinical findings has also led to a better understanding of the molecular mechanisms underlying the various forms of ANSD, and may guide clinicians in better screening, assessment and treatment of ANSD patients.
Besides OAEs and ABRs, audiological assessment includes stapedial reflex measurements, supraliminal psychoacoustic tests, electrocochleography (ECochG), auditory steady-state responses (ASSRs) and cortical auditory evoked potentials (CAEPs). Hearing aids are indicated in the treatment of ANSD with mild to moderate hearing loss, whereas cochlear implantation is the first choice of treatment in cases of profound hearing loss, especially in IHC presynaptic disorders, or in cases of poor auditory outcomes with conventional hearing aids. abstract_id: PUBMED:26557342 Cervical and Ocular Vestibular Evoked Myogenic Potentials Test Results in Individuals with Auditory Neuropathy Spectrum Disorders. Auditory neuropathy spectrum disorder is a clinical disorder where the outer hair cell functioning is intact but the functioning of the auditory nerve is affected. Since the 8th nerve comprises both auditory and vestibular nerve fibers, the vestibular nerve might also be affected. Hence, the current study was carried out in order to determine the functioning of the vestibular nerve in individuals with auditory neuropathy. A total of 11 participants were considered for the current study. Cervical vestibular evoked myogenic potentials (cVEMPs) and ocular vestibular evoked myogenic potentials (oVEMPs) were recorded using the conventional protocol. The oVEMPs were absent in all participants (100%), whereas the cVEMPs were absent in 20 of the 22 ears tested (90.9%). The results of the present study indicate a high incidence of vestibular involvement in individuals with auditory neuropathy spectrum disorders and necessitate the inclusion of vestibular tests in the test battery used to assess these individuals. Answer: The investigation into unilateral auditory neuropathy spectrum disorder (ANSD) suggests that it could indeed be a retrocochlear lesion in disguise. In a prospective case series, 17 neonates diagnosed with unilateral ANSD underwent diagnostic audiology testing supplemented with computed tomography (CT) and/or magnetic resonance imaging (MRI). The results showed that 59% of these neonates had evidence of cochlear nerve aplasia, and 18% did not have cochlear nerve aplasia or another lesion identified. Three CT scans initially reported as normal were later found to show cochlear nerve aplasia on subsequent MRI, indicating that MRI is a more appropriate first-line imaging modality for these cases (PUBMED:25404198).
Instruction: Does rigid instrumentation increase the fusion rate in one-level anterior cervical discectomy and fusion? Abstracts: abstract_id: PUBMED:15541695 Does rigid instrumentation increase the fusion rate in one-level anterior cervical discectomy and fusion? Background Context: Although plate fixation enhances the fusion rate in multilevel anterior cervical discectomy and fusion (ACDF), debate exists regarding the efficacy of nonplating to rigid plate fixation in one-level ACDF. Purpose: To determine the efficacy of nonplating to rigid plate fixation in regards to fusion rate and clinical outcome in patients undergoing one-level ACDF with autograft. Study Design: A review of 69 consecutive patients who underwent one-level ACDF with autograft and with or without rigid anterior cervical plate fixation. Patient Sample: Sixty-nine patients who underwent one-level ACDF (mean age, 45 years) were evaluated for radiographic evidence of fusion (mean, 14 months) and for clinical outcome. All patients received tricortical iliac crest autografts. Disc space distraction was 2 mm, the grafts were inserted with the cortical surface positioned anteriorly, and each graft was countersunk 2 mm from the anterior vertebral border. Thirty-eight patients underwent nonplated ACDF and 31 patients underwent plated ACDF. Eighteen Orion (Sofamor-Danek, Memphis, TN), eight Atlantis (Sofamor-Danek) and five PEAK polyaxial (Depuy-Acromed, Rayham, MA) anterior cervical plating systems were used. Rigid plate fixation was used in all patients with instrumentation. Postoperatively, hard collars were worn 6 to 8 weeks in nonplated patients and soft collars were worn for 3 to 4 weeks in plated patients. Twenty-four patients were smokers (54.2% nonplating; 45.8% plating) and work-related injuries entailed 23 patients (47.8% nonplating; 52.2% plating). Outcome Measures: Fusion was assessed based on last follow-up of lateral neutral, flexion and extension radiographs. Radiographs were evaluated blindly to assess fusion and instrumentation integrity between nonplated and plated patients. Clinical outcomes were assessed with the Cervical Spine Outcomes Questionnaire and also assessed on last follow-up as excellent, good, fair or poor based on Odom's criteria. Methods: Fusion rate and postoperative clinical outcome were assessed in 69 patients who underwent one-level ACDF with autograft and with or without rigid anterior plate fixation. Additional risk factors were also analyzed. Statistical significance was established at p<.05. Results: Sixty-six patients (95.7%) achieved a solid fusion (100% nonplated; 90.3% plated). Nonunions occurred in three patients (1 smoker; 2 nonsmokers) with Orion instrumentation. Slight screw penetration into the involved and uninvolved interbody spaces occurred in one patient who was a nonsmoker and did not achieve fusion. One superficial cervical wound infection was noted in a nonplated patient. No other intraoperative or postoperative complications were noted. No statistically significant difference was noted between nonplating to rigid plating upon fusion rate (p>.05). All nonunions occurred at the C5-C6 level. Mean estimated intraoperative blood loss was significantly greater in plated patients (p=.043). Revision surgery involved 9.7% of the plated patients, whereas none of the nonplated patients required reoperation. Postoperative clinical outcome was assessed in all patients (mean, 21 months). 
Excellent results were noted in 18.8%, good results in 72.5% and fair results in 8.7% of the patients. Nonunion patients reported satisfactory clinical outcome. No statistically significant association with clinical outcome was noted for fusion status, the presence of a work-related injury, or the use of plating (p>.05). Demographics and history of smoking were not factors influencing fusion or clinical outcome in this series (p>.05). The effect on fusion by various plate types could not be discerned from this study. Conclusion: Fusion rates of 100% and 90.3% were obtained for one-level nonplated and plated ACDF procedures with autograft, respectively. The effects of smoking or level of fusion could not be discerned from these one-level cases. Excellent and good clinical outcome results were obtained for 91.3% of patients. Nonplating or rigid plate fixation for ACDF in properly selected patients to treat radiculopathy with or without myelopathy has a high fusion rate and yields a satisfactory clinical outcome. Although controversy exists as to the efficacy of rigid plate fixation in one-level ACDF, solid bone fusion can be adequately obtained without plate fixation and instrumentation-related complications can be avoided. In line with the literature, plate fixation should be reserved for patients unwilling or unable to wear a hard orthosis postoperatively for an extended period of time or for those patients who seek a quicker return to normal activities. Proper patient selection, meticulous operative technique and postoperative care are essential to promote optimal graft-host incorporation. abstract_id: PUBMED:35972886 Segmental Slope is a Predictor of Fusion Rate in Single Level Anterior Cervical Discectomy and Fusion. Study Design: Retrospective study. Objectives: To explore preoperative segmental slope as a predictor of fusion rate after single-level anterior cervical discectomy and fusion (ACDF). Methods: Data from consecutive patients with single-level cervical spondylosis who underwent ACDF between 2011 and January 2019 were retrospectively reviewed. Preoperative parameters including baseline characteristics, segment slope, T1 slope, range of motion, intervertebral disk height, and bone density were evaluated. Data were analyzed using Student's t-test, Mann-Whitney U test, χ2 test, Fisher's exact test, multivariate logistic regression analysis, and receiver operating characteristic (ROC) curve analysis. Results: In total, 253 patients were selected. The mean follow-up was 32.1 months (range 24-99 months). Male sex, body mass index, preoperative segment slope, T1 slope, operative levels, and osteoporosis or osteopenia were associated with non-fusion at an early stage after surgery. However, no statistically significant difference was observed at the last follow-up. Multivariate logistic regression analysis revealed that the preoperative segment slope was an independent predictor for non-fusion at 3, 6, and 12 months postoperatively. Osteoporosis or osteopenia was an independent predictor of non-fusion at 6 and 12 months postoperatively. The area under the curve was .874 at 3 months (P < .001, cutoff value 10.21°), .888 at 6 months (P < .001, cutoff value 14.56°), and .825 at 12 months (P < .001, cutoff value 21.08°). Conclusion: Preoperative segment slope can be used as a predictor of early fusion rate after single-level ACDF. We determined detailed cutoff values. This study may help surgeons take measures in advance to promote early fusion.
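The cutoff values in PUBMED:35972886 (e.g., 10.21 degrees at 3 months) are the kind typically read off a receiver operating characteristic curve by maximizing Youden's index; the abstract does not state which criterion was used, so both the criterion and the data in this sketch are assumptions:

```python
# Deriving an ROC-based cutoff for preoperative segmental slope, in the
# spirit of PUBMED:35972886. The data are synthetic and the Youden criterion
# is an assumption; the abstract reports AUCs and cutoffs but not the method.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
slope_fused = rng.normal(8.0, 4.0, 200)     # hypothetical slopes (degrees), fused
slope_nonfused = rng.normal(16.0, 5.0, 60)  # hypothetical slopes, non-fused
x = np.r_[slope_fused, slope_nonfused]
y = np.r_[np.zeros(200), np.ones(60)]       # 1 = non-fusion at follow-up

fpr, tpr, thresholds = roc_curve(y, x)
youden = tpr - fpr                          # Youden's J statistic per threshold
cutoff = thresholds[np.argmax(youden)]
print(f"AUC = {roc_auc_score(y, x):.3f}, Youden cutoff = {cutoff:.2f} degrees")
```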
abstract_id: PUBMED:28473949 Minimally Invasive Anterior Cervical Discectomy Without Fusion to Treat Cervical Disc Herniations in Patients with Previous Cervical Fusions. Adjacent level cervical disc disease and secondarily progressive disc space degeneration that develops years after previously successful anterior cervical fusion at one or more levels is a common but potentially complex problem to manage. The patient is faced with the option of further open surgery which involves adding another level of disc removal with fusion, posterior decompression, and stabilization, or possibly replacing the degenerated disc with an artificial disc construct. These three cases demonstrate that some patients, especially after minor trauma, may have small herniated discs as the cause for their new symptoms rather than progressive segmental degeneration. Each patient became symptomatic after minor trauma three to six years after the original fusion and had no or minimal radiologic changes of narrowing of the disc or spur formation commonly seen in adjacent level disease, but rather had magnetic resonance imaging (MRI) findings typical of small herniated discs. After failing multiple months of conservative treatment they were offered surgery as an option. Subsequently, all three were successfully treated with minimal anterior discectomy without fusion. There are no reports in the literature of using minimal anterior cervical discectomy without fusion in previously fused patients. This report reviews the background of adjacent level cervical disease, the various biomechanical explanations for developing a new disc herniation rather than progressive segmental degeneration, and how anterior cervical discectomy without fusion can be an option in these patients. abstract_id: PUBMED:24741136 Outcome of single level anterior cervical discectomy and fusion using nano-hydroxyapatite/polyamide-66 cage. Background: Cages have been widely used for the anterior reconstruction and fusion of the cervical spine. Nonmetal cages have become popular due to the prominent stress shielding and high subsidence rate of metallic cages. This study aims to assess fusion with an n-HA/PA66 cage following one-level anterior cervical discectomy. Materials And Methods: Forty-seven consecutive patients with radiculopathy or myelopathy underwent single-level ACDF using an n-HA/PA66 cage. We measured the segmental lordosis and intervertebral disc height on preoperative radiographs and then calculated the loss of segmental lordosis correction and cage subsidence over follow-up. Fusion status was evaluated on CT scans. Odom criteria, Japanese Orthopedic Association (JOA) and Visual Analog Pain Scale (VAS) scores were used to assess the clinical results. Quantitative data were analyzed statistically, while categorical data were analyzed by the χ2 test. Results: Mean correction of segmental lordosis from surgery was 6.9 ± 3.0° with a mean loss of correction of 1.7 ± 1.9°. Mean cage subsidence was 1.2 ± 0.6 mm and the rate of cage subsidence (>2 mm) was 2%. The rate of fusion success was 100%. No significant difference was found in clinical or radiographic outcomes between the patients (n=27) fused with an n-HA/PA66 cage containing pure local bone and those (n=20) with hybrid bone (local bone combined with bone from the iliac crest). Conclusions: The n-HA/PA66 cage is a satisfactory reconstructing implant after anterior cervical discectomy, which can effectively promote bone graft fusion and prevent cage subsidence.
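The 2% subsidence rate in PUBMED:24741136 corresponds to roughly 1 of 47 patients; a confidence interval, which the abstract does not report, shows how imprecise such a small series is. A minimal sketch using a Wilson score interval:

```python
# 95% Wilson score interval for the >2 mm cage subsidence rate in
# PUBMED:24741136 (about 1 of 47 patients). The interval is not reported in
# the abstract and is computed here purely for illustration.
from statsmodels.stats.proportion import proportion_confint

lo, hi = proportion_confint(count=1, nobs=47, alpha=0.05, method="wilson")
print(f"subsidence 1/47 = {1/47:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```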
abstract_id: PUBMED:27055447 Cervical disc replacement - emerging equivalency to anterior cervical discectomy and fusion. Purpose: Cervical disc replacement has become an acceptable alternative to anterior cervical fusion for the surgical treatment of cervical spine spondylosis resulting in radiculopathy or myelopathy following anterior discectomy and decompression. This concise overview considers the current state of knowledge regarding the continued debate over the role of cervical disc replacement, with an update in light of the latest clinical trial results. Methods: A literature review was performed identifying clinical trials pertaining to the use of cervical disc replacement compared to cervical discectomy and fusion. Single-level and two-level disease were considered. Outcome data from the major clinical trials were reviewed and salient points identified. Results: With lengthier follow-up data becoming available, the equivalence of CDR in appropriately selected cases is becoming clear. This is chiefly manifested by reduced re-operation rates and reduced incidence of adjacent level disease in those treated with arthroplasty. Conclusion: Cervical disc replacement shows emerging equivalence in outcomes compared to the gold standard anterior cervical discectomy and fusion. Further longer-term results are anticipated to confirm this trend. abstract_id: PUBMED:38421334 The Use of Osteobiologics in Single versus Multi-Level Anterior Cervical Discectomy and Fusion: A Systematic Review. Study Design: Systematic literature review. Objectives: In this study, we assessed evidence for the use of osteobiologics in single vs multi-level anterior cervical discectomy and fusion (ACDF) in patients with cervical spine degeneration. The primary objective was to compare fusion rates after single and multi-level surgery with different osteobiologics. Secondary objectives were to compare differences in patient reported outcome measures (PROMs) and complications. Methods: After a global team of reviewers was selected, a systematic review using different repositories was performed, conforming to PRISMA and GRADE guidelines. In total, 1206 articles were identified, and after applying inclusion and exclusion criteria, 11 articles were eligible for analysis. Extracted data included fusion rates, definition of fusion, patient reported outcome measures, types of osteobiologics used, complications, adverse events and revisions. Results: Fusion rates ranged from 87.7% to 100% for bone morphogenetic protein 2 (BMP-2) and 88.6% to 94.7% for demineralized bone matrix, while fusion rates reported for other osteobiologics were lower. All included studies showed PROMs improved significantly for each osteobiologic. However, no differences were reported when comparing osteobiologics, or when comparing single vs multi-level surgery specifically. Conclusion: The highest fusion rates after 2-level ACDF for cervical spine degeneration were reported when BMP-2 was used. However, PROMs did not differ between the different osteobiologics. Further blinded randomized trials should be performed to compare the use of BMP-2 in single vs multi-level ACDF specifically. abstract_id: PUBMED:29796425 Dyspnea in the Supine Position after Anterior Cervical Discectomy and Fusion. Posterior occipitocervicothoracic fusion in a flexed position may cause dyspnea, and anterior upper cervical fusion may cause the onset of obstructive sleep apnea.
Here, we present an unusual case of dyspnea in the supine position after a C5-C6 anterior cervical discectomy and fusion. abstract_id: PUBMED:31893147 Comparison of 30-day outcome following anterior cervical discectomy and fusion with or without instrumentation for cervical spondylosis: A review of 2352 elective cases. Background: Anterior cervical discectomy and fusion (ACDF) is a commonly performed procedure to address cervical myeloradiculopathy. However, 30-day outcomes after additional plating/instrumentation remain unclear. Methods: The authors reviewed the National Surgical Quality Improvement Program database to identify all elective ACDF cases with or without instrumentation for patients with cervical spondylosis with or without myelopathy from 2011 to 2013, using Current Procedural Terminology and International Classification of Diseases-9 codes. We identified 2352 cases and subdivided these into two cohorts based on instrumentation procedures (588 cases without instrumentation and 1764 cases with instrumentation). Baseline differences between the two cohorts were adjusted for by propensity score matching, yielding 583 well-matched pairs. Results: Following propensity matching, the authors observed no significant difference in 30-day complication rates (prematch, 2.4% vs. 2.4%; and postmatch, 2.4% vs. 1.7%), readmission (prematch, 4.1% vs. 3.2%; and postmatch, 3.9% vs. 3.3%), and reoperation (prematch 0.9% vs. 1.8%; and postmatch 0.9% vs. 1.5%). Conclusion: Our results demonstrate similar 30-day outcomes in both cohorts and suggest that instrumentation can be safely implemented in the setting of ACDF. abstract_id: PUBMED:28120067 Anterior Cervical Discectomy and Fusion with a Compressive C-JAWS Staple. Background: Anterior cervical discectomy and fusion is currently the most commonly used technique in cervical surgery. However, implantation of a traditional plate is time-consuming and exposes the patient to additional adverse events. In this study, we analyzed results in patients who underwent anterior cervical discectomy and fusion with C-JAWS fixation. The C-JAWS device is a new cervical compressive staple developed to stabilize the spacer. Methods: At our department, between January 2012 and December 2013, nine consecutive patients with cervical spondylopathy underwent an anterior cervical discectomy and fusion procedure in which we used a polyether ether ketone cervical spacer prefilled with bone substitute and secured with a cervical compressive staple. The Neck Disability Index (NDI) score and visual analog scale (VAS) scores for neck and arm pain, as well as radiographic examinations, were adopted to assess postoperative outcome and fusion. Results: Bony fusion was observed in all of the nine patients, and no serious surgery-related or implant-related complications were observed during the operation or in the postoperative period. The average operative time was 60.3 ± 11.6 min. The average hospital stay was 3.2 ± 0.8 days. The average skin incision length was 3.0 ± 0.3 cm. The average follow-up was 18.4 ± 4.3 months. At the last follow-up, the NDI had changed from the baseline value of 23.4 ± 10.3 to 7.1 ± 4.8, and the VAS values for neck and arm pain had changed from 6.1 ± 1.0 and 4.6 ± 1.6, respectively, to 2.3 ± 1.7 and 2.4 ± 1.1, respectively. The patients' subjective satisfaction was excellent in six and good in three. Conclusions: Without screws, this low-profile compressive staple, the C-JAWS, performed well in anterior cervical discectomy and fusion surgeries.
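The 583 well-matched pairs in PUBMED:31893147 come from propensity score matching, whose details (model, caliper, matching algorithm) the abstract does not specify. A minimal sketch of one common variant, 1:1 greedy nearest-neighbor matching on a logistic-regression propensity score, using synthetic data:

```python
# Minimal 1:1 greedy nearest-neighbor propensity score matching sketch, in
# the spirit of PUBMED:31893147. Covariates, treatment assignment, and the
# matching algorithm are all assumptions; the abstract gives no details.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2352
X = rng.normal(size=(n, 3))          # synthetic baseline covariates
instrumented = rng.random(n) < 0.75  # roughly the 1764 vs 588 split

ps = LogisticRegression().fit(X, instrumented).predict_proba(X)[:, 1]
treated_idx = np.where(instrumented)[0]
control_idx = np.where(~instrumented)[0]

pairs, used = [], set()
for c in control_idx:  # match each non-instrumented case to the nearest unused treated case
    for t in treated_idx[np.argsort(np.abs(ps[treated_idx] - ps[c]))]:
        if t not in used:
            used.add(t)
            pairs.append((c, t))
            break
print(f"matched pairs: {len(pairs)}")
```

In practice a caliper on the propensity score distance would discard poor matches, which is one way a study can end with slightly fewer pairs than untreated cases.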
abstract_id: PUBMED:31629142 Prognosis of Hardware-Related Problems in Anterior Cervical Discectomy and Fusion with Cage and Plate Constructs. Objective: To analyze hardware-related problems and their prognoses after anterior cervical discectomy and fusion (ACDF) using cages and plates for degenerative and traumatic cervical disc diseases. Methods: The study included 808 patients who underwent anterior cervical discectomy and fusion for degenerative and traumatic disc diseases with >1 year of follow-up. We investigated time of onset and progression of problems associated with instrumentation and cage usage. The mean follow-up time was 3.4 years. Type of plate, range and level of fusion, patient factors (age, sex, body mass index, and bone mineral density), and local kyphosis were evaluated. Results: Complications were found in 132 cases (16.5%), including subsidence in 69 cases, plate loosening in 8 cases, screw loosening in 8 cases, screw breakage in 1 case, and multiple complications in 36 cases. In 3 cases, additional posterior cervical surgery was performed. One case needed hardware revision. There were no esophageal, tracheal, or neurovascular structural injuries secondary to metal failure. There were no significant differences in type of plate, level of surgery, or patient factors. The greater the number of fusion segments, the greater the incidence of complications (P = 0.001). The clinical outcomes improved regardless of the complications (P = 0.083). Conclusions: Most hardware-related complications are not symptomatic and can be treated conservatively. Only a few cases need revision surgery. Precise surgical techniques are needed in multilevel anterior cervical discectomy and fusion (>3 levels) because of the increased complication rate. Answer: The use of rigid instrumentation in one-level anterior cervical discectomy and fusion (ACDF) does not appear to significantly increase the fusion rate compared to nonplated ACDF. A study reviewed 69 consecutive patients who underwent one-level ACDF with autograft, with 38 patients undergoing nonplated ACDF and 31 patients undergoing plated ACDF. The results showed that 66 patients (95.7%) achieved a solid fusion, with a 100% fusion rate in nonplated patients and a 90.3% fusion rate in plated patients. No statistically significant difference was noted between nonplating and rigid plating upon fusion rate (p>.05). The study concluded that solid bone fusion can be adequately obtained without plate fixation, and instrumentation-related complications can be avoided. It suggested that plate fixation should be reserved for patients unwilling or unable to wear a hard orthosis postoperatively for an extended period of time or for those who seek a quicker return to normal activities (PUBMED:15541695). In contrast, another study identified preoperative segmental slope as an independent predictor for non-fusion at 3, 6, and 12 months postoperatively in single-level ACDF, but it did not directly address the impact of rigid instrumentation on fusion rates (PUBMED:35972886). Other studies have focused on different aspects of ACDF, such as the use of different types of cages (PUBMED:24741136), cervical disc replacement as an alternative to ACDF (PUBMED:27055447), the use of osteobiologics (PUBMED:38421334), and complications related to hardware (PUBMED:31629142), but they do not provide direct evidence regarding the specific question of whether rigid instrumentation increases the fusion rate in one-level ACDF. 
Therefore, based on the available evidence from the study with direct relevance to the question (PUBMED:15541695), rigid instrumentation does not significantly increase the fusion rate in one-level ACDF.
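As an aside for readers checking the arithmetic: the fusion-rate comparison quoted above (38/38 nonplated vs 28/31 plated patients fused) can be reproduced with a standard Fisher's exact test. The following minimal Python sketch is illustrative only, assuming scipy is available; it is not code from any of the cited studies.

from scipy.stats import fisher_exact

# Counts as quoted from PUBMED:15541695:
# nonplated ACDF: 38 of 38 fused; plated ACDF: 28 of 31 fused (90.3%)
table = [[38, 0],   # nonplated: fused, not fused
         [28, 3]]   # plated: fused, not fused

_, p_value = fisher_exact(table, alternative="two-sided")
print(f"Fisher's exact test: p = {p_value:.3f}")

The two-sided p-value comes out at roughly 0.09, above the 0.05 threshold, which is consistent with the reported finding of no statistically significant difference (p > .05).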
Instruction: Are integrated HIV services less stigmatizing than stand-alone models of care? Abstracts: abstract_id: PUBMED:23336726 Are integrated HIV services less stigmatizing than stand-alone models of care? A comparative case study from Swaziland. Introduction: Integrating HIV with primary health services has the potential to reduce HIV-related stigma through delivering care in settings disassociated with HIV. This study investigated the relationship between integrated care and felt stigma. The study design was a comparative case study of four models of HIV care in Swaziland, ranging from fully integrated to fully stand-alone HIV care. Methods: An exit survey (N=602) measured differences in felt stigma across model of care; the primary outcome "perception of HIV status exposure through clinic attendance" was analyzed using multivariable logistic regression. In-depth interviews (N=22) explored whether and how measured differences in stigma experiences were related to service integration. Results: There were significant differences in perceived status exposure across models of care. After adjustment for potential confounding between sites, those at a partially integrated site and a partially stand-alone site had greater odds of perceived status exposure than those at the fully stand-alone site (aOR 3.33, 95% CI 1.98-5.60; and aOR 11.84, 95% CI 6.89-20.36, respectively). There was no difference between the fully stand-alone and the fully integrated clinic. Qualitative data suggested that many clients at HIV-only sites felt greater confidentiality knowing that those around them were positive, and support was gained from other HIV care clients. Confidentiality was maintained in various ways, even in stand-alone sites, through separate waiting areas for HIV testing and HIV treatment, and careful clinic and room labelling. Conclusions: The relationship between model of care and stigma was complex, and the hypothesis that stigma is higher at stand-alone sites did not hold true in this high prevalence setting. Policy-makers should ensure that service integration does not increase stigma, in particular within partially integrated models of care. abstract_id: PUBMED:25978632 Exploring the Feasibility of Service Integration in a Low-Income Setting: A Mixed Methods Investigation into Different Models of Reproductive Health and HIV Care in Swaziland. Integrating reproductive health (RH) with HIV care is a policy priority in high HIV prevalence settings, despite doubts surrounding its feasibility and varying evidence of effects on health outcomes. The process and outcomes of integrated RH-HIV care were investigated in Swaziland, through a comparative case study of four service models, ranging from fully integrated to fully stand-alone HIV services, selected purposively within one town. A client exit survey (n=602) measured integrated care received and unmet family planning (FP) needs. Descriptive statistics were used to assess the degree of integration per clinic and client demand for services. Logistic regression modelling was used to test the hypothesis that clients at more integrated sites had lower unmet FP needs than clients in a stand-alone site. Qualitative methods included in-depth interviews with clients and providers to explore contextual factors influencing the feasibility of integrated RH-HIV care delivery; data were analysed thematically, combining deductive and inductive approaches. Results demonstrated that clinic models were not as integrated in practice as had been claimed. 
Fragmentation of HIV care was common. Services accessed per provider were no higher at the more integrated clinics compared to stand-alone models (p>0.05), despite reported demand. While women at more integrated sites received more FP and pregnancy counselling than women at stand-alone models, they received condoms (a method of choice) less often, and there was no statistical evidence of difference in unmet FP needs by model of care. Multiple contextual factors influenced integration practices, including provider de-skilling within sub-specialist roles; norms of task-oriented routinised HIV care; perceptions of heavy client loads; imbalanced client-provider interactions hindering articulation of RH needs; and provider motivation challenges. Thus, despite institutional support, factors related to the social context of care inhibited provision of fully integrated RH-HIV services in these clinics. Programmes should move beyond simplistic training and equipment provision if integrated care interventions are to be sustained. abstract_id: PUBMED:23078548 The relationship between service integration and client satisfaction: a mixed methods case study within HIV services in a high prevalence setting in Africa. Integration of HIV with other primary health services, including sexual and reproductive health (SRH) care, is being widely promoted in sub-Saharan Africa. However, evidence on its benefits is equivocal, and some studies indicate client preferences for stand-alone HIV services. A comparative case study investigated the relationship between integration and client satisfaction across different models of care in Swaziland: two clinics integrated with SRH services, and two stand-alone HIV clinics. An exit survey (n=602) measured satisfaction across models using logistic regression. In-depth interviews (n=22), repeated three times, explored satisfaction and how it was influenced by integration or not. Satisfaction was highest at the fully stand-alone clinic; in multivariable models, clients there were more satisfied than those at the partially integrated and partially stand-alone sites (adjusted odds ratio [aOR] 0.45, 95% confidence interval [CI] 0.25-0.81; and aOR 0.53, 95% CI 0.31-0.90, respectively); there was no difference between the fully integrated and fully stand-alone sites. Qualitative analysis suggested that satisfaction was constituted by eleven dimensions of care, including access to HIV medication, interpersonal care, efficiency, and confidentiality. Assumptions on the inherent value of integrated care were challenged in this high HIV prevalence setting, where stand-alone HIV care could be equally if not more satisfying to many clients. Those aiming to scale up access to HIV care in this region may need to consider providing a variety of treatment models for clients with different health care needs. abstract_id: PUBMED:33905822 Adolescents Living With HIV: Contraceptive Services and Uptake in General Primary Healthcare Clinics and Stand-Alone Youth Clinics. Study Objective: To investigate whether current contraceptive uptake, unmet need for contraception, and unintended pregnancy among female adolescents living with HIV (ALHIV) vary with the model of access sites for HIV-related services, that is, stand-alone youth clinics (YCs) versus general primary health clinics (PHCs). Design: Exit interviews.
Setting: Multi-center, primary healthcare facilities. Participants: Female adolescents living with HIV (ALHIV) aged 14-19 years and interviewed between March 1, 2017 and July 31, 2017. Interventions: Structured exit paper-based survey. Participants were divided into stand-alone YC and general PHC groups. Main Outcome Measures: Uptake of contraceptive counseling and products was compared between the 2 clinic groups. Results: A total of 303 female ALHIV completed the exit interviews. For female ALHIV, contraceptive prevalence was high among both general PHCs (84.2%) and stand-alone YCs (82.9%). Contraceptive method mix was similar between both groups of female ALHIV. Compared to stand-alone YCs, the general PHCs had a similar likelihood of contraceptive counseling provision (aOR 0.84, 95% CI [0.55, 1.26]) and provision of contraceptive products, including condoms (aOR 0.54, 95% CI [0.26, 1.16]) and dual-method use (aOR 1.80, 95% CI [0.81, 3.99]), for female ALHIV. Also, there were no significant differences in the likelihood of contraceptive prevalence (aOR 0.99, 95% CI [0.59, 1.67]), unmet need for contraception (aOR 1.64, 95% CI [0.90, 2.98]), and unintended pregnancy (aOR 1.06, 95% CI [0.57, 1.98]) between clinic groups. Conclusion: Mainstreaming youth-friendly services into existing health facilities could achieve similar contraceptive service provision, uptake and outcomes for ALHIV as stand-alone youth clinics. abstract_id: PUBMED:37698755 Integrated Care Models: HIV and Substance Use. Purpose Of Review: Behaviors and practices associated with substance use contribute to lack of HIV virologic suppression and onward transmission. In the USA, many recent HIV outbreaks have been connected with substance use. Evidence-based strategies for integrating care of those at risk for and living with HIV and who use substances continue to evolve. This review, based on scientific and medical literature through March 2023, provides an overview and evaluation of initiatives for integrated care aimed to serve patients at risk for and with HIV and a substance use disorder. Recent Findings: Integrated care services can improve health outcomes for patients at risk for and with HIV and a substance use disorder; for instance, treatment for an opioid use disorder can help improve HIV viral suppression. Brick-and-mortar facilities can provide successful care integration with appropriate clinic leadership to support multidisciplinary care teams, up-to-date provider training, and sufficient pharmacy stock for substance use treatment. Delivering healthcare services to communities (e.g., mobile healthcare clinics and pharmacies, telehealth) may prove to be an effective way to provide integrated services for those with or at risk of HIV and substance use disorders. Incorporating technology (e.g., mobile phone applications) may facilitate integrated care. Other venues, including harm reduction programs and carceral settings, should be targets for integrated services. Venues providing healthcare should invest in integrated care and support legislation that increases access to services related to HIV and substance use. abstract_id: PUBMED:29590366 'Drug adherence levels are falling down again': health worker perceptions of women's service utilization before and after integration of HIV/AIDS services into general care in India. Integrated models of HIV/AIDS service delivery are believed to have advantages over stand-alone models of care from health planners' and providers' perspectives.
Integration models differ, yet there is little information about the influence of differing models on workers' beliefs about the models' efficacy. Here, we examine the effect of integration of HIV care into the general health system in India. In 2014, India replaced its stand-alone model of HIV service delivery, Community Care Centers (CCCs), with a purported integrated model that delivers HIV medical services at general hospitals and HIV psychosocial services at nearby Care and Support Centers (CSCs). We examine 15 health workers' perceptions of how the change from the earlier stand-alone model to the current model impacted women's care in a district in Uttar Pradesh, India. Results indicate that (1) Women's antiretroviral (ART) adherence and utilization of psychosocial support services for HIV/AIDS suffered when services were not provided at one site; (2) Provision of inpatient care in the CCC model offered women living in poverty personal safety in accessing HIV health services and promoted chances of competent ART usage and repeat service utilization; and (3) Although integration of HIV services with the general health system was perceived to improve patient anonymity and decrease chances of HIV-related stigma and discrimination, resource shortages continued to plague the integrated system while shifting costs of time and money to the patients. Findings suggest that integration efforts need to consider the context of service provision and the gendered nature of access to HIV care. abstract_id: PUBMED:28544500 Non-communicable diseases and HIV care and treatment: models of integrated service delivery. Objectives: Non-communicable diseases (NCD) are a growing cause of morbidity in low-income countries, including in people living with human immunodeficiency virus (HIV). Integration of NCD and HIV services can build upon experience with chronic care models from HIV programmes. We describe models of NCD and HIV integration, challenges and lessons learned. Methods: A literature review of published articles on integrated NCD and HIV programs in low-income countries and key informant interviews were conducted with leaders of identified integrated NCD and HIV programs. Information was synthesised to identify models of NCD and HIV service delivery integration. Results: Three models of integration were identified as follows: NCD services integrated into centres originally providing HIV care; HIV care integrated into primary health care (PHC) already offering NCD services; and simultaneous introduction of integrated HIV and NCD services. Major challenges identified included NCD supply chain, human resources, referral systems, patient education, stigma, patient records and monitoring and evaluation. The range of HIV and NCD services varied widely within and across models. Conclusions: Regardless of model of integration, leveraging experience from HIV care models and adapting existing systems and tools is a feasible method to provide efficient care and treatment for the growing numbers of patients with NCDs. Operational research should be conducted to further study how successful models of HIV and NCD integration can be expanded in scope and scaled up by managers and policymakers seeking to address all the chronic care needs of their patients. abstract_id: PUBMED:31167656 The financing of stand-alone palliative care services in Uganda: analysis of the implications for sustainability. Background: Sustainable funding is key for ensuring the quality and coverage of palliative care services.
This study examined the sources of funding for stand-alone palliative care services in Uganda as well as the services' financial sustainability plans. Methods: Researchers conducted a cross-sectional survey of all stand-alone palliative care organizations that have operated for five or more years. Researchers administered a questionnaire survey and interviews covering the audited financial statements, services provided and sustainability plans. Results: Nine of the stand-alone palliative care organizations surveyed had operated for five to 25 years. 93% of the funding for palliative care services comes from donations, while 7% is from income-generating activities. 94% of the donations are from external sources. The Government of Uganda's major contribution is in the form of medicines, training and payment of taxes. All the organizations had good financial records. Six of the fifteen hospices/palliative care providers had sustainability plans included in their operational manuals. The older organizations (those that had been operational for more than 10 years) had better resource mobilization capacity and strategies. Conclusion: The majority of stand-alone palliative care organizations in Uganda are largely donor funded. They have considerable financial sustainability and fund-raising capacity. Government support is in the form of medicines and training. Based on this study's findings, the capacity of the stand-alone palliative care services to raise funds should be increased. The Government of Uganda should include palliative care in the national health system and increase funding for these services. abstract_id: PUBMED:28637733 Integration of sexually transmitted infection (STI) services into HIV care and treatment services for women living with HIV: a systematic review. Objective: To review and critically appraise the existing evidence on integration of sexually transmitted infection (STI) services into HIV care and treatment services for women living with HIV. Design: Systematic review. Data Sources: Four electronic databases were searched through 16 February 2017 using keywords for HIV, STIs and integration. Reference lists of included articles and other reviews were also screened. Review Methods: We included studies that compared women living with HIV who received STI services integrated into HIV care and treatment services with those who received HIV care and treatment services without integrated STI services or standard of care. Results: Of 170 articles identified, 3 studies reported in 4 articles were included. Two studies evaluated comprehensive care for people living with HIV in the UK; in both cases, quality and uptake of STI services seemed to improve following integration. The third study conducted a comparative case study across different models of care in Swaziland: two clinics integrated with sexual and reproductive health services (including STI services), and two stand-alone HIV clinics (without STI services). Coverage for Pap smears among women living with HIV was higher at the fully integrated site, but there was no significant difference in the prevalence of sexual health screening or advice on sexual health. Reported client satisfaction was generally higher at the stand-alone HIV clinic, and a diverse range of factors related to implementation of different care models challenged the notion that integrated services are always superior or desired.
Conclusion: While there is a limited evidence base for integrating STI services into HIV care and treatment services, existing studies indicate that integration is feasible and has the potential for positive outcomes. However, diverse population needs and health system factors must be considered when designing models of care to provide STI services to women living with HIV. abstract_id: PUBMED:33948140 Stigmatizing attitudes toward people living with HIV/AIDS (PLWHA) among primary health care providers in Kinta District, Perak. Background: Stigmatizing attitudes expressed by health care providers prevent some members of at-risk populations from accessing human immunodeficiency virus (HIV) screening and care. These attitudes contribute to continued transmission of the infection within the community, which burdens healthcare services and hampers the curtailment of the global HIV/acquired immunodeficiency syndrome (AIDS) pandemic. Objective: This study was conducted to identify stigmatizing attitudes toward people living with HIV/AIDS (PLWHA) and their determinants among primary health care providers in Kinta District, Perak. Methodology: A cross-sectional study was conducted in 36 primary care clinics in Kinta District, Perak. Using stratified random sampling, 365 primary health care providers were recruited into the study. A validated self-administered questionnaire was used to obtain sociodemographic data as well as information on the healthcare experiences of healthcare providers, their knowledge of HIV/AIDS, and attitudes toward PLWHA. Determinants were identified using multiple linear regression. Results: More than half of the respondents (54.1%) had never provided care to HIV/AIDS patients. A minority (29.9%) had received training on HIV/AIDS. This study shows that doctors (Coef. = -9.50, 95% CI: -18.93, -0.07, p = 0.048), respondents with HIV-positive relatives (Coef. = -5.61, 95% CI: -10.57, -0.65, p = 0.027), those who had provided care to HIV/AIDS patients (Coef. = -2.38, 95% CI: -4.31, -0.45, p = 0.016), and those with a higher knowledge score on HIV/AIDS (Coef. = -0.86, 95% CI: -1.59, -0.13, p = 0.021) were less likely to show stigmatizing attitudes toward PLWHA. Conclusion: The issue of stigmatizing attitudes toward PLWHA among primary health care providers needs to be addressed. This study finds that knowledge, profession, experiences with caring for PLWHA, gender, and having HIV-positive relatives are significant predictors of stigmatizing attitudes toward PLWHA among primary health care providers in Kinta District, Perak. Interventional programs to improve knowledge and awareness, as well as decrease stigma toward PLWHA, should be implemented among all health care providers, especially those who have no opportunity to provide direct care. Answer: The relationship between integrated HIV services and stigma is complex, and the evidence does not uniformly suggest that integrated services are less stigmatizing than stand-alone models of care. A comparative case study from Swaziland found that the hypothesis that stigma is higher at stand-alone sites did not hold true in a high prevalence setting. There were significant differences in perceived status exposure across models of care, but after adjustment for potential confounding, there was no difference in perceived status exposure between the fully stand-alone and the fully integrated clinic.
Qualitative data suggested that clients at HIV-only sites felt greater confidentiality and support from other HIV care clients (PUBMED:23336726). Another study in Swaziland found that while women at more integrated sites received more family planning and pregnancy counseling than stand-alone models, they received condoms less often, and there was no statistical evidence of difference in unmet family planning needs by model of care. Multiple contextual factors influenced integration practices, including provider de-skilling, norms of task-oriented routinized HIV care, and provider motivation challenges (PUBMED:25978632). A study on client satisfaction across different models of HIV care in Swaziland indicated that satisfaction was highest at the fully stand-alone clinic, and some dimensions of care were positively influenced by stand-alone care; some were not influenced by integration at all (PUBMED:23078548). In the context of adolescents living with HIV, a study found that mainstreaming youth-friendly services into existing health facilities could achieve similar contraceptive service provision, uptake, and outcomes for adolescents living with HIV as stand-alone youth clinics (PUBMED:33905822). Overall, the evidence suggests that the impact of integrated HIV services on stigma is not straightforward and may depend on various factors, including the setting, the population, and the specific services offered. It is important for policy-makers to consider these factors and ensure that service integration does not inadvertently increase stigma, particularly within partially integrated models of care (PUBMED:23336726).
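Several of the Swaziland studies summarized above report adjusted odds ratios (aORs) from multivariable logistic regression. As a hedged illustration of how such an aOR and its 95% confidence interval are derived, the Python sketch below fits a logistic model to synthetic data; statsmodels is assumed to be available, and all variable names and values are hypothetical, not taken from the studies.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 602  # mirrors the exit-survey sample size mentioned above

# Hypothetical covariates: clinic model (1 = stand-alone) and client age
stand_alone = rng.integers(0, 2, n)
age = rng.normal(35.0, 10.0, n)

# Synthetic binary outcome: perceived status exposure,
# made less likely at stand-alone sites by construction
linear = -0.5 - 1.1 * stand_alone + 0.02 * (age - 35.0)
exposed = rng.binomial(1, 1.0 / (1.0 + np.exp(-linear)))

X = sm.add_constant(np.column_stack([stand_alone, age]))
fit = sm.Logit(exposed, X).fit(disp=0)

aor = np.exp(fit.params[1])                   # adjusted OR for stand-alone care
ci_low, ci_high = np.exp(fit.conf_int()[1])   # 95% CI on the odds-ratio scale
print(f"aOR = {aor:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")

Exponentiating the fitted coefficient and its confidence bounds is how figures such as "aOR 0.45, 95% CI 0.25-0.81" are obtained, with the other covariates in the model providing the adjustment.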
Instruction: Does the 'Liverpool Care Pathway' facilitate an improvement in quality of care for dying cancer patients? Abstracts: abstract_id: PUBMED:23681186 Does the 'Liverpool Care Pathway' facilitate an improvement in quality of care for dying cancer patients? Background: The Liverpool Care Pathway for the Dying Patient (LCP) aims to transfer hospice principles of care for dying patients to other health-care sectors. This post-bereavement survey explored the LCP's effectiveness in improving quality of care for cancer patients. Methods: Postal self-completion questionnaires were sent to 778 next-of-kin to consecutive deceased patients who had died an 'expected' cancer death in a hospice and acute tertiary hospital. Results: Following exclusions (n=53), 255 of the 725 next-of-kin agreed to participate (35.2% response rate). Overall hospice participants reported the best quality of care, and hospital participants, for whom care was not supported by the LCP, reported the worst quality of care. Multivariate analysis showed the hospice was an independent predictor for patients being treated with dignity (OR 8.46) and receiving adequate family support (OR 7.18) (P<0.0001). Care supported by the LCP and the hospital specialist palliative care team were both associated with good family support, but neither was an independent predictor. Conclusions: From the bereaved relatives' perspective, within the hospital, the LCP is effective in improving specific aspects of care, such as symptom control for dying patients. Further improvement is required, however, to attain the hospice standard of care. abstract_id: PUBMED:24210451 Assessing the quality of care for dying patients from the bereaved relatives' perspective: further validation of "Evaluating care and health outcomes--for the dying". Context: Evaluating Care and Health Outcomes-for the Dying (ECHO-D) is a post-bereavement questionnaire that assesses quality of care for the dying and is linked with the Liverpool Care Pathway for the Dying Patient (LCP). Objectives: To further assess the validity and reliability of the ECHO-D, namely the construct validity, internal consistency, and test-retest reliability of key composite scales. Methods: Self-completion questionnaires were mailed to 778 next-of-kin of consecutive deceased patients who had died an "expected" cancer death in a hospice or acute tertiary hospital. For those willing to complete ECHO-D for a second time, another copy was sent a month later. Maximum likelihood factor analysis and Cronbach's alpha test were conducted for four key composite scales. Test-retest reliability was assessed using percentage agreement, Kappa statistic, and Spearman's correlation coefficient (ordinal data). Comparisons between hospice and hospital groups were conducted using one-way between-groups analysis of variance. Results: Following exclusions (n = 52), 255 of 726 next-of-kin agreed to participate (35.2% response rate). Maximum likelihood factor analysis showed a single factor for three of the scales, and all had good internal consistency (Cronbach's alpha >0.78). Barring two questions, all showed good or moderate stability over time. Overall, hospice participants reported the best quality of care, and hospital participants, where care was not supported by the LCP, reported the worst quality of care. Conclusion: These findings support ECHO-D as a valid and reliable instrument to assess quality of care for the dying and assess the effectiveness of interventions such as the LCP. 
abstract_id: PUBMED:24950519 Assessing the uptake of the Liverpool Care Pathway for dying patients: a systematic review. Improving the care of the dying is regarded as a national priority and current guidelines stipulate the need to provide holistic palliative care. Despite this, many dying patients and carers report low levels of comfort and satisfaction with care. Reasons include poor coordination of care, variability in communication and crisis-driven interventions. Integrated care pathways aim to support care coordination and open communication with patients and carers. One example is the Liverpool Care Pathway (LCP). Using the LCP entails assessment of eligibility criteria, which requires skills, knowledge and clinical judgement about its timing. This can be problematic, and little is known about actual uptake, characteristics of assessed patients and reasons for inclusion/exclusion. A comprehensive systematic review was conducted for papers published between January 1990 and July 2012 providing information on LCP uptake. Seventeen papers met the inclusion criteria. A total of 18 052 patients were placed on the LCP, in a variety of inpatient and primary care settings, and cancer and non-cancer diagnoses. 47.4% of dying patients identified were placed on the LCP. Although the LCP is widely recommended, it is only used for around half of dying patients. Reasons may include lack of knowledge, high staff turnover and concerns about applicability, particularly for unpredictable dying trajectories. The proportion of patients who meet the eligibility criteria and the reasons surrounding low uptake remain unclear. Research is urgently required to further quantify the variable use of the LCP, and to investigate whether alternative approaches should be developed for non-cancer groups. abstract_id: PUBMED:23652840 The Liverpool Care Pathway for cancer patients dying in hospital medical wards: a before-after cluster phase II trial of outcomes reported by family members. Background: Hospital is the most common place of cancer death but concerns regarding the quality of end-of-life care remain. Aim: Preliminary assessment of the effectiveness of the Liverpool Care Pathway on the quality of end-of-life care provided to adult cancer patients during their last week of life in hospital. Design: Uncontrolled before-after intervention cluster trial. Settings/participants: The trial was performed within four hospital wards participating in the pilot implementation of the Italian version of the Liverpool Care Pathway programme. All cancer patients who died in the hospital wards 2-4 months before and after the implementation of the Italian version of the Liverpool Care Pathway were identified. Two months after the patient's death, bereaved family members were interviewed using the Toolkit After-Death Family Interview (seven 0-100 scales assessing the quality of end-of-life care) and the Italian version of the Views of Informal Carers - Evaluation of Services (VOICES) (three items assessing pain, breathlessness and nausea-vomiting). Results: An interview was obtained for 79 family members, 46 (73.0%) before and 33 (68.8%) after implementation of the Italian version of the Liverpool Care Pathway.
Following implementation of the Italian version of the Liverpool Care Pathway, there was a significant improvement in the mean scores of four Toolkit scales: respect, kindness and dignity (+16.8; 95% confidence interval = 3.6-30.0; p = 0.015); family emotional support (+20.9; 95% confidence interval = 9.6-32.3; p < 0.001); family self-efficacy (+14.3; 95% confidence interval = 0.3-28.2; p = 0.049) and coordination of care (+14.3; 95% confidence interval = 4.2-24.3; p = 0.007). No significant improvement in symptom control was observed. Conclusions: These results provide the first robust data, collected from family members, of a preliminary clinically significant improvement in some aspects of quality of care after the implementation of the Italian version of the Liverpool Care Pathway programme. The poor effect on symptom control suggests areas for further innovation and development. abstract_id: PUBMED:25986540 Effectiveness of the Liverpool care pathway for the dying in residential care homes: An exploratory, controlled before-and-after study. Background: Clinical pathways aim to ensure that individuals receive appropriate evidence-based care and interventions, with the Liverpool Care Pathway for the Dying Patient focusing on end of life. However, controlled studies of the Liverpool Care Pathway for the Dying Patient, particularly outside of cancer settings, are lacking. Aim: To compare the effects of the Liverpool Care Pathway for the Dying Patient and usual care on patients' symptom distress and well-being during the last days of life, in residential care homes. Design: Exploratory, controlled before-and-after study. During a 15-month baseline, usual care was carried out in two areas. During the following 15 months, usual care continued in the control area, while residential care home staff implemented Liverpool Care Pathway for the Dying Patient use in the intervention area. The intervention was evaluated by family members completing retrospective symptom assessments after the patient's death, using the Edmonton Symptom Assessment System and Views of Informal Carers - Evaluation of Services. Settings/participants: Patients who died at all 19 residential care homes in one municipality in Sweden. Results: Shortness of breath (estimate = -2.46; 95% confidence interval = -4.43 to -0.49) and nausea (estimate = -1.83; 95% confidence interval = -3.12 to -0.54) were significantly reduced on the Edmonton Symptom Assessment System in patients in the intervention area compared to the control area. A statistically significant improvement in shortness of breath was also found on the Views of Informal Carers - Evaluation of Services item (estimate = -0.47; 95% confidence interval = -0.85 to -0.08). Conclusion: When implemented with adequate staff training and support, the Liverpool Care Pathway for the Dying Patient may be a useful tool for providing end-of-life care to elderly people in non-cancer settings.
This review investigates whether the LCP has been adapted and validated for use in nursing homes and for dying people with dementia. Methods: This systematic review is based on a systematic literature search of MEDLINE, CINAHL, EMBASE, and Web of Science. Results: The search identified 12 studies, but none describing an evidence-based adaptation of the LCP to nursing home patients and people with dementia. No studies described the LCP implementation procedure, including strategies for discontinuation of medications, procedures for nutrition and hydration, or the testing of such procedures in nursing homes. No effect studies addressing the assessment and treatment of pain and symptoms that include dying nursing home patients and people with dementia are available. Conclusion: The LCP has not been adapted to nursing home patients and people with dementia. Current evidence, i.e. studies investigating the validity and reliability in clinically relevant settings, is too limited for the LCP procedure to be recommended for the population at hand. There is a need to develop good practice in palliative medicine, Advance Care Planning, and disease-specific recommendations for people with dementia. abstract_id: PUBMED:21261949 The effectiveness of the Liverpool care pathway in improving end of life care for dying cancer patients in hospital. A cluster randomised trial. Background: Most cancer patients still die in hospital, mainly in medical wards. Many studies in different countries have shown the poor quality of end-of-life care delivery in hospitals. The Program "Liverpool Care Pathway for the dying patient" (LCP), developed in the UK to transfer the hospice model of care into hospitals and other care settings, is a complex intervention to improve the quality of end-of-life care. The results from qualitative and quantitative studies suggest that the LCP Program can significantly improve the quality of end-of-life care delivery in hospitals, but no randomised trial has been conducted to date. Methods And Design: This is a randomized cluster trial, stratified by regions and matched for assessment period. Pairs of eligible medical wards from different hospitals will be randomized to receive the LCP-I Program or no intervention until the end of the trial. The LCP-I Program will be implemented by a Palliative Care Unit. The assessment of the end-points will be performed for all cancer deaths occurring in the six months after the end of the LCP-I implementation in the experimental wards and, in the same period of time, in the matched control wards. The primary end-point is the overall quality of end-of-life care provided on the ward to dying cancer patients and their families, assessed using the Global Scale of the Italian version of the Toolkit "After-death Bereaved Family Member Interview". Discussion: This study can be interpreted as a Phase III trial according to the Medical Research Council Framework. In this study, the effectiveness of a fully defined intervention is assessed by comparing the distribution of the endpoints in the experimental and in the control arm. RESEARCH ID: RFPS-2006-6-341619 Trial Registration: ClinicalTrials.gov Identifier: NCT01081899.
abstract_id: PUBMED:20501514 Use of the proportion of patients dying on an End of Life Pathway as a quality marker: considerations for interpretation. The Department of Health as part of its End of Life Care Strategy has developed a set of markers as a quality of care proxy for adults at the end of life. 'The number/proportion of patients dying with the Liverpool Care Pathway (or equivalent) in place' is suggested as a quality metric for all care providers. A retrospective audit of uptake of use of the Liverpool Care Pathway (LCP) in an NHS hospital in the North of England showed that 39% of all patients who died had been placed on the LCP. Overall 58% of patients who died were judged to meet the criteria for LCP use. This represented 81% of patients dying with cancer as a primary cause compared to 51% of patients dying with non-cancer. This difference was statistically very significant. In the Trust under study, 67% of dying patients who fulfilled the LCP criteria were placed on the pathway. The results of this study suggest that a simple percentage of deaths on the pathway is an unsophisticated statistic which needs to be interpreted with care. In particular it will be influenced by the proportion of people dying with cancer or non-cancer. This should be considered particularly when presenting the results to the public and to healthcare stakeholders or when making comparisons between provider organizations. abstract_id: PUBMED:37946360 The quality of death and dying of patients with advanced cancer in hospice care in Uganda and Kenya. Objectives: Minimal information is available about the quality of dying and death in Uganda and Kenya, which are African leaders in palliative care. We investigated the quality of dying and death in patients with advanced cancer who had received hospice care in Uganda or Kenya. Methods: Observational study with bereaved caregivers of decedents (Uganda: n = 202; Kenya: n = 127) with advanced cancer who had received care from participating hospices in Uganda or Kenya. Participants completed the Quality of Dying and Death questionnaire and a measure of family satisfaction with cancer care (FAMCARE). Results: Quality of Dying and Death Preparation and Connectedness subscales were most frequently rated as good to almost perfect for patients in both countries (45.5% to 81.9%), while Symptom Control and Transcendence subscales were most frequently rated as intermediate (42.6% to 60.4%). However, 35.4% to 67.7% of caregivers rated overall quality of dying and overall quality of death as terrible to poor. Ugandan caregivers reported lower Preparation, Connectedness, and Transcendence (p < .001). Controlling for covariates, overall quality of dying was associated with better Symptom Control in both countries (p < .001) and Transcendence in Uganda (p = .010); overall quality of death, with greater Transcendence in Uganda (p = .004); and family satisfaction with care, with better Preparation in Uganda (p = .004). Significance Of Results: Findings indicate strengths in spiritual and social domains of the quality of dying and death in patients who received hospice care in Uganda and Kenya, but better symptom control is needed to improve this outcome in these countries. Answer: Yes, the 'Liverpool Care Pathway' (LCP) does facilitate an improvement in the quality of care for dying cancer patients. 
According to the research, the LCP aims to transfer hospice principles of care for dying patients to other healthcare sectors, and it has been associated with specific improvements in care. A post-bereavement survey indicated that the LCP is effective in improving aspects of care such as symptom control for dying patients within the hospital, although further improvement is required to attain the hospice standard of care (PUBMED:23681186). Additionally, the ECHO-D questionnaire, which is linked with the LCP, has been validated as a reliable instrument to assess the quality of care for the dying and assess the effectiveness of interventions such as the LCP (PUBMED:24210451). Furthermore, a before-after cluster phase II trial of outcomes reported by family members showed a significant improvement in the quality of end-of-life care after the implementation of the LCP in hospital medical wards, particularly in respect, kindness, dignity, family emotional support, family self-efficacy, and coordination of care (PUBMED:23652840). An exploratory controlled before-and-after study in residential care homes also found that the LCP may be a useful tool for providing end-of-life care of elderly people in non-cancer settings, with significant reductions in symptoms like shortness of breath and nausea (PUBMED:25986540). However, it is important to note that the LCP has been subject to criticism and was withdrawn in the UK in 2013. A systematic review highlighted that the LCP has not been adapted to nursing home patients and people with dementia, and current evidence is too limited for the LCP procedure to be recommended for these populations (PUBMED:28793905). Despite this, the LCP has been shown to be effective in improving end-of-life care for dying cancer patients in hospital settings when implemented properly (PUBMED:21261949).
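The Italian before-after trial cited above reports its effects as mean score differences with 95% confidence intervals (e.g., +16.8; 95% CI 3.6-30.0 for respect, kindness and dignity). As a rough sketch of how such an interval is computed, the Python fragment below builds a Welch t-interval for a difference in means on made-up 0-100 scores; the group sizes echo the 46 pre- and 33 post-implementation interviews, but the score values are placeholders, not study data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
before = rng.normal(60.0, 25.0, 46)  # hypothetical pre-implementation scores
after = rng.normal(77.0, 25.0, 33)   # hypothetical post-implementation scores

diff = after.mean() - before.mean()
var_a = after.var(ddof=1) / after.size
var_b = before.var(ddof=1) / before.size
se = np.sqrt(var_a + var_b)

# Welch-Satterthwaite degrees of freedom
df = (var_a + var_b) ** 2 / (var_a ** 2 / (after.size - 1) + var_b ** 2 / (before.size - 1))
t_crit = stats.t.ppf(0.975, df)
print(f"mean difference = {diff:+.1f}, 95% CI {diff - t_crit * se:.1f} to {diff + t_crit * se:.1f}")

The same recipe, applied scale by scale, yields the kind of interval estimates quoted in the abstract; an interval excluding zero corresponds to p < 0.05.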
Instruction: Is immunogenetic susceptibility to neuropsychiatric systemic lupus erythematosus (SLE) different from non-neuropsychiatric SLE? Abstracts: abstract_id: PUBMED:8774183 Is immunogenetic susceptibility to neuropsychiatric systemic lupus erythematosus (SLE) different from non-neuropsychiatric SLE? Objectives: To analyse the frequency of HLA class II antigens (DR and DQ) and lymphocytotoxic autoantibodies in patients with systemic lupus erythematosus (SLE) and subsets with or without neuropsychiatric involvement. Methods: Ninety-three patients with SLE (42 with neuropsychiatric features) were typed for HLA class II antigens and investigated for the presence of lymphocytotoxic autoantibodies by a complement-dependent microlymphocytotoxicity assay. A total of 191 controls of similar ethnic background were also typed for HLA antigens. Results: HLA-DR3 antigen was increased in the total group of patients with SLE (p = 0.003) and in the neuropsychiatric group (p = 0.002). HLA-DR4 antigen frequency was increased in non-neuropsychiatric patients (p = 0.001) and decreased in patients with neuropsychiatric SLE (p = 0.0005). Comparisons of HLA frequencies between subgroups of patients showed decreased HLA-DR4 (p < 0.0001) and increased HLA-DR9 and HLA-DQ2 antigens (p = 0.0008 and 0.005, respectively) in the neuropsychiatric group. The frequency of lymphocytotoxic autoantibodies was increased in neuropsychiatric patients with SLE having HLA-DR9 specificity (p = 0.04). Conclusion: HLA-DR4 may have a protective specificity for the development of neuropsychiatric features of SLE, whereas HLA-DR9, in addition to HLA-DR3, and the presence of lymphocytotoxic autoantibodies may predispose to neuropsychiatric abnormalities. abstract_id: PUBMED:37941870 Tonic-Clonic Seizure in Patient With SLE: Posterior Reversible Encephalopathy Syndrome, or a Neuropsychiatric Manifestation of SLE? Posterior reversible encephalopathy syndrome (PRES) is a clinically and radiologically diagnosed, reversible, sudden-onset disorder with many neurological symptoms. SLE is the most common cause of PRES among autoimmune diseases. Many factors, such as SLE activity, hypertension, hematological and renal diseases, lymphopenia, dyslipidemia, and immunosuppressive treatments, can trigger PRES in SLE. By presenting a hypertensive patient on immunosuppressive therapy who had just started haemodialysis treatment and developed generalised tonic-clonic seizures, we draw attention to the distinction between neuropsychiatric systemic lupus erythematosus (SLE) and PRES in a patient with SLE, and to the triggers for developing PRES in SLE. abstract_id: PUBMED:34667659 Neuropsychiatric Systemic Lupus Erythematosus: A 2021 Update on Diagnosis, Management, and Current Challenges. Patients with systemic lupus erythematosus (SLE) experience neuropsychiatric symptoms. The term neuropsychiatric SLE (NPSLE) is a generic term that refers to a series of neurological and psychiatric symptoms directly related to SLE. In approximately 30% of patients with neuropsychiatric symptoms, SLE is the primary cause (NPSLE), and symptoms manifest more frequently around SLE onset. Neurovascular and psychotic conditions can also lead to NPSLE. Pathogenesis of NPSLE is implicated in both neuroinflammatory and ischemic mechanisms, and it is associated with high morbidity and mortality.
After diagnosing and assigning causality, NPSLE treatment is individualized according to the type of neuropsychiatric manifestations, the type of the predominant pathway, the activity of SLE, and the severity of the clinical manifestations. There are many problems to be addressed with regard to the diagnosis and management of NPSLE. Controlled clinical trials provide limited guidance for management, and observational cohort studies support symptomatic, antithrombotic, and immunosuppressive agents. The purpose of this review was to provide a detailed and critical review of the literature on the pathophysiology, diagnosis, and treatment of NPSLE. This study aimed to identify shortcomings in diagnostic biomarkers, novel therapies against NPSLE, and additional research needs. abstract_id: PUBMED:33609799 Neuropsychiatric involvement in systemic lupus erythematosus: A review. The neuropsychiatric involvement in systemic lupus erythematosus (NPSLE) is a challenge for clinicians, both at a diagnostic and a therapeutic level. Although in 1999 the American College of Rheumatology (ACR) proposed a set of definitions for 19 NPSLE syndromes, with the intention of homogenizing the terminology for research purposes and clinical practice, the prevalence of NPSLE varies widely according to different series and is estimated to be between 37 and 95%. This is due to multiple factors such as the differing definitions used, the diverse design of the studies, type of population, race, type and severity of symptoms, and follow-up of the different cohorts of patients with SLE. In recent years, some authors have tried excluding minor neuropsychiatric manifestations in order to reduce this wide variation in the prevalence of NPSLE, since they are very prevalent in the general population; other authors have developed various models for the attribution of neuropsychiatric events to SLE that can assist clinicians in this diagnostic process; and finally, some authors developed and validated in 2014 a new algorithm based on the definitions of the ACR that includes the evaluation of the patient's lupus activity together with imaging techniques and the analysis of cerebrospinal fluid (CSF), with the aim of differentiating the true neuropsychiatric manifestations attributable to SLE. In 2010, the European League Against Rheumatism (EULAR) developed recommendations for the management of NPSLE. We found abundant literature published later where, in addition to the recommendations for the management of the 19 NPSLE syndromes defined by the ACR, additional recommendations are given for other neurological and/or psychiatric syndromes, conditions, and complications that have been associated with SLE in recent years. We review below the diagnostic and therapeutic management of the different entities. abstract_id: PUBMED:36440103 Anti-Ribosomal-P Antibody Association with Neuropsychiatric Lupus in Sudanese Patients Attending Rheumatology Clinic in Omdurman Military Hospital. Purpose: To assess and establish the relationship between neuropsychiatric systemic lupus erythematosus (NPSLE) involvement and serological biomarkers like antiribosomal-P antibodies. Patients And Methods: This is an analytical cross-sectional hospital-based study conducted on patients attending Omdurman Military Hospital from July 2019 to December 2019. A total of 90 patients were enrolled, 30 of whom had NPSLE compared with 60 SLE patients without NPSLE.
SLE diagnosis was established based on the revised SLICC criteria (presence of at least 4 criteria) for SLE classification, with neuropsychiatric manifestations defined based on the ACR nomenclature. Immunological testing was performed using the ELISA immune-enzymatic method, immunofluorescence, and Western immunoblotting. SPSS v21.0 software was utilised for data analysis. Results: NPSLE patients exhibited positive ANA in 96.7% vs 75% in non-NPSLE (P-value = 0.008), antiribosomal-P antibodies (46.7% vs 20%; P-value = 0.0001), anti-nucleosome antibodies (26.7% vs 5%; P-value = 0.005), and anti-histone antibodies (40% vs 20%; P-value = 0.04). ANA antibodies were significantly associated with neurological manifestations, being common in epilepsy (n = 9; 91%) and stroke (n = 8; 27.6%) (P-value < 0.001). Conclusion: Neuropsychiatric involvement in systemic lupus erythematosus exhibits variable clinical manifestations. Neuropsychiatric manifestations of SLE are strongly associated with the presence of anti-ribosomal P antibodies, which can be employed as a powerful diagnostic tool. abstract_id: PUBMED:38105443 Attribution of neuropsychiatric symptoms and prioritisation of evidence in the diagnosis of neuropsychiatric lupus: mixed methods analysis of patient and clinician perspectives from the international INSPIRE study. Objective: Neuropsychiatric lupus (NPSLE) is challenging to diagnose. Many neuropsychiatric symptoms, such as headache and hallucinations, cannot be verified by tests or clinician assessment. We investigated prioritisations of methods for diagnosing NPSLE and attributional views. Methods: Thematic and comparative analyses were used to investigate how clinicians prioritise sources of evidence from a 13-item list, and explore discordances in clinician and patient perspectives on attribution. Results: We identified high levels of variability and uncertainty in clinicians' assessments of neuropsychiatric symptoms in SLE patients. In attributional decisions, clinicians (surveys n = 400, interviews n = 50) ranked clinicians' assessments above diagnostic tests (many of which they reported were often unenlightening in NPSLE). Clinicians ranked patient opinion of disease activity last, and 46% of patients reported never/rarely having been asked if their SLE was flaring, despite experienced patients often having "attributional insight". SLE patients (surveys n = 676, interviews n = 27) estimated higher attributability of neuropsychiatric symptoms to the direct effects of SLE on the nervous system than clinicians did (p < 0.001 for all symptoms excluding mania), and 24% reported that their self-assessment of disease activity was never/rarely concordant with their clinicians'. Reports of misattributions were common, particularly of non-verifiable diffuse symptoms. Terminology differed between clinicians and influenced attribution estimates. Conclusion: NPSLE diagnostic tests and clinician assessments have numerous limitations, particularly in detecting diffuse neuropsychiatric symptoms that can be directly attributable and benefit from immunosuppression. Our findings suggest that incorporating patient attributional insights, although also subject to limitations, may improve attribution decision-making. Consensus regarding terminology and interpretations of "direct attributability" is required.
Background: Neuropsychiatric systemic lupus erythematosus (NP-SLE) is one of the major manifestations of lupus. However, the mechanisms involved in NP-SLE are still largely unknown. The abnormal activation of the type I IFN signalling pathway is involved in SLE pathogenesis and is linked to NP-SLE, but the effect of IFN-α on NP-SLE encephalopathy has not been systematically studied. Methods: An intravenous injection of Adv-IFN-α (10 mice, 10 × 10^9 vp) was administered to the IFN-α-treated group, and Adv-ctrl (10 mice, 10 × 10^9 vp) (ViGene Biosciences, China) was administered to the control group. Gene expression was determined by real-time quantitative polymerase chain reaction (RT-qPCR). Enzyme-linked immunosorbent assay (ELISA) was used to detect antibodies in the serum, and urinary protein levels were measured with a BCA Protein Assay kit. Haematoxylin-eosin (H&E) and periodic acid-Schiff (PAS)-light green staining were used for kidney histology. The elevated plus-maze test, novelty-suppressed feeding assay, open-field test, tail suspension test, social dominance tube test, three-chamber social interaction test, step-down passive avoidance test and novelty Y-maze task were used to assess behaviour. Results: In this study, we performed a series of behavioural tests to assess the neuropsychiatric phenotypes of IFN-α-treated NZB/NZW F1 mice and found that these mice developed a series of mental disorders such as anxiety-like phenotypes, depression-like phenotypes, deficits in sociability and cognitive impairments, which mimic the neuropsychiatric manifestations of NP-SLE, with a consistent onset and progression. Conclusions: Our research verified that IFN-α plays a critical role in NP-SLE and provides a comprehensive NP-SLE mouse model for dissecting the mechanisms of NP-SLE and developing novel therapies for intervention. abstract_id: PUBMED:36277536 Immunosuppressant Responsive Neuropsychiatric Systemic Lupus Erythematosus Manifestations Initially Diagnosed As Schizophrenia and Bipolar Disorder. Systemic lupus erythematosus (SLE) is a well-documented multi-system autoimmune disease with increased frequency noted in younger females and among minority populations. Disease-defining signs and symptoms can vary widely and involve multiple organ systems including the nervous system. Involvement of the nervous system, known as neuropsychiatric SLE (NPSLE), can present as manifestations consistent with central nervous system or peripheral nervous system pathologies, with the former including presentations of psychiatric illnesses. This case report reviews a 21-year-old Black female whose presentation was most notable for psychosis; other findings on examination and laboratory investigation resulted in a diagnosis of NPSLE. Our patient had a positive initial response to high-dose steroids with improvement of her psychosis and was planned for further treatment with the well-known chemotherapy and immunomodulatory agent, cyclophosphamide. abstract_id: PUBMED:30564579 Laboratory and Neuroimaging Biomarkers in Neuropsychiatric Systemic Lupus Erythematosus: Where Do We Stand, Where To Go? Systemic lupus erythematosus (SLE) is a chronic autoimmune disease characterized by multi-systemic involvement. Nervous system involvement in SLE leads to a series of uncommon and heterogeneous neuropsychiatric (NP) manifestations. Current knowledge on the underlying pathogenic processes and their subsequent pathophysiological changes leading to NP-SLE manifestations is incomplete.
Several putative laboratory biomarkers have been proposed as contributors to the genesis of SLE-related nervous system damage. Alongside the laboratory biomarkers, several neuroimaging tools have been shown to reflect the nature of tissue microstructural damage associated with SLE, and thus were suggested to contribute to the understanding of the pathophysiological changes and subsequently help in clinical decision making. However, the number of useful biomarkers in NP-SLE in clinical practice is disconcertingly modest. In some cases it is not clear whether the biomarker is truly involved in pathogenesis, or the result of non-specific pathophysiological changes in the nervous system (e.g., neuroinflammation), or whether it is the consequence of a concomitant underlying abnormality related to SLE activity. In order to improve the diagnosis of NP-SLE and provide better targeted care to these patients, there is still a need to develop and validate a range of biomarkers that reliably capture the different aspects of disease heterogeneity. This article critically reviews the current state of knowledge on laboratory and neuroimaging biomarkers in NP-SLE, discusses the factors that need to be addressed to make these biomarkers suitable for clinical application, and suggests potential future research paths to address important unmet needs in the NP-SLE field. abstract_id: PUBMED:28420059 Neuropsychiatric symptoms in systemic lupus erythematosus: impact on quality of life. Objective: To assess quality of life in patients with systemic lupus erythematosus (SLE) presenting with neuropsychiatric symptoms (neuropsychiatric SLE, NPSLE). Methods: Quality of life was assessed using the Short-Form 36-item Health Survey (SF-36) in patients visiting the Leiden NPSLE clinic at baseline and at follow-up. SF-36 subscales and summary scores were calculated and compared with quality of life of the general Dutch population and patients with other chronic diseases. Results: At baseline, quality of life was assessed in 248 SLE patients, of whom 98 had NPSLE (39.7%). Follow-up data were available for 104 patients (42%), of whom 64 had NPSLE (61.5%). SLE patients presenting neuropsychiatric symptoms showed a significantly reduced quality of life in all subscales of the SF-36. Quality of life at follow-up showed a significant improvement in physical functioning role (p = 0.001), social functioning (p = 0.007), vitality (p = 0.023), mental health (p = 0.014) and mental component score (p = 0.042) in patients with neuropsychiatric symptoms not attributed to SLE, but no significant improvement was seen in patients with NPSLE. Conclusion: Quality of life is significantly reduced in patients with SLE presenting neuropsychiatric symptoms compared with the general population and patients with other chronic diseases. Quality of life remains considerably impaired at follow-up. Our results illustrate the need for biopsychosocial care in patients with SLE and neuropsychiatric symptoms. Answer: Yes, immunogenetic susceptibility to neuropsychiatric systemic lupus erythematosus (SLE) appears to be different from non-neuropsychiatric SLE. A study analyzing the frequency of HLA class II antigens (DR and DQ) and lymphocytotoxic autoantibodies in patients with SLE found that the HLA-DR3 antigen was increased in patients with neuropsychiatric SLE, while the HLA-DR4 antigen frequency was decreased in this group compared to non-neuropsychiatric patients.
Additionally, the neuropsychiatric group showed increased frequencies of HLA-DR9 and HLA-DQ2 antigens. The presence of lymphocytotoxic autoantibodies was also increased in neuropsychiatric patients with SLE having HLA-DR9 specificity. These findings suggest that certain HLA class II antigens, such as HLA-DR3 and HLA-DR9, may predispose individuals to neuropsychiatric abnormalities in SLE, while HLA-DR4 may have a protective effect against the development of neuropsychiatric features in SLE (PUBMED:8774183).
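As a side note on the statistics involved: antigen-frequency comparisons of this kind reduce to 2x2 contingency tables. The sketch below (Python) shows how an odds ratio, a Wald 95% confidence interval, and an exact p-value could be computed; the counts are hypothetical placeholders, not data from PUBMED:8774183.

import math
from scipy.stats import fisher_exact

# Hypothetical 2x2 table (counts are illustrative, NOT from PUBMED:8774183):
# rows = NP-SLE / non-NP SLE, columns = HLA-DR3 positive / negative
a, b = 18, 12   # NP-SLE: DR3+, DR3-
c, d = 14, 36   # non-NP SLE: DR3+, DR3-

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)                  # Wald SE of ln(OR)
ci_lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
_, p_value = fisher_exact([[a, b], [c, d]])                   # exact p for small samples

print(f"OR = {odds_ratio:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f}), Fisher p = {p_value:.3f}")

The same computation applies to any of the antigen comparisons (DR3, DR4, DR9, DQ2) once group-wise carrier counts are available.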
Instruction: Urgent aortic valve replacement in octogenarians: does an 'urgent' status increase morbidity and mortality? Abstracts: abstract_id: PUBMED:22953676 Urgent aortic valve replacement in octogenarians: does an 'urgent' status increase morbidity and mortality? Background And Aim Of The Study: The study aim was to investigate the influence of 'urgent' status on short- and long-term mortality and morbidity in a consecutive series of octogenarian patients who underwent aortic valve replacement (AVR). Methods: Between January 2005 and March 2009, data were extracted for 141 consecutive patients aged over 80 years who underwent AVR either urgently (n = 49) or electively (n = 92). Early outcomes (in-hospital mortality, postoperative morbidities) and long-term outcomes were compared during a 39-month follow-up period. The outcomes were compared after adjustment for differences in baseline risk factors. Results: Patients in the 'urgent' group were older and had more comorbidities than those in the 'elective' group. There was a non-significant trend towards a higher in-hospital mortality in the 'urgent' group (10.2% versus 4.3%, p = 0.18). The 39-month survival rate was 81% in elective patients and 71% in urgent patients (p = 0.166). Conclusion: AVR can be performed in the elderly population with acceptable early and mid-term results, despite their urgent status. A large clinical trial is warranted to confirm the influence of an urgent status in octogenarian patients treated with surgical AVR. abstract_id: PUBMED:34626175 Impact of concomitant replacement of the ascending aorta in patients undergoing aortic valve replacement on operative morbidity and mortality. Objectives: The aim of this study was to evaluate the impact of concomitant ascending aortic replacement on operative morbidity and mortality in patients undergoing aortic valve replacement (AVR). Methods: We retrospectively analysed our institutional database for all patients undergoing elective isolated AVR and AVR with concomitant replacement of the ascending aorta between January 2009 and May 2020. Patients undergoing surgery for infective endocarditis or requiring hypothermic circulatory arrest were excluded. A 3:1 propensity matching was performed for 688 patients to compare isolated AVR (120 patients) with AVR + ascending aortic replacement (40 patients). Results: There were significant differences in median cardiopulmonary bypass (CPB) time [92.5 (75-114) vs 118.5 (104-131) min; P < 0.001], median aortic cross-clamp time [65.0 (51.5-78.5) vs 84.5 (77-94) min; P < 0.001] and median intensive care unit stay [1 (1-3) vs 2 (1-6) days; P < 0.01]. There was no significant difference in the use of intraoperative and postoperative blood products, re-exploration for bleeding, postoperative atrial fibrillation, acute renal failure, incidence of stroke, perioperative myocardial infarction and 30-day mortality. Conclusions: Concomitant replacement of the ascending aorta significantly prolongs CPB and aortic clamp times but does not increase operative morbidity and mortality. Therefore, replacement of a dilated ascending aorta appears to be the most durable and safest treatment option in patients undergoing AVR with an aneurysmatic ascending aorta. abstract_id: PUBMED:38023350 Socioeconomic factors and long-term mortality risk after surgical aortic valve replacement. Background: There is scarce knowledge about the association between socioeconomic status and mortality in patients undergoing surgical aortic valve replacement.
This study explores the associations between income, education and marital status, and long-term mortality risk. Methods: In this national registry-based observational cohort study, we included all 14,537 patients aged >18 years who underwent isolated surgical aortic valve replacement for aortic stenosis in Sweden in 1997-2020. Socioeconomic status and comorbidities were collected from three mandatory national registries. Cox regression models adjusted for patient characteristics and comorbidities were used to estimate the mortality risk. Results: Mortality risk was higher for patients in the lowest versus the highest income quintile (adjusted hazard ratio [aHR] 1.36, 95% confidence interval [CI]: 1.11-1.65), for patients with <10 years of education versus >12 years (aHR 1.20, 95% CI: 1.08-1.33), and for patients who were not married/cohabiting versus those who were (aHR 1.24, 95% CI: 1.04-1.48). Patients with the most unfavorable socioeconomic status (lowest income, shortest education, never married/cohabiting) had an adjusted median survival of 2.9 years less than patients with the most favorable socioeconomic status (14.6 years, 95% CI: 13.2-17.4 years vs. 11.7 years, 95% CI: 9.8-14.4 years). Conclusions: Low socioeconomic status in patients undergoing surgical aortic valve replacement is associated with shorter survival and an increased long-term adjusted mortality risk. These results emphasize the importance of identifying surgical aortic valve replacement patients with an unfavorable socioeconomic situation and ensuring sufficient post-discharge surveillance. abstract_id: PUBMED:23530025 Preoperative anaemia is a risk factor for mortality and morbidity following aortic valve surgery. Objectives: The impact of anaemia on patients undergoing aortic valve surgery has not been well studied. We sought to evaluate the effect of anaemia on early outcomes following aortic valve replacement (AVR). Methods: All patients undergoing non-emergent aortic valve surgery (n = 2698) with or without other concomitant procedures between 1997 and 2010 were included. Preoperative anaemia was defined as per World Health Organization guidelines as haemoglobin (Hb) < 130 g/l in men and Hb < 120 g/l in women. Multivariable analyses were used to determine the association between preoperative anaemia and postoperative outcomes. Results: The prevalence of preoperative anaemia was 32.2%. Patients with anaemia were older (71 ± 12 vs 66 ± 13 years, P < 0.001), more likely to have urgent surgery, recent MI, higher creatinine level and impaired preoperative left ventricular function. Overall unadjusted mortality was 2.8% in non-anaemic patients vs 8% in anaemic patients. Anaemic patients were more likely to require renal replacement therapy (11 vs 3%, P < 0.0001) and prolonged ventilation (24 vs 10%, P < 0.0001). Following multivariable adjustment, lower preoperative Hb was an independent predictor of mortality (odds ratio 1.19, 95% CI: 1.04-1.34, P = 0.007) and composite morbidity (odds ratio 1.36, 95% CI: 1.05-1.77, P = 0.02) after AVR. Mortality and composite morbidity were significantly higher with lower levels of preoperative Hb. Conclusions: Preoperative anaemia is a common finding in patients undergoing aortic valve surgery and is an important and potentially modifiable risk factor for postoperative morbidity and mortality. abstract_id: PUBMED:30879326 Sex-Specific Determinants of Outcomes After Transcatheter Aortic Valve Replacement.
Background: Women account for a large proportion of patients treated with transcatheter aortic valve replacement, yet there remain conflicting reports about the effect of sex on outcomes. Moreover, the sex-specific prevalence and prognostic impact of frailty have not been systematically studied in the context of transcatheter aortic valve replacement. Methods and Results: A preplanned analysis of the FRAILTY-AVR study (Frailty Aortic Valve Replacement) was performed to analyze the determinants of outcomes in older women and men undergoing transcatheter aortic valve replacement. FRAILTY-AVR was a multinational, prospective, observational cohort assembled at 14 institutions in North America and Europe from 2012 to 2017. Multivariable logistic regression models were stratified by sex and adjusted for covariates. Interaction between sex and each of these covariates was assessed. The primary outcome was 12-month mortality, and the secondary outcome was 1-month composite mortality or major morbidity. The cohort consisted of 340 women and 419 men. Women were older and had higher predicted risk of mortality. Women were more likely to have physical frailty traits, but not cognitive or psychosocial frailty traits, and global indices of frailty were similarly associated with adverse events regardless of sex. Women were more likely to require discharge to a rehabilitation facility, particularly those with physical frailty at baseline, although their functional status was similar to men at 12 months. The risk of 1-month mortality or major morbidity was greater in women, particularly those treated with larger prostheses. The risk of 12-month mortality was not greater in women, with the exception of those with pulmonary hypertension, in whom there was a significant interaction for increased mortality. Conclusions: The present study highlights sex-specific differences in older adults undergoing transcatheter aortic valve replacement and draws attention to the impact of physical frailty in women and their potential risk associated with oversized prostheses and pulmonary hypertension. abstract_id: PUBMED:32652835 Appraisal of urgent transcatheter aortic valve replacement. Urgent transcatheter aortic valve replacement (TAVR) is needed in unstable patients with severe aortic stenosis and acute decompensated heart failure who are refractory to initial medical management. The outcome of urgent TAVR is worse and heavily influenced by the severity of the clinical status prior to intervention. Careful patient evaluation, procedural planning, and postoperative management should be optimized to improve the clinical results. abstract_id: PUBMED:33693127 Risk Factors for Mortality and Morbidity of Surgical Aortic Valve Replacement for Aortic Stenosis - Risk Model From a Japan Cardiovascular Surgery Database. Background: The aim of this study was to determine adequate indication for transcatheter aortic valve replacement (TAVR). We analyzed risk factors of surgical aortic valve replacement (SAVR) not only for mortality, but also for morbidity, including long hospital stay (≥90 days) and patient activity at discharge, in patients who underwent SAVR for aortic stenosis (AS). Methods and Results: Using the Japan Adult Cardiovascular Surgery Database (JCVSD), 13,961 patients with or without coronary artery bypass grafting who underwent elective SAVR for AS were identified from January 2008 to December 2012. The hospital mortality rate was 3.1%.
The percentages of patients who had long hospital stay (≥90 days) and who had moderately or severely decreased activity at discharge (modified Rankin scale ≥4) were 2.9% and 6.5%, respectively. Eleven and 20 preoperative predictors of hospital mortality and morbidity, respectively, including long hospital stay and compromised status at discharge, were identified. Based on these risk factors, the risk model predicted hospital mortality (area under the curve [AUC], 0.732) and morbidity (AUC, 0.694). Conclusions: Using JCVSD, a risk model of SAVR was developed for AS. This model can identify patients at high risk not only for mortality, but also for morbidity, including long hospital stay and compromised status at discharge. abstract_id: PUBMED:33150396 Trend in morbidity and mortality in surgical aortic valve replacement: a retrospective, observational, single-centre study. Objectives: Continuous improvement in the management of patients undergoing surgical aortic valve replacement (SAVR) may have considerably enhanced surgical outcomes including in-hospital mortality and perioperative complications. We aimed to analyse in-hospital mortality and morbidity trends in patients undergoing SAVR in a single centre to provide insights for future benchmarking for transcatheter aortic valve implantation indications. Methods: This was a retrospective study of prospectively collected data from patients undergoing either isolated SAVR or combined with coronary artery bypass grafting (CABG) at the Bristol Heart Institute, UK, from January 2000 to December 2017. Baseline characteristics were extracted and analysed across 3 different eras (2000-2005, 2006-2011 and 2012-2017). Risk-adjusted time trend was obtained from univariate and multivariate logistic regression including all baseline characteristics. Results: A total of 2719 patients (63.2%) underwent isolated SAVR, and 1581 (36.8%) underwent combined CABG and SAVR during the study period. For patients undergoing SAVR, in-hospital mortality decreased from 2.9% in 2000-2005 to 0.7% in 2012-2017 (risk-adjusted time trend = 0.0001). Hospital mortality in patients aged 75-79 and ≥80 years decreased from 5.6% and 5.3% to 0.4% and 2.2%, respectively. Mortality after combined SAVR and CABG did not significantly decrease (from 3.9% in 2000-2005 to 3.5% in 2012-2017; risk-adjusted time trend = 0.62). However, in patients aged ≥80 years, index hospitalization mortality showed a decreasing non-significant trend from 9.8% to 4.8%. Conclusions: Our findings support the hypothesis that mortality and morbidity rates following SAVR have significantly improved over the years, including for patients at high risk. abstract_id: PUBMED:29746650 Age-dependent morbidity and mortality outcomes after surgical aortic valve replacement. Objectives: This study addressed the assumption of increased morbidity and mortality after surgical aortic valve replacement in patients older than 80 years with severe aortic stenosis. Methods: This prospective study was performed in consecutive patients referred for aortic valve replacement. The age-dependent change in cognitive and physical function, quality of life and rehospitalization and complication rates during the following year and 5-year all-cause mortality were documented. Results: A total of 351 patients underwent surgical aortic valve replacement. The death risk at 5 years was 10%, 20% and 34% in patients aged <70 years, 70-79 years and ≥80 years, respectively.
Patients aged 70-79 years and ≥80 years had a hazard ratio of 1.88 [95% confidence interval (95% CI) 0.92-3.83, P = 0.08] and 2.90 [95% CI 1.42-5.92, P = 0.003] for mortality, respectively, when compared with patients aged <70 years. The length of stay and rehospitalization rate during the following year were similar between the groups. Patients ≥80 years of age experienced more delirium and infections, whereas the risks of new pacemaker, transient ischaemic attack (TIA) or stroke, myocardial infarction and heart failure were comparable between the age groups. All groups exhibited reduced New York Heart Association class, improved physical quality of life and unchanged mental scores without any clinically significant Mini Mental Status reduction. Conclusions: Elderly patients (≥80 years of age) have important gains in health measures and satisfactory 5-year survival with an acceptable complication rate during the year following surgery. Active respiratory mobilization and the removal of an indwelling urethral catheter can prevent adverse effects, and measures should be taken to prevent delirium and confusion in elderly patients. Clinical Trial Registration: clinicaltrials.gov (NCT 01794832). abstract_id: PUBMED:31647180 Outcomes of urgent versus nonurgent transcatheter aortic valve replacement. Background: There is a paucity of data regarding the outcomes of transcatheter valve replacement (TAVR) performed in an urgent clinical setting. Methods: The Nationwide Inpatient Sample (NIS) database for the years 2011-2014 was used to identify hospitalizations for TAVR in the urgent setting. Using propensity score matching, we compared patients who underwent TAVR in nonurgent versus urgent settings. Results: Among 42,154 hospitalizations in which TAVR was performed, 10,114 (24%) underwent urgent TAVR. There was an uptrend in the rate of urgent TAVR procedures (p = .001). The rates of in-hospital mortality among this group did not change during the study period (p = .713). Nonurgent TAVR was associated with lower mortality (odds ratio [OR] = 0.78; 95% confidence interval [CI]: 0.69-0.89, p < .001) compared with urgent TAVR. Nonurgent TAVR was associated with lower incidence of cardiogenic shock (OR = 0.46; 95% CI: 0.40-0.53, p < .001), use of mechanical circulatory support devices (OR = 0.69; 95% CI: 0.59-0.82, p < .001), AKI (OR = 0.60; 95% CI: 0.56-0.64, p < .001), hemodialysis (OR = 0.67; 95% CI: 0.56-0.80, p < .001), major bleeding (OR = 0.94; 95% CI: 0.89-0.99, p = .045) and shorter length of stay (7.08 ± 6.317 vs. 12.39 ± 9.737 days, p < .001). There was no difference in acute stroke (OR = 0.96; 95% CI: 0.81-1.14, p = .636), vascular complications (OR = 1.07; 95% CI: 0.89-1.29, p = .492), and pacemaker insertions (OR = 0.92; 95% CI: 0.84-1.01, p = .067) between both groups. Among those undergoing urgent TAVR, subgroup analysis showed higher mortality in patients ≤80 years (p = .033), in women (p < .001), and in those with chronic kidney disease (p = .001), heart failure (p < .001), or liver disease (p = .003). Conclusion: In this large nationwide analysis, almost a quarter of TAVR procedures were performed in the urgent setting. Although urgent TAVR was associated with higher mortality and increased complications compared with nonurgent TAVR, the absolute difference in in-hospital mortality was not remarkably higher. Thus, urgent TAVR can be considered a reasonable approach when indicated.
Answer: The question of whether an 'urgent' status increases morbidity and mortality in octogenarians undergoing aortic valve replacement (AVR) is addressed in the study by PUBMED:22953676. The study compared outcomes between octogenarians who underwent urgent AVR and those who had the procedure electively. The results indicated that although the 'urgent' group was older and had more comorbidities, there was only a non-significant trend towards higher in-hospital mortality in the 'urgent' group compared to the 'elective' group (10.2% versus 4.3%, p = 0.18). Furthermore, the 39-month survival rate was 81% in elective patients and 71% in urgent patients (p = 0.166). The conclusion drawn from this study is that AVR can be performed in the elderly population with acceptable early and mid-term results, despite their urgent status. Additionally, other studies have explored factors that may influence outcomes in patients undergoing AVR. For instance, PUBMED:23530025 found that preoperative anaemia is a risk factor for mortality and morbidity following AVR. PUBMED:30879326 highlighted sex-specific determinants of outcomes after transcatheter aortic valve replacement (TAVR), noting that physical frailty in women and the potential risk associated with oversized prostheses and pulmonary hypertension are important considerations. PUBMED:32652835 discussed the appraisal of urgent TAVR, emphasizing that the outcome is worse and heavily influenced by the severity of the clinical status prior to intervention. Moreover, PUBMED:33693127 developed a risk model for surgical AVR that can identify patients at high risk not only for mortality but also for morbidity, including long hospital stays and status at discharge. PUBMED:33150396 reported that mortality and morbidity rates following SAVR have significantly improved over the years, including for patients at high risk. PUBMED:29746650 found that elderly patients (≥80 years of age) have important gains in health measures and satisfactory 5-year survival with an acceptable complications rate following surgery. Lastly, PUBMED:31647180 compared urgent TAVR with nonurgent TAVR and found that although urgent TAVR was associated with higher mortality and increased complications, the absolute difference in in-hospital mortality was not remarkably higher, suggesting that urgent TAVR can be considered a reasonable approach when indicated.
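Several of the abstracts above quote adjusted hazard ratios from Cox proportional hazards models. The following minimal sketch shows how such an estimate is produced; the cohort is synthetic, and the covariates ("urgent", "age") and effect sizes are assumptions for illustration, not values from any cited study (Python, using the lifelines package).

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 200

# Synthetic octogenarian cohort; 'urgent' and 'age' are placeholder covariates
df = pd.DataFrame({
    "urgent": rng.integers(0, 2, n),
    "age": rng.normal(83, 3, n),
})

# Simulate survival with a higher hazard for urgent status (assumed effect size)
hazard = 0.02 * np.exp(0.4 * df["urgent"] + 0.03 * (df["age"] - 83))
df["months"] = rng.exponential(1.0 / hazard)
df["died"] = (df["months"] <= 39).astype(int)   # administrative censoring at 39 months
df.loc[df["died"] == 0, "months"] = 39.0

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])

The exp(coef) value printed for "urgent" plays the role of the adjusted hazard ratios reported in the abstracts.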
Instruction: Does decarboxylation make 2,5-dihydroxybenzoic acid special in matrix-assisted laser desorption/ionization? Abstracts: abstract_id: PUBMED:24711271 Does decarboxylation make 2,5-dihydroxybenzoic acid special in matrix-assisted laser desorption/ionization? Rationale: Among the six positional isomers of dihydroxybenzoic acid (DHB), 2,5-DHB is a more favorable matrix for use in matrix-assisted laser desorption/ionization (MALDI) than the other isomers because of its high ion-generation efficiency at 337 and 355 nm. The generation of hydroquinone or p-benzoquinone through the decarboxylation of 2,5-DHB has been suggested to play a crucial role in the ion-generation efficiency of 2,5-DHB. Methods: The mass spectra of desorbed neutrals generated from MALDI were measured using electron impact ionization (70 eV) with a quadrupole mass spectrometer, and vacuum ultraviolet (118 nm) photoionization with a time-of-flight mass spectrometer. The mass spectra of desorbed ions generated from MALDI were investigated using a time-of-flight mass spectrometer. The dissociation barrier height and dissociation rate of decarboxylation were calculated by an ab initio method and RRKM theory. Results: Decarboxylation of neutral 2,5-DHB and 2,5-DHB cations was not observed. Theoretical calculations indicated that decarboxylation of neutral 2,5-DHB and 2,5-DHB cations is too slow to occur. Conclusions: The high ion-generation efficiency of the 2,5-DHB matrix at 337 and 355 nm is not related to decarboxylation. abstract_id: PUBMED:18059041 Matrix-assisted laser desorption/ionization mechanism study with dihydroxybenzoic acid isomers as matrices. Desorption and ionization efficiencies of matrix-assisted laser desorption/ionization (MALDI) for various biomolecules with different dihydroxybenzoic acid isomers were studied. No clear relationships were observed between MALDI biomolecule signals and gas-phase basicity, proton affinity, or ionization potential. This indicates that the gas-phase protonation mechanism is not adequate to explain the observed results. abstract_id: PUBMED:26757075 MALDI ionization mechanisms investigated by comparison of isomers of dihydroxybenzoic acid. Matrix-assisted laser desorption/ionization (MALDI) ion formation mechanisms were investigated by comparison of isomers of dihydroxybenzoic acid (DHB). These exhibit substantially different MALDI performance, the basis for which was not previously understood. Luminescence decay curves are used here to estimate excited electronic state properties relevant for the coupled chemical and physical dynamics (CPCD) model. With these estimates, the CPCD predictions for relative total ion and analyte ion yields are in good agreement with the data for the DHB isomers. Predictions of a thermal equilibrium model were also compared and found to be incompatible with the data. abstract_id: PUBMED:30769830 Complementarity of Matrix- and Nanostructure-Assisted Laser Desorption/Ionization Approaches. In recent years, matrix-assisted laser desorption/ionization (MALDI) has become the main tool for the study of biological macromolecules, such as protein nano-machines, especially in the determination of their molecular masses, structure, and post-translational modifications. A key role in the classical process of desorption and ionization of the sample is played by a matrix, usually a low-molecular weight weak organic acid.
Unfortunately, the interpretation of mass spectra in the mass range below m/z 500 is difficult, and hence the analysis of low molecular weight compounds in a matrix-assisted system is an analytical challenge. Replacing the classical matrix with nanomaterials, e.g., silver nanoparticles, allows improvement of the selectivity and sensitivity of spectrometric measurement of biologically important small molecules. Nowadays, the nanostructure-assisted laser desorption/ionization (NALDI) approach complements the classic MALDI in the field of modern bioanalytics. In particular, the aim of this work is to review the recent advances in MALDI and NALDI approaches. abstract_id: PUBMED:25905868 Chemical aspects of the primary ionization mechanisms in matrix-assisted laser desorption ionization. It has been proposed that the primary ionization mechanism occurring in matrix-assisted laser desorption ionization (MALDI) experiments originates from the presence, in the solid-state matrix-analyte sample, of matrix dimers. These species are formed by the interaction of carboxylic groups present in the matrix molecules with the formation of strong hydrogen bonds. Theoretical calculations proved that the laser irradiation of these structures leads to one or two H-bridge cleavages, giving rise to an "open" dimer structure or to disproportionation with the formation of MH⁺ and [M-H]⁻ species. The ions so formed can be considered highly effective in their reaction with analyte molecules, leading to their protonation (or deprotonation). To achieve further evidence for these proposals, in the present study the energetics of the reactions of ions from different aromatic carboxylic acids with two amino acids (glycine and lysine) and three multipeptides (gly-gly, gly-gly-gly and gly-gly-gly-gly) was investigated. The lowest ∆G values were obtained for 2,5-dihydroxybenzoic acid, widely employed as the MALDI matrix. Also, for p-nitrobenzoic acid the reaction is slightly exothermic, while for the other aromatic carboxylic acid derivatives positive ∆G values are found. abstract_id: PUBMED:14574771 Matrix-assisted laser desorption ionization. Since its invention, matrix-assisted laser desorption ionization (MALDI) has found wide application in mass spectrometry of high molecular weight compounds such as synthetic polymers and biopolymers. Despite widespread application of MALDI, the fundamental processes of ion formation and desorption are still poorly understood. The chemistry of the MALDI process, occurring both during sample preparation and during ionization, is reflected in the mass spectrum. As the MALDI technique now stands, a low concentration of analyte molecules, which usually exhibit only moderate absorption per molecule, is embedded in matrix crystals consisting of a small, highly absorbing species. In this manner the efficient and controllable energy transfer is retained while the analyte molecules are separated from excessive energy that would lead to their decomposition. The matrix is believed to serve two major functions: absorption of energy from the laser light and the isolation of analyte molecules from each other. There are 3 major methods for the preparation of samples for analysis which are quite quick and simple: dried droplet, surface and sandwich preparation. Experiments with pH indicator dyes serve as proof that the analyte's charge state in the matrix crystals is the same as in solution.
Upon laser desorption, a sudden and explosive phase transition occurs and a dense plume of desorbed material is formed. The initial velocity of analyte ions in the plume depends only on the matrix used. Initial species formed as a result of laser desorption are tiny clusters. They consist of matrix, analyte, and other ionic species embedded in the matrix crystals, all held together by hydrogen bonds and coulombic interactions. The first essential charging and thus ionization process is the statistical occurrence of clusters with a deficit or excess of anions or cations. Very small initial clusters are likely to be highly charged. Highly charged initial clusters cannot survive in the matrix plume and their charge drops. Clusters shrink by evaporation of neutral molecules. This paper presents only those cases leading to analyte ions and compares MALDI and the electrospray ionization technique. abstract_id: PUBMED:10847609 Observation of tetrahydrofolylpolyglutamic acid in bacteria cells by matrix-assisted laser desorption/ionization mass spectrometry. Tetrahydrofolylpolyglutamic acid in whole bacteria cells and cell lysates is analyzed by matrix-assisted laser desorption/ionization mass spectrometry. The speed, mass information, and tolerance to impurities of this technique make it ideal for monitoring the glutamation levels of folic acid in biological systems. Folylpolyglutamic acid is observed in a few strains of E. coli and two species of Staphylococcus bacteria. The effects of growth time, growth media, and the addition of methotrexate, a dihydrofolate reductase inhibitor, are also studied. abstract_id: PUBMED:8694245 Matrix-assisted laser desorption ionization of neutral clusters composed of matrix and analyte molecules. A new version of the matrix-assisted laser desorption ionization experiment is described. Composite clusters created by laser desorption of a mixture of analyte and matrix material were injected into a supersonic jet and then analyzed by multiphoton ionization time-of-flight mass spectrometry. Selective two-photon ionization of matrix molecules of a cluster component caused their dissociation, followed by intercluster proton or cation transfer to an analyte that is transparent at the wavelength of the ionizing radiation. The intercluster charge transfer reactions are the only pathway for analyte ion formation under these conditions, as can be concluded from analysis of the mass spectra of a variety of substances. abstract_id: PUBMED:11507754 Two-photon ionization thresholds of matrix-assisted laser desorption/ionization matrix clusters. Direct two-photon ionization of the matrix has been considered a likely primary ionization mechanism in matrix-assisted laser desorption/ionization (MALDI) mass spectrometry. This mechanism requires that the vertical ionization threshold of matrix materials be below twice the laser photon energy. Because dimers and larger aggregates may be numerous in the early stages of the MALDI plume expansion, their ionization thresholds are important as well. We have used two-color two-photon ionization to determine the ionization thresholds of jet-cooled clusters of an important matrix, 2,5-dihydroxybenzoic acid (DHB), and mixed clusters with the thermal decomposition product of DHB, hydroquinone. The thresholds of the clusters were reduced by only a few tenths of an eV compared to the monomers, to an apparent limit of 7.82 eV for pure DHB clusters.
None of the investigated clusters can be directly ionized by two nitrogen laser photons (7.36 eV), and the ionization efficiency at the thresholds is low. abstract_id: PUBMED:11746906 Effect of impurities on the matrix-assisted laser desorption/ionization mass spectra of insulin. The effect of impurity (sodium sulfate Na₂SO₄; copper sulfate CuSO₄; potassium ferrocyanide K₄Fe(CN)₆; and triammonium citrate (NH₄)₃C₆H₅O₇) concentration on the positive matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) spectra of insulin is described. 2,5-Dihydroxybenzoic acid was used as a desorption matrix. The estimated maximum concentrations of tested salts for which no insulin signal was observed were determined. Answer: No, decarboxylation does not make 2,5-dihydroxybenzoic acid (2,5-DHB) special in matrix-assisted laser desorption/ionization (MALDI). Although 2,5-DHB is a more favorable matrix for use in MALDI due to its high ion-generation efficiency at 337 and 355 nm, the generation of hydroquinone or p-benzoquinone through the decarboxylation of 2,5-DHB was suggested to play a crucial role in this efficiency. However, studies have shown that decarboxylation of neutral 2,5-DHB and 2,5-DHB cations was not observed. Theoretical calculations indicated that decarboxylation of neutral 2,5-DHB and 2,5-DHB cations is too slow to occur, leading to the conclusion that the high ion-generation efficiency of the 2,5-DHB matrix at 337 and 355 nm is not related to decarboxylation (PUBMED:24711271).
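The photon-energy arithmetic behind this conclusion follows from E = hc/λ, with hc ≈ 1239.84 eV·nm. A quick check (Python) reproduces the 7.36 eV value quoted for two nitrogen-laser photons and shows that 355 nm photons fall even further below the reported ~7.82 eV cluster ionization threshold.

HC_EV_NM = 1239.84  # hc in eV*nm, so E[eV] = 1239.84 / wavelength[nm]

def photon_energy_ev(wavelength_nm):
    return HC_EV_NM / wavelength_nm

threshold_ev = 7.82  # reported apparent two-photon ionization limit for DHB clusters

for nm in (337.0, 355.0):
    one = photon_energy_ev(nm)
    two = 2 * one
    verdict = "below" if two < threshold_ev else "at/above"
    print(f"{nm:.0f} nm: 1 photon = {one:.2f} eV, 2 photons = {two:.2f} eV "
          f"({verdict} the {threshold_ev} eV threshold)")

For 337 nm this prints 3.68 eV per photon and 7.36 eV for two photons, matching the figure quoted in the abstract.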
Instruction: Does abciximab improve the prognosis of diabetics after percutaneous coronary intervention? Abstracts: abstract_id: PUBMED:12199976 Does abciximab improve the prognosis of diabetics after percutaneous coronary intervention? Introduction And Objectives: It is known that the outcome of percutaneous coronary intervention is worse in diabetics than in non-diabetics. The aim of our study was to determine whether abciximab therapy could improve clinical outcome in an unselected diabetic population that underwent percutaneous coronary interventions. Material And Methods: We retrospectively analyzed 198 diabetic patients who underwent PTCA from January 1997 to January 2000. Seventy-three patients (36.7%) were treated with abciximab and the remaining 125 patients (63.3%) did not receive abciximab. The mean follow-up was 12.6 months. The events considered were death, non-fatal myocardial infarction, any revascularization procedure (including the target vessel), and hospital admission for unstable angina. Results: Patients who received abciximab had more frequent previous myocardial infarction (67.1 vs. 52.8%; p = 0.04), worse left ventricular function (0.53 vs. 0.59; p = 0.02), more frequent angiographic thrombus (67.1 vs. 36.8%; p < 0.001), more complex lesions (B2/C) (76.4 vs. 55.8%; p = 0.004), and less frequent location in the left anterior descending artery (34.2 vs. 60.8%; p = 0.002). The indication for PTCA in patients who received abciximab was most often related to myocardial infarction. There were no differences between the groups in sex, age and distribution of diabetes treatment. Events were more frequent in diabetics not treated with abciximab than in those who were treated with abciximab (38 vs. 22%; p < 0.037). The patients not treated with abciximab more frequently required target vessel revascularization (22.7 vs. 7.2%; p < 0.007). There were no significant differences in the frequency of death or non-fatal myocardial infarction, but hospital readmissions for unstable angina were significantly more frequent in diabetics not treated with abciximab (29.1 vs. 15.9%; p = 0.045). Multivariate analysis identified abciximab as a predictor of the absence of complications during follow-up (OR: 0.45; p = 0.03). Conclusion: Abciximab treatment seems to reduce events in unselected diabetic patients undergoing percutaneous coronary intervention, particularly target vessel revascularization. abstract_id: PUBMED:12668857 Why diabetics are at risk in percutaneous coronary intervention and the appropriate management of diabetics in interventional cardiology. The increasing prevalence of diabetes mellitus in the U.S., particularly among younger patients, will place significant additional demands on an already strained healthcare delivery system. The metabolic and hormonal milieu created by diabetes mellitus fosters the development of systemic atherosclerosis, and mortality in these patients is driven largely by complications of cardiovascular disease. Unfortunately, diabetic pathophysiology poses unique challenges to attaining successful coronary revascularization, whether it is by coronary artery bypass grafting (CABG) or percutaneous coronary intervention (PCI).
In determining the optimal treatment strategy for a diabetic patient with coronary atherosclerotic disease, it is imperative that the treating physician fully understands the following: the inherent limitations to revascularization in diabetics, the pharmacologic therapies of proven benefit in diabetics with atherosclerotic disease, the currently available comparative data of CABG versus PCI in diabetics, and the ongoing medical, surgical, and interventional advances which may render earlier data obsolete. abstract_id: PUBMED:10732889 Abciximab reduces mortality in diabetics following percutaneous coronary intervention. Objectives: We sought to determine whether abciximab therapy at the time of percutaneous coronary intervention (PCI) would favorably affect one-year mortality in patients with diabetes. Background: Diabetics are known to have increased late mortality following PCI. Methods: Data from three placebo-controlled trials of PCI, EPIC, EPILOG, and EPISTENT, were pooled. The one-year mortality rate for patients with a clinical diagnosis of diabetes mellitus was compared with the rate for nondiabetic patients treated with either abciximab or placebo. Results: In the 1,462 diabetic patients, abciximab decreased the mortality from 4.5% to 2.5%, p = 0.031, and in the 5,072 nondiabetic patients, from 2.6% to 1.9%, p = 0.099. In patients with the clinical syndrome of insulin resistance--defined as diabetes, hypertension, and obesity--mortality was reduced by abciximab treatment from 5.1% to 2.3%, p = 0.044. The beneficial reduction in mortality with abciximab use in diabetics classified as insulin-requiring was from 8.1% to 4.2%, p = 0.073. Mortality in diabetics who underwent multivessel intervention was reduced from 7.7% to 0.9% with use of abciximab, p = 0.018. In a Cox proportional hazards survival model, the risk ratio for mortality with abciximab use compared with placebo was 0.642 (95% confidence interval 0.458-0.900, p = 0.010). Conclusions: Abciximab decreases the mortality of diabetic patients to the level of placebo-treated nondiabetic patients. This beneficial effect is noteworthy in those diabetic patients who are also hypertensive and obese and in diabetics undergoing multivessel intervention. Besides its potential role in reducing repeat intervention for stented diabetic patients, abciximab therapy should be strongly considered in diabetic patients undergoing PCI to improve their survival. abstract_id: PUBMED:27014357 Spontaneous Bleeding from Internal Pudendal Artery associated with Abciximab after Primary Percutaneous Coronary Intervention: Successful Treatment with Percutaneous Gel-Foam Embolization. We describe a case of spontaneous bleeding from a branch of the right internal pudendal artery that resulted in massive scrotal swelling in a patient who had undergone primary percutaneous coronary intervention with the use of abciximab concurrent with conventional anticoagulation and dual antiplatelet therapies for the treatment of acute ST-segment elevation myocardial infarction. This unusual complication was promptly identified by percutaneous peripheral arteriography and successfully treated with gel-foam embolization. abstract_id: PUBMED:16226815 Late outcomes after intervention of vein grafts in diabetics with abciximab use. Objectives: We sought to evaluate late clinical outcomes following elective percutaneous coronary interventions for saphenous vein graft with planned abciximab use in diabetics.
Background: Diabetes adversely affects long-term outcomes after percutaneous coronary intervention including percutaneous coronary intervention of saphenous vein grafts. Glycoprotein IIb/IIIa inhibitors reduce adverse outcomes of native vessel percutaneous coronary intervention in diabetics; however, their effects in diabetic patients undergoing percutaneous coronary interventions of saphenous vein grafts are not known. Methods: We evaluated 509 elective percutaneous coronary interventions of saphenous vein grafts. Glycoprotein IIb/IIIa inhibitors were used in 99% of the percutaneous coronary interventions. Distal protection devices were not used. Clinical follow-up (mean 30±20 months) was available in 369/397 (93%). Results: Overall, procedural success rate was 97% and was similar for diabetics and non-diabetics. Procedural major adverse cardiac events occurred in 4.8% of diabetics and 4.1% of non-diabetics (p = NS). Late outcomes were similar for diabetics and non-diabetics, with major adverse cardiac event rates of 57.6% and 54.9%, respectively (p = NS). Multivariate logistic regression identified graft age and the presence of thrombus as predictors of increased late major adverse cardiac events. Conclusions: Following routine use of abciximab, late outcomes after percutaneous coronary interventions of saphenous vein grafts were similar for diabetics and non-diabetics, suggesting that the historically observed adverse effect of diabetes on late outcomes after percutaneous coronary interventions of saphenous vein grafts may be attenuated by glycoprotein IIb/IIIa use. Selective use of glycoprotein IIb/IIIa inhibitor should be considered for diabetics undergoing percutaneous coronary interventions of saphenous vein grafts. abstract_id: PUBMED:25518758 Intraprocedural thrombus formation in the left main tract during primary percutaneous coronary intervention. A 67-year-old male presented with acute myocardial infarction. Emergency coronary angiography demonstrated subocclusive stenosis in the proximal Left Anterior Descending artery (LAD). Primary Percutaneous Coronary Intervention (PCI) was complicated by intraprocedural thrombosis in the distal Left Main Tract (LMT) following implantation of a stent in the mid LAD. The thrombus was successfully managed with heparin and quadruple antiplatelet therapy (abciximab, aspirin, clopidogrel, and cilostazol); after several attempts at thrombectomy, adequate distal flow was achieved. The lesion in the proximal LAD was successfully treated using a kissing stent technique in the second stage. abstract_id: PUBMED:15246898 Relation of an elevated white blood cell count after percutaneous coronary intervention to long-term mortality. Increased inflammatory markers are associated with a poor prognosis after percutaneous coronary intervention. Leukocytes play a key role in inflammation, and an increase in white blood cell (WBC) counts is a nonspecific marker of inflammation. In patients undergoing percutaneous coronary intervention, baseline WBC counts independently predict long-term mortality. In a pooled cohort of patients from the Evaluation of c7E3 for the Prevention of Ischemic Complications (EPIC), the Evaluation in PTCA to Improve Long-term Outcome with abciximab Glycoprotein IIb/IIIa blockade (EPILOG), and Evaluation of Platelet IIb/IIIa inhibitor for STENTing (EPISTENT) trials, postprocedural WBC counts were also found to be an independent predictor of long-term mortality.
abstract_id: PUBMED:23394547 Intracoronary injection of glycoprotein IIb/IIIa, abciximab, as adjuvant therapy in primary coronary intervention. Acute coronary syndromes and, in particular, ST-segment elevation myocardial infarction are the principal causes of mortality and morbidity in the industrialized countries. The management of acute myocardial infarction is much debated in the literature; primary percutaneous coronary intervention is the treatment of choice. In recent years there has been an increasing interest in the concept of adjunctive pharmacological therapy to improve outcomes in primary percutaneous coronary intervention. In the literature, randomized trials of intravenous or, more recently, intracoronary injection of glycoprotein IIb/IIIa inhibitors have provided conflicting results, with no definitive evidence for efficacy. The aim of this report is to review the evidence to date on the role of intracoronary injection of abciximab during primary percutaneous intervention in the setting of acute myocardial infarction. abstract_id: PUBMED:22694809 Risk score comparison of outcomes in patients presenting with ST-elevation myocardial infarction treated with percutaneous coronary intervention. Background: Accurate risk stratification is important in the management of acute coronary syndrome (ACS) patients. Several risk scores have been developed to stratify patients hospitalized with ACS. Aim: To compare the prognostic value of three risk scores used to determine the risk for secondary events in patients diagnosed with ST-elevation myocardial infarction (STEMI) treated with primary percutaneous coronary intervention (PCI). Methods: A retrospective analysis of patients with an ACS experiencing a STEMI treated with PCI presenting to a Kaiser Permanente Foundation Hospital in Northern California from January 2007 to January 2008 (n=186). For each patient, the Thrombolysis in Myocardial Infarction (TIMI), Controlled Abciximab and Device Investigation to Lower Late Angioplasty complications (CADILLAC), and Global Registry for Acute Coronary events (GRACE) risk scores were calculated and predictive ability was analysed using the area under the curve or C statistic method. Results: The CADILLAC, TIMI, and GRACE risk scores all had high predictive accuracy for a major cardiac event in hospital, with C statistics ranging from 0.63 to 0.718, with the CADILLAC risk score superior. For non-major cardiac events in hospital, the three risk scores were similar in accuracy for predictability, with C statistics ranging from 0.62 to 0.66. Conclusions: Risk stratification of patients with STEMI undergoing primary percutaneous coronary intervention can be done using all three risk scores. In our analysis, the CADILLAC and GRACE scores were more predictive for major cardiac events in hospital, and CADILLAC and TIMI for major cardiac events at 1 year. abstract_id: PUBMED:22343257 Impact of abciximab on prognosis in diabetic patients undergoing primary percutaneous coronary intervention. Background: The impact of diabetes in patients with acute myocardial infarction (AMI) treated with primary percutaneous coronary intervention (PCI) is unclear. The benefit of abciximab in this subset of patients remains controversial. Methods And Results: Three hundred and twenty-seven consecutive and unselected patients with acute AMI treated with primary PCI were included in our single-center retrospective registry, 103 diabetic (31%) and 224 nondiabetic (69%). Abciximab was given at the physician's discretion.
Diabetic patients were older (mean age 68.5±11 vs. 65±12 years; P=0.009), had an increased prevalence of hypertension (73 vs. 54%; P=0.001), a decreased prevalence of smoking (31 vs. 45%; P=0.02), a longer duration of symptoms before hospital admission (190 vs. 143 min; P=0.031), and a higher number of stents implanted (1.4 vs. 1.2; P=0.04). Other clinical and angiographic characteristics were comparable in the two groups. Diabetic patients had a higher incidence of the combined end-point of death and reinfarction at 30 days (18 vs. 10%; P=0.04) compared to nondiabetic patients. Abciximab treatment was associated with a lower in-hospital (23.8 vs. 5%; P=0.005) and 30-day (23.8 vs. 6.6%; P=0.012) mortality, and a lower incidence of death and reinfarction at 30 days (33.3 vs. 9.8%; P=0.003) in diabetic patients. In nondiabetic patients, abciximab was not associated with improved outcome measures. Advanced Killip class (III and IV) and abciximab were found to be independently associated with 30-day death or myocardial infarction [respectively, odds ratio (OR) 6.075, 95% confidence interval (CI) 1.59-23.218, P=0.008 and OR 0.177, 95% CI 0.034-0.938, P=0.042] in the propensity score-matched populations of diabetic patients. Advanced Killip class and thrombolysis in myocardial infarction score index were found to be independently associated with 30-day death or myocardial infarction (respectively, OR 6.607, 95% CI 1.5-29.106, P=0.013 and OR 1.094, 95% CI 1.042-1.148, P<0.001) in the propensity score-matched populations of nondiabetic patients. Conclusions: In our registry, diabetic patients treated with primary PCI for AMI had a worse in-hospital and 30-day outcome than nondiabetic patients. Adjunct pharmacologic treatment with abciximab was associated with a better prognosis only in diabetic patients. Answer: Yes, abciximab appears to improve the prognosis of diabetics after percutaneous coronary intervention (PCI). Studies have shown that abciximab treatment can reduce events in unselected diabetic patients undergoing PCI, particularly target vessel revascularization (PUBMED:12199976). Additionally, abciximab has been found to decrease mortality from 4.5% to 2.5% in diabetic patients, and this beneficial effect is noteworthy in those diabetic patients who are also hypertensive and obese and in diabetics undergoing multivessel intervention (PUBMED:10732889). Furthermore, late outcomes after elective PCI for saphenous vein graft with planned abciximab use in diabetics were similar to non-diabetics, suggesting that the historically observed adverse effect of diabetes on late outcomes after PCI of saphenous vein grafts may be attenuated by glycoprotein IIb/IIIa use (PUBMED:16226815). In a registry of diabetic patients treated with primary PCI for acute myocardial infarction, abciximab treatment was associated with a lower in-hospital and 30-day mortality, and a lower incidence of death and reinfarction at 30 days in diabetic patients (PUBMED:22343257). These findings indicate that abciximab can be an important adjunctive therapy in improving the prognosis of diabetic patients undergoing PCI.
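From the pooled point estimates in PUBMED:10732889 (1-year mortality of 4.5% with placebo versus 2.5% with abciximab in diabetics), standard derived effect measures follow directly. The sketch below uses only those two published figures; confidence intervals are deliberately ignored.

# Point estimates reported in PUBMED:10732889 for diabetic patients:
p_placebo = 0.045    # 1-year mortality, placebo arm
p_abciximab = 0.025  # 1-year mortality, abciximab arm

arr = p_placebo - p_abciximab   # absolute risk reduction
rr = p_abciximab / p_placebo    # relative risk
nnt = 1.0 / arr                 # number needed to treat

print(f"ARR = {arr:.1%}, RR = {rr:.2f}, NNT = {nnt:.0f}")
# ARR = 2.0%, RR = 0.56, NNT = 50: roughly one death averted at 1 year
# per 50 diabetic PCI patients treated (point estimates only, no CIs).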
Instruction: Biventricular response after pulmonary valve replacement for right ventricular outflow tract dysfunction: is age a predictor of outcome? Abstracts: abstract_id: PUBMED:28180102 Concomitant Right Ventricular Outflow Tract Cryoablation during Pulmonary Valve Replacement in a Patient with Tetralogy of Fallot. A 38-year-old female patient with a history of tetralogy of Fallot repair at 10 years of age underwent pulmonary valve replacement with a mechanical prosthesis, tricuspid annuloplasty, and right ventricular outflow tract cryoablation due to pulmonary regurgitation, tricuspid regurgitation, and multiple premature ventricular contractions with sustained ventricular tachycardia. After surgery, she had an uneventful postoperative course with arrhythmia monitoring. She was discharged without incident, and a follow-up Holter examination showed a decrease in the number of ventricular ectopic beats from 702 to 41. abstract_id: PUBMED:21720459 Left ventricular function improves after pulmonary valve replacement in patients with previous right ventricular outflow tract reconstruction and biventricular dysfunction. Congenital heart defects that have a component of right ventricular outflow tract obstruction, such as tetralogy of Fallot, are frequently palliated in childhood by disruption of the pulmonary valve. Although this can provide an initial improvement in quality of life, these patients are often left with severe pulmonary valve insufficiency. Over time, this insufficiency can lead to enlargement of the right ventricle and to the deterioration of right ventricular systolic and diastolic function. Pulmonary valve replacement in these patients decreases right ventricular volume overload and improves right ventricular performance. To date, few studies have examined the effects of pulmonary valve replacement on left ventricular function in patients with biventricular dysfunction. We sought to perform such an evaluation. Records of adult patients who had undergone pulmonary valve replacement from January 2003 through November 2006 were analyzed retrospectively. We reviewed preoperative and postoperative echocardiograms and calculated left ventricular function in 38 patients. In the entire cohort, left ventricular ejection fraction increased by a mean of 0.07 after pulmonary valve replacement, which was a statistically significant change (P < 0.01). In patients with preoperative ejection fractions of less than 0.50, mean ejection fractions increased by 0.10. We conclude that pulmonary valve replacement in patients with biventricular dysfunction arising from severe pulmonary insufficiency and right ventricular enlargement can improve left ventricular function. Prospective studies are needed to verify this finding. abstract_id: PUBMED:37088131 Use of the Inspiris valve in the native right ventricular outflow tract is associated with early prosthetic regurgitation. Objective: The Inspiris Resilia prosthesis (Edwards Lifesciences) has been increasingly used in the pulmonic position with limited performance data. We sought to investigate its durability as a surgical pulmonary valve replacement (PVR). Methods: We retrospectively reviewed patients who underwent PVR or conduit replacement with an Inspiris or non-Inspiris valve/conduit from 2018 to 2022. The primary end point was freedom from a composite of at least moderate pulmonary regurgitation, pulmonary stenosis, or valve/conduit reintervention. Secondary end points were individual components of the composite outcome.
To account for baseline differences, propensity matching identified 70 patient pairs. Results: A total of 227 patients (median age: 19.3 years [interquartile range, 11.8-34.4]) underwent PVR or conduit replacement (Inspiris: n = 120 [52.9%], non-Inspiris: n = 107 [47.1%]). Median follow-up was 26.6 months [interquartile range, 12.4-41.1]. Among matched patients, 2-year freedom from valve failure was lower in the Inspiris group (53.5 ± 9.3% vs 78.5 ± 5.9%, P = .03), as was freedom from at least moderate pulmonary regurgitation (54.2 ± 9.6% vs 86.4 ± 4.9%, P < .01). There was no difference in 2-year freedom from at least moderate pulmonary stenosis (P = .61) or reintervention (P = .92). Inspiris durability was poorer when implanted in the native right ventricular outflow tract than as a conduit, with 18-month freedom from valve failure of 59.0 ± 9.5% versus 85.9 ± 9.5% (P = .03). Conclusions: Early durability of the Inspiris valve is poor when implanted in the native right ventricular outflow tract; its unique design may be incompatible with the compliant pulmonary root. Modified implantation techniques or alternative prostheses should be considered. abstract_id: PUBMED:31560970 Transesophageal Echocardiographic Predictor of Significant Right Ventricular Outflow Tract Obstruction After Tetralogy of Fallot Repair. The optimal method for reconstruction of the right ventricular outflow tract in tetralogy of Fallot repair is yet to be established. Transatrial repair with pulmonary valve preservation avoids the detrimental effects of ventriculotomy and transannular patch; however, because of a moderately hypoplastic pulmonary valve, these techniques are associated with undesirable residual right ventricular outflow tract gradients resulting in reoperations. The study sought to define intraoperative echocardiographic predictors of significant right ventricular outflow tract obstruction after tetralogy of Fallot repair. From July 2016 to December 2017, 58 patients with tetralogy of Fallot underwent repair with direct intraoperative pressure measurement and intraoperative transesophageal echocardiography performed after completion of the repair. Multivariate analysis was performed using logistic regression analysis to identify independent predictors of right ventricular outflow tract obstruction. One year after tetralogy of Fallot repair, significant right ventricular outflow tract obstruction was present in 12 patients (20.6%): 10 patients (26.2%) after pulmonary valve-sparing procedure and 2 patients (12.5%) after transannular patch (P = 0.35). Right ventricular outflow tract z-score lower than -3.2 had the best predictive value for significant residual right ventricular outflow tract obstruction (area under the curve = 0.979, P < 0.001). Multivariate analysis found that the right ventricular outflow tract z-score reached statistical significance as a predictor of significant right ventricular outflow tract obstruction (P < 0.0001). Right ventricular outflow tract z-score <-3.2 measured by intraoperative transesophageal echocardiography is an indicator of postoperative right ventricular outflow tract obstruction right after surgery and a predictor of obstruction development following tetralogy of Fallot repair. abstract_id: PUBMED:37892051 Role of Computed Tomography before Transcatheter Pulmonary Valve Implantation in Patients with Dysfunctional Native Right Ventricular Outflow Tract. The most commonly performed percutaneous valve replacement procedure is for the aortic valve.
In recent years, there have been developments in percutaneous valve replacement methods for other valves, including the pulmonary valve. Computed tomography plays a crucial role in various stages of procedure planning and provides essential information regarding potential complications after the procedure. We present images documenting step by step how to evaluate cardiac computed tomography in the qualification procedure for transcatheter pulmonary valve implantation in patients with dysfunctional native right ventricular outflow tract. abstract_id: PUBMED:37892867 Treatment of Severe Pulmonary Regurgitation in Enlarged Native Right Ventricular Outflow Tracts: Transcatheter Pulmonary Valve Replacement with Three-Dimensional Printing Guidance. Background: Severe pulmonary regurgitation (PR) often occurs following a transannular patch repair of tetralogy of Fallot, resulting in an enlarged native right ventricular outflow tract (nRVOT) with varying shapes. Methods: We conducted a single-center study with eight patients having severe PR and enlarged nRVOT (diameters ≥ 29 mm). Transcatheter pulmonary valve replacement (TPVR) was performed using the self-expanding PT-Valve. Preoperative evaluation included echocardiography, computed tomography, and magnetic resonance imaging. A 3D-printed model of the nRVOT was used for preoperative assessment. Follow-up data were collected over a 1-year period. Results: PT-Valve was successfully implanted in all patients, resulting in immediate improvement of severe PR. Pulmonary artery diastolic pressure increased significantly (p < 0.001). No deaths or coronary compression occurred during the procedure. Over a 1-year follow-up, no stent displacement or fracture occurred. Only two patients had trace paravalvular leaks. Magnetic resonance imaging revealed a reversal of right ventricular remodeling, with a significant reduction in right ventricular end-diastolic volume index (p < 0.001) and improved right ventricular ejection fraction (p < 0.001). All patients achieved primary endpoints. Conclusion: 3D printing-guided PT-Valve implantation in enlarged nRVOT for severe PR is safe and effective, expanding TPVR indications and offering potential treatment for a broader patient population. abstract_id: PUBMED:18824753 Biventricular response after pulmonary valve replacement for right ventricular outflow tract dysfunction: is age a predictor of outcome? Background: The timing of pulmonary valve replacement (PVR) for free pulmonary incompetence in patients with congenital heart disease remains a dilemma for clinicians. We wanted to assess the determinants of improvement after PVR for pulmonary regurgitation over a wide range of patient ages and to use any identified predictors to compare clinical outcomes between patient groups. Methods And Results: Seventy-one patients (mean age 22±11 years; range, 8.5 to 64.9; 72% tetralogy of Fallot) underwent PVR for severe pulmonary regurgitation. New York Heart Association class improved after PVR (median of 2 to 1, P<0.0001). MRI and cardiopulmonary exercise testing were performed before and 1 year after intervention. After PVR, there was a significant reduction in right ventricular volumes (end diastolic volume 142±43 to 91±18, end systolic volume 73±33 to 43±14 mL/m², P<0.0001), whereas left ventricular end diastolic volume increased (66±12 to 73±13 mL/m², P<0.0001).
Effective cardiac output significantly increased (right ventricular: 3.0 ± 0.8 to 3.3 ± 0.8 L/min, P=0.013 and left ventricular: 3.0 ± 0.6 to 3.4 ± 0.7 L/min, P<0.0001). On cardiopulmonary exercise testing, ventilatory response to carbon dioxide production at anaerobic threshold improved from 35.9 ± 5.8 to 34.1 ± 6.2 (P=0.008). Normalization of ventilatory response to carbon dioxide production was most likely to occur when PVR was performed at an age younger than 17.5 years (P=0.013). Conclusions: A relatively aggressive PVR policy (end diastolic volume <150 mL/m²) leads to normalization of right ventricular volumes, improvement in biventricular function, and submaximal exercise capacity. Normalization of ventilatory response to carbon dioxide production is most likely to occur when surgery is performed at an age ≤17.5 years. This is also associated with better left ventricular filling and systolic function after surgery. abstract_id: PUBMED:30449423 Self-Expanding Pulmonary Valves for Large Diameter Right Ventricular Outflow Tracts. Congenital heart defects that involve obstruction to the right ventricular outflow tract are common. Surgical repair involves early relief of right ventricular outflow tract obstruction, which typically results in pulmonary regurgitation and a large, irregularly shaped "native" right ventricular outflow tract. This type of anatomy represents the majority of patients who could potentially benefit from transcatheter pulmonary valve therapy. Currently approved balloon-expandable devices were not designed for this application and the unique anatomy of these patients presents tremendous challenges for designing a valve that is. This article explores those challenges and the newest self-expanding devices designed to treat this challenging population. abstract_id: PUBMED:34889176 First experiences with Myval Transcatheter Heart Valve System in the treatment of severe pulmonary regurgitation in native right ventricular outflow tract and conduit dysfunction. The rates of morbidity and mortality related to pulmonary regurgitation and pulmonary stenosis are major concerns after surgery for CHD. Percutaneous pulmonary valve implantation has been established as a less invasive technique compared to surgery, with promising results according to long-term follow-up of the patients. There are only two approved valve options for percutaneous pulmonary valve implantation until now, which are Melody (Medtronic, Minneapolis, Minn, USA) and Sapien (Edwards Lifesciences, Irvine, Ca, USA). Both valves have limitations and do not cover the entire patient population. Therefore, cardiologists need more options to improve outcomes with fewer complications in such a promising area. Herein, we present a case series applying a new transcatheter valve system, Myval®, designed for transcatheter aortic valve implantation procedures, in the pulmonary position in conduits and native right ventricular outflow tracts. This is the first patient series in which the use of Myval in dysfunctional right ventricular outflow tracts is described, after surgical repair of CHD. abstract_id: PUBMED:31236202 Percutaneous pulmonary valve implantation in patients with right ventricular outflow tract dysfunction: a systematic review and meta-analysis. Background: Pulmonary valve replacement is required for patients with right ventricular outflow tract (RVOT) dysfunction. Surgical and percutaneous pulmonary valve replacement are the treatment options.
Percutaneous pulmonary valve implantation (PPVI) provides a less-invasive therapy for patients. The aim of this study was to evaluate the effectiveness and safety of PPVI and the optimal time for implantation. Methods: We searched PubMed, EMBASE, Clinical Trial, and Google Scholar databases covering the period until May 2018. The primary effectiveness endpoint was the mean RVOT gradient; the secondary endpoints were the pulmonary regurgitation fraction, left and right ventricular end-diastolic and systolic volume indexes, and left ventricular ejection fraction. The safety endpoints were the complication rates. Results: A total of 20 studies with 1246 participants enrolled were conducted. The RVOT gradient decreased significantly [weighted mean difference (WMD) = -19.63 mmHg; 95% confidence interval (CI): -21.15, -18.11; p < 0.001]. The right ventricular end-diastolic volume index (RVEDVi) was improved (WMD = -17.59 ml/m²; 95% CI: -20.93, -14.24; p < 0.001), but patients with a preoperative RVEDVi >140 ml/m² did not reach the normal size. Pulmonary regurgitation fraction (PRF) was notably decreased (WMD = -26.27%, 95% CI: -34.29, -18.25; p < 0.001). The procedure success rate was 99% (95% CI: 98-99), with a stent fracture rate of 5% (95% CI: 4-6), the pooled infective endocarditis rate was 2% (95% CI: 1-4), and the incidence of reintervention was 5% (95% CI: 4-6). Conclusions: In patients with RVOT dysfunction, PPVI can relieve right ventricular remodeling, improving hemodynamic and clinical outcomes. Answer: The question of whether age is a predictor of outcome for biventricular response after pulmonary valve replacement (PVR) for right ventricular outflow tract (RVOT) dysfunction has been addressed in the literature. According to a study with 71 patients who underwent PVR for severe pulmonary regurgitation, it was found that a relatively aggressive PVR policy led to normalization of right ventricular volumes, improvement in biventricular function, and submaximal exercise capacity. The study also suggested that normalization of ventilatory response to carbon dioxide production was most likely to occur when PVR was performed at an age younger than 17.5 years. This was associated with better left ventricular filling and systolic function after surgery, indicating that age may indeed be a predictor of outcome (PUBMED:18824753). This finding is consistent with the understanding that congenital heart defects involving RVOT obstruction, such as tetralogy of Fallot, often require intervention in childhood. Pulmonary valve replacement in patients with biventricular dysfunction arising from severe pulmonary insufficiency and right ventricular enlargement can improve left ventricular function (PUBMED:21720459). However, the timing of PVR remains a clinical dilemma, and the study mentioned above suggests that earlier intervention may lead to more favorable outcomes in terms of biventricular response and exercise capacity. It is important to note that while age at the time of PVR may be a predictor of certain outcomes, individual patient factors, the specifics of the RVOT dysfunction, and the type of valve or repair technique used can also significantly influence the results. For instance, the use of the Inspiris valve in the native RVOT has been associated with early prosthetic regurgitation, indicating that the choice of prosthesis is also a critical factor in the success of the procedure (PUBMED:37088131). 
In summary, age appears to be a predictor of biventricular response after PVR for RVOT dysfunction, with younger patients potentially experiencing better outcomes. However, this is a complex issue that must be considered in the context of other patient-specific and procedural factors.
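To make the survival figures quoted above more concrete, the sketch below shows how a freedom-from-event estimate such as "2-year freedom from valve failure" is typically computed with a Kaplan-Meier estimator. This is a minimal illustration only: the lifelines Python library is assumed, and the follow-up times and event flags are invented, not taken from any of the cited studies.

```python
# Minimal Kaplan-Meier sketch for "freedom from valve failure"-style estimates.
# Durations and event flags below are invented for illustration.
from lifelines import KaplanMeierFitter

durations = [6, 12, 18, 24, 24, 30, 36, 40, 41, 41]  # months of follow-up
events = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]              # 1 = valve failure, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events, label="freedom from valve failure")

# Estimated probability of remaining failure-free at 24 months, analogous to
# the "2-year freedom from valve failure" percentages reported above
print(kmf.predict(24))
print(kmf.confidence_interval_)  # Greenwood-based confidence bands
```

A group contrast such as Inspiris versus non-Inspiris would then rest on a log-rank test between the two matched samples rather than on the point estimates alone.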
Instruction: Is the healthy middle ear a normally sterile site? Abstracts: abstract_id: PUBMED:19060773 Is the healthy middle ear a normally sterile site? Objective: To systematically evaluate the presumption that the healthy middle ear becomes colonized with organisms via the patent eustachian tube using modern microbiologic techniques. Study Design: Sterile saline washings were obtained from the middle ear of patients in a prospective fashion. Setting: Tertiary/quaternary referral centers. Patients: Pediatric and adult patients undergoing cochlear implantation surgery. Intervention(s): Standard bacterial and viral cultures, and nucleic acid amplification techniques. Main Outcome Measure(s): Identification of organisms. Results: Specimens were obtained from 13 children and 9 adults. No organisms were identified in any of the specimens, either through standard culture or PCR testing. Conclusion: The presumption that the healthy middle ear is colonized by bacteria from the nasopharynx is unsubstantiated. abstract_id: PUBMED:31484741 Examining the Evidence for an Adult Healthy Middle Ear Microbiome. Otitis media (OM) is a cluster of diseases of the middle ear that commonly result from bacterial infection. OM subtypes in which the tympanic membrane is intact (acute otitis media and otitis media with effusion) are presumed to result from pathogen translocation through the eustachian tube. Recent molecular-based studies have suggested that a diverse middle ear microbiome exists in the absence of disease. These have been largely unsupported by culture and feature species that commonly contaminate low-biomass sequencing data. Combining culture-based and molecular techniques, we undertook a detailed investigation of the evidence for bacterial colonization of the healthy middle ear. Middle ear (ME), nasopharynx (NP), and external ear canal (EC) swabs were collected from a total of 25 adult patients undergoing cochlear implant, stapedotomy, or translabyrinthine vestibular schwannoma resection. Diagnostic culture, microscopy, quantitative PCR, and 16S rRNA gene amplicon sequencing were used to assess sample bacterial content. EC and NP microbiota were consistent with previous reports. In contrast, bacterial levels in ME samples were not significantly above those in unused control swabs. Commonly detected taxa were among recognized sequencing contaminants (Methylobacterium, Pseudomonas, and Acinetobacter). Linear regression of dominant ME taxa confirmed a negative relationship between relative abundance and bacterial load, consistent with contamination. No bacteria were detected by microscopy or diagnostic culture in any middle ear sample. Our findings cast substantial doubt on previous reports identifying a healthy middle ear microbiome using 16S amplicon sequencing.IMPORTANCE Recent molecular-based studies have suggested that a diverse middle ear microbiome in adults and children can exist in the absence of disease. These studies have been largely unsupported by culture and feature species that commonly contaminate low-biomass sequencing data. While 16S rRNA gene amplicon sequencing has proven to be a highly informative technique in many clinical contexts, it is susceptible to spurious signal arising from sequencing reagent contaminants where sample biomass is low. Combining culture-based and molecular techniques, we undertook a detailed investigation of the evidence for bacterial colonization of the healthy middle ear. 
In finding no evidence of viable bacterial cells in middle ear samples, our study further underlines the importance of careful consideration of amplicon sequence data derived from very-low-biomass contexts and the value of analytical approaches that combine culture and molecular techniques. abstract_id: PUBMED:17201985 Glial heterotopia of the middle ear. A choristoma is the presence of histologically normal tissue in an area where it is not normally located. The most frequently reported choristoma of the middle ear is salivary choristoma. Heterotopic brain tissue is very uncommon in the middle ear and mastoid. We present a rare case of glial choristoma of the middle ear, together with a review of the literature. abstract_id: PUBMED:34160830 Respiratory viruses in the healthy middle ear and middle ear with otitis media with effusion. To investigate the presence of respiratory viruses in the middle ear cavity of the individuals with a healthy middle ear and the children with otitis media with effusion (OME). A total of 72 middle ear samples were collected from 25 children with OME (Group 1) and 47 individuals with no middle ear disease (Group 2). Multiplex real-time polymerase chain reaction was used to investigate the presence of 20 different respiratory viruses. Virus results were compared with bacteriomes of the same populations. At least one respiratory virus was detected in 56% of the patients in Group 1 and 12.8% of the individuals in Group 2. The viral co-infection rate for Group 1 and 2 was 8% and 2.1%, respectively. In Group 1, adenovirus was the most frequently detected virus with a rate of 24%, either alone (16%) or concurrent with other viruses (8%), followed by influenza B (12%), rhinovirus, and bocavirus (8%) each. Parainfluenza 4, coronavirus OC43, and RSV A/B were detected in 4% of the sample each. In Group 2, rhinovirus was detected in two samples (4.3%) followed by adenovirus, coronavirus OC43, coronavirus E299, and coronavirus NL63 with a rate of 2.1% each. The detection rate of respiratory viruses was significantly higher in children aged 6 to 11 years. There was no positive association between virus and bacteria found in the middle ear cavity. The current study has provided comprehensive data indicating the presence of diverse respiratory viruses in the healthy middle ear cavity. Our results also suggest that respiratory viruses might have a contribution to OME pathogenesis. abstract_id: PUBMED:8289007 Middle ear lipoma. We report a case of primary middle ear lipoma diagnosed in the right ear of a five-year-old child with concurrent bilateral middle ear effusions. The lipoma occupied a site favoured by congenital cholesteatoma and was occlusive to the eustachian tube contributing to its dysfunction. This is the first case of de novo middle ear lipoma diagnosed in the UK, and the third in world literature. Our CT scans are suggestive of a similar but smaller lesion in the left ear of the same child. abstract_id: PUBMED:8562042 Collagen types in the middle ear mucosa. The distribution of various collagen types--in particular that of type II as the major collagen in cartilage--in normal auricular structures is discussed with reference to a 1994 report by Ovesen describing the presence of collagen type II in normal middle ear mucosa. 
In contrast to this report, no collagen type II is normally found in the normal mucosa of the middle ear, while multiple structures of the normal middle and inner ear (which mostly consist of cartilage or have at least a cartilaginous origin) contain this unique collagen type. This indicates that the collagen type II staining previously observed in the normal middle ear mucosa is most likely an artifact. Of the at least 18 collagen types now known, interstitial and ubiquitously present types I, III and V can be found in the normal middle ear. The minor cartilage collagen types IX and XI have been found in cartilage of the middle (and inner) ear. The basement membrane collagen type IV is normally present in different microanatomical settings, particularly in the epithelial and endothelial basement membranes. In general, an exclusive morphometric estimation of area changes for collagens, particularly for collagen type IV, regardless of any tissue localization seems to be of minor value, since no attribution can be drawn to any of the histoanatomical structures present in the ear. The presence of further collagen types has not yet been tested in normal auricular structures. abstract_id: PUBMED:2705704 Acute effects of sulfur dioxide exposure on the middle ear mucosa. A variety of atmospheric pollutants are known to depress mucociliary function in the respiratory system. Since the mucociliary function in the middle ear is similar, and the middle ear may be invaded by atmospheric pollutants, we decided to investigate the possible contribution of sulfur dioxide to middle ear effusion. Guinea pigs were exposed for 24 hours to 300 ppm of sulfur dioxide or air. Immediately after exposure, ciliary activity and epithelial structure were examined close to the tympanic orifice (proximal site) and more distal to it (distal site). In the animals exposed to sulfur dioxide, no effusion was found in the tympanic cavity. Ciliary activity was reduced only in the distal site. Electron microscopy demonstrated hypersecretion in the proximal site and severe pathologic changes in the distal site. Although the normally functioning cilia in the proximal site may prevent retention of surplus secretions in the ear, sulfur dioxide may promote middle ear effusion when combined with other detrimental factors, because it stimulates mucus secretion in the proximal site and impairs ciliary function in the distal site. abstract_id: PUBMED:922280 Congenital deformities of the middle and external ear. Congenital deformities of the external and middle ears were shown by tomography in 246 patients. Surgical exploration has been undertaken at some time in many of these ears. The appearance of the tomograms and their significance were reviewed in the light of the operative findings and the authors' experience of congenital ear lesions. In nearly every case a middle ear cavity could be demonstrated although this varied from a normally aerated middle ear and mastoid in association with an isolated unilateral atresia of the external auditory meatus to a small slit-like hypotympanum in patients with craniofacial abnormalities. The ossicles were nearly always present but deformed. The typical appearances of the ossicles and the frequently aberrant pathways of the facial nerve are described. abstract_id: PUBMED:10488982 Carbon dioxide exchange via the mucosa in healthy middle ear. 
Background: Recent studies have shown that gas exchange via the middle ear mucosa, which is performed between the middle ear cleft and capillaries in the submucosal connective tissue, has an essential role in ventilation and pressure regulation in the middle ear cleft. We speculated that gas exchange via the mucosa is induced by the gas diffusion caused by the partial pressure gradient of gas between the middle ear cleft and submucosal capillaries. Objective: To evaluate the capacity of the gas exchange via the mucosa in the healthy middle ear of humans by examining the effect of the respiratory mode on middle ear pressure. Subjects And Methods: We selected 13 volunteers ranging in age from 25 to 44 years with healthy ear drums and type A tympanograms. Middle ear pressure was measured in 1 ear of each subject every 2 minutes using tympanometry while the respiratory mode was altered, with the subject in the supine position. Results: The partial pressure of carbon dioxide in the venous blood (PvCO2) and middle ear pressure were decreased by hyperventilation and increased by hypoventilation. The partial pressure of oxygen in the venous blood showed little change. Conclusions: Carbon dioxide diffused into the blood from the middle ear cleft in accord with the partial pressure gradient when the PvCO2 was reduced by hyperventilation, resulting in a decrease of middle ear pressure, whereas CO2 diffused into the middle ear cleft when the PvCO2 was elevated by hypoventilation, resulting in an increase of middle ear pressure. These findings suggest that a bidirectional CO2 exchange via the middle ear mucosa functions in the normal human middle ear. abstract_id: PUBMED:10648072 Middle ear adenoma with neuroendocrine differentiation. The lining of the middle ear cleft is normally a thin modified respiratory type mucosa. Normal mucosa of the middle ear is devoid of salivary type seromucous glands. Middle ear adenoma is a rare tumor that seems to be derived from the middle ear mucosa. This tumor has been previously described under a variety of names because of its different nature and biological behavior. We herein report a case of middle ear tumor that shows adenomatous and neuroendocrine features. A 64-year-old woman presented with a history of hearing loss, tinnitus and stuffy feeling of the right ear. The patient was treated two times for ear polyp. She finally underwent a tympanomastoidectomy and there was no evidence of recurrence 18 months after the procedure. Histopathological examination displayed cuboidal and columnar cells, arranged as glands, trabeculae and solid sheets. Neuroendocrine differentiation was revealed by immunohistochemical staining with polyclonal antibodies against neuron specific enolase and chromogranin antigens. Answer: The question of whether the healthy middle ear is a normally sterile site has been addressed in several studies. According to the findings from a study where sterile saline washings were obtained from the middle ears of patients undergoing cochlear implantation surgery, no organisms were identified in any of the specimens, either through standard culture or PCR testing (PUBMED:19060773). This suggests that the presumption that the healthy middle ear is colonized by bacteria from the nasopharynx is unsubstantiated. Another study combined culture-based and molecular techniques to investigate bacterial colonization of the healthy middle ear. 
The results showed that bacterial levels in middle ear samples were not significantly above those in unused control swabs, and commonly detected taxa were among recognized sequencing contaminants. No bacteria were detected by microscopy or diagnostic culture in any middle ear sample, casting substantial doubt on previous reports identifying a healthy middle ear microbiome using 16S amplicon sequencing (PUBMED:31484741). Furthermore, a study investigating the presence of respiratory viruses in the middle ear cavity of individuals with a healthy middle ear and children with otitis media with effusion found that at least one respiratory virus was detected in 12.8% of the individuals with no middle ear disease. However, the study did not establish a positive association between virus and bacteria found in the middle ear cavity (PUBMED:34160830). Based on these findings, it appears that the healthy middle ear is generally a sterile site, and the presence of microorganisms is not common. The detection of organisms in some studies may be due to contamination rather than true colonization.
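The contamination argument in PUBMED:31484741 rests on a simple quantitative signature: for a reagent contaminant, a taxon's relative abundance falls as total bacterial load rises, because low-biomass samples carry proportionally more contaminant reads. The sketch below illustrates that check; the numbers are invented, and the R package decontam implements a formal version of this frequency-based test.

```python
# Hedged sketch of a frequency-based contaminant check: regress a taxon's
# relative abundance on log10 bacterial load; a significantly negative slope
# is consistent with reagent contamination rather than a genuine resident taxon.
import numpy as np
from scipy import stats

bacterial_load = np.array([1e2, 5e2, 1e3, 5e3, 1e4, 5e4])       # hypothetical 16S qPCR copies
rel_abundance = np.array([0.60, 0.35, 0.20, 0.08, 0.03, 0.01])  # hypothetical taxon fraction

result = stats.linregress(np.log10(bacterial_load), rel_abundance)
print(f"slope={result.slope:.3f}, p={result.pvalue:.3g}")
if result.slope < 0 and result.pvalue < 0.05:
    print("pattern consistent with a sequencing contaminant")
```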
Instruction: Can colorectal cancer mass-screening organization be evidence-based? Abstracts: abstract_id: PUBMED:16156391 Evidence-based planning: the case of mass screening for colorectal cancer in the Lazio region of Italy Several population-based trials have shown the efficacy of colorectal cancer mass-screening based on guaiac faecal blood testing in a generic risk population. SCCR represents a very complex and resource-demanding public health intervention and, for this reason, its planning requires actual efficacy as a main goal. Since evidence of efficacy demonstrated by population-based trials may not actually generate effectiveness, the Regional Government of Latium Region decided to implement some experimental studies before introducing a screening programme, in order to define an evidence-based organisational model of SCCR and a feasibility evaluation of the real needs for screening. The aims of the pilot studies were to define an evidence-based organisational model, to evaluate the necessary resources and the actual quality standard of clinical examination, treatment and surgery. The aim of the feasibility study is to test the organisational model for SCCR for about 300,000 citizens residing in the Latium region. The present article illustrates the scheduling path set out, which is based on the involvement of experts, GP representatives and specialists from scientific societies and is planned by the following actions: definition of evidence-based recommendations; identification of further investigations; realization of experimental studies; definition of an evidence-based organisational model. The main research areas have been dealt with using randomised trials, in order to evaluate the efficacy of the involvement of GPs and the kind of test for RSOF. Our work has produced evidence which was sometimes in contrast with information in the literature, demonstrating that guaiac RSOF testing is less reproducible and determines lower uptake than immunochemical testing. Our work also shows that the involvement of GPs should be based on their personal skills rather than on their role. Such evidence is fundamental to the definition of the organisational model and confirms the need to base evidence-based planning not only on evidence found in the literature. The necessity of this approach is strengthened by the level of organisational complexity and by the amount of resources needed to put a public health intervention into practice. abstract_id: PUBMED:18084349 Evidence-based public health: strategies aimed at increasing adherence to colorectal cancer screening programs The ministerial decree 29/11/2001 included colorectal carcinoma screening procedures among the "essential" health care services that should be delivered free of charge to citizens. Secondary prevention programs for colorectal cancer must therefore be implemented in all Italian regions. An international literature search on colorectal cancer screening was performed in order to provide a resource for public health workers and decision makers, for selecting interventions to improve adherence to screening programs. The following interventions have been proven to be effective: reducing structural barriers to screening, active recall systems, multicomponent interventions involving active recall and health education, active reminder systems, periodic dissemination of results, and physician health education.
Opportunistic screening by general practitioners, that is, performing faecal occult blood testing in asymptomatic patients consulting a GP for other reasons, is a strategy that should also be implemented. abstract_id: PUBMED:16607794 Evidence-based colorectal cancer screening Colorectal cancer has been identified as an appropriate candidate for early detection and treatment of both asymptomatic individuals and those at risk of the disease. The lifetime incidence of colorectal cancer among persons at average risk is sufficiently high to justify screening the general population. This review presents the latest strategies employed in early detection of colorectal cancer. An ideal screening test should satisfy some basic criteria: it must be simple, safe, and accepted by the population, with valid and reproducible results. Despite the demonstrated benefits of colorectal cancer screening, incidence and mortality rates of this common cancer remain high. More optimally tailored screening tools are needed that combine the features of a potentially ideal screening test. abstract_id: PUBMED:18803810 Can colorectal cancer mass-screening organization be evidence-based? Lessons from failures: the experimental and pilot phases of the Lazio program. Background: Screening programmes should be organized to translate theoretical efficacy into effectiveness. An evidence-based organizational model of colorectal cancer screening (CRCS) should assure feasibility and high compliance. Methods: A multidisciplinary Working Group (WG) reviewed literature and guidelines to define evidence-based recommendations. The WG identified the need for further local studies: physicians' CRCS attitudes, the effect of test type and provider on compliance, and individual reasons for non-compliance. A survey of digestive endoscopy services was conducted. A feasibility study on a target population of 300,000 has begun. Results: Based on the results of population trials and on literature review, the screening strategy adopted was Faecal Occult Blood Test (FOBT) every two years for 50-74 year olds and, for positives, colonoscopy. The immunochemical test was chosen because it has 20% higher compliance than the guaiac test. GPs were chosen as the preferred provider, also for higher compliance. Since we observed that distance is the major determinant of non-compliance, we chose GPs because they are the closest providers, both geographically and emotionally, to the public. The feasibility study showed several barriers: GP participation was low; there were administrative problems in involving GPs; opportunistic testing by the GPs; difficulties in access to gastroenterology centres; difficulties in gathering colonoscopy results; little time given to screening activity by the gastroenterology centre. Conclusion: The feasibility study highlighted several limits of the model. Most of the barriers that emerged were consequences of organisational choices not supported by evidence. The principal limit was a lack of accountability by the participating centres. abstract_id: PUBMED:33994253 Evidence-Based Interventions and Colorectal Cancer Screening Rates: The Colorectal Cancer Screening Program, 2015-2017. Introduction: The Centers for Disease Control and Prevention administers the Colorectal Cancer Control Program to increase colorectal cancer screening rates among people aged 50-75 years in areas where rates are lower than state or national levels.
The aim of this study is to better understand the effectiveness of specific Colorectal Cancer Control Program components. Methods: The study population included clinics enrolled in the Colorectal Cancer Control Program during Years 1 and 2. Clinic data collected by the Centers for Disease Control and Prevention annually from 2015 to 2017 for program evaluation were used. The outcome variable was screening rate change through Program Year 2, and predictor variables were a new implementation or enhancement of evidence-based interventions and other program components. The analysis, conducted in 2020, used ordinary least square and generalized estimating equations regressions and first difference models to estimate the associations of independent variables with the outcome. Results: Of the total 336 clinics, 50%-70% newly implemented or enhanced different evidence-based interventions. Among these, client reminders were most highly associated with the increase in screening rates (8.0 percentage points). Provider reminder was not significantly associated with any change in screening rates. Among all program components, having a colorectal cancer screening champion was most highly (8.4 percentage points) associated with screening rate change. Results from different models were slightly different but in agreement. Conclusions: Client reminders, provider assessment and feedback, and colorectal cancer screening champions were associated with increased clinic-level colorectal cancer screening rates. Universal implementation of these strategies can substantially increase colorectal cancer screening rates in the U.S. abstract_id: PUBMED:23535550 Colorectal cancer screening One of the most common cancers, colorectal cancer is a leading cause of cancer deaths worldwide. The vast majority of colorectal carcinomas develop slowly from easily-removed precursors (adenomas), and in contrast to most other types of cancer, chances of prevention and early detection are very good. Tests for fecal occult blood, sigmoidoscopy and colonoscopy are frequently recommended and widely used. Most western countries have initiated population-wide screening programs for colorectal cancer, with the goal of greatly reducing incidence and mortality. Various models for screening programs and different screening examinations have been proposed and implemented. Evidence from a range of countries shows that the highest participation rates are obtained when programs are designed to invite eligible individuals to come in for screening. abstract_id: PUBMED:22618623 Screening for colorectal cancer. Current evidence and novel developments Clinical Issue: Colorectal cancer is one of the leading causes of cancer-related morbidity and mortality. Screening has been demonstrated to reduce both the incidence and mortality of colorectal cancer. In addition to the large group with a normal risk level, two further risk groups need to be distinguished: increased family risk and hereditary colorectal cancer syndromes. STANDARD METHODS FOR SCREENING: The highest evidence for all screening tests has been demonstrated for guaiac-based fecal occult blood testing. Colonoscopy is a diagnostic and therapeutic tool and it serves as the reference standard for other tests in clinical studies. Innovations: Fecal immunochemical tests have a higher sensitivity than guaiac-based tests. Several novel techniques are under development and could be adopted by screening programs in the future. 
Next to colonoscopy, computed tomography (CT) colonography and colon capsule endoscopy have the highest sensitivity for colorectal neoplasia. Molecular tests, which are based on the detection of genetic and epigenetic changes of DNA released by the tumor into feces or blood, have high potential and could replace occult blood tests in the future. Practical Recommendations: Colonoscopy is the primary instrument for screening for colorectal neoplasia. Fecal occult blood testing should only be performed if colonoscopy is declined; CT colonography has not yet been approved for screening in Germany. abstract_id: PUBMED:32164140 Construction of key question list in the evidence-based guidelines for colorectal cancer screening in China Objective: To establish the key question list for the development of evidence-based guidelines in China according to the content and limitations of current evidence-based guidelines around the world. Methods: First, we introduced in detail the evidence-based guidelines which met the criteria of the World Health Organization guideline development handbook and then formulated the draft list of key questions for the development of evidence-based guidelines. Finally, the Delphi method was used to determine the list of key questions in developing evidence-based guidelines for colorectal cancer screening. Results: In total, 34 questionnaires were collected, with experts from clinical and epidemiological fields. The average experts' authority coefficient was 0.81, indicating a high degree of authority. The concentration of opinions on all items in the questionnaire was relatively high, with the full score ratio greater than 75% and the coefficient of variation less than 0.3. The list of key questions on evidence-based guidelines for colorectal cancer screening has been divided into six parts: epidemiological problems, risk classification, screening age, screening tools, implementation, and selection of steering group members, which covers the issues that need to be considered in the development of evidence-based colorectal cancer screening guidelines in China. Conclusion: The key question list for evidence-based guideline development in our study can be applied to the development of evidence-based guidelines for colorectal cancer screening in the future, as well as the development of evidence-based guidelines for other cancer screening in China. abstract_id: PUBMED:10659641 Evidence based reevaluation of primary screening tests Preventive medicine consists of all efforts which delay the development of diseases (primary prevention), make early detection possible (secondary prevention), and rehabilitate patients (tertiary prevention). The aim is to reduce premature morbidity and mortality and to increase life expectancy and quality of life. The authors summarize the secondary preventive interventions in asymptomatic individuals, namely primary screening methods, with special regard to so-called "western diseases": hypertension, hyperlipidemia, colorectal cancer, neoplasms of the breast, cervix, and prostate, also considering type 2 diabetes and osteoporosis, on the basis of the supporting evidence and the recommendations of certain national committees. abstract_id: PUBMED:19581336 Review of the evidence for a colorectal cancer screening programme in elderly people. Colorectal cancer is a major public health issue, contributing to 16,000 UK deaths per year, most of these in the elderly population.
A new NHS screening programme for colorectal cancer in people over 60 is being introduced across the country throughout 2009. The aim of this research was to review the current literature on colorectal cancer screening and determine how much of the evidence for screening is applicable to elderly people. MEDLINE database was searched for articles published between 1990 and 2007, using search terms of colorectal neoplasms, mass-screening, faecal occult blood, colonoscopy and sigmoidoscopy. Articles for inclusion were limited to those in English and those including older adults. The results showed that evidence for colorectal cancer screening in general has been well researched. However, little was found specifically on screening for elderly people, or looking at the different benefits and limitations in older people compared to younger people. Very few health agencies suggested an upper age limit for screening. In conclusion, there is very little research on screening for colorectal cancer specifically in elderly people, although many health authorities advise such screening. The health needs of an older population are different to those of middle-aged people and at present the screening programmes do not appear to reflect this. Answer: Yes, colorectal cancer mass screening organization can be evidence-based. The literature suggests that screening programs should be organized to translate theoretical efficacy into effectiveness, and an evidence-based organizational model of colorectal cancer screening (CRCS) should assure feasibility and high compliance (PUBMED:18803810). The process involves reviewing literature and guidelines to define evidence-based recommendations, identifying the need for further local studies, and conducting feasibility studies on target populations (PUBMED:18803810). For instance, in the Lazio region of Italy, experimental studies were implemented before introducing a screening program to define an evidence-based organizational model and evaluate the real needs for screening (PUBMED:16156391). These studies aimed to evaluate the efficacy of the involvement of general practitioners (GPs) and the type of test for fecal occult blood testing (FOBT), showing that immunochemical testing had higher compliance than guaiac testing and that GP involvement should be based on personal skills rather than role (PUBMED:16156391). Evidence-based interventions such as reducing structural barriers to screening, active recall systems, multicomponent interventions involving active recall and health education, active reminder systems, periodic dissemination of results, and physician health education have been proven effective in increasing adherence to screening programs (PUBMED:18084349). Additionally, client reminders, provider assessment and feedback, and having a colorectal cancer screening champion were associated with increased clinic-level colorectal cancer screening rates (PUBMED:33994253). However, the feasibility study in the Lazio program highlighted several barriers, such as low GP participation, administrative problems, opportunistic testing by GPs, difficulties in access to gastroenterology centers, and gathering colonoscopy results, indicating that organizational choices need to be supported by evidence to overcome these barriers (PUBMED:18803810). 
In conclusion, colorectal cancer mass screening can be organized based on evidence, but it requires careful planning, consideration of local contexts, and attention to potential barriers to ensure the effectiveness and high compliance of the screening program.
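Because the Colorectal Cancer Control Program evaluation (PUBMED:33994253) quantified intervention effects with ordinary least squares and GEE regressions of clinic-level screening-rate change, a compact sketch of that analysis style may help. The data frame, column names, and values below are hypothetical, and statsmodels is assumed as the regression library; this is an illustration of the approach, not the study's actual code.

```python
# Hedged sketch: clinic-level OLS relating newly implemented interventions
# to the change in colorectal cancer screening rate (percentage points).
import pandas as pd
import statsmodels.formula.api as smf

clinics = pd.DataFrame({
    "rate_change": [8.5, 2.1, 10.0, -1.2, 6.3, 4.4, 9.1, 0.5],
    "client_reminders": [1, 0, 1, 0, 1, 0, 1, 0],      # newly implemented/enhanced?
    "screening_champion": [1, 0, 1, 0, 0, 1, 1, 0],
    "baseline_rate": [42.0, 55.0, 38.0, 60.0, 47.0, 50.0, 35.0, 58.0],
})

model = smf.ols(
    "rate_change ~ client_reminders + screening_champion + baseline_rate",
    data=clinics,
).fit()
print(model.params)  # coefficients play the role of the reported percentage-point effects
```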
Instruction: Does retrograde cerebral perfusion via superior vena cava cannulation protect the brain? Abstracts: abstract_id: PUBMED:17071098 Does retrograde cerebral perfusion via superior vena cava cannulation protect the brain? Objective: Retrograde cerebral perfusion via cannulation of the superior vena cava is a widespread method for optimising protection of the brain during hypothermic circulatory arrest. Methods: In 14 cadavers (8 females, 6 males) from the local department of pathology, an examination was performed to check the competence of the valves of the internal jugular veins. After complete preparation of the superior vena cava, the innominate vein and both internal jugular veins, ligating all side branches, retrograde perfusion was installed in 7 cadavers, documenting flow and pressure of each internal jugular vein (IJV) in vitro. Afterwards, the veins were opened and their valves inspected. Results: In all 14 cadavers, anatomically and functionally competent valves on the right proximal IJV were found. Only 1/14 cadavers had no valve in the left proximal IJV. Additional rudimentary and incompetent valves could be identified in 1/14 cadavers on the distal right IJV, and in 2/14 cadavers on the left IJV. Retrograde flow measurement in 7/14 cadavers revealed 0 ml/min in 4/7 cadavers, 6 ml/min in 1/7, 340 ml/min in 1/7 and 2500 ml/min in 1/7. Conclusions: As a rule, anatomically and functionally competent valves in the proximal IJV are present. In human beings, they obstruct the direct retrograde inlet to the intracranial venous system, which suggests an unbalanced and unreliable perfusion of the brain. Therefore, retrograde cerebral perfusion by cannulating the superior vena cava may help flush out emboli and support 'the cold jacket' of the brain. However, its retrograde backflow cannot be taken as a sign of adequate cerebral perfusion. abstract_id: PUBMED:10892956 Acute type A aortic dissection: retrograde perfusion with left superior vena cava. Retrograde cerebral perfusion with hypothermic circulatory arrest confers additional cerebral protection during repair of type A aortic dissection. We present a 42-year-old man with acute type A aortic dissection and a persistent left superior vena cava. Cannulation of the right and left superior vena cava is used for retrograde perfusion of both hemispheres with bilateral monitoring of electroencephalogram and somatosensory-evoked potentials during and after the hypothermic circulatory arrest interval. abstract_id: PUBMED:11479507 Retrograde cerebral perfusion provides negligible flow through brain capillaries in the pig. Objectives: Although retrograde cerebral perfusion is being used clinically during aortic arch surgery, whether retrograde flow perfuses the brain effectively is still uncertain. Methods: Fourteen pigs were cooled to 20 °C with cardiopulmonary bypass and perfused retrogradely via the superior vena cava for 30 minutes: 7 underwent standard retrograde cerebral perfusion and 7 underwent retrograde perfusion with occlusion of the inferior vena cava. Antegrade and retrograde cerebral blood flow were calculated by quantitating fluorescent microspheres trapped in brain tissue after the animals were put to death; microspheres returning to the aortic arch, the inferior vena cava, and the descending aorta were also analyzed during retrograde cerebral perfusion. Results: Antegrade cerebral blood flow was 16 ± 7.7 mL·min⁻¹·100 g⁻¹ before retrograde cerebral perfusion and 22 ± 6.3 mL·
min⁻¹·100 g⁻¹ before perfusion with caval occlusion (P = .14). During retrograde perfusion, calculations based on the number of microspheres trapped in the brain showed negligible flows (0.02 ± 0.02 mL·min⁻¹·100 g⁻¹ with retrograde cerebral perfusion and 0.04 ± 0.02 mL·min⁻¹·100 g⁻¹ with perfusion with caval occlusion; P = .09): only 0.01% and 0.02% of superior vena caval inflow, respectively. Less than 13% of retrograde superior vena caval inflow blood returned to the aortic arch with either technique. During retrograde cerebral perfusion, more than 90% of superior vena caval input was shunted to the inferior vena cava and was then recirculated, as indicated by rapid development of an equilibrium in microspheres between the superior and inferior venae cavae. With retrograde perfusion and inferior vena caval occlusion, less than 12% of inflow returned to the descending aorta and only 0.01% of microspheres. Conclusions: The paucity of microspheres trapped within the brain indicates that retrograde cerebral perfusion, either alone or combined with inferior vena caval occlusion, does not provide sufficient cerebral capillary perfusion to confer any metabolic benefit. The slightly improved outcome previously reported with retrograde cerebral perfusion during prolonged circulatory arrest in this model may be a consequence of enhanced cooling resulting from perfusion of nonbrain capillaries and from venoarterial and venovenous shunting. abstract_id: PUBMED:1820016 Retrograde cerebral perfusion and deep hypothermic systemic circulatory arrest This study was carried out in order to find out whether retrograde cerebral perfusion has a beneficial effect on protection of the brain during deep hypothermic circulatory arrest. Six mongrel dogs weighing 11 to 18 kg were placed on hypothermic cardiopulmonary bypass. After induction of circulatory arrest, retrograde cerebral perfusion was implemented by using the bypass connecting the arterial and venous lines of the extracorporeal circuit to reverse flow into the cranial vena cava in 2 dogs, or by selective cannulation of the bilateral external jugular veins in the remaining 4 dogs. Retrograde cerebral perfusion was continued for 60 minutes. Each dog was rewarmed and weaned from cardiopulmonary bypass. During the experiment, electroencephalography and somatosensory evoked potentials were monitored. In one dog, India ink with gelatin was infused retrogradely into the bilateral retroarticular veins and the brain was subjected to microscopic examination of the distribution of the pigment. The protective effect of retrograde cerebral perfusion for the brain in comparison with deep hypothermic circulatory arrest alone was not demonstrated clearly by electroencephalography and somatosensory evoked potentials. The reason was that the external jugular veins had many functioning valves that obstructed retrograde perfusion and the internal jugular veins were rudimentary in dogs. In the dog with selective infusion of India ink into the retroarticular veins, however, a wide distribution of the pigment in subarachnoidal venules was observed. It is suggested that retrograde cerebral perfusion via a route selected according to the anatomical variations of the venous drainage from the brain is effective. It is also considered that retrograde cerebral perfusion via the superior vena cava is suitable for surgery of the aortic arch and may be beneficial.
abstract_id: PUBMED:33048802 Persistent left superior vena cava, absence of the right superior vena cava and tricuspid regurgitation The persistence of a left superior vena cava in the general population represents one of the most frequent thoracic venous return malformations; however, its association with absence of the right superior vena cava and tricuspid regurgitation is an extremely rare finding. abstract_id: PUBMED:9464609 Early experience of retrograde cerebral perfusion. Since 1990, retrograde cerebral perfusion has been applied in aortic arch surgery in the Nagoya University group to protect the brain. This study reviews the group's early clinical results, and especially the neurological outcome, in patients undergoing aortic arch surgery via mid sternotomy using retrograde cerebral perfusion only via the superior vena cava. Seventy-three cases (47 men, 26 women; mean age 62.3 (range 26-82 years)) participated in the study. True aneurysm was diagnosed in 17 cases and aortic dissection in 56. Emergency operations were performed in 49 cases (67%). The proximal aortic arch was replaced in 38 cases, the total aortic arch in 21, the distal arch in six, and the aortic root in two. Mean (s.d.) retrograde cerebral perfusion duration was 55(23) (range 12-115) min and superior vena cava flow rate 350(143) ml/min. Excluding four cases of early surgical death, a total of 10 patients (14.5%) showed neurological dysfunction. Symptoms were coma in eight cases and motor paralysis in two. Three of these 10 patients recovered without symptoms and six died. The early mortality rate was 19.2%. Significant differences in retrograde cerebral perfusion duration (49(20) versus 83(18) minutes, P < 0.001), superior vena cava pressure (23.2(7.2) versus 28.2(7.4) mmHg, P = 0.046), and preoperative cardiac arrest (P < 0.05) were evident between groups with and without neurological dysfunction. There were no neurological dysfunctions in patients undergoing retrograde cerebral perfusion for < 60 minutes at under 30 mmHg of superior vena cava pressure. In conclusion, retrograde cerebral perfusion may be used to extend the duration of safe cerebral circulatory arrest. abstract_id: PUBMED:38073687 Persistent left superior vena cava in right hemiarch replacement under deep hypothermic circulatory arrest: A case report. Background: Persistent left superior vena cava (PLSVC), a relatively rare thoracic vascular malformation, can inconvenience perfusionists and operators when encountered during deep hypothermic circulatory arrest (DHCA). Case Summary: Herein, we describe the case of a patient with concurrent giant aortic arch aneurysm, aortic stenosis, and PLSVC. To treat these conditions, we performed right hemiarch and aortic valve replacements under DHCA. Notably, we applied "bilateral superior vena cava retrograde cerebral perfusion (RCP)" for cerebral protection, which significantly optimized the surgical procedure and reduced the risk of postoperative complications. The patient was discharged 14 d after surgery with no complications. Conclusion: Surgical intervention for PLSVC under DHCA can be performed using the bilateral superior vena cava RCP approach. abstract_id: PUBMED:8076126 Cerebral death-like conditions after aortic aneurysm surgery using retrograde cerebral perfusion. A patient with Stanford type A acute aortic dissection who developed severe cerebral dysfunction after surgery using retrograde cerebral perfusion through the superior vena cava was examined.
The patient was able to regain normal brain function with no neurological deficit. Postoperative venography taken through the superior vena cava showed the presence of competent venous valves at the venous angles. This suggests that the valves interrupted cerebral perfusion, decreasing blood flow to less than that required. After consideration of the possible causes of brain damage incurred during surgery for aortic aneurysm using retrograde cerebral perfusion, it was concluded that this patient was able to recover normal brain function because the surgery was performed under profound hypothermia. abstract_id: PUBMED:8037512 Determination of organ blood flows during retrograde inferior vena caval perfusion. Cerebral, renal, and hepatic blood flows and oxygen metabolism were measured in 8 mongrel dogs undergoing hypothermic (20 °C) retrograde perfusion via the inferior vena cava (IVC) and compared with cardiopulmonary bypass and retrograde superior vena caval perfusion. Inferior vena caval perfusion was performed with aortic drainage and clamping of the superior vena cava at an IVC pressure of 20 or 30 mm Hg. Cerebral, renal, and hepatic blood flows at 30 mm Hg of IVC pressure were 7.5 ± 3.8, 8.1 ± 3.1, and 15.3 ± 5.5 mL·min⁻¹·100 g⁻¹, respectively, as determined by the hydrogen clearance method. Organ blood flows during retrograde IVC perfusion were 28%, 42%, and 57% of cardiopulmonary bypass values at a flow rate of 1,000 mL/min and 61%, 119%, and 131% of retrograde superior vena caval perfusion values at 30 mm Hg of superior vena caval pressure, respectively. Oxygen consumption was 7.4 ± 3.7 mL/min. At an IVC pressure of 20 mm Hg, cerebral, renal, and hepatic blood flows and oxygen consumption were 5.1 ± 2.7, 5.9 ± 4.1, and 11.6 ± 4.0 mL·min⁻¹·100 g⁻¹ and 3.0 ± 0.8 mL/min. As IVC pressure increased, cerebral, renal, and hepatic blood flows and oxygen consumption increased. However, high IVC pressure was associated with high portal venous pressure, which may produce ascites. Regional blood flow during retrograde IVC perfusion was measured by the colored microsphere method in another 8 normothermic dogs. Inferior vena caval perfusion at 30 mm Hg supplied adequate blood flow to the liver (15.44 ± 12.1 mL·min⁻¹·100 g⁻¹) and kidneys (6.35 ± 2.0 mL·min⁻¹·100 g⁻¹). (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:9338653 Retrograde cerebral perfusion provides limited distribution of blood to the brain: a study in pigs. Objective: The objective of this study was to investigate flow distribution during retrograde and antegrade cerebral perfusion with India ink as a marker. Methods: Ten pigs received cerebral perfusion with a solution containing 50% filtered India ink for 5 minutes either antegradely through both internal carotid arteries at a flow of 180 to 200 ml/min (n = 5) or retrogradely via the superior vena cava at a flow of 300 to 500 ml/min (n = 5). The brains were then fixed for quantitative measurement of the density of ink-filled capillaries (reported as a percentage of the total selected area). The assessment was done with the use of an in-house software program. Results: In the antegrade cerebral perfusion group, the intracranial arterial and venous systems were completely filled with ink. The gray matter was colored uniformly black, and light coloring was observed in the white matter.
During retrograde cerebral perfusion, the majority of ink was returned to the inferior vena cava, and only a small amount of ink was found in the innominate artery draining from the brain. Massive ink filling was observed in the sagittal sinus and other venous sinuses in all the pigs. Vessels on the surface of the brain and large vessels in the brain were also well filled with ink. However, only 10% of capillaries were filled with ink during retrograde cerebral perfusion relative to the number observed with antegrade cerebral perfusion. Conclusions: Retrograde cerebral perfusion supplies a limited amount of blood to brain tissue, which flows mainly through superficial and large deep cerebral vessels. Answer: Retrograde cerebral perfusion (RCP) via superior vena cava cannulation is a method used to potentially protect the brain during procedures that require hypothermic circulatory arrest, such as repairs of aortic dissections. However, the effectiveness of this technique in providing adequate cerebral protection is debated. Studies have shown that anatomically and functionally competent valves in the proximal internal jugular vein (IJV) are usually present in humans, which can obstruct direct retrograde flow to the intracranial venous system, suggesting that RCP may not guarantee balanced and reliable perfusion of the brain (PUBMED:17071098). This could mean that while RCP might assist in flushing out emboli and supporting the brain's 'cold jacket', it may not be indicative of adequate cerebral perfusion. In a case involving a patient with a persistent left superior vena cava, RCP was used for cerebral protection during repair of a type A aortic dissection, with bilateral monitoring of electroencephalogram and somatosensory-evoked potentials, indicating that the technique can be adapted for use in patients with unusual venous anatomy (PUBMED:10892956). However, a study in pigs demonstrated that RCP provides negligible flow through brain capillaries, suggesting that it does not confer significant metabolic benefit to the brain. The study found that less than 13% of retrograde superior vena caval inflow blood returned to the aortic arch, and more than 90% was shunted to the inferior vena cava (PUBMED:11479507). Another study in dogs indicated that the effectiveness of RCP might depend on the anatomical variations of the venous drainage from the brain, as the external jugular veins had many functioning valves that obstructed retrograde perfusion (PUBMED:1820016). In clinical practice, RCP has been used in aortic arch surgery with varying results. Some patients have experienced neurological dysfunction postoperatively, suggesting that the duration and pressure of RCP may need to be carefully managed to avoid complications (PUBMED:9464609). In conclusion, while RCP via superior vena cava cannulation is used with the intention of protecting the brain during certain surgical procedures, its actual effectiveness in ensuring adequate cerebral perfusion is uncertain and may be limited. The technique's efficacy may be influenced by individual anatomical variations and the specific parameters of the perfusion (PUBMED:17071098, PUBMED:11479507, PUBMED:1820016, PUBMED:9464609).
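The flow figures in PUBMED:11479507 come from the reference-sample microsphere method: tissue and a reference blood sample trap microspheres in proportion to the flow each receives. The sketch below reproduces that arithmetic with invented counts chosen to land near the reported order of magnitude (tiny absolute flow, roughly 0.01% of superior vena caval inflow); none of the numbers are the study's raw data.

```python
# Hedged sketch of reference-sample microsphere arithmetic (invented counts).
def organ_flow_per_100g(organ_spheres, ref_spheres, ref_withdrawal_ml_min, organ_mass_g):
    """Organ blood flow in mL/min/100 g from trapped microsphere counts."""
    flow_ml_min = (organ_spheres / ref_spheres) * ref_withdrawal_ml_min
    return flow_ml_min * 100.0 / organ_mass_g

brain_mass_g = 100.0
brain_flow = organ_flow_per_100g(
    organ_spheres=180, ref_spheres=60_000,
    ref_withdrawal_ml_min=10.0, organ_mass_g=brain_mass_g,
)
print(f"brain capillary flow ~ {brain_flow:.2f} mL/min/100 g")

svc_inflow_ml_min = 350.0  # retrograde superior vena cava inflow
total_brain_flow = brain_flow * brain_mass_g / 100.0  # convert back to mL/min
print(f"~{100 * total_brain_flow / svc_inflow_ml_min:.3f}% of SVC inflow reaches brain capillaries")
```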
Instruction: Is spa therapy cost-effective in rheumatic disorders? Abstracts: abstract_id: PUBMED:9574474 Is spa therapy cost-effective in rheumatic disorders? Objective: To evaluate the cost-effectiveness of spa therapy for rheumatic diseases and sequelae of bone and joint injuries. Patients And Methods: A prospective randomized, case-control design was used with health care cost reimbursements during the year before and after spa therapy as the measure of efficacy. Spa therapy is reimbursed by the national health insurance system in France, subject to prior agreement. The 220 patients who filed applications for spa therapy with the national health insurance payment center for northern France during the first half of 1994 were included in the study. Seventy-two patients met national health insurance system criteria for spa therapy (Group 1), the remaining 148 patients were assigned at random to spa therapy (Group 2, n = 74) or no spa therapy (Group 3, n = 74). This method allowed us to avoid selection bias during recruitment of our control group. Results: Perceived morbidity was evaluated based on health care utilization, expressed as numbers of health care investigations or procedures and as days of medication use in mean recommended dosages. Health care cost reimbursements by the national health insurance system failed to decrease during the year after spa therapy, and increased in those patients who had already received spa therapy on one or more occasions. Only in the first-ever spa patients were decreases in health care utilization in the area of rheumatology seen during the year after spa therapy; the largest decreases were for use of nonsteroidal antiinflammatory drugs (from 49 to 26 days) and muscle relaxants. No differences in health care utilization were found in patients who had received spa therapy on at least one other occasion. Also, when criteria for spa therapy were not met, no differences in health care utilization were found between patients who did and did not receive spa therapy. Conclusion: These data suggest that current national health insurance policies regarding the reimbursement of spa therapy need to be reevaluated. abstract_id: PUBMED:26156832 May spa therapy be a valid opportunity to treat hand osteoarthritis? A review of clinical trials and mechanisms of action. Osteoarthritis (OA) is the most common form of arthritis and its current treatment includes non-pharmacological and pharmacological modalities. Spa therapy represents a popular treatment for many rheumatic diseases. The aim of this review was to summarize the currently available information on clinical effects and mechanisms of action of spa therapy in OA of the hand. We conducted a search of the literature to extract articles describing randomized clinical trials (RCTs) in hand OA published in the period 1952-2015. We identified three assessable articles reporting RCTs on spa therapy in hand OA. Data from these clinical trials support a beneficial effect of spa therapy on pain, function and quality of life in hand OA. Spa therapy seems to have a role in the treatment of hand OA. However, additional RCTs are necessary to clarify the mechanisms of action and the effects of the application of thermal treatments. abstract_id: PUBMED:37444733 The Potential Role of Spa Therapy in Managing Frailty in Rheumatic Patients: A Scoping Review. Frailty is not limited to the elderly, as patients with rheumatic diseases can also experience this condition. 
The present scoping review aimed to investigate the possibility of using the health resort setting as an alternative location for managing rheumatic patients with frailty. The research resulted in finding several in vitro, in vivo, and clinical studies, resulting in evidence supporting the effectiveness of spa treatments in reducing pain, improving function, and managing comorbidity in rheumatic diseases. Additionally, spa treatments were demonstrated to modulate the MAPK/ERK pathway and the NF-kB pathway's activation and to reduce proinflammatory molecules' secretion in rheumatic diseases, thus suggesting their potential effective role in the regulation of inflammaging in frailty. Moreover, the health resort setting may offer potential resources to reduce risk factors, such as drug consumption, inactivity, and disease severity, and may serve as a setting for developing prevention protocols for frailty. Future research should explore innovative approaches, such as exercise training and early diagnostics, for the overall management of frailty in rheumatic patients in the spa setting. abstract_id: PUBMED:28560466 Real-life effectiveness of spa therapy in rheumatic and musculoskeletal diseases: a retrospective study of 819 patients. The objective of this study is to determine the use and efficacy of spa therapy in patients with a wide spectrum of rheumatic and musculoskeletal diseases under real-life clinical practice circumstances. In this retrospective observational study at the Medical Ecology and Hydroclimatology Department of Istanbul Faculty of Medicine, the records of all adult patients with rheumatic and musculoskeletal diseases who were prescribed a spa therapy in various health resorts in Turkey between 2002 and 2012 were analyzed. Patients sojourned to and stayed at a health resort and followed a usual 2-week course of spa therapy. The patients were examined within a week before and after the spa therapy at the department by the physicians and outcome measures were pain intensity (visual analog scale, VAS), patient's general evaluation (VAS), physician's general evaluation (VAS), Health Assessment Questionnaire (HAQ), Lequesne's Functional Index (LFI), Western Ontario and McMaster Universities Index (WOMAC), Waddell Index (WI), Neck Pain and Disability Scale (NPDS), Shoulder Disability Questionnaire (SDQ), Fibromyalgia Impact Questionnaire (FIQ), and Beck's Depression Inventory (BDI). In total, 819 patients were included in the analysis. The diagnoses were 536 osteoarthritis; 115 fibromyalgia; 50 lumbar disc herniation; 34 cervical disc herniation; 23 nonspecific low back pain; 22 ankylosing spondylitis; 16 rheumatoid arthritis; 9 rotator cuff tendinitis; and 14 other conditions/diseases including scoliosis, stenosing flexor tenosynovitis, congenital hip dislocation in adult, Behçet's disease, de Quervain tendinopathy, psoriatic arthritis, osteoporosis, fracture rehabilitation, and diffuse idiopathic skeletal hyperostosis. Statistically significant decrease in pain scores was found in all patients except hip osteoarthritis (p = 0.063) and rheumatoid arthritis (p = 0.134) subgroups; and statistically significant improvement in function in all patients except hip osteoarthritis (p = 0.068), rheumatoid arthritis (p = 0.111), and rotator cuff tendinitis (p = 0.078) subgroups. 
In daily clinical practice, spa therapy is prescribed and practiced mainly for osteoarthritis, then fibromyalgia, lumbar/cervical disc herniation, and nonspecific low back pain; and less for ankylosing spondylitis, rheumatoid arthritis, and rotator cuff tendinitis. The study results suggest that real-life spa therapy may be effective in a variety of rheumatic and musculoskeletal diseases by improving pain and function. abstract_id: PUBMED:15004446 Balneotherapy and spa therapy of rheumatic diseases in Turkey: a systematic review Aim: Turkey has a lot of thermal and mineral springs and is looking back on a still vivid tradition of spa therapy and balneotherapy, applied especially for the treatment of rheumatic diseases. This tradition is predominantly empiric and intuitive, however, it has assumed some important aspects of modern balneotherapeutic methods as well. This article is aimed at presenting the characteristics of traditional and modern balneological and spa therapy forms in Turkey. Method: The studies which have been conducted between 1990 and 2000 in different spas in Turkey on the efficacy and effectiveness of spa therapy and balneotherapy for rheumatic diseases have been searched and analyzed independent of their design. A descriptive evaluation of the studies was carried out. Results: A total of 15 published studies have been found and analyzed. The investigations have been carried out in 8 different spa resorts in Turkey. In these studies the effectiveness and efficacy of different balneological and spa therapies on a variety of rheumatic diseases (from osteoarthritis to fibromyalgia and from rheumatoid arthritis to low back pain) could be shown. Nearly all studied balneotherapeutic modalities were applied as bathing cures. Only in one study peloid therapy was applied. Balneotherapeutic therapy was applied in a modern and traditional way, and both open and stationary spa therapy forms were used at the same time. Conclusions: The review has shown the effectiveness of the investigated spa therapy and balneotherapy forms. It could be concluded that nearly all forms of spa therapy and balneotherapy used for the treatment of rheumatic diseases in Turkey are effective. A definitive conclusion, however, is not possible because of the heterogeneity of the study designs, methodological flaws, and the publication bias. In future good quality randomized controlled trials are needed. abstract_id: PUBMED:18998040 New evidences on spa therapy in fibromyalgia Spa therapy is one of the most commonly used non-pharmacological approaches for many rheumatic diseases. In Fibromyalgia Syndrome (FS) it may be useful for the chronic widespread musculoskeletal pain. Because of the unknown aetiology and the not clear understood pathogenesis, there is no standard therapy regimen for FS. Also the mechanisms of action of spa therapy are not completely known, but most probably the benefits could be derived from mechanical, physical and chemical factors. Muscle tone and pain intensity can be positively influenced by mud packs and thermal baths. The review of international data from 2000 to 2007 confirms that spa therapy should be a valid tool in the multidisciplinary approach of the Primary FS. abstract_id: PUBMED:15257412 Hydrotherapy, balneotherapy, and spa treatment in pain management. The use of water for medical treatment is probably as old as mankind. 
Until the middle of the last century, spa treatment, including hydrotherapy and balneotherapy, remained popular but went into decline especially in the Anglo-Saxon world with the development of effective analgesics. However, no analgesic, regardless of its potency, is capable of eliminating pain, and reports of life-threatening adverse reactions to the use of these drugs led to renewed interest in spa therapy. Because of methodologic difficulties and lack of research funding, the effects of 'water treatments' in the relief of pain have rarely been subjected to rigorous assessment by randomised, controlled trials. It is our opinion that the three therapeutic modalities must be considered separately, and this was done in the present paper. In addition, we review the research on the mechanism of action and cost effectiveness of such treatments and examine what research might be useful in the future. abstract_id: PUBMED:37361027 Improvement of Pain Symptoms in Musculoskeletal Diseases After Multimodal Spa Therapy in the Austrian Gastein Valley-A Study Based on Longitudinal Registry Data. Objective: The study aim was to investigate the course of pain in rest and motion in seven different rheumatic diseases (RMD), prior and after multimodal spa therapy including low-dose radon treatment and at 3-, 6-; and 9-month follow up. Methods: Complete data from the radon indication registry including information on 561 subjects with RMD were analysed to explore the association of timepoint of measurement with pain in rest and motion. For this purpose, linear regression models adjusted for RMD-type, age, sex and body mass index (BMI) were applied. Results: The mean age of the sample was 55 years, the average body mass index was 26.8, and 275 subjects were women. Pain scores were significantly improved at all-time points compared to baseline. Pain courses were different for each RMD with the largest improvement seen in fibromyalgia. Conclusion: Timing spa facility visits according to RMD-specific pain courses may result in sustained pain reduction. abstract_id: PUBMED:16338194 The effect of combined therapy (spa and physical therapy) on pain in various chronic diseases. Objective: Spa therapy is commonly used in the treatment of daily chronic diseases practice, but its benefits are still the subjects of discussion. This study investigates possible effects of a combined spa and physical therapy program on pain and hemodynamic responses in various chronic diseases. Methods: The pain intensity and hemodynamic responses of 472 patients involved in a spa and physical therapy program were studied retrospectively. Assessment criteria were pain [Visual Analog Scale (VAS)] and hemodynamic responses (heart rate, blood pressure, respiratory rate). Assessments took place before, immediately after treatment, and after completion of the spa program (before discharge). Results: The patients with ankle arthrosis, fibromyalgia and cervical disc herniation reported the highest VAS score before treatment program (P < 0.05). After the therapy program, VAS scores were seen to decrease compared to before treatment (P < 0.05). The patients with osteoarthritis of the hip (1.3+/-1.2) and soft tissue rheumatism (1.3+/-1.2) had the lowest VAS score before discharge compared to patients with other pathologies (P < 0.05). No statistically significant differences were detected between both sexes in terms of pain improvement (P > 0.05). 
On discharge, all hemodynamic responses decreased significantly compared to before and immediately after initiation of the therapy program (P < 0.01). Conclusion: To decrease pain and high blood pressure without hemodynamic risk, a combined of spa and physical therapy program may help to decrease pain and improve hemodynamic response in patients with irreversible pathologies. abstract_id: PUBMED:10573764 The role of spa therapy in various rheumatic diseases. Spa therapy seems to have a role in the treatment of a broad range of joint diseases. It cannot substitute for conventional therapy but can complement it. The improvement reported in some of the studies is of short duration, lasting for months. It should be considered for patients suffering from various types of inflammatory arthritides or noninflammatory arthritides who are symptomatic, despite accepted medical therapy and conservative physiotherapy, if they can afford the expense. The patients should be told that the effectiveness and success of this therapy cannot be predicted in advance. Because we have no way to date, of curing most rheumatic diseases, clinical trials of alternative therapeutic methods are justified. These methods may alleviate patient suffering and are almost totally devoid of serious adverse effects. No studies have been reported that evaluate their cost-effectiveness. Answer: The cost-effectiveness of spa therapy in rheumatic disorders is not conclusively determined based on the provided abstracts. Some studies suggest potential benefits, while others indicate the need for reevaluation of current policies regarding reimbursement. A study evaluating the cost-effectiveness of spa therapy for rheumatic diseases in France found that health care cost reimbursements by the national health insurance system did not decrease after spa therapy. The study observed decreases in health care utilization only in first-ever spa patients, particularly in the use of nonsteroidal anti-inflammatory drugs and muscle relaxants. However, no differences in health care utilization were found in patients who had received spa therapy on at least one other occasion or when criteria for spa therapy were not met (PUBMED:9574474). Conversely, other studies report beneficial effects of spa therapy on pain, function, and quality of life in hand osteoarthritis (PUBMED:26156832), and suggest that spa therapy may be effective in a variety of rheumatic and musculoskeletal diseases by improving pain and function (PUBMED:28560466). Additionally, spa therapy has been shown to be effective in the treatment of various rheumatic diseases in Turkey (PUBMED:15004446), and it is considered a valid tool in the multidisciplinary approach of fibromyalgia (PUBMED:18998040). A study based on longitudinal registry data from the Austrian Gastein Valley reported significant improvements in pain symptoms in various rheumatic diseases after multimodal spa therapy (PUBMED:37361027). Another study found that a combined spa and physical therapy program could decrease pain and improve hemodynamic response in patients with irreversible pathologies (PUBMED:16338194). However, the role of spa therapy in rheumatic diseases is seen as complementary to conventional therapy, and its effectiveness and success cannot be predicted in advance. The improvement reported in some studies is of short duration, lasting for months, and no studies have been reported that evaluate their cost-effectiveness (PUBMED:10573764). 
In summary, while there is evidence supporting the effectiveness of spa therapy in improving symptoms of rheumatic diseases, the cost-effectiveness is not clearly established, and further research is needed to evaluate the economic impact and long-term benefits of spa therapy in this context.
Instruction: Is there gender bias in JAMA's peer review process? Abstracts: abstract_id: PUBMED:35640347 Peer parent programs in child welfare: A systematic review. Background: Mentoring, specifically peer mentoring, emerged in the child welfare setting in the early 2000s. Peer parent programs provide child welfare involved families a unique opportunity to connect with parents that have successfully navigated the child welfare system and who share similar lived experiences. No systematic review has been conducted to summarize findings associated with parental participation in peer parent programs and associated child welfare case outcomes. Objective: This study systematically summarizes the effectiveness of peer parent programs in child welfare on case outcomes, specifically permanency, time-in-care, and re-entry post reunification. Methods: Four citations were identified for review from seven academic databases. A data template was utilized to extract the following information from each study: location, target of intervention, research design, setting of intervention, intervention description, peer parent criteria, inclusion and exclusion criteria for participants, sample size, statistical tests, outcomes assessed, and key findings. Results: Several peer parent programs have been developed to support and empower child welfare involved families with the aim of improving case outcomes. All of the peer parent programs reviewed had a positive impact on reunification rates; that is, parents who participated in a peer mentor program were more likely to reunify with their child(ren) when compared to non-participants. Results regarding time-in-care and re-entry post reunification at 12 and 24 months were mixed. Conclusions: Child welfare agencies should consider developing peer parent programs as they appear to be a promising intervention to support parents and increase positive child welfare outcomes. abstract_id: PUBMED:29862368 Evaluations of the behavioral attributes of only children in Beijing, China: moderating effects of gender and the one-child policy. The purpose of this study was to explore whether evaluations of the behavioral attributes of only children in Beijing differed from evaluations of children with siblings, and how these evaluations were affected by gender and China's One-Child Policy (OCP). This study applies hierarchical linear regression analyses to data collected from children born before or after the initiation of the OCP. The participants (N = 1000) were randomly selected schoolchildren whose behavioral attributes were evaluated by the children themselves, their peers, parents, and teachers, using a 32 attributes checklist, consisting of attributes Chinese experts considered important for school-aged children. In addition, a difference score, representing the difference between self and peer evaluations, was considered in order to assess degrees of self-enhancement. The results indicated that male only children received less positive self, peer, parent, and teacher evaluations than female only children and that among children born before the OCP, only children evaluated themselves less positively than their peers with siblings. Parents evaluated their only children born after the OCP more positively than did parents of only children born before the OCP. In terms of self-enhancement, only children, particularly male only children, evaluated themselves more positively than they were evaluated by their peers. 
These findings are discussed in terms of the major social and cultural changes happening in China since the OCP that affected how only children saw themselves and were seen by others. abstract_id: PUBMED:10446768 Trends in adolescent alcohol and other substance use: relationships to trends in peer, parent, and school influences. The aim of this study is to improve understanding of trends in adolescent substance use by comparing them with trends in peer substance use, school problem behavior, parental disapproval of alcohol and drugs, and exposure to school prevention information. These trends were determined using data from three large comparable representative surveys of 7-12th grade students in New York State conducted in 1983, 1990, and 1994. Analysis of variance was used to test the significance of the trends and to identify meaningful differences in trends by demographic subgroups (gender, grace level, and ethnicity). Adolescent alcohol and substance use declined in the 1980s, then increased from 1990 to 1994. Trends in friend's substance use and school problem behavior paralleled the alcohol and other substance use trends, consistent with their being part of the same adolescent problem behavior syndrome. Parental disapproval also followed a trend consistent with the substance use trends, i.e., parental disapproval increased in the 1980s but then decreased in the 1990s. However, the trend in school prevention influences did not parallel these substance use trends: student exposure to school primary prevention programs continued to increase from 1990 to 1994. Use of alcohol, cigarettes, marijuana, and other illicit drugs from 1990 to 1994, increased fastest among the younger students, despite increased school prevention exposure. Other factors must be sought as possible explanations of the increasing alcohol and substance use among adolescents. abstract_id: PUBMED:23782478 Understanding the mental health of youth living with perinatal HIV infection: lessons learned and current challenges. Introduction: Across the globe, children born with perinatal HIV infection (PHIV) are reaching adolescence and young adulthood in large numbers. The majority of research has focused on biomedical outcomes yet there is increasing awareness that long-term survivors with PHIV are at high risk for mental health problems, given genetic, biomedical, familial and environmental risk. This article presents a review of the literature on the mental health functioning of perinatally HIV-infected (PHIV+) adolescents, corresponding risk and protective factors, treatment modalities and critical needs for future interventions and research. Methods: An extensive review of online databases was conducted. Articles including: (1) PHIV+ youth; (2) age 10 and older; (3) mental health outcomes; and (4) mental health treatment were reviewed. Of 93 articles identified, 38 met inclusion criteria, the vast majority from the United States and Europe. Results: These studies suggest that PHIV+ youth experience emotional and behavioural problems, including psychiatric disorders, at higher than expected rates, often exceeding those of the general population and other high-risk groups. Yet, the specific role of HIV per se remains unclear, as uninfected youth with HIV exposure or those living in HIV-affected households displayed similar prevalence rates in some studies, higher rates in others and lower rates in still others. 
Although studies are limited with mixed findings, this review indicates that child-health status, cognitive function, parental health and mental health, stressful life events and neighbourhood disorder have been associated with worse mental health outcomes, while parent-child involvement and communication, and peer, parent and teacher social support have been associated with better function. Few evidence-based interventions exist; CHAMP+, a mental health programme for PHIV+ youth, shows promise across cultures. Conclusions: This review highlights research limitations that preclude both conclusions and full understanding of aetiology. Conversely, these limitations present opportunities for future research. Many PHIV+ youth experience adequate mental health despite vulnerabilities. However, the focus of research to date highlights the identification of risks rather than positive attributes, which could inform preventive interventions. Development and evaluation of mental health interventions and preventions are urgently needed to optimize mental health, particularly for PHIV+ youth growing up in low-and-middle income countries. abstract_id: PUBMED:24853361 The use of an ecodevelopmental approach to examining substance use among rural and urban Latino/a youth: peer, parental, and school influences. Using an ecodevelopmental framework, we examined how peer, parent, and student variables influence substance (tobacco, alcohol, and marijuana) use among rural and urban Latino/a adolescents (N = 2,500). Generally speaking, Latino/a adolescents in rural areas endorsed higher levels of substance use. Among the primary variables of study, there were a number of nuanced differences noted based on location, gender, and type of substance. Peer use was related to individual substance use in both rural and urban areas. However, peer use was a significantly stronger predictor of tobacco use among rural Latinas than urban dwelling. Parental monitoring was not predictive of urban marijuana use, yet was negatively associated with substance use for all subgroups and was especially pronounced for female alcohol use. Parental emotional involvement predicted higher alcohol use among urban boys. School achievement was negatively associated with substance use for all subgroups while, conversely, school involvement was associated with higher alcohol use for rural boys. Cultural and contextual implications for intervention and prevention are discussed. abstract_id: PUBMED:20855170 Peer, parent and media influences on adolescent smoking by developmental stage. Previous studies of social influences on adolescent smoking have focused on peers and parents, using data collected prior the 1998 Master Settlement Agreement. This study used the 2004 wave of the National Youth Tobacco Survey to examine associations between peer smoking, smoking at home, tobacco-related media exposure, and smoking behavior during early and middle adolescence. Findings indicate that peer smoking and smoking at home remain strongly associated with current smoking among early and middle adolescents, controlling for gender, race/ethnicity and exposure to tobacco industry and anti-tobacco media. The magnitude of the association between peer smoking and current smoking decreases from early adolescence to middle adolescence while the association between smoking at home and current smoking is static across developmental stage. Exposure to tobacco-related media is associated with increased current and former smoking in both early and middle adolescence. 
abstract_id: PUBMED:29562946 Allocation of attention to scenes of peer harassment: Visual-cognitive moderators of the link between peer victimization and aggression. As approximately one-third of peer-victimized children evidence heightened aggression (Schwartz, Proctor, & Chien, 2001), it is imperative to identify the circumstances under which victimization and aggression co-develop. The current study explored two potential moderators of victimization-aggression linkages: (a) attentional bias toward cues signaling threat and (b) attentional bais toward cues communicating interpersonal support. Seventy-two fifth- and sixth-grade children (34 boys; Mage = 11.67) were eye tracked while watching video clips of bullying. Each scene included a bully, a victim, a reinforcer, and a defender. Children's victimization was measured using peer, parent, and teacher reports. Aggression was measured using peer reports of overt and relational aggression and teacher reports of aggression. Victimization was associated with greater aggression at high levels of attention to the bully. Victimization was also associated with greater aggression at low attention to the defender for boys, but at high attention to the defender for girls. Attention to the victim was negatively correlated with aggression regardless of victimization history. Thus, attentional biases to social cues integral to the bullying context differentiate whether victimization is linked to aggression, necessitating future research on the development of these biases and concurrent trajectories of sociobehavioral development. abstract_id: PUBMED:16488828 The role of peer, parent, and culture in risky sexual behavior for Cambodian and Lao/Mien adolescents. Purpose: The purpose of this study was to investigate the role of age, gender, peer, family, and culture in adolescent risky sexual behavior for Cambodian and Laotian (Lao)/Mien youth. Methods: We obtained cross-sectional, in-home interview data including measures of individualism, collectivism, acculturation, risky sexual behavior, peer delinquency, parent engagement, and parent discipline from a sample of mostly second-generation Cambodian (n = 112) and Lao/Mien (n = 67) adolescents. Data were analyzed using step-wise, hierarchical multiple regressions. Results: Peer delinquency and age (older) were significant predictors of risky sexual behavior in both groups. Parent discipline also significantly predicted risky sexual behavior, but only for Lao/Mien adolescents. Vertical and horizontal individualism were associated positively with risky sexual behavior for Cambodian youth whereas collectivism (horizontal) was associated negatively with risky sexual behavior for Lao/Mien youth. Acculturation was nonsignificant in both groups. Conclusions: In addition to age, parents, and peer groups, the findings suggest that culture also matters in risky sexual behavior, particularly for Cambodian and Laotian youth. abstract_id: PUBMED:15209506 Saccharomyces cerevisiae Hsp104 enhances the chaperone capacity of human cells and inhibits heat stress-induced proapoptotic signaling. Hsp104, the most potent thermotolerance factor in Saccharomyces cerevisiae, is an unusual molecular chaperone that is associated with the dispersal of aggregated, non-native proteins in vivo and in vitro. The close cooperation between Hsp100 oligomeric disaggregases and specific Hsp70 chaperone/cochaperone systems to refold and reactivate heat-damaged proteins has been dubbed a "bichaperone network". 
Interestingly, animal genomes do not encode a Hsp104 ortholog. To investigate the biochemical and biological consequences of introducing into human cells a stress tolerance factor that has protein refolding capabilities distinct from those already present, Hsp104 was expressed as a transgene in a human leukemic T-cell line (PEER). Hsp104 inhibited heat-shock-induced loss of viability in PEER cells, and this action correlated with reduced procaspase-3 cleavage but not with reduced c-Jun N-terminal kinase phosphorylation. Hsp104 cooperated with endogenous human Hsp70 and Hsc70 molecular chaperones and their J-domain-containing cochaperones Hdj1 and Hdj2 to produce a functional hybrid bichaperone network capable of refolding aggregated luciferase. We also established that Hsp104 shuttles across the nuclear envelope and enhances the chaperoning capacity of both the cytoplasm and nucleoplasm of intact cells. Our results establish the fundamental properties of protein disaggregase function in human cells with implications for the use of Hsp104 or related proteins as therapeutic agents in diseases associated with protein aggregation. abstract_id: PUBMED:11134434 Peer, parent, and media influences on the development of weight concerns and frequent dieting among preadolescent and adolescent girls and boys. Objective: To assess prospectively the influence of peers, parents, and the media on the development of weight concerns and frequent dieting. Design: Prospective cohort study. Setting: Questionnaires mailed annually to participants throughout the United States. Participants: One-year follow-up of 6770 girls and 5287 boys who completed questionnaires in 1996 and 1997 and were between 9 and 14 years of age in 1996. Main Outcome Measure: Onset of high levels of concern with weight and dieting frequently to control weight. Results: During 1 year of follow-up, 6% of girls and 2% of boys became highly concerned with weight and 2% of girls and 1% of boys became constant dieters. Peer influence was negligible. Independent of age and body mass index, both girls (odds ratio [OR]): 1.9; 95% confidence interval [CI]: 1.1-3.1) and boys (OR: 2.7; 95% CI: 1.1-6. 4) who were making a lot of effort to look like same-sex figures in the media were more likely than their peers to become very concerned with their weight. Moreover, both girls (OR: 2.3; 95% CI: 1.1-5.0) and boys (OR: 2.6; 95% CI: 1.1-6.0) who reported that their thinness/lack of fat was important to their father were more likely than their peers to become constant dieters. Conclusions: Our results suggest that parents and the media influence the development of weight concerns and weight control practices among preadolescents and adolescents. However, there are gender differences in the relative importance of these influences. Answer: The provided abstracts do not contain information regarding the peer review process of the Journal of the American Medical Association (JAMA) or any potential gender bias within it. 
The abstracts focus on various topics such as the effectiveness of peer parent programs in child welfare (PUBMED:35640347), evaluations of behavioral attributes of only children in Beijing (PUBMED:29862368), trends in adolescent substance use (PUBMED:10446768), mental health of youth living with perinatal HIV infection (PUBMED:23782478), substance use among Latino/a youth (PUBMED:24853361), influences on adolescent smoking (PUBMED:20855170), attention allocation to scenes of peer harassment (PUBMED:29562946), risky sexual behavior among Cambodian and Lao/Mien adolescents (PUBMED:16488828), the chaperone capacity of human cells and heat stress-induced proapoptotic signaling (PUBMED:15209506), and influences on the development of weight concerns and frequent dieting among preadolescents and adolescents (PUBMED:11134434). To answer the question about gender bias in JAMA's peer review process, one would need to look at studies specifically examining the editorial and peer review practices of the journal, including any analyses of acceptance rates, reviewer composition, or editorial decisions broken down by the gender of the authors submitting manuscripts. This information is not present in the provided abstracts.
Instruction: Does loop electrosurgical excision procedure of the uterine cervix affect anti-Müllerian hormone levels? Abstracts: abstract_id: PUBMED:24707500 Does loop electrosurgical excision procedure of the uterine cervix affect anti-Müllerian hormone levels? Background: A delayed time to pregnancy was recently reported for women who had a loop electrosurgical excision procedure (LEEP) to remove cervical intraepithelial neoplasia (CIN) grade 2 or 3. The objective of the current study was to determine if treatment of CIN with LEEP is associated with decreased levels of anti-Müllerian hormone (AMH), a marker of ovarian reserve. Methods: AMH levels were measured in 18 women treated with LEEP and 18 age-matched controls, who had colposcopy only and did not require LEEP. Cases and controls had their blood drawn at study entry time zero and again 6 months later. Results: The mean AMH level decreased significantly from baseline to follow-up; however, no significant differences were observed when stratifying by LEEP status, suggesting that both groups experienced a similar decrease in AMH levels during the follow-up period. Although women treated with LEEP had lower overall AMH levels than controls at both baseline and follow-up, these differences were not statistically significant. Conclusion: Overall, the delayed time to pregnancy observed in women treated with LEEP is likely not due to a LEEP-associated decrease in ovarian reserve as measured by AMH; thus, other mechanism are responsible for the delayed time to pregnancy associated with LEEP. abstract_id: PUBMED:36564265 Reducing blood loss during laparoscopic myomectomy using a tourniquet loop around the lower uterine segment. Objective: To present a simple and effective hemostatic technique using a tourniquet loop during laparoscopic myomectomy. Design: Pericervical tourniquet has been proven to be a safe and effective measure to reduce blood loss during open myomectomy. However, the use of a tourniquet in laparoscopic myomectomy has been rarely reported probably because the application is difficult and troublesome. In our technique, a prefabricated tourniquet loop, adapted from a Foley catheter, is applied around the lower segment of the uterus. It is easy to apply a tourniquet loop around the lower uterine segment during laparoscopic myomectomy. There is no need to make a window in the broad ligament to apply a pericervical tourniquet or triple tourniquets. Meanwhile, complete blockage of blood supply from the uterine artery and utero-ovarian anastomoses may ensure better hemostasis. Setting: A tertiary hospital. Patient(s): The patient was a 34-year-old woman with uterine leiomyoma and a desire for future fertility. She had been suffering from urinary frequency and chronic bladder pressure for the past 6 months. Magnetic resonance imaging confirmed 2 intramural masses measuring 96 mm × 91 mm and 25 mm × 13 mm at the anterior uterine wall. Intervention(s): Institutional review board and ethics committee approval was obtained. 
Laparoscopic myomectomy was performed with the application of a tourniquet loop around the lower segment of the uterus (step-by-step video demonstration): homemade tourniquet loop formation using a 14-Fr latex Foley catheter; trocar placement with 2 umbilical ports (10 mm and 5 mm) and a 5-mm port at the lower-left quadrant of the abdomen; application of a tourniquet loop around the lower uterine segment; tumor enucleation and myometrial closure; removal of the tourniquet loop and a check for bleeding; contained specimen extraction via the merged umbilical incision; and inspection of the abdominal cavity and closure of the merged umbilical incision. Main Outcome Measure(s): Feasibility of using a tourniquet loop as an effective hemostatic technique in laparoscopic myomectomy. Result(s): The surgery lasted for approximately 90 minutes, and the tourniquet time was approximately half an hour. The estimated blood loss was only 20 mL. Her hemoglobin value on day 1 after the surgery was 131 g/L, the same as the preoperative level. Pathology confirmed the diagnosis of leiomyoma. The patient was discharged 2 days after the surgery with no complications. During follow-up, the patient reported that there was no discomfort and that her menses were normal. Her fallopian tubes were patent in the hysterosalpingogram. Her ovarian function, which was assessed by serum follicle-stimulating hormone concentration (5.34 mIU/mL) on day 3 of her menstrual cycle and antimüllerian hormone level (2.01ng/mL), was in the normal range. She was suggested to conceive 1 year after the procedure. Conclusion(s): Application of a tourniquet loop around the lower uterine segment is a simple and effective hemostatic technique during laparoscopic myomectomy. Randomized prospective studies are needed to determine the hemostatic effect of the laparoscopic use of a tourniquet loop and its impact on fertility and ovarian function. abstract_id: PUBMED:28876143 Anti-Müllerian hormone levels before and after uterine artery embolization. Objective: To determine the effects of uterine artery embolization (UAE) on ovarian reserve as measured by Anti-Müllerian hormone (AMH) levels. Material And Methods: Non-randomized, observational study of 89 women 23-40 years of age who received UAE. Control hormone levels were measured prior to UAE and the first post-embolization measurement was taken at various times post-procedure (mean = 190 ± 229 days). Results: Historical work verified by our earlier work has shown that AMH levels decline with age. Regression analysis allows us to determine whether UAE contributes to a greater decline in AMH values over that naturally occurring with aging. The effect of the procedure was found to contribute no deleterious effect to the natural decline in AMH levels. In addition, multiple blood draws were obtained from 32 patients up to 47 months post-UAE. Regression studies with these patients as their own controls showed no long-term diminishment of ovarian reserve due to the UAE procedure. Conclusions: Earlier reported data are consistent with larger sample size. UAE does not affect ovarian reserve in women <40 as evidenced by no significant change in AMH levels after embolization. Women who are of reproductive age and have uterine fibroids can consider UAE without concern for adverse effects on their fertility. abstract_id: PUBMED:33926340 The impact on Anti-Mullerian Hormone (AMH), uterine fibroid size and uterine artery patency following Uterine Fibroid Embolization (UFE) with a resorbable embolic agent. 
The effect of Uterine Fibroid Embolization on fertility and ovarian reserve remains uncertain. We assessed the impact of a new resorbable, spherical particle (Gelbead) on concentration of Anti-Mullerian (AMH) hormone, fibroid volume and uterine artery patency. This prospective cohort study recruited consecutive patients from July 2017 to June 2018. Serum AMH, fibroid and uterine volume, UFS-QOL (uterine fibroid score-quality of life) scores were measured prior to and at 1 month and/or 3 months post embolization. Twenty-four participants were enrolled (median age 44 years, uterine volume 484 cm3, initial dominant fibroid volume 167 cm3). One patient was lost to follow-up. AMH (median ± SD) immediately prior to embolization was 3.2 ± 13.7 pmol/L. At 1-month postembolization, AMH was 4.1 ± 8.6 pmol/L and at 3 months 4.4 ± 8.6 pmol/L. We found no significant difference in AMH levels between baseline and at 1 month (p = 0.58) or baseline and 3 months (p = 0.17). The median dominant uterine fibroid volume decreased (167 to 64 cm3, p < 0.001). At 3 months post-embolization, 17/23 patients had patent uterine arteries bilaterally (73.9%). UFE with Gel-bead did not significantly affect AMH at 3 months post embolization, whilst maintaining a high rate of uterine artery patency. abstract_id: PUBMED:35983349 Uterine artery embolization with highly compressible calibrated microspheres for the treatment of uterine fibroids. Objective: To evaluate the safety and efficacy of using highly compressible calibrated microspheres in uterine artery embolization (UAE) for the treatment of uterine fibroids. Materials And Methods: This was a prospective multicenter study. Thirty-two women with symptomatic uterine fibroids were selected for UAE between January 2019 and March 2020. The participants completed the Uterine Fibroid Symptom and Quality of Life (UFS-QOL) questionnaire, underwent contrast-enhanced pelvic magnetic resonance imaging (MRI), and were submitted to anti-Müllerian hormone measurement, subsequently undergoing UAE with Embosoft microspheres. After six months, the patients again completed the UFS-QOL questionnaire and underwent pelvic MRI. Results: The most common symptoms were abnormal uterine bleeding (in 81.3% of the cases), pelvic pain (in 81.3%), and compression (in 46.9%). Of the 32 patients evaluated, 12 (37.5%) had anemia due to abnormal uterine bleeding. Thirty patients completed the study. Among those patients, we observed median reductions of 21.4% in uterine volume and 15.9% in dominant fibroid volume. We identified no adverse events that could be attributed to the material itself, although there were events attributed to the UAE procedure in general. Conclusion: For the treatment of uterine fibroids, UAE using Embosoft microspheres shows satisfactory results, providing reductions in uterine and dominant fibroid volumes, with a low rate of adverse events, and improving patient quality of life, as well as demonstrating safety and efficacy. abstract_id: PUBMED:28856867 Changes in anti-müllerian hormone levels as a biomarker for ovarian reserve after ultrasound-guided high-intensity focused ultrasound treatment of adenomyosis and uterine fibroid. Objective: To assess the changes in antimüllerian hormone (AMH) levels after ablation for symptomatic uterine fibroids and adenomyosis using ultrasound-guided high-intensity focused ultrasound (USgHIFU). Design: A prospective study. Setting: Gynaecological department in multiple hospitals in South Korea. 
Population: Patients with uterus fibroids and adenomyosis. Methods: Seventy-nine women with symptomatic uterine fibroids and adenomyosis who met the inclusion criteria were enrolled in our study between January 2014 and December 2014. All patients underwent USgHIFU ablations. Each patient was examined before and after treatment, and at 6 and 12 months after treatment by T2-weighted MRI imaging (T2WI) and T1-weighted MRI imaging (T1WI) with gadolinium injection. Symptom severity scores (SSS), Uterine Fibroid Symptom Quality of Life (UFS-QOL) questionnaire subscales, and reductions of treated volume were assessed. AMH levels before and 6 months after HIFU ablation were compared to determine whether USgHIFU ablation affected ovarian reserve. Main Outcome Measures: HIFU treatment did not affect the ovarian function. Results: HIFU treatment time (mean ± standard deviation), HIFU ablation time, and treatment energy were 73.5 ± 25.6 minutes, 9994.7 ± 386.8 seconds, and 364 713.8 ± 156 350.7 Joules, respectively. AMH levels before and 6 months after HIFU ablation were 2.11 ± 2.66 and 1.84 ± 2.57 μg/l, respectively. There was no significant difference in AMH level between the two time points (P > 0.05). Conclusions: USgHIFU ablation for uterine fibroid and adenomyosis was effective without affecting ovarian reserve. Tweetable Abstract: HIFU ablation is a safe and effective treatment for patients with uterine fibroids and adenomyosis that does not affect ovarian function. abstract_id: PUBMED:25727093 Anti Müllerian hormone levels before and after uterine artery embolization: A preliminary report. Objective: To determine the effects of uterine artery embolization on Anti-Müllerian hormone levels for women under 40 years of age. Material And Methods: Non-randomized, observational study of women under 40 years of age and who received UAE. Twenty-seven women under the age of 40 who underwent UAE were included in the study. Hormone levels were measured prior to UAE, and at least six months post UAE. Results: Statistical analysis from a paired t-test showed that the AMH levels pre- and post-embolization of these women fell within the normal range. The average AMH level prior to embolization was 2.54 ± 3.99 ng/mL. The mean AMH concentration after embolization was 2.33 ± 2.70 ng/mL. The average change in AMH levels between pre-and post-embolization was -0.21 ± 1.08 ng/mL. There is no statistically significant difference between the pre- and post- UAE AMH levels (95% CI -0.64 to 0.22, p=0.32). Conclusion: UAE does not affect ovarian reserve in women under 40 as evidenced by no significant change in AMH levels before and after embolization. Women who are of reproductive age and have fibroids can consider UAE without concern for adverse effects on their fertility. abstract_id: PUBMED:24620523 Anti-Müllerian hormone serum levels in women with and without uterine fibroids Background: The etiology of uterine leiomyomatosis is multifactorial and it is unknown if a relation between anti-Müllerian hormone (hormona anti-mülleriana) and uterine leiomyomatosis exists. Objective: To determine the differences of hormona anti-mülleriana levels in women with and without uterine leiomyomatosis. Methods: 60 women were studied (30 with and 30 without uterine leiomyomatosis). The diagnosis was confirmed by histopathology. Both groups were paired by age and in all them serum levels of hormona antimülleriana were measured using ELISA, also estradiol and progesterone serum levels were determined. 
hormona anti-mülleriana-RII immunohistochemistry was done in healthy myometrium and in leiomyomas. Results: The mean age between the groups didn't show statistical difference (41.8 +/- 5.6 years vs. 41.4 +/- 5.7 years). Also no differences were found in weight, height and body mass index. Serum levels of hormona antimülleriana were lower in those with leiomyomatosis [0.21 (0-10.4) ng/ml vs. 1.83 (0-6.38) ng/ml, p < 0.005]. No statistical differences were found in estradiol and progesterone serum levels between the groups. The hormona antimülleriana receptor was no expressed neither in the healthy myometrium nor in the leiomyomas. Conclusions: Women with leiomyomatosis had lower hormona antimülleriana levels. More studies are needed to determine if a relation exists between hormona antimülleriana and uterine leiomyomas. abstract_id: PUBMED:30827075 Does bilateral uterine artery ligation have negative effects on ovarian reserve markers and ovarian artery blood flow in women with postpartum hemorrhage? Objective: Bilateral uterine artery ligation (UAL) is a fertility-preserving procedure used in women experiencing postpartum hemorrhage (PPH). However, the long-term effects of this procedure on ovarian function remain unclear. The aim of this study was to investigate whether bilateral UAL compromised ovarian reserve and ovarian blood supply. Methods: This prospective study included 49 women aged between 21 and 36 years who had undergone a cesarean section for obstetric indications. Of these, 25 underwent uterine bilateral UAL to control intractable atonic PPH. The control group consisted of 24 women who had not undergone bilateral UAL. Standard clinical parameters, the results of color Doppler screening, and ovarian reserve markers were assessed in all participants at 6 months after surgery. The clinical parameters included age, parity, cycle history, body mass index, and previous medication and/or surgery. Color Doppler screening findings included the pulsatility index (PI) and resistance index (RI) for both the uterine and ovarian arteries. The ovarian reserve markers included day 3 follicle-stimulating hormone (FSH) levels, antral follicle count, and anti-Müllerian hormone (AMH) levels. Results: There were no significant differences in the ovarian reserve markers of day 3 FSH levels, antral follicle count, and AMH levels between the study and control groups (p>0.05 for all). In addition, no significant differences were observed in the PI and RI indices of the uterine and ovarian arteries (p>0.05 for all). Conclusion: In this study, we showed that bilateral UAL had no negative effects on ovarian reserve or ovarian blood supply, so this treatment should be used as a fertility preservation technique to avoid hysterectomy in patients experiencing PPH. abstract_id: PUBMED:29055072 Outcome of assisted reproduction in women with congenital uterine anomalies: a prospective observational study. Objectives: To assess the prevalence of congenital uterine anomalies, including arcuate uterus, and their effect on reproductive outcome in subfertile women undergoing assisted reproduction. Methods: Consecutive women referred for subfertility between May 2009 and November 2015 who underwent assisted reproduction were included in the study. As part of the initial assessment, each woman underwent three-dimensional transvaginal sonography. Uterine morphology was classified using the modified American Fertility Society (AFS) classification of congenital uterine anomalies proposed by Salim et al. 
If the external contour of the uterus was uniformly convex or had an indentation of < 10 mm, but there was a cavity indentation, it was defined as arcuate or septate. Arcuate uterus was further defined as the presence of a concave fundal indentation with a central point of indentation at an obtuse angle. Subseptate uterus was defined as the presence of a septum, not extending to the cervix, with the central point of the septum at an acute angle; if the septum extended to the internal cervical os, the uterus was defined as septate. Reproductive outcomes, including live birth, clinical pregnancy and preterm birth, were compared between women with a normal uterus and those with a congenital uterine anomaly. Subgroup analysis by type of uterine morphology and logistic regression analysis adjusted for age, body mass index, levels of anti-Müllerian hormone, antral follicle count and number and day of embryo transfer were performed. Results: A total of 2375 women were included in the study, of whom 1943 (81.8%) had a normal uterus and 432 (18.2%) had a congenital uterine anomaly. The most common anomalies were arcuate (n = 387 (16.3%)) and subseptate (n = 16 (0.7%)) uterus. The rate of live birth was similar between women with a uterine anomaly and those with a normal uterus (35% vs 37%; P = 0.47). The rates of clinical pregnancy, mode of delivery and sex of the newborn were also similar between the two groups. Preterm birth before 37 weeks' gestation was more common in women with uterine anomalies than in controls (22% vs 14%, respectively; P = 0.03). Subgroup analysis by type of anomaly showed no difference in the incidence of live birth and clinical pregnancy for women with an arcuate uterus, but indicated worse pregnancy outcome in women with other major anomalies (P = 0.042 and 0.048, respectively). Conclusions: Congenital uterine anomalies as a whole, when defined using the modified AFS classification, do not affect clinical pregnancy or live-birth rates in women following assisted reproduction, but do increase the incidence of preterm birth. The presence of uterine abnormalities more severe than arcuate uterus significantly worsens all pregnancy outcomes. Copyright © 2017 ISUOG. Published by John Wiley & Sons Ltd. Answer: The loop electrosurgical excision procedure (LEEP) of the uterine cervix does not appear to affect anti-Müllerian hormone (AMH) levels. A study measured AMH levels in 18 women treated with LEEP and 18 age-matched controls who only had colposcopy and did not require LEEP. The results showed that while the mean AMH level decreased significantly from baseline to follow-up in both groups, there were no significant differences when stratified by LEEP status. This suggests that both groups experienced a similar decrease in AMH levels during the follow-up period. Although women treated with LEEP had lower overall AMH levels than controls at both baseline and follow-up, these differences were not statistically significant. Therefore, the study concluded that the delayed time to pregnancy observed in women treated with LEEP is likely not due to a LEEP-associated decrease in ovarian reserve as measured by AMH; thus, other mechanisms are responsible for the delayed time to pregnancy associated with LEEP (PUBMED:24707500).
Instruction: Is prescription of physical activity a part of health care in civilization disorders? Abstracts: abstract_id: PUBMED:17554976 Is prescription of physical activity a part of health care in civilization disorders? Results of questionnaire study in medical doctors Background: Low level of physical activity is an independent risk factor of civilization disorders. Intervention for increasing physical activity has been for generations mentioned in health care. Because of low adherence of the population to those general appeals it is necessary to improve radically the knowledge of health professionals about individual exercise prescription. The aim of this study was to analyze approach of medical doctors in this particular dilemma. Methods And Results: A questionnaire was distributed at postgraduate courses for medical doctors. Data from doctors of different specializations were summarized (N=657, from which 458 were females, i.e. 69,7 %, mean age=38,8+/-9,74). 96,4 % of doctors stated that they recommend exercise to their patients though only up to 23,4 % of them are regularly asked by their patients about the exercise. Concrete (type, intensity, duration and frequency) or individually tailored recommendation give 66,2 %, or 62,6 % of doctors respectively. Most respondents (56,0 %) also recommend a consultation of another specialist (mostly rehabilitation doctor and physiotherapist). Majority of addressed professionals shows that current medical education structure does not enable adequate prescription of physical activity without the help of specialist. Conclusions: Study showed a positive attitude of medical doctors to exercise prescription. However, information about the need of individualized prescription and knowledge about possibilities of exercise therapy in particular regions should be increased. abstract_id: PUBMED:32781558 Physical Activity on Prescription in Routine Health Care: 1-Year Follow-Up of Patients with and without Counsellor Support. The effectiveness of counsellor support in addition to physical activity on prescription (PAP) from health care professionals has rarely been evaluated. This observational follow-up study investigated differences in physical activity levels and health-related quality of life (HRQoL) one year after PAP regarding patients' use of counsellor support in addition to PAP in routine care. The study was conducted in a Swedish health care region in which all patients receiving PAP from health care professionals were offered counsellor support. Data were collected from medical records and questionnaires (baseline and follow-up). Of the 400 study participants, 37% used counsellor support. The group of counsellor users attained a higher level of physical activity one year after receiving PAP compared to the group of non-users (p < 0.001). The level of physical activity was measured by a validated index (score 3-19) calculated from weekly everyday activity and exercise training. Comparison of the change in scores between baseline and follow-up showed a significant difference between the two groups, (p < 0.001). The median difference in the PAP + C group was 2.0 (interquartile range, 7.0) and 0.0. among non-users (interquartile range, 4.0). Significant differences in HRQoL were due to positive improvements among counsellor users, with the main improvement in general health. 
The conclusion is that patients using counsellor support after receiving PAP from health care professionals had higher physical activity and better HRQoL one year after compared with patients who did not use this support. abstract_id: PUBMED:30623501 The effect of prescription drug insurance on health behavior: Evidence from Medicare Part D. This paper provides empirical evidence on the health behavior effect of prescription drug insurance. Using the difference in the regression discontinuity research design, I compare health behavior changes at age 65 before and after the introduction of Medicare Part D. I find that the implementation of Medicare Part D increased prescription drug insurance coverage and reduced out-of-pocket spending per prescription and the use of cardiovascular and metabolic therapeutic class drugs. I also find that Medicare Part D led to a 6.36 percentage point decrease in the probability of engaging in moderate physical exercise and a 27.4 percentage point decrease in the amount of time spent on moderate physical activity. It also increased the probability of being overweight by 5.75 percentage points. Its effects on vigorous and muscle-strengthening exercise both at the extensive and intensive margins, obesity, BMI, and cigarette consumption are not statistically significant. abstract_id: PUBMED:32759872 Physical Activity Counseling in Primary Care in Germany-An Integrative Review. Physical activity counseling in primary health care is regarded as a useful complementary preventive and therapeutic measure and is advocated by leading public health institutions. This integrative review summarizes the available data on physical activity counseling in primary care in Germany. A systematic literature search in various databases (peer reviewed and grey literature) was carried out for quantitative and qualitative studies on physical activity counseling and use of "Exercise on Prescription". The 25 studies included show a very high methodological diversity and, in some cases, considerable risks of bias, with limited comparability across studies. Counseling was provided in all studies by physicians. They report frequent physical activity counseling, which is partly confirmed and partly refuted by patient data. The use of "Exercise on Prescription" is at a very low level. Information on the frequency of physical activity counseling in Germany varies depending on data source and is sometimes contradictory. Our review provides a synthesis of various perspectives on routine physical activity counseling in primary care in Germany. Future studies using standardized and validated instruments in representative samples are needed to further knowledge on counseling and to be able to establish trends in prevalence. Strengthening the topics of physical activity and health and physical activity counseling in medical curriculum is strongly recommended. abstract_id: PUBMED:17555539 Physical activity on prescription in primary health care: a follow-up of physical activity level and quality of life. To examine whether physical activity on prescription in routine primary care patients would influence physical activity level and quality of life 6 months later. In 2001-2003, 13 Swedish primary health care units took part in an uncontrolled clinical study. If a patient in primary health care needed physical activity preventively or for treatment of a disease and patient-centered motivational counseling found physical activity to be suitable, individualized physical activity could be prescribed. 
Patients (n=481) of both sexes and all ages [75% women, mean age 50 (12-81)] participated in the study. Self-reported physical activity, readiness to change to a more physically active lifestyle and quality-of-life data were collected through questionnaires. The follow-up rate was 62% at 6 months. Intention-to-treat analysis showed a significant increase (P<0.01) in self-reported physical activity level, the stages of action and maintenance of physical activity as well as quality of life. Physical activity level, stages of change and quality of life increased analogically, indicating that physical activity on prescription may be suitable as a conventional treatment in an ordinary primary health care setting to promote a more physically active lifestyle. abstract_id: PUBMED:29152542 Prescribing Outdoor Physical Activity to Children: Health Care Providers' Perspectives. Little evidence exists on health care provider (HCP) prescriptions for children's outdoor physical activity (PA). Semistructured interviews were conducted with 15 children's HCPs to explore perspectives on outdoor PA prescription programs for children and barriers to implementation. Thematic analytic techniques were used to analyze the data. Most participants reported an awareness of health benefits to children being in the outdoors. Ten themes emerged from the data related to 3 thematic categories: (1) current strategies that HCPs are using to promote PA among children, (2) barriers that HCPs see to prescribing outdoor PA, and (3) potential strategies for promoting outdoor PA among children. Assessment of the local outdoor PA environment and resource development must be done prior to a prescription program. HCPs should be skilled in conducting conversations and setting goals related to outdoor PA tailored to the patient. Developing a system for follow-up with patients on established goals should also be included. abstract_id: PUBMED:35292017 Factors associated with increased physical activity among patients prescribed physical activity in Swedish routine health care including an offer of counselor support: a 1-year follow-up. Background: The study addresses knowledge gaps in research regarding influences of routine health care delivery of physical activity on prescription (PAP). The aim was to investigate if patient and health care characteristics are associated with increased physical activity 1 year after prescription among patients offered counselor support in addition to health care professionals' prescription. The study was conducted in primary and secondary care in a Swedish health care region. Methods: All PAP recipients during 1 year were invited (N = 1503) to participate in this observational prospective study. Data were collected from medical records and questionnaires (baseline and follow-up). Descriptive statistics and multiple logistic regression analysis were used. The outcome variable was increased physical activity after 1 year. Study variables were patient and health care characteristics. Results: Three hundred and fifty-five patients with complete follow-up data were included. The mean age was 62 years (SD = 14; range, 18-90) and 68% were females. Almost half (47%) had increased physical activity 1 year after PAP. Multiple logistic regression analysis showed that increased physical activity at follow-up was positively associated with lower baseline activity, counselor use, and positive perception of support. 
Counselor users with low baseline activity had a higher odds ratio for increased physical activity at follow-up than non-users (OR = 7.2, 95% CI = 2.2-23.5 vs. OR = 3.2, 95% CI = 1.4-7.5). Positive perception of support was associated with increased physical activity among counselor users but not among non-users. Conclusions: An increase in physical activity after PAP was related to low baseline activity, positive perception of support, and use of counselor support after PAP. Qualified counseling support linked to PAP seems to be important for achieving increased physical activity among patients with lower baseline activity. abstract_id: PUBMED:34752352 Physical activity upon prescription. The concept of adapted physical activity (APA) was born in the 1980s in French universities, and more particularly in the Sciences and techniques of physical and athletic activities degree. Today, it is an integral part of the caregivers' therapeutic arsenal. However, although everything seems to be in place to ensure that APA is prescribed to as many people as possible, this new therapeutic opportunity to improve patients' health is not so easy for doctors to implement in the field. abstract_id: PUBMED:29659546 Physical Activity on Prescription with Counsellor Support: A 4-Year Registry-Based Study in Routine Health Care in Sweden. Background: Public health gains from physical activity on prescription (PAP) depend on uptake in routine care. We performed an evaluation of the implementation, in a Swedish county council, of counsellors who give personalized support to PAP recipients aimed at facilitating PAP delivery. The aim was to compare characteristics between PAP recipients and the health care population as well as between PAP recipients who used and did not use counsellor support. We also investigated the professional affiliation and health care setting of the health care professionals who prescribed PAP. Methods: All patients ≥18 years who received PAP during 2009–2012 in primary and secondary care in the County Council of Kronoberg were included (n = 4879). Data were retrieved from electronic medical records. Main outcome measures were patient and professional characteristics. Results: A third of the PAP recipients had diseases in ≥5 diagnostic groups and more than half had ≥11 office visits the year before receiving PAP. Counsellor support was used by one-third, and PAP recipients who used counsellor support had more multiple diagnoses and office visits compared with non-users. Physicians issued 44% of prescriptions, and primary care was the predominant setting. The amount of PAP did not change over time, but the proportion of physicians' prescriptions decreased while the proportion of nurses' prescriptions increased. Conclusions: PAP recipients had high morbidity and were frequent health care attenders, indicating that PAP was predominantly used for secondary or tertiary prevention. PAP rates did not increase as intended after the implementation of counsellor support. abstract_id: PUBMED:29522690 Physical activity prescription by Canadian Emergency Medicine Physicians. An increase in physical activity has been shown to improve outcomes in many diseases. An estimated 600 000 Canadians receive their primary health care from emergency departments (ED). This study aims to examine physical activity prescription by emergency medicine physicians (EPs) to determine factors that influence decisions to prescribe physical activity.
A survey was distributed to EPs via email using the Canadian Association of Emergency Physicians (CAEP) survey distribution protocol. Responses from 20% (n = 332) of emergency physicians/residents in Canada were analyzed. Of the EPs, 62.7% often/always counsel patients about preventative medicine (smoking, diet, and alcohol). Only 12.7% (42) often/always prescribe physical activity. The CCFP-trained physicians (College of Family Physicians Canada) were significantly more likely to feel comfortable prescribing physical activity than CCFP-EM-trained physicians (Family Physicians with Enhanced Skills in Emergency Medicine) (p = 0.0001). Both were significantly more likely than the FRCPC-trained EPs (Fellows of the Royal College of Physicians of Canada). Of the EPs, 73.4% (244) believe the ED environment does not allow adequate time for physical activity prescription. Family medicine-trained EPs are more likely to prescribe physical activity; the training they receive may prepare them better for this than FRCPC emergency medicine training. Further education is required to standardize an approach to ED physical activity prescription. Answer: Yes, the prescription of physical activity is a part of health care in civilization disorders. The questionnaire study conducted among medical doctors revealed that 96.4% of doctors recommend exercise to their patients, with 66.2% providing concrete recommendations and 62.6% offering individually tailored recommendations. However, the study also highlighted that the majority of professionals believe that current medical education does not adequately enable the prescription of physical activity without the help of a specialist (PUBMED:17554976). Additionally, physical activity on prescription (PAP) with counselor support has been shown to be effective in increasing physical activity levels and health-related quality of life (HRQoL) one year after PAP in routine care (PUBMED:32781558). Furthermore, physical activity counseling in primary care is considered a useful preventive and therapeutic measure, and there is a call for strengthening the topics of physical activity and health in the medical curriculum (PUBMED:32759872). Studies have also shown that physical activity on prescription may be suitable as a conventional treatment in an ordinary primary health care setting to promote a more physically active lifestyle (PUBMED:17555539). Despite the recognition of the health benefits of outdoor physical activity for children, health care providers (HCPs) face barriers to prescribing outdoor physical activity, and strategies are needed to promote it effectively (PUBMED:29152542). Lastly, the concept of adapted physical activity (APA) has become an integral part of the therapeutic arsenal for caregivers, although its implementation in the field can be challenging for doctors (PUBMED:34752352).
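A note on how effect sizes like those in PUBMED:35292017 (e.g., OR = 7.2, 95% CI = 2.2-23.5 for counselor users with low baseline activity) are typically obtained: they come from logistic regression, but the same kind of estimate and interval can be illustrated with the classic log-odds-ratio calculation on a 2x2 table. A minimal sketch in Python, using invented counts because the abstracts do not report the underlying tables:

```python
import math

# Hypothetical 2x2 table (NOT data from any of the cited studies):
# rows = counselor users / non-users,
# columns = increased activity at follow-up: yes / no.
a, b = 30, 10   # users: increased / not increased
c, d = 60, 80   # non-users: increased / not increased

# Odds ratio and Woolf (log-OR) 95% confidence interval.
or_hat = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_hat) - 1.96 * se_log_or)
hi = math.exp(math.log(or_hat) + 1.96 * se_log_or)
print(f"OR = {or_hat:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")
```

Wide intervals such as the reported 2.2-23.5 typically reflect small cell counts in one of the groups.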
Instruction: Gastric intramucosal pH in trauma patients: an index for organ failure risk? Abstracts: abstract_id: PUBMED:7617242 Gastric intramucosal pH in trauma patients: an index for organ failure risk? Objective: To assess if low gastric intramucosal pH (pHi), in the first 24 hours from trauma, is an early risk index for organ failure in severe trauma. Design: Prospective clinical study. Setting: General ICU in a university hospital. Patients: Thirty-one consecutive trauma patients, aged 15 to 71 years (mean 34.2), 26 men and 5 women. Measurements And Results: In all patients pHi was measured using a gastric tonometer. All patients presented systemic inflammatory response syndrome (SIRS) and 14 patients (45.2%) developed sepsis. Seven patients developed one or more organ failures (22.6%). Six patients died (19.4%), five because of organ failure and one because of primary brain injury. The worst 12-24 hour pHi, the worst 24 hours blood Base Excess, APACHE II and ISS were grouped by absence or presence of sepsis, organ failure and by outcome. Patients developing organ failure had pHi values (median = 7.06) significantly lower than patients who did not developed organ failure (median = 7.33) (chi 2 = 5.35; p = 0.02). Conclusions: Our data suggest that low pHi during the first 24 hours from trauma seems to be a good predictor for the development of organ failure. abstract_id: PUBMED:8686797 Delayed gastric emptying after gastric surgery. Background: The reported incidence of delayed gastric emptying (DGE) after gastric surgery is 5% to 25% and usually is based on operations for peptic ulcer disease. Ongoing improvements in perioperative care, nutritional support, and new prokinetic drugs may have had a beneficial effect on the frequency and course of postoperative DGE. Methods: We therefore studied our recent experience with DGE in 416 patients who had gastric surgery for ulcer disease (283), cancer (92), or trauma and other indications (41) between January 1985 and December 1993. DGE was defined as inability to eat a regular diet by postoperative day 10. Results: DGE occurred in 99 of 416 patients (24%). In 75 of these 99 patients, a postoperative contributing factor for DGE was identified. These factors were sepsis (32), anastomotic edema and leaks (23), obstruction (4), pancreatitis (3), multiple system organ failure (5), and miscellaneous conditions (8). In 24 patients there was no obvious cause for DGE; these patients recovered with nutritional support and time. Re-operation specifically for gastric stasis was not performed. Among the 99 patients with DGE, 67% were eating by day 21, 92% by 6 weeks, and 100% by 10 weeks. Significant risk factors for DGE were diabetes (55%), malnutrition (44%), and operations for malignancy (38%). The Whipple procedure had the highest incidence of DGE (70%), highly selective vagotomy the lowest (0%), while truncal vagotomy had no significant effect. The response to metoclopramide was 20% and unpredictable. Conclusion: DGE continues to affect a considerable number of our patients (24%) after gastric surgery and is particularly common in patients with diabetes, malnutrition, and gastric or pancreatic cancer. However, gastric motility does return in 3 to 6 weeks in most patients and the need for re-operation for gastric stasis is rare. abstract_id: PUBMED:8420726 Continuous intravenous cimetidine decreases stress-related upper gastrointestinal hemorrhage without promoting pneumonia. Objectives: To determine whether a continuous i.v. 
infusion of cimetidine, a histamine-2 (H2) receptor antagonist, is needed to prevent upper gastrointestinal (GI) hemorrhage when compared with placebo and if that usage is associated with an increased risk of nosocomial pneumonia. Due to the importance of this latter issue, data were collected to examine the occurrence rate of nosocomial pneumonia under the conditions of this study. Design: A multicenter, double-blind, placebo-controlled study. Interventions: Patients were randomized to receive cimetidine (n = 65) as an iv infusion of 50 to 100 mg/hr or placebo (n = 66). Setting: Intensive care units in 20 institutions. Patients: Critically ill patients (n = 131), all of whom had at least one acute stress condition that previously had been associated with the development of upper GI hemorrhage. Measurements And Main Results: Samples of gastric fluid from nasogastric aspirates were collected every 2 hrs for measurement of pH and were examined for the presence of blood. Upper GI hemorrhage was defined as bright red blood or persistent (continuing for > 8 hrs) "coffee ground material" in the nasogastric aspirate. Baseline chest radiographs were performed and sputum specimens were collected from all patients, and those patients without clear signs of pneumonia (positive chest radiograph, positive cough, fever) at baseline were followed prospectively for the development of pneumonia while receiving the study medication. Cimetidine-infused patients experienced significantly (p = .009) less upper GI hemorrhage than placebo-infused patients: nine (14%) of 65 cimetidine vs. 22 (33%) of 66 placebo patients. Cimetidine patients demonstrated significantly (p = .0001) higher mean intragastric pH (5.7 vs. 3.9), and had intragastric pH values at > 4.0 for a significantly (p = .0001) higher mean percentage of time (82% vs. 41%) than placebo patients. Differences in pH variables were not found between patients who had upper GI hemorrhage and those patients who did not, although there was no patient in the cimetidine group who bled with a pH < 3.5 compared with 11 such patients in the placebo group. Also, the upper GI hemorrhage rate in patients with one risk factor (23%) was similar to that rate in patients with two or more risk factors (25%). Of the 56 cimetidine-infused patients and 61 placebo-infused patients who did not have pneumonia at baseline, no cimetidine-infused patient developed pneumonia while four (7%) placebo-infused patients developed pneumonia. Conclusions: The continuous i.v. infusion of cimetidine was highly effective in controlling intragastric pH and in preventing stress-related upper GI hemorrhage in critically ill patients without increasing their risk of developing nosocomial pneumonia. While the number of risk factors and intragastric pH may have pathogenic importance in the development of upper GI hemorrhage, neither the risk factors nor the intragastric pH was predictive. Therefore, short-term administration of continuously infused cimetidine offers benefits in patients who have sustained major surgery, trauma, burns, hypotension, sepsis, or single organ failure. abstract_id: PUBMED:9637157 Intra-abdominal hypertension after life-threatening penetrating abdominal trauma: prophylaxis, incidence, and clinical relevance to gastric mucosal pH and abdominal compartment syndrome. 
Objective: To define the incidence, prophylaxis, and treatment of intra-abdominal hypertension (IAH) and its relevance to gut mucosal pH (pHi), multiorgan dysfunction syndrome, and the abdominal compartment syndrome (ACS). Methods: Seventy patients in the SICU at a Level I trauma center (1992-1996) with life-threatening penetrating abdominal trauma had intra-abdominal pressure estimated by bladder pressure. pHi was measured by gastric tonometry every 4 to 6 hours. IAH (intra-abdominal pressure > 25 cm H2O) was treated by bedside or operating room laparotomy. Results: Injury severity was comparable between patients who had mesh closure as prophylaxis for IAH (n = 45) and those who had fascial suture (n = 25). IAH was seen in 10 (22.2%) in the mesh group versus 13 (52%) in the fascial suture group (p = 0.012), for an overall incidence of 32.9%. Forty-two patients had pHi monitoring, and 11 of them had IAH. Of the 11 patients, eight patients (72.7%) had acidotic pHi (7.10 +/- 0.2) with IAH without exhibiting the classic signs of ACS. The pHi improved after abdominal decompression in six, and none developed ACS. Only two patients with IAH and low pHi went on to develop ACS, despite abdominal decompression. Multiorgan dysfunction syndrome scores and deaths were lower in patients without IAH than in those with IAH, and in patients who had mesh closure. Conclusions: IAH is frequent after major abdominal trauma. It may cause gut mucosal acidosis at lower bladder pressures, long before the onset of clinical ACS. Uncorrected, it may lead to splanchnic hypoperfusion, ACS, distant organ failure, and death. Prophylactic mesh closure of the abdomen may facilitate the prevention and bedside treatment of IAH and reduce these complications. abstract_id: PUBMED:3872612 Bleeding from stress gastritis. Has prophylactic pH control made a difference? Control of intragastric pH with antacids or H2-receptor antagonists is a standard prophylactic treatment routinely used to prevent or control bleeding from acute erosive gastritis (AEG) in critically ill patients on trauma and surgical services. The incidence of bleeding from AEG was documented on these services retrospectively before and after the institution of standard prophylaxis treatment to determine the relative morbidity, mortality, and risk factors of each era. During the preprophylaxis and postprophylaxis eras, the incidence of bleeding was the same, 2.3 per cent (13 of approximately 550 patients in each study period). Prior to prophylaxis, three times as many patients required operation (three patients versus one), although the average transfusion requirement was the same (4.4 units of blood). During both eras the majority of patients were septic and showed other signs of organ failure. Inadequate prophylaxis (failure to prescribe or early discontinuation) was documented in over half of the patients who bled and in the patient requiring operation, in spite of a departmental interest in stress gastritis prophylaxis. If antacids and H2-receptor antagonists are used routinely, complications from AEG should be reduced. Even with improved methods of pH control, it appears unlikely that this problem will be eliminated. abstract_id: PUBMED:26233689 The effect of pH versus base deficit on organ failure in trauma patients. Background: Base deficit (BD) calculations are affected by trauma-related changes in circulating concentrations of anions after injury. In contrast, pH is a direct measurement that corresponds to hypoperfusion.
We hypothesized that changes in pH would more closely correspond to organ dysfunction compared with changes in BD. Materials And Methods: BD and pH values were collected for the first 48 h after injury from a retrospective cohort of 74 multiply injured adult patients who were admitted to the surgical intensive care unit for a minimum of 1 wk. Mean and extreme (minimum pH and maximum BD) values of pH and BD were determined for day 1 (0-24 h) and for day 2 (24-48 h) after injury. Organ dysfunction was measured by averaging daily sequential organ failure assessment scores over the entire duration of intensive care unit admission. BD and pH values were compared with mean modified sequential organ failure assessment scores by univariate and multivariate linear regression. Results: Organ dysfunction corresponded more closely with changes in pH compared with those in BD. Minimum pH and maximum BD showed better correspondence to organ dysfunction compared with mean values. Minimum pH values at 24-48 h had the highest univariate (r(2) = 0.43) correspondence to organ dysfunction. In contrast, mean BD values at 24-48 h showed no correspondence (r(2) = 0.07) to organ dysfunction. Multivariate analysis demonstrated that 24-48 h of minimum pH had the highest numerical effect on organ dysfunction. Conclusions: Correspondence between organ dysfunction and BD deteriorated in contrast to increasing correspondence between organ dysfunction and pH measured within 48 h after injury. abstract_id: PUBMED:10432079 Serial pHi measurement as a predictor of mortality, organ failure, and hospital stay in surgical patients. Intestinal ischemia is a common condition in critically ill patients and has been postulated to play a role in the development of organ failure and death. This has resulted in the recent interest in monitoring gastric intramucosal pH (pHi) in critically ill patients to provide earlier evidence of inadequate resuscitation, cardiogenic dysfunction, or sepsis. Several reports have indicated that low pHi values obtained during the initial 24 to 48 hours of intensive care unit (ICU) admission were associated with the development of organ failure and death. The purpose of this study was to assess the predictive value of serial pHi measurements obtained throughout the entire ICU admission. A retrospective analysis of critically ill trauma, burn, and surgical patients who had frequent pHi determinations during ICU treatment was performed. When stratified by pHi values, there were no significant differences in length of stay, organ dysfunction, or mortality. Our findings suggest that serial pHi determinations obtained beyond the early critical care period are less reliable predictors of poor outcome. abstract_id: PUBMED:34833461 Effects of Mean Artery Pressure and Blood pH on Survival Rate of Patients with Acute Kidney Injury Combined with Acute Hypoxic Respiratory Failure: A Retrospective Study. Background and Objectives: In the intensive care unit (ICU), renal failure and respiratory failure are two of the most common organ failures in patients with systemic inflammatory response syndrome (SIRS). These clinical symptoms usually result from sepsis, trauma, hypermetabolism or shock. If this syndrome is caused by septic shock, the Surviving Sepsis Campaign Bundle suggests that vasopressin be given to maintain mean arterial pressure (MAP) > 65 mmHg if the patient is hypotensive after fluid resuscitation. 
Nevertheless, it is important to note that some studies found an effect of various mean arterial pressures on organ function; for example, a MAP of less than 75 mmHg was associated with the risk of acute kidney injury (AKI). However, no published study has evaluated the risk factors of mortality in the subgroup of acute kidney injury with respiratory failure, and little is known of the impact of general risk factors that may increase the mortality rate. Materials and Methods: The objective of this study was to determine the risk factors that might directly affect survival in critically ill patients with multiple organ failure in this subgroup. We retrospectively constructed a cohort study of patients who were admitted to the ICUs, including medical, surgical, and neurological, over 24 months (2015.1 to 2016.12) at Chiayi Chang Gung Memorial Hospital. We only considered patients who met the criteria of acute renal injury according to the Acute Kidney Injury Network (AKIN) and were undergoing mechanical ventilator support due to acute respiratory failure at admission. Results: Data showed that the overall ICU and hospital mortality rate was 63.5%. The most common cause of ICU admission in this cohort study was cardiovascular disease (31.7%) followed by respiratory disease (28.6%). Most patients (73%) suffered sepsis during their ICU admission and the mean length of hospital stay was 24.32 ± 25.73 days. In general, the factors independently associated with in-hospital mortality were lactate > 51.8 mg/dL, MAP ≤ 77.16 mmHg, and pH ≤ 7.22. The risk of in-patient mortality was analyzed using a multivariable Cox regression survival model. Adjusting for other covariates, MAP ≤ 77.16 mmHg was associated with higher probability of in-hospital death [OR = 3.06 (1.374-6.853), p = 0.006]. The other independent outcome predictor of mortality was pH ≤ 7.22 [OR = 2.40 (1.122-5.147), p = 0.024]. Kaplan-Meier survival curves were calculated and the log rank statistic was highly significant. Conclusions: Acute kidney injury combined with respiratory failure is associated with high mortality. High mean arterial pressure and normal blood pH might improve these outcomes. Therefore, the acid-base status and MAP should be considered when attempting to predict outcome. Moreover, the blood pressure targets for acute kidney injury in critical care should not be similar to those recommended for the general population and might prevent mortality. abstract_id: PUBMED:2237279 Stress ulcers. Gastric stress ulceration occurs rapidly in patients after severe trauma. However, bleeding from stress ulceration is a rare but very serious complication after trauma and major surgery. Important risk factors for stress ulcer bleeding are shock, especially septic shock, and the development of other components of the multiple system organ failure syndrome. The pathophysiology and treatment of stress ulceration is reviewed in this paper. Prophylaxis is the best form of treatment, and the most effective prophylaxis is optimal resuscitation and intensive care. In addition, pharmacologic prophylaxis, including antacids, sucralfate, or acid secretory inhibitors, has been advocated. Once profuse bleeding has started, measures other than aggressive treatment of shock and sepsis are usually unsuccessful. abstract_id: PUBMED:12024087 Importance of monitoring in high risk surgical patients. 
The increased metabolic requirements caused by surgical trauma are responsible for the changes in cardiac index and oxygen delivery (DO2), which represent compensatory adaptations in circulatory function stimulated by increased metabolic needs. The amount of this compensation varies with age, gender, severity of illness, operation type, associated medical conditions, shock duration, complications, organ failure, and outcome. Hypotension, low cardiac index, arterial hemoglobin desaturation, low oxygen delivery, and low oxygen consumption were observed and sometimes persisted over time; these abnormalities were more pronounced in the nonsurvivors than in the survivors. Lethal circulatory dysfunctions may begin during the intraoperative period but become more apparent, and are responsible for organ failure, during the postoperative stages. An approach that measures the adequacy of tissue perfusion is a fundamental objective in anesthesia and critical care medicine. Traditional measures of tissue perfusion have included arterial pressure monitoring, urine output, arterial pH and blood lactate. These measures have major limitations, and changes in these measures may significantly lag behind clinical interventions. After trauma and major surgery, global tissue hypoxia may still persist despite normalization of these variables. Unrecognized or untreated conditions may lead to organ failure and death. In critical or high risk patients, early invasive monitoring is necessary to precisely define the adequacy of the cardiac response and to individually tailor therapy. The care provided in the first hours significantly impacts the progression of organ failure and mortality. Although this period is brief compared with the total length of hospitalization, physiological determinants of outcome may be established before ICU admission. Early postoperative monitoring revealed differences in survivor and nonsurvivor patterns and provided goals for improving outcome. Answer: Yes, gastric intramucosal pH (pHi) in trauma patients can be an index for organ failure risk. A study assessing the value of pHi in the first 24 hours from trauma found that low pHi seems to be a good predictor for the development of organ failure. Patients who developed organ failure had significantly lower pHi values compared to those who did not develop organ failure (PUBMED:7617242). Additionally, another study showed that intra-abdominal hypertension (IAH) may cause gut mucosal acidosis at lower bladder pressures, long before the onset of clinical abdominal compartment syndrome (ACS), and uncorrected IAH may lead to splanchnic hypoperfusion, ACS, distant organ failure, and death (PUBMED:9637157). Furthermore, correspondence between organ dysfunction and pH measured within 48 hours after injury was found to be stronger than with base deficit, indicating that pH changes more closely correspond to organ dysfunction (PUBMED:26233689). However, it is important to note that while low pHi values obtained during the initial 24 to 48 hours of ICU admission were associated with the development of organ failure and death, serial pHi determinations obtained beyond the early critical care period are less reliable predictors of poor outcome (PUBMED:10432079). This suggests that the predictive value of pHi may diminish over time during a patient's ICU stay. In summary, gastric intramucosal pH can serve as an early risk index for organ failure in severe trauma patients, particularly when measured within the first 48 hours after injury.
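For readers unfamiliar with the statistic quoted in PUBMED:7617242 ("chi 2 = 5.35; p = 0.02" for low pHi versus organ failure), it is a standard contingency-table test. A minimal sketch of the computation, with invented counts since the abstract reports only the test statistic and group medians, not the raw table:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts (NOT the study's table): rows = low vs. normal
# 12-24 h pHi, columns = organ failure yes / no.
table = [[5, 9],    # low pHi
         [2, 15]]   # normal pHi

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```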
Instruction: Is There An Ideal Stump Closure Technique In Laparoscopic Appendectomy? Abstracts: abstract_id: PUBMED:27121413 Is There An Ideal Stump Closure Technique In Laparoscopic Appendectomy? Background: There is no standard routine closure technique in the management of the appendiceal stump. Therefore, the ideal method should be determined based on the condition of the radix of the appendix. Aim: The aim of this study was to evaluate appendiceal stump closure methods in patients undergoing laparoscopic appendectomy due to acute appendicitis. Materials And Methods: The retrospective study included 196 patients who underwent laparoscopic appendectomy due to the diagnosis of acute appendicitis at Gazipasa State Hospital, Golhisar State Hospital, and Isparta State Hospital between 2009 and 2013. The methods performed for the closure of the appendiceal stump included the stapler closure technique (Group I), the ENDOLOOP® (Johnson & Johnson Inc., New Brunswick, New Jersey) technique (Group II), the Hem-o-lok® (Teleflex®, Limerick, Pennsylvania) clip (Group III), and the handmade endo-loop technique (Group IV). The groups were compared in terms of duration of surgery, length of hospital stay, postoperative pain, complication rates, and cost of treatment. Results: No significant difference was observed among the groups in terms of patient characteristics. In eight patients, laparoscopic appendectomy was converted to open surgery for various reasons. Mean length of hospital stay was two days in all of the groups that underwent laparoscopic appendectomy. No significant difference was found among the groups in terms of duration of surgery, complications, and postoperative analgesic requirement, but a significant difference was observed in terms of the cost of treatment. Conclusions: We suggest that the handmade endo-loop technique is a safe, applicable, and cost-effective method to be used in the closure of the appendiceal stump. abstract_id: PUBMED:34059039 A randomized clinical trial of technical modifications of appendix stump closure during laparoscopic appendectomy for uncomplicated acute appendicitis. Background: Closure of the appendix stump represents the most critical part of laparoscopic appendectomy. The aim of the present study was to compare the medical outcomes and cost analysis of laparoscopic appendectomy with respect to the different methods of stump closure. Methods: This was a prospective randomized clinical trial conducted in a single institution (University Hospital Ostrava) within a 2-year study period. All included patients were randomized into one of three trial arms (endoloop, hem-o-lok clips or endostapler). Results: In total, 180 patients (60 patients in each arm) were enrolled into the study. The mean length of hospital stay (3.6 ± 1.7 days) was comparable in all study arms. The shortest operative time was noted in the hem-o-lok subgroup of patients (37.9 ± 12.5 min). Superficial surgical site infection was detected in 4.4% of study patients; deep surgical site infection was noted in 1.7% of the patients. The frequency of surgical site infections was comparable in all study arms (p = 0.7173). The mean direct costs of laparoscopic appendectomy were significantly the lowest in the hem-o-lok subgroup of patients. Laparoscopic appendectomy is not a profit-making procedure in our institution (the mean profit from the study patients was -104.3 ± 579.2 euros).
Closure of the appendix stump by means of an endostapler is the most expensive and the highest loss-incurring technique (p = 0.0072). Conclusions: The present study indicates that all technical modifications of appendix stump closure are comparable with regard to postoperative complications. The stapler technique is significantly the most expensive. We concluded that hem-o-lok clips have the potential for becoming the preferred method of securing the appendix base during laparoscopic appendectomy. Trial registration NCT03750032 ( http://www.clinicaltrials.gov ). abstract_id: PUBMED:23630557 Analysis of intracorporeal knotting with invaginating suture versus endoloops in appendiceal stump closure. Introduction: Laparoscopic appendectomy is a well-described surgical technique and has gained wide clinical acceptance. Laparoscopic appendectomy offers fewer wound infections, faster recovery and an earlier return to work in comparison to open surgery. However, concerns still exist regarding the appendiceal stump closure. Aim: The aim of this study was to compare the overall incidence and specific intraoperative and postoperative complications after application of intracorporeal knotting with invaginating suture versus endoloops for stump closure in laparoscopic appendectomy. Material And Methods: One hundred fifty-two consecutive patients were included in the study according to the following inclusion criteria: 1. laparoscopic appendectomy was performed during the study period; 2. acute phlegmonous or gangrenous appendicitis without perforation was diagnosed during the operation. Exclusion criteria - patients with acute perforated appendicitis and local or diffuse peritonitis. Data were grouped according to the appendiceal stump closure technique, with either endoloops - 112 patients (73.7%) - or intracorporeal knotting with invaginating suture - 40 patients (26.3%). The primary outcome measure was the rate of intraabdominal surgical-site infection, defined as post-operative intra-abdominal abscess. Secondary outcome variables were intraoperative and postoperative complications, duration of operation, and hospital stay. Results: There were no significant differences between the two groups in the overall intraoperative and postoperative complication rates or in hospital stay. The median duration of operation was significantly shorter when the endoloop was used. The use of intracorporeal knotting with invaginating suture instead of the endoloop to close the appendiceal stump decreased the total cost of laparoscopic appendectomy. Conclusions: According to our study results, stump closure by intracorporeal knotting with invaginating suture is an acceptable laparoscopic technique whose intraoperative and postoperative results do not differ from those of the endoloop technique. The total cost of this procedure is 80 € cheaper than the endoloop technique. abstract_id: PUBMED:34381657 Comparison of Handmade Endoloop Versus Polymeric Endoclip for Stump Closure in Laparoscopic Appendectomy. Background Stump closure is an important stage of laparoscopic appendectomy. This study aimed to establish whether the handmade endoloop or polymeric endoclip method was more effective for stump closure in laparoscopic appendectomy. Methods The study included 76 patients who underwent laparoscopic appendectomy between October 2017 and January 2019. Patients' demographic characteristics, duration of surgery, length of hospital stay, and any complications were retrospectively recorded from files.
Patients were divided into two groups according to the stump closure method: polymeric endoclip or handmade endoloop. Results Among the patients, 59.2% (n = 45) were male and 40.8% (n = 31) were female. For stump closure, the polymeric endoclip method was used in 37 patients (48.7%) and the handmade endoloop method in 39 patients (51.3%). The two groups were not significantly different in terms of age and appendix diameter (p = 0.408 and p = 0.218, respectively). A total of four patients (5.3%) developed wound infection, including three from the handmade endoloop group and one from the polymeric endoclip group. One patient (1.3%) in the polymeric endoclip group developed ileus. The two groups were also not significantly different in terms of complications (p = 1.000). Conclusion We conclude that both stump closure methods are safe, and the more easily accessible handmade endoloop method can be performed reliably in all hospitals, including secondary healthcare facilities such as small hospitals. abstract_id: PUBMED:32802324 Closure of the appendiceal stump in laparoscopic appendectomy: A systematic review of the literature. Background: Closure of the appendiceal stump is a key step performed during laparoscopic appendicectomy. Inadequate management of the appendiceal stump has the potential to cause significant morbidity. Several methods of stump closure have been described; however, high-level evidence is limited. We performed a systematic review evaluating clinical outcomes and quality of the evidence for the methods of appendiceal stump closure. Methods: A systematic literature search was performed using Medline, Embase, Cochrane Database and Google Scholar to identify studies comparing appendiceal stump closure methods in laparoscopic appendectomy for acute appendicitis from inception to October 2019. Data regarding operative duration, peri-operative complications, length of stay and costs were collated from all included studies. Results: From 160 identified studies, 19 met the inclusion criteria. Endoloops and endoclips provide equivalent clinical outcomes at lower cost, while operative duration was shortest with endoclip closure. Endostapler devices have the lowest rate of peri-operative complications (3.56%); however, their cost limits their regular use in many healthcare environments. Post-operative complication rate and length of stay were similar for all stump closure methods. Conclusion: Although there are no significant differences in method of stump closure in laparoscopic appendectomy, closure with endoclips provides the shortest operative duration. There is a need for robust and standardized reporting of cost data when comparing stump closure methods, together with higher level evidence in the form of multi-centre randomized controlled trials, before firm conclusions can be drawn regarding the optimal method of stump closure. abstract_id: PUBMED:36984534 Self-Locking Polymeric Clips Are Safe for the Closure of Appendiceal Stump in Laparoscopic Appendectomy. Background: Closure of the appendix stump is critical to avoid serious postoperative complications. There are a number of options, but the best one has not been identified yet. The purpose of this study is to evaluate the outcomes of appendiceal stump closure using self-locking polymeric clips and endoloops. Methods: A retrospective analysis of the prospectively maintained database of patients with acute appendicitis was performed.
Patient demographic details and surgical characteristics, including the duration of hospital stay, postoperative complications, as well as the cost of the appendix stump closure, were recorded. Patients were divided into two groups according to the appendix stump closure method: the clips group if it was closed with self-locking polymeric clips and the loops group if Vicryl or PDS loops were used. Statistical analysis was performed using Pearson's chi-squared test, the Wilcoxon rank sum (Mann-Whitney U) test, and Fisher's exact test in the R statistical software package version 4.2.1. Results: 515 patients were included in the study from June 2016 to April 2021. There were no significant differences in terms of demographics (sex, p > 0.99; age, p = 0.16), postoperative complications (p > 0.99), histological findings (p = 0.27), or length of hospital stay (p = 0.18) between the two patient groups (clips group, N = 454 and loops group, N = 61). The cost of the operation differs significantly between the appendiceal stump closure methods. In a laparoscopic appendectomy, one stump closure costs 7.69 € with self-locking clips, 91.35 € with Vicryl loops, 96.51 € with PDS loops, and 514.50 € with a stapler. Conclusions: Self-locking polymeric clips can be used for the safe and effective closure of an appendiceal stump. There were no significant differences in the 30-day postoperative complication rates among patients in the two (clips and loops) groups. Thus, this might be a technique to reduce expenses while maintaining good postoperative results after laparoscopic appendectomy. abstract_id: PUBMED:30747049 Comparison of stump closure techniques during laparoscopic appendectomies for complicated appendicitis - results from Pol-LA (Polish laparoscopic appendectomy) multicenter large cohort study. Background: In general, the three main options for stump closure in laparoscopic appendectomy are clips, endoscopic staplers and endoloops. However, there is no gold standard, especially regarding complicated acute appendicitis, which is generally associated with worse outcomes. Objectives: We aimed to assess the outcomes of different stump closure techniques during laparoscopic appendectomies for complicated appendicitis. Methods: Our multicenter observational study of 18 surgical units assessed the outcomes of 1269 laparoscopic appendectomies for complicated appendicitis that used the three main stump closure techniques: clips, staplers and endoloops. Groups were compared in terms of peri- and postoperative outcomes. Results: Staplers were superior in terms of overall morbidity (9.79% vs. 3.29% vs. 7.41%, p = .017) and length of stay (4 vs. 3 vs. 4 days, p < .001), respectively, for clips, staplers and endoloops. However, no differences in major complication rates, postoperative intraabdominal abscess formation, reintervention rates and readmission rates were found. Conclusion: Although our results show some clinical benefits of staplers for appendix stump closure, they are based on a non-randomized group of patients and are therefore prone to selection bias. Further well-designed trials taking into consideration not only the clinical benefits but also the economic aspects of the surgical treatment of complicated acute appendicitis are needed to confirm our results. abstract_id: PUBMED:25712546 Appendix stump closure with endoloop in laparoscopic appendectomy.
Aim: To address the question of appendix stump closure in laparoscopic appendectomy and evaluate the incidence of intraoperative and postoperative complications after application of endoloops. Material Of Study: Data included age and gender, ASA score, co-morbid illness, prior abdominal surgery, clinical presentation, operative time, conversion rate and reasons for conversion, postoperative morbidity and mortality rates, pathologic diagnosis, and length of hospital stay. Results: Laparoscopic appendectomy with stump closure using endoloops was performed in 35 consecutive patients. Postoperative complications occurred in three patients. The mean postoperative hospital stay was 2 days. The perioperative mortality rate was 0%. Discussion: The management of the appendiceal stump is the main technical aspect of laparoscopic appendectomy and an important step because of the potentially serious postoperative complications that can result from inappropriate management. Endostapling, hem-o-lok clips and intracorporeal knotting are the most commonly recommended methods. All alternatives have advantages and disadvantages in the different clinical stages of acute appendicitis, and it should be noted that the different forms of appendiceal stump closure have never been assessed in prospective randomized studies. Conclusions: From our experience we can confirm that the endoloop can be safely used and should be the preferred modality in treating the appendiceal stump, especially for low-grade appendicitis. Alternative methods should be used in case of phlegmonous or gangrenous appendicitis, perityphlitic abscess or simultaneous operation for Meckel's diverticulum. Knowledge about and appropriate use of all of them are important for a safe and more cost-effective procedure. Key Words: Appendiceal stump closure, Laparoscopic appendectomy. abstract_id: PUBMED:34229211 Stump appendicitis after laparoscopic appendectomy; laparoscopic management and literature review. Introduction: Appendectomy is one of the most common surgical procedures performed worldwide. There are different etiologies for acute appendicitis, such as obstruction of the appendiceal lumen by a fecalith, lymphoid hyperplasia, or neoplasm. Laparoscopic appendectomy has become the treatment of choice for both complicated and uncomplicated appendicitis; common postoperative complications include wound infection, bleeding, and intraabdominal abscess. Stump appendicitis is defined as recurrent inflammation of the residual appendiceal tissue remaining after an appendectomy. Presentation Of Case: A 38-year-old female patient with a history of laparoscopic appendectomy performed in 2016 for acute uncomplicated appendicitis. She arrived at the emergency room with abdominal pain rated 7 out of 10, located in the periumbilical region and later migrating to the right lower quadrant. An abdominal CT scan showed a cecal and pericecal inflammatory process as well as the base and residual proximal portion of the cecal appendix; laparoscopic stump appendectomy was performed. Discussion: Stump appendicitis (SA) is defined as the inflammation of the remnant of the cecal appendix after an appendectomy, whether due to impaction of a fecalith or secondary to an ischemic process. The probability of developing SA is estimated to be about 1 in 50,000 cases over a lifetime.
The most frequently used treatment is exploratory laparotomy to complete the previous appendectomy; however, there are 5 reported cases of stump appendicitis where surgical resolution was achieved through laparoscopic surgery. Conclusions: It is important to keep this entity in mind when evaluating a patient with an acute abdomen and a previous history of appendectomy, since delay in diagnosis and treatment increases morbidity and mortality; laparoscopic stump appendectomy has been shown to be a safe treatment (Agha et al., 2020 [14]). abstract_id: PUBMED:29056133 Polymeric Clips for Stump Closure in Laparoscopic Appendectomy. Many different techniques are used to perform laparoscopic appendectomy in terms of locations of trocars and closure of the appendiceal stump. They include mechanical endostaplers, endoligature, metal clips, bipolar endocoagulation, polymeric clips and intracorporeal sutures. The method of choice for appendiceal stump closure should be inexpensive and easy to perform. Non-absorbable polymer clips are an acceptable option for this purpose. Polymeric clips provide considerable cost savings as compared with endoscopic staplers, and are easy to apply in comparison to suture ligature techniques. In this study, we aimed to investigate outcomes of appendectomy carried out by using polymeric clips in 123 patients without any intraabdominal collection of pus or abscess. As such, the authors found polymeric clips to be safe, cheap and effective for stump closure in laparoscopic appendectomy. Answer: Based on the provided abstracts, there is no single "ideal" stump closure technique in laparoscopic appendectomy that is universally superior in all aspects. Different studies have compared various methods of appendiceal stump closure, including stapler closure, ENDOLOOP®, Hem-o-lok® clips, handmade endo-loop technique, polymeric endoclip, and intracorporeal knotting with invaginating suture. The choice of technique often depends on factors such as the condition of the appendix, cost considerations, the surgeon's experience, and the availability of materials. Several studies suggest that while there are no significant differences in terms of postoperative complications and length of hospital stay among different stump closure methods, there are differences in terms of cost and ease of use. For instance, the handmade endo-loop technique is considered safe, applicable, and cost-effective (PUBMED:27121413), while Hem-o-lok® clips are noted for having the shortest operative time and being cost-effective (PUBMED:34059039). Intracorporeal knotting with invaginating suture is also reported to be an acceptable and cheaper alternative to endoloops (PUBMED:23630557). The use of polymeric clips for stump closure is described as safe and cost-effective (PUBMED:32802324; PUBMED:36984534; PUBMED:29056133), and the more easily accessible handmade endoloop is suggested as a method that can be performed reliably even in smaller hospitals (PUBMED:34381657). Self-locking polymeric clips are also highlighted for their safety and effectiveness, with no significant differences in postoperative complications compared to other methods, and they offer a significant cost advantage (PUBMED:36984534). In cases of complicated appendicitis, staplers are shown to have some clinical benefits in terms of overall morbidity and length of stay, but they are also the most expensive (PUBMED:30747049). Endoloops are considered safe and preferred for low-grade appendicitis, with alternative methods suggested for more complicated cases (PUBMED:25712546).
Overall, while there is no consensus on an ideal stump closure technique, the choice should be tailored to the specific clinical situation, taking into account the safety, efficacy, and cost, as well as the surgeon's preference and experience.
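To put the cost differences from this instance in perspective, the per-closure material prices reported in PUBMED:36984534 can be scaled to a caseload. A minimal sketch; the prices are those quoted in the abstract, while the annual volume is a made-up illustration:

```python
# Per-closure material costs in euros, as reported in PUBMED:36984534.
costs = {"self-locking clips": 7.69, "Vicryl loops": 91.35,
         "PDS loops": 96.51, "stapler": 514.50}

n = 500  # hypothetical annual appendectomy volume, for illustration only
baseline = costs["self-locking clips"]
for method, unit_cost in costs.items():
    extra = (unit_cost - baseline) * n
    print(f"{method}: {unit_cost * n:,.2f} EUR total, "
          f"+{extra:,.2f} EUR vs. clips")
```

At any realistic volume, the gap between clips and staplers dominates every other cost difference discussed in the abstracts above.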
Instruction: Demonstration of EGFR gene copy loss in colorectal carcinomas by fluorescence in situ hybridization (FISH): a surrogate marker for sensitivity to specific anti-EGFR therapy? Abstracts: abstract_id: PUBMED:16324192 Demonstration of EGFR gene copy loss in colorectal carcinomas by fluorescence in situ hybridization (FISH): a surrogate marker for sensitivity to specific anti-EGFR therapy? Aims: To investigate EGFR gene copy number heterogeneity in colorectal carcinomas compared with copy number of chromosome 7 and immunohistochemical expression of the EGFR protein. Methods And Results: Fluorescence in situ hybridization of the EGFR gene and CEP7 was carried out on paraffin-embedded material from 48 rectal carcinomas combined with immunohistochemical detection of EGFR with a polymer detection kit. EGFR gene copy number had a range of 1.4-7.3 with a mean of 2.5. CEP7 copy number had a range of 1.5-6.1 with a mean of 2.5. The EGFR gene/CEP7 ratio ranged from 0.4 to 1.5 with a mean of 0.96. Most cases had a balanced EGFR gene/CEP7 ratio (37 cases = 77%). Copy gain was found in seven cases (15%) with a ratio of up to 1.5, consistent with gain of one EGFR gene copy in one chromosome. Copy loss was found in four cases (8%). All cases with EGFR gene copy loss were immunohistochemically positive. Conclusions: Demonstration of EGFR gene copy loss might be a surrogate marker for EGFR mutation/deletion and could be used in a routine setting in pathology departments. Further studies are needed to determine whether this may be used to select patients that might benefit from specific anti-EGFR therapy. abstract_id: PUBMED:30096327 EGFR gene copy number decreases during anti-EGFR antibody therapy in colorectal cancer. Epidermal growth factor receptor (EGFR) gene copy number (GCN) increase is associated with a favorable anti-EGFR antibody treatment response in RAS wild-type metastatic colorectal cancer. However, there are limited and comparative data regarding the EGFR GCN in primary colorectal cancer tumors and corresponding metastases or the effect of anti-EGFR antibody treatment on EGFR GCN in recurrent disease. In addition, little is known about the potential EGFR GCN changes during anti-EGFR therapy in comparison with other treatment regimens. EGFR GCN was analyzed by EGFR immunohistochemistry-guided silver in situ hybridization in primary and corresponding recurrent local or metastatic tumors from 80 colorectal cancer patients. GCN levels were compared between KRAS wild-type patients having received anti-EGFR therapy and patients having received other forms of treatment after primary surgery. The EGFR GCN decrease between primary and recurrent tumors was more pronounced among the anti-EGFR-treated patients than among patients not treated with anti-EGFR therapy (P = .047). None of the patients experiencing an EGFR GCN increase of at least 1.0 between the primary and recurrent tumors were treated with anti-EGFR antibodies. When including only patients with distant metastases, an EGFR GCN decrease of at least 1.0 was more common among the anti-EGFR-treated patients than among patients not treated with anti-EGFR therapy (P = .028). Our results suggest that anti-EGFR antibody treatment is associated with EGFR GCN decrease between the primary and recurrent colorectal adenocarcinomas, whereas no GCN change is observed among patients receiving other forms of treatment after primary surgery. 
abstract_id: PUBMED:20843314 Chromogenic in situ hybridization to detect EGFR gene copy number in cell blocks from fine-needle aspirates of non-small cell lung carcinomas and lung metastases from colorectal cancer. Background: Several studies demonstrated that epidermal growth factor receptor (EGFR) gene copy number (GCN) correlates with the response to tyrosine kinase inhibitors in non-small cell lung cancer (NSCLC) and to anti-EGFR monoclonal antibodies (MoAbs) in metastatic colorectal cancer (CRC). In the presence of lung nodules, cytology is often the only possible diagnostic approach. Chromogenic in situ hybridization (CISH) is an alternative technique to fluorescence in situ hybridization (FISH), but its feasibility in detecting EGFR GCN in cell blocks from fine-needle aspiration cytology (FNAC) of lung nodules has not yet been established. Methods: We evaluated the feasibility of CISH on 33 FNAC from 20 primary NSCLC (5 squamous carcinomas, 8 large cell carcinomas and 7 adenocarcinomas) and 13 lung metastases from CRC. Results: Of the 33 FNAC analyzed by CISH, 27 (82%) presented a balanced increase in EGFR gene and chromosome 7 number: 10 cases (30%) showed low polysomy, 15 (45%) high polysomy and 2 (6%) NSCLC were amplified. No significant differences between NSCLC and CRC lung metastases were found in relation to disomic or polysomic status. In addition, no correlation between EGFR GCN and EGFR immunohistochemical overexpression was found. Furthermore, we compared CISH results with those obtained by FISH on the same samples and we found 97% overall agreement between the two assays (κ = 0.78, p < 0.0001). Two cases were amplified with both assays, whereas 1 case of NSCLC was amplified by FISH only. CISH sensitivity was 67%, the specificity and positive predictive value (PPV) were both 100%, and the negative predictive value (NPV) was 97%. Conclusions: Our study shows that CISH is a valid method to detect EGFR GCN in cell blocks from FNAC of primary NSCLC or metastatic CRC to the lung. abstract_id: PUBMED:18794099 Clinical usefulness of EGFR gene copy number as a predictive marker in colorectal cancer patients treated with cetuximab: a fluorescent in situ hybridization study. Purpose: To evaluate the usefulness of, and the pitfalls inherent to, the assessment of the epidermal growth factor receptor (EGFR) gene copy number (GCN) by fluorescence in situ hybridization (FISH) for predicting outcome with cetuximab in metastatic colorectal cancer. The value of testing KRAS mutation status, in addition to EGFR GCN, was also explored. Experimental Design: FISH analysis of 87 metastatic colorectal cancer patients treated with cetuximab was done, recording individual GCN per cell and using different samples per tumor. Performances of published cutoff points and different summaries of EGFR GCN distribution were assessed for response prediction. Results: In our data set, two published cutoff points performed less well than in their training set, yielding positive predictive values and negative predictive values between 40.0% and 48.3% and between 81.0% and 86.5%, respectively. Among the summaries of GCN distribution explored, the mean and the right-tailed distribution of GCN yielded the highest performances. A mean EGFR GCN ≥ 2.83 provided an area under the curve of 0.71.
Important heterogeneity of repeated measures of mean EGFR GCN was observed within tumors (intraclass correlation, 0.61; within-class SD, 0.40), leading to potential misclassifications of FISH status in 7 of 18 (38.8%) patients if a cutoff point were used. In multivariable analysis, EGFR GCN testing provided significant information independent of the KRAS status to predict response (P = 0.016) and overall survival (P = 0.005). Conclusions: We confirm the association between increased EGFR GCN and outcome after cetuximab. However, because of reproducibility concerns, any decision making based on published cutoff points is not warranted. abstract_id: PUBMED:20884623 EGFR fluorescence in situ hybridization pattern of chromosome 7 disomy predicts resistance to cetuximab in KRAS wild-type metastatic colorectal cancer patients. Purpose: Metastatic colorectal cancer patients with low epidermal growth factor receptor (EGFR) gene copy number are unlikely to respond to anti-EGFR monoclonal antibody (mAb) treatment. The objective of this study was to investigate the association of EGFR fluorescence in situ hybridization (FISH) patterns of chromosome 7 disomy with the efficacy of cetuximab therapy in metastatic colorectal cancer patients. Experimental Design: We determined the EGFR FISH patterns and KRAS status in 74 tumors from cetuximab-treated metastatic colorectal cancer patients and analyzed their association with response rate (RR) and progression-free survival (PFS). Results: One of the 16 (6.25%) patients with chromosome 7 homogeneous disomy (defined as FISH negative) had an objective response to cetuximab. A total of 53 (76.8%) patients with a chromosome 7 pattern of variable ratios of disomy versus polysomy (defined as FISH positive) had a significantly higher RR (37.7% versus 6.25%; P = 0.01) and a trend towards longer PFS (4.5 versus 2.9 months; P = 0.07). Among 54 KRAS wild-type patients, EGFR FISH-positive patients had a significantly higher RR (51.3% versus 9%; P = 0.01) and longer PFS (5.0 versus 2.3 months; P = 0.02) than EGFR FISH-negative patients. However, among 20 KRAS mutant-type patients, there was no difference in RR (0% versus 0%) or PFS (2.5 versus 3.8 months; P = 0.51) between EGFR FISH-positive and -negative patients. Conclusion: Our results show for the first time that patients with an EGFR FISH pattern of chromosome 7 disomy have a very low chance of benefiting from cetuximab-based therapy. The EGFR FISH pattern of chromosome 7 disomy may serve as a negative predictive factor for cetuximab response in KRAS wild-type metastatic colorectal cancer patients. abstract_id: PUBMED:37386758 Detection of Nine Oncogenes Amplification in Lung and Colorectal Cancer Formalin-Fixed Paraffin-Embedded Tissue Samples using Combined Next-Generation Sequencing-Based Script and Digital Droplet Polymerase Chain Reaction. Introduction: Gene copy number variations have theranostic impact and require reliable methods for their identification. We aimed to evaluate the reliability of a combined next-generation sequencing (NGS) and digital droplet PCR (ddPCR) method for gene amplification evaluation. Methods: We conducted a retrospective multicentric observational study. MET/ERBB2 amplifications were assessed in patients with lung or colorectal carcinoma (cohort A), from 2016 to 2020, by fluorescence in situ hybridization (FISH)/immunohistochemistry (IHC), NGS and ddPCR. The NGS-based script and ddPCR were then used to detect amplifications of 7 additional oncogenes (EGFR, KRAS, BRAF, FGFR1, FGFR2, FGFR3, PIK3CA) in a cohort of patients (cohort B).
Results: 55 patients (9 control, 25 ERBB2-amplified and 21 MET-amplified) out of 3779 patients tested were included in cohort A. Correlation coefficients between the NGS-based script and FISH/IHC results were .88 for MET (P < .001) and .89 for ERBB2 (P < .001). Using a threshold ratio of 1.56 with the NGS-based script, the sensitivity was 100% for both genes and the specificity was 69% for MET and 90% for ERBB2, respectively. With an alternative 1.76 threshold, sensitivity was 94% for MET and 96% for ERBB2, while specificity was 85% for MET and 90% for ERBB2. The correlation coefficient between FISH and ddPCR ratios was .90 for MET and .88 for ERBB2. In both cohorts, NGS-based script and ddPCR results were significantly correlated for all genes (P < .001). Conclusion: The combined NGS-based script and ddPCR method is reliable and easily feasible for the detection of gene amplifications, providing useful data for guided therapy in cancer. abstract_id: PUBMED:21694725 EGFR gene copy number assessment from areas with highest EGFR expression predicts response to anti-EGFR therapy in colorectal cancer. Background: Only 40-70% of metastatic colorectal cancers (mCRCs) with wild-type (WT) KRAS oncogene respond to anti-epidermal growth factor receptor (anti-EGFR) antibody treatment. EGFR amplification has been suggested as an additional marker to predict the response. However, improved methods for bringing the EGFR analysis into the routine laboratory are needed. Methods: The material consisted of 80 patients with mCRC, 54 of them receiving anti-EGFR therapy. EGFR gene copy number (GCN) was analysed by automated silver in situ hybridisation (SISH). Immunohistochemical EGFR protein analysis was used to guide SISH assessment. Results: Clinical benefit was seen in 73% of high (≥ 4.0) EGFR GCN patients, in comparison with 59% of KRAS WT patients. Only 20% of low EGFR GCN patients responded to therapy. A high EGFR GCN was associated with longer progression-free survival (P<0.0001) and overall survival (P=0.004). Together with KRAS analysis, EGFR GCN identified the patients responsive to anti-EGFR therapy more accurately than either test alone. The clinical benefit rate of KRAS WT/high EGFR GCN tumours was 82%. Conclusion: Our results show that automated EGFR SISH, in combination with KRAS mutation analysis, can be a useful and easily applicable technique in routine diagnostic practice for selecting patients for anti-EGFR therapy. abstract_id: PUBMED:17845756 Protein overexpression and gene copy number of EGFR and HER2 in colorectal carcinoma Objective: To investigate the protein expression and gene copy number of EGFR and HER2, and the correlation between the two markers, in colorectal carcinomas in Chinese patients. Methods: A total of 42 samples of paraffin-embedded colorectal carcinomas in tissue microarray format were studied by immunohistochemistry (IHC) and fluorescence in situ hybridization (FISH) for EGFR and HER2 protein expression and gene copy number status, respectively. Results: Among 42 cases evaluated, EGFR scores were 0 in 18 cases, 1+ in 10 cases, 2+ in 5 cases and 3+ in 9 cases. HER2 expression was negative in 39 tumors, 1+ in 1 tumor, 2+ in 1 tumor and 3+ in 1 tumor. For FISH assessment of EGFR, 18 (42.9%) cases showed no apparent copy number changes, including 14 (33.3%) cases of disomy and 4 (9.5%) cases of low trisomy, while 24 (57.1%) cases showed increased gene copy numbers, including high trisomy in 3/42 (7.1%), low polysomy in 9/42 (21.4%) and high polysomy in 12/42 (28.6%) cases.
Gene amplification of EGFR was not detected. Four of 42 patients (9.5%) had increased HER2 gene copy number, including 3 patients with high polysomy and 1 patient with gene amplification. No significant association was seen between EGFR protein expression and gene copy number, nor between either marker and tumor differentiation. There was a highly significant concordance between gene amplification and IHC 3+ for HER2, similar to that seen in breast cancer. Conclusions: Protein expression and/or increased gene copy number of EGFR is common in colorectal carcinomas but unrelated to pathological features in this cohort. HER2 protein overexpression and/or gene amplification are rare. abstract_id: PUBMED:17987340 Cetuximab shows activity in colorectal cancer patients with tumors for which FISH analysis does not detect an increase in EGFR gene copy number. Background: EGFR (epidermal growth factor receptor) gene gain assessed by FISH (fluorescence in situ hybridization) has been shown to be predictive of response to EGFR-targeted therapies in patients with non-small cell lung cancer. The aim of our study was to relate the EGFR gene copy number to therapeutic results in patients with metastatic colorectal cancer (CRC) treated with a cetuximab-containing regimen. Methods: Forty-seven patients with metastatic CRC treated with a cetuximab-containing regimen between August 2004 and September 2006 were included in our study. EGFR status was assessed by immunohistochemistry (IHC) and by FISH on fixed paraffin-embedded sections of tumor specimens. Results: By IHC (n = 47), 39 patients (83%) had EGFR-positive tumors. EGFR gene copy gain was detected in 8 (19.5%) of 41 tumors. Neither EGFR expression assessed by IHC nor EGFR gene copy gain assessed by FISH was statistically significantly correlated with objective response rate, disease control rate, progression-free survival, or overall survival. Of the 33 patients whose tumors were FISH negative, 8 patients (24.2%) had a partial response, and 10 (30.3%) had stable disease. Conclusions: EGFR FISH analysis does not seem to be a sufficiently robust test for selecting candidate CRC patients for cetuximab therapy. abstract_id: PUBMED:19671679 Oncogenic activating mutations are associated with local copy gain. Although activating mutations and gains in copy number are key mechanisms for oncogene activation, the relationship between the two is not well understood. In this study, we focused on KRAS copy gains and mutations in non-small cell lung cancer. We found that KRAS copy gains occur more frequently in tumors with KRAS activating mutations and are associated with large increases in KRAS expression. These copy gains tend to be more focal in tumors with activating mutations than in those with wild-type KRAS. Fluorescence in situ hybridization analysis revealed that some tumors have homogeneous low-level gains of the KRAS locus, whereas others have high-level amplification of KRAS, often in only a fraction of tumor cells. Associations between activating mutation and copy gains were also observed for other oncogenes (EGFR in non-small cell lung cancer, BRAF and NRAS in melanoma). Activating mutations were associated with copy gains only at the mutated oncogene locus but not at other oncogene loci. However, KRAS activating mutations in colorectal cancer were not associated with copy gains. Future work is warranted to clarify the relationship among the different mechanisms of oncogene activation.
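The response-rate comparisons above rest on small 2x2 tables (e.g., 20 of 53 FISH-positive versus 1 of 16 FISH-negative responders in PUBMED:20884623). The abstract does not name the statistical test behind P = 0.01; the sketch below uses Fisher's exact test as one plausible reconstruction of that comparison.

# Reconstruct the 2x2 response-rate comparison from PUBMED:20884623.
# 20/53 EGFR FISH-positive vs 1/16 FISH-negative patients responded.
# The abstract does not name the test behind P = 0.01; Fisher's exact
# test is used here as one reasonable choice for a small 2x2 table.
from scipy.stats import fisher_exact

table = [[20, 53 - 20],   # FISH-positive: responders, non-responders
         [1, 16 - 1]]     # FISH-negative: responders, non-responders
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
# The two-sided p comes out below 0.05, in line with the reported P = 0.01.
print(f"OR = {odds_ratio:.1f}, p = {p_value:.3f}")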
Answer: The demonstration of EGFR gene copy loss in colorectal carcinomas using fluorescence in situ hybridization (FISH) has been investigated as a potential surrogate marker for sensitivity to specific anti-EGFR therapy. In a FISH study of 48 rectal carcinomas, copy loss was present in 8% of cases, all of which were immunohistochemically positive for EGFR. This suggests that EGFR gene copy loss might be indicative of EGFR mutation/deletion and could potentially be used to select patients who might benefit from specific anti-EGFR therapy (PUBMED:16324192). However, further studies are needed to determine the clinical utility of this marker. For instance, in a study of 47 metastatic colorectal cancer patients treated with cetuximab, neither EGFR expression assessed by immunohistochemistry nor EGFR gene copy gain assessed by FISH was statistically significantly correlated with objective response rate, disease control rate, progression-free survival, or overall survival, indicating that EGFR FISH analysis may not be a sufficiently robust test for selecting candidate colorectal cancer patients for cetuximab therapy (PUBMED:17987340). Moreover, the relationship between EGFR gene copy number and response to anti-EGFR therapy is complex. For example, one study found that patients with an EGFR FISH pattern of chromosome 7 disomy had a very low chance of benefiting from cetuximab-based therapy, suggesting that this FISH pattern may be a negative predictive factor for cetuximab response in KRAS wild-type metastatic colorectal cancer patients (PUBMED:20884623). In summary, while there is some evidence to suggest that EGFR gene copy loss as detected by FISH could be a surrogate marker for sensitivity to anti-EGFR therapy in colorectal carcinomas, the predictive value of this marker is not yet fully established, and more research is needed to validate its clinical relevance.
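A side note on the threshold trade-off cited above from PUBMED:37386758 (ratio cutoffs of 1.56 versus 1.76): raising a cutoff generally buys specificity at the cost of sensitivity. The sketch below demonstrates this mechanic with synthetic ratios; none of these numbers are the study's measurements.

# Sketch of cutoff selection for an amplification ratio, mirroring the
# 1.56 vs 1.76 trade-off reported in PUBMED:37386758. The ratios below
# are synthetic illustration data, not the study's measurements.

def sens_spec(ratios, truth, cutoff):
    tp = sum(r >= cutoff and t for r, t in zip(ratios, truth))
    fn = sum(r < cutoff and t for r, t in zip(ratios, truth))
    tn = sum(r < cutoff and not t for r, t in zip(ratios, truth))
    fp = sum(r >= cutoff and not t for r, t in zip(ratios, truth))
    return tp / (tp + fn), tn / (tn + fp)

ratios = [1.2, 1.4, 1.6, 1.7, 1.8, 2.4, 3.0, 1.5]
truth  = [False, False, False, True, True, True, True, False]  # amplified by FISH?
for cutoff in (1.56, 1.76):
    se, sp = sens_spec(ratios, truth, cutoff)
    print(f"cutoff {cutoff}: sensitivity {se:.0%}, specificity {sp:.0%}")
# The higher cutoff trades sensitivity (100% -> 75%) for specificity (75% -> 100%).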
Instruction: Thyroid nodules with microfollicular findings reported on fine-needle aspiration: invariably surgical treatment? Abstracts: abstract_id: PUBMED:10025384 Aspiration needle biopsy in preoperative selection of thyroid nodules defined at fine-needle aspiration as microfollicular lesions. The palpable thyroid nodules that are diagnosed as microfollicular by fine-needle aspiration cytologic analysis are usually excised for the low probability that the nodule is a well-differentiated follicular cancer. The authors retrospectively assess the use of aspiration needle biopsy (either 16- or 18-gauge needles) in the preoperative selection of the nodules diagnosed as microfollicular at fine-needle aspiration (either 20- or 22-gauge needles). Aspiration needle biopsy is a type of large needle biopsy that is a relatively easy and innocuous method of obtaining tissue fragments for preoperative histologic examination of palpable thyroid nodules. From 1980 through 1996, 6,314 patients with palpable thyroid nodules were examined by fine-needle aspiration; 29.5% of these nodules were also examined preoperatively by aspiration needle biopsy. Of all the patients with nodules, 6% received a fine-needle aspiration diagnosis of microfollicular nodule. Two hundred sixty of the 380 nodules (68%) that were diagnosed as microfollicular at fine-needle aspiration were also examined by aspiration needle biopsy; inadequate specimens were obtained in 17% of cases; pure microfollicular structure was confirmed by aspiration needle biopsy in 35% of the nodules; and aspiration needle biopsy showed the remaining 48% to contain a macrofollicular component suggesting a benign hyperplastic lesion. Seventeen nodules that were found to be microfollicular at fine-needle aspiration cytologic analysis and micromacrofollicular at aspiration needle biopsy were excised and the postoperative result was benign in all cases. Twenty-five nodules diagnosed as microfollicular either on both fine-needle aspiration and aspiration needle biopsy were excised and the postoperative diagnoses were benign (20 nodules) or malignant (5 nodules). These data indicate that aspiration needle biopsy can be used for preoperative selection of the nodules that are microfollicular at fine-needle aspiration by identifying the nodules with high probability of being malignant and thus contributing to the reduction in the number of surgical operations for benign nodules. abstract_id: PUBMED:11585370 Thyroid nodules with microfollicular findings reported on fine-needle aspiration: invariably surgical treatment? Objective: To assess whether fine-needle aspiration (FNA) can be a diagnostic procedure for identification of malignant thyroid nodules when the smear reveals the cytologic (microfollicular) finding of "follicular neoplasm" (FN). Methods: We reviewed a group of 79 patients with FN who underwent surgical treatment and evaluated the possible significance of the cytologic features of anisokaryosis, amount of colloid, and nuclear overlapping (NO). We excluded from the study those patients with FN who had other highly suspicious cytologic features, such as sporadic grooves and sporadic pseudonucleoli. Results: In our series of 79 patients with FN at FNA, we found anisokaryosis in 49.4%, scant or no colloid in 31.6%, and NO in 5.1%. Thyroid cancer (TC)--follicular cancer or follicular variant of papillary cancer--was found in six patients (7.6%): four patients (10.2%) with anisokaryosis and two patients (5.0%) without anisokaryosis (P<0.5; chi2 test). 
All the patients with malignant lesions had scant or no colloid in the smears, and four of them were those with NO. Therefore, TC was found in 24.0% of nodules with scant or no colloid versus 0.0% of nodules with abundant colloid (P<0.001; chi2 test), and TC was found in 100% of nodules with NO versus 2.6% of nodules without NO (P<0.001; chi2 test). Conclusion: Because of the high prevalence of thyroid nodules and the frequency of FN at FNA, the number of unnecessary interventions may be very high. FNA cannot be diagnostic in FN because invasion of the tumor capsule and blood vessels can be evidenced only at histologic examination. Although it is common opinion that microfollicular nodules should be surgically treated, simple cytologic features can help select patients at low risk who can be monitored and reassessed clinically and by FNA. Patients with high-risk cytologic features such as NO should be advised that a total thyroidectomy may be necessary. abstract_id: PUBMED:11943905 Large-needle aspiration biopsy for the preoperative selection of follicular adenoma diagnosed by fine-needle aspiration as a microfollicular nodule or suspected cancer. The preoperative fine-needle aspiration (FNA) cytologic diagnoses of microfollicular nodule and suspected cancer include nodules that yield histologic results of typically benign lesions, nodular hyperplasia and follicular adenomas, and differentiated thyroid cancer postoperatively. We reviewed the clinical and pathologic data for patients with palpable thyroid nodules with these two cytologic diagnoses; nodules were also examined by large-needle aspiration biopsy (LNAB) to assess whether the distinction achieved by LNAB in pure microfollicular or mixed microfollicular-macrofollicular nodules could be used preoperatively to better predict the diagnosis of follicular adenoma. One hundred fourteen nodules of this type were excised. Postoperative results were grouped in nodular hyperplasia (90 nodules), follicular adenoma (8), and differentiated carcinoma (16). The prevalence of the follicular adenoma and thyroid carcinoma postoperative findings was 33% among the microfollicular nodules and 5.9% among the microfollicular-macrofollicular nodules at preoperative LNAB, (p = 0.0004). Within the postoperatively benign nodules, the prevalence of follicular adenoma was 14.3% among the microfollicular nodules and 2% among the microfollicular-macrofollicular nodules at LNAB (p = 0.028). These data indicate that histologic examination of the LNAB specimen can be used to identify those thyroid nodules diagnosed by FNA as microfollicular nodules or suspected cancer, which are most likely to be follicular adenomas postoperatively. abstract_id: PUBMED:8952850 Aspiration needle biopsy refines preoperative diagnosis of thyroid nodules defined at fine needle aspiration as microfollicular nodule. The aim of this paper was to verify the hypothesis that large needle biopsy performed preoperatively can refine preoperative fine needle aspiration (FNA) cytological diagnoses of microfollicular nodules. Since 1980 we have been using FNA and aspiration needle biopsy (ANB) (18 or 16 gauge needles) to select for surgery all euthyroid patients with palpable thyroid nodules referred to our department. From 1980 to 1994, 6,124 patients (12% male, 88% female) with thyroid nodules (71% single, 29% multiple) were examined by FNA; 29% of these patients were also examined preoperatively by ANB histology. 
Of all the nodule patients examined, 371 received a preoperative FNA diagnosis of microfollicular nodule. Two hundred and fifty-four of these nodules (68%) were also examined preoperatively by ANB. Unsatisfactory ANB specimens constituted 17% of cases; pure microfollicular structure was confirmed by ANB in 36% of the nodules; ANB showed the remaining 47% to contain a macrofollicular component, thus suggesting a benign hyperplastic lesion. Twelve nodules which were found to be microfollicular at FNA cytology and micro-macrofollicular at ANB were excised and were subsequently determined as benign at definitive postoperative histology. These data indicate the utility of ANB in refining the preoperative FNA diagnosis of microfollicular nodule and in preoperatively identifying benign hyperplastic mixed micro-macrofollicular lesions which can be followed by observation. abstract_id: PUBMED:10874889 Large-needle aspiration biopsy for the preoperative selection of palpable thyroid nodules diagnosed by fine-needle aspiration as a microfollicular nodule or suspected cancer. The palpable thyroid nodules with a fine-needle aspiration (FNA) diagnosis of microfollicular nodule or suspected cancer usually are excised; however, most of them are proved benign by postoperative histologic examination. We reviewed the clinical and pathologic data for patients with thyroid nodules with an FNA diagnosis of microfollicular nodule or suspected cancer; nodules also were examined by large-needle aspiration biopsy (LNAB) to assess whether the distinction achieved by LNAB into pure microfollicular or mixed microfollicular-macrofollicular nodules could be used preoperatively to better predict malignancy. One hundred fourteen nodules of this type were excised. The prevalence of cancer was 22% (14/63) among the microfollicular and 4% (2/51) among the microfollicular-macrofollicular nodules at LNAB. These data indicate that histologic examination of the LNAB specimen can be used for preoperative selection of thyroid nodules diagnosed by FNA as a microfollicular nodule or suspected cancer. abstract_id: PUBMED:22057754 Incidence of malignancy in thyroid nodules determined to be follicular lesions of undetermined significance on fine-needle aspiration. Background: Fine-needle aspiration (FNA) for thyroid nodules is the most important method for determining a diagnosis. The system for reporting results is based on a cytopathologic classification that stratifies the risk of malignancy. Methods: We retrospectively studied 197 patients who underwent FNA for diagnostic evaluation of a thyroid nodule and had their results reported as a follicular lesion of undetermined significance (FLUS) using the Bethesda classification system. The objective of the study was to analyze the incidence and histopathologic types of malignancy in these cases. Results: The final histopathologic breakdown is as follows: 65 cases (32.9%) of follicular adenoma, 81 cases (41.1%) of microfollicular adenomatoid nodule, 19 cases (9.6%) of microfollicular adenomatoid nodule on the background of thyroiditis, 17 cases (8.6%) of follicular carcinoma, 9 cases (4.6%) of follicular variant papillary carcinoma, and 6 cases (3.1%) of classic papillary carcinoma, for a 16.2% incidence of malignancy. Beyond these diagnoses in the FNA-biopsied nodules, we observed 29 cases (14.7%) of incidental ipsilateral papillary thyroid microcarcinoma (PTM) and 13 cases (6.6%) of incidental contralateral thyroid lobe PTM. 
Conclusions: This study observed a 16.2% incidence of thyroid cancer in nodules designated FLUS, compared to the 5 to 15% rate reported by the Bethesda FNA classification. The overall incidence of incidental PTM in the thyroid gland was 21.3%. These data support considering surgical intervention for at least diagnostic purposes in a patient with the FNAB diagnosis of FLUS. abstract_id: PUBMED:9118431 Differentiating parathyroid adenoma from microfollicular thyroid nodule in fine-needle aspiration cytology Two cases of parathyroid adenoma examined by fine-needle aspiration cytology were not misinterpreted as microfollicular nodules of the thyroid. Discrimination of these two lesions is a diagnostic pitfall in cytological material. The authors point out criteria for the differential diagnosis. Radioimmunoassay of parathormone in the aspirate served as a simple, safe and reliable method in these cases. abstract_id: PUBMED:15376197 Fine-needle aspiration cytology of solitary thyroid nodules: how far can we go in rendering differential cytologic diagnoses? Fine-needle aspiration cytology (FNAC) is a diagnostic tool used in the clinical workup of solitary thyroid nodules; however, differential cytologic diagnosis of these nodules often is challenging. With the goal of identifying cytologic findings that could improve predictions regarding the presence of neoplastic lesions, the authors performed a retrospective review of cases in which FNAC led to diagnoses of solitary cellular nodules or cellular microfollicular lesions at two university hospitals. FNAC smears associated with cases for which surgical specimens subsequently were obtained were reviewed. FNAC accurately detected follicular neoplasms in 76% of cases at one hospital and in 67% of cases at the other. In the current report, the cytologic findings made in these cases are reevaluated, and the potential diagnostic contribution of available clinical data is discussed. abstract_id: PUBMED:35446712 Comparison of fine-needle aspiration with fine-needle capillary cytology in thyroid nodules. Introduction: High false-negative results have been reported for fine-needle aspiration (FNA) cytology in thyroid nodules. Fine-needle capillary (FNC) cytology is an alternative technique that prevents aspiration, reducing tissue damage. This study aimed to compare FNA and FNC in assessing thyroid nodules and in terms of their predictive role in the appropriate diagnosis of malignancy. Methods: This is a comparative prospective study conducted on 486 patients. FNA was performed in 235 patients during 2016 and 2017 and FNC in 251 patients during 2018 and 2019. The quality of cytological specimens was compared and then correlated with the final histopathological findings of 39 patients who underwent thyroidectomy. Results: Both groups were statistically similar regarding age and sex distribution. The FNA technique yielded significantly more adequate specimens than FNC (p<0.001). Abundant blood in the background was found more frequently with the FNA technique (p<0.001). The sensitivity and specificity of FNA for malignancy diagnosis were both 100%, compared with 83.3% and 57.7% for FNC, respectively. Conclusions: The two methods, FNA and FNC, did not differ in terms of overall quality. FNA was superior regarding consistency with the histopathological results and the ability to diagnose malignancy. abstract_id: PUBMED:35676051 Usefulness of ultrasound-guided core biopsy in thyroid nodules with inconclusive fine-needle aspiration biopsy findings.
Objectives: To determine the diagnostic performance of ultrasound-guided core-needle biopsy in thyroid nodules after two inconclusive fine-needle aspiration biopsies. To assess the complications of core-needle biopsy. To analyze the reliability of diagnoses obtained with core-needle biopsy. To measure the economic impact of avoiding lobectomies in patients with benign core-needle biopsy findings. Material And Methods: This retrospective study reviewed 195 core-needle biopsies in 178 patients. To determine the reliability of the core-needle biopsy findings, we compared the diagnosis from the core-needle specimen versus the histologic findings in the surgical specimens when core-needle biopsy findings indicated malignancy or follicular proliferation and versus the stability of the nodule on ultrasound follow-up for one year when core-biopsy findings indicated benignity. Results: Core-needle biopsy yielded a diagnosis for 179 (91.7%) nodules, of which 122 (62.5%) were classified as benign, 50 (25.6%) as follicular proliferation, and 7 (3.6%) as malignant. The findings were inconclusive for 16 (8.3%) nodules. Minor complications were observed in 4 (2%) patients; no major complications were observed. The sensitivity of core-needle biopsy for the diagnosis of thyroid cancer was low (42.8%) because the technique was unable to detect capsular or vascular invasion, although the specificity and positive predictive value (PPV) were 100%. However, when we considered histologic findings of malignancy and follicular proliferation positive because both require surgical resection, the sensitivity increased to 97.5% and the PPV decreased to 83.3%. There were 79 nodules with ultrasound follow-up for at least one year; 76 (96.2%) had negative core-needle biopsy findings, and 74 (97.3%) of these remained stable. The negative predictive value (NPV) for malignancy of the benign nodules was 98.6%, although no malignant transformation was observed. Nevertheless, the results of the statistical analysis do not allow us to recommend forgoing ultrasound follow-up in patients with benign core-biopsy findings. The cost savings of avoiding lobectomy in patients with benign nodules and stability of the nodule on ultrasound follow-up for at least one year was about 90%. Conclusions: Core-needle biopsy of thyroid nodules is effective because it diagnoses more than 90% of nodules with inconclusive findings after fine-needle aspiration biopsy. It is safe if done by experienced professionals. It is reliable because it yields 100% specificity and 100% PPV for malignant nodule, 97.5% sensitivity for the detection of nodules that require surgery, and 98.6% NPV for benign nodules. It is efficient because it reduces the costs of diagnosis compared to lobectomy in benign nodules. Answer: Thyroid nodules with microfollicular findings on fine needle aspiration (FNA) do not invariably require surgical treatment. While it is a common opinion that microfollicular nodules should be surgically treated due to the potential for malignancy, certain cytologic features can help select patients at low risk who can be monitored and reassessed clinically and by FNA (PUBMED:11585370). Simple cytologic features such as nuclear overlapping (NO) can help identify patients at higher risk, for whom a total thyroidectomy may be necessary (PUBMED:11585370). Aspiration needle biopsy (ANB) using larger gauge needles can refine the preoperative diagnosis of microfollicular nodules obtained by FNA. 
ANB can identify nodules with a high probability of being malignant, thus contributing to the reduction in the number of surgical operations for benign nodules (PUBMED:10025384). Histologic examination of large-needle aspiration biopsy (LNAB) specimens can be used to identify those thyroid nodules diagnosed by FNA as microfollicular nodules or suspected cancer, which are most likely to be follicular adenomas postoperatively (PUBMED:11943905). The use of ultrasound-guided core biopsy in thyroid nodules with inconclusive FNA findings can also be effective, as it diagnoses more than 90% of nodules with inconclusive findings after FNA and has a high negative predictive value (NPV) for benign nodules (PUBMED:35676051). This suggests that not all microfollicular nodules require immediate surgical intervention, and some can be managed with observation and follow-up. In summary, while the presence of microfollicular findings on FNA has traditionally been an indication for surgery due to the risk of malignancy, advancements in diagnostic techniques such as ANB, LNAB, and core-needle biopsy have provided additional tools to refine the preoperative assessment of these nodules. These methods can help to differentiate between benign and malignant lesions, potentially reducing the number of unnecessary surgeries.
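The biopsy performance figures quoted in these abstracts (sensitivity, specificity, PPV, NPV) all derive from a single 2x2 confusion matrix. A minimal helper making the relationships explicit is shown below; the cell counts are hypothetical, chosen only so the output reproduces the headline percentages of PUBMED:35676051 (97.5% sensitivity when follicular proliferation is counted positive, 100% specificity and PPV, 98.6% NPV), whose actual cell counts the abstract does not give.

# Diagnostic metrics from a 2x2 confusion matrix. The counts below are
# hypothetical, picked to reproduce the percentages in PUBMED:35676051.
def diagnostics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # surgery-requiring nodules correctly flagged
        "specificity": tn / (tn + fp),   # benign nodules correctly cleared
        "ppv": tp / (tp + fp),           # P(disease | test positive)
        "npv": tn / (tn + fn),           # P(no disease | test negative)
    }

print(diagnostics(tp=39, fp=0, fn=1, tn=72))
# -> sensitivity 0.975, specificity 1.0, ppv 1.0, npv ~0.986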
Instruction: Does early administration of epidural analgesia affect obstetric outcome in nulliparous women who are receiving intravenous oxytocin? Abstracts: abstract_id: PUBMED:8010465 Does early administration of epidural analgesia affect obstetric outcome in nulliparous women who are receiving intravenous oxytocin? Background: Some studies suggest that epidural analgesia prolongs labor and increases the incidence of cesarean section, especially if it is administered before 5 cm cervical dilation. The purpose of the current study was to determine whether early administration of epidural analgesia affects obstetric outcome in nulliparous women who are receiving intravenous oxytocin. Methods: Informed consent was obtained from healthy nulliparous women with a singleton fetus in a vertex presentation, who requested epidural analgesia while receiving intravenous oxytocin at at least 36 weeks' gestation. Each patient was randomized to receive either early or late epidural analgesia. Randomization occurred only after the following conditions were met: (1) the patient requested pain relief at that moment, (2) a lumbar epidural catheter had been placed, and (3) the cervix was at least 3 but less than 5 cm dilated. Patients in the early group immediately received epidural bupivacaine analgesia. Patients in the late group received 10 mg nalbuphine intravenously. Late-group patients did not receive epidural analgesia until they achieved a cervical dilation of at least 5 cm or until at least 1 h had elapsed after a second dose of nalbuphine. Results: Early administration of epidural analgesia did not prolong the interval between randomization and the diagnosis of complete cervical dilation, and it did not increase the incidence of malposition of the vertex at delivery. Also, early administration of epidural analgesia did not result in an increased incidence of cesarean section or instrumental vaginal delivery. Thirteen (18%) of 74 women in the early group and 14 (19%) of 75 women in the late group underwent cesarean section (relative risk for the early group 0.94; 95% confidence interval 0.48-1.84). Patients in the early group had lower pain scores between 30 and 120 min after randomization, and were more likely to experience transient hypotension. Infants in the late group had lower umbilical arterial and venous blood pH and higher umbilical arterial and venous blood carbon dioxide tension measurements at delivery. Conclusions: Early administration of epidural analgesia did not prolong labor or increase the incidence of operative delivery, when compared with intravenous nalbuphine followed by late administration of epidural analgesia, in nulliparous women who were receiving intravenous oxytocin. abstract_id: PUBMED:8010466 Does early administration of epidural analgesia affect obstetric outcome in nulliparous women who are in spontaneous labor? Background: Some studies suggest that epidural analgesia prolongs labor and increases the incidence of cesarean section, especially if it is administered before 5 cm cervical dilation. The purpose of the current study was to determine whether early administration of epidural analgesia affects obstetric outcome in nulliparous women who are in spontaneous labor. Methods: Informed consent was obtained from 344 healthy nulliparous women with a singleton fetus in a vertex presentation, who requested epidural analgesia during spontaneous labor at at least 36 weeks' gestation. Each patient was randomized to receive either early or late epidural analgesia. 
Randomization occurred only after the following conditions were met: (1) the patient requested pain relief at that moment, (2) a lumbar epidural catheter had been placed, and (3) the cervix was at least 3 cm but less than 5 cm dilated. Patients in the early group immediately received epidural bupivacaine analgesia. Patients in the late group received 10 mg nalbuphine intravenously. Late-group patients did not receive epidural analgesia until they achieved a cervical dilation of at least 5 cm or until at least 1 h had elapsed after a second dose of nalbuphine. Ten of the 344 patients were excluded because of a protocol violation or voluntary withdrawal from the study. Results: Early administration of epidural analgesia did not increase the incidence of oxytocin augmentation, prolong the interval between randomization and the diagnosis of complete cervical dilation, or increase the incidence of malposition of the vertex at delivery. Also, early administration of epidural analgesia did not result in an increased incidence of cesarean section or instrumental vaginal delivery. Seventeen (10%) of 172 women in the early group and 13 (8%) of 162 women in the late group underwent cesarean section (relative risk for the early group 1.22; 95% confidence interval 0.62-2.40). Patients in the early group had lower pain scores between 30 and 150 min after randomization. Infants in the late group had lower umbilical arterial and venous blood pH and higher umbilical venous blood carbon dioxide tension measurements at delivery. Conclusions: Early administration of epidural analgesia did not prolong labor, increase the incidence of oxytocin augmentation, or increase the incidence of operative delivery, when compared with intravenous nalbuphine followed by late administration of epidural analgesia, in nulliparous women who were in spontaneous labor at term. abstract_id: PUBMED:16434183 Impact of walking epidural analgesia on obstetric outcome of nulliparous women in spontaneous labour. Background: To explore the effects of walking epidural analgesia on obstetric and neonatal outcomes, we performed a case-control study. Method: Each nulliparous woman receiving walking epidural analgesia using 0.0625% bupivacaine (n = 44) was matched to two nulliparous historical controls receiving 0.125% or 0.25% bupivacaine (n = 88 each) for epidural analgesia while recumbent. Results: Maternal and obstetric parameters, fetal status and presentation, and oxytocin use were comparable among groups. Those receiving walking epidural analgesia walked for a mean of 60 min (range: 20-75 min). In the control groups the mean total durations of labour were shorter (by 58 min in the 0.125% group and by 99 min in the 0.25% group, P < 0.05). Significantly fewer walking epidural analgesia cases than controls required instrumental vaginal delivery (P < 0.05). No other differences in obstetric or fetal outcome were observed and no mother fell or stumbled while walking. Conclusion: Although it was associated with a prolonged first stage of labour, walking epidural analgesia appeared safe for nulliparous women and their babies. abstract_id: PUBMED:15200182 Obstetric epidural analgesia: relationship between obstetric variables and the course of labor Objective: To analyze the relationship between epidural analgesia and diverse obstetric and fetal variables as well as the impact of epidural analgesia on the rates of instrumental and cesarean delivery.
Patients And Methods: Observational study of women who gave birth at Fundación Hospital Alcorcón over a period of 3 years. All the women were offered obstetric epidural analgesia based on 0.0625% bupivacaine plus 2 microg/mL of fentanyl. The following data were recorded: age, nulliparity (yes/no), administration of epidural analgesia (yes/no), induction of labor (yes/no), stimulation of uterine activity with oxytocin (yes/no), type of delivery, fetal weight, duration of dilation, duration of expulsion, and cause of cesarean. Results: The records of 4364 women were gathered. The percentages of inductions, nulliparas, oxytocin stimulation, and fetal weight greater than 4 kg and less than 2.5 kg were higher among women taking epidural analgesia. The age of women who received epidurals was significantly lower. The durations of dilation and expulsion were longer among women receiving epidural analgesia, and epidural analgesia was associated with greater risk of instrumental and cesarean deliveries. The significant increase in administration of epidural blocks over the 3-year period of the study was not accompanied by an increased rate of instrumentally assisted deliveries or cesareans. Conclusions: It is difficult to evaluate the real influence of epidural analgesia on certain aspects of labor and its evolution. The strength of the association between epidural analgesia and greater risk of increased rates of instrumental and cesarean deliveries may be influenced by factors not considered in the present study. abstract_id: PUBMED:26045927 The degree of labor pain at the time of epidural analgesia in nulliparous women influences the obstetric outcome. Background: Increased pain in the latent phase can be associated with dysfunctional labor as well as increases in cesarean delivery frequency. We aimed to research the effect of the degree of pain at the time of epidural analgesia on the entire labor process, including the mode of delivery. Methods: We performed epidural analgesia in 102 nulliparous women at the patients' request. We divided the patients into three groups based on the NRS (numeric rating scale) score at the moment of epidural analgesia: mild pain, NRS 1-4; moderate pain, NRS 5-7; severe pain, NRS 8-10. The primary outcome was the mode of delivery (normal labor or cesarean delivery). Results: There were significant differences in the mode of delivery among groups. Patients with severe labor pain had a significantly higher cesarean delivery rate compared to patients with moderate labor pain (P = 0.006). The duration of the first and second stages of labor, fetal heart rate, use of oxytocin, and premature rupture of membranes did not differ among the three groups. Conclusions: Our research showed that the degree of pain at the time of the epidural analgesia request might influence the rate of cesarean delivery. Further research is necessary to clarify the mechanism by which the augmentation of pain affects the mode of delivery. abstract_id: PUBMED:27692473 Effect of combined spinal-epidural analgesia in labor on frequency of emergency cesarean delivery among nulliparous Chinese women. Objective: To determine whether combined spinal-epidural analgesia (CSEA) during labor increases the frequency of emergency cesarean delivery among Chinese nulliparous women. Methods: In a retrospective study, the medical records of nulliparous women with a singleton fetus in cephalic presentation who delivered at term at Tongling Maternity Hospital, China, between January 2012 and December 2014 were reviewed.
Information about CSEA, mode of delivery, labor duration, oxytocin augmentation, and neonatal outcome was obtained. Logistic regression was used to examine independent associations between CSEA and emergency cesarean after controlling for confounding variables. Results: Among 3456 women included, 1786 (51.7%) received CSEA and 1670 (48.3%) received no labor analgesia. Emergency cesarean was more frequent among CSEA users (219/1786 [12.3%]) than non-users (119/1670 [7.1%]; P<0.001). Among the maternal-fetal variables included in multivariate regression, maternal age, maternal height, cervical dilatation at admission, birth weight, and CSEA use were significantly associated with emergency cesarean. After adjustment, women with CSEA maintained a slightly increased risk for cesarean (adjusted odds ratio 1.54, 95% confidence interval 1.20-2.00). Conclusion: Among Chinese nulliparous women, use of CSEA for labor pain was associated with an increased risk of emergency cesarean delivery; moreover, this effect was maintained after adjustment for other potential obstetric risk factors. abstract_id: PUBMED:7566849 Randomized trial of epidural versus intravenous analgesia during labor. Objective: To compare the effects of epidural analgesia with intravenous (IV) analgesia on the outcome of labor. Methods: Thirteen hundred thirty women with uncomplicated term pregnancies and in spontaneous labor were randomized to be offered epidural bupivacaine-fentanyl or IV meperidine analgesia during labor. Results: Comparison of the allocation groups by intent to treat revealed a significant association between epidural allocation and operative delivery for dystocia. However, only 65% of each randomization group accepted the allocated treatment. Four hundred thirty-seven women accepted and received meperidine as allocated, and they were compared with 432 women accepting epidural allocation. Significant associations resulted between epidural administration and prolongation of labor, increased rate of oxytocin administration, chorioamnionitis, low forceps, and cesarean delivery. Because of the high rate of noncompliance with treatment allocation, a multifactorial regression analysis was performed on the entire cohort, and a twofold relative risk of cesarean delivery persisted in association with epidural treatment. The impact of epidural treatment on cesarean delivery was significant for both nulliparous and parous women (risk ratios 2.55 and 3.81, respectively). Epidural analgesia provided significantly better pain relief in labor than did parenteral meperidine. Conclusion: Although labor epidural analgesia is superior to meperidine for pain relief, labor is prolonged, uterine infection is increased, and the number of operative deliveries are increased. A two- to fourfold increased risk of cesarean delivery is associated with epidural treatment in both nulliparous and parous women. abstract_id: PUBMED:11926643 The impact of intrapartum analgesia on labour and delivery outcomes in nulliparous women. Objective: To determine if nulliparous women intending to have epidural analgesia have a similar labour profile and delivery outcome to women who intend to have their labour managed using alternative forms of pain relief. Materials And Methods: A prospective randomised controlled clinical trial conducted at a tertiary obstetric institution. Nulliparous women intending to deliver vaginally with a term singleton fetus were eligible for recruitment. 
Results: 1159 women were recruited, of whom 992 were subsequently randomised to receive continuous midwifery support (CMS) or epidural analgesia (EPI) on presentation for delivery. The duration of labour was shorter in the CMS group compared with EPI (10.7 hours (interquartile range [IQR] 7.0-15.2) versus 11.4 hours (IQR 8.2-15.2), p = 0.039). The median duration of the first stage was 8.9 hours (IQR 6-12.5) versus 9.5 hours (IQR 7-12.7) (p = 0.069), and the median duration of the second stage was 1.33 hours (IQR 0.6-2.5) versus 1.48 hours (IQR 0.77-2.6) (p = 0.034). The requirement for oxytocin augmentation in spontaneous labour was 39.8% CMS versus 46.2% EPI (p = 0.129). There was no significant difference in the caesarean section rates. The need for any operative delivery was significantly lower in CMS (43.9% CMS versus 51.5% EPI, p = 0.019). Conclusions: Nulliparous women have a high usage of epidural analgesia, regardless of their prelabour intentions. In women who do not intend to use epidural analgesia, the temporal delay in insertion compared with those who use epidural analgesia as their primary analgesic modality is associated with a small but statistically significant reduction in overall labour duration and operative delivery rates. abstract_id: PUBMED:21348212 Epidural analgesia in obstetrics: is there an effect on labor and delivery? Background And Objective: Epidural analgesia is routinely used in obstetrics but has been blamed for possible effects on labor that lead to greater use of instruments or conversion to cesarean delivery. We aimed to assess this possibility in a cohort of obstetric patients receiving or not receiving epidural analgesia. Patients And Methods: Prospectively enrolled full-term obstetric patients were distributed into 2 groups according to whether they received epidural analgesia or not. We compared maternal and fetal characteristics, obstetric variables, and type of delivery between groups to record the likely causes of difficult labor and delivery and detect a possible influence of epidural analgesia. Results: Of a total of 602 patients, 462 received epidural analgesia and 140 did not. Epidural analgesia was related to a higher rate of use of instruments but not cesareans (P < .01) and more frequent need for oxytocin (30.7% of the epidural analgesia group vs 0% of the group receiving no epidural analgesia, P < .001). The women receiving analgesia also had a longer mean (SD) duration of the dilatation phase of labor (6.4 [4.2] hours in the epidural group vs 4.7 [3.5] hours in the no-epidural group, P < .01) and of the expulsion phase (1.0 [0.6] hours vs 0.7 [0.6] hours, respectively; P<.01). We observed no effects on the incidence of tearing, rate of episiotomy, or other variables. Predictors of instrumentation or conversion to cesarean delivery were longer duration of the first phase (odds ratio [OR] 1.2; 95% confidence interval [CI], 1.1-1.3), longer duration of the second phase (OR 2.3; 95% CI, 1.3-3.9), and maternal obesity (OR, 1.1; 95% CI, 0.9-1.2). Previous deliveries and initiation of epidural analgesia after the fetus has reached Hodge's first plane decreased risk 2.7-fold and 3.03-fold, respectively. Conclusions: Although epidural analgesia has traditionally been associated with a higher incidence of difficult labor and delivery, this association was not unequivocally evident in this cohort of patients.
The apparent increase seems to be attributable to such obstetric factors as longer duration of stages of labor, higher body mass index, and first delivery. abstract_id: PUBMED:8238138 The effect of intrapartum epidural analgesia on nulliparous labor: a randomized, controlled, prospective trial. Objective: Our purpose was to determine the effect of epidural analgesia on nulliparous labor and delivery. Study Design: Normal term nulliparous women in early spontaneous labor were randomized to receive either narcotic or epidural analgesia. Results: When compared with the group receiving narcotic analgesia (n = 45), the group receiving epidural analgesia (n = 48) had a significant prolongation in the first and second stages of labor, an increased requirement for oxytocin augmentation, and a significant slowing in the rate of cervical dilatation. Epidural analgesia was associated with a significant increase in malposition (4.4% vs 18.8%, p < 0.05). Cesarean delivery occurred more frequently in the epidural group (2.2% vs 25%, p < 0.05), primarily related to an increase in cesarean section for dystocia (2.2% vs 16.7%, p < 0.05). Conclusions: In a randomized, controlled, prospective trial, epidural analgesia resulted in a significant prolongation in the first and second stages of labor and a significant increase in the frequency of cesarean delivery, primarily related to dystocia. Answer: The early administration of epidural analgesia does not appear to negatively affect obstetric outcomes in nulliparous women who are receiving intravenous oxytocin. According to the randomized trial reported in PUBMED:8010465, early administration of epidural analgesia did not prolong the interval between randomization and the diagnosis of complete cervical dilation, nor did it increase the incidence of malposition of the vertex at delivery. Furthermore, it did not result in an increased incidence of cesarean section or instrumental vaginal delivery when compared with intravenous nalbuphine followed by late administration of epidural analgesia. The study found that 18% of women in the early group and 19% of women in the late group underwent cesarean section, indicating no significant difference between the two groups. Additionally, patients in the early group had lower pain scores after randomization and were more likely to experience transient hypotension. Infants in the late group had lower umbilical arterial and venous blood pH and higher umbilical arterial and venous blood carbon dioxide tension measurements at delivery. These results suggest that early administration of epidural analgesia is not associated with adverse obstetric outcomes in nulliparous women receiving intravenous oxytocin.
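The relative risk quoted from PUBMED:8010465 (RR 0.94, 95% CI 0.48-1.84 for 13/74 versus 14/75 cesareans) can be checked with the standard log-RR Wald interval; a minimal sketch follows. The recomputed upper bound (about 1.86) differs slightly from the published 1.84, plausibly reflecting a different interval method.

# Recompute the cesarean-section relative risk from PUBMED:8010465
# (13/74 early vs 14/75 late) with a log-RR Wald 95% interval.
import math

def relative_risk(a, n1, b, n2, z=1.96):
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # SE of log(RR)
    lo = rr * math.exp(-z * se)
    hi = rr * math.exp(z * se)
    return rr, lo, hi

rr, lo, hi = relative_risk(13, 74, 14, 75)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # ~0.94 (0.48-1.86)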
Instruction: Are chest compressions safe for the patient reconstructed with sternal plates? Abstracts: abstract_id: PUBMED:20718981 Are chest compressions safe for the patient reconstructed with sternal plates? Evaluating the safety of cardiopulmonary resuscitation using a human cadaveric model. Background: Plate and screw fixation is a recent addition to the sternal wound treatment armamentarium. Patients who undergo sternotomy for either cardiac or major vascular procedures are at a higher risk of postoperative arrest than other elective patients. Sternal plate design allows quick access to the mediastinum facilitating open cardiac massage, but chest compressions are the mainstay of re-establishing cardiac output in the event of arrest. The response of sternal plates and the chest wall to compressions when plated has not been studied, and the safety of performing this maneuver is unknown. This study intends to demonstrate that compressions are safe after sternal plating. Methods: We investigated the effect of chest compressions on the plated sternum using a human cadaveric model. Cadavers were plated, an arrest was simulated, and an experienced physician performed a simulated resuscitation. Intrathoracic pressure was monitored throughout to ensure the plates encountered an appropriate degree of force. The hardware and viscera were evaluated for failure and trauma, respectively. Results: No hardware failure or obvious visceral trauma was observed. Rib fractures beyond the boundaries of the plates were noted, but the incidence was comparable to controls and to the fracture incidence after resuscitation previously cited in the literature. Conclusions: From this work we believe chest compressions are safe for the patient with sternal plates when proper plating technique is used. We advocate the use of this life-saving maneuver as part of an ACLS resuscitation in the event of an arrest for rapidly re-establishing circulation. abstract_id: PUBMED:34988215 Resection of a giant sternal chondrosarcoma and chest wall reconstruction: a case report. Chondrosarcomas are common bone carcinomas; however, they are uncommon in the sternum, and giant sternal tumors have rarely been reported in advanced-age patients. This study aimed to describe the clinical presentation, method of preoperative planning and surgery, and perioperative management of a giant sternal chondrosarcoma in an advanced-age patient. We describe the case of an 80-year-old woman who presented with a rare giant sternal chondrosarcoma. The patient's symptoms included significant painful swelling and limited activity. The mass was firm and fixed, and the boundary was unclear. We first performed a simulated surgery on a three-dimensional (3D) model using the Mimics system for preoperative planning. An extensive resection of the tumor was then performed. Due to the financial status of the patient, the huge chest wall defect was reconstructed with simple ordinary metal locking bone plates and polyester surgical mesh, and good results were achieved. The patient was discharged without any complications 12 days after surgery. The postoperative pathological examination confirmed the diagnosis of primary grade I-II chondrosarcoma. At the 12-month follow-up examination, the patient was completely rehabilitated, and there was no evidence of recurrence. Giant, low-grade sternal chondrosarcoma is an extremely rare disease in elderly women.
3D modeling and simulated surgery are effective approaches for the preoperative planning of surgery. Postoperative ventilators, antibiotics, and nutritional support are also necessary. Using our reconstructive techniques, chest wall reconstruction with polyester patches and orthopedic steel plates could be a safe, reliable and affordable surgical procedure. It may be an appropriate option for similar cases. abstract_id: PUBMED:34872576 Postoperative transverse sternal nonunion with a chest wall defect managed by a tibial locking plate and a Gore-Tex dual mesh membrane: a case report. Background: Transverse sternal nonunion is a rare but disabling complication of chest trauma or a transverse sternotomy. Fixation methods, mainly used to manage the more common longitudinal sternal nonunion, often fail, leaving the surgical treatment of transverse nonunion a challenge. Case Presentation: We present a case of a highly disabling, postoperative chest wall defect resulting from transverse sternal nonunion after a transverse thoracosternotomy (clamshell incision) and a concomitant rib resection. Following unsuccessful surgical attempts, the sternal nonunion was fixed with a tibial locking plate and bone grafted, while the post-rib resection chest defect was reconstructed with a Gore-Tex dual mesh membrane. Adequate chest stability was achieved, enabling complete healing of the sternal nonunion and the patient's complete recovery. Conclusion: We believe it is important to address both problems in the rare case of combined postoperative transverse sternal nonunion and chest wall defect after rib resection. A good outcome was achieved in our patient by fixing the nonunion with an appropriately sized and shaped locking plate with bone grafting and covering the chest defect with a dual mesh membrane. abstract_id: PUBMED:34699924 Safety of mechanical and manual chest compressions in cardiac arrest patients: A systematic review and meta-analysis. Aim: Summarise the evidence regarding the safety of mechanical and manual chest compressions for cardiac arrest patients. Methods: Two investigators separately screened articles from the EMBASE, PubMed, and Cochrane Central databases. Cohort studies and randomized clinical trials (RCTs) that evaluated the safety of mechanical (LUCAS or AutoPulse) and manual chest compressions in cardiac arrest patients were included. A meta-analysis was performed using a random effects model to calculate the pooled odds ratios (ORs) and their 95% confidence intervals (CIs). The primary outcome was the rate of overall compression-induced injuries. The secondary outcomes included the incidence of life-threatening injuries, skeletal fractures, visceral injuries, and other soft tissue injuries. Results: The meta-analysis included 11 trials involving 2,818 patients. A significantly higher rate of overall compression-induced injuries was found for mechanical compressions than manual compressions (OR, 1.29; 95% CI, 1.19-1.41), while there was no significant difference between the two groups with respect to the rate of life-threatening injuries. Furthermore, both modalities shared similar incidences of sternal fractures, vertebral fractures, and lung, spleen, and kidney injuries. However, compared to mechanical compressions, manual compressions were shown to present a reduced risk of posterior rib fractures, and heart and liver lesions.
Conclusions: The findings suggested that manual compressions could decrease the risk of compression-induced injuries compared to mechanical compressions in cardiac arrest patients. Notably, mechanical compressions did not increase the risk of life-threatening injuries, although additional high-quality RCTs are needed to further verify the safety of mechanical chest devices. Trial Registry: INPLASY; Registration number: INPLASY2020110111; URL: https://inplasy.com/. abstract_id: PUBMED:30526640 Reconstruction of anterior chest wall: a clinical analysis. Objective: To investigate the methods and clinical efficacy of reconstruction of chest defects with a titanium sternal fixation system after the surgical resection of sternal tumors. Methods: A total of 6 patients with sternal tumors who underwent resection and repair of chest wall defects with a titanium plate system in our hospital between March 2017 and November 2017 were reviewed. Their pathological types, surgical reconstruction methods, and follow-up results were analyzed. Results: All six sternal tumors were completely resected and the sternums were reconstructed with the titanium sternal fixation system. There was no operative death, postoperative chest wall deformity, abnormal breathing, or respiratory or circulatory complications. After 3 to 10 months of follow-up, there was no loose screw or plate exposure. Not only were the thoracic appearances good, but patient satisfaction was high. Conclusions: Surgical resection is the best treatment for sternal tumors, whether benign or malignant. The titanium sternal fixation system combined with other soft materials can reconstruct the chest wall well after resection, and this technique is efficient as well as easy to learn. abstract_id: PUBMED:23920076 Factors modulating effective chest compressions in the neonatal period. The need for chest compressions in the newborn is a rare occurrence. The methods employed for delivery of chest compressions have been poorly researched. Techniques that have been studied include compression:ventilation ratios, thumb versus finger method of delivering compressions, depth of compression, site on chest of compression, synchrony or asynchrony of breaths with compressions, and modalities to improve the compression technique and consistency. Although still in its early days, an evidence-based guideline for chest compressions is beginning to take shape. abstract_id: PUBMED:35052321 Giant Sternal Chondrosarcoma in a 50-Year-Old Patient. Chondrosarcomas represent approximately 20% of primary malignant bone cancers, being known as the most frequent neoplasia of the anterior thoracic wall. Here, we present a case of a primary sternal chondrosarcoma in a 50-year-old female patient who had previously been treated with polychemotherapy and radiotherapy for breast cancer. Despite the treated breast malignancy in the patient's history, the sternal tumor was found not to be a metastasis from the breast neoplasm. After multiple investigations, the patient was successfully treated for the sternal chondrosarcoma by a radical sternal resection with chest wall reconstruction completed with two titanium plates anchored on the ribs and the placement of methyl methacrylate mesh. abstract_id: PUBMED:25352911 Resuscitative thoracotomies and open chest cardiac compressions in non-traumatic cardiac arrest.
Since the popularisation of closed chest cardiac compressions in the 1960s, open chest compressions in non-traumatic cardiac arrest have become a largely forgotten art. Today, open chest compressions are only rarely performed outside operating theatres. Early defibrillation and high-quality closed chest compressions are the dominant gold standard for the layman on the street as well as for the resuscitation specialist. In this paper we argue that the concept of open chest direct cardiac compressions in non-traumatic cardiac arrest should be revisited and that it might be due for a revival. Numerous studies demonstrate how open chest cardiac compressions are superior to closed chest compressions with regard to physiological parameters and outcomes. Thus, by incorporating resuscitative thoracotomies and open chest compressions in our algorithms for non-traumatic cardiac arrest we may improve outcomes. abstract_id: PUBMED:24199206 Provider Adherence to Neonatal Resuscitation Program Recommendations for Coordinated Neonatal Chest Compressions and Ventilations. Aim: Medical providers often do not perform chest compressions in accordance with recommended resuscitation guidelines for adults and children. Little is known regarding how well neonatal providers perform coordinated chest compressions and ventilations. The objective of this study was to characterize Neonatal Resuscitation Program (NRP) trained providers' adherence to NRP recommendations for coordinated chest compressions and ventilations in a simulated setting. Methods: Fifty NRP providers performed coordinated chest compressions for three minutes on a neonatal manikin. A compression sensor (accelerometer) was used to monitor and record compression data. Data analyzed included ratio of compressions to ventilation pauses, delivered chest compressions per minute (CC/min), and duration of ventilation pauses. Results: Delivered CC/min ranged from 61 to 136. The mean CC/min (99, SD 16) was significantly higher than the NRP-recommended value of 90 (p=0.002). Delivered CC/min did not differ from the first 30 seconds to the last 30 seconds of compressions (p=0.91). Duration of ventilation pauses was highly correlated with CC/min (Spearman's rho = -0.98, p<0.001), with a median duration of 0.92 seconds (IQR 0.84-1.02). Conclusions: NRP trained providers often do not adhere to NRP recommendations for delivery of coordinated chest compressions during simulated cardiac depression. The mean CC/min performed is higher than recommended. Duration of ventilation pauses is highly correlated with delivered CC/min. Future studies should focus on methods to improve the timing of delivered chest compressions by NRP providers to conform to NRP recommendations. abstract_id: PUBMED:31813404 The Effect of Operator Position on the Quality of Chest Compressions Delivered in a Simulated Ambulance. Background: Ambulances are where patient care is often initiated or maintained, but this setting poses safety risks for paramedics. Paramedics have found that in order to optimize patient care, they must compromise their own safety by standing unsecured in a moving ambulance. Hypothesis/problem: This study sought to compare the quality of chest compressions in the two positions in which they can be delivered within an ambulance. Methods: A randomized, counterbalanced study was carried out with 24 paramedic students.
Simulated chest compressions were performed in a stationary ambulance on a cardiopulmonary resuscitation (CPR) manikin for two minutes from either: (A) an unsecured standing position, or (B) a seated secured position. Participants' attitudes toward the effectiveness of the two positions were evaluated. Results: The mean total number of chest compressions was not significantly different standing unsecured (220; SD = 12) as compared to seated and secured (224; SD = 21). There was no significant difference in mean compression rate standing unsecured (110 compressions per minute; SD = 6) as compared to seated and secured (113 compressions per minute; SD = 10). Chest compressions performed in the unsecured standing position yielded a significantly greater mean depth (52 mm; SD = 6) than did seated secured (26 mm; SD = 7; P < .001). Additionally, the standing unsecured position produced a significantly higher percentage (83%; SD = 21) for the number of correct compressions, as compared to the seated secured position (8%; SD = 17; P < .001). Participants also believed that chest compressions delivered when standing were more effective than those delivered when seated. Conclusions: The quality of chest compressions delivered from a seated and secured position is inferior to those delivered from an unsecured standing position. There is a need to consider how training, technologies, and ambulance design can impact the quality of chest compressions. Answer: Yes, chest compressions are considered safe for patients who have undergone reconstruction with sternal plates. A study using a human cadaveric model to investigate the effect of chest compressions on the plated sternum found no hardware failure or obvious visceral trauma as a result of the compressions. Rib fractures were noted beyond the boundaries of the plates, but the incidence was comparable to control and to the fracture incidence after resuscitation previously cited in the literature. The study concluded that chest compressions are safe for patients with sternal plates when proper plating technique is used, and they advocate the use of this life-saving maneuver as part of an Advanced Cardiac Life Support (ACLS) resuscitation in the event of an arrest for rapidly re-establishing circulation (PUBMED:20718981).
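The studies above summarize compression quality with simple per-minute and depth statistics (CC/min, mean depth, percentage of correct compressions). As a rough illustration of how such metrics might be derived from logged compression data, consider the Python sketch below; the function name, the 50-60 mm adult depth window, and the example values are assumptions for illustration, not code or thresholds taken from the cited studies.

```python
# Illustrative sketch only: summary metrics of the kind reported in the
# compression-quality studies above. Inputs are per-compression logs such as
# an accelerometer might produce; all names and values are hypothetical.

def compression_metrics(timestamps_s, depths_mm, depth_lo=50.0, depth_hi=60.0):
    """timestamps_s: event time of each compression (seconds);
    depths_mm: measured depth of each compression (millimetres).
    Returns (rate in CC/min, mean depth in mm, fraction within depth window)."""
    duration_min = (timestamps_s[-1] - timestamps_s[0]) / 60.0
    rate = (len(timestamps_s) - 1) / duration_min  # intervals per minute
    mean_depth = sum(depths_mm) / len(depths_mm)
    in_window = sum(depth_lo <= d <= depth_hi for d in depths_mm) / len(depths_mm)
    return rate, mean_depth, in_window

# Example: 30 compressions spaced 0.55 s apart at ~52 mm depth, roughly the
# rate and depth reported for the standing-unsecured position above.
times = [i * 0.55 for i in range(30)]
depths = [52.0] * 30
rate, mean_depth, in_window = compression_metrics(times, depths)
print(f"{rate:.0f} CC/min, {mean_depth:.0f} mm, {in_window:.0%} in window")
```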
Instruction: Dietary linoleic acid-induced alterations in pro- and anti-nociceptive lipid autacoids: Implications for idiopathic pain syndromes? Abstracts: abstract_id: PUBMED:27030719 Dietary linoleic acid-induced alterations in pro- and anti-nociceptive lipid autacoids: Implications for idiopathic pain syndromes? Background: Chronic idiopathic pain syndromes are major causes of personal suffering, disability, and societal expense. Dietary n-6 linoleic acid has increased markedly in modern industrialized populations over the past century. These high amounts of linoleic acid could hypothetically predispose to physical pain by increasing the production of pro-nociceptive linoleic acid-derived lipid autacoids and by interfering with the production of anti-nociceptive lipid autacoids derived from n-3 fatty acids. Here, we used a rat model to determine the effect of increasing dietary linoleic acid as a controlled variable for 15 weeks on nociceptive lipid autacoids and their precursor n-6 and n-3 fatty acids in tissues associated with idiopathic pain syndromes. Results: Increasing dietary linoleic acid markedly increased the abundance of linoleic acid and its pro-nociceptive derivatives and reduced the abundance of n-3 eicosapentaenoic acid and docosahexaenoic acid and their anti-nociceptive monoepoxide derivatives. Diet-induced changes occurred in a tissue-specific manner, with marked alterations of nociceptive lipid autacoids in both peripheral and central tissues, and the most pronounced changes in their fatty acid precursors in peripheral tissues. Conclusions: The present findings provide biochemical support for the hypothesis that the high linoleic acid content of modern industrialized diets may create a biochemical susceptibility to develop chronic pain. Dietary linoleic acid lowering should be further investigated as part of an integrative strategy for the prevention and management of idiopathic pain syndromes. abstract_id: PUBMED:36284115 Excessive dietary linoleic acid promotes plasma accumulation of pronociceptive fatty acyl lipid mediators. Various fatty acyl lipid mediators are derived from dietary polyunsaturated fatty acids (PUFAs) and modulate nociception. The modern diet is rich in linoleic acid, which is associated with nociceptive hypersensitivities and may present a risk factor for developing pain conditions. Although recommendations about fatty acid intake exist for some diseases (e.g. cardiovascular disease), the role of dietary fatty acids in promoting pain disorders is not completely understood. To determine how dietary linoleic acid content influences the accumulation of pro- and anti-nociceptive fatty acyl lipid mediators, we created novel rodent diets using custom triglyceride blends rich in either linoleic acid or oleic acid. We quantified the fatty acyl lipidome in plasma of male and female rats fed these custom diets from the time of weaning through nine weeks of age. Dietary fatty acid composition determined circulating plasma fatty acyl lipidome content. Exposure to a diet rich in linoleic acid was associated with accumulation of linoleic and arachidonic acid-derived pro-nociceptive lipid mediators and reduction of anti-nociceptive lipid mediators derived from the omega-3 PUFAs. Our findings provide mechanistic insights into exaggerated nociceptive hypersensitivity associated with excessive dietary linoleic acid intake and highlight potential biomarkers for pain risk stratification. 
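As a rough illustration of the summary ratios implied by this kind of fatty acyl lipidome profiling, the sketch below computes an n-6:n-3 precursor ratio and a pro-:anti-nociceptive mediator ratio from a hypothetical plasma panel. The mediator groupings follow the abstracts above (linoleic/arachidonic acid derivatives as pro-nociceptive, EPA/DHA monoepoxides as anti-nociceptive), but every field name and value is invented for illustration.

```python
# Hypothetical plasma panel (analyte names grouped per the abstracts above;
# units and values invented for illustration, not study data).
plasma = {
    # precursor fatty acids
    "linoleic_acid":     1800.0,  # n-6
    "arachidonic_acid":   450.0,  # n-6
    "EPA":                 40.0,  # n-3
    "DHA":                110.0,  # n-3
    # oxidized lipid mediators
    "9-HODE":              12.0,  # linoleic acid-derived, pro-nociceptive
    "5-HETE":               8.0,  # arachidonic acid-derived, pro-nociceptive
    "17,18-EEQ":            1.5,  # EPA-derived monoepoxide, anti-nociceptive
    "19,20-EDP":            2.0,  # DHA-derived monoepoxide, anti-nociceptive
}

n6 = plasma["linoleic_acid"] + plasma["arachidonic_acid"]
n3 = plasma["EPA"] + plasma["DHA"]
pro = plasma["9-HODE"] + plasma["5-HETE"]
anti = plasma["17,18-EEQ"] + plasma["19,20-EDP"]

print(f"n-6:n-3 precursor ratio:     {n6 / n3:.1f}")
print(f"pro-:anti-nociceptive ratio: {pro / anti:.1f}")
```

On this reading, a linoleic acid-rich diet would be expected to push both ratios upward, which is the pattern the two studies report.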
abstract_id: PUBMED:29413360 A sixteen-week three-armed, randomized, controlled trial investigating clinical and biochemical effects of targeted alterations in dietary linoleic acid and n-3 EPA+DHA in adults with episodic migraine: Study protocol. Migraine is a prevalent neurological disorder, affecting over 16% of adult women and 7% of adult men in the U.S., causing significant pain, disability, and medical expense, with incomplete benefits from conventional medical management. Migraine, as a chronic pain syndrome, provides a practical model for investigating the impact of dietary modifications in omega-3 (n-3) and omega-6 (n-6) fatty acids. This paper reports the protocol of a trial to assess whether targeted dietary modifications designed to increase n-3 eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), with or without concurrent reduction in n-6 linoleic acid (LA), will alter nociceptive lipid mediators and mediate decreases in frequency and severity of migraine. This prospective, randomized, controlled trial in 153 male and female adult subjects, ages 18-99, with diagnosed and actively managed episodic migraine tests the efficacy, safety, and biochemical effects of targeted, controlled alterations in dietary omega-3 and omega-6 fatty acids. Participants are masked to diet hypotheses and all assessors are masked to treatment assignment. Following a four-week baseline period, participants with migraine headache frequency of 5-20 per month are randomized to one of three intensive dietary regimens for 16 additional weeks followed by a less intensive observation period. Dietary intervention arms include: 1) increased n-3 EPA+DHA with low n-6 linoleic acid (H3 L6); 2) increased n-3 EPA+DHA with usual US dietary intake of n-6 linoleic acid (H3 H6); and 3) usual US dietary content of n-3 and n-6 fatty acids (L3 H6). During the actual intervention, subjects receive content-specific study oils and foods sufficient for two meals and two snacks per day, as well as dietary counseling. Biochemical and clinical outcome measures are performed at intervals throughout this period. This randomized controlled trial is designed to determine whether targeted alterations in dietary n-3 and n-6 fatty acids can alter nociceptive lipid mediators in a manner that decreases headache pain and enhances quality of life and function in adults with frequent migraines. Trial Registration: NCT02012790. abstract_id: PUBMED:26178479 Observing Anti-inflammatory and Anti-nociceptive Activities of Glycyrrhizin Through Regulating COX-2 and Pro-inflammatory Cytokines Expressions in Mice. The present study aimed to investigate the potential anti-inflammatory and anti-nociceptive activities of glycyrrhizin (GL) in mice and to explore the possible related mechanisms. Xylene-induced ear edema, carrageenan-induced paw edema and acetic acid-induced vascular permeability test were used to investigate the anti-inflammatory activities of GL in mice. Anti-nociceptive effects of GL were assessed by using acetic acid-induced writhing, hot plate test and formalin test, as well as evaluation of spontaneous locomotor activity and motor performance. The mRNA expression of pro-inflammatory cytokines (such as TNF-α, IL-6 and iNOS) and the protein expression of cyclooxygenase-2 (COX-2) were explored by using real-time fluorogenic PCR and Western blot, respectively. The results showed that GL significantly reduced xylene-induced ear edema, carrageenan-induced paw edema, and acetic acid-induced vascular permeation. 
Additionally, GL significantly inhibited the nociception induced by acetic acid and formalin. However, GL did not decrease nociception in the hot plate test, nor did it affect spontaneous locomotor activity or motor performance. The expression levels of TNF-α, IL-6, iNOS and COX-2 were significantly downregulated by GL. In conclusion, GL exerts significant anti-inflammatory and analgesic activities by attenuating the expression levels of TNF-α, IL-6, iNOS and COX-2. abstract_id: PUBMED:17125946 Antioxidant, anti-inflammatory, anti-nociceptive activities and composition of Lythrum salicaria L. extracts. Lythrum salicaria (purple loosestrife), known as "Tibbi hevhulma" in Turkish, is used for several beneficial health effects against conditions such as diarrhea, chronic intestinal catarrh, hemorrhoids and eczema in the form of a decoction or a fluid extract, and to treat varicose veins, bleeding of the gums, hemorrhoids and eczema externally. Dried herbal parts of Lythrum salicaria L. (Lythraceae) were sequentially extracted with different solvents such as petroleum ether, ethyl acetate, methanol and 50% aqueous methanol. A water extract of Lythrum salicaria was also prepared under reflux. Antioxidant, anti-inflammatory and anti-nociceptive activities of all the extracts were investigated using in vitro and in vivo methods. Free radical scavenging activity (1,1-diphenyl-2-picrylhydrazyl, DPPH• assay), iron(III) reductive activity, capacity to inhibit linoleic acid peroxidation and MDA formation, anti-nociceptive activity (p-benzoquinone-induced abdominal constriction test) and anti-inflammatory activity (carrageenan-induced hind paw edema model) were assessed for all the extracts. In addition, the content of total phenolics, flavonoids and flavonols in all the extracts was determined with spectrophotometric methods. Results were compared with reference antioxidants, namely ascorbic acid, butylated hydroxytoluene, and gallic acid. Qualitative and quantitative compositions of all the extracts were analysed using an HPLC-PDA system. Polar fractions were found to be rich in flavonoids such as isovitexin and isoorientin. abstract_id: PUBMED:24312820 Phytochemical Screening and Anti-nociceptive Properties of the Ethanolic Leaf Extract of Trema cannabina Lour. Purpose: The present study was designed to investigate the anti-nociceptive activity of the ethanolic leaf extract of Trema cannabina Lour. (family: Cannabaceae) in experimental animal models. Methods: The anti-nociceptive action was tested against two types of noxious stimuli, thermal (hot plate and tail immersion tests) and chemical (acetic acid-induced writhing), in mice. Results: Phytochemical analysis of the crude extract indicated the presence of reducing sugar, tannins, steroid and alkaloid types of secondary metabolites. Crude extract of T. cannabina (500 mg/kg) produced the longest response latency to thermal stimuli (6.79±0.15 seconds), comparable to diclofenac sodium (8.26±0.14 seconds), in the hot plate test. The tail immersion test showed results similar to those of the hot plate test. At doses of 250 and 500 mg/kg body weight, the extract produced a significant, dose-dependent (p<0.001) reduction in acetic acid-induced writhing in mice, with a maximum reduction of 47.56% at 500 mg/kg, comparable to that of diclofenac sodium (67.07%) at 25 mg/kg.
Conclusion: The results suggest anti-nociceptive activity of the ethanolic leaf extract of Trema cannabina and thus provide a scientific basis for the traditional use of this plant part as a remedy for pain. abstract_id: PUBMED:28884086 Anti-nociceptive and anti-inflammatory effects of the methanolic extract of Opuntia humifusa stem. Objective: Opuntia humifusa (O. humifusa) Raf. has been used for the prevention and treatment of rheumatoid arthritis, inflammation, and cancer. Our study was designed to unveil the anti-nociceptive and anti-inflammatory effects of the methanolic extract of O. humifusa Raf. stem (OHS). Materials And Methods: The anti-nociceptive effect was measured by hot plate, acetic acid-induced writhing, and tail flick assays in mice and rats. Moreover, the anti-inflammatory effect was measured by vascular permeability and carrageenan- and serotonin-induced paw edema tests in rats. Furthermore, the anti-inflammatory effect was also measured using macrophage-like LPS-induced RAW 264.7 cells. Results: OHS extract inhibited acetic acid-induced writhing (p<0.0001), and delayed the reaction time of mice to hot plate-induced thermal stimulation (p<0.0001) and in the tail flick test (p<0.05). OHS extract attenuated the carrageenan- and serotonin-induced paw edema in rats (p<0.001). Similarly, OHS extract significantly decreased the Evans blue concentration in the acetic acid-induced vascular permeability test (p<0.0001), revealing its strong anti-inflammatory effect. Finally, among four different fractions of OHS extract, the n-butanol fraction strongly decreased NO production (p<0.0001) and iNOS expression in LPS-induced RAW 264.7 cells. Conclusion: Our results suggest that the methanolic extract of O. humifusa stem can be used to develop a therapeutic or supportive drug and/or functional food against pain- and inflammation-related diseases. abstract_id: PUBMED:32387467 Anti-nociceptive and anti-inflammatory activities of ethanol extract and fractions of Morus mesozygia Stapf (Moraceae) leaves and its underlying mechanisms in rodents. Ethnopharmacological Relevance: Morus mesozygia Stapf (Moraceae), commonly known as African mulberry, is traditionally used for the treatment of inflammatory disorders such as rheumatism and dermatitis. Aim: This work aimed to evaluate the anti-nociceptive and anti-inflammatory effects of its ethanol extract (EEMm) and ethyl acetate fraction (EAFMm). Methods: The anti-nociceptive and anti-inflammatory effects of the ethanol extract of M. mesozygia (EEMm) and its ethyl acetate (EAFMm) and residual aqueous (RAFMm) fractions were evaluated in hot plate, acetic acid and formalin tests, as well as in a membrane stabilizing assay and carrageenan-induced paw oedema models. The anti-inflammatory mechanism of EAFMm was investigated in the carrageenan-induced air-pouch model. Results: In the hot plate test of nociception, only the EAFMm showed significant (p < 0.05) anti-nociceptive activity. The extract and fractions significantly reduced the number of writhes, with EAFMm (400 mg/kg) showing the highest inhibition (66.5%) in the acetic acid-induced writhing test in mice. EEMm and EAFMm (400 mg/kg) significantly reduced the paw licking time in the early and late phases of the formalin test. The extract and fractions showed good membrane stabilizing activity comparable to indomethacin. EAFMm (100 and 400 mg/kg) showed the highest inhibition of paw oedema (53.4% and 58.1%) in the carrageenan-induced paw oedema model.
The EAFMm (100 and 400 mg/kg) reduced exudate volume relative to the carrageenan control (2.64 ± 0.22, 2.08 ± 0.15 vs 3.83 ± 0.18 mL) and neutrophils (8.98 ± 1.36, 8.00 ± 0.22 vs 20.51 ± 1.14) in the carrageenan-induced air pouch. EAFMm significantly reduced exudate volume, pro-inflammatory cytokines and the expression of COX-2 and NFκB. Conclusion: M. mesozygia leaves demonstrated anti-nociceptive and anti-inflammatory activities by suppressing oxidative stress, pro-inflammatory cytokines, cyclooxygenase-2, and nuclear factor kappa B. abstract_id: PUBMED:25795107 Pro-nociceptive and anti-nociceptive effects of a conditioned pain modulation protocol in participants with chronic low back pain and healthy control subjects. Background: People with chronic pain may exhibit pro-nociceptive phenotypes characterised partly by reduced conditioned pain modulation (CPM). Characterising variability in CPM in people with chronic low back pain (CLBP) may inform management. Objectives: To investigate pro/anti-nociceptive effects of a CPM protocol in age/sex-matched healthy controls (HCs) and people with CLBP. Design: Case-controlled trial (64 participants/group). Method: The CPM protocol involved: test stimulus (TS) (noxious pressure applied by algometer to the lumbar region); conditioning stimulus (CS) (noxious heat applied by thermode to the dorsal hand). CPM recruitment was measured by the change in pain intensity (rated on a numeric rating scale (NRS)) of the TS in the presence and absence of the CS. Results: Responses to this CPM protocol were variable for both groups, with measures consistent with either inhibitory or facilitatory effects. A significantly greater proportion of facilitatory responses was seen in the CLBP cohort compared to HCs (73% versus 31%). In response to the CS, participants with CLBP demonstrated a mean increase in NRS scores (mean 1.3 points; p < 0.001), while HCs did not (mean -0.2 points; p = 0.35), and the between-group difference in change scores was significant (mean 1.4 points; p < 0.001; effect size (Hedges' g): 1.03). Conclusion: In HCs and participants with CLBP this CPM protocol elicited responses consistent with varying pro/anti-nociceptive effects. The higher proportion of participants with CLBP demonstrating a facilitatory response suggests a pro-nociceptive phenotype may characterise this cohort. abstract_id: PUBMED:30508505 Anti-nociceptive and anti-inflammatory potentials of Akebia saponin D. Akebia saponin D, which originates from Dipsacus asper Wall, has been used as a tonic and an analgesic and anti-inflammatory agent for the therapy of low back pain, rheumatic arthritis, traumatic hematoma, habitual abortion and bone fractures in traditional Chinese medicine. However, the anti-nociceptive and anti-inflammatory activity and mechanism of Akebia saponin D have rarely been reported. The aim of this study was to investigate the anti-nociceptive and anti-inflammatory activity of Akebia saponin D and to assess its possible mechanism. The anti-nociceptive effect was measured by the formalin, hot plate, and acetic acid-induced writhing tests in mice, while the anti-inflammatory effect was measured by the carrageenan-induced paw edema test, xylene-induced ear swelling and acetic acid-induced vascular permeability in mice and rats. Furthermore, the anti-inflammatory effect was also measured in vitro using LPS-induced RAW 264.7 cells.
Our results demonstrated that Akebia saponin D dose-dependently decreased the licking time in the formalin test, delayed the reaction time of mice to the hot plate, and inhibited acetic acid-induced writhing. Treatment with Akebia saponin D attenuated the carrageenan-induced paw edema in rats, inhibited mouse ear swelling, and decreased the Evans blue concentration in the acetic acid-induced vascular permeability test, revealing its strong anti-inflammatory effect. Akebia saponin D significantly decreased NO production and iNOS expression. Our results indicate that Akebia saponin D has anti-nociceptive and anti-inflammatory effects. These findings provide experimental evidence for the use of Akebia saponin D, which could be developed into a therapeutic drug against pain- and inflammation-related diseases. Answer: The research suggests that dietary linoleic acid, an n-6 fatty acid that has increased in modern industrialized diets, may predispose individuals to physical pain by influencing the production of lipid autacoids. A study using a rat model found that increasing dietary linoleic acid led to a marked increase in pro-nociceptive derivatives of linoleic acid and a reduction in anti-nociceptive monoepoxide derivatives derived from n-3 fatty acids, such as eicosapentaenoic acid and docosahexaenoic acid. These changes occurred in a tissue-specific manner, with significant alterations in both peripheral and central tissues associated with idiopathic pain syndromes (PUBMED:27030719). Another study created custom rodent diets rich in either linoleic acid or oleic acid to investigate the impact of dietary linoleic acid on the accumulation of pro- and anti-nociceptive fatty acyl lipid mediators. The findings showed that a diet high in linoleic acid was associated with an accumulation of pro-nociceptive lipid mediators and a reduction in anti-nociceptive mediators derived from omega-3 PUFAs, providing mechanistic insights into the exaggerated nociceptive hypersensitivity associated with excessive dietary linoleic acid intake (PUBMED:36284115). These findings support the hypothesis that high dietary linoleic acid content may create a biochemical susceptibility to develop chronic pain. Therefore, dietary linoleic acid lowering should be further investigated as a potential strategy for the prevention and management of idiopathic pain syndromes. The implications for idiopathic pain syndromes are significant, as these findings suggest that dietary interventions targeting the balance of n-6 and n-3 fatty acids could potentially alter nociceptive lipid mediators in a way that decreases pain and improves quality of life for individuals with chronic pain conditions such as migraine (PUBMED:29413360).
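As a toy sketch of the screening and three-arm allocation described in the migraine trial protocol above (PUBMED:29413360): the arm labels and eligibility window follow the abstract, while the function names and the 1:1:1 allocation are assumptions made purely for illustration.

```python
import random

# Diet arms as named in the protocol abstract.
ARMS = ("H3L6",  # increased n-3 EPA+DHA, low n-6 linoleic acid
        "H3H6",  # increased n-3 EPA+DHA, usual US n-6 linoleic acid intake
        "L3H6")  # usual US n-3 and n-6 fatty acid intake

def eligible(age, migraines_per_month):
    """Key eligibility facts from the abstract: adults aged 18-99 with an
    actively managed episodic migraine frequency of 5-20 per month."""
    return 18 <= age <= 99 and 5 <= migraines_per_month <= 20

def assign_arm(rng=random):
    return rng.choice(ARMS)  # 1:1:1 allocation assumed for illustration

if eligible(age=34, migraines_per_month=8):
    print("randomized to", assign_arm())
```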
Instruction: Do unmet expectations for specific tests, referrals, and new medications reduce patients' satisfaction? Abstracts: abstract_id: PUBMED:15566436 Do unmet expectations for specific tests, referrals, and new medications reduce patients' satisfaction? Background: Patient-centered care requires clinicians to recognize and act on patients' expectations. However, relatively little is known about the specific expectations patients bring to the primary care visit. Objective: To describe the nature and prevalence of patients' specific expectations for tests, referrals, and new medications, and to examine the relationship between fulfillment of these expectations and patient satisfaction. Design: Prospective cohort study. Setting: VA general medicine clinic. Patients/participants: Two hundred fifty-three adult male outpatients seeing their primary care provider for a scheduled visit. Measurements And Main Results: Fifty-six percent of patients reported at least 1 expectation for a test, referral, or new medication. Thirty-one percent had 1 expectation, while 25% had 2 or more expectations. Expectations were evenly distributed among tests, referrals, and new medications (37%, 30%, and 33%, respectively). Half of the patients who expressed an expectation did not receive one or more of the desired tests, referrals, or new medications. Nevertheless, satisfaction was very high (median of 1.5 for visit-specific satisfaction on a 1 to 5 scale, with 1 representing "excellent"). Satisfaction was not related to whether expectations were met or unmet, except that patients who did not receive desired medications reported lower satisfaction. Conclusions: Patients' expectations are varied and often vague. Clinicians trying to implement the values of patient-centered care must be prepared to elicit, identify, and address many expectations. abstract_id: PUBMED:11176547 Measuring patient expectations: does the instrument affect satisfaction or expectations? Background: Fulfillment of patients' expectations may influence health care utilization, affect patient satisfaction, and be used to indicate quality of care. Several different instruments have been used to measure expectations, yet little is known about how different assessment methods affect outcomes. Objective: The object of the study was to determine whether different measurement instruments elicit different numbers and types of expectations and different levels of patient satisfaction. Design: Patients waiting to see their physician were randomly assigned to receive 1 of 2 commonly used instruments assessing expectations or were assigned to a third (control) group that was not asked about expectations. After the visit, patients in all 3 groups were asked about their satisfaction and services they received. Subjects: The study subjects were 290 male, primary care outpatients in a VA general medicine clinic. Measures: A "short" instrument asked about 3 general expectations for tests, referrals, and new medications, while a "long" instrument nested similar questions within a more detailed list. Wording also differed between the 2 instruments. The short instrument asked patients what they wanted; the long instrument asked patients what they thought was necessary for the physician to do. Satisfaction was measured with a visit-specific questionnaire and a more general assessment of physician interpersonal skills. Results: Patients receiving the long instrument were more likely to express expectations for tests (83% vs. 
28%, P <0.001), referrals (40% vs. 18%, P <0.001), and new medications (45% vs. 28%, P <0.001). The groups differed in the number of unmet expectations: 40% of the long instrument group reported at least 1 unmet expectation compared with 19% of the short instrument group (P <0.001). Satisfaction was similar among the 3 groups. Conclusions: These different instruments elicit different numbers of expectations but do not affect patient satisfaction. abstract_id: PUBMED:38458842 Impact of Anti-Estrogen Therapy on Early Cardiovascular Referrals, Tests and Medications in Premenopausal Women with Operable Breast Cancer. Introduction: Premenopausal women with high-risk hormone receptor (HR)-positive breast cancer often receive ovarian function suppression (OFS) and anti-estrogen therapy, which induces near complete estrogen deprivation (NCED). This treatment improves recurrence-free survival but may increase cardiovascular risk. We sought to identify patterns of cardiovascular care and outcomes in premenopausal women with operable breast cancer. Methods: Premenopausal women ≤ 50 years of age with stage I-III HR-positive or triple negative breast cancer (TNBC) were identified by retrospective review. We categorized women into 3 groups based on anti-estrogen therapy approach: NCED (HR + OFS), anti-estrogen therapy without OFS (HRnoOFS), and no anti-estrogen therapy (TNBC). Baseline characteristics, post-diagnosis cardiovascular events and cardiovascular actions (tests, referrals and medications) were recorded. Categorical variables were compared among the groups using chi-square and Fisher's exact tests; continuous outcomes were compared using ANOVA. Results: 82, 83, and 52 women were identified in the HR + OFS, HRnoOFS, and TNBC groups, respectively; mean follow-up was 5.0 years. The mean number of cardiovascular actions per year was highest in the HR + OFS group compared with the HRnoOFS and TNBC groups (0.35 vs. 0.20 and 0.27, respectively; P = .036). The HR + OFS group had significantly more referrals and tests per year than the other groups. Cardiovascular medication initiation did not differ among groups. Conclusions: In this early follow-up period, there were meaningful numbers of cardiovascular actions, with women on NCED experiencing the most per year. Future work should seek to further understand the impact of anti-estrogen therapy on the cardiovascular health of premenopausal women and test strategies to mitigate cardiotoxicity. abstract_id: PUBMED:8872784 Meeting walk-in patients' expectations for testing. Effects on satisfaction. Objective: To examine the relation between meeting expectations for tests and visit satisfaction in walk-in patients. Design: Survey of patients before and after the visit. Setting: Walk-in medical clinic at a Veterans Affairs Medical Center. Patients: 143 male veterans were eligible for this study: 128 agreed to participate; 109 completed both questionnaires. Measurements: Before the visit, we measured health status, baseline satisfaction with care, and expectations for common tests. After the visit, we measured visit-specific satisfaction, patient perception of provider interpersonal behavior (provider humanism), and patient report of whether specific tests were received. Logistic regression was used to determine the effect of meeting expectations for tests while controlling for other factors. Results: Of all patients, 62% expected one or more tests, nearly as many as expected a medication or a diagnosis.
In multivariate analysis restricted to those expecting tests, provider humanism was the sole significant predictor of visit-specific satisfaction (odds ratio [OR] 6.4; 95% confidence interval [CI] 1.6, 26.1). The proportion of expectations for testing that were met was not significantly associated with satisfaction (OR 1.05; 95% CI 0.92, 1.21). Conclusions: Meeting patient expectations for tests does not have an important effect on satisfaction. Even in the walk-in setting, patient perception of the providers' interpersonal behavior was a more important factor in satisfaction with the visit. Physicians who order tests solely to improve patient satisfaction may be able to reduce unnecessary testing without decreasing patient satisfaction. abstract_id: PUBMED:32385837 Unmet expectations in prostate cancer patients and their association with decision regret. Purpose: Information about prostate cancer patients' experiences with their treatment is crucial to optimize shared decision-making. This study examined unmet expectations in prostate cancer patients and their association with decision regret. Methods: We conducted a prospective, observational, multi-center study of men diagnosed with localized prostate cancer between 2014 and 2016. Questionnaires were completed at baseline (pre-treatment) and up to 12 months after treatment. Unmet expectations were reported as the proportion of patients who experienced side effects as worse than expected. Linear regression analysis was used to identify factors associated significantly (p ≤ 0.05) with unmet expectations and their association with decision regret. Results: At 1-year follow-up, the majority of the patients (71%, 210/296) reported at least one unmet expectation. The proportion of patients who reported worse than expected erectile problems was 56%, recovery period = 29%, urinary problems = 28%, fatigue = 24%, and bowel problems = 17%. Unmet expectations were comparable between treatment groups, except for fatigue. A passive role in the decision-making process (eta squared (η²) = 0.02) and higher scores on the decisional conflict scale (η² = 0.02) were associated with more unmet expectations, and unmet expectations were associated with decision regret (η² = 0.08). Conclusions: Unmet expectations are common among men treated for localized prostate cancer. Involving patients in the treatment decision-making process and offering additional counseling to patients who indicate uncertainty about their decision may help to avoid unmet expectations. Implications For Cancer Survivors: The current study emphasizes the need for involving prostate cancer patients in the decision-making process in order to mitigate unmet expectations. abstract_id: PUBMED:37728727 Patient Expectations and Satisfaction in Pediatric Orthopedics. Purpose Of Review: The purpose of the current review is to analyze the current literature regarding the tools available to evaluate patient expectations and satisfaction. An increasing number of tools have been developed and validated for various orthopedic procedures. Despite the growing number of tools, there are a limited number of tools available for pediatric patients. Recent Findings: Several tools have been developed in orthopedics to evaluate patient expectations. However, there are no tools that have been validated in the pediatric population. In addition, pediatric patient expectations should be collected in conjunction with parent/caregiver expectations.
Although not specifically validated for pediatric patients, several available tools may pertain to them, including the HSS ACL Expectations Survey, HSS Shoulder Expectations Survey, HSS Knee Surgery Expectations Survey, HSS Foot and Ankle Surgery Expectation Survey, Sunnybrook Surgery Expectations Survey, Musculoskeletal Outcomes Data Evaluation and Management System (MODEMS) Instruments, Quick DASH, and DASH. In terms of patient satisfaction, there are even fewer tools available. Several tools have been developed to evaluate patient satisfaction, including five additional tools within orthopedics. Of these, two have been validated for pediatric patients: the Swedish Parents Satisfaction Questionnaire and the Scoliosis Research Society-22. There are a growing number of tools to evaluate patients' expectations and satisfaction in the orthopedic literature. Given that most of these tools pertain to adult patients, there is a need for further development of tools specifically validated for pediatric patients and their parents/caregivers. Through the measurement of expectations and satisfaction, medical professionals can hope to improve satisfaction and outcomes. abstract_id: PUBMED:32602914 Unmet Patient Expectations for Interventions Decrease Provider Satisfaction Scores. As health care reimbursement evolves, physicians will continue to be evaluated based on the quality of care that they provide. One measure of quality is based on patient satisfaction as reported through the Consumer Assessment of Healthcare Providers and Systems Survey (CAHPS). Although previous studies have explored expectations and their role in patient satisfaction, no studies have investigated the role of patient expectations for interventions in surgical specialty clinics. Patients (N=126) were given 2 surveys, a pre-visit and a post-visit questionnaire, with options based on the CAHPS survey. Patients were asked to select common orthopedic services that they expected to receive from their appointments, rate their physicians on Likert scales, and report the services that were actually provided. Patients who left with unmet expectations for interventions (45%) rated their physicians lower than patients who had all of their expectations met (9.0±1.4 vs 9.5±0.8; P<.05). For most individual interventions, there was no difference in satisfaction between patients with met and unmet expectations. However, patients who expected surgery and did not have their expectations met rated their physician lower than patients whose expectations for surgery were met (9.0±1.3 vs 9.7±0.6; P<.05). No statistically significant difference was found in physician rating with increased numbers of unmet expectations, individual surgeon rating, perceptions of their providers, or wait time. This study reports that patient expectations for interventions, and particularly a recommendation for surgery, may alter a patient's reported satisfaction, particularly when surgical expectations are unmet. [Orthopedics. 2020;43(5):e378-e382.]. abstract_id: PUBMED:19813470 Side effects and patients' expectations after vestibular tests. Objective: To assess patient satisfaction and side effects after vestibular tests. Materials And Methods: Questionnaires were sent to 150 patients after vestibular testing in 2006. Forty-two questionnaires were sent back to us.
Each questionnaire contained 5 questions regarding patient expectations, 4 questions about side effects after vestibular tests, and one question related to follow-up medical attention. Results: Most patients felt that the delay between the clinical consultation and vestibular testing was not too long. The information patients received beforehand regarding the aim, duration, possible side effects and diagnostic expectations of vestibular testing was sufficient for most patients. However, patients' assessments of the informed consent document and of the testing location varied. The vast majority of patients feel it is "very important" to receive information about vestibular test results on the same day. For most subjects, the caloric test is the most uncomfortable test. In the following days, more than half of the subjects experience vestibular symptoms. The most frequent suggestion we received from patients is to give them a phone call after vestibular testing. Conclusion: It is worth mentioning the importance of informing the patient about vestibular test results upon completion. Our aim is to make changes to the informed consent document given to patients and to phone patients after vestibular tests. abstract_id: PUBMED:25269531 Do previsit expectations correlate with satisfaction of new patients presenting for evaluation with an orthopaedic surgical practice? Background: Patient satisfaction is associated with increased compliance, improved treatment outcomes, and decreased risk of litigation. Factors such as patient understanding and psychological well-being are recognized influences on satisfaction. Less is known about the relationship between previsit expectations and satisfaction. Questions/purposes: (1) Are there correlations among previsit expectations, met expectations, and patient satisfaction? (2) What are the categories of expectations, and which one(s) correlate with satisfaction? Methods: Eighty-six new patients presenting to a hand surgery practice of a tertiary referral hospital with 70% direct primary care referrals, mostly with elective concerns, indicated their previsit expectations (Patient Intention Questionnaire [PIQ]). Immediately after the visit, the same patients rated the degree to which their previsit expectations were met (Expectation Met Questionnaire [EMQ]) and their satisfaction level (Medical Interview Satisfaction Scale). These tools have been used in primary care office settings and claim good psychometric properties, and although they have not been strictly validated for responsiveness and other test parameters, they have good face validity. We then conducted a multivariable backward linear regression to determine whether (1) scores on the PIQ and (2) scores on the EMQ are associated with satisfaction. Results: Satisfaction correlated with met expectations (r=0.36; p<0.001) but not with previsit expectations (r=-0.01, p=0.94). We identified five primary categories of previsit expectations that accounted for 50% of the variance in PIQ: (1) "Information and Explanation"; (2) "Emotional and Understanding"; (3) "Emotional Problems"; (4) "Diagnostics"; and (5) "Comforting". The only category of met expectations that correlated with satisfaction was Information and Explanation (r=0.43; p<0.001). Conclusions: Among patients seeing a hand surgeon, met expectations correlate with satisfaction. In particular, patients with met expectations regarding information and explanation were more satisfied with their visit.
Efforts to determine the most effective methods for conveying unexpected information warrant investigation. Level Of Evidence: Level II, prognostic study. abstract_id: PUBMED:33715782 Associations between teacher expectations and racial disproportionality in discipline referrals. We explored the racial discipline gap and sought to understand the extent to which elementary school teachers' (N = 33; Grades K-6) expectations for their students' (N = 496) future outcomes predicted racial discipline disproportionality. We used multilevel models, and Ordinary Least Squares (OLS) regression with corrected standard errors, to examine associations between teacher expectations of academic and behavioral student outcomes and patterns of office discipline referrals (ODRs) for the year. Results showed disproportionate ODR outcomes for Black students in comparison to White students. Additionally, we found that students held to lower expectations by their teachers received more ODRs than peers rated highly by their teachers, regardless of race. We also found that for every unit increase in teacher expectations, the ODR disparity between Black, Latinx, and White students decreased. Lastly, multilevel models exploring teacher expectations of student outcomes explained approximately 21% of the difference in ODRs between Black and White peers. Answer: Unmet expectations for specific tests, referrals, and new medications can have varying impacts on patient satisfaction. According to the study by Kravitz et al. (PUBMED:15566436), while a significant number of patients had expectations for tests, referrals, or new medications, satisfaction was generally high even when these expectations were not met. However, the study did note that patients who did not receive desired medications reported lower satisfaction, suggesting that unmet expectations for medications might reduce satisfaction more than unmet expectations for tests or referrals. Another study by Kravitz et al. (PUBMED:11176547) found that different instruments used to measure expectations elicited different numbers of expectations but did not affect patient satisfaction. This indicates that the method of assessing expectations can influence the reported expectations but does not necessarily correlate with satisfaction levels. In the context of prostate cancer patients, unmet expectations were common and associated with decision regret (PUBMED:32385837). This suggests that in certain medical situations, unmet expectations can lead to negative feelings about the decision-making process, which could potentially impact overall satisfaction. In pediatric orthopedics, there is a recognition of the need for tools to evaluate patient and parent/caregiver expectations and satisfaction, as these can influence outcomes and satisfaction levels (PUBMED:37728727). However, specific impacts of unmet expectations in this field were not detailed in the abstract provided. A study in an orthopedic surgical practice found that unmet expectations for interventions decreased provider satisfaction scores (PUBMED:32602914). Patients who had unmet expectations for surgery rated their physicians lower than those whose expectations were met, indicating that unmet expectations in this context can reduce satisfaction. In a study of new patients presenting for evaluation with an orthopedic surgical practice, satisfaction correlated with met expectations, particularly in the category of "Information and Explanation" (PUBMED:25269531).
This suggests that meeting expectations, especially regarding information and explanation, is important for patient satisfaction. Overall, while unmet expectations do not uniformly reduce patient satisfaction across all contexts, they can have a negative impact in certain situations, particularly when it comes to medications and specific interventions like surgery. Patient satisfaction appears to be more closely related to the fulfillment of expectations regarding information and explanation, and the method of measuring expectations can influence the reported number of expectations without necessarily affecting satisfaction levels.
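Several of the instrument comparisons above (for example, 83% vs. 28% of patients expressing a test expectation on the long vs. short instrument in PUBMED:11176547) are simple tests of two proportions. The sketch below shows how such a P-value can be computed generically; the equal 145/145 split of the 290 subjects is an assumption for illustration, not the authors' analysis code.

```python
from scipy.stats import chi2_contingency

def compare_proportions(hits_a, n_a, hits_b, n_b):
    """Chi-square test on a 2x2 contingency table of yes/no counts."""
    table = [[hits_a, n_a - hits_a],
             [hits_b, n_b - hits_b]]
    chi2, p, dof, _ = chi2_contingency(table)
    return chi2, p

# Illustrative counts: ~83% of 145 vs ~28% of 145 expressing an expectation.
chi2, p = compare_proportions(hits_a=120, n_a=145, hits_b=41, n_b=145)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")  # p << 0.001, as reported
```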
Instruction: Do clinically insignificant tumors of the prostate exist? Abstracts: abstract_id: PUBMED:19077395 Do clinically insignificant tumors of the prostate exist? Background: The discrepancy between minimal disease on biopsy and disease found in the subsequent prostatectomy specimen, in terms of the size and grade of tumor, extracapsular extension or positive margins, led several authors to dispute the existence of clinically insignificant impalpable tumors of the prostate. However, considering that prostate-specific antigen (PSA) is an indicator of prostate malignancy, and since many impalpable prostatic carcinomas (PCs) are detected by a combination of PSA, transrectal ultrasound and needle biopsy (T1c), in the era of PSA screening it is expected that most of the impalpable tumors found incidentally at transurethral resection of the prostate (stage T1a/b) could be clinically insignificant. Aim: The aim of this study was to identify the characteristics of latent, impalpable PCs and to analyze the incidence of clinically insignificant PCs among hypothetical stage T1 prostate cancers in tumors found incidentally at postmortem examination. Methods: We examined 40 cases of impalpable PCs found in 212 prostate autopsy specimens of men between 30 and 98 years of age who died of diseases other than carcinoma of the prostate and related conditions. Results: Most T1 histological PCs (57.5%) had a Gleason score between 2 and 4, while 30% had a Gleason score between 5 and 6. Only 5 (12.5%) had a Gleason score of 7 or above. Twenty-nine of 40 stage T1 histological cancers (67.5%) had a volume of <1 cm³. The highest volume tumors were those of intermediate and high grade (Gleason sums 5-8). Among tumors with volumes of <1 cm³, 96.55% were confined within the prostatic capsule. Conclusions: The majority of impalpable PCs were low-volume, well-differentiated tumors corresponding to clinically insignificant neoplasms. Similar characteristics could be attributed to most of the impalpable carcinomas detected after prostatectomy in clinical practice. abstract_id: PUBMED:33951662 Sextant Biopsy-Based Criteria for Clinically Insignificant Prostate Cancer Are Also Valid for the 12-Core Prostate Biopsy Scheme: A Multicenter Study of Urooncology Association, Turkey. Background: Epstein criteria based on sextant biopsy are assumed to be valid for 12-core biopsies. However, very scarce information is present in the current literature to support this view. Objectives: To investigate the validity of the Epstein criteria for clinically insignificant prostate cancer (PCa) in a cohort biopsied with the currently utilized 12-core prostate biopsy (TRUS-Bx) scheme in patients with low-risk and intermediate-risk PCa. Method: Pathological findings were separately evaluated in the areas matching the sextant biopsy (6-core paramedian) scheme and in the full 12-core scheme. Patients were divided into 2 groups according to the final pathology report of RP as true clinically significant PCa (sPCa) and insignificant PCa (insPCa) groups. Predictive factors (including the Epstein criteria) and cutoff values for the presence of insPCa were separately evaluated for the 6- and 12-core TRUS-Bx schemes. Then, different predictive models based on the Epstein criteria with or without additional biopsy findings were created. Results: A total of 442 patients were evaluated. PSA density, biopsy GS, percentage of tumor and number of positive cores, PNI, and HG-PIN were independent predictive factors for insPCa in both TRUS-Bx schemes.
For the 12-core scheme, the best cutoff values of tumor percentage and number of positive cores were found to be ≤50% (OR: 3.662) and 1.5 cores (OR: 2.194), respectively. The best predictive model was the one that added 3 additional factors (absence of PNI, absence of HG-PIN, and number of positive cores) to the Epstein criteria (OR: 6.041). Conclusions: Using a cutoff value of "1" for the number of positive biopsy cores together with the absence of biopsy PNI and HG-PIN findings can be more useful for improving the prediction model of the Epstein criteria in the 12-core biopsy scheme. abstract_id: PUBMED:26435643 Validation of revised Epstein's criteria for insignificant prostate cancer prediction in a Greek subpopulation. Background: Different treatment options for patients with prostate cancer (PCa) are applicable after stratifying patients according to various classification criteria. The purpose of our study is to evaluate the revised Epstein's criteria for insignificant PCa prediction in a Greek subpopulation. Methods: During a 4-year period, 172 Cretan patients underwent radical retropubic prostatectomy in our institution. 23 of them met the revised Epstein's criteria for the presence of clinically insignificant PCa (clinical stage T1c, prostate specific antigen density < 0.15 ng/ml/g, absence of Gleason pattern 4-5, <3 positive biopsy cores, presence of <50% tumor per core) during pre-treatment evaluation and were retrospectively included in the study. Post-surgery outcomes were evaluated, including pathological stage, surgical margins and Gleason score upgrade. Results: Organ-confined disease and insignificant PCa were predicted with 74% and 31% accuracy, respectively. These figures are markedly lower than those derived from similar studies worldwide. Conclusions: Due to the high variation in the prediction accuracy of the revised Epstein's criteria observed worldwide, the development and implementation of novel tools/nomograms with a greater predictive accuracy is still warranted. Hippokratia 2015, 19 (1): 30-33. abstract_id: PUBMED:31411968 A Multicentre Analysis of the Detection of Clinically Significant Prostate Cancer Following Transperineal Image-fusion Targeted and Nontargeted Systematic Prostate Biopsy in Men at Risk. Background: Prostate biopsy guided by magnetic resonance imaging (MRI) is increasingly used to obtain tissue from men with suspected prostate cancer (PC). Objective: To report a multicentre series of image-fusion transperineal prostate biopsies and compare the diagnostic yield of clinically significant PC (csPC) between targeted and nontargeted biopsies. Design, Setting, And Participants: The study included 640 consecutive patients with elevated prostate specific antigen (PSA) presenting for first biopsy or following a previous negative transrectal biopsy under the care of 13 urologists in 11 centres in the UK (April 2014-June 2017). Intervention: Multiparametric MRI was carried out in 61 approved prostate MRI centres, followed by targeted-only (n=283) or targeted plus nontargeted (n=357) transperineal rigid image-fusion biopsy (MIM-Symphony-DX). Outcome Measurements And Statistical Analysis: Rates of csPC and insignificant cancer detection in targeted and nontargeted biopsies were measured using a number of thresholds to define clinical significance. The primary definition was Gleason ≥4+3 or cancer of any grade measuring ≥6 mm.
Results And Limitations: The mean age, median PSA, and median prostate volume for the cohort were 63.8 yr (standard deviation [SD] 8.4), 6.3 ng/ml (SD 5.8), and 42.0 cm³ (SD 24.7), respectively. Overall, 276/640 men (43.1%) were diagnosed with csPC. csPC was detected from targeted biopsies alone in 263/640 cases (41.1%). Of the 357 men who underwent nontargeted biopsies, three (0.8%) had csPC exclusively in nontargeted cores, with no evidence of cancer in targeted cores. Overall, 32/357 (9.0%) had csPC in nontargeted biopsies regardless of the targeted biopsy findings. Clinically insignificant disease in nontargeted biopsies was detected in 93/357 men (26.1%). Our findings were consistent across all other thresholds of clinical significance. Limitations include the lack of nontargeted biopsies in all men. Conclusions: In this large multicentre series, nontargeted prostate biopsy cores had a low yield of csPC and a high yield of clinically insignificant PC. An image-fusion targeted-biopsy-only approach maintains high detection for csPC and low detection of clinically insignificant cancers. Patient Summary: In this report, we found that following prostate multiparametric magnetic resonance imaging and targeted transperineal biopsies of suspicious areas, the clinical value of performing additional extensive unguided biopsies of nonsuspicious areas is limited and can often find insignificant cancers that do not need treatment. abstract_id: PUBMED:8544268 The definition and preoperative prediction of clinically insignificant prostate cancer. Objectives: To define clinically insignificant prostate cancer according to cancer volume, grade, cancer doubling time, and life expectancy; and to determine how many insignificant cancers are removed by radical prostatectomy. Design And Patients: Clinically insignificant cancer was defined as a tumor that would give rise to no more than 20 cm³ of cancer within the prostate by the time of expected patient death (1990 life tables) and whose Gleason score was less than 4 in 40- to 49-year-olds, 5 in 50- to 59-year-olds, 6 in 60- to 69-year-olds, and 7 in 70- to 79-year-olds. Four definitions were formulated based on assumed cancer volume doubling times of 2, 3, 4, and 6 years. Using these four definitions, we reviewed 337 totally embedded prostates removed at Mayo Clinic between 1991 and 1993 for clinical stage T1c through T3 cancer to determine how many contained clinically insignificant cancer. Main Outcome Measures: Clinically significant vs clinically insignificant prostate cancer. Results: For cancer volume doubling time of 2, 3, 4, and 6 years, clinically insignificant cancer was identified in one (0.3%), 13 (3.9%), 25 (7.4%), and 49 (14.5%) of 337 prostatectomy specimens, respectively. Conclusions: Clinically insignificant prostate cancer can be defined by cancer volume, grade, cancer volume doubling time, and life expectancy of the patient. According to our definitions, most men treated with radical prostatectomy have clinically significant cancer. abstract_id: PUBMED:31130434 Magnetic Resonance Imaging-targeted Biopsy Versus Systematic Biopsy in the Detection of Prostate Cancer: A Systematic Review and Meta-analysis. Context: Magnetic resonance imaging (MRI)-targeted prostate biopsy (MRI-TB) may be an alternative to systematic biopsy for diagnosing prostate cancer.
Objective: The primary aims of this systematic review and meta-analysis were to compare the detection rates of clinically significant and clinically insignificant cancer by MRI-TB with those by systematic biopsy in men undergoing prostate biopsy to identify prostate cancer. Evidence Acquisition: A literature search was conducted using the PubMed, Embase, Web of Science, Cochrane Library, and Clinicaltrials.gov databases. We included prospective and retrospective paired studies where the index test was MRI-TB and the comparator test was systematic biopsy. We also included randomised controlled trials (RCTs) if one arm included MRI-TB and another arm included systematic biopsy. The risk of bias was assessed using a modified Quality Assessment of Diagnostic Accuracy Studies-2 checklist. In addition, the Cochrane risk of bias 2.0 tool was used for RCTs. Evidence Synthesis: We included 68 studies with a paired design and eight RCTs, comprising a total of 14709 men who either received both MRI-TB and systematic biopsy, or were randomised to receive one of the tests. MRI-TB detected more men with clinically significant cancer than systematic biopsy (detection ratio [DR] 1.16 [95% confidence interval {CI} 1.09-1.24], p<0.0001) and fewer men with clinically insignificant cancer than systematic biopsy (DR 0.66 [95% CI 0.57-0.76], p<0.0001). The proportion of cores positive for cancer was greater for MRI-TB than for systematic biopsy (relative risk 3.17 [95% CI 2.82-3.56], p<0.0001). Conclusions: MRI-TB is an attractive alternative diagnostic strategy to systematic biopsy. Patient Summary: We evaluated the published literature, comparing two methods of diagnosing prostate cancer. We found that biopsies targeted to suspicious areas on magnetic resonance imaging were better at detecting prostate cancer that needs to be treated and avoiding the diagnosis of disease that does not need treatment than the traditional systematic biopsy. abstract_id: PUBMED:25060664 Does cumulative prostate cancer length (CCL) in prostate biopsies improve prediction of clinically insignificant cancer at radical prostatectomy in patients eligible for active surveillance? Objectives: To evaluate if cumulative prostate cancer length (CCL) on prostate needle biopsy divided by the number of biopsy cores (CCL/core) could improve prediction of insignificant cancer on radical prostatectomy (RP) in patients with prostate cancer eligible for active surveillance (AS). Patients And Methods: Patients diagnosed with prostate cancer on extended (≥10 cores) biopsy with an initial prostate-specific antigen (iPSA) level of <15 ng/mL, clinical stage (cT) ≤ 2a, and highest biopsy Gleason score 3 + 3 = 6 or 3 + 4 = 7 with <3 positive cores who underwent RP were included in the study. The CCL/core and presence of insignificant cancer (organ-confined, volume <0.5 mL, Gleason score at RP ≤6) were recorded. pT2 prostate cancer with RP Gleason score ≤3 + 4 = 7 and volume <0.5 mL was categorised as low-tumour-volume organ-confined disease (LV-OCD). Results: In all, 221 patients met the inclusion criteria: the mean age was 59 years and the median iPSA level was 4.5 ng/mL. The clinical stage was cT1 in 86% of patients; biopsy Gleason score was 3 + 3 = 6 in 67% (group 1) and 3 + 4 = 7 in 33% of patients (group 2). The maximum percentage of biopsy core involvement was <50% in 85%; the median CCL/core was 0.15 mm.
Group 2 was associated with a higher number of positive cores, maximum percentage core involvement, total prostate cancer length, and CCL/core. Group 1 was more likely to have insignificant cancer (39%) or LV-OCD (54%) than group 2 (3% and 23%, respectively). Group 2 had significantly higher RP Gleason score and pathological stage. Univariate analysis of group 1 showed that the iPSA level, maximum percentage core involvement, prostate cancer length, and CCL/core were all significantly associated with insignificant cancer and LV-OCD. For group 2, the number of positive cores (1 vs 2) was also significantly associated with LV-OCD. On multivariate logistic regression analysis, maximum percentage core involvement of <50% and number of positive cores (1 vs 2) were independent predictors of insignificant cancer in group 1; biopsy Gleason score, maximum percentage core involvement of <50% and prostate cancer length of <3 mm or CCL/core of <0.2 mm were all independent predictors of LV-OCD in the whole population. The maximum percentage of core involvement of <50% and prostate cancer length of <3 mm or CCL/core of <0.2 mm were also independent predictors of LV-OCD in group 1 patients. Conclusion: In patients eligible for AS, a CCL/core of <0.20 mm was significantly associated with insignificant cancer and LV-OCD. However, when parameters of cancer burden were considered, CCL/core did not independently add any additional value for predicting insignificant cancer in patients with biopsy Gleason score 6. The CCL/core was an independent predictor of LV-OCD in the whole population and in group 1 patients, although the model including prostate cancer length showed a slightly higher area under the receiver operating characteristic curve. abstract_id: PUBMED:33488712 LMTK2 as Potential Biomarker for Stratification between Clinically Insignificant and Clinically Significant Prostate Cancer. A subset of prostate tumors tends to grow slowly and does not require active treatment. Therefore, stratification between patients with clinically significant and clinically insignificant prostate cancer (PC) remains a vital issue to avoid overtreatment. The fast development of genetic technologies has accelerated the development of next-generation molecular tools for reliable PC diagnosis. The aim of this study is to evaluate the diagnostic value of molecular biomarkers (CRISP3, LMTK2, and MSMB) for separation of PC cases from benign prostatic changes and, more specifically, for identification of clinically significant PC from the whole pool of PC cases in patients with rising PSA levels. Patients (n = 200) who had rising PSA (PSA II) after negative transrectal systematic prostate biopsy due to elevated PSA (PSA I) were eligible for the study. In addition to PSA concentration, PSA density was calculated for each patient. Gene expression level was measured in peripheral blood samples of cases applying RT-PCR, while the MSMB (-57 C/T) polymorphism was identified by pyrosequencing. LMTK2 and MSMB significantly differentiated the control group from both BPD and PC groups. MSMB expression tended to increase from the major alleles of the CC genotype to the minor alleles of the TT genotype. PSA density was the only clinical characteristic that significantly differentiated clinically significant PC from clinically insignificant PC. Therefore, LMTK2 expression and PSA density significantly distinguished between clinically significant PC and clinically insignificant PC.
PSA density rather than PSA can differentiate PC from benign prostate disease and, in combination with LMTK2, assist in stratification between clinically insignificant and clinically significant PC. abstract_id: PUBMED:21215436 Repeat prostate biopsy and the incremental risk of clinically insignificant prostate cancer. Objectives: To determine the incremental risk of diagnosis of clinically insignificant prostate cancer with serial prostate biopsies. Methods: We reviewed our institutional radical prostatectomy (RP) database comprising 2411 consecutive patients undergoing RP. We then stratified patients by the prostate biopsy on which their cancer was diagnosed and correlated biopsy number with the risk of clinically insignificant disease and adverse pathology at radical prostatectomy. Results: A total of 1867 (77.4%), 281 (11.9%), and 175 (7.3%) patients underwent 1, 2, and 3 or more prostate biopsies, respectively, before RP. Increasing number of prostate biopsies was associated with increasing prostate volume (P <.01), prostate-specific antigen (P <.01), associated prostate intraepithelial neoplasia (P <.01), and increased likelihood of clinical Gleason 6 or less disease (P <.01). On pathologic analysis, increasing number of prostate biopsies was associated with increased risk of low-volume (P <.01), organ-confined (P <.01) disease. The risk of clinically insignificant disease was found to be 31.1%, 43.8%, and 46.8% in those undergoing 1, 2, and 3+ prostate biopsies, respectively. Conversely, the risk of adverse pathology was found to be 64.6%, 53.0%, and 52.0% in those undergoing 1, 2, and 3+ prostate biopsies, respectively. Conclusions: Patients undergoing multiple prostate biopsies before RP are more likely to harbor clinically insignificant prostate cancer than those who only undergo 1 biopsy before resection. Nonetheless, the risk of adverse pathology in patients undergoing serial biopsies remains significant. The increased risk of prostate cancer overdiagnosis and overtreatment must be balanced with the continued risk of clinically significant disease when counseling patients regarding serial biopsies. abstract_id: PUBMED:26612463 Integrated analysis of the genomic instability of PTEN in clinically insignificant and significant prostate cancer. Patients with clinically insignificant prostate cancer remain a major over-treated population. PTEN loss is one of the most recurrent alterations in prostate cancer associated with an aggressive phenotype; however, the occurrence of PTEN loss in insignificant prostate cancer has not been reported and its role in the separation of insignificant from significant prostate cancer is unclear. An integrated analysis of PTEN loss was, therefore, performed for structural variations, point mutations and protein expression in clinically insignificant (48 cases) and significant (76 cases) prostate cancers treated by radical prostatectomy. Whole-genome mate pair sequencing was performed on tumor cells isolated by laser capture microdissection to characterize PTEN structural alterations. Fluorescence in situ hybridization probes were constructed from the sequencing data to detect the spectrum of these PTEN alterations. PTEN loss by mate pair sequencing and fluorescence in situ hybridization occurred in 2% of insignificant, 13% of large volume Gleason score 6, and 46% of Gleason score 7 and higher cancers.
In Gleason score 7 cancers with PTEN loss, PTEN alterations were detected in both Gleason patterns 3 and 4 in 57% of cases by mate pair sequencing, 75% by in situ hybridization and 86% by immunohistochemistry. PTEN loss by sequencing was strongly associated with TMPRSS2-ERG fusion, biochemical recurrence, PTEN loss by in situ hybridization and protein loss by immunohistochemistry. The complex nature of PTEN rearrangements was unveiled by sequencing, detailing the heterogeneous events leading to homozygous loss of PTEN. PTEN point mutation was present in 5% of clinically significant tumors and not in insignificant cancer or high-grade prostatic intraepithelial neoplasia. PTEN loss is infrequent in clinically insignificant prostate cancer, and is associated with higher-grade tumors. Detection of PTEN loss in Gleason score 6 cancer in a needle biopsy specimen indicates a higher likelihood of clinically significant prostate cancer. Answer: Yes, clinically insignificant tumors of the prostate do exist. The concept of clinically insignificant prostate cancer (PC) refers to tumors that are unlikely to cause symptoms or affect a man's lifespan. These tumors are often low-volume, well-differentiated, and confined within the prostatic capsule (PUBMED:19077395). The majority of impalpable prostate cancers (PCs) found in autopsy studies are low-volume and well-differentiated, corresponding to clinically insignificant neoplasms (PUBMED:19077395). The Epstein criteria, which were initially based on sextant biopsy, have been validated for the 12-core prostate biopsy scheme and are used to identify clinically insignificant PC. These criteria include factors such as PSA density, biopsy Gleason score, percentage of tumor, number of positive cores, perineural invasion (PNI), and high-grade prostatic intraepithelial neoplasia (HG-PIN) (PUBMED:33951662). However, the accuracy of predicting clinically insignificant PC can vary. For example, a study in a Greek subpopulation found that the revised Epstein's criteria predicted organ-confined disease and insignificant PC with lower accuracy than similar studies worldwide, suggesting that novel tools or nomograms with greater predictive accuracy are needed (PUBMED:26435643). Advancements in biopsy techniques, such as transperineal image-fusion targeted biopsy, have shown that targeted biopsies alone maintain high detection rates for clinically significant PC and low detection of clinically insignificant cancers (PUBMED:31411968). Similarly, magnetic resonance imaging-targeted biopsy (MRI-TB) has been found to detect more clinically significant cancer and fewer clinically insignificant cancers compared to systematic biopsy (PUBMED:31130434). The definition of clinically insignificant cancer can also be based on cancer volume, grade, doubling time, and life expectancy. Most men treated with radical prostatectomy have clinically significant cancer, but a small percentage have cancers that meet the criteria for being clinically insignificant (PUBMED:8544268). Furthermore, the cumulative prostate cancer length (CCL) in prostate biopsies, when divided by the number of biopsy cores, has been evaluated as a predictor of insignificant cancer at radical prostatectomy in patients eligible for active surveillance (PUBMED:25060664).
Instruction: Mobile phones: the next step towards healthcare delivery in rural India? Abstracts: abstract_id: PUBMED:25133610 Mobile phones: the next step towards healthcare delivery in rural India? Background: Given the ubiquity of mobile phones, their use to support healthcare in the Indian context is inevitable. It is, however, necessary to assess end-user perceptions regarding mobile health interventions, especially in the rural Indian context, prior to their use in healthcare. This would contextualize the use of mobile phone communication for health to the 70% of the country's population that resides in rural India. Objectives: To explore the acceptability of delivering healthcare interventions through mobile phones among users in a village in rural Bangalore. Methods: This was an exploratory study of 488 mobile phone users residing in a village near Bangalore city, Karnataka, South India. A pretested, translated, interviewer-administered questionnaire was used to obtain data on mobile phone usage patterns and acceptability of the mobile phone as a tool for health-related communication. The data are described using basic statistical measures. Results: The primary use of mobile phones was to make or receive phone calls (100%). Text messaging (SMS) was used by only 70 (14%) of the respondents. Most of the respondents, 484 (99%), were willing to receive health-related information on their mobile phones and did not consider receiving such information an intrusion into their personal life. While receiving reminders for drug adherence was acceptable to most of our respondents, 479 (98%), 424 (89%) preferred voice calls alone to other forms of communication. Nearly all were willing to use their mobile phones to communicate with health personnel in emergencies and 367 (75%) were willing to consult a doctor via the phone in an acute illness. Factors such as sex, English literacy, employment status, and presence of chronic disease affected preferences regarding mode and content of communication. Conclusion: The mobile phone, as a tool for receiving health information and supporting healthcare through mHealth interventions, was acceptable in the rural Indian context. abstract_id: PUBMED:31488497 mHealth and big-data integration: promises for healthcare system in India. Background: The use of mobile devices in health (mobile health/mHealth) coupled with related technologies promises to transform global health delivery by creating new delivery models that can be integrated with existing health services. These delivery models could facilitate healthcare delivery into rural areas where there is limited access to high-quality care. Mobile technologies, Internet of Things and 5G connectivity may hold the key to supporting increased velocity, variety and volume of healthcare data. Objective: The purpose of this study is to identify and analyse challenges related to the current status of India's healthcare system, with a specific focus on mHealth and big-data analytics technologies. To address these challenges, a framework is proposed for integrating the generated mHealth big-data and applying the results in India's healthcare. Method: A critical review was conducted using electronic sources between December 2018 and February 2019, limited to English language articles and reports published from 2010 onwards. Main Outcome: This paper describes trending relationships in mHealth with big-data as well as the accessibility of national opportunities when specific barriers and constraints are overcome.
The paper concentrates on the healthcare delivery problems faced by rural and low-income communities in India to illustrate more general aspects and identify key issues. A model is proposed that utilises generated data from mHealth devices for big-data analysis that could result in providing insights into the Indian population's health status. The insights could be important for public health planning by the government towards reaching Universal Health Coverage. Conclusion: Biomedical, behavioural and lifestyle data from individuals may enable customised and improved healthcare services to be delivered. The analysis of data from mHealth devices can reveal new knowledge to effectively and efficiently support national healthcare demands in less developed nations, without fully accessible healthcare systems. abstract_id: PUBMED:29755887 Current Impact, Future Prospects and Implications of Mobile Healthcare in India. India has a diverse geographical landscape and a predominantly rural population. Telemedicine is identified as one of the technological pillars to support healthcare services in this region, but is associated with high cost and complex infrastructure, thus restricting its wider use. Mobile-based healthcare (m-Health) services may provide a practical, promising alternative approach to support healthcare facilities. India has a high mobile user base, both in cities and in rural regions. The appropriate identification of mobile data transmission technology for healthcare services is vital to optimally use the available technology. In this article, we review current telecommunication systems in India, specifically the evolving consensus on the need for CDMA (Code Division Multiple Access, a wireless technology used by leading international and national operators; deployed in the 800 MHz band, it is economically viable and far-reaching with a high quality of service) to continue its operations in India along with mobile healthcare services. We also discuss how healthcare services may be extended using m-Health technologies, given the availability of telecommunications and related services. abstract_id: PUBMED:28554298 Nursing in rural India. Nursing in India is evolving, and the inequality of practice and standards between cities and villages affects the health of the population. This article describes the challenges of providing healthcare in rural parts of India, and how nurses have made the best use of available resources to strive to achieve healthcare goals. abstract_id: PUBMED:35847763 Rural community health workers' readiness for mobile-phone based telemedicine uptake in India. Objective: Healthcare services using mobile-phone based telemedicine provide simple technology that does not require sophisticated equipment. This study assessed community health workers' knowledge, attitude, and practice (i.e., their readiness) at the village level for uptake of mobile-phone based telemedicine. Materials and Methods: This cross-sectional study was conducted among 80 community health workers, including Auxiliary Nurse Midwives, Multipurpose Health Workers and Accredited Social Health Activists working in a rural health block of India. A pre-tested, semi-structured, interviewer-assisted, self-administered questionnaire was used to assess their mobile-phone based telemedicine readiness. Results: Sixty (75.0%) health workers owned mobile phones. The median readiness score for mobile-phone based telemedicine was 109.0.
The Accredited Social Health Activists showed a better attitude toward mobile-phone based telemedicine than others. There was a significant moderate positive correlation (r=0.67) between the knowledge and practice domains. Community health workers who had smartphones showed a significantly better attitude than those who did not. Conclusion: Training programs on telemedicine service delivery, focused on Auxiliary Nurse Midwives/Multipurpose Health Workers, can improve their attitudes towards telemedicine. The better attitude of the Accredited Social Health Activists must be leveraged to initiate mobile-phone based telemedicine services on a pilot basis initially, to be scaled up later in other settings. abstract_id: PUBMED:22416147 Transforming trauma healthcare delivery in rural areas by use of an integrated call center. Introduction: There is poor penetration of trauma healthcare delivery in rural areas. On the other hand, mobile penetration in India is now averaging 80%, with most families having access to a mobile phone. Aims And Objectives: The aim of this study was to assess the implementation and socioeconomic impact of a call center in providing healthcare delivery for patients with head and spinal injuries. Materials And Methods: This was a prospective observational study carried out over a 6-month period at a level I trauma center in New Delhi, India. A nine-seater call center was outsourced to a private company and the hospital's electronic medical records were integrated with the call-center operations. The call center was given responsibility for maintaining appointments and scheduling clinics for the whole hospital as well as ensuring follow-up visits. Trained call-center staff handled simple patient queries and referred the rest via email to the concerned doctors. A telephonic survey was done prior to the start of call-center operations and after 3 months to assess user satisfaction. Results: The initial cost of outsourcing the call center was Rs 1.6 lakhs (US$ 4000), with a recurring cost of Rs 80,000 (US$ 2000) per month. A total of 484 patients were admitted in the department of Neurosurgery during the study period. Of these, 63% (n=305) were from rural areas. Patients' overall experience for clinic visits improved markedly following implementation of the call center. Patient satisfaction for follow-up visits increased from a mean of 32% to 96%. Ninety-five percent of patients reported a significant decrease in waiting time in clinics, with 80.4% reporting improved doctor-patient interaction. A total of 52 visits could be postponed/cancelled for patients living in far-flung areas, resulting in major socioeconomic benefits to these families. Conclusions: As shown by our case study, call centers have the potential to revolutionize delivery of trauma healthcare to rural areas in an extremely cost-effective manner. abstract_id: PUBMED:29204014 Mental Healthcare Delivery in Rural Greece: A 10-year Account of a Mobile Mental Health Unit. Introduction: Patients living in rural and remote areas may have limited access to mental healthcare due to lack of facilities and socioeconomic reasons, and this is the case in rural areas of Eastern European countries. In Greece, community mental health service delivery in rural areas has been implemented through the development of the Mobile Mental Health Units (MMHUs).
Methods: We present a 10-year account of the operation of the MMHU of the prefectures of Ioannina and Thesprotia (MMHU I-T) and report on the impact of the service on mental health delivery in the catchment area. The MMHU I-T is a multidisciplinary community mental health team which delivers services in rural and mountainous areas of Northwest Greece. Results: The MMHU I-T has become an integral part of the local primary care system and is well known to the population of the catchment area. By the end of 2016, the majority of patients (60%) were self-referred or family-referred, compared to 24% in the first 2 years. Currently, the number of active patients is 293 (mean age 63 years, 49.5% are older adults), and the mean caseload for each member of the team is 36.6. A significant proportion of patients (28%) receive care with regular domiciliary visits, and the provision of home-based care was correlated with the age of the patients. Within the first 2 years of operation of the MMHU I-T, hospitalizations of treatment-engaged patients were reduced significantly by 30.4%, whereas the treatment engagement rates of patients with psychotic disorders were 67.2% in 5 years. Conclusions: The MMHU I-T and other similar units in Greece are a successful paradigm of a low-cost service which promotes mental health in rural, remote, and deprived areas. This model of care may be informative for clinical practice and health policy given the ongoing recession and health budget cuts. It suggests that rural mental healthcare may be effectively delivered by integrating generic community mental health mobile teams into the primary care system. abstract_id: PUBMED:24621929 Mobile phone-based clinical guidance for rural health providers in India. There are few tried and tested mobile technology applications to enhance and standardize the quality of health care by frontline rural health providers in low-resource settings. We developed a media-rich, mobile phone-based clinical guidance system for management of fevers, diarrhoeas and respiratory problems by rural health providers. Using a randomized control design, we field tested this application with 16 rural health providers and 128 patients at two rural/tribal sites in Tamil Nadu, Southern India. Protocol compliance for both groups, phone usability, acceptability and patient feedback for the experimental group were evaluated. Linear mixed-model analyses showed statistically significant improvements in protocol compliance in the experimental group. Usability and acceptability among patients and rural health providers were very high. Our results indicate that mobile phone-based, media-rich procedural guidance applications have significant potential for achieving consistently standardized quality of care by diverse frontline rural health providers, with patient acceptance. abstract_id: PUBMED:37365536 Rural-urban difference in meeting the need for healthcare and food among older adults: evidence from India. Background: Due to changes in demographic and epidemiological scenarios, and the gradual increase in the older population, India is yet to prepare for rising nutrition and health-related issues among older adults in the coming decades. The process of ageing and its associated aspects have been found to have an urban-rural divide. Thus, this study examines rural/urban differences in unmet needs for food and healthcare among Indian older adults.
Methods: A sample of 31,464 older adults aged 60 years and above was considered in the study from the Longitudinal and Ageing Survey of India (LASI). The bivariate analysis was done using the sampling weights. Logistic regression and decomposition analyses were used to explain the rural-urban gap in the unmet needs for food and healthcare among Indian older adults. Results: Rural older adults were more vulnerable to unmet needs for healthcare and food than their urban counterparts. The factors that contributed most to the difference in unmet need for food between urban and rural areas were education (34.98%), social group (6.58%), living arrangements (3.34%) and monthly per capita expenditure (MPCE) (2.84%). Similarly, for the unmet need for health, the factors that contributed most to the rural-urban gap were education (28.2%), household size (2.32%), and MPCE (1.27%). Conclusion: The study indicates greater vulnerability among rural older adults compared to urban older individuals. Targeted policy-level efforts should be initiated considering the economic and residential vulnerability identified in the study. There is a need for primary care services that can provide targeted help to older adults in rural communities. abstract_id: PUBMED:30959954 Utilization of a Mobile Dental Vehicle for Oral Healthcare in Rural Areas. Oral diseases remain one of the major global public health challenges, and the worldwide urban-rural disparities in oral health are significant. Residents in rural areas generally suffer from a higher prevalence and severity of dental caries and periodontal disease, yet they face numerous difficulties and barriers in accessing oral healthcare. Conventional strategies, such as the building of dental clinics or hospitals, or the provision of outreach services using disposable materials, are neither practical nor effective in rural settings. Mobile dental vehicles (MDVs) have been proposed as an alternative strategy to supplement traditional oral healthcare in many regions. They have usually been utilized in school-based oral health programs, in providing dental care to the homeless or migrants, and in screening programs for various oral diseases. Due to their high mobility, MDVs are particularly valuable for the underserved populations living in rural areas. The advance of dental devices enables MDVs to be operated in a self-sufficient manner. This allows the MDV to function almost as well as a conventional dental clinic, providing a variety of dental treatments, including scaling, restoration, and oral surgery. This article discusses the use of MDVs as a solution to urban-rural inequality in receiving oral healthcare. Answer: The exploration of mobile phones as a step towards healthcare delivery in rural India reveals a positive outlook. In rural Bangalore, an exploratory study of 488 mobile phone users indicated a high willingness to receive health-related information via mobile phones, with 99% of respondents open to it and 98% accepting reminders for drug adherence. Voice calls were the preferred mode of communication for 89% of participants, and 75% were willing to consult a doctor via phone in acute illnesses. Factors such as sex, English literacy, employment status, and presence of chronic disease influenced communication preferences (PUBMED:25133610). The integration of mHealth and big-data analytics in India's healthcare system promises to transform healthcare delivery, especially in rural areas with limited access to high-quality care.
The analysis of data from mHealth devices can provide insights into population health status, which is crucial for public health planning and reaching Universal Health Coverage (PUBMED:31488497). Mobile healthcare (m-Health) services are seen as a practical alternative to telemedicine in India, where there is a high mobile user base in both urban and rural regions. The availability of telecommunications and related services is vital for the optimal use of technology for healthcare services (PUBMED:29755887). Community health workers in rural India have shown readiness for mobile-phone based telemedicine, with 75% owning mobile phones and a positive correlation between knowledge and practice domains. Training programs focused on Auxiliary Nurse Midwives/Multipurpose Health Workers can improve attitudes towards telemedicine (PUBMED:35847763). The use of an integrated call center has demonstrated the potential to revolutionize trauma healthcare delivery in rural areas in a cost-effective manner, improving patient satisfaction and reducing waiting times in clinics (PUBMED:22416147). In summary, mobile phones and related technologies appear to be a viable next step towards enhancing healthcare delivery in rural India, with high acceptability among users, the potential for improved healthcare services, and readiness among community health workers for telemedicine uptake.
Instruction: Acid-induced esophageal shortening in humans: a cause of hiatus hernia? Abstracts: abstract_id: PUBMED:7958685 Esophageal shortening induced by short-term intraluminal acid perfusion in opossum: a cause for hiatus hernia? Background/aims: Reflux esophagitis and hiatal hernia commonly coexist, yet a cause-and-effect relationship remains unclear. This study examined whether acute acid-induced esophageal injury induces longitudinal esophageal shortening in the opossum model. Methods: Esophageal length was measured continuously using a specially designed strain gauge transducer in anesthetized opossums while the midesophagus was perfused intraluminally with either normal saline or 100 mmol/L HCl. After a stabilization period, the test solution was perfused for 150 minutes. The effects of bilateral cervical vagotomy and atropine (60 μg/kg intravenously) were determined in separate groups. Parallel studies in which resting lower esophageal sphincter pressure was measured before and after prolonged intraesophageal acid perfusion were performed. Results: Esophageal acid perfusion induced acute epithelial injury as determined histologically. This was associated with significant esophageal shortening compared with saline-perfused controls and was not affected by vagotomy or atropine. In contrast, acid perfusion invariably induced a decrease in resting lower esophageal sphincter pressure. Conclusions: Acute acid-induced esophageal mucosal injury induces longitudinal esophageal shortening that does not involve vagal pathways or cholinergic neurons. This raises the possibility that esophagitis could contribute to the development of hiatal hernia by inducing esophageal long axis shortening. abstract_id: PUBMED:11111106 Acid-induced esophageal shortening in humans: a cause of hiatus hernia? Background: Hiatus hernia and gastroesophageal reflux disease commonly coexist, and there is pathophysiological evidence that the presence of a hiatus hernia contributes to abnormal acid reflux. However, the cause of hiatus hernia remains unclear. In an animal model, it has been shown that acute acid injury to the esophagus results in esophageal shortening, raising the possibility that reflux esophagitis per se can contribute to the formation of hiatus hernia by inducing esophageal shortening. Aim: To determine whether luminal acid produces esophageal shortening in humans. Methods: Twelve volunteers were each studied on two occasions, one week apart, in a double-blind, crossover trial. The location of the lower esophageal sphincter (LES), as well as the LES resting pressure and axial length, were determined at baseline and then again after 20 min of either acid or saline perfusion. Results: Acid perfusion did not induce significant changes in resting LES pressure but resulted in proximal migration of the LES (ie, esophageal shortening) by an average of 0.5 cm, with the largest proximal migration being 1.8 cm. In contrast, saline perfusion resulted in slight distal migration of the LES (ie, esophageal lengthening). Conclusions: Intraluminal acid perfusion causes longitudinal axis shortening of the esophagus and suggests that gastroesophageal acid reflux may contribute to the cause of hiatus hernia. abstract_id: PUBMED:37581423 Impaired sliding between the lower esophageal sphincter and crural diaphragm (esophageal hiatus) in patients with achalasia esophagus.
Swallow-related axial shortening of the esophagus results in the formation of the phrenic ampulla in normal subjects; whether this is the case in the achalasia esophagus is not known. The goal is to study axial shortening of the esophagus and relative movement between the lower esophageal sphincter (LES) and crural diaphragm (CD) in normal subjects and patients with achalasia. A novel method, isoimpedance contour excursion at the lower edge of the LES as a marker of axial esophageal shortening, was validated using X-ray fluoroscopy (n = 5) and used to study axial shortening and separation between the LES and CD during peristalsis in normal subjects (n = 15) and patients with achalasia type 2 esophagus (n = 15). Abdominal CT scan images were used to determine the nature of tissue in the esophageal hiatus of control (n = 15) and achalasia patients (n = 15). Swallow-induced peristalsis resulted in an axial excursion of isoimpedance contours, which was quantitatively similar to that of the metal clip anchored to the LES on X-ray fluoroscopy (2.3 ± 1.4 vs. 2.1 ± 1.4 cm with deep inspiration and 2.7 ± 0.6 cm vs. 2.7 ± 0.6 cm with swallow-induced peristalsis). Esophageal axial shortening with swallows in patients with achalasia was significantly smaller than normal (1.64 ± 0.5 cm vs. 3.59 ± 0.4 cm, P < 0.001). Gray-level matrix analysis of CT images suggests more "fibrous" tissue and less fat in the hiatus of patients with achalasia. Lack of sliding between the LES and CD explains the low prevalence of hiatus hernia and the low compliance of the LES in the achalasia esophagus, which likely plays a role in the pathogenesis of achalasia. NEW & NOTEWORTHY Swallow-related axial shortening of the esophagus is reduced, and there is no separation between the lower esophageal sphincter and crural diaphragm (CD) with swallowing in patients with achalasia esophagus. Fat in the hiatal opening of the esophagus appears to be replaced with fibrous tissue in patients with achalasia, resulting in tight anchoring between the LES and CD. The above findings explain the low prevalence of hiatus hernia and the low compliance of the LES in achalasia esophagus. abstract_id: PUBMED:26192789 The role of excessive esophageal acid exposure in patients with gastroesophageal reflux disease. Gastroesophageal reflux disease (GERD), especially reflux esophagitis (RE), is characterized by excessive esophageal acid exposure. Transient lower esophageal sphincter (LES) relaxation is the major mechanism of acid reflux episodes in both healthy subjects (HS) and patients with GERD. In the sitting position, where acid reflux episodes often occur, there is no difference in the frequency of transient LES relaxations between these two groups; however, in patients with GERD, at 5 cm above the LES, the proportion of acid reflux episodes during transient LES relaxations is significantly greater than in the HS group. This difference is considered to be one of the causes of excessive esophageal acid exposure in patients with GERD, but its cause is still unclear. A recent study, which investigated the proportion of acid reflux episodes during transient LES relaxations at 2 and 7 cm above the LES, showed that there was no difference at 2 cm above the LES between HS and patients with RE, but at 7 cm they were significantly greater in patients with RE than in HS. Acid reflux at 2 cm above the LES is difficult to measure, but the proximal extent of refluxate could be one of the important factors in excessive esophageal acid exposure in patients with RE.
Ineffective esophageal motility, found in patients with moderate to severe RE, impairs esophageal bolus clearance of acid; therefore, both the proximal extent of refluxate and the delay of esophageal bolus clearance of acid could be major causes of excessive esophageal acid exposure. Hiatus hernia also causes acid reflux, due to its association with a hypotensive LES, and also impairs esophageal bolus clearance of acid. abstract_id: PUBMED:31826568 The relationship between acid reflux and esophageal motility, esophagitis and cardia morphology in gastroesophageal reflux disease. Objective: To analyze the relationship between the severity of esophageal acid reflux and esophageal motility, esophageal mucosal injury and the morphological anatomy of the gastroesophageal junction (GEJ) in patients with gastroesophageal reflux disease (GERD). Methods: The clinical data of GERD patients who underwent 24 h pH-impedance monitoring, gastroscopy and high-resolution manometry (HRM) from January 2016 to January 2019 in the Gastroesophageal Surgery Department of PLA Rocket Force Characteristic Medical Center were retrospectively analyzed. The patients were divided into a non-pathological acid reflux group, a mild pathological acid reflux group and a moderate to severe pathological acid reflux group according to the DeMeester score. The gender and age of each group were matched, with 60 cases in each group. Statistical analysis was performed to analyze the differences in upper esophageal sphincter pressure, lower esophageal sphincter (LES) pressure, LES length, length of ventral LES, percentage of ineffective swallowing, esophagitis, Hill grade of the GEJ, and hiatus hernia (HH) in each group. Comparison and correlation analyses were also carried out between the groups. Results: The male-female ratio was 33/27, and the age was (57±13) years in each group. Non-parametric analysis showed that the LES pressure and the length of the ventral LES decreased with the severity of acid reflux, and there was a statistical difference (P=0.033, P=0.015). The detection rate of HH by HRM increased significantly (χ² test, P=0.001) as well. The esophagitis score increased with the severity of acid reflux and there was a statistical difference (P<0.001). The detection rate of esophagitis increased significantly (χ² test, P<0.001) as well. The Hill grading score of the GEJ increased with the severity of acid reflux, and there was a statistical difference (P<0.001). The detection rate of HH by endoscopy increased significantly (χ² test, P<0.001) as well. The correlations between the DeMeester score and LES pressure, length of ventral LES, percentage of ineffective swallowing, esophagitis score, and Hill grade score were statistically significant (P<0.05). Conclusions: Low esophageal motility (such as low LES pressure) and anatomical abnormalities (abdominal esophageal shortening, GEJ flabbiness, and even HH formation) of the GEJ region are significantly associated with the severity of acid reflux. These factors may be important causes of increased acid reflux. In addition, the aggravation of acid reflux can also increase the incidence and severity of esophagitis. abstract_id: PUBMED:7498646 Attenuation of esophageal shortening during peristalsis with hiatus hernia. Background & Aims: Minimal quantitative information exists on esophageal shortening during peristalsis in the human esophagus. The aim of this study was to ascertain the effect of hiatus hernia on longitudinal muscle-mediated peristaltic esophageal shortening.
Methods: Seven volunteers and 11 patients with hiatal hernia had metal clips endoscopically affixed at the squamocolumnar junction and 3-5 cm proximal to it (n = 11). Location of the lower esophageal sphincter and axial clip movement were assessed using concurrent manometry and videofluoroscopy during barium swallows in a supine and upright posture with and without abdominal compression. Results: Three subject groups were defined by the proximity of the squamocolumnar junction to the diaphragmatic hiatus: group 1, ≤0 cm; group 2, between 0 and 2 cm; and group 3, ≥2 cm. Peristaltic esophageal shortening was progressively diminished, re-elongation progressively prolonged, and the degree of contraction observed in the distal esophageal segment reduced with progressive degree of hiatus hernia. There was minimal mobility of the squamocolumnar junction relative to the hiatus with posture or abdominal compression. Conclusions: Longitudinal muscle contraction during peristalsis normally causes transient elevation of the squamocolumnar junction above the diaphragm. Esophageal shortening during primary peristalsis is reduced with increasing degree of hiatus hernia, suggesting that there is diminished opposition of longitudinal muscle contraction from the phrenoesophageal attachments. abstract_id: PUBMED:20949704 Correlation between pathological distal esophageal acid exposure and ineffective esophageal motility. Aim: To assess the correlation between esophageal dysmotility, characterized as ineffective esophageal motility (IEM), and the presence of pathological acid reflux due to a structurally defective lower esophageal sphincter (LES), hiatus hernia (HH), or esophagitis in patients with suspected gastroesophageal reflux disease (GERD). Methods: 311 patients referred for GERD diagnostic procedures in a gastroesophageal motility laboratory were included in the study. Patients underwent an interview regarding their clinical symptoms, upper endoscopy (UE), stationary esophageal manometry, and 24-h esophageal pH-metry. The following risk factors of patients in the negative pH-metry group were compared to those in the positive pH-metry group: IEM, defective LES, HH, and esophagitis. The association between IEM and positive pH-metry results was first assessed by means of univariate analysis and later determined with multivariate logistic regression analysis. Results: Out of the total of 311 studied patients, 208 met the inclusion criteria; 88 had normal and 120 had positive pH-metry results. Univariate analysis revealed that the occurrence of IEM, defective LES, and HH was significantly greater in the positive pH-metry group. Following logistic regression analysis, the occurrence of IEM remained significantly greater in the positive pH-metry group. Conclusions: IEM is associated with the presence of abnormal acid reflux, as assessed by 24-h esophageal pH-metry, regardless of the presence of defective LES, HH, or esophagitis. abstract_id: PUBMED:32405907 Bougie-Induced Esophageal Perforation During Laparoscopic Roux-en-Y Gastric Bypass Surgery with Hiatus Hernia Repair. Introduction: Use of a bougie often helps in the calibration of the gastric pouch and in assessing proper closure of the hiatus. Bougie-induced esophageal perforation during surgery is uncommon. We encountered a case of bougie-induced lower esophageal perforation while introducing it across the gastroesophageal junction. Here we discuss the cause, mechanism of perforation, and its management.
Material And Methods: A patient with a BMI of 46.7 kg/m² was scheduled for Laparoscopic Roux-en-Y Gastric Bypass with Hiatus Hernia Repair. A cruroplasty was performed using interrupted non-absorbable sutures. Bougie intubation across the gastroesophageal junction was unsuccessful after closure of the hiatus. Incorporation of the esophageal wall during cruroplasty or a too-tight hiatus was suspected. The crural approximation suture was removed and the esophagus was inspected, which showed a posterior perforation at the distal esophagus. Primary closure was done with full-thickness interrupted 3-0 absorbable sutures. A Jackson-Pratt drain was placed close to the hiatus. Result: We kept the patient nil per os for two days. On the fourth postoperative day, the patient had recovered uneventfully and was discharged after drain removal. Conclusion: Bougie-induced perforation is uncommon and occurs after improper manipulation of the bougie across the gastroesophageal junction. Acute esophageal angulation and over-thrusting of the bougie against a closed hiatus can contribute to posterior esophageal perforation. Inspection of the esophagus above the hiatus is essential to avoid a missed perforation. abstract_id: PUBMED:17525090 Intraluminal acid induces oesophageal shortening via capsaicin-sensitive neurokinin neurons. Objective: Intraluminal acid evokes reflex contraction of oesophageal longitudinal smooth muscle (LSM) and consequent oesophageal shortening. This reflex may play a role in the pathophysiology of oesophageal pain syndromes and hiatus hernia formation. The aim of the current study was to elucidate further the mechanisms of acid-induced oesophageal shortening. Design: Intraluminal acid perfusion of the intact opossum smooth muscle oesophagus was performed in vitro in the presence and absence of neural blockade and pharmacological antagonism of the neurokinin 2 receptor, while continuously recording changes in oesophageal axial length. In addition, the effect of these antagonists on the contractile response of LSM strips to the mast cell degranulating agent 48/80 was determined. Finally, immunohistochemistry was performed to look for evidence of LSM innervation by substance P/calcitonin gene-related peptide (CGRP)-containing axons. Results: Intraluminal acid perfusion induced longitudinal axis shortening that was completely abolished by capsaicin desensitization, substance P desensitization, or the application of the neurokinin 2 receptor antagonist MEN10376. Compound 48/80 induced sustained contraction of LSM strips in a concentration-dependent fashion and this was associated with evidence of mast cell degranulation. The 48/80-induced LSM contraction was antagonized by capsaicin desensitization, substance P desensitization and MEN10376, but not tetrodotoxin. Immunohistochemistry revealed numerous substance P/CGRP-containing neurons innervating the LSM and within the mucosa. Conclusions: This study suggests that luminal acid activates a reflex pathway involving mast cell degranulation, activation of capsaicin-sensitive afferent neurons and the release of substance P or a related neurokinin, which evokes sustained contraction of the oesophageal LSM. This pathway may be a target for treatment of oesophageal pain syndromes. abstract_id: PUBMED:25443048 Frequency of abnormal esophageal acid exposure in patients eligible for bariatric surgery. Background: Obesity and gastroesophageal reflux disease (GERD) are both high-prevalence diseases in developed nations.
Obesity has been identified as an important risk factor in the development of GERD. The objective of this study was to determine the frequency of abnormal esophageal acid exposure in patients who are candidates for bariatric surgery and its relationship with any clinical and endoscopic findings before surgery. Methods: Data collected from a group of 88 patients awaiting bariatric surgery included a series of demographic variables and symptoms typical of GERD. The tests patients underwent included manometry, pH monitoring, and upper gastrointestinal endoscopy. Univariate and multivariate analyses were conducted on the variables related to the onset of reflux. Results: Esophageal pH monitoring tests were positive in 65% of the patients. Manometries showed lower esophageal sphincter hypotonia in 46%, while 20% returned abnormal upper endoscopy results. Of the 45% of patients who were asymptomatic or returned normal endoscopies, half returned positive esophageal pH tests. In turn, among the 55% of patients who had symptoms or an abnormal upper endoscopy, three quarters had pH tests that diagnosed reflux. pH tests were also positive in 80% of symptomatic patients and 100% of patients with esophagitis (P<.042). No statistically significant relationship was found between body mass index, sex, age, manometry, or hiatus hernia and positive pH monitoring. Conclusion: The frequency of abnormal esophageal acid exposure among obese patients is high. There is a relationship between the presence of symptoms and reflux, but the absence of symptoms does not rule out abnormal esophageal function tests. Answer: The relationship between acid-induced esophageal shortening and the development of hiatus hernia has been explored in both animal models and human studies. In an opossum model, it was found that acute acid-induced esophageal mucosal injury led to significant esophageal shortening, which did not involve vagal pathways or cholinergic neurons. This raised the possibility that esophagitis could contribute to the development of hiatal hernia by inducing esophageal long axis shortening (PUBMED:7958685). Similarly, a study in humans demonstrated that intraluminal acid perfusion caused longitudinal axis shortening of the esophagus, suggesting that gastroesophageal acid reflux may contribute to the cause of hiatus hernia (PUBMED:11111106). Further research in patients with achalasia, a condition characterized by impaired esophageal motility, showed that swallow-related axial shortening of the esophagus is reduced, and there is no separation between the lower esophageal sphincter (LES) and crural diaphragm (CD) with swallowing. This lack of sliding between the LES and CD explains the low prevalence of hiatus hernia in achalasia esophagus (PUBMED:37581423). Additional studies have found that excessive esophageal acid exposure, often seen in gastroesophageal reflux disease (GERD), is associated with esophageal motility disorders, esophagitis, and morphological changes at the gastroesophageal junction, including hiatus hernia (PUBMED:26192789, PUBMED:31826568). Moreover, esophageal shortening during peristalsis is reduced with increasing degree of hiatus hernia, suggesting diminished opposition of longitudinal muscle contraction from the phrenoesophageal attachments (PUBMED:7498646). In conclusion, there is evidence to suggest that acid-induced esophageal shortening in humans may be a contributing factor to the development of hiatus hernia.
This relationship is supported by findings that show esophageal shortening following acid exposure, as well as associations between acid reflux, esophageal motility disorders, and hiatus hernia.
Instruction: Sinonasal manifestations of cystic fibrosis: a correlation between genotype and phenotype? Abstracts: abstract_id: PUBMED:24210900 Sinonasal manifestations of cystic fibrosis: a correlation between genotype and phenotype? Background: Patients with Cystic Fibrosis are prone to developing sinonasal disease. Studies of genotype-phenotype correlations for sinonasal disease are scarce and inconclusive. Methods: In this observational study several aspects of sinonasal disease were investigated in 104 adult patients with CF. In each patient a disease-specific quality of life questionnaire (RSOM-31), nasal endoscopy and a CT scan of the paranasal sinuses were performed. Patients were divided into two groups, class I-III mutations and class IV-V mutations, based on their CFTR mutations. Results: The prevalence of rhinosinusitis in adult patients with CF was 63% and the prevalence of nasal polyps 25%. Patients with class I-III mutations had significantly smaller frontal and sphenoid sinuses, more opacification in the sinonasal area and more often osteitis/neoosteogenesis of the maxillary sinus wall compared to patients with class IV and V mutations. Conclusion: These data suggest more severe sinonasal disease in patients with class I-III mutations compared to patients with class IV-V mutations. abstract_id: PUBMED:14671928 Cystic fibrosis: relationship between genotype and phenotype Cystic fibrosis (CF) is one of the most common lethal autosomal recessive diseases among the Caucasian population. It is caused by defects in the Cystic Fibrosis Transmembrane Conductance Regulator (CFTR) gene. More than 1000 different CF mutations have been described. This large heterogeneity of mutations could explain in part the great variability of clinical expression of the disease. However, the severity of the lung disease is very different among patients with similar CFTR genotypes, even from the same family. These discrepancies in phenotypes within patients of the same genotype suggest the influence of nongenetic environmental factors and genetic modifiers outside the CF locus. abstract_id: PUBMED:10669528 Strategies for analyzing genotype-phenotype relationships in asthma. Asthma is a genetically complex disease with a multifaceted phenotype. Different approaches, including population-based and family-based methods for evaluating genotype-phenotype relationships in asthma, are discussed, as well as the problems that may obscure these determinations. Examples of similar efforts in cystic fibrosis and breast cancer are considered in addition to interaction between causative genes and etiologically relevant environmental exposure. abstract_id: PUBMED:10872417 Genotype-phenotype relationships in cystic fibrosis. The genotype-phenotype relationship in CF is complex despite its being a monogenic disorder. Factors that contribute to variability among individuals with the same genotype are an area of intense study. Nevertheless, certain conclusions can be derived from these studies. First, mutations in both CFTR alleles cause the CF phenotype. Homozygosity for delta F508 or compound heterozygosity for delta F508 and another severe mutation (e.g., G551D, W1282X) cause classic CF: obstructive pulmonary disease, exocrine pancreatic deficiency, male infertility, and elevated sweat chloride concentrations. Clinical variability is observed among patients with the classic form of CF, especially with regard to the severity of lung disease.
Although understanding of the role of other genes and environment in the development of lung disease is incomplete, evidence that other factors are important raises the possibility that therapeutic intervention may be possible at several levels. Second, genotype correlates more closely with certain features of the CF phenotype than others. Mutations that allow partial function of CFTR are often associated with pancreatic sufficiency, occasionally identified with normal sweat gland function, and sporadically correlated with mild lung disease. Partially functioning mutants rarely prevent maldevelopment of the male reproductive tract; an exception is 3849 + 10 Kb C-->T. These observations suggest that certain tissues require different levels of CFTR function to avoid the pathologic manifestations typical of CF. The genetic cause of several disorders that clinically overlap CF can be attributed, in part, to mutations in CFTR. Finally, molecular analysis of disease-associated mutations identified through genotype-phenotype studies provides a mechanistic framework for genotype-based therapeutic approaches and pharmaceutical interventions. abstract_id: PUBMED:24791677 Effect of the F508del genotype on outcomes of endoscopic sinus surgery in children with cystic fibrosis. Objective: Numerous authors have sought to describe genotype-phenotype correlations in cystic fibrosis (CF), notably for pancreatic insufficiency and lung disease. However, few studies have focused on the association between the F508del genotype and response to sinus surgery. The objective of this study is to assess the effect of the F508del genotype on sinonasal disease severity and outcomes following functional endoscopic sinus surgery (FESS) in a pediatric population. Methods: A retrospective chart review of 153 children with CF seen at a tertiary care pediatric hospital from 1995 to 2008 was performed. Patients were classified into one of three groups according to F508del genotype: homozygous, heterozygous, or not carrying an F508del mutation. The sinonasal disease phenotype of the three groups was compared based on clinical and radiological findings, extent of endoscopic sinus surgery and rate of revision surgery. Results: The relationship between the F508del genotype and pancreatic insufficiency was confirmed (p<0.05). There was no association between the F508del genotype and increased need for FESS (p=0.75). Moreover, no association was established between F508del homozygosity and presence of nasal polyps, Lund-Mackay score, extent of surgery or length of postoperative hospitalization. The rates of revision surgery did not differ significantly among the three genotypes analyzed (p=0.59). Conclusion: There is no clear association between the F508del genotype and an increased need for FESS, extent of surgery, or revision surgery. Given the phenotypic variability of sinonasal disease in patients with CF, a prospective study is needed to better understand outcomes following FESS and the contribution of gene modifiers to this effect. abstract_id: PUBMED:8723677 Relationship between genotype and phenotype in monogenic diseases: relevance to polygenic diseases. Since the early descriptions of sickle cell anemia, it has been clear that genotype at a single locus rarely completely predicts phenotype. This paper reviews explanations for phenotypic variability in some monogenic diseases.
In cystic fibrosis, there is a strong correlation between genotype and pancreatic phenotype but only a weak association with respiratory phenotype, possibly due to differential inheritance of alleles at loci controlling susceptibility to respiratory infection. In addition, disease mutations have been shown to have a more or less severe effect, depending on other variation within the cystic fibrosis gene. In phenylketonuria, genotype at the phenylalanine hydroxylase locus appears to explain the biochemical phenotype, but not the intellectual status. There may be genetically determined variation in flux through the minor metabolic pathways for phenylalanine, influencing levels of alternative metabolites involved in mental development. Phenotypic discordance in sickle cell anemia and beta-thalassemia has been associated with the co-inheritance of genes for hereditary persistence of fetal hemoglobin. A mouse locus has been identified that influences tumour number in mice with the multiple intestinal neoplasia gene. Understanding of the genetic interactions that determine phenotype in apparently monogenic diseases should lead to clarification of the role of different genes in polygenic diseases with complex inheritance patterns, as well as enhancing the ability to predict the outcome of a disease mutation. abstract_id: PUBMED:8949420 Genotype-phenotype correlation in cystic fibrosis patients. Cystic fibrosis (CF) is the most common severe recessive disease in Caucasians. The gene responsible for the disease encodes a protein named cystic fibrosis transmembrane conductance regulator, which is predicted to function as a cAMP-regulated chloride channel. Reduced chloride secretion is at the basis of insufficient hydration of ductal mucus in the airways, reproductive system and pancreas, leading to progressive obstructive damage. The clinical expression of the disease is heterogeneous but most CF patients typically present with chronic obstructive lung disease, elevated electrolyte concentration in sweat and insufficient pancreatic exocrine function. Approximately 10-15% of CF patients have pancreatic sufficiency, 2 to 5% of patients have liver disease, and infertility is present in almost all adult males, due to congenital bilateral absence of the vas deferens, and frequently in females. To date, over 500 mutations have been identified in the CFTR gene. A genotype-phenotype correlation has been observed only with pancreatic status, while severity of lung involvement is not clearly related to the CFTR genotype, suggesting that this phenotype might be modulated by additional genetic or environmental factors; liver disease in general is not genetically determined. A large proportion of infertile but otherwise healthy men with congenital bilateral absence of the vas deferens carry a mutation in at least one of their CFTR genes. The data collected so far suggest that the relationship between the CFTR genotype and the clinical manifestations is multiple and complex. abstract_id: PUBMED:9363081 The relationship between genotype and phenotype in cystic fibrosis. Cystic fibrosis is characterized by a wide variability of clinical expression. The cloning of the cystic fibrosis transmembrane conductance regulator gene and the identification of its mutations has promoted extensive research into the association between genotype and phenotype.
Several studies showed that there are mutations, such as delta F508 (the most common mutation worldwide), that are associated with a severe phenotype: early age at diagnosis, pancreatic insufficiency, poor nutritional status, high incidence of meconium ileus, and high sweat chloride levels; lung disease, however, is variable. The milder mutation is dominant over the severe one, resulting in a milder phenotype. In vitro studies of cystic fibrosis transmembrane conductance regulator function suggested that different mutations cause different defects of protein production and function. Five mechanisms by which mutations disrupt cystic fibrosis transmembrane conductance regulator function have been suggested: class I mutations cause defective protein production, class II mutations are associated with defective protein processing, class III mutations are associated with defective regulation, class IV mutations are associated with defective conductance, and class V mutations affect the level of normal messenger RNA transcript and protein required for normal function. This class might include mutations affecting correct splicing of pre-messenger RNA transcripts, either by exon skipping or by inclusion of extra cryptic exons. abstract_id: PUBMED:12124743 Genotype-phenotype correlation in cystic fibrosis: the role of modifier genes. More than 1,000 mutations have been identified in the cystic fibrosis (CF) transmembrane regulator (CFTR) disease gene. The impact of these mutations on the protein and the wide spectrum of CF phenotypes prompted a series of genotype-phenotype correlation studies. The CFTR genotype is invariably correlated with pancreatic status: in about 85% of cases with pancreatic insufficiency and in about 15% of cases with pancreatic sufficiency. The correlations between the CFTR genotype and pulmonary, liver, and gastrointestinal expression are debatable. The heterogeneous phenotype in CF patients bearing the same genotype, or in homozygotes for nonsense mutations, implicated environmental and/or genetic factors in the disease. However, the discordant phenotype observed in CF siblings argued against a major role of environmental factors and suggested that genes other than CFTR modulate the CF phenotype. A locus that modulates gastrointestinal expression was identified in mice and subsequently in humans. By analyzing nine CF patients discordant for meconium ileus, we were able to show that this locus had a dominant effect. Moreover, in a collaborative study we found a higher rate of polymorphisms in beta-defensin genes 1 and 2 in CF patients and in controls. In another multicenter study, mutations in alpha-1 antitrypsin (A1AT) and mannose binding lectin genes were found to be independent risk factors for liver disease in CF patients. The body of evidence available suggests that the variegated CF phenotype results from complex interactions between numerous gene products. abstract_id: PUBMED:26678226 T2R38 genotype is correlated with sinonasal quality of life in homozygous ΔF508 cystic fibrosis patients. Background: Chronic rhinosinusitis (CRS) is very prevalent in the cystic fibrosis (CF) patient population, and leads to high morbidity and markedly decreased quality of life (QOL). Identification of genetic markers that contribute to CRS symptoms in these patients can allow for risk stratification and tailoring of medical and surgical treatments.
T2R38 is a bitter taste receptor expressed in the sinonasal tract, and nonfunctional alleles of this receptor have been implicated in treatment-refractory CRS in non-CF patients. The purpose of this study is to investigate the significance of T2R38 genotype in the variability of sinonasal QOL and CRS disease severity in a sample of CF patients. Methods: ΔF508 homozygous CF patients were recruited from the University of Pennsylvania Cystic Fibrosis Center and were genotyped for the TAS2R38 locus. To assess sinonasal symptom severity, the 22-item Sino-Nasal Outcome Test (SNOT-22) was administered to each patient. Additional demographic and medical history data were obtained at the time of patient enrollment. Results: A total of 49 ΔF508 homozygous CF patients aged 18 to 32 years were included in the final SNOT-22 score analysis. Individuals with 2 functional T2R38 alleles (PAV/PAV) had significantly lower SNOT-22 scores (n = 49, p < 0.05). On further breakdown of SNOT-22 subcategories, rhinologic symptoms specifically were less severe in PAV/PAV patients than in patients with other genotypes (n = 47, p < 0.05). Conclusion: Our investigation indicates that T2R38 genotype correlates both with SNOT-22 scores and rhinologic-specific QOL in ΔF508 homozygous CF patients. Answer: The relationship between genotype and phenotype in cystic fibrosis (CF), particularly regarding sinonasal manifestations, has been explored in several studies, with varying conclusions. One observational study found that adult CF patients with class I-III mutations exhibited more severe sinonasal disease compared to those with class IV-V mutations. This was evidenced by significantly smaller frontal and sphenoid sinuses, more opacification in the sinonasal area, and more frequent osteitis/neoosteogenesis of the maxillary sinus wall in patients with class I-III mutations (PUBMED:24210900). However, another study focusing on children with CF undergoing endoscopic sinus surgery (FESS) did not find a clear association between the F508del genotype and the need for FESS, extent of surgery, or revision surgery rates. This suggests that the F508del genotype may not significantly impact sinonasal disease severity or surgical outcomes in a pediatric population (PUBMED:24791677). The complexity of genotype-phenotype correlations in CF is further highlighted by the fact that the severity of lung disease can vary greatly among patients with similar CFTR genotypes, even within the same family. This indicates that non-genetic environmental factors and genetic modifiers outside the CF locus may influence the phenotype (PUBMED:14671928). Moreover, a study on ΔF508 homozygous CF patients found that the T2R38 genotype, a bitter taste receptor gene, correlated with sinonasal quality of life. Patients with two functional T2R38 alleles had significantly lower Sino-Nasal Outcome Test (SNOT-22) scores, suggesting less severe rhinologic symptoms (PUBMED:26678226). In summary, while there is evidence of genotype-phenotype correlations in CF-related sinonasal disease, the relationship is complex and may be influenced by other genetic factors and environmental variables. Some studies suggest a correlation between certain CFTR mutations and the severity of sinonasal manifestations (PUBMED:24210900), while others indicate that additional genetic modifiers, such as the T2R38 gene, may also play a role (PUBMED:26678226).
However, the F508del genotype alone may not be a reliable predictor of sinonasal disease severity or surgical outcomes in all cases (PUBMED:24791677).
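For readers who want to see the shape of the SNOT-22 comparison in the T2R38 study (PUBMED:26678226), a minimal sketch follows. The study reported only group-level significance (p < 0.05); the score vectors below are invented placeholders, and a nonparametric Mann-Whitney U test is one reasonable choice for this kind of ordinal questionnaire data.

```python
# Hypothetical illustration of a two-group SNOT-22 comparison such as the one
# reported for T2R38 genotypes (PUBMED:26678226). The score vectors are
# invented for demonstration; the study did not publish raw data.
from scipy.stats import mannwhitneyu

pav_pav_scores = [12, 18, 9, 22, 15, 11, 20, 14]     # assumed PAV/PAV group
other_genotypes = [30, 25, 41, 28, 35, 22, 38, 27]   # assumed AVI-carrying group

stat, p_value = mannwhitneyu(pav_pav_scores, other_genotypes, alternative="two-sided")
print(f"Mann-Whitney U = {stat}, p = {p_value:.4f}")  # lower scores = fewer symptoms
```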
Instruction: Mirror therapy in the rehabilitation of lower-limb amputation: are there any contraindications? Abstracts: abstract_id: PUBMED:23498652 Phantom limb pain syndrome: therapeutic approach using mirror therapy in a Geriatric Department The clinical use of mirror visual feedback was initially introduced to alleviate phantom pain by restoring motor function through plastic changes in the human primary motor cortex. It is a promising novel technique that gives a new perspective to neurological rehabilitation. Using this therapy, the mirror neuron system is activated, decreasing the activity of those systems that perceive protopathic pain and making somatosensory cortex reorganization possible. This paper reports the results of mirror therapy in three patients with phantom limb pain after recent lower limb amputation, showing its analgesic effects and its benefits as a comprehensive rehabilitation instrument for geriatric lower-limb amputees. abstract_id: PUBMED:37082804 Is mirror therapy associated with progressive muscle relaxation more effective than mirror therapy alone in reducing phantom limb pain in patients with lower limb amputation? Mirror therapy is a widely used treatment for phantom limb pain (PLP) relief in patients with limb loss. Less common is progressive muscle relaxation (PMR), used mostly in other medical conditions (psychological conditions, terminal cancer pain, etc.). The purpose of this study is to evaluate the efficacy of mirror therapy preceded by a PMR intervention compared with mirror therapy preceded by unguided generic relaxation in patients with lower limb amputation suffering from PLP. This pilot study was a single-blind, controlled, randomized trial. Thirty lower limb amputees suffering from PLP were recruited and randomly assigned to three groups, respectively undergoing a PMR-mirror therapy rehabilitative intervention, generic relaxation-mirror therapy, and conventional physiotherapy (ConvPT). Selected items from the Prosthesis Evaluation Questionnaire (PEQ) and the Brief Pain Inventory (BPI) were used to assess pain features at baseline and 1 week after the 3-week intervention. A decrease of about 65% was found in the rate and duration of PLP at the PEQ in PMR-mirror therapy, compared with generic relaxation-mirror therapy (about 30%) and ConvPT (about 6%). A decrease of about 90% in intensity (worst and average) of PLP in PMR-mirror therapy, compared with generic relaxation-mirror therapy (about 45%) and ConvPT (about 20%), was found at the BPI. We preliminarily concluded, albeit with limitations due to the small sample of patients, that mirror therapy can improve PLP when associated with PMR. Further studies are required to confirm that PMR could be an effective technique for more successful PLP management. abstract_id: PUBMED:21119317 Mirror therapy in the rehabilitation of lower-limb amputation: are there any contraindications? Objective: Mirror box therapy and its further development (immersive virtual reality) are used in pain therapy and in rehabilitation of people with amputation affected by phantom limb-related phenomena. They allow patients to view a reflection of their anatomical limb in the visual space occupied by their phantom limb. There are only limited reports of possible side effects. Design: We retrospectively reviewed the existence of side effects or adverse reactions in a group of 33 nonselected patients with phantom limb-related phenomena.
Results: Nineteen reported confusion and dizziness, 6 reported an unspecified sensation of irritation, and 4 refused to continue the treatment. Only 4 of the 33 patients did not have any complaints. Conclusions: Possible reasons for this large number of side effects could be the lack of patient selection and the fact that the mirror box therapy was paralleled by a conventional rehabilitation approach targeted at the use of a prosthesis. Warnings on the need to select patients with regard to their psychologic as well as clinical profile (including time from amputation and clinical setting), and on possible conflicting mechanisms between mirror box therapy and conventional therapies, are presented. abstract_id: PUBMED:32405482 Mirror foot with Trapezoidal dysplastic Tibia - A Case Report. Introduction: Pre-axial foot polydactyly has been termed mirror foot, otherwise known as diplopodia. Association with a hypoplastic trapezoid-shaped tibia makes the condition extremely uncommon. The limited literature available focuses mostly on early reconstruction of the deformity, which may not be feasible when presentation is late. Case Report: We present a 17-year-old girl with a mirror foot and a dysplastic trapezoid-shaped tibia. She had a mild equinovarus deformity of the right foot with three pre-axial extra digits representing a mirror foot. There was also gross shortening of the right leg with a 20° fixed flexion deformity of the knee, but she could walk around with limited disability. Her chief complaints were difficulty in outdoor ambulation and poor cosmesis of the foot. Limb reconstruction may not have a predictable outcome with such severe shortening, and she was also reluctant to undergo amputation. A course of therapy was given with the treatment objectives of improving the knee flexion deformity and quadriceps control. She was fitted with an extension prosthesis, which accommodated the deformed foot and also corrected the limb-length discrepancy. Conclusion: A mirror foot with dysplastic tibia presenting in adolescence may not benefit from surgical reconstruction. Functional rehabilitation is better than anatomic correction in cases of late presentation. abstract_id: PUBMED:24899347 The mirror neuron system in motor and sensory rehabilitation The discovery of the mirror neuron system has dramatically changed the study of motor control in neuroscience. The mirror neuron system provides a conceptual framework covering the motor as well as sensory aspects of motor control. Previous studies of motor control can be classified as studies of motor or sensory functions, and these two classes of studies appear to have advanced independently. In rehabilitation requiring motor learning, such as relearning movement after limb paresis, however, sensory feedback on motor output as well as the motor command itself are essential. During rehabilitation from chronic pain, motor exercise is one of the most effective treatments for pain caused by dysfunction in the sensory system. In rehabilitation where a total intervention unifying the motor and sensory aspects of motor control is important, learning through imitation, which is associated with the mirror neuron system, can be effective and suitable. In this paper, we introduce the clinical applications of imitated movement in rehabilitation from motor impairment after brain damage and from phantom limb pain after limb amputation.
abstract_id: PUBMED:35042760 Efficacy of mirror therapy and virtual reality therapy in alleviating phantom limb pain: a meta-analysis and systematic review. Introduction: Amputations result from trauma, war, conflict, vascular diseases and cancer. Phantom limb pain (PLP) is a potentially debilitating form of chronic pain affecting around 100 million amputees across the world. Mirror therapy and virtual reality (VR) are two commonly used treatments, and we evaluated their respective success rates. Methods: A meta-analysis and systematic review was undertaken to investigate the ability of mirror therapy and VR to reduce pain levels. A mean difference (MD) model was employed to compare group pain levels before and after treatment by aggregating results from numerous similar studies. The meta-analysis was conducted using RevMan (V.5.4) and expressed as the MD in visual analogue scale (VAS) score. Results: A total of 15 studies met our search criteria; they consisted of eight mirror therapy studies with 214 participants and seven VR studies with 86 participants, totalling 300 participants. Mean age ranged from 36 to 63 years; 77% of participants were male, and 61% were lower-limb amputees. Both treatments led to a VAS reduction (mirror therapy: mean VAS reduction 2.54, 95% CI 1.42 to 3.66; p<0.001; VR: 2.24, 95% CI 1.28 to 3.20; p<0.001). There was no statistically significant difference in pain alleviation between mirror therapy and VR (p=0.69). Conclusions: Mirror therapy and VR are equally efficacious in alleviating PLP; neither is more effective than the other. However, due to the small sample size and limited number of studies, factors that may influence treatment success, such as gender, cause of amputation, site of limb loss or length of time from amputation, could not be explored. abstract_id: PUBMED:19096290 Self-delivered home-based mirror therapy for lower limb phantom pain. Home-based patient-delivered mirror therapy is a promising approach in the treatment of phantom limb pain. Previous studies and case reports of mirror therapy have used a therapist-guided, structured protocol of exercises. No case report has described treatment for either upper or lower limb phantom pain using home-based patient-delivered mirror therapy. The success of this case demonstrates that home-based patient-delivered mirror therapy may be an efficacious, low-cost treatment option that would eliminate many traditional barriers to care. abstract_id: PUBMED:35642157 Randomized Controlled Trial of the Effects of Repetitive Transcranial Magnetic Stimulation and Mirror Therapy on Phantom Limb Pain in Amputees Objective: To investigate the effect of repetitive transcranial magnetic stimulation (rTMS) on phantom limb pain (PLP) in amputees, and to compare the therapeutic effect with that of mirror therapy (MT). Methods: The study was designed as a randomized controlled trial. The evaluators were blinded, while the subjects and the therapists were unblinded. Subjects were randomly assigned to either the rTMS group or the MT group with a computer-generated random number table. From June 2018 to December 2020, 45 amputee patients were screened and 30 who met the inclusion criteria were recruited for the study. All patients were recruited from the Rehabilitation Medicine Center, West China Hospital, Sichuan University. In the end, 4 patients withdrew from the study and 26 patients (12 in the rTMS group and 14 in the MT group) completed the prescribed treatment and evaluation.
The rTMS group was given rTMS (1 Hz, 15 min, 5 d/week) for 2 weeks in addition to conventional rehabilitation therapy, while the MT group received MT (corresponding limb movements, 15 min, 5 d/week) for 2 weeks in addition to conventional rehabilitation therapy. PLP was evaluated by the Visual Analogue Scale (VAS) and Douleur Neuropathique 4 Questions (DN-4). Subjects were assessed before treatment (t0), immediately after the completion of the treatment (t1) and 3 months after the completion of the treatment (t2). Results: The mean age of the 26 patients was 39.73±12.64 years. There were 15 males and 11 females. Based on the patients' descriptions, the most frequent PLP characteristics were, in descending order, tingling, stabbing, numbing, electric shocks and burning. There was no significant difference in the incidence of PLP characteristics between the two groups (P>0.05). The two groups had comparable baseline data, with no significant difference in VAS and DN-4 scores at t0 (P>0.05). At t1 and t2, the VAS and DN-4 scores were significantly decreased from t0 values in both groups (P<0.01 for both scores). In the rTMS group, there was no significant difference between VAS and DN-4 scores at t1 and those at t2 (P>0.05). In the MT group, the VAS and DN-4 scores at t2 were significantly lower than those at t1 (P<0.05). There was no statistically significant difference between the rTMS group and the MT group in the changes in pain measurements, i.e., VAS and DN-4 scores, before and after the intervention (P>0.05). The 26 patients who completed the experiment showed no dizziness, headache, or other abnormalities during the study. Conclusion: The results of this study indicate that repetitive transcranial magnetic stimulation could improve PLP in amputees, with an improvement comparable to that of mirror therapy. abstract_id: PUBMED:36809505 Assessment of efficiency of mirror therapy in preventing phantom limb pain in patients undergoing below-knee amputation surgery-a randomized clinical trial. Purpose And Objectives: Phantom limb pain (PLP) is a major cause of physical limitation and disability, affecting about 85% of amputees. Mirror therapy is used as a therapeutic modality for patients with phantom limb pain. The primary objective was to compare the incidence of PLP at 6 months following below-knee amputation between the mirror therapy group and the control group. Methods: Patients scheduled for below-knee amputation surgery were randomized into two groups. Patients allocated to group M received mirror therapy in the post-operative period. Two sessions of therapy were given per day for 7 days, and each session lasted 20 min. Patients who developed pain from the missing portion of the amputated limb were considered to have PLP. All patients were followed up for six months, and the time of occurrence of PLP and the intensity of the pain were recorded along with other demographic factors. Results: A total of 120 patients completed the study after recruitment. The demographic parameters were comparable between the two groups. The overall incidence of phantom limb pain was significantly higher in the control group (Group C) than in the mirror therapy group (Group M) [Group M = 7 (11.7%) vs Group C = 17 (28.3%); p = 0.022].
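The incidence comparison just reported (7/60 vs 17/60, p = 0.022) can be reproduced with a standard chi-square test on the 2x2 table; a minimal Python check is shown below. Without the continuity correction, the computed p-value matches the published one, which suggests (an assumption on our part) that an uncorrected test was used.

```python
# Reproduce the PLP incidence comparison from the pre-emptive mirror-therapy
# trial (PUBMED:36809505): 7/60 events in Group M vs 17/60 in Group C.
from scipy.stats import chi2_contingency

table = [[7, 53],    # Group M: PLP yes / PLP no
         [17, 43]]   # Group C: PLP yes / PLP no

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # p is approximately 0.022
```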
Intensity of PLP measured on the Numerical Rating Scale (NRS) was significantly lower at 3 months in Group M than in Group C among patients who developed PLP [NRS, median (interquartile range): Group M 5 (4-5) vs Group C 6 (5-6); p = 0.001]. Conclusion: Mirror therapy reduced the incidence of phantom limb pain when administered pre-emptively in patients undergoing amputation surgeries. The severity of the pain was also found to be lower at 3 months in patients who received pre-emptive mirror therapy. Trial Registration: This prospective study was registered in the clinical trial registry of India. Trial Registration Number: CTRI/2020/07/026488. abstract_id: PUBMED:23091690 Mirror therapy for phantom limb pain. Phantom limb pain is a painful sensation that is perceived in a body part that no longer exists. To control this pain, many methods have been used, such as medication, physical treatment, nerve block, neuromodulation, surgical treatment and mirror therapy. However, until now, their effects have been uncertain. We report the successful reduction of phantom limb pain using mirror therapy when other treatments initially failed to control the pain. Answer: Mirror therapy is a recognized intervention for the rehabilitation of lower limb amputation and has been shown to alleviate phantom limb pain (PLP) in several studies. However, there may be contraindications or side effects associated with its use.
Instruction: Catheter ablation of ventricular tachycardia and mortality in patients with nonischemic dilated cardiomyopathy: can noninducibility after ablation be a predictor for reduced mortality? Abstracts: abstract_id: PUBMED:25873719 Catheter ablation of ventricular tachycardia and mortality in patients with nonischemic dilated cardiomyopathy: can noninducibility after ablation be a predictor for reduced mortality? Background: Data on outcomes after catheter ablation of ventricular tachycardia (VT) in patients with nonischemic dilated cardiomyopathy (NIDCM) are insufficient. We aimed to investigate the effects of successful catheter ablation of VT on cardiac mortality in patients with NIDCM. Methods And Results: One hundred two patients with NIDCM (86 men; mean age, 58.8±15.2 years; mean ejection fraction, 33.3±11.9%) underwent VT ablation. After catheter ablation, a programmed ventricular stimulation to test for success was performed. Complete VT noninducibility was achieved in 62 (61%) patients and partial success or failure in 32 (31%) patients. During 2 years of follow-up, VT recurrence was observed in 33 patients (53%) without inducible VTs and in 24 patients (75%) with inducible VT inducible (P=0.041). VT inducibility was associated with higher VT recurrence (adjusted hazard ratio, 1.84; 95% confidence interval, 1.08-3.13; P=0.025). The primary end point of all-cause mortality was reached in 9 patients (15%) with noninducible VTs versus 11 patients (34%) with inducible sustained VTs (P=0.026). VT inducibility was associated with all-cause mortality (adjusted hazard ratio, 2.73; 95% confidence interval, 1.003-7.43; P=0.049). Conclusions: In patients with NIDCM and recurrent sustained VTs, a complete ablation of all inducible VTs may be achieved in 60% of the cases. The complete noninducibility may be a preferable end point of ablation because it was associated with better long-term success. Importantly, if possible to achieve through ablation, a complete VT noninducibility was associated with reduction of the likelihood for all-cause mortality in patients with NIDCM. abstract_id: PUBMED:27521538 Can ventricular tachycardia non-inducibility after ablation predict reduced ventricular tachycardia recurrence and mortality in patients with non-ischemic cardiomyopathy? A meta-analysis of twenty-four observational studies. Background: At present, the role of ventricular tachycardia (VT) non-inducibility after ablation in patients with non-ischemic cardiomyopathy (NICM) remains controversial. We conducted a meta-analysis of the published literature to assess whether VT non-inducibility after ablation could predict reduced VT recurrence and mortality in patients with NICM. Methods: PubMed, ScienceDirect, and the Cochrane library were searched for studies evaluating the effects of VT non-inducibility after catheter ablation on the long-term outcome in NICM patients with sustained VT. Results were analyzed using a fixed-effect model, and the data were pooled using RevMan 5.3 software. Results: Twenty-four observational studies were identified (736 participants, mean follow-up time: 22months). NICM patients with VT inducibility after ablation had a higher risk of VT recurrence (odds ratio [OR]=5.83, 95% confidence interval [CI] 4.07-8.37; P<0.00001) and all-cause mortality (OR=3.55, 95% CI 1.62-7.78; P=0.002) compared with VT non-inducibility. 
Similarly, in the subgroup analysis, patients with VT inducibility showed a higher risk of VT recurrence both in non-ischemic dilated cardiomyopathy (OR=3.92, 95% CI 2.36-6.50; P<0.00001) and in arrhythmogenic right ventricular dysplasia/cardiomyopathy (OR=5.37, 95% CI 2.20-13.10; P=0.0002). Additionally, the meta-analysis also showed that combined endo-epicardial ablation significantly reduced the risk of VT recurrence compared with endocardial-only ablation (OR=2.02, 95% CI 1.19-3.44; P=0.009; mean follow-up time: 22 months). Conclusion: Recent evidence has shown that VT non-inducibility after ablation is a predictor for reduced VT recurrence and mortality compared with VT inducibility in NICM patients with sustained VT. In addition, endocardial plus adjuvant epicardial ablation provides better long-term arrhythmia-free survival than endocardial ablation alone. abstract_id: PUBMED:28229198 Catheter ablation of ventricular tachycardia: What have we achieved? The role of catheter ablation in patients with ventricular tachycardia (VT) has evolved over the last two decades into an established treatment option. In patients with idiopathic VT, catheter ablation is the gold standard treatment option, with high effectiveness and a low risk of complications. Due to the high risk of side effects, the use of antiarrhythmic drugs is only indicated in exceptional cases. In patients with structural heart diseases, such as ischemic and dilated cardiomyopathy, VT is the most frequent cause of death. Furthermore, recurrent shocks from implantable cardioverter defibrillators (ICD) are one of the main reasons for the high morbidity and mortality; however, in these patients a complex myocardial substrate is present, and consequently there is a relevant risk of recurrence after VT ablation. A periprocedural mortality of approximately 3% must be considered in these often severely ill patients. Nevertheless, there is no reasonable alternative to catheter ablation, particularly in patients who continue to have VT episodes even under therapy with amiodarone. Questions with respect to the optimal procedural technique for VT ablation, the endpoint, and the optimal timing of ablation need to be clarified in clinical trials. abstract_id: PUBMED:25996358 Procedural Outcomes and Survival After Catheter Ablation of Ventricular Tachycardia in Relation to Electroanatomical Substrate in Patients With Nonischemic-Dilated Cardiomyopathy: The Role of Unipolar Voltage Mapping. Introduction: In patients with ischemic cardiomyopathy, the size of bipolar low-voltage areas (LVA) in electroanatomical maps (EAM) was associated with poorer outcomes after catheter ablation (CA) of ventricular tachycardia (VT). However, the effect of LVA size on survival after VT ablation in patients with nonischemic dilated cardiomyopathy (NIDCM) has not been studied. Methods And Results: In 55 patients with NIDCM (48 male, age 61 ± 16 years, ejection fraction 32 ± 13%), an EAM to delineate the bipolar and unipolar LVAs was performed in 52 (94.5%) patients endocardially, in 24 (43.6%) patients epicardially, and in 21 (38.2%) patients on both surfaces. Additionally, activation mapping of the VT was possible in 22 (40%) patients. CA with lines transecting the scar and targeting late potentials was performed in all patients. Complete VT noninducibility at the end was achieved in 40 (72.7%) patients. During the median follow-up of 22 (interquartile range [IQR] 6, 34) months, VT recurrences were observed in 30 (54.5%) and cardiac death in 14 (25.5%) patients.
The ROC analysis revealed that the size of the endocardial unipolar LVA (<8.3 mV) was associated with cardiac death (AUC 0.89, 95% CI 0.79-0.98, P<0.0001). A UVA of 145 cm2 discriminated for cardiac death with 83% sensitivity and 78% specificity. Endocardial UVA > 145 cm2 was a predictor for cardiac death (adjusted HR = 6.9; P = 0.014) and UVA ≥ 54% (of the total endocardial LV surface) for VT recurrence (adjusted HR = 3.5; P = 0.016). Conclusion: The size of the endocardial unipolar LVA (<8.3 mV) was a strong and independent predictor of cardiac mortality and VT recurrence in patients with NIDCM. abstract_id: PUBMED:21769783 Efficacy of catheter ablation in patients with an electrical storm. Background: Electrical storm (ES) is a life-threatening condition requiring prompt and effective therapy. This may be achieved by the use of catheter ablation. Aim: To assess the safety and efficacy of catheter ablation in patients with ES. Methods: We performed 28 ablation procedures due to ES in 24 patients (21 men, 3 women, aged 62.5 ± 7.8 years) from February 2006 to May 2010. Eighteen patients had a history of myocardial infarction, 2 had dilated cardiomyopathy, 2 hypertrophic cardiomyopathy (one also had myocardial infarction), 1 spongiform cardiomyopathy, 1 heart failure after aortic valve replacement, and 1 myocarditis. The mean ejection fraction was 27.3 ± 6.5% (range 15-40%). Procedures were performed using the CARTO system. Two patients also underwent epicardial mapping after the endocardial map; one of these patients underwent epicardial cryoablation and the other radiofrequency catheter ablation. Results: During the follow-up period of 27.8 ± 15.9 months, 16 (66%) patients had no ventricular tachycardia (VT)/ventricular fibrillation (VF) episodes. Sporadic VT episodes were observed in 3 patients. Recurrence of ES occurred in 3 (12%) patients, and 3 (12%) patients died during the follow-up due to the progression of heart failure. Conclusions: 1. Ablation of ventricular arrhythmias in the course of ES in patients with organic heart disease is safe and effective, and probably improves their prognosis. 2. After ablation, some patients have adequate interventions of the implantable cardioverter-defibrillator due to progression of the disease. 3. The method does not prevent death from haemodynamic causes. abstract_id: PUBMED:35887764 Long-Term Outcomes after Catheter Ablation of Ventricular Tachycardia in Dilated vs. Ischemic Cardiomyopathy. Ischemic (ICM) and dilated cardiomyopathy (DCM) represent the two main underlying heart diseases in patients referred for catheter ablation of ventricular tachycardia (VT). While VT ablation in ischemic cardiomyopathy is relatively well-studied, data in patients with DCM are still scarce. The study aimed to compare the acute and long-term outcomes in patients with ICM and DCM who underwent VT ablation at a high-volume center. Consecutive patients who underwent VT ablation from April 2018 to April 2021 were included retrospectively. Patients with ischemic cardiomyopathy were compared to those with dilated cardiomyopathy. The primary endpoint was the rate of VT recurrence; the secondary endpoints included overall mortality, rehospitalization because of a cardiac condition (VT, acute heart failure, acute myocardial infarction, heart transplantation or implantation of a left ventricular assist device), and major adverse cardiac events (MACE) at long-term follow-up. A total of 225 patients admitted for first VT ablation were included.
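The 83%/78% operating point quoted above for the 145 cm2 unipolar low-voltage-area cutoff is the usual fixed-threshold rule applied to a continuous marker. The sketch below shows the computation on invented (uva_cm2, cardiac_death) pairs; none of the values are from the study.

```python
# Apply a fixed threshold to a continuous marker and compute sensitivity and
# specificity, mirroring the UVA > 145 cm2 cutoff analysis (PUBMED:25996358).
# The patient tuples are invented for illustration only.
patients = [(210, True), (180, True), (150, True), (120, True), (200, True),
            (160, False), (130, False), (100, False), (90, False), (80, False)]

THRESHOLD = 145.0
tp = sum(uva > THRESHOLD and died for uva, died in patients)
fn = sum(uva <= THRESHOLD and died for uva, died in patients)
tn = sum(uva <= THRESHOLD and not died for uva, died in patients)
fp = sum(uva > THRESHOLD and not died for uva, died in patients)

print(f"sensitivity = {tp / (tp + fn):.2f}")  # deaths correctly flagged by the cutoff
print(f"specificity = {tn / (tn + fp):.2f}")  # survivors correctly cleared
```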
A total of 156 patients (69%) had ICM and 69 (31%) DCM. After a mean follow-up of 22 months, the primary endpoint of VT recurrence occurred significantly more often in the patients with dilated cardiomyopathy (ICM n = 47; 37% vs. DCM n = 34; 64%; p = 0.001). In regard to the secondary endpoint of overall mortality, there was no difference between the two patient cohorts (DCM n = 9; 15% vs. ICM n = 22; 16%; p = 0.677); however, the patients with DCM showed significantly higher rehospitalization rates due to cardiac conditions (75% vs. 59%; p = 0.038) and more frequent MACE (68% vs. 52%; p = 0.036). In a Cox regression model, electrical storm at admission was shown to be a predictor of VT recurrence after successful catheter ablation (HR = 1.942; 95% CI 1.237-3.050; p = 0.004), while the ablation of every induced VT morphology during the procedure (HR = 0.522; 95% CI 0.307-0.885; p = 0.016) contributed to a positive long-term outcome. DCM is associated with a higher risk of VT recurrence after catheter ablation compared to ICM. Furthermore, patients with DCM are more frequently rehospitalized, in the majority of cases due to VT recurrence. There is no difference in long-term mortality between the two cohorts. abstract_id: PUBMED:21416824 Radiofrequency catheter ablation of ventricular tachycardia in patients with dilated cardiomyopathy The incidence of drug-refractory ventricular tachycardia recurrences in patients with dilated cardiomyopathy can be significantly reduced by radiofrequency catheter ablation. The recent introduction of three-dimensional electroanatomical mapping systems has improved the success rate and extended the indication to patients with unstable and/or multiple ventricular tachycardias. The present article summarizes the indications, the mapping techniques and the most recent results of catheter ablation of ventricular tachycardias in the setting of dilated cardiomyopathy. abstract_id: PUBMED:31707389 Hybrid Ventricular Tachycardia Ablation after Failed Percutaneous Endocardial and Epicardial Ablation. Introduction: Recurrent ventricular tachycardia (VT) after percutaneous ablation is associated with high morbidity and mortality. We assessed the feasibility of open chest extracorporeal circulation (ECC)-supported 3D multielectrode mapping and targeted VT substrate ablation in patients with previously failed percutaneous endocardial and epicardial VT ablations. Methods: In patients with previously failed percutaneous endocardial and epicardial VT ablations and a high risk of hemodynamic collapse during the procedure, open chest ECC-supported mapping and ablation were performed in a hybrid EP lab setting. Three-dimensional electroanatomic maps were acquired during sinus rhythm and VT using a multielectrode mapping catheter (HD Grid, Abbott, or Pentaray, Biosense Webster). Irrigated radiofrequency ablation of all inducible VTs was performed with a contact-force ablation catheter. Results: Hybrid VT ablation was performed in 5 patients with structural heart disease (3 with previous myocardial infarction and 2 with nonischemic cardiomyopathy) and recurrent VT. Acute procedural success was achieved in all patients. Four patients were successfully weaned off the ECC. In 1 patient with a severely reduced LVEF (16%), damage to a venous graft occurred after sternotomy, and that patient died after 1 month. Four patients (80%) remained VT free after a median follow-up of 6 (IQR 4-10) months.
Conclusion: In high-risk patients with previously failed percutaneous endocardial and epicardial VT ablations, open chest ECC-supported multielectrode epicardial mapping revealed a VT substrate in all of the patients, and targeted epicardial ablation abolished the VT substrate in these patients. abstract_id: PUBMED:9064984 Methods of catheter ablation in ventricular tachyarrhythmias Catheter ablation has evolved as the therapy of choice in patients with atrioventricular nodal reentrant tachycardia or atrioventricular tachycardia involving accessory pathways. Radiofrequency current catheter ablation is now generally accepted and most frequently used. In patients with ventricular tachycardia and no organic heart disease (idiopathic ventricular tachycardia), the success rates range between 80-90%. In addition, in patients with bundle branch reentrant tachycardia, success rates of up to 100% are described. The usefulness of radiofrequency ablation in patients with structural heart disease is still controversial. Although high acute success rates are observed in patients with incessant ventricular tachycardia, the results in post-myocardial infarction tachycardia or dilated cardiomyopathy range only between 40 and 75%. Further refinement of localization techniques as well as improvement of ablation technology is mandatory before the routine use of this technique as a curative procedure for ventricular tachycardia can be established. abstract_id: PUBMED:35430753 Catheter ablation of ventricular tachycardia in dilated-phase hypertrophic cardiomyopathy: Substrate characterization and ablation outcome. Introduction: Catheter ablation is a therapeutic option to suppress ventricular tachycardia (VT) in the setting of dilated-phase hypertrophic cardiomyopathy (DHCM). However, the characteristics of the arrhythmogenic substrate and the ablation outcome are not fully illustrated. Method: A total of 23 ablation procedures for drug-refractory sustained monomorphic VTs were performed in 13 DHCM patients (age 60 ± 11 years, one female, left ventricular [LV] ejection fraction 39% ± 9%, LV mass index 156 ± 39 g/m2). The distribution of the VT substrate as endocardial or epicardial/intramural was based on detailed mapping and the ablation response during VT. Result: Two patients underwent ablation of sustained monomorphic VT that was not scar-mediated tachycardia. Of the remaining 11 patients, eight (73%) had VT substrate in the basal regions, most frequently at the epicardial and/or intramural basal antero-septum. None of the patients had VT substrate located at the LV inferolateral region. Ablation at the right ventricular septum and the aortic cusps was done in four and five patients, respectively. Other approaches, including bipolar and chemical ablation, were used in three and two patients, respectively. Six (55%) of the 11 patients (two patients were lost to follow-up) had VT recurrence. All six patients had basal substrate. However, anti-tachycardia pacing was sufficient for VT termination except in one patient. Conclusion: Catheter ablation of VT in patients with DHCM is challenging because of the predominant basal anteroseptal epicardial/intramural location of the arrhythmogenic substrate. An ablation approach from multiple sites and/or adjunctive interventional techniques are often required. Answer: Yes, noninducibility after catheter ablation of ventricular tachycardia (VT) can be a predictor for reduced mortality in patients with nonischemic dilated cardiomyopathy (NIDCM).
A study found that complete VT noninducibility was achieved in 61% of patients with NIDCM undergoing VT ablation, and this noninducibility was associated with a reduction in the likelihood of all-cause mortality during a 2-year follow-up period (PUBMED:25873719). Furthermore, a meta-analysis of twenty-four observational studies, which included 736 participants, supported the finding that VT non-inducibility after ablation is a predictor for reduced VT recurrence and mortality compared with VT inducibility in patients with non-ischemic cardiomyopathy (NICM) (PUBMED:27521538). These findings suggest that achieving complete noninducibility of VT after catheter ablation is a desirable endpoint that correlates with better long-term outcomes in patients with NIDCM.
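The adjusted hazard ratios cited in this answer come from Cox proportional-hazards models. The sketch below shows the general modeling pattern using the lifelines package; the tiny data frame is synthetic and only illustrates the call structure, not the study cohorts.

```python
# Sketch of a Cox proportional-hazards analysis of the kind behind the
# reported hazard ratios for post-ablation VT inducibility (PUBMED:25873719).
# The data frame is a synthetic stand-in, not patient data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months_followup":   [24, 6, 18, 24, 12, 3, 24, 9, 20, 15],
    "died":              [0, 1, 0, 0, 1, 1, 0, 1, 0, 1],
    "vt_inducible":      [0, 1, 0, 0, 1, 1, 0, 1, 1, 0],   # post-ablation status
    "ejection_fraction": [40, 25, 35, 45, 30, 20, 38, 28, 33, 31],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_followup", event_col="died")
cph.print_summary()  # the exp(coef) column is the hazard ratio per covariate
```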
Instruction: Assessment of termination of trauma resuscitation guidelines: are children small adults? Abstracts: abstract_id: PUBMED:20438923 Assessment of termination of trauma resuscitation guidelines: are children small adults? Background: Guidelines for termination of resuscitation in prehospital traumatic cardiopulmonary arrest (TCPA) have recently been published for adults. Clinical criteria for termination of care include absent pulse, unorganized electrocardiogram (ECG), fixed pupils (all at the scene), and cardiopulmonary resuscitation (CPR) for greater than 15 minutes. The goal of this study was to evaluate these guidelines in a pediatric trauma population. Methods: Pediatric trauma patients with documented arrest were included in the study. Data assessed were duration of CPR, ECG rhythm, pulse assessment, pupil response, transport times, and standard injury criteria (e.g., mechanism of injury). Survivors were compared to nonsurvivors using descriptive statistics, chi-square tests, and Pearson correlation. Results: Between 2000 and 2009, 30 patients were identified as having had a TCPA. Of the 30 with a prehospital TCPA, there were 9 females and 21 males (0.2-18 years old). The average (SD) injury severity score was 35.4 (20.6). Twenty-four patients (80%) did not survive. Severe traumatic brain injury was present in 78% of nonsurvivors. One-way analysis of variance demonstrated that CPR for greater than 15 minutes (P = .011) and fixed pupils (P = .022) were significant variables distinguishing survivors from nonsurvivors, whereas ECG rhythm (P = .34) and absent pulse (P = .056) were not (mean CPR duration: 42 +/- 28 minutes for nonsurvivors vs 7 +/- 3 minutes for survivors). Conclusion: The criteria for termination of resuscitation correctly predicted 100% of those who died when all the criteria were met. More importantly, no survivors would have had resuscitation stopped. Duration of CPR seems to be a strong predictor of mortality in this study. abstract_id: PUBMED:34555488 Applying a set of termination of resuscitation criteria to paediatric out-of-hospital cardiac arrest. Objective: Prehospital Termination of Resuscitation (TOR) protocols for adults can reduce the number of futile transports of patients in cardiac arrest, yet similar protocols are not widely available for paediatric out-of-hospital cardiac arrest (POHCA). The objective of this study was to apply a set of criteria for paediatric TOR (pTOR) from the Maryland Institute for Emergency Medical Services Systems (MIEMSS) to a large national cohort and determine its association with return of spontaneous circulation (ROSC) after POHCA. Methods: We identified patients ages 0-17 treated by Emergency Medical Services (EMS) for cardiac arrest in 2019 from the ESO dataset and applied the applicable pTOR criteria for medical or traumatic arrests. We calculated predictive test characteristics for the outcome of prehospital ROSC, stratified by medical and traumatic cause of arrest. Results: We analyzed records for 1595 POHCA patients. Eighty-eight percent (n = 1395) were classified as medical. ROSC rates were 23% among medical POHCA and 27% among traumatic POHCA. The medical criteria correctly classified >99% (322/323) of patients who achieved ROSC as ineligible for TOR. The trauma criteria correctly classified 93% (50/54) of patients with ROSC as ineligible for TOR. Of the five misclassified patients, three were involved in drowning incidents.
Conclusions: The Maryland pTOR criteria identified eligible patients who did not achieve prehospital ROSC, while reliably excluding those who did achieve prehospital ROSC. As most misclassified patients were victims of drowning, we recommend considering the exclusion of drowning patients from future pTOR guidelines. Further studies are needed to evaluate the long-term survival and neurologic outcome of patients misclassified by pTOR criteria. abstract_id: PUBMED:27927227 Adult "termination-of-resuscitation" (TOR)-criteria may not be suitable for children - a retrospective analysis. Background: Only a small number of patients survive out-of-hospital cardiac arrest (OHCA). The duration of CPR varies considerably, and transportation of patients under CPR is often unsuccessful. Termination-of-resuscitation (TOR) criteria aim to preclude futile resuscitation efforts. Our goal was to find out to what extent existing TOR criteria can be transferred to paediatric OHCA patients, with special regard to their prognostic value. Methods: We performed a retrospective analysis of an eleven-year single-centre patient cohort. 43 paediatric patients admitted to our institution after emergency-medical-system (EMS)-confirmed OHCA from 2003 to 2013 were included. Morrison's BLS- and ALS-TOR rules as well as the Trauma-TOR criteria of the American Association of EMS Physicians were evaluated for application in children by calculating the sensitivity, specificity, and negative and positive predictive values for the prediction of death and survival in our cohort. Results: 26 patients achieved ROSC and 14 were discharged alive (n = 7 PCPC 1/2, n = 7 PCPC 5). Sensitivity of the BLS-TOR criteria for predicting death was 48.3%, specificity 92.9%, the PPV 93.3% and the NPV 46.4%. The ALS-TOR criteria for death had a sensitivity of 10.3%, specificity of 100%, a PPV of 100% and an NPV of 35%. Conclusion: Retrospective application of the BLS-TOR rule in our patient cohort identified the resuscitation of one later survivor as futile. The ALS-TOR criteria did not give false predictions of death. The proportion of CPRs that could have been abandoned is 48.2% for the BLS-TOR rule and only 10.3% for the ALS-TOR rule. Both rules therefore appear not to be transferable to a paediatric population. abstract_id: PUBMED:28392701 Evaluation of the clinical effect of small-volume resuscitation on uncontrolled hemorrhagic shock in emergency. Objective: The objective of the present study was to explore the resuscitative effect of small-volume resuscitation on uncontrolled hemorrhagic shock in the emergency setting. Methods: In this study, the resuscitative effects in 200 trauma patients with uncontrolled hemorrhagic shock in the emergency setting were studied. Half of these patients were infused with hypertonic/hyperoncotic fluid (small-volume resuscitation group, n=100), whereas the rest were infused with Hespan and lactated Ringer's solution (conventional fluid resuscitation group, n=100). The changes in hemodynamics, coagulation function, blood biochemistry, blood hematology, and the average infusion volume in both groups were comparatively studied. Results: It was found that hemodynamics improved in both groups after resuscitation.
Interestingly, compared with trauma patients infused with Hespan and lactated Ringer's solution, the growth rate, range, and time duration of the mean arterial pressure of the patients in the small-volume resuscitation group increased significantly, and the shock index decreased progressively; in the 60th min after the resuscitation, blood indices including hemoglobin, hematocrit, red blood cells, white blood cells, and platelets declined, whereas prothrombin time and activated partial thromboplastin time were prolonged in both the groups, but these changes were less obvious in the small-volume group. In addition, the average infusion volume of patients in the small-volume group was less than that of patients in the conventional fluid resuscitation group. Conclusion: Characterized by a small infusion volume and less influence on coagulation function and the body's homeostasis, small-volume resuscitation provides a significantly better resuscitative effect. Therefore, trauma patients may have a better chance of maintaining hemodynamic stability, and survival rate or recovery speed may increase, when traditional aggressive fluid resuscitation is replaced by small-volume resuscitation. abstract_id: PUBMED:24089117 Withholding and termination of resuscitation of adult cardiopulmonary arrest secondary to trauma: resource document to the joint NAEMSP-ACSCOT position statements. In the setting of traumatic cardiopulmonary arrest, protocols that direct emergency medical service (EMS) providers to withhold or terminate resuscitation, when clinically indicated, have the potential to decrease unnecessary use of warning lights and sirens and save valuable public health resources. Protocols to withhold resuscitation should be based on the determination that there are no obvious signs of life, the injuries are obviously incompatible with life, there is evidence of prolonged arrest, and there is a lack of organized electrocardiographic activity. Termination of resuscitation is indicated when there are no signs of life and no return of spontaneous circulation despite appropriate field EMS treatment that includes minimally interrupted cardiopulmonary resuscitation. Further research is needed to determine the appropriate duration of cardiopulmonary resuscitation before termination of resuscitation and the proper role of direct medical oversight in termination of resuscitation protocols. This article is the resource document to the position statements, jointly endorsed by the National Association of EMS Physicians and the American College of Surgeons' Committee on Trauma, on withholding and termination of resuscitation in traumatic cardiopulmonary arrest. abstract_id: PUBMED:9229985 Small-volume resuscitation for hypovolemic shock. Concept, experimental and clinical results. The concept of small-volume resuscitation, the rapid infusion of a small volume (4 ml/kg BW) of hyperosmolar 7.2-7.5% saline solution for the initial therapy of severe hypovolemia and shock, was advocated more than a decade ago. Numerous publications have established that hyperosmolar saline solution can restore arterial blood pressure, cardiac index and oxygen delivery as well as organ perfusion to pre-shock values. Most prehospital studies failed to yield conclusive results with respect to a reduction in overall mortality. A meta-analysis of preclinical studies from North and South America, however, has indicated an increase in survival rate by 5.1% following small-volume resuscitation when compared to standard of care.
Moreover, small-volume resuscitation appears to be of particular benefit in patients suffering from head injuries with increased ICP and in the most severe trauma requiring immediate surgical intervention. Results from clinical trials in Austria, Germany and France have demonstrated positive effects of hyperosmolar saline solutions when used for fluid loading or fluid substitution in cardiac bypass and in aortic aneurysm surgery, respectively. A less positive perioperative fluid balance, better hemodynamic stability and improved pulmonary function were reported. In septic patients, oxygen consumption could be significantly augmented. The most important mechanism of action of small-volume resuscitation is the mobilisation of endogenous fluid primarily from oedematous endothelial cells, by which the rectification of shock-narrowed capillaries and the restoration of nutritional blood flow are efficiently promoted. Moreover, after ischemia reperfusion, a reduction in sticking and rolling leukocytes has been found following hyperosmolar saline infusion. Both may be of paramount importance in the long-term preservation of organ function following hypovolemic shock. An increased myocardial contractility in addition to the fluid loading effects of hyperosmolar saline solutions has been suggested as a mechanism of action. This, however, could not be confirmed by pre-load independent measures of myocardial contractility. Some concerns have been raised regarding the use of hyperosmolar saline solutions in patients with a reduced cardiac reserve. A slower speed of infusion and adequate monitoring are recommended for high-risk patients. Recently, hyperosmolar saline solutions in combination with artificial oxygen carriers have been proposed to increase tissue oxygen delivery through enhanced O2 content. This interesting perspective, however, requires further studies to confirm the potential indications for such solutions. Many hyperosmolar saline colloid solutions have been investigated in the past years, of which 7.2-7.5% sodium chloride in combination with either 6-10% dextran 60/70 or 6-10% hydroxyethyl starch 200,000 appears to yield the best benefit-risk ratio. This has led to the registration of the solutions in South America, Austria and The Czech Republic, and registration is expected soon in North America. abstract_id: PUBMED:21909734 The concept of small volume resuscitation for preclinical trauma management. Experiences in the Air Rescue Service. Background: Prompt hemorrhage control and adequate fluid resuscitation are the key components of early trauma care. However, the optimal resuscitation strategy remains controversial. In this context the small volume resuscitation (SVR) concept with hypertonic-hyperoncotic solutions is a new strategy. Patients And Methods: This was a retrospective study in the Helicopter Emergency Medical Service over a 5-year period. Included were all major trauma victims if they were candidates for SVR (initially 4 ml HyperHaes/kg body weight, followed by conventional fluid resuscitation with crystalloids and colloids). Demographic data, type and cause of injury and injury severity score (ISS) were recorded, and the amount of fluid volume and the hemodynamic profile were analyzed. Negative side-effects as well as sodium chloride serum levels on hospital admission were recorded. Results: A total of 342 trauma victims (male 70.2%, mean age 39.0 ± 18.8 years, ISS 31.6 ± 16.9, ISS>16, 81.6%) underwent prehospital SVR.
A blunt trauma mechanism was predominant (96.8%), and the leading causes of injury were motor vehicle accidents (61.5%) and motorcycle accidents (22.3%). Multiple trauma and polytrauma were noted in 87.4% of the cases. Predominant was traumatic brain injury (73.1%) as well as chest injury (73.1%), followed by limb injury (69.9%) and abdominal/pelvic trauma (45.0%). Within the whole study group, in addition to 250 ml HyperHaes, mean volumes of 1214 ± 679 ml lactated Ringer's solution and 1288 ± 954 ml hydroxyethyl starch were infused during the prehospital treatment phase. There were no statistically significant differences in the amount of crystalloids and colloids infused regarding the subgroups multisystem trauma (ISS>16), severe traumatic brain injury (GCS<9) and entrapment trauma compared to the total study group. In patients with an initial systolic blood pressure (SBP) >80 mmHg significantly fewer colloids (1035 ± 659 ml vs. 1288 ± 954 ml, p<0.006) were infused, whereas in patients with an initial SBP ≤ 80 mmHg significantly more colloids were infused (1609 ± 1159 ml vs. 1288 ± 954 ml, p<0.002). There was a statistically significant increase in systolic as well as diastolic blood pressure at all times of blood pressure measurement during prehospital treatment after bolus infusion of HyperHaes within the whole study group. The same applies to the subgroups multisystem trauma, severe traumatic brain injury and entrapment trauma. Minor negative side-effects were observed in 4 cases (1.2%). The mean serum sodium chloride level on hospital admission was 146.9 ± 5.0 mmol/l, the base excess (BE) was -5.7 ± 5.3 mmol/l, and the pH was 7.3 ± 0.1. Conclusion: The concept of small volume resuscitation provides early and effective hemodynamic control. Clinical side-effects associated with bolus infusion of hypertonic-hyperoncotic solutions are rare. abstract_id: PUBMED:1739896 Small-volume resuscitation. Small-volume resuscitation with hypertonic saline is an attractive modality but is still at an early stage of experimental and clinical investigation. Aggressive resuscitation from traumatic, hypovolemic shock could be facilitated during the prehospital phase, but hypotension remains a physiological and beneficial compensatory mechanism to minimize active bleeding until definitive care is provided. Despite encouraging results, the author concludes that, at present, hypertonic saline cannot be considered a truly "magic potion" for resuscitation of the trauma patient. abstract_id: PUBMED:27655883 Performance of a simplified termination of resuscitation rule for adult traumatic cardiopulmonary arrest in the prehospital setting. Objective: The prehospital termination of resuscitation (TOR) guidelines for traumatic cardiopulmonary arrest (TCPA) were proposed in 2003. Their multiple descriptors of cases where efforts can be terminated make them complex to apply in the field. Here we proposed a simplified rule and evaluated its predictive performance. Methods: We analysed Utstein registry data for 2009-2013 from a Taipei emergency medical service to test a simplified TOR rule that comprises two criteria: blunt trauma injury and the presence of asystole. Enrolees were adults (≥18 years) with TCPA. The predicted outcome was in-hospital death. We compared the areas under the curve (AUC) of the simple rule with each of four descriptors in the guidelines and with a combination of all four to assess their discriminatory ability. Test characteristics were calculated to assess predictive performance.
Results: A total of 893 TCPA cases were included. Blunt trauma occurred in 459 (51.4%) cases and asystole in 384 (43.0%). In-hospital mortality occurred in 854 (95.6%) cases. The simplified TOR rule had greater discriminatory ability (AUC 0.683, 95% CI 0.618 to 0.747) compared with any single descriptor in the 2003 guidelines (range of AUC: 0.506-0.616), although the AUC was similar when all four were combined (AUC 0.695, 95% CI 0.615 to 0.775). The specificity of the simplified rule was 100% (95% CI 88.8% to 100%) and positive predictive value 100% (95% CI 96.8% to 100%). The false positive value, false negative value and decreased rate of unnecessary transport were 0% (95% CI 0% to 3.2%), 94.8% (95% CI 92.9% to 96.2%) and 16.4% (95% CI 14.1% to 19.1%), respectively. Conclusions: The simplified TOR rule appears to accurately predict non-survivors in adults with TCPA in the prehospital setting. abstract_id: PUBMED:28958368 Assessment and Resuscitation in Trauma Management. The golden hour of trauma represents a crucial period in the management of acute injury. In an efficient trauma resuscitation, the primary survey is viewed as more than simple ABCs, with multiple processes running in parallel. Resuscitation efforts should be goal-oriented with defined endpoints for airway management, access, and hemodynamic parameters. In tandem with resuscitation, early identification of life-threatening injuries is critical for determining the disposition of patients when they leave the trauma bay. Salvage strategies for profoundly hypotensive or pulseless patients include retrograde balloon occlusion of the aorta and resuscitative thoracotomy, with differing populations benefiting from each. Answer: The assessment of termination of trauma resuscitation guidelines reveals that children cannot be considered small adults in this context. Studies have shown that while certain criteria for termination of resuscitation may predict outcomes in adults, they do not necessarily transfer to pediatric populations. One study evaluated the application of adult guidelines for termination of resuscitation in a pediatric trauma population and found that criteria such as CPR duration greater than 15 minutes and fixed pupils were significant variables to distinguish between survivors and nonsurvivors. However, ECG rhythm and absent pulse were not significant predictors. This study concluded that the criteria for termination of resuscitation correctly predicted 100% of those who died when all the criteria were met and, importantly, no survivors would have had resuscitation stopped if these criteria were applied, indicating that duration of CPR is a strong predictor of mortality in pediatric patients (PUBMED:20438923).
A retrospective analysis also indicated that adult "termination-of-resuscitation" (TOR) criteria may not be suitable for children, as the application of these criteria in a pediatric cohort did not yield accurate predictions of death or survival, suggesting that these rules are not transferable to a pediatric population (PUBMED:27927227). In summary, while adult resuscitation termination guidelines provide a framework, pediatric trauma resuscitation requires specific criteria that account for the differences in physiology and response to injury in children. Existing studies support the need for pediatric-specific guidelines and caution against the direct application of adult criteria to children (PUBMED:20438923, PUBMED:34555488, PUBMED:27927227).
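The test characteristics quoted for these TOR rules (sensitivity, specificity, PPV, NPV) all derive from a single 2×2 table pairing the rule's recommendation against the observed outcome. A minimal sketch of that calculation, using illustrative counts rather than any cited study's actual data:

```python
def tor_test_characteristics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Predictive characteristics of a termination-of-resuscitation rule.

    Convention here: 'positive' = the rule recommends termination,
    'condition present' = the patient died (no ROSC / no survival).
    """
    return {
        "sensitivity": tp / (tp + fn),  # deaths the rule would have terminated
        "specificity": tn / (tn + fp),  # survivors the rule correctly spared
        "ppv": tp / (tp + fp),          # terminated cases that truly died
        "npv": tn / (tn + fn),          # continued cases that truly survived
    }

# Illustrative counts only (not from the cited studies): a conservative rule
# that flags 15 of 29 deaths and never flags any of the 14 survivors.
print(tor_test_characteristics(tp=15, fp=0, fn=14, tn=14))
```

With zero false positives, specificity and PPV both come out at 100%, which is the pattern the abstracts above treat as the safety requirement for any TOR rule: sensitivity can be sacrificed, but no survivor may be misclassified as eligible for termination.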
Instruction: Linking Illness in Parents to Health Anxiety in Offspring: Do Beliefs about Health Play a Role? Abstracts: abstract_id: PUBMED:24963560 Linking Illness in Parents to Health Anxiety in Offspring: Do Beliefs about Health Play a Role? Background: The cognitive behavioural (CB) model of health anxiety proposes parental illness leads to elevated health anxiety in offspring by promoting the acquisition of specific health beliefs (e.g. overestimation of the likelihood of illness). Aims: Our study tested this central tenet of the CB model. Method: Participants were 444 emerging adults (18-25 years old) who completed online measures and were categorized into those with healthy parents (n = 328) or seriously ill parents (n = 116). Results: Small (d = .21), but significant, elevations in health anxiety, and small to medium (d = .40) elevations in beliefs about the likelihood of illness were found among those with ill vs. healthy parents. Mediation analyses indicated the relationship between parental illness and health anxiety was mediated by beliefs regarding the likelihood of future illness. Conclusions: Our study incrementally advances knowledge by testing and supporting a central proposition of the CB model. The findings add further specificity to the CB model by highlighting the importance of a specific health belief as a central contributor to health anxiety among offspring with a history of serious parental illness. abstract_id: PUBMED:35510964 Health anxiety by proxy - through the eyes of the parents. Health anxiety by proxy is a newly described phenomenon where parents worry excessively that their child suffers from a serious illness. In a former study, six parents with distressing worries about their child's health were interviewed to develop the Health Anxiety by Proxy Scale. The present study is a secondary analysis of these semi-structured interview data using interpretative phenomenological analysis aiming to explore for the first time the lived experience of parents with health anxiety by proxy. Analysis revealed three main themes: 'Faces of distress' describing various aspects of parents' experienced distress; 'Invasive insecurity and mistrust' portraying how anxiety affects parents' relationship with their children, health professionals and family; and 'Making sense of own worries' covering parents' ambivalence regarding their anxiety and rationalization of their worries. Knowledge about perspectives of parents who suffer from health anxiety by proxy can inform communication in clinical encounters, where validation of parents' experiences may be key to forging an alliance for further treatment, and to recognising and potentially lowering barriers to receiving help that lie in parents' potential mistrust of professional help. abstract_id: PUBMED:31286846 Illness cognition and health anxiety in parents of children with cancer. Purpose: Health anxiety is a clinical entity characterized by a pathological fear of illness. Illness cognition refers to persistent positive or negative thoughts an individual has towards illness. Evidence has shown that patients with chronic conditions who possess negative illness cognitions experience greater social, emotional, and physical difficulties than patients with positive illness cognitions. This study aims to measure the prevalence of health anxiety in a population of parents of children with cancer, and investigate the association between positive and negative illness cognitions and health anxiety.
Methods: We interviewed 105 parents of children with cancer and administered Arabic versions of the Illness Cognition Questionnaire - Parent Version and the Short Health Anxiety Inventory. Results: The mean parental age was 37.7 years, with the majority of participants being mothers (78.1%) and married (94.3%), and with 35.2% having completed university education. The average age of the child with cancer was 8.4 years, with the largest proportion of children suffering from leukemia. The prevalence of health anxiety among parents of children with cancer was 21%. The following two dimensions of illness cognition were significantly associated with health anxiety: Helplessness (B = 0.222, p = 0.021) and lower Acceptance (B = -0.242, p = 0.008). Other variables associated with health anxiety were perceived inadequate income (B = -0.238, p = 0.021) and personal illness or illness of a family member/close friend (B = 0.251, p = 0.013). Conclusions: Parents of children with cancer may experience health anxiety. Predictors of health anxiety include feelings of helplessness, lower acceptance, inadequate income, and extended family illnesses. abstract_id: PUBMED:38039620 Measurement properties of the Health Anxiety by Proxy Scale (HAPYS): A new questionnaire to assess parents' worries about their child's health. Objective: Health anxiety by proxy refers to parents' excessive worries about their child's health. The Health Anxiety by Proxy Scale (HAPYS) is a new self-report questionnaire to assess parents' worries and behaviors regarding their child's health. This study aimed to investigate the measurement properties of the HAPYS. Methods: Questionnaires were completed by 204 parents, and a HAPYS score was obtained for 200 parents: 39 parents diagnosed with health anxiety, 33 parents with different anxiety disorders, 33 parents with a Functional Somatic Disorder, and 95 healthy parents. We evaluated the following measurement properties: structural validity, reliability, convergent validity (pain catastrophizing, parents' reports of the child's emotional and physical symptoms), discriminant validity (parental reports of the child's well-being), and known-groups validity (see compared groups above). Results: HAPYS demonstrated a one-factor dimensionality, and excellent internal reliability (α = 0.95; CI: 0.93-0.97) and test-retest reliability after two weeks (ICC = 0.91; CI: 0.87-0.94). Convergent validity with the construct of parental catastrophizing about child pain was good (r = 0.72; CI: 0.64-0.78). Good known-groups validity was demonstrated by the largest total HAPYS score observed in parents with health anxiety (median = 35; IQR: 9-53) and the lowest score in healthy parents (median = 9; IQR: 5-15) (p < 0.001). Conclusion: The findings support that HAPYS is a useful measure of health anxiety by proxy. Future research should examine the measurement properties in larger samples and different languages with further statistical analyses of structural validity. abstract_id: PUBMED:33313025 "How to Recognize if Your Child Is Seriously Ill" During COVID-19 Lockdown: An Evaluation of Parents' Confidence and Health-Seeking Behaviors. Background: Parents' health-seeking behaviors have changed during the COVID-19 pandemic. Providing parents with guidance in decision making might improve their confidence to seek timely advice when a child becomes ill.
The aim of this study was to evaluate the effect of the "How to recognize if your child is seriously ill" leaflet on parents' confidence and health-seeking behaviors, and its usefulness, during the COVID-19 lockdown. Method: A nine-item survey, codesigned with parent advisors, was used to measure confidence and health-seeking behavior. Social media was used for data collection in a 6-week period (April-June 2020) during COVID-19 lockdown in the United Kingdom. Categorical data were analyzed as frequencies, and inductive content analysis was performed with the qualitative data. Results: In total, 171 parents responded. Most parents (n = 160, 93.6%) found the leaflet helpful. The leaflet increased the confidence among 116 parents (67.8%) to recognize if their child is ill, and 156 (91.2%) parents had a better understanding of when and where to seek help. Thirty-three (19.2%) parents used the leaflet while their child was unwell during COVID-19 lockdown, and in 14 (42%) cases, the leaflet resulted in changing health-seeking behavior for that episode. Twelve of these parents decided to seek medical consultation when they had not planned to before. Content analysis revealed three categories. (1) Knowledge - parents found the leaflet an objective source to validate their concerns. (2) Usability - parents reported that the leaflet was clearly designed. (3) Decision aid - parents commented that the leaflet provided clarification around recognition of serious symptoms and when and where to seek appropriate care. Conclusions: Our leaflet provided parents with guidance on decision making and risk assessment of ill children during COVID-19 lockdown. Parents found it helpful; it increased their confidence and positively changed their health-seeking behaviors. Providing parents with targeted information to recognize serious illness in children at home could potentially foster self-care and safely maintain a reduction in pediatric emergency attendances for self-limiting illnesses. abstract_id: PUBMED:20738735 Impact of childhood cancer on parents' relationships: an integrative review. Purpose: The diagnosis of cancer and the treatment decisions associated with it may cause uncertainty, stress, and anxiety among parents. Emotional tensions can affect parents' relationships during the trajectory of the child's cancer illness. We conducted an integrative review to examine the evidence related to the effects of childhood cancer on parents' relationships. Methods: An integrative literature search of studies published between 1997 and 2009 was conducted in the Cumulative Index to Nursing and Allied Health Literature (CINAHL), Psychology Information (PsycINFO), PubMed, Scopus, CUIDEN, and Latin American and Caribbean Health Science Literature (LILACS). The key words used were neoplasms, child, marriage, spouses, family relations, and nursing. Articles were reviewed if the (a) topic addressed parents' relationships during childhood cancer; (b) participants were mothers, fathers, or both; (c) design was either qualitative or quantitative; (d) language was English, Portuguese, or Spanish; (e) date of publication was between January 1997 and October 2009; and (f) abstract was available. Results: Fourteen articles met the search criteria and were reviewed using Cooper's framework for integrative reviews. Four themes emerged: (a) changes in the parents' relationship during the trajectory of the child's illness; (b) difficulty in communication between couples; (c) gender differences in parental stress and coping; and (d) role changes.
Conclusions And Implications: Findings revealed positive and negative changes in parents' relationships, communication, stress, and roles. Nurses need to assess the impact of cancer diagnosis and treatments on parent relationships, offer support and encouragement, and allow expression of feelings. Future research is needed to develop and test interventions that increase parents' potentials and strengthen relationships during the challenging trajectory of their children's cancer and treatment. Clinical Relevance: The multiple sources of stress and uncertainty associated with a child's cancer diagnosis and treatment affect parents' relationships. Difficulties in communication appear frequently in parents' relationships. Our findings may guide healthcare professionals in identifying parents at risk for developing conflicts, communication problems, and lack of alignment between parents that could interfere with providing optimal care for their child with cancer. Healthcare professionals may promote dialogue and encourage parents to express their feelings, seek mutual support, and establish a partnership in dealing with the child's illness. abstract_id: PUBMED:29111893 Psychological Health of Children with Chronic Physical Illness and their Parents - Results from Meta-Analyses. The present paper summarizes results from meta-analyses on psychological well-being of children with chronic physical illnesses and their parents. At the beginning, we discuss potential reasons for psychological effects of a chronic physical illness on children and adolescents as well as their parents. We then summarize results of meta-analyses of studies that compared aspects of mental health of children with a chronic physical illness and their parents with families of healthy children. Depressive symptoms, anxiety, and internalizing symptoms in general were most elevated in children with chronic fatigue syndrome and chronic headache, while externalizing symptoms were most elevated in young people with epilepsy, chronic headache, and cerebral palsy. Depression and anxiety were less elevated in the ill children than in their parents. Parents of children with HIV-infection/AIDS and cerebral palsy reported the highest levels of distress, followed by parents of children diagnosed with cancer and spina bifida. Conclusions are drawn for future research and practice. abstract_id: PUBMED:14562488 Effects of coresidence and caregiving on health of Thai parents of adult children with AIDS. Purpose: To explore potential effects on the health of older parents living with and caring for people infected with HIV. Design: Comparison of health outcomes between affected parents and matched nonaffected parents, and between principal caregivers and nonprincipal caregivers in Thailand. Methods: Survey data from 394 affected households and 376 nonaffected households; qualitative data from 18 interviews of affected older parents. Results: A large proportion of older people with HIV-infected children provided time-consuming and strenuous caregiving services to them. Mothers shouldered most of this burden. Mothers who had a child die from AIDS reported lower levels of overall happiness than did mothers who had not.
Mothers and fathers of PHAs (persons with HIV/AIDS) who died reported lower levels of overall happiness compared to 3 years previously (before the time of the death of their child) and compared to parents from households that did not experience an adult child's death. Many parents of children with AIDS experienced anxiety, insomnia, fatigue, muscle strain, and head and stomach aches during the time they cared for their ill children. Conclusions: Many older people suffered adverse health outcomes related to living with and caring for their children with AIDS. Recommendations indicate a variety of programs that might help such older parents. abstract_id: PUBMED:33724904 A new measure of excessive parental worries about children's health: Development of the Health Anxiety by Proxy Scale (HAPYS). Objective: Health anxiety by proxy is a newly introduced term to describe parents' experience of excessive and unpleasant worries about their child's health. This article describes the development of a new measure, the Health Anxiety by Proxy Scale (HAPYS), for systematic assessment of health anxiety by proxy. Method: The development of the HAPYS was performed over three phases. (1) Patients clinically assessed to have health anxiety by proxy participated in semi-structured interviews to elaborate on their experience of worries regarding their child's health and their related behaviours, and to examine the face validity of items in an existing questionnaire: 'Illness Worry Scale - parent version'. (2) Based on the findings from Phase 1, the project group and a panel of experts selected and formulated questionnaire items and scoring formats. (3) The HAPYS was pilot-tested twice using cognitive interviewing with healthy parents and parents with health anxiety by proxy, followed by further adjustments. Results: The final version of HAPYS consists of 26 items characteristic of health anxiety by proxy and of an impact section with five items. Conclusion: Based on the pilot testing, the HAPYS showed good face and content validity. It holds the potential to be a valid questionnaire to help clinicians across health care settings assess parents suffering from health anxiety by proxy. abstract_id: PUBMED:23461414 Quality of life and mental health among parents of children with cerebral palsy: the influence of self-efficacy and coping strategies. Aims And Objectives: To explore the quality of life and mental health of caregivers of children with cerebral palsy and to examine the impact of self-efficacy and coping strategies on these outcomes. Background: Few studies analyse the impact of caring for a child with cerebral palsy on the caregivers' quality of life besides mental health. Also, less attention has been paid to the influence of caregivers' personal resources like self-efficacy or coping strategies on how they adjust to the child's illness and the care situation. Design: Cross-sectional correlational design. Methods: Sixty-two parents of children with cerebral palsy completed measures to assess quality of life (i.e. physical, environmental and social relationships), mental health (i.e. general mental health, depression and anxiety), self-efficacy and coping strategies. Results: Parents of children with cerebral palsy had, in general terms, low levels of quality of life and mental health. Self-efficacy was related to most of the outcomes, whereas none of the coping strategies assessed was significantly related to the outcomes.
Conclusions: Quality of life and mental health can be affected in caregivers of children with CP. Personal resources like self-efficacy also need attention as they can help in the understanding of the differences in these outcomes and the design of effective interventions. Relevance To Clinical Practice: Self-efficacy should be a key element in interventions addressed to parents of children with CP to elicit a process of empowerment that can improve the well-being of the family as a whole. Answer: The cognitive-behavioral (CB) model of health anxiety suggests that parental illness can lead to increased health anxiety in offspring by fostering the acquisition of specific health beliefs, such as the overestimation of the likelihood of illness (PUBMED:24963560). A study that tested this central tenet of the CB model found that emerging adults with seriously ill parents had small but significant elevations in health anxiety and beliefs about the likelihood of illness compared to those with healthy parents. The relationship between parental illness and health anxiety in offspring was mediated by beliefs regarding the likelihood of future illness, supporting the CB model's proposition and highlighting the importance of specific health beliefs in contributing to health anxiety among offspring with a history of serious parental illness (PUBMED:24963560). In summary, beliefs about health do play a role in linking illness in parents to health anxiety in offspring, as proposed by the CB model and supported by empirical evidence.
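The mediation result reported in PUBMED:24963560 (parental illness → illness-likelihood beliefs → health anxiety) reduces to two regressions and a product-of-coefficients indirect effect. A minimal sketch on simulated data: the group sizes are taken from the abstract, but the effect sizes, noise model, and variable coding are assumptions for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 444  # total sample size reported in the abstract

# Hypothetical data: X = parental illness (0/1, ~116/444 ill),
# M = beliefs about likelihood of illness, Y = health anxiety.
x = rng.binomial(1, 116 / 444, n).astype(float)
m = 0.40 * x + rng.normal(size=n)             # path a built into the simulation
y = 0.50 * m + 0.05 * x + rng.normal(size=n)  # path b plus a small direct effect

def ols(y, X):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(m, x[:, None])[1]               # X -> M
b = ols(y, np.column_stack([m, x]))[1]  # M -> Y, controlling for X
c = ols(y, x[:, None])[1]               # total effect X -> Y
print(f"total c = {c:.3f}, indirect a*b = {a * b:.3f}, direct c' = {c - a * b:.3f}")
```

In the linear case the total effect decomposes exactly as c = c' + a·b, so a nonzero indirect effect with a near-zero direct effect is the numeric signature of the "mediated by beliefs" finding described above; published analyses would typically add a bootstrap confidence interval for a·b.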
Instruction: Can effective basic life support be taught to untrained individuals during a cardiac arrest? Abstracts: abstract_id: PUBMED:19078820 Can effective basic life support be taught to untrained individuals during a cardiac arrest? Objective: The objective of this study was to determine whether a trained rescuer could teach untrained bystanders to perform basic life support (BLS) during a simulated cardiac arrest. Method: Volunteers were recruited from hospital ancillary staff and relatives of patients attending an emergency department. None had previous formal training in BLS. They were asked to perform BLS without instruction on a Laerdal 'Resusci Anne'. An instructor then gave training whilst performing BLS. When volunteers felt competent they performed BLS on their own. A penalty score was used to assess their performance. Results: Fifty-one volunteers completed the study (32 female, 19 male); mean age was 39 years (range 18-67 years). All 51 volunteers showed an improvement in their penalty score after instruction. The score improved from 86.9 (SD=13.8) to 38.2 (SD=11.8). This decline was statistically significant (paired-sample t-test, P<0.0001). Conclusion: Untrained individuals showed an improvement in BLS skills when taught during a cardiac arrest. abstract_id: PUBMED:34093079 Basic life support. The European Resuscitation Council has produced these basic life support guidelines, which are based on the 2020 International Consensus on Cardiopulmonary Resuscitation Science with Treatment Recommendations. The topics covered include cardiac arrest recognition, alerting emergency services, chest compressions, rescue breaths, automated external defibrillation (AED), cardiopulmonary resuscitation (CPR) quality measurement, new technologies, safety, and foreign body airway obstruction. abstract_id: PUBMED:26374339 Effect of the laryngeal tube on the no-flow-time in a simulated two rescuer basic life support setting with inexperienced users. Introduction: Intubation with a laryngeal tube (LT) is a recommended alternative to endotracheal intubation during advanced life support (ALS). LT insertion is easy; therefore, it may also be an alternative to bag-mask ventilation (BMV) for untrained personnel performing basic life support (BLS). Data from manikin studies support the influence of LT on no-flow-time (NFT) during ALS. Methods: We performed a prospective, randomized manikin study using a two-rescuer model to compare the effects of ventilation using a LT and BMV on NFT during BLS. Participants were trained in BMV and were inexperienced in the use of a LT. Results: There was no significant difference in total NFT with the use of a LT and BMV (LT: mean 83.1 ± 37.3 s; BMV: mean 78.7 ± 24.5 s; p = 0.313), but we found significant differences in the progression of the scenario: in the BLS-scenario, the proportion of time spent performing chest compressions was higher when BMV was used compared to when a LT was used. The quality of chest compressions and the ventilation rate did not differ significantly between the two groups. The mean tidal volume and mean minute volume were significantly larger with the use of a LT compared with the use of BMV.
abstract_id: PUBMED:31556731 Basic life support for the child and infant. The majority of cardiorespiratory arrests in children and infants are caused by respiratory insufficiency rather than cardiac problems; therefore, the order of delivering resuscitation is different to the delivery of basic life support to adults. The Nursing and Midwifery Council has stated that all nurses must be able to provide basic life support. This article will explain the process for recognising the need for, and delivering, basic life support to infants and children. abstract_id: PUBMED:37969666 Importance of basic life support training in rural India. Basic life support (BLS) provided right away can lower fatality rates. Cardiac arrest typically results in death within minutes if it is untreated. Therefore, it is of interest to assess how BLS training affected villagers. The pre-experimental one-group pre-test post-test design was chosen for the investigation. A non-probability volunteer sampling technique was adopted to collect a sample of 220 village residents who met the inclusion requirements. The participants received basic life support training delivered using a real-life role model and hands-on CPR instruction. A checklist served as a standardized method for assessing the BLS training program. The pre-test and post-test means were 23.05 and 56.51, respectively, and their respective standard deviations were 11.89 and 8.27. The 'z test' calculation result was 12.36. The results showed that BLS training was effective for villagers and that regular BLS training programs are required to maintain their BLS skill level. abstract_id: PUBMED:29123702 Effects of obligatory training and prior training experience on attitudes towards performing basic life support: a questionnaire survey. Aim: To determine the effect of Japanese obligatory basic life support training for new driver's license applicants on their willingness to carry out basic life support. Methods: We distributed a questionnaire to 9,807 participants of basic life support courses in authorized driving schools from May 2007 to April 2008 after the release of the 2006 Japanese guidelines. The questionnaire explored the participants' willingness to perform basic life support in four hypothetical scenarios: cardiopulmonary resuscitation on one's own initiative; compression-only cardiopulmonary resuscitation following telephone cardiopulmonary resuscitation; early emergency call; and use of an automated external defibrillator. The questionnaire was given at the beginning of the basic life support course in the first 6-month term and at the end in the second 6-month term. Results: The 9,011 fully completed answer sheets were analyzed. The training significantly increased the proportion of respondents willing to use an automated external defibrillator and to perform cardiopulmonary resuscitation on their own initiative in those with and without prior basic life support training experience. It significantly increased the proportion of respondents willing to carry out favorable actions in all four scenarios. In multiple logistic regression analysis, basic life support training and prior training experiences within 3 years were associated with these attitudes. The analysis of reasons for unwillingness suggested that the training reduced the lack of confidence in their skill but did not attenuate the lack of confidence in detection of arrest or clinical judgment to initiate a basic life support action.
Conclusions: Obligatory basic life support training should be carried out periodically and modified to ensure that participants gain confidence in judging and detecting cardiac arrest. abstract_id: PUBMED:27942342 Is current training in basic and advanced cardiac life support (BLS & ACLS) effective? A study of BLS & ACLS knowledge amongst healthcare professionals of North-Kerala. Background: Healthcare professionals are expected to have knowledge of current basic and advanced cardiac life support (BLS/ACLS) guidelines to revive unresponsive patients. Methods: A cross-sectional study was conducted to evaluate the current practices and knowledge of BLS/ACLS principles among healthcare professionals of North-Kerala using a pretested, self-administered structured questionnaire. Answers were validated in accordance with the American Heart Association's BLS/ACLS teaching manual and the results were analysed. Results: Among 461 healthcare professionals, 141 (30.6%) were practicing physicians, 268 (58.1%) were nurses and 52 (11.3%) were supporting staff. The maximum achievable score was 20 (BLS 15/ACLS 5). The mean score amongst all healthcare professionals was 8.9±4.7. The mean scores among physicians, nurses and support staff were 8.6±3.4, 9±3.6 and 9±3.3, respectively. The majority of healthcare professionals scored ≤50% (237, 51.4%); 204 (44.3%) scored 51%-80% and 20 (4.34%) scored >80%. Mean scores decreased with age, were lower in males, and varied across occupations. Nurses who underwent BLS/ACLS training previously had significantly higher mean scores (10.2±3.4) than untrained nurses (8.2±3.6, P=0.001). Physicians with <5 years' experience (P=0.002) and nurses in the private sector (P=0.003) had significantly higher scores. One hundred and sixty-three (35.3%) healthcare professionals knew the correct airway opening manoeuvres like head tilt, chin lift and jaw thrust. Only 54 (11.7%) respondents were aware that atropine is not used in ACLS for cardiac arrest resuscitation, and 79 (17.1%) correctly identified ventricular fibrillation and pulseless ventricular tachycardia as shockable rhythms. The majority of healthcare professionals (356, 77.2%) suggested that BLS/ACLS be included in the academic curriculum. Conclusion: Inadequate knowledge of BLS/ACLS principles amongst healthcare professionals, especially physicians, illuminates lacunae in existing training systems and merits urgent redressal. abstract_id: PUBMED:30793062 The evaluation of laypersons' awareness of basic life support at the university in Izmir. Objectives: Basic Life Support (BLS) is the application of cardiopulmonary resuscitation (CPR) by members of the public to save the lives of cardiac arrest victims pending the arrival of the Emergency Medical Service (EMS). The aim of this study was to evaluate the effectiveness of training in order to ensure society understands the importance of early initiation of BLS, and to provide information concerning BLS and automated external defibrillators (AED). Methods: This study consisted of 150 participants, of whom none were healthcare professionals. The research data were collected from 150 pre-tests and 100 post-tests. Comparisons of nominal data were made using both McNemar's test and Pearson's exact chi-square test. Results: Of the participants, 39% had received BLS training prior to the study. It was observed that the participants' desire for applying BLS increased from 43% to 78% post training, and the ratio of ability to distinguish the need for BLS increased from 54% to 79%.
Our results also indicated that knowledge of CPR application increased after the training. The proportion of participants who knew the purpose of using AED increased from 79.8% to 95.7%. Conclusions: It was concluded that the BLS awareness training increased willingness to apply BLS, improved BLS knowledge, and increased awareness of AED use. abstract_id: PUBMED:26555372 Epidemiology of Pediatric Prehospital Basic Life Support Care in the United States. Children have unique medical needs compared to adults. Emergency medical services personnel need proper equipment and training to care for children. The purpose of this study is to characterize emergency medical services pediatric basic life support to help better understand the needs of children transported by ambulance. Pediatric basic life support patients were identified in this retrospective descriptive study. Descriptive statistics were used to examine incident location, possible injury, cardiac arrest, resuscitation attempted, chief complaint, primary symptom, provider's primary impression, cause of injury, and procedures performed during pediatric basic life support calls using the largest aggregate of emergency medical services data available, the 2013 National Emergency Medical Services Information System (NEMSIS) Public Release Research Data Set. Pediatric calls represented 7.4% of emergency medical services activations. Most pediatric patients were male (49.8%), White (40.0%), and of non-Hispanic origin (56.5%). Most incidents occurred in the home. Injury, cardiac arrest, and resuscitation attempts were highest in the 15- to 19-year-old age group. Global complaints (37.1%) predominated by anatomic location and musculoskeletal complaints (26.9%) by organ system. The most common primary symptom was pain (30.3%) followed by mental/psychiatric (13.4%). Provider's top primary impression was traumatic injury (35.7%). The most common cause of injury was motor vehicle accident (32.3%). The most common procedure performed was patient assessment (27.4%). Median EMS system response time was 7 minutes (IQR: 5-12). Median EMS scene time was 12 minutes (IQR: 8-19). Median transport time was 14 minutes (IQR: 8-24). Median EMS total call time was 51 minutes (IQR: 33-77). The epidemiology of pediatric basic life support can help to guide efforts in both emergency medical services operations and training. abstract_id: PUBMED:29361872 Effectiveness of Basic Life Support Training for Middle School Students. Bystander cardiopulmonary resuscitation improves survival after out-of-hospital cardiac arrest. This study aimed to assess the effectiveness of a basic life support (BLS) educational course given to 110 middle school children, using a pretest-posttest design. In the pretest, students were asked to demonstrate BLS on a manikin to simulate a real-life scenario. After the pretest, a BLS training course of two sessions was provided, followed by a posttest on the same manikin. Students were assessed using an observational sheet based on the American Heart Association's BLS guidelines. In the pretest, students showed significant weakness in the majority of guidelines. In the posttest, they demonstrated significant improvement in their BLS skills. BLS training in the middle school was effective, considering the lack of previous skills. It is recommended that BLS education be compulsory in the school setting. Answer: Yes, effective basic life support (BLS) can be taught to untrained individuals during a cardiac arrest.
A study involving hospital ancillary staff and relatives of patients attending an emergency department, who had no previous formal training in BLS, demonstrated that these untrained individuals showed significant improvement in their BLS skills when taught during a simulated cardiac arrest. The penalty score used to assess their performance improved from 86.9 to 38.2 after instruction, indicating that the training was effective (PUBMED:19078820). This suggests that with proper instruction, even during the stress of a cardiac arrest scenario, untrained bystanders can learn and apply BLS techniques effectively.
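The headline result in PUBMED:19078820 is a paired comparison: the same 51 volunteers were scored before and after instruction and tested with a paired-sample t-test. A minimal sketch of that analysis on simulated scores shaped to resemble the reported means and SDs; the per-person values, and the assumed SD of the within-person improvement, are illustrative, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 51  # volunteers who completed the study

# Hypothetical paired penalty scores (lower = better performance),
# roughly matching the reported 86.9 (SD 13.8) -> 38.2 (SD 11.8).
before = rng.normal(86.9, 13.8, n)
improvement = rng.normal(48.7, 10.0, n)  # assumed within-person improvement
after = before - improvement

t, p = stats.ttest_rel(before, after)
print(f"mean before = {before.mean():.1f}, mean after = {after.mean():.1f}")
print(f"paired t = {t:.2f}, p = {p:.2e}")
```

The paired test is the right choice here because each volunteer serves as their own control; testing the two sets of scores as independent samples would ignore the within-person correlation and lose power.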
Instruction: Is a change in functional capacity or dependency in activities of daily living associated with a change in mental health among older people living in residential care facilities? Abstracts: abstract_id: PUBMED:24379657 Is a change in functional capacity or dependency in activities of daily living associated with a change in mental health among older people living in residential care facilities? Aim: Functional capacity and dependency in activities of daily living (ADL) could be important mediators for an association between physical exercise and mental health. The aim of this study was to investigate whether a change in functional capacity or dependency in ADL is associated with a change in depressive symptoms and psychological well-being among older people living in residential care facilities, and whether dementia can be a moderating factor for this association. Methods: A prospective cohort study was undertaken. Participants were 206 older people, dependent in ADL, living in residential care facilities, 115 (56%) of whom had diagnosed dementia. Multivariate linear regression, with comprehensive adjustment for potential confounders, was used to investigate associations between differences over 3 months in Berg Balance Scale (BBS) and Geriatric Depression Scale (GDS-15) scores, and in BBS and Philadelphia Geriatric Center Morale Scale (PGCMS) scores. Associations were also investigated between differences in Barthel ADL Index and GDS-15 scores, and in Barthel ADL Index and PGCMS scores. Results: There were no significant associations between changes in scores over 3 months; the unstandardized β for associations between BBS and GDS-15 was 0.026 (P=0.31), BBS and PGCMS 0.045 (P=0.14), Barthel ADL Index and GDS-15 0.123 (P=0.06), and Barthel ADL Index and PGCMS -0.013 (P=0.86). There were no interaction effects for dementia. Conclusion: A change in functional capacity or dependency in ADL does not appear to be associated with a change in depressive symptoms or psychological well-being among older people living in residential care facilities. These results may offer one possible explanation as to why studies of physical exercise to influence these aspects of mental health have not shown effects in this group of older people. abstract_id: PUBMED:24523582 Functional capacity and dependency in transfer and dressing are associated with depressive symptoms in older people. Background: This study examined associations between depressive symptoms and functional capacity, overall dependency in personal activities of daily living (ADLs), and dependency in individual ADL tasks, respectively, in people with a high mean age, large range of functional capacity, and wide spectrum of dependency in ADLs. Methods: Cross-sectional data from three studies were used. A total of 392 individuals living in community and residential care facilities were included. Mean age was 86.2 years, 72% were women, 75% were dependent in ADLs, 42% had depression, and 39% had dementia. Depressive symptoms were assessed with the 15-item Geriatric Depression Scale (GDS-15), functional capacity with the Berg Balance Scale (BBS), and ADLs with the Barthel ADL Index. Multiple linear regression analyses with comprehensive adjustments were performed between GDS-15 and BBS, GDS-15 and Barthel ADL Index, and GDS-15 and each individual ADL task, separately. Results: GDS-15 score was associated with BBS score (unstandardized b =-0.03, P=0.008), but not with Barthel ADL Index score (unstandardized b =-0.07, P=0.068). 
No significant interaction effects of sex, dementia, or living conditions were found in these associations. Among individual ADL tasks, dependency in transfer (unstandardized b =-1.03, P=0.007) and dressing (unstandardized b =-0.70, P=0.035) were associated with depressive symptoms. Conclusion: Functional capacity seems to be independently associated with depressive symptoms in older people living in community and residential care facilities, whereas overall ADL performance may not be associated. Dependency in the individual ADL tasks of transfer and dressing appears to be independently associated with depressive symptoms and may be an important focus of future interdisciplinary multifactorial intervention studies. abstract_id: PUBMED:32850900 Effect of Capacity to Undertake Instrumental Activities of Daily Living on Entry to Aged Residential Care in Older People With Heart Failure. Background: Heart failure is a common condition in older people with complex medical needs. A key factor in resilience after heart failure is the capacity to perform the instrumental activities of daily living (IADLs). Knowing the association between capacity to perform IADLs and entry into aged residential care will help health professionals plan interventions that will allow older people to remain independent longer. Methods: We analyzed the association between the capacity to perform eight IADLs and entry into ARC. Participants included New Zealanders aged ≥65 years with a diagnosis of heart failure, and who had an InterRAI 9.1 Home Care assessment between July 2012 and June 2018. A multivariable competing risks regression model for entry to ARC with death as the competing risk was used to establish sub-hazard ratios (SHR) for IADL capacity. Co-variates included demographic variables, co-morbidities, living arrangements, cognitive performance, depression, timed walk, alcohol use, smoking, activities of daily living, recent hospitalization and history of falls. Results: There were 13,220 participants with heart failure who were followed for a median 1.69 (0.70-3.17) years. There were 3,177 (24.0%) participants who entered aged residential care and 5,714 (43.2%) who died without having first entered residential care. Overall capacity to perform specific IADLs was "very poor" for housework (85.5%), shopping (68.0%), stairs (61.7%), meal preparation (53.0%), and transportation (52.2%). In the multivariable model, compared to adequate capacity (the reference), poorer capacity for managing finances, managing medications, meal preparation and transport were all associated with increased risk of entering aged residential care, with SHRs ranging from 1.05 to 1.18. Overall, the IADL capacity explained ~10% of the risk of entering aged residential care. Conclusion: Capacity to perform IADLs is a key factor in maintaining resilience in older people with heart failure. Capacity to manage finances and medications, prepare meals, and transport oneself with minimal supervision could reduce the risk of entry into aged residential care. Developing early interventions and support for people with poor capacity to perform their IADLs may help reduce admission into aged residential care. abstract_id: PUBMED:16579666 A high-intensity functional weight-bearing exercise program for older people dependent in activities of daily living and living in residential care facilities: evaluation of the applicability with focus on cognitive function.
Background And Purpose: Knowledge concerning the applicability and the effect of high-intensity exercise programs is very limited for older people with severe cognitive and physical impairments. The primary aim of this study was to evaluate the applicability of a high-intensity functional weight-bearing exercise program among older people who are dependent in activities of daily living and living in residential care facilities. A second aim was to analyze whether cognitive function was associated with the applicability of the program. Subjects: The subjects were 91 older people (mean age=85.3 years, SD=6.1, range=68-100) who were dependent in personal activities of daily living and randomly assigned to participate in an exercise intervention. Their mean score for the Mini-Mental State Examination (MMSE) was 17.5 (SD=5.0, range=10-29). Methods: A high-intensity functional weight-bearing exercise program was performed in groups of 3 to 7 participants who were supervised by physical therapists. There were 29 exercise sessions over 13 weeks. Attendance, intensity of lower-limb strength and balance exercises, and occurrence and seriousness of adverse events were the outcome variables in evaluating the applicability of the program. Results: The median attendance rate was 76%. Lower-limb strength exercises with high intensity were performed in a median of 53% of the attended exercise sessions, and balance exercises with high intensity were performed in a median of 73% of the attended exercise sessions. The median rate of sessions with adverse events was 5%. All except 2 adverse events were assessed as minor and temporary, and none led to manifest injury or disease. No significant differences were observed in applicability when comparing participants with dementia and participants without dementia. In addition, there was no significant correlation between applicability and the MMSE score. Discussion And Conclusion: The results suggest that a high-intensity functional weight-bearing exercise program is applicable for use, regardless of cognitive function, among older people who are dependent in activities of daily living, living in residential care facilities, and have an MMSE score of 10 or higher. abstract_id: PUBMED:34751506 Effects of Vitality Acupunch exercise on functional fitness and activities of daily living among probable sarcopenic older adults in residential facilities. Purpose: To test the effects of a Vitality Acupunch exercise program on the functional fitness and the ability to perform the activities of daily living (ADL) among older adults with probable sarcopenia in residential facilities. Design: This was a cluster-randomized controlled trial. A total of 12 long-term care facilities were randomly assigned to the intervention and control groups with a 1:1 allocation ratio. Among them, 114 older adults with probable sarcopenia participated at baseline and were allocated to either the intervention or control group according to the facility in which they resided. Of these, 103 older adults completed the study. Methods: The intervention group (n = 52) underwent the Vitality Acupunch exercise program three times a week, each session lasting 40 min, for 6 months, while the control group (n = 51) performed its routine daily activities. Findings: The functional fitness and ADL of the intervention group significantly improved at each time point (all p < 0.001), while the control group showed a significant decreasing trend.
Except for lower limb muscular endurance, the functional fitness and ADL of the intervention group significantly improved compared to the control group at T1, and the improvements were still significant at T2. Conclusions: Functional fitness and the ability to perform ADL in older adults with probable sarcopenia were significantly improved after receiving the Vitality Acupunch exercises. Clinical Relevance: An exercise that integrates the meridian theory and exercise concepts effectively improves functional fitness in probable sarcopenic older adults. Clinical Trial Registration: Clinicaltrials.gov (NCT04504786). The trial was first posted on August 7, 2020. This part of the data was collected from August 2020 to March 2021. abstract_id: PUBMED:33494359 Sociodemographic Factors Affecting Older People's Care Dependency in Their Daily Living Environment According to Care Dependency Scale (CDS). The aim of the research was to determine the influence of sociodemographic factors on older people's care dependency in their living environment according to the Care Dependency Scale (CDS). Methods: The research was conducted in a group of 151 older people staying in their own homes. The methods applied in the research comprised a sociodemographic questionnaire and the following scales: the Abbreviated Mental Test Score (AMTS), CDS, Katz Index of Independence in Activities of Daily Living (ADL), Lawton's Instrumental Activities of Daily Living (I-ADL), Mini Nutritional Assessment (MNA), and Geriatric Depression Scale (GDS). Results: Gender had a significant impact on the level of care dependency. The surveyed females had a medium or high level of dependency more often than males (22.4% vs. 6.1%), and a low level of dependency was significantly more frequent among men than women (p = 0.006). Moreover, the age of the respondents determined their level of care dependency. The subjects with a medium or high level of care dependency were significantly older (p = 0.001). The subjects with a low level of care dependency were more likely to be married than people with a medium/high level (p < 0.001). The level of education had a significant impact on care dependency. A higher level of education correlated with a medium/high level of dependency (p = 0.003). Conclusions: The survey results confirmed that sociodemographic factors have a significant impact on the level of care dependency. When planning care in the home environment, special attention should be paid to older women, who are more likely to lose their independence than men. These women should be given additional support. abstract_id: PUBMED:17636155 Berg balance scale: intrarater test-retest reliability among older people dependent in activities of daily living and living in residential care facilities. Background And Purpose: The Berg Balance Scale (BBS) is frequently used to assess balance in older people, but knowledge is lacking about the absolute reliability of BBS scores. The aim of this study was to investigate the absolute and relative intrarater test-retest reliability of data obtained with the BBS when it is used among older people who are dependent in activities of daily living and living in residential care facilities. Subjects: The participants were 45 older people (36 women and 9 men) who were living in 3 residential care facilities. Their mean age was 82.3 years (SD=6.6, range=68-96), and their mean score on the Mini Mental State Examination was 17.5 (SD=6.3, range=4-30). Methods: The BBS was assessed twice by the same assessor.
The intrarater test-retest reliability assessments were made at approximately the same time of day and with 1 to 3 days in between assessments. Absolute reliability was calculated using an analysis of variance with a 95% confidence level, as suggested by Bland and Altman. Relative reliability was calculated using the intraclass correlation coefficient (ICC). Results: The mean score was 30.1 points (SD=15.9, range=3-53) for the first BBS test and 30.6 points (SD=15.6, range=4-54) for the retest. The mean absolute difference between the 2 tests was 2.8 points (SD=2.7, range=0-11). The absolute reliability was calculated to be 7.7 points, and the ICC was 0.97. Discussion And Conclusion: Despite a high ICC value, the absolute reliability showed that a change of 8 BBS points is required to reveal a genuine change in function among older people who are dependent in activities of daily living and living in residential care facilities. This knowledge is important in the clinical setting when evaluating an individual's change in balance function over time in this group of older people. abstract_id: PUBMED:28858409 Nutritional supplementation for activities of daily living and functional ability of older people in residential facilities: A systematic review. Aim: To review the best available evidence on nutritional supplementation for activities of daily living and functional ability of older people in residential facilities. Methods: Electronic searches were carried out using CENTRAL, MEDLINE and EMBASE databases for the years 2006-2016. Randomized controlled trials and cluster-randomized controlled trials that examined the effects of nutrition interventions aimed at improving energy or protein intake, alone or in combination, were included. Two authors independently reviewed all potential studies for inclusion against the eligibility criteria. We reviewed studies for outcome relevance and methodological rigor. Any disagreement was resolved by discussion. Results: A total of eight studies were included, containing four randomized controlled trials and four cluster-randomized controlled trials involving 698 participants. There was significant improvement in handgrip strength (mean difference 1.65 kg, 95% confidence interval 0.09-3.22 kg, P = 0.04), but no difference in activities of daily living (mean difference 2.06, 95% confidence interval -18.28 to 22.40, P = 0.84), balance (mean difference -1.10, 95% confidence interval -3.04 to 0.84, P = 0.27), gait velocity (mean difference 0.00, 95% confidence interval -0.03 to 0.03, P = 1.00) and death (RR 1.90, 95% confidence interval 0.61-5.99, P = 0.27). Conclusions: Nutritional intervention with older people in residential facilities was effective in improving handgrip strength, but did not significantly improve scores for activities of daily living, balance, or gait velocity, or prevent death. Further studies with larger sample sizes and of high quality are required to investigate appropriate intervention methods and specific target participants. Geriatr Gerontol Int 2018; 18: 197-210. abstract_id: PUBMED:20496181 Effects of a high-intensity functional exercise programme on depressive symptoms and psychological well-being among older people living in residential care facilities: A cluster-randomized controlled trial.
Objectives: To evaluate the effects of a high-intensity functional exercise programme on depressive symptoms and psychological well-being among older people dependent in activities of daily living (ADL) and living in residential care facilities. Method: Cluster-randomized controlled study. Participants were 191 older people, aged 65-100, dependent in ADL and with Mini Mental State Examination scores between 10 and 30. One hundred (52%) of the participants had a diagnosed dementia disorder. A high-intensity functional weight-bearing exercise programme and a control activity were performed in groups. Sessions were held five times over each two-week period for three months, a total of 29 times. The outcome measures, the Geriatric Depression Scale (GDS-15) and the Philadelphia Geriatric Center Morale Scale (PGCMS), were blindly assessed at baseline and at three and six months. Results: At baseline, mean +/- SD (range) for GDS was 4.4 +/- 3.2 (0-14), and for PGCMS 11.0 +/- 3.5 (2-17). There were no significant differences in GDS or PGCMS between the exercise and the control group at the three- and six-month follow-ups in the total sample. Among people with dementia, there was a between-group difference at three months in PGCMS scores in favour of the exercise group. Conclusion: A high-intensity functional exercise programme seems generally not to influence depressive symptoms or psychological well-being among older people dependent in ADL and living in residential care facilities. An individualized and multifactorial intervention may be needed in this group. However, an exercise programme as a single intervention may have a short-term effect on well-being among people with dementia. abstract_id: PUBMED:18283231 A randomized controlled trial of fall prevention by a high-intensity functional exercise program for older people living in residential care facilities. Background And Aims: Falls are particularly common among older people living in residential care facilities. The aim of this randomized controlled trial was to evaluate the effectiveness of a high-intensity functional exercise program in reducing falls in residential care facilities. Methods: Participants comprised 191 older people, 139 women and 52 men, who were dependent in activities of daily living. Their mean+/-SD score on the Mini-Mental State Examination was 17.8+/-5.1 (range 10-30). Participants were randomized to a high-intensity functional exercise program or a control activity, consisting of 29 sessions over 3 months. The fall rate and proportion of participants sustaining a fall were the outcome measures, subsequently analysed using negative binomial analysis and logistic regression analysis, respectively. Results: During the 6-month follow-up period, when all participants were compared, no statistically significant differences between groups were found for fall rate (exercise group 3.6 falls per person-year [PY], control group 4.6 falls per PY), incidence rate ratio (95% CI) 0.82 (0.49-1.39), p=0.46, or the proportion of participants sustaining a fall (exercise 53%, control 51%), odds ratio (95% CI) 0.95 (0.52-1.74), p=0.86. A subgroup interaction analysis revealed that, among participants who improved their balance during the intervention period, the exercise group had a lower fall rate than the control group (exercise 2.7 falls per PY, control 5.9 falls per PY), incidence rate ratio (95% CI) 0.44 (0.21-0.91), p=0.03.
Conclusions: In older people living in residential care facilities, a high-intensity functional exercise program may prevent falls among those who improve their balance. Answer: The relationship between changes in functional capacity or dependency in activities of daily living (ADL) and mental health among older people living in residential care facilities has been explored in several studies, but the findings are not entirely consistent. One study found no significant associations between changes in functional capacity, as measured by the Berg Balance Scale (BBS), or dependency in ADL, as measured by the Barthel ADL Index, and changes in depressive symptoms or psychological well-being, as measured by the Geriatric Depression Scale (GDS-15) and the Philadelphia Geriatric Center Morale Scale (PGCMS), respectively. This study concluded that a change in functional capacity or dependency in ADL does not appear to be associated with a change in mental health among this population (PUBMED:24379657). In contrast, another study reported that functional capacity, as assessed by the BBS, was independently associated with depressive symptoms in older people living in both community and residential care facilities. This study also found that dependency in individual ADL tasks, specifically transfer and dressing, was associated with depressive symptoms. However, overall ADL performance, as measured by the Barthel ADL Index, was not associated with depressive symptoms (PUBMED:24523582). These differing results suggest that while specific aspects of functional capacity and dependency in certain ADL tasks may be related to mental health outcomes like depressive symptoms, the overall change in functional capacity or dependency in ADL may not have a clear association with changes in mental health among older people in residential care facilities. It is important to note that the context of the studies and the specific measures used to assess functional capacity, dependency in ADL, and mental health may influence the findings. Further research may be needed to clarify these relationships and to determine whether interventions targeting functional capacity or ADL dependency could have beneficial effects on mental health in this population.
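A quantitative aside on the abstracts above: the claim that a change of 8 BBS points is needed to reveal genuine change (PUBMED:17636155) follows from an absolute-reliability, or smallest-real-difference, calculation. A minimal sketch in Python of one common formulation, using only numbers reported in that abstract (score SDs of 15.9 and 15.6 and an ICC of 0.97); the study itself used an ANOVA-based Bland and Altman approach, so its exact arithmetic may differ slightly:

import math

def smallest_real_difference(sd_pooled: float, icc: float) -> float:
    # Standard error of measurement, derived from the score SD and the
    # test-retest ICC.
    sem = sd_pooled * math.sqrt(1.0 - icc)
    # 95% bound for the difference between two repeated measurements.
    return 1.96 * math.sqrt(2.0) * sem

# SDs pooled here, for illustration, as the mean of the two reported values.
srd = smallest_real_difference((15.9 + 15.6) / 2, 0.97)
print(round(srd, 1))  # -> 7.6, close to the reported 7.7-point threshold

Any observed change smaller than this bound is indistinguishable from measurement noise, which is why a high ICC alone does not guarantee that small score changes are clinically meaningful.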
Instruction: Survival impact of delayed treatment in patients with hepatocellular carcinoma undergoing locoregional therapy: is there a lead-time bias? Abstracts: abstract_id: PUBMED:17454859 Survival impact of delayed treatment in patients with hepatocellular carcinoma undergoing locoregional therapy: is there a lead-time bias? Objective: Many reports indicate the importance of active treatment for hepatocellular carcinoma (HCC), but there are few studies available that address the impact of delayed therapy on survival or take the lead-time bias into account. The objective of this study was to investigate whether patients with delayed locoregional therapy for HCC truly have a shortened survival from the time of diagnosis. Material And Methods: Survival rates were compared between 48 HCC patients with treatment delay and 96 age- and gender-matched controls without delay. All patients underwent transarterial chemoembolization or percutaneous ethanol or acetic acid injection for HCC. Treatment delay was defined as an interval of >2 months between diagnosis and treatment. Results: Baseline comparison showed that patients with treatment delay had higher scores on the Model for End-Stage Liver Disease (MELD) compared with those of patients without delay (12.3+/-1.8 versus 11.1+/-2.5, p=0.01). In the Cox multivariate model, advanced cancer stage (relative risk (RR): 2.66, p=0.001), Child-Turcotte-Pugh class B (RR: 3.81, p<0.001), tumor size >5 cm (RR: 2.02, p=0.011) and treatment delay (RR: 2.91, p=0.001) were independent poor prognostic predictors. Among patients with treatment delay, disease progression was registered in 30 (63%) patients. Patients with prolonged treatment delay (>3 months) were more likely to have tumor progression (p=0.013). In the Cox model, a treatment delay of >3 months independently predicted a poor rate of survival (RR: 3.67, p=0.002). Conclusions: Delayed HCC treatment is linked with shortened overall survival unrelated to the lead-time bias in patients undergoing locoregional therapy. Prolonged treatment delay of more than 3 months in these patients may worsen the long-term outcome. abstract_id: PUBMED:27623157 Impact of preoperative locoregional therapy on recurrence and patient survival following liver transplantation for hepatocellular carcinoma: a meta-analysis. Objective: To evaluate the impact of preoperative locoregional therapy on recurrence and patient survival following liver transplantation for hepatocellular carcinoma (HCC). Methods: We searched medical literature databases to identify appropriate studies assessing the impact of preoperative locoregional therapy on recurrence and patient survival following liver transplantation from January 1962 to April 2014. Study inclusion criteria were the existence of a control group, a sufficiently long follow-up period and reporting of survival outcomes. We then performed a meta-analysis of these studies. Results: Our search identified 12 studies from among a possible 1105. A total of 1504 patients were included in our analysis. There was no significant heterogeneity among the studies. In the meta-analysis, preoperative locoregional therapy did not significantly affect five-year survival rates following liver transplantation (hazard ratio [HR] = 1.06; 95% confidence interval [CI] = 0.82-1.38). For patients meeting the Milan criteria, preoperative locoregional therapy did not affect survival rates following liver transplantation (HR = 1.04, 95% CI = 0.74-1.45).
The recurrence-free survival rate also had no association with preoperative locoregional therapy (HR = 1.02, 95% CI = 0.70-1.50). Conclusion: Our meta-analysis suggests that preoperative locoregional therapy has no impact on survival following liver transplantation for HCC. abstract_id: PUBMED:29572974 Impact of remnant vital tissue after locoregional treatment and liver transplant in hepatocellular cancer patients, a multicentre cohort study. The role of pathological findings after locoregional treatments as predictors of hepatocellular cancer recurrence after liver transplantation has been poorly addressed. The aim of the study was to identify the role of remnant vital tissue (RVT) of the target lesion in predicting hepatocellular cancer recurrence. Two hundred and seventy-six patients who first underwent locoregional treatment and were then transplanted between January 2010 and December 2015 in four European Transplant Centres (i.e. Rome Tor Vergata, Birmingham, Brussels and Ancona) were enrolled in the study to investigate the role of pathological response to upfront locoregional treatment. In multivariable Cox regression analysis, RVT ≥2 cm was a strong independent risk factor for post-LT recurrence (HR = 5.6; P < 0.0001). Five-year disease-free survival rates were 60.8%, 80.9% and 95.0% in patients presenting an RVT ≥2 cm vs. 0.1-1.9 cm vs. no RVT, respectively. When only Milan Criteria-IN patients were analysed, similar results were reported, with 5-year disease-free survival rates of 58.1%, 79.0% and 94.0% in patients presenting an RVT ≥2 cm vs. 0.1-1.9 cm vs. no RVT, respectively. RVT is an important determinant of tumour recurrence after liver transplantation performed for hepatocellular cancer. Its discriminative power appears evident even in a Milan-IN setting, suggesting that locoregional treatments could be used more liberally in these patients as well. abstract_id: PUBMED:37568605 Efficacy of the Combination of Systemic Sequential Therapy and Locoregional Therapy in the Long-Term Survival of Patients with BCLC Stage C Hepatocellular Carcinoma. Background: The aim of this study was to evaluate the clinical impact of a combination of systemic sequential therapy and locoregional therapy on the long-term survival of patients with Barcelona Clinic Liver Cancer (BCLC) stage C hepatocellular carcinoma (HCC). Methods: Sixty-four consecutive patients with intrahepatic target nodules who had initially received systemic therapy (lenvatinib and atezolizumab plus bevacizumab) were reviewed. The clinical impact of the combined use of systemic sequential therapy and locoregional therapy was evaluated by determining overall survival (OS). The combined use of systemic sequential therapy with more than two agents and locoregional treatment was defined as multidisciplinary combination therapy (MCT), while only systemic sequential therapy and repeated locoregional treatment was defined as a single treatment procedure (STP). Results: R0 resection, MCT, and STP resulted in significantly better OS compared with no additional treatment (median OS, not reached vs. 18.2 months and 12.6 vs. 8.1 months, respectively; p = 0.002). Multivariate analysis confirmed that the use of R0 resection and MCT were associated with better OS (hazard ratio [HR] = 0.053, p = 0.006 and HR = 0.189, p < 0.001, respectively) compared with that for STP (HR = 0.279, p = 0.003). Conclusions: MCT may be effective in patients with BCLC stage C HCC and intrahepatic target nodules who have previously received systemic therapy-based treatment.
abstract_id: PUBMED:29981527 Impact of hepatocellular carcinoma aetiology and liver function on the benefit of surveillance: A novel approach for the adjustment of lead-time bias. Background & Aims: Surveillance reportedly has a survival benefit in patients with hepatocellular carcinoma (HCC), even after adjustment for lead-time bias. However, previous adjustment for lead-time bias using tumour volume doubling time (TVDT) had an inherent problem with accuracy. We evaluated the survival benefit of HCC surveillance with a newly developed approach for adjusting lead-time bias. In addition, the survival benefit was evaluated according to HCC aetiology and liver function. Methods: A total of 3899 patients were studied. TVDT was calculated in 255 study patients with ≥2 tumour size measurements before the diagnosis of HCC. Adjusted survival time was calculated based on TVDT, as the time from when HCC was assumed to be 5 mm to death or last follow-up. Survival rates based on this adjusted survival time were compared between the surveillance and nonsurveillance groups and categorized by HCC aetiology and liver function. Results: Calculated TVDT varied widely among study patients (median 141.9 days, IQR 73.1-261.7). Survival rates based on adjusted survival time were higher in the surveillance group overall and within each HCC aetiology group. Whereas adjusted survival rates were higher in the surveillance group in Child-Pugh class A patients, the survival benefit was smaller in Child-Pugh class B patients and not statistically significant in Child-Pugh class C patients. Conclusions: The survival benefit of surveillance for patients with HCC was demonstrated after adjustment for lead-time bias with a novel, more accurate methodology. However, the benefits differed based on liver function and may vary widely between patients because of the wide variation in TVDT. abstract_id: PUBMED:27270711 New Approaches in Locoregional Therapies for Hepatocellular Carcinoma. Purpose: Hepatocellular carcinoma (HCC) is currently the fifth most common cancer worldwide, with liver transplantation and hepatic resection representing the standard of care for curative treatment. Unfortunately, not all patients can benefit from curative treatment. For such patients, locoregional or systemic therapies represent a valid option for achieving the best possible survival. Methods: A review of the most relevant papers currently in the literature on locoregional treatment for nonresectable, nontransplantable HCC was performed. Results: Each approach is described in detail in its own section. Conclusion: In cases of nontransplantable and nonresectable HCC, locoregional treatment represents a valid alternative in the management of these patients. abstract_id: PUBMED:38310071 Subsequent locoregional therapy prolongs survival in progressive hepatocellular carcinoma patients under lenvatinib treatment. Background: Locoregional therapy and multi-kinase inhibitor agents have been the backbone of treatment for hepatocellular carcinoma (HCC) patients. However, the effect of combined or sequential use of locoregional therapy in HCC patients receiving multi-kinase inhibitors remains uncertain. Therefore, we aimed to explore whether subsequent locoregional therapy provides better survival in HCC patients under lenvatinib treatment. Methods: From March 2018 to April 2020, a total of 78 unresectable HCC patients receiving lenvatinib were recruited. Imaging response was evaluated on dynamic imaging using the modified RECIST criteria.
Among patients with tumor progression under lenvatinib treatment, whether or not they received subsequent locoregional therapy was documented. Overall survival between the two groups and the predictors of tumor progression were also analyzed. Results: Among the 78 patients receiving lenvatinib, the median age was 67.8 years, and 69.2% were male. Forty-four patients (56.4%) experienced tumor progression, with a median time to progression of 5.1 months (95% confidence interval (CI): 4.7-6.8). In multivariable Cox regression analysis, albumin-bilirubin (ALBI) grade II (adjusted HR: 2.883, P = 0.0104) and treatment duration of less than three months (adjusted HR: 3.801, P = 0.0014) were independent predictive factors for tumor progression, while achieving an objective response under lenvatinib treatment within 12 weeks was an independent protective factor against tumor progression (adjusted HR: 0.144, P = 0.0020). Among the 44 patients with tumor progression, twenty-six (59.1%) received subsequent locoregional therapy after tumor progression. Compared with those with tumor progression who received no locoregional treatment, patients who received subsequent locoregional therapy had significantly better survival (first-year cumulative survival rate 70% vs. 27%, log-rank P = 0.003). Conclusion: ALBI grade, duration of lenvatinib treatment, and achieving an objective imaging response within twelve weeks were independent predictive factors for tumor progression. Furthermore, longer overall survival was observed in patients with tumor progression who received subsequent locoregional therapy and had better preserved liver function. abstract_id: PUBMED:33174627 Impact of time-to-surgery on outcomes of patients undergoing curative-intent liver resection for BCLC-0, A and B hepatocellular carcinoma. Background: The impact of a prolonged time-to-surgery (TTS) among patients with resectable hepatocellular carcinoma (HCC) is not well defined. Methods: Patients who underwent curative-intent hepatectomy for BCLC-0, A and B HCC between 2000 and 2017 were identified using a multi-institutional database. The impact of prolonged TTS on overall survival (OS) and disease-free survival (DFS) was examined. Results: Among 775 patients who underwent resection for HCC, 537 (69.3%) had early surgery (TTS < 90 days) and 238 (30.7%) had delayed surgery (TTS ≥ 90 days). Patient- and tumor-related characteristics were similar between the two groups except for a higher proportion of patients undergoing major liver resection in the early surgery group (31.3% vs. 23.8%, p = .04). The percentage of patients with delayed surgery varied from 8.8% to 59.1% among different centers (p < .001). Patients with TTS < 90 days had 5-year OS (63.7% vs. 64.9%; p = .79) and 5-year DFS (33.5% vs. 42.4%; p = .20) similar to those of patients with TTS ≥ 90 days. On multivariable analysis, delayed surgery was associated with neither worse OS (BCLC-0/A: adjusted hazards ratio [aHR] = 0.90; 95% confidence interval [CI]: 0.65-1.25 and BCLC-B: aHR = 0.72; 95% CI: 0.30-1.74) nor worse DFS (BCLC-0/A: aHR = 0.78; 95% CI: 0.60-1.01 and BCLC-B: aHR = 0.67; 95% CI: 0.36-1.25). Conclusion: Approximately one in three patients diagnosed with resectable HCC had a prolonged TTS. Delayed surgery was not associated with worse outcomes among patients with resectable HCC. abstract_id: PUBMED:28819771 The Impact of Bridging LRT on Survival in Patients Listed for Liver Transplantation.
Purpose: Locoregional therapy (LRT) is recommended for certain groups of patients with hepatocellular carcinoma (HCC) listed for orthotopic liver transplantation (OLT) with expected wait times greater than 6 months. A retrospective single-center study was performed to evaluate the effect of LRT on the outcomes of patients with HCC listed for OLT. Methods: The institutional database was reviewed to identify patients listed for OLT with HCC (March 1998-December 2010). Patient and tumor characteristics were recorded, with a follow-up period of 6.5 years from diagnosis. Results: Out of 359 listed patients, 72 patients were delisted (delisted group) and 287 underwent orthotopic liver transplantation (OLT group). One hundred fifty-five and 45 patients from the OLT and delisted groups underwent locoregional therapy (LRT), respectively. Median wait time and sum of largest tumor diameter were significantly higher for patients receiving LRT. LRT was associated with significantly better survival among delisted patients at the end of follow-up (1249.6 ± 137.4 vs. 742.1 ± 155.4 days, p = 0.028). In the OLT group, survival was similar between the LRT and no-LRT groups, but survival was significantly better in patients with a largest tumor diameter ≥30 mm (1949.4 ± 95.1 vs. 1694.8 ± 135.5 days, p = 0.02). Conclusion: Patients with HCC and a largest tumor greater than 30 mm treated with LRT prior to OLT have improved survival compared to patients not receiving LRT. Additionally, among patients who did not undergo transplantation, survival was significantly longer for those receiving LRT. abstract_id: PUBMED:24717522 Estimation of lead-time bias and its impact on the outcome of surveillance for the early diagnosis of hepatocellular carcinoma. Background & Aims: Lead-time is the time by which diagnosis is anticipated by screening/surveillance with respect to the symptomatic detection of a disease. Any screening program, including surveillance for hepatocellular carcinoma (HCC), is subject to lead-time bias. Data regarding lead-time for HCC are lacking. The aims of the present study were to calculate lead-time and to assess its impact on the benefit obtainable from the surveillance of cirrhotic patients. Methods: One thousand three hundred and eighty Child-Pugh class A/B patients from the ITA.LI.CA database, in whom HCC was detected during semiannual surveillance (n = 850), annual surveillance (n = 234) or when patients presented with symptoms (n = 296), were selected. Lead-time was estimated by means of appropriate formulas and Monte Carlo simulation, including 1000 patients for each arm. Results: The 5-year overall survival after HCC diagnosis was 32.7% in semiannually surveilled patients, 25.2% in annually surveilled patients, and 12.2% in symptomatic patients (p<0.001). In a 10-year follow-up perspective, the median lead-time calculated for all surveilled patients was 6.5 months (7.2 for semiannual and 4.1 for annual surveillance). Lead-time bias accounted for most of the surveillance benefit until the third year of follow-up after HCC diagnosis. However, even after lead-time adjustment, semiannual surveillance maintained a survival benefit over symptomatic diagnosis (number of patients needed to screen = 13), as did annual surveillance (18 patients). Conclusions: Lead-time bias is the main determinant of the short-term benefit provided by surveillance for HCC, but this benefit becomes factual in a long-term perspective, confirming the clinical utility of an anticipated diagnosis of HCC.
Answer: The study by PUBMED:17454859 investigated the survival impact of delayed treatment in patients with hepatocellular carcinoma (HCC) undergoing locoregional therapy, specifically addressing the question of whether there is a lead-time bias. The study compared survival rates between HCC patients with treatment delay (defined as a >2-month interval between diagnosis and treatment) and age- and gender-matched controls without delay. The results showed that patients with treatment delay had higher Model for End-Stage Liver Disease (MELD) scores and that treatment delay was an independent poor prognostic predictor. Furthermore, disease progression was more common in patients with treatment delay, and those with a prolonged delay of more than 3 months were more likely to have tumor progression. The study concluded that delayed HCC treatment is linked with shortened overall survival unrelated to lead-time bias in patients undergoing locoregional therapy, and a prolonged treatment delay of more than 3 months may worsen the long-term outcome. This finding is significant because it suggests that the survival disadvantage observed in patients with delayed treatment is not merely due to lead-time bias, the apparent survival advantage that arises when screening or surveillance anticipates diagnosis relative to the symptomatic detection of a disease. Lead-time bias can artificially inflate survival statistics for patients diagnosed early due to increased surveillance. However, in this study, the negative impact of delayed treatment on survival was independent of such bias, indicating a true detriment to patient outcomes when locoregional therapy is postponed.
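The lead-time adjustments described in PUBMED:24717522 and PUBMED:29981527 rest on simple growth arithmetic: given a tumour volume doubling time (TVDT), the lead time is the time a tumour needs to grow from its screen-detected size to its assumed symptomatic size. A minimal sketch in Python, assuming spherical tumours and exponential growth; the diameters below are illustrative only, while the TVDT is the median reported in PUBMED:29981527:

import math

def lead_time_days(d_screen_mm: float, d_symptomatic_mm: float, tvdt_days: float) -> float:
    # Volume scales with diameter cubed, so the volume ratio between the
    # symptomatic and screen-detected tumour is (d_symptomatic / d_screen) ** 3.
    volume_ratio = (d_symptomatic_mm / d_screen_mm) ** 3
    # Number of volume doublings needed to cover that ratio, times the TVDT.
    return math.log2(volume_ratio) * tvdt_days

# Illustrative diameters: 20 mm at screen detection, 50 mm at symptomatic
# presentation; median TVDT of 141.9 days from PUBMED:29981527.
print(round(lead_time_days(20, 50, 141.9)))  # -> 563 days

The wide interquartile range of TVDT reported in these studies (roughly 73 to 262 days) means the computed lead time, and therefore the size of the bias correction, can vary several-fold between patients.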
Instruction: Is fluorine-18 fluorodeoxyglucose positron emission tomography useful for the thyroid nodules with indeterminate fine needle aspiration biopsy? Abstracts: abstract_id: PUBMED:24228840 Is fluorine-18 fluorodeoxyglucose positron emission tomography useful for the thyroid nodules with indeterminate fine needle aspiration biopsy? A meta-analysis of the literature. Background: Indeterminate fine needle aspiration biopsy (FNAB) results present a clinical dilemma for physicians. The aim of this study was to evaluate the diagnostic accuracy of fluorine-18 fluorodeoxyglucose positron emission tomography (18F-FDG PET) in the detection of these indeterminate lesions. Methods: Seven studies (involving a total of 267 patients) published before November 2012 were reviewed. Systematic methods were used to identify, select, and evaluate the methodological quality of the studies as well as to summarize the overall findings of sensitivity and specificity. Results: A total of 70 patients were confirmed to have malignant lesions, with a cancer prevalence of 26.2% (70/267; ranging from 19.6% to 40.0% in these studies). The pooled sensitivity and specificity of PET or PET/CT for the detection of cancer were 89.0% (95% CI: 79.0%-95.0%) and 55.0% (95% CI: 48.0%-62.0%), respectively. There was no evidence of threshold effects or publication bias. The area [±standard error (±SE)] under the symmetrical sROC curve was 0.7207 ± 0.1041. Although SUVmax was higher in malignant lesions (P < 0.01), there was still considerable overlap. The best cut-off value of SUVmax for differentiation was 2.05, but this gave a high sensitivity of 89.8% with a low specificity of 42.0%. Conclusion: 18F-FDG PET or PET/CT showed a high sensitivity in detecting thyroid cancers in patients with indeterminate FNAB results. Further examination was strongly recommended when an FDG-avid lesion was detected. abstract_id: PUBMED:20140435 Exclusion of malignancy in thyroid nodules with indeterminate fine-needle aspiration cytology after negative 18F-fluorodeoxyglucose positron emission tomography: interim analysis. Background: In all, 20% of fine-needle aspiration (FNA) biopsies of thyroid nodules have an indeterminate diagnosis; of these, 80% are found to be benign after thyroidectomy. Some previous reports indicate that positron emission tomography (PET) with (18)F-fluorodeoxyglucose (FDG) imaging may predict malignancy status. We now report results on the first 51 patients in the largest prospective study of FDG-PET in patients with an indeterminate thyroid nodule FNA. Methods: Eligible patients had a dominant thyroid nodule that was palpable or ≥1 cm in greatest dimension as seen by ultrasonography, and indeterminate histology of the FNA biopsy specimen. Participants underwent preoperative neck FDG-PET alone or FDG-PET with computed tomography (FDG-PET/CT). Images were evaluated qualitatively and semiquantitatively using the maximum standardized uptake value (SUVmax). Final diagnosis was determined by histopathologic analysis after thyroidectomy. Descriptive statistical analysis was performed. Results: A total of 51 patients underwent preoperative FDG-PET or FDG-PET/CT. Studies without focally increased uptake localized to the lesion were considered negative. For all lesions (10 malignant, 41 benign), the sensitivity, specificity, positive-predictive value (PPV), and negative-predictive value (NPV) were 80%, 61%, 33%, and 93%, respectively.
Postoperatively, two malignant and six benign lesions were found to be <1 cm by pathology examination; one lesion was not measured. When these lesions were excluded, the sensitivity, specificity, PPV, and NPV were 100%, 59%, 36%, and 100%, respectively. Conclusions: Based on these preliminary data, FDG-PET may have a role in excluding malignancy in thyroid nodules with an indeterminate FNA biopsy. This finding justifies ongoing accrual to our target population of 125 participants. abstract_id: PUBMED:18314022 Is there a role for fluorodeoxyglucose positron emission tomography/computed tomography in cytologically indeterminate thyroid nodules? Objective: The aim of this study was to determine the accuracy of the fluorine 18 ((18)F)-labeled fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) scan in the evaluation of thyroid nodules in which the cytopathology of the fine-needle aspiration (FNA) biopsy is classified as "indeterminate," i.e., either a follicular or Hürthle cell lesion. Methods: At an academic medical center, we conducted a prospective pilot study of 15 patients with thyroid nodules in whom adequate FNA was diagnosed as indeterminate. All patients underwent a whole-body FDG-PET/CT scan followed by thyroidectomy. Preoperative FDG-PET/CT results and the histopathology of the surgical specimen were compared and statistically analyzed. Results: The FNA demonstrated follicular cells in 11 (73%) patients, Hürthle cells in 3 (20%) patients, and both types of cells in 1 (7%) patient. The histopathology of the surgical specimen revealed thyroid cancer in 7 (47%) patients. The FDG-PET/CT scan was positive in 8 patients; 4 (50%) of these patients were found to have cancer. The FDG-PET/CT scan was negative in 7 patients. Four of these patients had benign lesions and 3 had thyroid carcinoma. Thus, 4 (27%) patients had false-positive FDG-PET/CT scans and 3 (20%) patients had false-negative studies. The sensitivity of FDG-PET/CT to detect a malignant focus was 57% with a specificity of 50%. The positive predictive value was 50% and the negative predictive value was 57%. Conclusions: In this pilot study of patients with cytologically indeterminate thyroid nodules, FDG-PET/CT was not a reliable indicator of benign or malignant disease. Although a larger series may elucidate a role for FDG-PET/CT, the relatively low predictability shown in this study should caution clinicians about using FDG-PET/CT when considering forgoing thyroidectomy for cytologically indeterminate nodules. abstract_id: PUBMED:22257371 Is (18)F-fluorodeoxyglucose-PET/CT useful for the presurgical characterization of thyroid nodules with indeterminate fine needle aspiration cytology? Background: Thyroid nodules found incidentally on (18)F-fluorodeoxyglucose-positron emission tomography (FDG-PET) have been shown to be malignant in 30%-50% of cases. The American Thyroid Association recommends performing fine needle aspiration cytology (FNAC) for thyroid nodules showing FDG uptake. On the other hand, the role of FDG-PET in characterizing thyroid nodules with indeterminate cytology before surgery is not clear. The goal of this study was to evaluate the role of FDG-PET/computed tomography (CT) in predicting malignancy of thyroid nodules with indeterminate FNAC and to correlate FDG uptake with pathological and ultrasonographic (US) features.
Methods: Between November 2006 and October 2009, 55 patients (42 women, mean age: 50 years) planned for surgery for 56 thyroid nodules with indeterminate FNAC were prospectively included and considered for analysis. All patients underwent presurgical FDG-PET/CT (Siemens Biograph, mean FDG injected activity: 165 MBq) and neck US. Pathology of the corresponding surgical specimen was the gold standard for statistical analysis. Results: At pathology, 34 nodules were benign, 10 were malignant (7 papillary and 3 follicular carcinomas), and 12 were tumors of uncertain malignant potential (TUMP). The median size of the thyroid nodules was 21 mm (range: 10-57 mm). The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of FDG-PET in detecting cancer/TUMP were 77%, 62%, 57%, and 81%, respectively. In multivariate analysis, cellular atypia was the only factor predictive of FDG uptake (p<0.001). Hürthle cells and poorly differentiated components were independent predictive factors of a high (≥5) SUVmax (p=0.02 and p=0.02). The sensitivity, specificity, PPV, and NPV of US in detecting cancer/TUMP were 82%, 47%, 50%, and 80%, respectively. In multivariate analysis, hypervascularization was correlated with malignancy/TUMP (p=0.007) and cystic features were correlated with benignity (p=0.03). Conclusion: Adding FDG-PET findings to neck US provided no diagnostic benefit. The sensitivity and specificity of FDG-PET in the presurgical evaluation of indeterminate thyroid nodules are too low to recommend FDG-PET routinely. abstract_id: PUBMED:17684046 Role of 18F-fluorodeoxyglucose positron emission tomography in preoperative assessment of cytologically indeterminate thyroid nodules. Objective: The objective of the study was to determine the diagnostic accuracy of (18)F-fluorodeoxyglucose ((18)F-FDG) positron emission tomography (PET) in the preoperative diagnosis of thyroid nodules with indeterminate fine-needle aspiration biopsy results. Methods: Forty-two consecutive patients with thyroid nodules with indeterminate cytological results participated in this study. Abnormal (18)F-FDG PET uptake was assessed visually and by measuring the maximum standardized uptake value (SUVmax) in thyroid topography. All these results were compared with the final pathological results. Results: The presence of focal uptake correlated with a greater risk of malignancy (P = 0.018). All 11 malignant nodules had focal uptake (sensitivity of 100%). Of the 31 patients with benign nodules, there were 19 with positive uptake (specificity of 38.7%). The pre-PET probability of cancer was 26.2% (11 of 42), and this probability increased to 36.7% after PET for those patients whose exam showed focal uptake (11 of 30). The preoperative use of (18)F-FDG PET would result in a significant reduction (39%, 12 of 31) in the number of thyroidectomies performed in patients with benign lesions. SUVmax could not improve this degree of accuracy. There was no correlation between thyroid nodule size and SUVmax value (P = 0.96). Patients with carcinomas were younger than patients with benign lesions (P = 0.048). There was no other clinical, laboratory, or ultrasonographic variable related to malignancy. Conclusions: (18)F-FDG PET provides high sensitivity to malignant lesions and may be a potentially useful tool in the evaluation of thyroid nodules with indeterminate cytological findings. For these nodules, the number of unnecessary thyroidectomies in a hypothetical algorithm using (18)F-FDG PET would be reduced by 39%.
abstract_id: PUBMED:27110335 Role of 18F-fluorodeoxyglucose Positron Emission Tomography/Computed Tomography in the Evaluation of Cytologically Indeterminate Thyroid Nodules. Background: Thyroid nodules with indeterminate fine-needle aspiration biopsy (FNAB) results remain a diagnostic dilemma, because 70-85% of these nodules have been found to be benign after thyroid surgery. Objectives: The purpose of this study was to evaluate the usefulness of 18F-fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) in the preoperative diagnosis of cytologically indeterminate nodules. Patients And Methods: Forty-six patients were included in this study. These individuals had undergone FDG PET/CT for the preoperative evaluation of thyroid nodules with indeterminate FNAB results. The results of the preoperative PET/CT scans were compared with the postoperative pathological results and statistically analyzed. Results: Of the 46 patients included in our study, the histopathology of the surgical specimens revealed thyroid cancer in 17 individuals (37%, 17/46). The PET/CT scan showed a positive result in 27 patients. Of these, 16 patients (59.3%) were found to have thyroid carcinomas. In addition, the PET/CT scan was considered to be negative in 19 patients, 18 (94.7%) of whom had benign lesions. For the detection of malignant lesions, the sensitivity, specificity, positive predictive value, and negative predictive value were 94%, 62%, 59%, and 95%, respectively. Conclusion: FDG PET/CT showed a high sensitivity and a high negative predictive value for identifying malignancies in thyroid nodules with indeterminate FNAB results. Therefore, FDG PET/CT may be a helpful tool in the clinical management of these nodules. When an FDG-positive lesion is detected, further examination is recommended. abstract_id: PUBMED:21432844 The role of [18F]-2-fluoro-2-deoxy-d-glucose-positron emission tomography in thyroid nodules with indeterminate fine-needle aspiration biopsy: systematic review and meta-analysis of the literature. Indeterminate results at fine-needle aspiration biopsy (FNAB) of thyroid nodules pose a clinical dilemma, because only 20% to 30% of patients suffer from malignancy. Previous studies suggested that the false-negative ratio of [(18)F]-2-fluoro-2-deoxy-D-glucose-positron emission tomography (FDG-PET) is very low; therefore, it may help identify patients who would benefit from (hemi)thyroidectomy. A systematic literature search was performed in 5 databases. After assessment, the identified studies were analyzed for heterogeneity, and the extracted data of test characteristics were pooled using a random-effects model. Threshold effects were examined, and publication bias was assessed. The query resulted in 239 records, of which 6 studies met predefined inclusion criteria. Data from 225 of the 241 described patients could be extracted. There was mild to moderate heterogeneity in study results (inconsistency index [I(2)] = 0.390-0.867). The pooled prevalence of malignancy was 26%. Pooled sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 95% (95% confidence interval [95% CI], 86%-99%), 48% (95% CI, 40%-56%), 39% (95% CI, 31%-47%), 96% (95% CI, 90%-99%), and 60% (95% CI, 53%-67%), respectively. Sensitivity increased to 100% for the 164 lesions that measured >15 mm in greatest dimension. There was no evidence of threshold effects or publication bias.
A negative FDG-PET scan in patients who had thyroid nodules >15 mm with indeterminate FNAB results excluded thyroid cancer in a pooled population of 225 patients. Conversely, a positive FDG-PET result did not identify cancer, because approximately 50% of these patients had benign nodules. The authors concluded that the incorporation of FDG-PET into the initial workup of such patients before surgery deserves further investigation. abstract_id: PUBMED:23053387 The role of positron emission tomography and positron emission tomography/computed tomography in thyroid tumours: an overview. Positron emission tomography (PET) and PET/computed tomography (PET/CT) with different tracers have been increasingly used in patients with thyroid tumours. The aim of this article is to provide an overview, based on literature data, of the usefulness of PET imaging in this setting. The role of Fluorine-18-Fluorodeoxyglucose (FDG) PET and PET/CT in differentiated thyroid carcinoma (DTC) is well established, particularly in patients presenting with elevated serum thyroglobulin levels and a negative radioiodine whole-body scan. Iodine-124 PET and PET/CT may play a role in staging DTC and obtaining lesional dosimetry for a better and more rational planning of treatment with Iodine-131. FDG-PET and PET/CT are useful in the post-thyroidectomy staging of high-risk patients with less differentiated histological subtypes. PET and PET/CT with different tracers seem to be useful methods for localizing the source of elevated calcitonin levels in patients with recurrent medullary thyroid carcinoma. Incorporation of FDG-PET or PET/CT into the initial workup of patients with indeterminate thyroid nodules at fine needle aspiration biopsy deserves further investigation. The FDG-PET report should suggest further evaluation when focal thyroid incidentalomas are described, because these findings are associated with a significant risk of cancer. abstract_id: PUBMED:31475094 Core needle biopsy in the management of thyroid nodules with an indeterminate fine-needle aspiration report. Ultrasonography (US)-guided fine-needle aspiration biopsy (FNAB) cytology is widely used in diagnosing thyroid nodules but is limited by pathologically indeterminate results. Recently, US-guided core-needle biopsy (CNB) was introduced as an effective and safe technique for diagnosing indeterminate thyroid nodules. With CNB, information about architectural histologic structure, such as the nodule capsule, can be obtained and additional immunohistochemical staining can be performed, leading to a more accurate diagnosis. Up to 98% of indeterminate thyroid lesions can be classified as malignant or benign when CNB is used for follow-up analysis. Other evidence has shown the effectiveness of CNB in reducing inconclusive results and improving the diagnostic performance for thyroid nodules initially diagnosed as AUS/FLUS by FNAB. In this review, we investigate how to deal with indeterminate thyroid nodules diagnosed by FNAB and examine the role CNB has in diagnosing these indeterminate thyroid nodules. abstract_id: PUBMED:30197336 Clinical significance of standardized uptake values in thyroid incidentaloma discovered by F-18 fluorodeoxyglucose positron emission tomography/computed tomography. Aim Of Study: When F-18 fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) is used for indications other than thyroid disease, incidentally discovered areas of focal or diffuse increased uptake within the thyroid gland are seen with increasing frequency.
We aimed to identify focal thyroid FDG uptake and to compare the maximum standardized uptake value (SUVmax) results with cytology and histology results. Materials And Methods: We examined PET scan reports for all patients undergoing FDG PET/CT investigation over a 10-year period in a single center. Twelve thousand seven hundred and ninety-six patients underwent FDG PET/CT scanning in one PET/CT unit. Within this group, 526 patients had diffuse, focal, or multifocal FDG uptake. Of these 526 patients, 305 (57.9%) showed diffuse FDG uptake and 221 (42.0%) showed focal uptake in the thyroid gland. Results: In the malignant group, thyroid nodule sizes were between 8 and 39 mm (average 21.1 mm, standard deviation [SD] ±7.3) on ultrasonography (USG) examination. These nodules had SUVmax values between 2.3 and 31.2 (average 8.8, SD ±5.7). In the benign group, thyroid nodule sizes were between 5 and 46 mm (average 18.3 mm, SD ±5.8) on USG examination. There were no significant correlations between the SUVmax of the incidental focal thyroid lesions seen on FDG PET/CT and fine needle aspiration biopsy results. Conclusion: There is a relatively high possibility of a malignant lesion in a thyroid incidentaloma. The FDG uptake of these lesions is not a useful tool for absolute discrimination between malignant and benign lesions. A primary or secondary malignant lesion was diagnosed in 34.1% of the patients found to have incidental focal FDG uptake within the thyroid gland on PET/CT scans, but we suggest that thyroid incidentalomas detected on FDG PET/CT should be further examined with USG and scintigraphy. Answer: Fluorine-18 fluorodeoxyglucose positron emission tomography (18F-FDG PET) has been evaluated for its utility in assessing thyroid nodules with indeterminate fine needle aspiration biopsy (FNAB) results. The studies show that 18F-FDG PET or PET/CT has a high sensitivity in detecting thyroid cancers in patients with indeterminate FNAB results, suggesting that further examination is strongly recommended when an FDG-avid lesion is detected (PUBMED:24228840). However, the specificity of 18F-FDG PET in these studies is relatively low, indicating that a significant number of benign lesions may also show FDG uptake (PUBMED:24228840). Interim analysis from a prospective study indicated that FDG-PET might have a role in excluding malignancy in thyroid nodules with an indeterminate FNA biopsy, showing a high negative predictive value (NPV) when lesions smaller than 1 cm were excluded (PUBMED:20140435). Conversely, a pilot study found that FDG-PET/CT was not a reliable indicator of benign or malignant disease in patients with cytologically indeterminate thyroid nodules, showing relatively low sensitivity and specificity (PUBMED:18314022). Further research suggested that the sensitivity and specificity of FDG-PET in the presurgical evaluation of indeterminate thyroid nodules are too low to recommend its routine use (PUBMED:22257371). Another study supported the high sensitivity of 18F-FDG PET for malignant lesions and proposed that it could potentially reduce the number of unnecessary thyroidectomies (PUBMED:17684046). A study with a larger sample size showed that FDG PET/CT had high sensitivity and NPV for identifying malignancies in thyroid nodules with indeterminate FNAB results, indicating that it may be a helpful tool in clinical management (PUBMED:27110335).
A systematic review and meta-analysis also supported the high sensitivity of FDG-PET, especially for lesions larger than 15 mm, but noted that a positive FDG-PET result did not definitively identify cancer due to a high rate of benign nodules with positive uptake (PUBMED:21432844).
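The pooled figures quoted in this answer are tied together by the standard relations between sensitivity, specificity, prevalence, and predictive values. A minimal sketch in Python that reproduces the PUBMED:21432844 numbers (sensitivity 95%, specificity 48%, malignancy prevalence 26%):

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    # PPV: fraction of positive scans that are truly malignant.
    ppv = sensitivity * prevalence / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    # NPV: fraction of negative scans that are truly benign.
    npv = specificity * (1 - prevalence) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

ppv, npv = predictive_values(0.95, 0.48, 0.26)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")  # -> PPV = 39%, NPV = 96%

This arithmetic makes the clinical pattern in the abstracts explicit: at roughly 26% prevalence, a highly sensitive but poorly specific test yields a reassuring negative result (high NPV), while a positive result remains only weakly informative (low PPV).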
Instruction: Depression, weight gain, and low birth weight adolescent delivery: do somatic symptoms strengthen or weaken the relationship? Abstracts: abstract_id: PUBMED:19064227 Depression, weight gain, and low birth weight adolescent delivery: do somatic symptoms strengthen or weaken the relationship? Objective: To determine if omission of the Center for Epidemiologic Studies Depression Scale (CES-D) items that assess the somatic symptoms of depression improves the psychometric properties of the scale and the utility of the CES-D diagnosis of depression for predicting four adverse obstetrical outcomes that have been tentatively linked to maternal depression. Methods: A cohort of 1684 13-21-year-old participants in an adolescent-oriented maternity program completed the CES-D at enrollment. Chi-square analyses were used to compare the predictive capacity of depression diagnosed by the full CES-D and the 14-item non-somatic subscale of the CES-D. The reliability and construct validity of the two scales were also compared. Results: Removing the somatic component of the CES-D decreased the proportion of adolescents who met screening criteria for depression. However, it did not improve the psychometric properties of the scale. The reliability (Cronbach alpha: 0.87) and construct validity (depressed adolescents were significantly more psychologically stressed and had poorer social support) of the two scales were equivalent. Regardless of the scale used, adolescent mothers who were depressed in the second and third trimesters were at increased risk for inadequate weight gain and both small-for-gestational-age fetuses and preterm delivery (ORs 1.6-1.8). The differences in case definition and predictive capacity were most evident when the CES-D was administered during the first trimester. However, overall effect sizes were nearly identical with the two scales. Conclusion: Removing the somatic component does not improve the psychometric properties of the CES-D or the predictive capacity of the CES-D diagnosis of depression for three sentinel obstetrical outcomes. This information should be reassuring to researchers and clinicians, as most studies of the causes and consequences of maternal depression during and after pregnancy use the full CES-D scale. abstract_id: PUBMED:25649866 Co-occurrence of IBS and symptoms of anxiety or depression, among Norwegian twins, is influenced by both heredity and intrauterine growth. Background: Environmental and genetic factors contribute to variation in irritable bowel syndrome (IBS), anxiety and depression. Comorbidity between these disorders is high. A previous investigation of our population-based twin cohort revealed that low birth weight increased the risk for development of IBS, with environmental influences in utero as the most relevant contributing factor. We hypothesise that both intrauterine and genetic factors influence the co-occurrence of IBS and symptoms of anxiety and depression. Methods: A postal questionnaire sent to 12700 Norwegian twins born between 1967 and 1979 comprised a checklist of 31 illnesses and symptoms, including IBS and symptoms of anxiety and depression. The influence of genetic factors and intrauterine growth on comorbidity between these disorders was analysed in the full sample and compared with analyses based only on monozygotic (MZ) twin pairs discordant for IBS (95 pairs) in the birth weight groups < 2500 g and ≥ 2500 g.
Results: In the co-twin analyses, restricted growth (birth weight < 2500 g) was significantly associated with anxiety and depression (average birth weight difference of 181.0 g (p < 0.0001) and 249.9 g (p < 0.0001), respectively). The analysis of the full sample revealed that IBS was significantly associated with symptoms of anxiety (adjusted OR = 2.5, 95% CI: 1.9, 3.3) and depression (adjusted OR = 2.3, 95% CI: 1.8, 3.0). Analyses of MZ pairs discordant for IBS indicated significant associations between IBS and symptoms of anxiety (OR = 3.7, 95% CI: 1.3, 10.5) and between IBS and symptoms of depression (OR = 4.2, 95% CI: 1.7, 9.9) only in the birth weight group below 2500 g. Conclusion: Our findings suggest that genetic factors partly explain the association between IBS and symptoms of anxiety and depression. In the low range of birth weight (<2500 g), restricted fetal growth seems to be a common contributing factor to the co-occurrence of these disorders. abstract_id: PUBMED:18349706 Weight concerns in male low birth weight adolescents: relation to body mass index, self-esteem, and depression. Objective: To compare weight concerns and self-reported body mass index (BMI) of low birth weight (LBW) adolescent boys to those of a normative sample and examine relationships among BMI, weight concerns, self-esteem, and depression in the LBW cohort. Methods: LBW boys (n = 260; mean age, 16.0 years) belong to the Neonatal Brain Hemorrhage Study birth cohort. Normative boys (n = 305; mean age, 16.5 years) belong to the National Health and Nutrition Examination Survey. Both samples were assessed in 2001-2004 with self-report questionnaires. BMI was calculated from self-reported height and weight. Weight perception and weight dissatisfaction were assessed with the Eating Symptoms Inventory. In LBW boys, self-esteem was measured with the Rosenberg Self-Esteem Scale and depression with the Beck Depression Inventory. Results: Based on self-reported height and weight, LBW boys were more likely to be healthy weight or underweight and less likely to be overweight than normative boys. Despite having healthier self-reported BMIs, LBW boys reported more weight concerns than the normative sample. A total of 46.9% of LBW boys perceived their weight as abnormal, and 76.5% desired weight change. Weight concerns in LBW boys mostly reflected a perception of being underweight (31.2% of the cohort) and a desire to gain weight (47.5% of the cohort), although only 6.5% were clinically underweight. Weight concerns, but not BMI, were related to clinical depression and lower self-esteem. Conclusions: LBW adolescent boys are at high risk of experiencing weight concerns. Weight concerns rather than BMI are associated with emotional problems in LBW boys. abstract_id: PUBMED:26474231 Adolescent Pregnancy Guidelines. Objective: To describe the needs and evidence-based practice specific to care of the pregnant adolescent in Canada, including special populations. Outcomes: Healthy pregnancies for adolescent women in Canada, with culturally sensitive and age-appropriate care to ensure the best possible outcomes for these young women and their infants and young families, and to reduce repeat pregnancy rates. Evidence: Published literature was retrieved through searches of PubMed and The Cochrane Library on May 23, 2012 using appropriate controlled vocabulary (e.g., Pregnancy in Adolescence) and key words (e.g., pregnancy, teen, youth).
Results were restricted to systematic reviews, randomized controlled trials/controlled clinical trials, and observational studies. Results were limited to English or French language materials published in or after 1990. Searches were updated on a regular basis and incorporated in the guideline to July 6, 2013. Grey (unpublished) literature was identified through searching the websites of health technology assessment and health technology-related agencies, national and international medical specialty societies, and clinical practice guideline collections. Values: The quality of evidence in this document was rated using the criteria described in the Report of the Canadian Task Force on Preventive Health Care (Table 1). BENEFITS/HARMS/COSTS: These guidelines are designed to help practitioners caring for adolescent women during pregnancy in Canada and allow them to take the best care of these young women in a manner appropriate for their age, cultural backgrounds, and risk profiles. Recommendations: 1. Health care providers should adapt their prenatal care for adolescents and offer multidisciplinary care that is easily accessible to the adolescent early in the pregnancy, recognizing that adolescents often present to care later than their adult counterparts. A model that provides an opportunity to address all of these needs at one site may be the preferred model of care for pregnant adolescents. (II-1A) 2. Health care providers should be sensitive to the unique developmental needs of adolescents through all stages of pregnancy and during intrapartum and postpartum care. (III-B) 3. Adolescents have high-risk pregnancies and should be managed accordingly within programs that have the capacity to manage their care. The unique physical risks of adolescent pregnancy should be recognized and the care provided must address these. (II-1A) 4. Fathers and partners should be included as much as possible in pregnancy care and prenatal/infant care education. (III-B) 5. A first-trimester ultrasound is recommended not only for the usual reasons for properly dating the pregnancy, but also for assessing the increased risks of preterm birth. (I-A) 6. Counselling about all available pregnancy outcome options (abortion, adoption, and parenting) should be provided to any adolescent with a confirmed intrauterine gestation. (III-A) 7. Testing for sexually transmitted infections (STI) (II-2A) and bacterial vaginosis (III-B) should be performed routinely upon presentation for pregnancy care and again in the third trimester; STI testing should also be performed postpartum and when needed symptomatically. a. Because pregnant adolescents are inherently at increased risk for preterm labour, preterm birth, and preterm pre-labour rupture of membranes, screening and management of bacterial vaginosis is recommended. (III-B) b. After treatment for a positive test, a test of cure is needed 3 to 4 weeks after completion of treatment. Refer partner for screening and treatment. Take the opportunity to discuss condom use. (III-A) 8. Routine and repeated screening for alcohol use, substance abuse, and violence in pregnancy is recommended because of their increased rates in this population. (II-2A) 9. Routine and repeated screening for and treatment of mood disorders in pregnancy is recommended because of their increased rates in this population. The Edinburgh Postnatal Depression Scale administered in each trimester and postpartum, and more frequently if deemed necessary, is one option for such screening. (II-2A) 10.
Pregnant adolescents should have a nutritional assessment, vitamins and food supplementation if needed, and access to a strategy to reduce anemia and low birth weight and to optimize weight gain in pregnancy. (II-2A) 11. Conflicting evidence supports and refutes differences in gestational hypertension in the adolescent population; therefore, the care usual for adult populations is supported for pregnant adolescents at this time. (II-2A) 12. Practitioners should consult gestational diabetes mellitus (GDM) guidelines. In theory, testing all patients is appropriate, although rates of GDM are generally lower in adolescent populations. Practitioners should be aware, however, that certain ethnic groups including Aboriginal populations are at high risk of GDM. (II-2A) 13. An ultrasound anatomical assessment at 16 to 20 weeks is recommended because of increased rates of congenital anomalies in this population. (II-2A) 14. As in other populations at risk of intrauterine growth restriction (IUGR) and low birth weight, an ultrasound to assess fetal well-being and estimated fetal weight at 32 to 34 weeks gestational age is suggested to screen for IUGR. (III-A) 15. Visits in the second or third trimester should be more frequent to address the increased risk of preterm labour and preterm birth and to assess fetal well-being. All caregivers should be aware of the signs and symptoms of preterm labour and should educate their patients to recognize them. (III-A) 16. It should be recognized that adolescents have improved vaginal delivery rates and a concomitantly lower Caesarean section rate than their adult counterparts. (II-2A) As with antenatal care, peripartum care in hospital should be multidisciplinary, involving social care, support for breastfeeding and lactation, and the involvement of children's aid services when warranted. (III-B) 17. Postpartum care should include a focus on contraceptive methods, especially long-acting reversible contraception methods, as a means to decrease the high rates of repeat pregnancy in this population; discussion of contraception should begin before delivery. (III-A) 18. Breastfeeding should be recommended and sufficient support given to this population at high risk for discontinuation. (II-2A) 19. Postpartum care programs should be available to support adolescent parents and their children, to improve the mothers' knowledge of parenting, to increase breastfeeding rates, to screen for and manage postpartum depression, to increase birth intervals, and to decrease repeated unintended pregnancy rates. (III-B) 20. Adolescent women in rural, remote, northern, and Aboriginal communities should be supported to give birth as close to home as possible. (II-2A) 21. Adolescent pregnant women who need to be evacuated from a remote community should be able to have a family member or other person accompany them to provide support and encouragement. (II-2A) 22. Culturally safe prenatal care including emotional, educational, and clinical support to assist adolescent parents in leading healthier lives should be available, especially in northern and Aboriginal communities. (II-3A) 23. Cultural beliefs around miscarriage and pregnancy issues, and special considerations in the handling of fetal remains, placental tissue, and the umbilical cord, must be respected. (III). abstract_id: PUBMED:2138601 Repeat adolescent pregnancy and low birth weight: methods issues. 
Longitudinal studies of the relationship between birth order and birth weight suggest that mean birth weight increases with parity among women of all ages. In contrast, national birth statistics and other cross-sectional studies of these relationships suggest a decrease in mean birth weight and an increased incidence of low-weight births with parity among adolescent mothers. We examine the potential reasons for these strikingly different conclusions. Our aim is to determine whether multiparity itself is a low-birth-weight risk factor among adolescent mothers or whether it is a marker for other adolescent maternal conditions that increase the risk of a low birth weight, independent of parity. abstract_id: PUBMED:22428513 Maternal weight gain and pregnancy outcome in adolescent girls in Ibadan, Nigeria. Background: Maternal weight gain is an important predictor of pregnancy outcome. Pregnancy in adolescence is a major health challenge because it is associated with high risk. Few studies have been carried out on the influence of weight gain in adolescent pregnancy outcome in Nigeria. Objective: This study determines the influence of maternal weight gain on the pregnancy outcome of adolescent mothers in Ibadan, Oyo state, Nigeria. Materials And Methods: A retrospective review of case notes of 785 adolescents < or = 19 years and 1300 adult mothers who delivered at Adeoyo and Oluyoro maternity hospitals from January 2003 to December 2006 was carried out. Anthropometric information such as weight at second and third trimester, and infant birth weight and length were collected using a record review proforma. The data were analyzed using chi-square, ANOVA, Pearson's and Spearman's correlation. Results: Adolescent mothers had a significantly lower weight gain (6.1 +/- 2.9 kg) compared to the control whose mean weight gain was 6.7 +/- 3.6 kg (p<0.05). There was also a significant difference in the proportion of the adolescent mothers (77.3%) and the adult mothers (57.6%) with packed cell volume below normal levels (p<0.05). The mean infant birth weight and length among adolescents (2.8 +/- 0.5 kg and 46.6 +/- 4.0 cm) were significantly lower than 3.1 +/- 0.5 kg and 47.9 +/- 3.5 cm of adult mothers respectively (p<0.05). The prevalence of low birth weight was significantly higher among the adolescent mothers (12.2%) than in the adult mothers (7.9%) (p<0.05). A significant positive relationship was found between infant birth weight and maternal weight gain in pregnancy (rs = 0.18, p<0.05). Caesarean section was significantly higher among adolescent mothers (21.0%) than adult mothers (10.7%) (p<0.05). Complications of pregnancy were more common among adolescent mothers (35.8%) than adult mothers (15.5%). Conclusion: The adolescent mothers had significantly lower weight gain in pregnancy and higher rates of low birth weight compared to the adult mothers (p<0.05). Strategies to reduce incidence of adolescent pregnancies and improve access to nutrition information that will promote weight gain among pregnant adolescents should be instituted. abstract_id: PUBMED:25664081 Complications and outcomes of repeat cesarean section in adolescent women. Aim: The evaluation of the effect of repeat cesarean sections in adolescent pregnancies on the morbidity, obstetric and perinatal results.
Materials And Methods: We reviewed the patient files and hospital records of patients who underwent at least one cesarean section among adolescent age group pregnant women who gave birth at our clinic between January 2010 and May 2013. The patients were divided into two groups: those who underwent a second cesarean section (116 patients) and those who underwent a third cesarean section (36 patients). The demographic data, maternal data and obstetric and perinatal results of the patients were evaluated. Results: A significant difference was present between the patients in the evaluation of the total number of examinations during pregnancy (P = 0.001), total maternal weight gain during pregnancy (P = 0.006), and the first examination gestational age (P = 0.006), and all values were less favorable in the third cesarean group. The gestational week at birth (P < 0.001), birth weight (P < 0.001), and APGAR score (P < 0.001) in the group with the third cesarean section were statistically significantly lower than in the second cesarean section group. A third cesarean section was found to significantly increase the risk of placenta accreta in adolescent pregnancies (P = 0.042). Conclusion: The increasing number of cesarean sections in the adolescent group is seen to be a significant risk factor for low gestational week of birth, low birth weight and related morbidities. The most important reason for the increased morbidity with increasing cesarean sections in the adolescent age has been defined as placenta accreta. abstract_id: PUBMED:19164843 Depression during pregnancy: rates, risks and consequences--Motherisk Update 2008. Affective illness is common in women, and the puerperium is a time of particular vulnerability. Gender differences in the expression of affective disorders have been attributed to the impact of hormonal influence, socialization, and genetics. Dramatic fluctuations in gonadal hormones that occur following childbirth influence the increased incidence of mood disorders during this time. Numerous tools including the Edinburgh Postpartum Depression Scale can be used to screen for depression during pregnancy and postpartum. While screening tools may assist with appropriately identifying women who should be further assessed, their use alone does not significantly increase treatment seeking in women, even when their providers are notified about risk. Many studies demonstrate that only a small number (18%) of women who meet criteria for major depressive disorder seek treatment during pregnancy and postpartum. Additionally, common symptoms of depression (sleep, energy and appetite change) may be misinterpreted as normative experiences of pregnancy. Treatment engagement is important as untreated depression during pregnancy may have unfavorable outcomes for both women and children. Complications of pregnancy associated with depression include: inadequate weight gain, underutilization of prenatal care, increased substance use, and premature birth. Human studies demonstrate that perceived life-event stress, as well as depression and anxiety predicted lower birth weight, decreased Apgar scores, and smaller head circumference, and small for gestational age babies. Postpartum depression (PPD) is a common clinical disorder occurring in 15% of deliveries, making it one of the most frequent conditions to complicate pregnancy. Risk factors include past personal or family history of depression, single marital status, poor health functioning, lower SES, and alcohol use.
Women who have a prior history of postpartum depression, particularly with features of bipolarity or psychosis, may be at particularly high risk. abstract_id: PUBMED:6637866 Maternal nutritional status and adolescent pregnancy outcome. To investigate the determinants of low birth weight of infants born to adolescent mothers, we studied the obstetric population attended at the Maternity Hospital of Lima, Peru. From this population we selected for study a sample of 1256 adolescent mothers ranging in age from 12 to 25 yr. The study included anthropometric and biochemical measurements used to evaluate nutritional status and physiological maturity of the mother and newborn. Findings from the present research indicate that the low birth weight of infants born to adolescent mothers is not due to premature delivery (short gestation) or low gynecological maturity. Furthermore, young adolescent mothers had smaller and thinner newborns than those born to older women who were adjusted for nutritional status during pregnancy and at delivery. That is, despite the similar nutritional status among the young adolescent mothers, the availability of nutrients for the accumulation of calories in the fetus (measured by skinfold thickness) was less than that of older women. Furthermore, the pregnancy weight gain associated with an optimal or average newborn weight is greater for young teenagers than for older women. These findings support the hypothesis that among rapidly growing teenagers the nutritional requirements of pregnancy may be greater than those of older women, and that this increased requirement competes with the growth needs of the fetus. abstract_id: PUBMED:23211927 Adiposity in adolescent offspring born prematurely to mothers with preeclampsia. Objective: To evaluate the relationship between maternal preeclampsia resulting in premature delivery and adiposity in the offspring during adolescence. Study Design: The 172 study participants were 14 years old and had very low birth weight. We compared height, weight, body mass index (BMI), percent fat, waist circumference, and triceps and subscapular skin fold thicknesses between those born prematurely secondary to preeclampsia (n = 51; 22 male) and those born prematurely after normotensive pregnancies (n = 121; 55 male). Multiple linear regression analysis was used to adjust for potential confounders (maternal BMI, antenatal steroid exposure, and race) and to evaluate potential explanatory variables (fetal, infancy, and childhood weight gain, and caloric intake, level of fitness, and physical activity at 14 years). Results: When adjusted for potential prenatal confounders (antenatal steroid exposure and race), adolescent male offspring of preeclamptic pregnancies had higher BMI (4.0 kg/m² [1.5, 6.6]) (mean difference [95% CI]), waist circumference (11.8 cm [3.8, 19.7]), triceps (4.6 mm [0.6, 8.6]) and subscapular skinfold thicknesses (6.2 mm [1.5, 10.9]), and percent body fat (4.1% [-0.1, 8.3]). Adjusting for infancy and childhood weight gain attenuated these group differences. There were no group differences among females. Conclusion: Male adolescent offspring born prematurely of women with preeclampsia have higher measures of adiposity than those born prematurely of normotensive pregnancies. Answer: The relationship between depression, weight gain, and low birth weight in adolescent delivery is complex.
The study by PUBMED:19064227 found that removing the somatic component from the Center for Epidemiologic Studies Depression Scale (CES-D) did not improve the psychometric properties of the scale or the predictive capacity of the CES-D diagnosis of depression for adverse obstetrical outcomes. Depressed adolescent mothers were at increased risk for inadequate weight gain and adverse outcomes such as small for gestational age fetuses and preterm delivery, regardless of whether the full CES-D or the non-somatic subscale was used. This suggests that somatic symptoms do not significantly strengthen or weaken the relationship between depression and these obstetrical outcomes. Additionally, PUBMED:22428513 reported that adolescent mothers had significantly lower weight gain during pregnancy and higher rates of low birth weight compared to adult mothers. This indicates that weight gain is an important factor in pregnancy outcomes for adolescents, but the study does not directly address the role of somatic symptoms of depression in this relationship. Furthermore, PUBMED:18349706 found that weight concerns in low birth weight (LBW) adolescent boys were associated with clinical depression and lower self-esteem, but not with BMI. This suggests that the perception of weight and related concerns may have a psychological impact that is separate from actual weight or weight gain. In summary, while depression and weight gain are both associated with low birth weight and other adverse outcomes in adolescent deliveries, the inclusion or exclusion of somatic symptoms in the assessment of depression does not appear to significantly alter this relationship. The studies suggest that both psychological and physical aspects of health during pregnancy are important and interrelated, but they do not provide a clear answer on how somatic symptoms specifically influence the relationship between depression, weight gain, and low birth weight in adolescent deliveries.
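The abstracts and answer above quote odds ratios with 95% confidence intervals (e.g., ORs of 1.6-1.8 in PUBMED:19064227 and adjusted ORs of 2.5 and 2.3 in PUBMED:25649866). As a minimal sketch of how such an estimate is obtained from a 2x2 table, the Python snippet below computes an unadjusted OR with a Wald 95% CI; the cell counts are invented for illustration and are not taken from any of the studies.

import math

# Hypothetical 2x2 table (illustrative counts, not study data):
#              outcome+  outcome-
# exposed        a=40      b=60
# unexposed      c=20      d=80
a, b, c, d = 40, 60, 20, 80

odds_ratio = (a * d) / (b * c)                 # cross-product ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR), Wald method
low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {low:.2f}-{high:.2f}")

Published adjusted ORs, like those cited above, come from regression models that control for confounders; this sketch shows only the unadjusted calculation.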
Instruction: The impact of the Kidskin sun protection intervention on summer suntan and reported sun exposure: was it sustained? Abstracts: abstract_id: PUBMED:16330090 The impact of the Kidskin sun protection intervention on summer suntan and reported sun exposure: was it sustained? Background: Recognition that early sun exposure is an important risk factor for cutaneous melanoma in white populations has led to efforts to reduce children's sun exposure. 'Kidskin' was a non-randomized, school-based sun protection intervention trial in Perth, Western Australia (1995-1999). Its aim was to determine the extent to which such a program could reduce children's sun exposure. Methods: Kidskin involved 1614 children assigned to one of three groups: a Control, a 'Moderate' and a 'High' intervention group of 14, 11 and 8 schools respectively. The unit of assignment was the school. Control schools received the standard health education curriculum, while intervention schools received a multi-component intervention including a specially designed curriculum. The High intervention group received additional components. Outcomes included parent reported sun-related behaviors and objectively measured suntan at the end of summer vacation. These outcomes were observed every 2 years. Statistical analyses allowed for correlations between students within schools. Results: Kidskin initially had favorable effects on reported sun exposure and measured suntan. However, at the end of the 4-year program, and again 2 years later, little evidence of a favorable effect remained. Conclusions: The benefits of childhood sun protection interventions may not last beyond the life of the program. abstract_id: PUBMED:29391375 Investigating the long-term impact of a childhood sun-exposure intervention, with a focus on eye health: protocol for the Kidskin-Young Adult Myopia Study. Introduction: Excessive and insufficient sun exposure during childhood have been linked to serious diseases in later life; for example, insufficient sun exposure during childhood may increase the risk of developing myopia. The Kidskin-Young Adult Myopia Study (K-YAMS) is a follow-up of participants in the Kidskin Study, a non-randomised controlled trial that evaluated the effect of a 4-year educational intervention on sun-protection behaviours among primary school children in the late 1990s. Children who received the Kidskin intervention had lower levels of sun exposure compared with peers in the control group after 2 and 4 years of the intervention, but this was not maintained 2 years after the intervention had ceased. Thus, a follow-up of Kidskin Study participants provides a novel opportunity to investigate the associations between a childhood sun-exposure intervention and potentially related conditions in adulthood. Methods And Analysis: The K-YAMS contacts Kidskin Study participants and invites them to participate using a variety of methods, such as prior contact details, the Australian Electoral Roll and social media. Self-reported and objective measures of sun-exposure and sun-protection behaviours are collected as well as a number of eye measurements including cycloplegic autorefraction and ocular biometry. Data will be analysed to investigate a possible association between myopic refractive error and Kidskin intervention group or measured sun exposure. Ethics And Dissemination: The K-YAMS is approved by the Human Research Ethics Committee of the University of Western Australia (RA/4/1/6807).
Findings will be disseminated via scientific journals and conferences. Trial Registration Number: ACTRN12616000812392; Pre-results. abstract_id: PUBMED:30370209 Sun Safe Partners: A pilot and feasibility trial of a couple-focused intervention to improve sun protection practices. This single-arm pilot intervention study evaluated the feasibility and preliminary impact of Sun Safe Partners, a couple-focused intervention targeting improved sun protection behavior. Data were collected from New Jersey between August 2015 and March 2016. Participants were 61 couples reporting low levels of sun protection recruited from an online panel. After providing online consent, couples received mailed pamphlets and participated in a call where they discussed current levels of sun protection, made an Action Plan to improve sun protection, and discussed ways of assisting one another in improving sun protection. A call summary was mailed afterwards. Feasibility was assessed by study enrollment, call participation, follow-up survey completion, and intervention evaluation. Participants completed a baseline survey, and one-month and six-month post-intervention surveys assessing sun protection as well as individual and relationship-centered sun protection attitudes and practices. Results indicated that acceptance into the trial was 22.1%. Call participation was high (84%) and the intervention was well-evaluated. Among the 51 couples who completed the call and a follow-up, the intervention improved sun protection behaviors. Sun protection benefits, photo-aging risk, and relationship-centered attitudes and practices increased. A couple-focused intervention shows promise for improving sun protection. Future studies using a randomized clinical trial as well as strategies to improve study participation are recommended. abstract_id: PUBMED:33061512 Sun Awareness and Sun Protection Practices. Background: The incidence of skin cancer has increased over the past few years, owing to excessive exposure to ultraviolet (UV) radiation. Over the past decade, protection against solar radiation has been highly encouraged in numerous public health education campaigns. The adopted sun protection practices include avoiding exposure to the sun, using protective clothing, and applying sunscreen. The aim of this paper was to analyze the awareness and attitudes towards sun protection practices in some regions of the Arabian Peninsula. Objective: The main objective of this study is to estimate the level of sun protection practices and determine the factors that have a significant impact on the level of adoption of sun protection practices by residents in the Arabian Peninsula. Methods: A 35-question self-reporting online questionnaire to evaluate the sun protection measures adopted by the residents was distributed in the Kingdom of Saudi Arabia and Kingdom of Bahrain. The survey was conducted from January 2018 to May 2019, and a total of 830 responses were collected for analysis. The data were analyzed using the statistical software SPSS. The chi-squared test and Spearman's correlation were performed to determine the association of the sun protection level with factors concerning the socio-demographic characteristics, sun sensitivity, health-conscious lifestyle, adopted sun protection measures, and general awareness about the UV index, skin cancer, and vitamin D. Logistic regression classification was applied in developing the empirical model to predict the sun protection level of the residents.
Results: Based on the analysis, only 34% of the total 830 respondents use sunscreen and most of the sunscreen users are female. The three most popular sun protection practices are: 1) avoiding exposure to the sun during peak hours, 2) using protective clothing, and 3) using sunglasses. The results indicate that 22% of respondents use a low level of sun protection, 53% use a medium level of protection, while 25% of respondents use a high level of sun protection. Many of the respondents understand that exposure to the sun can cause skin cancer and skin aging, while 54% of the respondents still believe that a suntan can be healthy if sunburn is avoided. Thirty percent of the participants of this study are aware of the UV index, and only 20% of all respondents know the correct meaning of the UV index. The overall predictive accuracy of the constructed logistic regression model is 67%. Based on the results of the logistic regression model, the five most significant factors of the sun protection level are: 1) perception that the sun can cause skin aging, or not, 2) age, 3) awareness of the dangerous effects of sun exposure, 4) income with respect to expenses, and 5) taking the UV index into consideration during exposure to the sun. Conclusion: Most of the respondents are aware of sun protection. However, the sun protection level of the residents is still inadequate. An efficient strategy of increasing the awareness of the effective use of sun protection systems must be established based on the significant factors to prevent skin cancer. abstract_id: PUBMED:30997383 Knowledge, attitude, and practices toward sun exposure and use of sun protection among non-medical, female, university students in Saudi Arabia: A cross-sectional study. Excessive unprotected sun exposure is a significant risk factor for skin damage and skin cancers. In recent decades, the incidence of skin cancer has increased dramatically worldwide, reaching epidemic proportions. Skin cancer is the most common type of cancer worldwide, and the ninth most common malignancy in Saudi Arabia. Sun protection is a key primary preventive strategy against skin cancer and skin damage induced by sun exposure. Herein, a cross-sectional study was conducted to evaluate the knowledge, attitude, and practices toward sun exposure, and the use of sun protection among non-medical female students on the Sulaymaniyah campus of the King Abdul-Aziz University. Also, we identified the reasons that prevented the students from using sun protection measures. A specially designed, self-administered questionnaire was administered to a sample of 501 students. We found that most students were aware of the risks of unprotected sun exposure. Nevertheless, a view of sunscreen as harmful was reported (34.1%). Seeking shade and wearing protective clothing were the most used sun protection methods (58.1% and 43.1%, respectively). Sunscreen users made up only about a quarter of our sample (23.6%). However, the majority of students (64.9%) did not know about the sun protection factor of sunscreen products. Discomfort felt on the skin was the most commonly reported reason for avoiding the use of sunscreen (40.7%). When comparing our study with Western studies, we found a high level of awareness among our sample of students. Significant differences in the attitude toward the application of suntan and sun protection products might be due to differences in cultural background.
Our results highlighted the need for future health education programs for our society that focus on the significant importance of sunscreen, as well as the correct methods of application. abstract_id: PUBMED:11109684 Improved sun protection behaviour in children after two years of the Kidskin intervention. Objective: To evaluate a school-based intervention in terms of reducing children's sun exposure and improving their use of sun protection measures. Methods: 'Kidskin' is a five-year, school-based intervention study in Perth, Western Australia, of a cohort of children who were five or six years old in 1995. The study involves three groups: control, 'moderate' and 'high' intervention. Children in the control schools received the standard health curriculum; those in the intervention schools received a multicomponent intervention, including a specially designed curriculum. Children in the high intervention group also received program materials over the summer holidays and were offered sun-protective swimwear at a low cost. After two years, parents completed a questionnaire about their child's sun-related behavior. Results: Children in the intervention groups--especially the 'high' group--were reported to have had less sun exposure. This involved covering the back more often, spending more time in the shade when outdoors and wearing a style of swimsuit that covered the trunk. There was also evidence that children in the intervention groups spent less time outdoors in the middle of the day. There was little difference between groups in the wearing of hats or sunscreen. Conclusions: Our school-based intervention improved children's sun protection, but had little effect on specific behaviours that have already been vigorously promoted. Implications: School-based prevention campaigns would benefit from focusing on sun protection using clothing and shade, and reducing sun exposure in the middle of the day. There may be little potential to improve hat and sunscreen use. abstract_id: PUBMED:36622262 Sun protection behavior: health impact, prevalence, correlates and interventions. Sun protection behavior can reduce skin cancer risk. This paper provides an overview of skin cancer risk and the complex behavioral options for sun protection, along with a narrative review of research on determinants of, and interventions to promote, sun protection. Gaps and opportunities for future research are also outlined. Evidence supports the effectiveness of sunscreen use, ultraviolet (UV) protection clothing, and shade seeking. However, these behaviors are complex and are often performed in ways that are inadequate for sun protection. Most research examining and targeting determinants of sun protection behavior has focused on sunscreen use, to the exclusion of other strategies, and has largely ignored the complexity of even sunscreen use. Most research and interventions are theory-driven, drawing on social cognitive constructs, but also considering self and social identity and emotion. Multilevel perspectives considering context, environment, policies, and other structural contexts have also been applied to sun protection behavior, but there is a need to combine psychological constructs with factors at other levels to optimize predictive models and intervention effectiveness. Effective sun protection involves complex behaviors and perceived and actual tradeoffs that should be considered, in combination with multilevel determinants, in research predicting and promoting sun safety.
abstract_id: PUBMED:11545453 Reduced sun exposure and tanning in children after 2 years of a school-based intervention (Australia). Objective: This paper presents the results of the evaluation of measured suntan and parent-reported sun exposure in participating children after 2 years of the Kidskin study, a 5-year school-based sun protection intervention undertaken in Perth, Western Australia (1995-1999). Methods: The study involves three groups: a control, a "moderate", and a "high" intervention group. Participants were 5 or 6 years of age at the beginning of the study. Control schools received the standard Health Education curriculum, while intervention schools received a multicomponent intervention including a specially designed curriculum. Children in the high intervention group also received program materials over the summer vacation and were offered sun-protective swimwear at low cost. At the end of the second summer, suntan was measured and parents completed a questionnaire about their child's sun-related behavior. Results: Children in the intervention groups--especially the high group--were less tanned at the end of the summer; this effect was greater for the back than for the forearms. These children were also reported to have received less sun exposure and made greater use of sun protection measures. Conclusion: Intensive school-based interventions can reduce tanning and reported sun exposure in children. abstract_id: PUBMED:27085551 Sun Exposure Habits and Sun Protection Practices of Skaters. Beach activities and outdoor sports are risk factors for developing skin cancer. Skateboarding is among the most popular sports among adolescents. The aim was to analyse the sun exposure habits and sun protection practices of skaters, in comparison with corresponding sun exposure and protection practices on the beach. This cross-sectional study is based on health surveys carried out into sun exposure and protection practices among young skateboarders. The study population was composed of 102 skaters, of whom 84 (82.4 %) were male. Fifty-eight (56.8 %) of the participants reported having suffered at least one sunburn event the previous summer. Eighty-seven (91.6 %) said they practiced outdoor sports more than 90 days per year, while only six (6.1 %) spent this long at the beach. Forty-nine (52.1 %) practiced outdoor sports for 5 or more hours a day compared to 42 (43.3 %) who spent comparable periods at the beach. A long-sleeved shirt was worn more frequently during sports than at the beach. Keen skaters are a risk group for the development of skin cancer due to excessive sun exposure, high rates of sunburn and scant use of sun protection measures. Specific educational and behavioural interventions should be aimed at this risk group. abstract_id: PUBMED:35570895 Sun Protection Behavior in Danish Outdoor Workers Following a Multicomponent Intervention. Background: Outdoor workers can be exposed to relatively high levels of ultraviolet radiation and are at risk of developing occupational skin cancer. Implementing the use of sun protection in outdoor workers at work is therefore important. The objective of this follow-up study was to evaluate the effect of a multicomponent intervention to improve the use of sun protection in Danish outdoor workers. Method: A total of 237 Danish outdoor workers were asked to complete surveys in 2016/17 and in 2020. 
Multicomponent interventions, between surveys, included information on skin cancer risk and use of sun protection, personal dosimetry and skin examination for signs of photodamage and skin cancer. Survey items on awareness of occupational skin cancer risk and perceived importance of sun protection as well as availability and use of sun protection at work were compared and analyzed in relation to the multicomponent intervention. Results: Overall, the use of sun protection at work increased significantly (composite score [95% CI] 4.0 [3.7, 4.3] in 2016/17 and 4.6 [4.3, 4.9] in 2020, p < 0.001). Sunscreen was by far the biggest contributor, and the only type of sun protection used at work, which changed significantly (often/always use 37% in 2016/17 and 52% in 2020, p < 0.001). The biggest influence on the increased use of sun protection at work seemed to be a significant increase in the awareness of occupational skin cancer risk (moderate/high 43% in 2016/17 and 63% in 2020, p < 0.001) and perceived importance of sun protection at work (moderate/high 69% in 2016/17 and 83% in 2020, p < 0.001). Conclusion: The results of this study indicate that awareness of occupational skin cancer risk as well as the perceived importance and use of sun protection at work in Danish outdoor workers may be improved by means of multicomponent intervention. Answer: The Kidskin sun protection intervention initially had favorable effects on reported sun exposure and measured suntan, but these benefits were not sustained over time. At the end of the 4-year program, and again 2 years later, there was little evidence of a favorable effect remaining (PUBMED:16330090). This suggests that the impact of the Kidskin intervention on reducing summer suntan and sun exposure in children was not maintained in the long term after the conclusion of the program.
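One abstract above (PUBMED:33061512) describes a logistic regression model that predicts residents' sun protection level from five survey factors with 67% overall accuracy. The sketch below shows, in outline, how such a model could be fitted and scored; the predictors and outcome are simulated placeholders, not the study's actual survey variables, and the train/test split is an assumption since the abstract does not state how accuracy was estimated.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 830                              # matches the survey's sample size only
X = rng.integers(0, 2, size=(n, 5))  # five binary predictors (placeholders)
# Simulated binary outcome (adequate vs inadequate protection), loosely
# driven by the predictors:
y = (X.sum(axis=1) + rng.normal(0, 1.5, n) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print(f"predictive accuracy = {accuracy_score(y_te, model.predict(X_te)):.2f}")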
Instruction: Does choice of angular velocity affect pain level during isokinetic strength testing of knee osteoarthritis patients? Abstracts: abstract_id: PUBMED:22713541 Does choice of angular velocity affect pain level during isokinetic strength testing of knee osteoarthritis patients? Background: Thigh musculature strength assessment in individuals with knee osteoarthritis is routinely performed in rehabilitative settings. A factor that may influence results is pain experienced during testing. Aim: To assess whether pain experienced during isokinetic testing in individuals with knee osteoarthritis is dependent on the angular velocity prescribed. Design: Experimental, repeated measures. Setting: University laboratory. Population: Thirty-five individuals (19 women, 16 men) with tibiofemoral osteoarthritis. Methods: Participants performed three randomized sets of five maximal concentric extension-flexion repetitions at 60°/s, 90°/s and 120°/s. Pain intensity was measured immediately after the completion of each set. The strength outcome for each set was the average peak moment. Results: Across gender, pain level was not significantly affected by testing velocity (P=0.18, ηp²=0.05). There was a trend of women reporting more pain than men across all testing velocities; however, this comparison did not reach statistical significance (P=0.18, ηp²=0.05). There was a significant main effect of testing velocity on strength, with the highest level attained at 60°/s. However, no difference in strength was noted when testing was performed at 90°/s or 120°/s. Conclusion: A large variation in pain scores within and across conditions and gender was noted, suggesting that at the current stage: 1) isokinetic angular velocity prescription be performed on an individual patient basis; and 2) improvements in the manner pain is recorded are needed in order to reduce the variations in pain scores. Clinical Rehabilitation Impact: Individual prescription of angular velocity may be necessary for optimal strength output and reduction of pain during effort exertion in this patient population. abstract_id: PUBMED:31453533 Effects of isokinetic muscle strengthening on balance, proprioception, and physical function in bilateral knee osteoarthritis patients with moderate fall risk. Objectives: This study aims to assess the effects of isokinetic quadriceps and hamstring strengthening exercises on balance, proprioception, and physical function in patients with moderate-to-severe knee osteoarthritis and moderate fall risk. Patients And Methods: Between November 2011 and December 2012, a total of 39 participants (30 females, 9 males; mean age 61.7±8.6 years; range, 18 to 79 years) with Grade 2 or 3 knee osteoarthritis according to the Kellgren-Lawrence radiographic grading system and moderate risk of fall with active knee pain were included in this study. All participants received isokinetic quadriceps and hamstring strengthening exercises for six weeks. Pre-treatment quadriceps and hamstring muscle strength (peak torque and total work value) and quadriceps to hamstring muscle strength ratio at angular velocities of 60°/sec and 180°/sec, range of motion (ROM), average proprioceptive errors at 15-45° and 30-60°, the Berg Balance Scale (BBS) scores, the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) subscale scores, the Visual Analog Scale (VAS) scores, and physical function test results were compared with the post-treatment results.
Correlations between the pre- to post-treatment changes in BBS scores and in average proprioceptive errors at 15-45° and 30-60° and the changes in the other clinical measurements were calculated. Results: Post-treatment quadriceps and hamstring muscle strength at angular velocities of 60°/sec and 180°/sec and quadriceps to hamstring muscle strength ratios at angular velocity of 60°/sec, ROM, average proprioceptive errors at 15-45° and 30-60°, BBS scores, WOMAC subscale scores, VAS scores, and physical function tests significantly improved compared to the pre-treatment results (p<0.001). Statistically significant correlations were found between the pre- and post-treatment BBS score changes and pre- and post-treatment VAS (p=0.015), WOMAC-Pain (p=0.017), WOMAC-Physical Function (p=0.005) scores and Timed Up and Go Test (p=0.036) scores. Conclusion: Inclusion of isokinetic quadriceps and hamstring strengthening exercises into the rehabilitation programs for the patients with knee osteoarthritis may improve the quality of life and contribute to the decreased risk of fall. abstract_id: PUBMED:34639740 Kinesio Taping Relieves Pain and Improves Isokinetic Not Isometric Muscle Strength in Patients with Knee Osteoarthritis-A Systematic Review and Meta-Analysis. This study investigated the effects of kinesio taping (KT) or KT plus conventional therapy on pain, muscle strength, function, and range of motion in patients with knee osteoarthritis (OA). Data sources: Databases included PubMed, Ovid Medline, CINAHL, Airiti Library, EMBASE, and WOS search engines. Search terms related to KT and knee OA were combined and searched. Articles that met the inclusion criteria and were graded with a Jadad score ≥3 were included in a meta-analysis to calculate the total effect. The exclusion criteria were non-English-language articles, non-original articles, non-full-text articles, no description of the intervention, or articles with a Jadad score ≤2. Eleven articles were included in the meta-analysis. KT treatment had a significant small total effect on pain reduction (p < 0.001; n = 1509; standardized mean difference (SMD) = -0.42; 95% CI = -0.65 to -0.18) and a significant moderate total effect on isokinetic muscle strength improvement (p = 0.001; n = 447; SMD = 0.72; 95% CI = 0.28 to 1.16). No significant total effects of KT on isometric muscle strength, time to complete functional tasks, or ROM improvement were found. KT or KT plus conventional therapy has a significant effect on pain relief and isokinetic but not isometric muscle strength improvement in patients with knee OA. KT can be an effective tool for treating knee OA pain and is especially valuable for aiding in isokinetic muscle strength. (PROSPERO register ID: CRD42021252313). abstract_id: PUBMED:37201003 Can isokinetic test be a supportive tool for unilateral knee arthroplasty decision? Objectives: This study analyzed whether the isokinetic muscle strength of bilateral knee osteoarthritis patients undergoing unilateral total knee arthroplasty (TKA) is a predictor of the side operated on first. Patients And Methods: In the prospective study conducted between April 2021 and December 2021, 58 knees of 29 unilateral TKA candidates (6 males, 23 females; mean age: 66.7±7.4 years; range, 53 to 81 years) were enrolled. The knees were divided into surgical (n=29) and nonsurgical (n=29) groups. The knees of patients with bilateral knee osteoarthritis (Stage III or IV) according to the Kellgren-Lawrence (KL) scale were scheduled for unilateral TKA.
An isokinetic testing system was used to assess knee flexor and extensor muscle strength (peak torque) at angular velocities of 60°/sec and 180°/sec (five cycles per velocity). The radiological (X-ray-based KL scale and magnetic resonance imaging-based quadriceps angle) and clinical findings (isokinetic test and Visual Analog Scale pain scores) in both groups were compared. Results: The mean symptom duration was 10±5.4 years. The KL score and quadriceps angle showed no significant differences (p=0.056 and p=0.663, respectively). Isokinetic test results were in accordance with the clinical results of the surgery group. In the isokinetic evaluation, both the 60°/sec concentric extension (35.00 vs. 46.00, p=0.002) and flexion peak torque (18.00 vs. 26.00, p=0.001) values were significantly lower in the surgical group than in the nonsurgical group. Conclusion: Isokinetic testing can be a supportive tool for assessing which side to operate on first in patients with bilateral knee osteoarthritis. Further studies are required to support these findings. abstract_id: PUBMED:19560995 Effects of different stretching techniques on the outcomes of isokinetic exercise in patients with knee osteoarthritis. We recruited 132 subjects with bilateral knee osteoarthritis (Altman Grade II) to compare the effects of different stretching techniques on the outcomes of isokinetic muscle strengthening exercises. Patients were randomly divided into four groups (I-IV). The patients in Group I received isokinetic muscular strengthening exercises, Group II received bilateral knee static stretching and isokinetic exercises, Group III received proprioceptive neuromuscular facilitation (PNF) stretching and isokinetic exercises, and Group IV acted as controls. Outcomes were measured by changes in Lequesne's index, range of knee motion, visual analog pain scale, and peak muscle torques during knee flexion and extension. Patients in all the treated groups experienced significant reductions in knee pain and disability, and increased peak muscle torques after treatment and at follow-up. However, only patients in Groups II and III had significant improvements in range of motion and muscle strength gain during 60 degrees/second angular velocity peak torques. Group III demonstrated the greatest increase in muscle strength gain during 180 degrees/second angular velocity peak torques. In conclusion, stretching therapy could increase the effectiveness of isokinetic exercise in terms of functional improvement in patients with knee osteoarthritis. PNF techniques were more effective than static stretching. abstract_id: PUBMED:10868015 Individual isokinetic strength training in patients with gonarthrosis. Problem: To date, therapy of osteoarthritis of the knee is aimed at relieving pain and changing behavior patterns, which usually leads to reduced activity. The weakening of the quadriceps musculature leads to an increase in both joint instability and arthritis. Walking time is prolonged, and the pain-induced reduction in knee angular velocity is offset by increased stress on other joints. The progressive muscle atrophy correlates to the degree of pain. The aim of this study was to demonstrate an improvement in strength and pain based on a 4-week isokinetic strength training program in gonarthritis patients. Method: During a conservative hospitalization period, isokinetic strength training was performed by 19 randomized patients with gonarthritis in addition to regular physiotherapy. Another 19 patients functioned as a control group.
Muscle work was measured at 60 degrees/s and 180 degrees/s, and pain was rated using a questionnaire at the start and end of the investigation. Results: In addition to the expected increase of strength and strength endurance in the test group, the degree of pain also decreased to a statistically significant extent compared with the control group. Activities of daily living, such as climbing stairs and standing up, were also performed more easily. Conclusions: The therapeutic strategy for patients with osteoarthritis of the knee should be reconsidered to include less expensive therapeutic sport measures. Anglo-American and Scandinavian studies support this statement. Overuse and pain can be avoided by precise and low-dose strength training. Objective and reproducible measurements in the patients are essential to make individual training possible. abstract_id: PUBMED:21340212 Isokinetic assessment of the hip muscles in patients with osteoarthritis of the knee. Objectives: To evaluate the difference in isokinetic strength of hip muscles between patients with knee osteoarthritis (OA) and matched healthy controls, and to establish the correlation between this isokinetic strength and pain and function in patients with knee OA. Methods: 25 patients with a diagnosis of unilateral knee OA, 25 patients with bilateral knee OA, and 50 matched controls were evaluated using the visual analog scale for pain, knee Lequesne index, Western Ontario and McMaster Universities questionnaire and an isokinetic test. Results: The groups were matched for age, gender and body mass index. The results of the isokinetic test revealed lower peak torque of the hip in patients with OA of the knee than in the control group for all movements studied. Strong correlations were found between the peak torque, visual analog scale and function. Conclusions: Patients with OA of the knee exhibit lower isokinetic strength in the hip muscles than healthy control subjects. Strengthening the muscles surrounding the hip joint may help to decrease pain in people with knee OA. Some correlations between pain/function and peak torque were found. abstract_id: PUBMED:31390143 Immediate responses of multi-focal low level laser therapy on quadriceps in knee osteoarthritis patients. Low-level laser therapy (LLLT) applied to knee osteoarthritis (OA) patients has shown positive outcomes in analgesic effect and functional recovery. However, few studies have applied such therapy to a large area of the quadriceps muscle in these patients. The aim of this study was to evaluate the immediate effect of multi-focal LLLT on the quadriceps of knee OA patients in terms of pain and functional performance. Fifty-one participants with knee OA were enrolled and evaluated before (T1) and immediately after the intervention (T2) for knee joint pain on a numeric rating scale (NRS), walking speed, timed five-chair stands, and quadriceps strength by isokinetic dynamometer. The intervention consisted of two multi-focal Gallium-Aluminum-Arsenide laser devices, each with 36 laser diodes (wavelength 808 ± 10 nm, continuous, mean power 50 mW, 30 minutes), applied simultaneously over the bilateral quadriceps with a total dose of 180 J for each thigh. The multi-focal LLLT significantly improved knee joint pain as measured by the NRS (54% reduction), timed five-chair stands, and walking speed (P < .05). Knee extensor strength also increased in terms of peak torque and force of concentric and eccentric contraction as measured by isokinetic dynamometer (P < .05).
In conclusion, single-session multi-focal LLLT on the quadriceps in knee OA patients has an immediate beneficial effect on knee pain reduction, quadriceps strengthening and functional performance recovery. The long-term effect requires further investigation. Multi-focal LLLT on the quadriceps might serve as an alternative non-invasive treatment option in these patients. abstract_id: PUBMED:16084806 Use of ultrasound to increase effectiveness of isokinetic exercise for knee osteoarthritis. Objective: To investigate the effects of ultrasound (US) in isokinetic muscle strengthening exercises on functional status of patients with knee osteoarthritis (OA). Design: Effectiveness of isokinetic muscle strengthening exercises for treatment of periarticular soft tissue disorders was compared with and without pulsed and continuous US. Setting: Outpatient exercise program in a Taiwan medical university hospital. Participants: One hundred twenty subjects with bilateral knee OA (Altman grade II). Intervention: Subjects were randomized sequentially into 1 of 4 groups. Group I received isokinetic muscular strengthening exercises, group II received isokinetic exercise and continuous US, group III received isokinetic exercise and pulsed US treatment, and group IV was the control group. Main Outcome Measures: Therapeutic effects of isokinetic exercise were evaluated by changes in ambulation speed and the Lequesne index. In addition, changes in knee range of motion (ROM), visual analog scale for pain, and muscle peak torques during knee flexion and extension were compared. Compliance in each group was recorded. Results: Each treated group had increased muscle peak torques and significantly reduced pain and disability after treatment and at follow-up. However, only patients in groups II and III had significant improvement in ROM and ambulation speed after treatment. Fewer participants in group III discontinued treatment due to knee pain during exercise. Patients in group III also showed the greatest increase in walking speed and decrease in disability after treatment and at follow-up. Gains in muscular strength in 60 degrees/s angular velocity peak torques were also noted in groups II and III. However, group III showed the greatest muscular strength gains with 180 degrees/s angular velocity peak torques after treatment and follow-up. Conclusions: US treatment could increase the effectiveness of isokinetic exercise for functional improvement of knee OA, and pulsed ultrasound has a greater effect than continuous US. abstract_id: PUBMED:30332992 The effects of pain on quadriceps strength, joint proprioception and dynamic balance among women aged 65 to 75 years with knee osteoarthritis. Background: Patients with knee osteoarthritis (OA) were reported to have quadriceps weakness and impaired proprioception, both related to pain and swelling. It is unclear whether pain alone is a causal factor for these findings in the knee joint. The purpose of this study was to assess the effects of knee pain alone on the quadriceps strength, proprioception and dynamic balance in subjects with bilateral knee OA without joint swelling. Methods: Forty females with a mean age of 68.3 years were involved in this cross-sectional study. The inclusion criteria were bilateral knee OA without joint swelling, with a visual analogue pain scale difference (> 1) between the two knees. Patients all underwent assessment of the isokinetic strength of knee muscles, knee proprioceptive acuity, and dynamic balance.
Results: Patients' more painful knee had weaker isokinetic quadriceps strength than the less painful knee at both 60°/s and 180°/s (p = 0.01, p = 0.01, respectively). There were no differences in proprioceptive acuity between both knees in all three knee positions. Meanwhile, there was a significant difference in the dynamic balance index measurement between both knees (more painful versus less painful: 3.88 ± 1.15 vs. 3.30 ± 1.00, p = 0.01). Quadriceps strength was associated with dynamic balance stability (60°/s, r = -0.578, p < 0.01; 180°/s, r = -0.439, p < 0.01). Conclusions: For patients with knee OA, the more painful knee was associated with weaker quadriceps and poor balance ability. To improve lower-limb function and balance stability in older persons with knee OA, physicians should adopt an optimal pain management strategy. Answer: The choice of angular velocity during isokinetic strength testing of knee osteoarthritis patients does not significantly affect the pain level experienced by the patients. A study assessing whether pain experienced during isokinetic testing in individuals with knee osteoarthritis is dependent on the angular velocity prescribed found that across gender, pain level was not significantly affected by testing velocity (P=0.18, ηp²=0.05) (PUBMED:22713541). This suggests that angular velocity may not be a primary factor influencing pain during such testing, and therefore, other factors may need to be considered when addressing pain management during isokinetic strength assessments in this patient population.
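The meta-analysis above (PUBMED:34639740) pools standardized mean differences (e.g., SMD = 0.72, 95% CI 0.28 to 1.16 for isokinetic muscle strength). For a single study, an SMD (Cohen's d) and an approximate 95% CI can be computed from group summaries, as in the sketch below; the means, standard deviations, and group sizes are invented for illustration and are not taken from the included trials.

import math

# Hypothetical peak-torque summaries (illustrative only): treatment vs control
m1, sd1, n1 = 95.0, 20.0, 30
m2, sd2, n2 = 82.0, 22.0, 30

pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
d = (m1 - m2) / pooled_sd                                    # Cohen's d (SMD)
se_d = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
print(f"SMD = {d:.2f}, 95% CI {d - 1.96 * se_d:.2f} to {d + 1.96 * se_d:.2f}")

A meta-analysis would then combine such study-level SMDs, weighting each by the inverse of its variance.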
Instruction: Does egg-sharing compromise the chance of donors or recipients achieving a live birth? Abstracts: abstract_id: PUBMED:35332836 Does egg-sharing negatively impact on the chance of the donor or recipient achieving a live birth? This study aimed to discover whether egg sharing compromises the chance of donors or recipients achieving a live birth. A descriptive cohort study was performed of 4,545 fertility patients and 5,316 stimulation cycles at a London-based fertility clinic between 2010 and 2019. There was no significant difference in clinical pregnancy rate (CPR) or live birth rate (LBR) between egg sharers and standard IVF patients or between egg sharing recipients and non-egg sharing recipients. Both egg sharers and their recipients had fewer oocytes and fewer day 3 embryos available for fresh embryo transfer or cryopreservation than standard IVF patients or non-egg sharing recipients. The cumulative LBR was significantly lower amongst egg sharers than standard IVF patients (p < 0.05), and significantly lower amongst egg sharing recipients than non-egg sharing recipients (p < 0.05). This study demonstrates that egg sharing does not compromise the chances of donors or their recipients achieving a live birth. However, participants may occasionally require additional ovarian stimulation cycles to conceive. With government funding for IVF treatment falling, egg sharing provides a practical option to allow more women access to IVF. Egg sharing is currently the most efficient way of maximising the use of the precious resource of human oocytes. abstract_id: PUBMED:14585888 Does egg-sharing compromise the chance of donors or recipients achieving a live birth? Background: To evaluate the effect of egg-sharing and the outcome of assisted reproductive treatment (ART) in standard IVF/ICSI patients, egg-sharing donors and egg-sharing recipients. Methods: Descriptive cohort study to evaluate 276 egg-sharing cycles involving 192 egg-sharers, 274 recipient cycles receiving eggs from egg-sharers and 1098 non-egg-sharing standard IVF/ICSI cycles from January 1998 to December 2002. Patients were divided into three groups: group A, egg-sharers; group B, non-egg-sharers, age <36 years, FSH <10 IU/l, BMI <30 kg/m²; and group C, egg-sharing recipients, all ages. Duration and amount of gonadotrophin required to achieve follicular maturity, number of eggs collected and donated, fertilization rate, pregnancy rates and live birth rates were compared between donor, recipient and standard IVF/ICSI patients. The chi-squared (χ²) test was used to test for statistical significance (P < 0.05). Results: There was no significant difference in pregnancy rate and live birth rate of egg-sharers, group A (42.0%, 33.0%), non-egg-sharers, group B (40.0%, 30.9%) and recipients, group C (41.4%, 28.6%). The number of oocytes collected, number of mature follicles and amount of gonadotrophin used were not significantly different between the two groups (A and B). The average number of embryos transferred and the mean number of eggs allocated between egg-sharers and recipients were not statistically different. Conclusion: Egg-sharing does not compromise the chance of achieving a pregnancy or live birth for the egg-sharer or the recipient as compared to standard IVF/ICSI patients. The egg-sharers were not at a higher potential risk of ovarian hyperstimulation syndrome and there was no imbalance of egg allocation. abstract_id: PUBMED:9455866 Egg sharing and egg donation: attitudes of British egg donors and recipients.
The question of payment to egg donors has recently focused the attention of both the Human Fertilisation and Embryology Authority (HFEA) and licensed clinics. An acute shortage of egg donors and the rising costs of assisted conception treatment are matters of grave concern to many patients. To understand the emotional and social effects of egg sharing and egg donation, we conducted a survey of attitudes in a group of women who had some knowledge or experience of egg donation. A total of 750 questionnaires were sent out, of which 217 were returned within the specified time limit. From these, 107 respondents had experience of egg donation and 110 had made enquiries about donation. The data from these questionnaires were collated and tabulated by the National Opinion Polls (NOP) Research Group. An analysis of the data produced the following key findings: (i) donating or sharing eggs is a social issue, 94% discuss it with partners/family/friends; (ii) altruistic motives are not the prerogative of non-patient volunteers; egg share donors felt that helping the childless was as important as having a chance of in-vitro fertilization (IVF) for themselves; (iii) the treatment procedure causes the most anxiety for egg donors. The recipients were most concerned about delays, donor characteristics and how the eggs were allocated; (iv) most respondents (65%) with prior experience of egg sharing would do it again: 63% of egg share donors and 72% of egg share recipients; (v) cash rewards to egg donors and outright advertising for donors were rejected by 64% and 62% of the sample, respectively; and (vi) counselling was highly valued and there were no instances of 'shattered lives' after treatment. The findings do not support the recently announced intentions of the HFEA to disallow payment to gamete donors on the grounds of devalued consent. There is no precedent in modern medicine for egg sharing. The patients surveyed drew a clear distinction between egg sharing and financial rewards. As long as egg donation is not covered by the National Health Service, it is fairer to offer egg sharing than to refuse treatment to those unable to pay. abstract_id: PUBMED:22503266 Egg-sharing, consent and exploitation: examining donors' and recipients' circumstances and retrospective reflections. Egg-sharing schemes involve a woman sharing a portion of her eggs with another in exchange for free or reduced-cost fertility treatment and have been regulated in the UK since 1998. However, while perceived as a unique anomaly within the UK system, such schemes have caused considerable debate. Some critics have been concerned that the benefit-in-kind (i.e. fertility treatment) offered in exchange for donating eggs might compromise the consent of potential donors, particularly those who have no other means to access treatment, and lead to exploitation and the commodification of gametes. This article reports empirical findings that engage directly with these concerns. First, data on the demographic characteristics, circumstances and treatment outcomes of donors and recipients are presented, followed by data on egg sharers' own retrospective assessments of egg-sharing and their opinions regarding its ethics and regulation. This study shows that, contrary to some expectations, there are very few differences in the characteristics, experiences and opinions of egg-sharing donors and recipients, and also highlights the overwhelmingly positive assessment of egg-sharing by women who have taken part in such schemes.
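As an editorial aside, the group comparison reported in PUBMED:14585888 above is a standard chi-squared test of independence. The sketch below reconstructs approximate live-birth counts from the reported rates (33.0% of 276, 30.9% of 1098 and 28.6% of 274 cycles); these counts are back-calculated approximations for illustration, not the study's raw data.

```python
# Approximate reconstruction of the chi-squared comparison in PUBMED:14585888:
# live birth vs. no live birth across egg-sharers (A), non-sharers (B) and recipients (C).
from scipy.stats import chi2_contingency

#            live birth  no live birth
table = [[ 91, 185],   # group A: 276 egg-sharing cycles (~33.0% LBR)
         [339, 759],   # group B: 1098 standard IVF/ICSI cycles (~30.9% LBR)
         [ 78, 196]]   # group C: 274 recipient cycles (~28.6% LBR)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # a large p mirrors the reported non-significance
```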
abstract_id: PUBMED:31248619 Oocyte donor age has a significant impact on oocyte recipients' cumulative live-birth rate: a population-based cohort study. Objective: To study the impact of the donor's and recipient's age on the cumulative live-birth rate (CLBR) in oocyte donation cycles. Design: A population-based retrospective cohort study. Setting: Not applicable. Patient(s): All women using donated oocytes (n = 1,490) in Victoria, Australia, between 2009 and 2015. Intervention(s): None. Main Outcome Measure(s): The association between the donor's and recipient's age and CLBR modeled by multivariate Cox proportional hazard regression with the covariates of male partner's age, recipient parity, and cause of infertility adjusted for, and donor age grouped as <30, 30-34, 35-37, 38-40, and ≥41 years, and recipient age as <35, 35-37, 38-40, 41-42, 43-44, and ≥45 years. Result(s): The mean age of the oocyte donors was 33.7 years (range: 21 to 45 years) with 49% aged 35 years and over. The mean age of the oocyte recipients was 41.4 years (range: 19 to 53 years) with 25.4% aged ≥45 years. There was a statistically significant relationship between the donor's age and the CLBR. The CLBR for recipients with donors aged <30 years and 30-34 years was 44.7% and 43.3%, respectively. This decreased to 33.6% in donors aged 35-37 years, 22.6% in donors aged 38-40 years, and 5.1% in donors aged ≥41 years. Compared with recipients with donors aged <30 years, the recipients with donors aged 38-40 years had 40% less chance of achieving a live birth (adjusted hazard ratio 0.60; 95% CI, 0.43-0.86) and recipients with donors aged ≥41 years had 86% less chance of achieving a live birth (adjusted hazard ratio 0.14; 95% CI, 0.04-0.44). The multivariate analysis showed no statistically significant effect of the recipient's age on CLBR. Conclusion(s): We have demonstrated that the age of the oocyte donor is critical to the CLBR and is independent of the recipient woman's age. Recipients using oocytes from donors aged ≥35 years had a statistically significantly lower CLBR when compared with recipients using oocytes from donors aged <35 years. abstract_id: PUBMED:28299366 Indian egg donors' characteristics, motivations and feelings towards the recipient and resultant child. This is the first study to examine characteristics, motivations and experiences of Indian egg donors. In-depth interviews were conducted with 25 egg donors who had donated during the previous 8 months at a fertility clinic in Mumbai. The semi-structured interviews were conducted in Hindi and English. In addition to demographic information, data were collected on donors' motivations for donating, with whom they had discussed donation, and feelings towards the recipients. The response rate was 66%. All participants were literate and had attended school. Twenty (80%) egg donors had children and five (20%) did not. The most common motivation (19, 76%) for donating was financial need. Egg donors had discussed their donation with their husband or with close family/friends, with almost all mentioning that wider society would disapprove. The majority (20, 80%) had no information about the recipients and 11 (44%) preferred not to. The findings highlight the similarities and differences between egg donors from India and those from other countries and that egg donors are of a more varied demographic background than surrogates in India. 
Given that India has been a popular destination for fertility treatment, the findings have important implications for regulation and practice within India and internationally. abstract_id: PUBMED:22196890 Oocyte vitrification technology has made egg-sharing donation easier in China. When infertile women undergoing IVF or intracytoplasmic sperm injection (ICSI) have more than 20 mature oocytes retrieved, at least 15 oocytes are inseminated by their husband's spermatozoa. The extra oocytes are cryopreserved by vitrification. If the patients become pregnant and have healthy live births, they are encouraged to donate their remaining cryopreserved oocytes. Forty-seven egg-sharing donors were recruited after having normal deliveries and they donated their remaining oocytes, totalling 395 cryopreserved oocytes, to 75 recipients. The survival rate of vitrified-warmed oocytes was 83.0%. Following insemination by ICSI, the fertilization and cleavage rates were 83.8% and 89.8%, respectively. Out of 75 recipients, 71 recipients completed the treatment cycles and 30 of them became pregnant with clinical pregnancy and implantation rates of 42.3% and 25.5%, respectively. The birthweights of the newborn infants (22 from singletons and two from one set of twins) were 3344.5 ± 669.1 g and 2425.0 ± 742.5 g, respectively. No birth defects were observed for the live births. These results indicate that oocyte vitrification is an effective methodology for an egg-sharing donation programme, with acceptable pregnancy and implantation rates. abstract_id: PUBMED:22434851 Emotional and relational aspects of egg-sharing: egg-share donors' and recipients' feelings about each other, each others' treatment outcome and any resulting children. Background: This paper presents findings from a study of the emotional and relational aspects of egg-sharing, exploring egg-share donors' and recipients' thoughts and feelings about each other, about each other's treatment outcome and any resulting children, as well as their attitudes towards disclosure of donor origins and contact between donors and donor offspring in the future. It is the first study of this population since the removal of donor anonymity in 2005. Methods: A paper or online questionnaire was completed anonymously by 48 donors and 38 recipients who took part in egg-sharing between 2007 and 2009. Data were obtained on a range of measures, including demographics, family circumstances, motivations and anxieties, feelings about egg-sharing, retrospective assessments and views on regulation, and analysed to facilitate cross-group and within-group comparisons of donors and recipients. Results: This study found very few differences between donors and recipients, as well as between successful and unsuccessful egg-share participants. Donors and recipients expressed sentiments of goodwill towards one another, and displayed attitudes of openness regarding disclosure decisions and future contact among donors and donor-conceived offspring. While some donors and recipients wanted to know the outcome of their donor's/recipient's treatment, others preferred not to. Conclusions: Most significantly, concerns voiced regarding the potential psychological harm to donors, particularly those whose own treatment ends unsuccessfully, were not borne out by the data. abstract_id: PUBMED:37351021 Emerging Ideas. A ripple effect: Does I-sharing with a stranger promote compromise in cohabiting couples? Background: Living together harmoniously requires compromise.
Research on the psychological factors that promote compromise frequently focuses on individual differences and interpersonal environment. The current research takes a different approach by asking what role I-sharing (i.e., the feeling that others share the same in-the-moment, subjective experience as oneself) plays in the success of a cohabiting couple's compromise efforts. Objective: Drawing from work on the prosocial outcomes associated with I-sharing, we asked whether I-sharing with a stranger enhances couples' ability to compromise. Method: Both members of a cohabiting couple privately reported the percentage of time they spend on household tasks (the Division of Household Labor inventory [DHL]) before undergoing a manipulation in which they either did or did not have an I-sharing experience with a stranger. Participants then completed the DHL again, this time working together with their cohabiting partner to reach a consensus in their responses. Finally, participants had another chance to complete the DHL in private, receiving the instruction to respond with their true beliefs. Results: When relationship partners worked together to complete the DHL at the second reporting, their responses were identical. At the third (private) DHL reporting, consistent with prediction, couples assigned to the I-share condition showed more agreement with their partner than couples assigned to the No I-share condition. Conclusion: This finding suggests that the prosociality generated by I-sharing can promote compromise in cohabiting couples. Implications: I-sharing holds promise as a therapeutic tool to promote compromise in treatment-seeking families and couples. abstract_id: PUBMED:12524066 Outcome for donors and recipients in two egg-sharing policies. Objective: To assess the effect of two different donation policies on results of egg-sharing. Design: Retrospective analysis. Setting: Tertiary referral center. Patient(s): One hundred five donor-recipient pairs participating in an egg-sharing program between 1998 and 2001. Intervention(s): Egg-sharing was performed according to policy A (retrieval of at least 12 oocytes, with more oocytes allocated to the donor) or policy B (retrieval of at least 8 oocytes, with equal distribution of oocytes between donors and recipients). Main Outcome Measure(s): Delivery rate and cycle cancellation rate. Result(s): Policy B resulted in a decreased cycle cancellation rate in the recipient group compared with policy A (9.7% vs. 29.7%, respectively; P ≤ .01). No significant difference in delivery rate per fresh transfer was observed between policy A and policy B in donors (30.2% vs. 27.8%, respectively) or recipients (28.2% vs. 25.7%, respectively). Conclusion(s): Decreasing the number of oocytes required to perform egg sharing from 12 to 8 and dividing the oocytes equally between donors and recipients significantly reduces the cycle cancellation rate while not appearing to adversely affect the probability of delivery. Answer: Based on the provided abstracts, egg sharing does not appear to compromise the chance of donors or recipients achieving a live birth. Several studies have investigated this question and found no significant difference in pregnancy rates and live birth rates between egg sharers and standard IVF/ICSI patients or between egg sharing recipients and non-egg sharing recipients (PUBMED:14585888).
Furthermore, a descriptive cohort study of 4,545 fertility patients and 5,316 stimulation cycles showed that there was no significant difference in clinical pregnancy rate (CPR) or live birth rate (LBR) between egg sharers and standard IVF patients or between egg sharing recipients and non-egg sharing recipients (PUBMED:35332836). That study did note that cumulative live birth rates were lower among egg sharers and their recipients, and that participants may occasionally require additional ovarian stimulation cycles to conceive, while still concluding that egg sharing does not compromise the chance of a live birth (PUBMED:35332836). Another study examining the emotional and relational aspects of egg-sharing found that donors and recipients expressed sentiments of goodwill towards one another and displayed attitudes of openness regarding disclosure decisions and future contact among donors and donor-conceived offspring. This study also indicated that concerns about potential psychological harm to donors, particularly those whose own treatment ends unsuccessfully, were not supported by the data (PUBMED:22434851). Additionally, a study on the impact of the donor's and recipient's age on the cumulative live-birth rate in oocyte donation cycles found that the age of the oocyte donor is critical to the cumulative live-birth rate and is independent of the recipient woman's age (PUBMED:31248619). This suggests that factors other than the act of egg sharing itself, such as the age of the donor, may influence the outcome more strongly. In conclusion, the evidence from these studies suggests that egg sharing does not compromise the chances of achieving a live birth for either the egg-sharer or the recipient when compared to standard IVF/ICSI patients.
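As an editorial aside, the donor-age analysis in PUBMED:31248619 cited above rests on Cox proportional-hazards regression over successive treatment cycles. The sketch below shows the general shape of such a model using the lifelines package; the data frame values, column names and the age cut-off are invented for illustration and are not the study's data.

```python
# Hypothetical sketch of a Cox proportional-hazards model relating donor age
# to cumulative live birth over successive cycles, as in PUBMED:31248619.
# Requires: pip install lifelines pandas
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "cycles_to_event":  [2, 1, 4, 3, 5, 2, 6, 3, 4, 2],  # cycle number at live birth/censoring
    "live_birth":       [1, 1, 0, 1, 1, 0, 0, 1, 1, 0],  # 1 = live birth, 0 = censored
    "donor_age_ge35":   [0, 0, 0, 0, 1, 1, 1, 1, 1, 0],  # donor aged 35 years or over
    "recipient_parity": [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],  # adjustment covariate
})

cph = CoxPHFitter()
cph.fit(df, duration_col="cycles_to_event", event_col="live_birth")
cph.print_summary()  # a hazard ratio < 1 for donor_age_ge35 would mirror the study's finding
```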
Instruction: Isolated echogenic foci in the fetal heart: do they increase the risk of trisomy 21 in a population previously screened by nuchal translucency? Abstracts: abstract_id: PUBMED:11529991 Isolated echogenic foci in the fetal heart: do they increase the risk of trisomy 21 in a population previously screened by nuchal translucency? Objectives: To confirm the hypothesis that isolated cardiac echogenic foci at the second-trimester anomaly scan do not influence our current calculation of risk of trisomy 21 in individual pregnancies, which is based on maternal age and nuchal translucency thickness at 11-14 weeks. Design: Observational study in a fetal medicine unit. Methods: In a general pregnant population undergoing first-trimester nuchal translucency screening, data from 239 singleton pregnancies with isolated cardiac echogenic foci at the second-trimester anomaly scan were compared with those of a control group of 7449 pregnancies with normal anomaly scans. Prevalence of trisomy 21 was determined in both groups. Following the anomaly scan, the individual risks of trisomy 21 were calculated by adjusting the previous risk based on maternal age and first-trimester nuchal translucency. We assumed that echogenic foci did not alter each individual risk calculation. The expected number of cases of Down syndrome in both groups was then calculated from the sum of the probabilities that each individual fetus was affected. The observed number of cases was compared with the expected number in both study and control populations. Results: There was no statistically significant difference between the prevalence of trisomy 21 in the study group (no cases) and in the control population (three cases). From individual risk calculations, observing no cases of trisomy 21 in the study group was the most likely event if echogenic foci did not increase the risk of this chromosomal abnormality (P = 0.62). Conclusion: The finding of isolated echogenic foci at the time of the 20-week scan does not significantly change the risks of trisomy 21 if background risk and previous nuchal translucency measurements are taken into account in the individual risk calculation. We suggest that no further adjustments to risk should be used. abstract_id: PUBMED:24431650 Effects of period of gestation and position of fetal neck on nuchal translucency measurement. Objective: The aim of this study was to determine the effects of period of gestation and position of fetal neck on nuchal translucency measurement. Materials And Methods: Nuchal translucency was measured in the mid-sagittal plane, with the fetal neck in the flexed, neutral, and extended positions in 100 pregnant women between 11 and 13+6 weeks. Mean nuchal translucency measurements at different periods of gestation were compared. Differences between the extended and neutral positions (Δ extended nuchal translucency) and those between the flexed and neutral positions (Δ flexed nuchal translucency) were calculated. The repeatability coefficients for the measurements in all the three positions were computed. Statistical analysis was also done. Results: Nuchal translucency values were 1.050 ± 0.282 mm in the 11th week, 1.243 ± 0.348 mm in the 12th week, and 1.823 ± 0.357 mm in the 13th week (r = 0.747, p < 0.0001). The mean Δ flexed value was 0.233 ± 0.133 mm less than the neutral value (p < 0.0001). The mean Δ extended nuchal translucency was 0.305 ± 0.155 mm greater than the neutral value (p < 0.0001).
The repeatability coefficient was the lowest in the neutral position (0.17 mm in the neutral position, 0.28 mm in the flexed position and 0.41 mm in the extended position). Conclusion: We concluded that the period of gestation and fetal neck position can make a significant difference to nuchal translucency measurement. Repeatability of measurement is more accurate with the fetal neck in the neutral position. These findings have important implications for clinicians using nuchal translucency to screen the obstetric population for Down's syndrome. abstract_id: PUBMED:12738147 Association between increased nuchal translucency and second trimester cardiac echogenic foci. Objective: To test the hypothesis that increased first trimester nuchal translucency is associated with isolated cardiac foci in the second trimester. Methods: We identified all pregnancies delivered between January 1997 and June 2000. We included 7686 normal singleton fetuses who had a nuchal translucency scan and either a subsequent normal anomaly scan at 18-23 weeks' gestation (n = 7447) or isolated cardiac foci (n = 239). Fetuses were divided into two groups: normal (95th percentile or less) and increased (greater than 95th percentile) nuchal translucency. Results: The prevalence of cardiac echogenic foci in fetuses with normal nuchal translucency was 218 of 7427 (2.9%; 95% confidence interval [CI] 2.6, 3.3%), whereas 21 of 259 fetuses (8.1%; 95% CI 5.1, 12.1%) with increased nuchal translucency were subsequently found to have cardiac foci. The adjusted odds ratio for cardiac echogenic foci in cases of increased nuchal translucency was 2.92 (95% CI 1.83, 4.66). Conclusion: An association exists between first trimester nuchal translucency and second trimester cardiac echogenic foci. Risk calculation algorithms for trisomy 21 based on nuchal translucency thickness should not use cardiac foci as an independent marker. abstract_id: PUBMED:17521948 Increased nuchal translucency with normal karyotype. Nuchal translucency (NT) measurement in first trimester screening between 11 and 14 weeks' gestation has now been clearly identified as a marker for aneuploidies and in particular for trisomy 21. Even in the absence of aneuploidy, increased fetal nuchal translucency has been shown to be a marker for fetal heart malformations and numerous other fetal defects and genetic syndromes when the measure is ≥ the 95th centile, which is around 3.5 mm for each gestational age. Fetuses with NT thickness above the 99th centile and normal karyotype have a high risk of adverse prenatal outcome and this increase in risk is exponential as the nuchal translucency measurement increases. However, among children born alive with normal ultrasound at 22 weeks of gestation, there was no significant association between NT thickness and development at 2 years as assessed by clinical examination and ASQ scores, when compared with a control population. Counselling should emphasize that when the karyotype is normal and no fetal structural malformation was missed prenatally following resolution of nuchal thickening, the prognosis is not impaired at the age of 2. abstract_id: PUBMED:10609724 Isolated fetal echogenic intracardiac foci or golf balls: is karyotyping for Down's syndrome indicated? Objective: To determine the prevalence of isolated echogenic intracardiac foci and the subsequent risk for Down's syndrome at 18-23 weeks in an unselected obstetric population. Design: Prospective study. Setting: A district general hospital serving a routine obstetric population.
Participants: 16,917 pregnant women who underwent a routine ultrasound screening at 18-23 weeks of gestation between November 1994 and August 1998. Methods: All women were offered screening for Down's syndrome by nuchal translucency or maternal serum biochemistry. The prevalence of isolated echogenic intracardiac foci was determined and the relative risk for Down's syndrome was calculated for different ultrasound findings. Results: The combined sensitivity of age, nuchal translucency and maternal serum biochemistry for Down's syndrome was 84% (27/32). The relative risk for Down's syndrome was 0.17 (95% CI 0.07-0.41) for the women with normal scan findings at 18-23 weeks. The prevalence of isolated echogenic intracardiac foci at 18-23 weeks was 0.9% (144/16,917). None of these pregnancies were affected by Down's syndrome. Conclusion: The significance of the association between isolated echogenic intracardiac foci and Down's syndrome is a matter of ongoing debate. The data of this study suggest that in an unselected obstetric population with prior, effective, routine Down's syndrome screening, the association between isolated echogenic intracardiac foci and Down's syndrome is no longer significant. abstract_id: PUBMED:15846173 Increased nuchal translucency with normal karyotype. Increased fetal nuchal translucency (NT) thickness between 11 and 14 weeks' gestation is a common phenotypic expression of chromosomal abnormalities, including trisomy 21. However, even in the absence of aneuploidy, nuchal thickening is clinically relevant because it is associated with an increase in adverse perinatal outcome caused by a variety of fetal malformations, dysplasias, deformations, disruptions, and genetic syndromes. Once the presence of aneuploidy is ruled out, the risk of adverse perinatal outcome does not statistically increase until the nuchal translucency measurement reaches 3.5 mm or more (>99th percentile). This increase in risk occurs in an exponential fashion as the NT measurement increases. However, if the fetus survives until midgestation, and if a targeted ultrasound at 20 to 22 weeks fails to reveal any abnormalities, the risk of an adverse perinatal outcome and postnatal developmental delay is not statistically increased. abstract_id: PUBMED:17470580 First-trimester ultrasonographic screening for trisomy 21 using fetal nuchal translucency and nasal bone. Objective: To report our experience with first-trimester screening for trisomy 21 by using the combination of nuchal translucency thickness and nasal bone assessment. Methods: Pregnant women from a predominantly Latin American population consisting of patients at both low risk and high risk for chromosomal defects underwent first-trimester ultrasonographic screening. Nuchal translucency thickness and nasal bone were assessed by two accredited fetal medicine specialists. Cases of trisomy 21 were identified from the cytogenetics laboratory logbook. Results: Over a 3-year period, 1,287 consecutive singleton pregnancies were screened. The median maternal age was 33 years (range 14-47 years), with 456 (35.4%) women aged 35 years or older at the time of the scan. Overall, 110 fetuses (8.5%) had nuchal translucency thickness greater than the 95th percentile for gestational age and 25 (1.9%) had absent nasal bone. Trisomy 21 was diagnosed in 31 cases. Among them, the nuchal translucency thickness was increased in 28 and the nasal bone was absent in 13 (detection rates of 90.3% and 41.9%, respectively; P < .01).
All but one (92.3%) of the trisomy 21 fetuses with absent nasal bone had increased nuchal translucency. Only two of the normal fetuses had an absent nasal bone in the first trimester. Conclusion: In our population, increased nuchal translucency thickness is the most important ultrasonographic marker of trisomy 21. In contrast, the nasal bone seems to have a less prominent role in identifying the fetus at risk for trisomy 21 due to its lower detection rate. However, an absent nasal bone should be considered as a highly predictive marker of trisomy 21. abstract_id: PUBMED:16104681 Nuchal translucency-based Down syndrome screening: barriers to implementation. Nuchal translucency sonography represents an important step forward in screening for Down syndrome. It is likely that a range of screening programs that rely on nuchal translucency sonography will become popular for general population screening. However, a number of practical implementation issues remain to be resolved before widespread utilization of this technique can be endorsed. This paper addresses some of these issues, including sonographer training, standardized nuchal translucency sonography protocol, ongoing nuchal translucency quality assurance programs, how to interpret Down syndrome risk, availability of first trimester diagnosis, and how to combine screening tests across different gestational ages. abstract_id: PUBMED:15838748 First trimester increased nuchal translucency associated with fetal achondroplasia. A 30-year-old woman, gravida 2, para 1, was referred for nuchal translucency, free beta-human chorionic gonadotropin (beta-hCG), and pregnancy-associated plasma protein A (PAPP-A) screening at 12 weeks gestation. The nuchal translucency was increased to 3.8 mm, resulting in a post-test result of 1:15 risk for Down syndrome by a combination of maternal age plus nuchal translucency, and 1:5 by a combination of maternal age plus nuchal translucency plus free beta-hCG plus PAPP-A. The patient underwent uncomplicated chorionic villus sampling that demonstrated a normal 46,XY fetus. The patient was then followed up weekly by ultrasound. At 18 weeks gestation, generalized features of rhizomelic micromelia together with macrocrania and narrow thorax were seen. All fetal long bones measured more than 2 standard deviations below the mean value. At that time, DNA analysis was performed on stored villi and a G380R mutation in the gene encoding fibroblast growth factor receptor 3 on chromosome 4p16.3 was found, leading to a prenatal diagnosis of achondroplasia. The couple opted for termination of pregnancy after counseling. Postmortem X-ray was done and confirmed the ultrasound diagnosis of achondroplasia. This is the second known case of increased nuchal translucency in the first trimester associated with a prenatally confirmed diagnosis of fetal osteochondrodysplasia. abstract_id: PUBMED:16394033 Relation between increased fetal nuchal translucency thickness and chromosomal defects. Objective: To examine the prevalence and distribution of all chromosomal defects in fetuses with increased nuchal translucency thickness. Methods: Assessment of risk for trisomy 21 was carried out by a combination of maternal age and fetal nuchal translucency thickness at 11-13+6 weeks. A search of the database was made to identify, first, all singleton pregnancies in which fetal karyotyping was carried out and, second, the cases where the fetal nuchal translucency was equal to or above the 95th centile for fetal crown-rump length.
The prevalence and distribution of chromosomal defects were determined for each nuchal translucency category: between the 95th centile for crown-rump length and 3.4 mm, 3.5-4.4 mm, 4.5-5.4 mm, 5.5-6.4 mm, 6.5-7.4 mm, 7.5-8.4 mm, 8.5-9.4 mm, 9.5-10.4 mm, 10.5-11.4 mm, and 11.5 mm or more. Results: The search identified 11,315 pregnancies. The median maternal age was 34.5 (range 15-50) years, and the median fetal crown-rump length was 64 (range 45-84) mm. The fetal karyotype was abnormal in 2,168 (19.2%) pregnancies, and the incidence of chromosomal defects increased with nuchal translucency thickness from approximately 7% for those with nuchal translucency between the 95th centile for crown-rump length and 3.4 mm to 75% for nuchal translucency of 8.5 mm or more. In the majority of fetuses with trisomy 21, the nuchal translucency thickness was less than 4.5 mm, whereas in the majority of fetuses with trisomies 13 or 18 it was 4.5-8.4 mm, and in those with Turner syndrome it was 8.5 mm or more. Conclusion: In fetuses with increased nuchal translucency, approximately one half of the chromosomally abnormal group is affected by defects other than trisomy 21. The distribution of nuchal translucency is different for each type of chromosomal defect. Level Of Evidence: II-3. Answer: Isolated echogenic foci in the fetal heart do not significantly increase the risk of trisomy 21 in a population that has already been screened by nuchal translucency. An observational study compared data from singleton pregnancies with isolated cardiac echogenic foci at the second-trimester anomaly scan to a control group with normal anomaly scans. All pregnancies had undergone first-trimester nuchal translucency screening. The study found no statistically significant difference in the prevalence of trisomy 21 between the study group, which had no cases, and the control population, which had three cases. The individual risks of trisomy 21 were calculated by adjusting the previous risk based on maternal age and first-trimester nuchal translucency, assuming that echogenic foci did not alter each individual risk calculation. The observed number of cases was compared with the expected number in both study and control populations, and the most likely event was observing no cases of trisomy 21 in the study group if echogenic foci did not increase the risk of this chromosomal abnormality (P = 0.62) (PUBMED:11529991). Therefore, the finding of isolated echogenic foci at the time of the 20-week scan does not significantly change the risks of trisomy 21 if background risk and previous nuchal translucency measurements are taken into account in the individual risk calculation (PUBMED:11529991).
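As an editorial aside, the expected-versus-observed comparison in PUBMED:11529991 treats each pregnancy as carrying its own trisomy 21 probability: the expected case count is the sum of those probabilities, and the chance of seeing zero cases is the product of their complements. The sketch below uses randomly invented per-pregnancy risks, not the study's data, purely to show the arithmetic.

```python
# Hypothetical sketch of the expected-vs-observed logic in PUBMED:11529991.
import numpy as np

rng = np.random.default_rng(0)
risks = rng.uniform(1 / 10000, 1 / 250, size=239)  # invented per-pregnancy trisomy 21 risks

expected_cases = risks.sum()          # expected number of affected fetuses
p_zero_cases = np.prod(1.0 - risks)   # probability of observing no cases at all

print(f"expected trisomy 21 cases: {expected_cases:.2f}")
print(f"P(observing 0 cases): {p_zero_cases:.2f}")
```

With risks of this magnitude, observing zero cases among 239 pregnancies is entirely plausible, which is the intuition behind the study's P = 0.62.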
Instruction: Should male gender assignment be considered in the markedly virilized patient with 46,XX and congenital adrenal hyperplasia? Abstracts: abstract_id: PUBMED:27125449 Long term outcomes in 46, XX adult patients with congenital adrenal hyperplasia reared as males. Patients with Congenital Adrenal Hyperplasia (CAH) owing to 21-hydroxylase deficiency and whose karyotype is 46, XX are usually assigned to the female gender. Reported herein are the long term outcomes in three patients with CAH whose karyotype is 46, XX and who were reared as males. A retrospective review of three CAH patients with a 46, XX karyotype who were reared as males was conducted. Gender assignment, clinical and biochemical data, and pre- and post-genitoplasty genital examinations were reviewed. Gender identity was tested by an extensive questionnaire. Gender role, sexual preference, marital status and sexual satisfaction were evaluated by interview. The three patients were genotyped for the CYP21A2 gene confirming the diagnosis of CAH. Owing to genital virilization, cultural preferences for male gender and the lack of newborn screening programs, the three patients reported herein were assigned to the male gender at birth before the diagnosis of CAH was established. In adulthood the patients remained significantly virilized. Thorough psychosexual assessments in adulthood revealed well-established male gender identities compatible with their male gender assignments at birth. In all three patients, gender role and behavior were consistent with male gender identity including sexual intercourse with female partners. The three patients reported herein revealed that male gender assignment to CAH patients with a 46, XX karyotype may have a successful outcome providing there is strong parental support and expert endocrine care. No standard guidelines have been published for the gender assignment of CAH patients with a 46, XX karyotype and genital ambiguity. More studies concerning gender assignment in CAH patients with a 46, XX karyotype reared as males are needed. abstract_id: PUBMED:27867895 Gender of rearing and psychosocial aspect in 46 XX congenital adrenal hyperplasia. Background: In congenital adrenal hyperplasia (CAH) with ambiguous genitalia, assigning gender of rearing can be complex, especially if the genitalia are highly virilized. Apart from karyotype, prenatal androgen exposure, the patient's gender orientation, and sociocultural and parental influences play a role. The aim of this study was to assess gender dysphoria and psychosocial issues in patients with CAH raised as males and females. Materials And Methods: This is a cross-sectional study that includes patients (old and new) with CAH who were treated by us in the last 6 months. A semi-structured interview proforma was used to elicit the history and psychosocial background of the patients. The clinical and biochemical details were noted. For psychological analysis, patients were screened for gender dysphoria using the Parent Report Gender Identity Questionnaire for children <12 years and the Gender Identity/Gender Dysphoria Questionnaire for Adolescents and Adults. Results: We analyzed 22 46,XX CAH patients, of whom 3 were reared as males and 19 as females. Among the 19 patients reared as females, 17 patients showed no gender dysphoria. Two patients revealed gender dysphoria as indicated by their marginally low scores on the gender dysphoria assessment.
However, in view of current literature and the age groups of the patients, behavior of the 6-year-old patient can be best understood as being tomboyish. Gender dysphoria in the 22-year-old can be explained by the dominance of psychosocial factors and not hormones alone. Among the three patients reared as males, two prepubertal were satisfied with their male gender identity. The third patient, aged 32 years, had gender dysphoria when reared as a male that resolved when gender was reassigned as female and feminizing surgery was done. Conclusion: Gender assignment in 46 XX CAH is guided by factors such as degree of virilization of genitalia, gender orientation, patient involvement, sociocultural, and parental influences. abstract_id: PUBMED:20728155 Should male gender assignment be considered in the markedly virilized patient With 46,XX and congenital adrenal hyperplasia? Purpose: We assess the outcome in 46,XX men with congenital adrenal hyperplasia who were born with Prader 4 or 5 genitalia and assigned male gender at birth. Materials And Methods: After receiving institutional review board approval and subject consent we reviewed the medical records of 12 men 35 to 69 years old with 46,XX congenital adrenal hyperplasia, of whom 6 completed social and gender issue questionnaires. Results: All subjects were assigned male gender at birth, were diagnosed with virilizing congenital adrenal hyperplasia at age greater than 3 years and indicated a male gender identity with sexual orientation to females. Ten of the 12 subjects had always lived as male and 2 who were reassigned to female gender in childhood subsequently self-reassigned as male. Nine of the 12 men had long-term female partners, including 7 married 12 years or more. The 3 subjects without a long-term female partner included 1 priest, 1 who was reassigned female gender, married, divorced and self-reassigned as male, and 1 with a girlfriend and sexual activity. All except the priest and the subject who was previously married when female indicated a strong libido and frequent orgasmic sexual activity. Responses to self-esteem, masculinity, body image, social adjustment and symptom questionnaires suggested adjustments related to the extent of familial and social support. Conclusions: Outcome data on severely masculinized 46,XX patients with congenital adrenal hyperplasia who were assigned male gender at birth indicate male gender identity in adulthood with satisfactory male sexual function in those retaining male genitalia. In men who completed questionnaires results were poorer in those lacking familial/social support. Male gender of rearing may be a viable option for parents whose children are born with congenital adrenal hyperplasia, a 46,XX karyotype and male genitalia, although positive parental and other support, and counseling are needed for adjustment. abstract_id: PUBMED:38149768 Extensive Literature Review of 46, XX Newborns with Congenital Adrenal Hyperplasia (CAH) and Severe Genital Masculinization: Should They be Assigned and Reared Male? 46, XX individuals born with severely masculinized genitals due to congenital adrenal hyperplasia (CAH) who have been assigned males at birth and reared male can successfully establish a male gender identity/role, find employment, marry, function sexually with a female partner, and develop positive mental health status. 
While there were a few individuals who reportedly did not fare well or who changed gender to female, the majority of those identifying as males appear to have an overall good quality of life. Parental/family support, along with the support of others, appears essential to a positive outcome as a male or as a female. Reasons are given why, in certain situations, male gender assignment and rearing is justified and warrants serious consideration. Disorders of Sex Differentiation (DSD) teams should inform parents of the option of male assignment and rearing in 46, XX CAH infants with severe genital masculinization, a rare condition. To provide this option is concordant with the principles of ethics and transparency, and with the Endocrine Society Guidelines and the American Academy of Pediatrics' policy of fully informed consent. abstract_id: PUBMED:33442128 Case Report of an Adult Female with Neglected Congenital Adrenal Hyperplasia (CAH). An apparently well 27-year-old phenotypically male adult was seen at the endocrine clinic for gender assignment. The patient had been raised as a male and identifies as such. Abdominal CT scan showed a unilateral left adrenal mass and karyotyping revealed a 46,XX female karyotype. She was diagnosed with simple virilizing CAH and needed thorough counselling with subsequent management by a multidisciplinary team. abstract_id: PUBMED:32212580 Gender Identity and Assignment Recommendations in Disorders of Sex Development Patients: 20 Years’ Experience and Challenges. Objective: Gender assignment in infants and children with disorders of sex development (DSD) is a stressful situation for both patients/families and medical professionals. Methods: The purpose of this study was to investigate the results of gender assignment recommendations in children with DSD in our clinic from 1999 through 2019. Results: The mean age of the 226 patients with DSD at the time of first admission was 3.05 ± 4.70 years. 50.9% of patients were 46,XY DSD, 42.9% were 46,XX DSD and 6.2% were sex chromosome DSD. Congenital adrenal hyperplasia (majority of patients had 21-hydroxylase deficiency) was the most common etiological cause of 46,XX DSD. In 46,XX patients, 87 of 99 (89.7%) were recommended to be supported as a female, 6 as a male, and 4 were followed up. In 46,XY patients, 40 of 115 (34.8%) were recommended to be supported as a female, and 70 as male (60.9%), and 5 were followed up. In sex chromosome DSD patients, 3 of 14 were recommended to be supported as a female, 9 as a male. The greatest difficulty in making gender assignment recommendations was in the 46,XY DSD group. Conclusion: In DSD cases, the etiologic diagnosis, psychiatric gender orientation, family expectations, phallus length and Prader stage guided the gender assignment recommendation, especially the first two criteria. It is important to share these experiences among the medical professionals who are routinely charged with this difficult task in multidisciplinary councils. abstract_id: PUBMED:30035430 Disorders of sexual differentiation in children: a critical look at open questions (part I). Management of children with disorders of sexual differentiation (DSD) continues to raise questions and draw criticism. This review discusses some contradictions concerning the newly proposed terminology, nomenclature, and classification. Congenital adrenal hyperplasia (CAH) is the most common cause of DSD, and its management also remains controversial.
Therefore, in this part of the article, we critically analyze the available literature on gender identity and on the need for and timing of surgery in children with DSD and a 46,XX karyotype. abstract_id: PUBMED:20926538 Approach to assigning gender in 46,XX congenital adrenal hyperplasia with male external genitalia: replacing dogmatism with pragmatism. The goal of sex assignment is to facilitate the best possible quality of life for the patient. Factors such as reproductive system development, sexual identity, sexual function, and fertility are important considerations in this regard. Although some DSD gender assignments are relatively straightforward, those with midstage genital ambiguity and unclear gonadal function represent a major challenge. A recent major change in DSD care has been to encourage a male assignment for 46,XY infants with ambiguous genitalia who have evidence of testicular function and in utero central nervous system androgen exposure. In contrast, assignment of virilized 46,XX DSD patients remains female when ovaries and internal organs are present, regardless of the extent of virilization of the external genitalia. In this paper, we propose consideration of male assignment for these 46,XX patients who have fully developed male genitalia based on available outcome data. abstract_id: PUBMED:31801784 Child with '46, XX' disorder of sex development: clues to diagnose aromatase deficiency. A diagnosis of congenital adrenal hyperplasia (CAH) in a '46, XX' newborn with ambiguous genitalia is almost a 'knee-jerk reaction' for the paediatrician because of its higher frequency and its life-threatening consequences if it remains undiagnosed and hence untreated. Aromatase deficiency (AD), a rare cause of '46, XX' disorder of sex development, mimics virilising CAH in many aspects; thus, the disease is often overlooked. Diagnosis of AD in women is much easier around puberty due to the presence of primary amenorrhoea, undeveloped breasts, androgen excess and tall stature with eunuchoid proportions. Diagnosing AD with confidence immediately after birth or during early childhood is a challenging task without genetic analysis. In resource-restricted settings, AD remains a diagnosis of exclusion, particularly in this age group, and a history of maternal virilisation, non-progressive genital ambiguity, elevated gonadotrophins (follicle-stimulating hormone >> luteinising hormone), and mildly delayed bone age with/without enlarged polycystic ovaries serve as important clues to the underlying AD. abstract_id: PUBMED:33374095 Clinical Characteristics of 46,XX Males with Congenital Adrenal Hyperplasia. Objective: To retrospectively evaluate the follow-up data in patients with 46,XX congenital adrenal hyperplasia (CAH) who were raised male. Methods: A national database was created. Patient data were recorded in a standard data form. Results: The median (range) age of diagnosis was three (0.1-18.3) years in 44 patients. Twenty-nine cases were diagnosed after the age of two years. Most (95.4%) cases were stage 4-5 virilized. Hysterectomy and bilateral salpingo-oophorectomy, at a median age of 7.25 (2.4-25.3) years, were performed in 35 cases. Testicular prostheses were placed in 11 (25%) cases at a median age of 11.2 (2.8-17) years. The median final height was 149.2 (132.8-172) cm in 38 patients, including simple virilizing (n=18), salt-wasting (n=6), and 11-beta hydroxylase deficiency (n=12). Of the 16 patients above the age of eighteen, university education was completed in 25%.
Conclusion: It was seen that most (65.9%) of the 46,XX CAH cases raised male were diagnosed after two years of age. In these cases, hysterectomy and bilateral salpingo-oophorectomy, genital corrective surgeries and testicular prosthesis operations were performed over a very wide age range. Answer: Based on the information provided in the abstracts, male gender assignment can be considered in markedly virilized patients with a 46,XX karyotype and congenital adrenal hyperplasia (CAH). Several studies and case reports suggest that individuals with 46,XX CAH who were assigned male at birth and reared as males have established male gender identities and roles, with some leading successful lives with established relationships and satisfactory sexual function with female partners (PUBMED:20728155; PUBMED:38149768; PUBMED:27125449). The decision for gender assignment in such cases is complex and influenced by multiple factors including the degree of genital virilization, cultural preferences, parental choice, and the availability of medical and psychological support (PUBMED:27867895; PUBMED:32212580). It is also important to consider the individual's gender orientation, sociocultural context, and the potential for future gender dysphoria (PUBMED:27867895). The literature suggests that with strong parental support and expert endocrine care, male gender assignment for 46,XX CAH patients may have a successful outcome (PUBMED:27125449). However, it is also noted that there is no standard guideline for gender assignment in these cases, and more studies are needed (PUBMED:27125449). Additionally, the presence of familial and social support, along with counseling, is crucial for positive adjustment regardless of the gender of rearing (PUBMED:20728155). In summary, male gender assignment can be a viable option for some 46,XX CAH patients with severe genital masculinization, but it requires careful consideration of individual circumstances, multidisciplinary support, and informed consent (PUBMED:38149768; PUBMED:20728155; PUBMED:27125449). It is essential to provide parents with all available information to make an informed decision that aligns with ethical principles and guidelines (PUBMED:38149768).
Instruction: Does diffuse irrigation result in improved radiofrequency catheter ablation? Abstracts: abstract_id: PUBMED:25107946 Does diffuse irrigation result in improved radiofrequency catheter ablation? A prospective randomized study of right atrial typical flutter ablation. Aims: Recent developments of open irrigated catheters have sought to create uniform cooling of the entire ablating electrode. The aim of this randomized study was to assess whether the diffuse irrigation of the Coolflex® (CF) catheter results in improved short-term procedural benefits in patients undergoing ablation of right atrial typical flutter. Methods And Results: Sixty consecutive patients (age 62 ± 13 years) with typical atrial flutter were prospectively randomized to ablation of the cavotricuspid isthmus (CTI) using either a standard 3.5 mm tip ablation catheter with six distal irrigation channels (6C) (30 patients) or a 4 mm tip fully irrigated ablation catheter (CF) (30 patients). There were no significant differences seen between procedures performed with the diffusely irrigated CF catheter and the standard six-channel irrigated-tip catheter. This applied to the total procedural duration, RF duration, fluoroscopic duration, the total amount of irrigation fluid, and the occurrence of steam pop. Conclusions: The use of diffuse irrigation at the ablation catheter tip neither facilitates lesion formation nor reduces the amount of irrigation during RF ablation for typical right atrial flutter using recommended flow and power settings. abstract_id: PUBMED:35322477 Radiofrequency ablation using a novel insulated-tip ablation catheter can create uniform lesions comparable in size to conventional irrigated ablation catheters while using a fraction of the energy and irrigation. Introduction: During radiofrequency ablation (RFA) using conventional RFA catheters (RFC), ~90% of the energy dissipates into the bloodstream/surrounding tissue. We hypothesized that a novel insulated-tip ablation catheter (SMT) capable of blocking the radiofrequency path may focus most of the energy into the targeted tissue while utilizing reduced power and irrigation. Methods: This study evaluated the outcomes of RFA using SMT versus an RFC in silico, ex vivo, and in vivo. Radiofrequency applications were delivered over porcine myocardium (ex vivo) and porcine thigh muscle preparations superfused with heparinized blood (in vivo). Altogether, 274 radiofrequency applications were delivered using SMT (4-15 W, 2 or 20 ml/min) and 74 applications using RFC (30 W, 30 ml/min). Results: RFA using SMT proved capable of directing 66.8% of the radiofrequency energy into the targeted tissue. Accordingly, low power-low irrigation RFA using SMT (8-12 W, 2 ml/min) yielded lesion sizes comparable with RFC, whereas high power-high irrigation (15 W, 20 ml/min) RFA with SMT yielded lesions larger than RFC (p < .05). Although SMT was associated with greater impedance drops ex vivo and in vivo, ablation using RFC was associated with increased charring/steam pop/tissue cavitation (p < .05). Lastly, lesions created with SMT were more homogeneous than those created with RFC (p < .001). Conclusion: Low power-low irrigation (8-12 W, 2 ml/min) RFA using the novel SMT ablation catheter can create more uniform lesions of a size comparable with RFC, with reduced charring/steam pop/tissue cavitation. High power-high irrigation (15 W, 20 ml/min) RFA with SMT yields lesions larger than RFC.
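As an editorial aside, the primary comparisons in PUBMED:25107946 reduce to two-sample tests between the two 30-patient arms. The sketch below uses simulated endpoint values with invented means and spreads, not the trial's data, purely to show the form of such a comparison.

```python
# Hypothetical sketch of the arm-to-arm comparison in PUBMED:25107946
# (Coolflex vs. six-channel catheter, e.g. total RF duration in minutes).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rf_time_cf = rng.normal(loc=9.5, scale=3.0, size=30)  # invented values, CF arm
rf_time_6c = rng.normal(loc=9.8, scale=3.2, size=30)  # invented values, 6C arm

t_stat, p_value = stats.ttest_ind(rf_time_cf, rf_time_6c)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")  # p > 0.05 mirrors the trial's null result
```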
abstract_id: PUBMED:36852910 Distribution of excitation recoverable myocardium after radiofrequency ablation and its relation to energy application time and irrigation. Introduction: Radiofrequency (RF) catheter ablation induces excitation recoverable myocardium around durable core lesions, and its distribution may be different depending on energy delivery methods. Methods And Results: In coronary-perfused porcine hearts, pacing threshold through the ventricle was measured using eight-pole (1-mm distance) needle electrodes vertically inserted into myocardium before, within 3 min after and 40 min after 40 W ablation with 10-g catheter contact (Group 1: irrigation catheter for 15 s, Group 2: irrigation catheter for 40 s, Group 3: nonirrigation catheter for 15 s, Group 4: nonirrigation catheter for 40 s). Ablation was completed in all 12 applications in Groups 1-3, but in only 8 of 12 applications in Group 4 because of excessive temperature rise. Within 3 min after ablation, electrodes failing to capture at 10.0 V pacing were distributed from the surface into the myocardium, and this depth was greater after 40 s than after 15 s ablation. 40 min after ablation, excitation recovery at one or more electrodes below the durable lesion was observed in all Groups. Excitation recovery electrodes were also observed on the surface in Group 1 but not in the other Groups. Accordingly, the number of excitation-recovered electrodes was larger in Group 1 than in the other Groups. Conclusions: Regardless of the ablation methods, excitation recoverable myocardium was present around 1.0 mm below the durable lesions. Lesions created by short application time using an irrigation catheter may have included large excitation recoverable myocardium soon after ablation because of the presence of reversible myocardium on well-irrigated myocardial surfaces. abstract_id: PUBMED:38367008 Safety of ventricular arrhythmia radiofrequency ablation with half-normal saline irrigation. Aims: Failure of radiofrequency (RF) ablation of ventricular arrhythmias is often due to inadequate lesion size. Irrigated RF ablation with half-normal saline (HNS) has the potential to increase lesion size and reduce sodium delivery to the patient if the same volume of RF irrigant were used for normal saline (NS) and HNS but could increase risks related to steam pops and lesion size. This study aims to assess periprocedural complications and acute ablation outcomes of ventricular arrhythmia ablation with HNS. Methods And Results: Prospective assessment of outcomes was performed in 1024 endocardial and/or epicardial RF ablation procedures in 935 consecutive patients (median age 64 years, 71.2% men, 73.4% cardiomyopathy, 47.2% sustained ventricular tachycardia). Half-normal saline was selected at the discretion of the treating physician. Radiofrequency ablation power was generally titrated to a ≤15 Ω impedance fall with intracardiac echocardiography monitoring. Half-normal saline was used in 900 (87.9%) and NS in 124 (12.1%) procedures. Any adverse event within 30 days occurred in 13.0% of patients treated with HNS RF ablation, including 4 (0.4%) strokes/transient ischaemic attacks and 34 (3.8%) pericardial effusions requiring treatment (mostly related to epicardial access). Two steam pops with perforation required surgical repair (0.2%). Patients who received NS irrigation had less severe disease and arrhythmias. In multivariable models, adverse events and acute success of the procedure were not related to the type of irrigation.
Conclusion: Half-normal saline irrigation RF ablation with power guided by impedance fall and intracardiac echocardiography has an acceptable rate of complications and acute ablation success while administering half of the saline load expected for NS irrigation. abstract_id: PUBMED:22296283 Atrial fibrillation ablation using a closed irrigation radiofrequency ablation catheter. Background: Catheter ablation is an effective therapy for symptomatic, medically refractory atrial fibrillation (AF). Open-irrigated radiofrequency (RF) ablation catheters produce transmural lesions at the cost of increased fluid delivery. In vivo models suggest closed-irrigated RF catheters create equivalent lesions, but clinical outcomes are limited. Methods: A cohort of 195 sequential patients with symptomatic AF underwent stepwise AF ablation (AFA) using a closed-irrigation ablation catheter. Recurrence of AF was monitored and outcomes were evaluated using Kaplan-Meier survival analysis and Cox proportional hazards models. Results: Mean age was 59.0 years, 74.9% were male, 56.4% of patients had paroxysmal AF and mean duration of AF was 5.4 years. Patients had multiple comorbidities including hypertension (76.4%), tobacco abuse (42.1%), diabetes (17.4%), and obesity (mean body mass index 30.8). The median follow-up was 55.8 weeks. Overall event-free survival was 73.6% with one ablation and 77.4% after reablation (reablation rate was 8.7%). Median time to recurrence was 26.9 weeks. AF was more likely to recur in patients being treated with antiarrhythmic therapy at the time of last follow-up (recurrence rate 30.3% with antiarrhythmic drugs, 13.2% without antiarrhythmic drugs; hazard ratio [HR] 2.2, 95% confidence interval [CI] 1.1-4.4, P = 0.024) and in those with a history of AF greater than 2 years duration (HR 2.7, 95% CI 1.1-6.9, P = 0.038). Conclusions: Our study represents the largest cohort of patients receiving AFA with closed-irrigation ablation catheters. We demonstrate comparable outcomes to those previously reported in studies of open-irrigation ablation catheters. Given the theoretical benefits of a closed-irrigation system, a large head-to-head comparison using this catheter is warranted. abstract_id: PUBMED:35322476 Will the insulated-tip radiofrequency catheter transform ablation procedures? Radiofrequency (RF) ablation has been the most widely employed energy source for catheter ablation to date. However, most of the conventional RF ablation energy dissipates into the bloodstream before reaching the target tissue. Technology that conveys RF energy exclusively toward target tissue may potentially improve the quality, safety, and outcome of RF ablation procedures. RF ablation using a novel insulated-tip catheter (Sirona Medical Technologies [SMT]) may refine RF ablation in the future to minimize the risk of iatrogenic complications. Although it is still unclear whether the results of the SMT catheter can be translated to a beating human heart, the data for the SMT catheter in this study are very promising. abstract_id: PUBMED:28167080 Epicardial Catheter Ablation of Ventricular Tachycardia. Over the last two decades, epicardial catheter ablation has evolved into a practical approach for treatment of ventricular tachycardia (VT). There are certain considerations when performing this procedure. First, the presence of epicardial fat can diminish peak-to-peak electrogram amplitude and also impede radiofrequency energy delivery.
Hence, epicardial VT ablation should be performed with cooled-tip radiofrequency using reduced irrigation flow within a relatively 'dry' pericardial milieu. Furthermore, catheter orientation is key when performing epicardial ablation. Lastly, hemopericardium remains the most common major adverse event of epicardial ablation and its presenting timeline may be used to identify the precise nature of this complication. abstract_id: PUBMED:27353324 Impact of surround flow catheter tip irrigation in contact force ablation on the incidence of asymptomatic oesophageal lesions after atrial fibrillation ablation: a prospective comparative study. Aims: The aim of this study was to evaluate the impact of a recently introduced contact force ablation catheter with modified irrigation technology compared with a conventionally irrigated ablation catheter on the incidence of endoscopically detected oesophageal lesions (EDEL). Methods And Results: Patients with symptomatic, drug-refractory paroxysmal or persistent atrial fibrillation (AF) who underwent left atrial radiofrequency (RF) catheter ablation were prospectively enrolled. Patients were ablated using a single-tip RF contact force ablation catheter with conventional irrigation (Group 1; n = 50) or with a recently introduced intensified 'surround flow' irrigation technology (Group 2; n = 50). Assessment of EDEL was performed by oesophagogastroduodenoscopy in all patients after ablation. A total of 100 patients (mean age 63.6 ± 12.1 years; men 58%) with paroxysmal (n = 41; 41%) or persistent AF were included. Groups 1 and 2 patients were comparable in regard to baseline characteristics and procedural parameters, especially ablation time at the posterior left atrial wall. Overall, 13 patients (13%) developed EDEL after AF ablation (8 oesophageal ulcerations, 5 erythema). The incidence of EDEL including oesophageal ulcerations was higher in Group 2 than in Group 1 patients, without statistical significance (18 vs. 8%, P = 0.23). One pericardial tamponade and one access site bleeding occurred in Group 2. No further adverse events were reported in either group. Conclusion: According to these preliminary results, the use of an improved ablation catheter irrigation technology (surround flow) in conjunction with contact force measurement was associated with a higher but not statistically significant probability of oesophageal thermal lesions. Further studies including larger patient cohorts are needed. abstract_id: PUBMED:23875670 Radiofrequency ablation at low irrigation flow rates using a novel 12-hole gold open-irrigation catheter. Background: High irrigation rates during radiofrequency (RF) ablation may cause fluid overload and limit lesion size. This in vivo animal study assessed the safety and efficacy of RF ablation at low irrigation rates using a novel 12-hole gold catheter. Methods: A total of 103 lesions, created on the thigh of five mongrel dogs, were analyzed. Lesions were created using a 12-hole irrigated gold-tip (Au) and a six-hole irrigated platinum-iridium (PtIr) catheter (both 7F/3.5-mm electrode; BIOTRONIK SE & CO, KG, Berlin, Germany) in parallel and perpendicular orientation. RF current was delivered for 60 seconds at 30 W using 8 mL/min and 15 mL/min irrigation. Electrode temperature, steam pops, lesion dimensions, and coagulum formation were recorded.
Results: Electrode temperatures were lower for Au compared to PtIr in parallel (8 mL/min: 38.1 ± 1.7°C vs 48.0 ± 4.8°C, P < 0.0001; 15 mL/min: 36.0 ± 1.5°C vs 46.9 ± 5.4°C, P < 0.0001) and perpendicular position (15 mL/min: 35.5 ± 1.2°C vs 38.4 ± 2.5°C, P = 0.003). The number of steam pops between Au and PtIr was comparable for parallel (8 mL/min: 14% vs 27%, P = 0.65; 15 mL/min: 14% vs 43%, P = 0.21) and perpendicular orientation (8 mL/min: 25% vs 17%, P = 1.00; 15 mL/min: 18% vs 0%, P = 0.48). Au created larger volumes than PtIr at 8 mL/min irrigation (861 ± 251 mm³ vs 504 ± 212 mm³, P = 0.004); however, for 15 mL/min, volumes were comparable (624 ± 269 mm³ vs 768 ± 466 mm³, P = 0.46). No coagulum formation was observed for any of the catheters on the surface or the catheter tip. Conclusion: RF ablation at low flow rate using a novel 12-hole irrigation Au catheter is safe and results in larger lesions than with a PtIr electrode. abstract_id: PUBMED:37946002 The effect of half-normal saline irrigation on lesion characteristics in temperature-flow-controlled ablation. Purpose: Radiofrequency (RF) ablation with half-normal saline (HNS) irrigation is reported to potentially enlarge the local lesion compared to normal saline (NS) in power-controlled ablation (PC-Abl). However, the effect of HNS-irrigation in temperature-flow-controlled ablation (TFC-Abl) on lesion characteristics is unknown. We compared this between TFC-Abl with the QDOT-Micro™ catheter and PC-Abl with the Thermocool SmartTouch SF™ catheter (STSF). Methods: RF applications with NS (n = 480) and HNS (n = 480) irrigation were performed on swine myocardium placed in a circulating saline bath. Lesion characteristics without steam-pops under various conditions (target AI, 400/550; ablation power, 30/50 W; contact force, 10/20/30 g; catheter orientation, perpendicular/parallel) were assessed and compared between the two irrigants. Results: After matching, 343 lesions without steam-pops in each group were evaluated. In PC-Abl, lesion size did not differ between the two groups (NS, 188 ± 97 vs. HNS, 200 ± 95 mm³, p = 0.28 in volume; 33.9 ± 7.3 vs. 34.8 ± 9.5 mm², p = 0.34 in surface area; and 4.0 ± 1.0 vs. 4.0 ± 1.0 mm, p = 0.81 in depth), but steam-pops were more frequently observed with HNS-irrigation (23.8% vs. 37.9%, p = 0.001). In contrast, in TFC-Abl, HNS-irrigation produced significantly larger (214 ± 106 vs. 243 ± 128 mm³, p = 0.017) and deeper (4.0 ± 1.0 vs. 4.3 ± 1.1 mm, p = 0.002) lesions without increasing the risk of steam-pops (15.0% vs 15.0%, p = 0.99). Automatic temperature-guided titration was more frequently observed with HNS-irrigation (54.8% vs. 78.5%, p < 0.001). Conclusions: TFC-Abl with the QDOT-Micro™ catheter utilizing HNS-irrigation might increase the volume and depth of the local lesion without increasing the risk of steam-pops compared to NS-irrigation. Power-controlled ablation with HNS-irrigation showed a similar focal lesion with a higher incidence of steam-pops (SPs) compared to normal saline (NS) irrigation. In contrast, temperature-flow-controlled ablation with HNS-irrigation provided a larger and deeper lesion than NS-irrigation with a similar incidence of SPs. Answer: The use of diffuse irrigation in radiofrequency catheter ablation does not appear to result in improved short-term procedural benefits.
A study comparing the use of a diffusely irrigated Coolflex® catheter with a standard six-channel irrigated-tip catheter for the ablation of right atrial typical flutter found no significant differences in procedural duration, RF duration, fluoroscopic duration, total amount of irrigation fluid, or the occurrence of steam pops between the two methods (PUBMED:25107946). However, it is important to note that advancements in catheter technology and irrigation methods can influence the outcomes of radiofrequency ablation. For instance, a novel insulated-tip ablation catheter (SMT) has been shown to direct a higher percentage of radiofrequency energy into the targeted tissue, allowing for the creation of uniform lesions comparable in size to those made by conventional irrigated ablation catheters, but with reduced power and irrigation. This technology also resulted in fewer instances of charring, steam pop, and tissue cavitation (PUBMED:35322477). Moreover, the distribution of excitation recoverable myocardium after radiofrequency ablation, which can be influenced by energy application time and irrigation, suggests that a short application time using an irrigation catheter may leave a larger area of excitation recoverable myocardium soon after ablation due to the presence of reversible myocardium on well-irrigated myocardial surfaces (PUBMED:36852910). In summary, while diffuse irrigation itself does not necessarily improve the outcomes of radiofrequency catheter ablation (PUBMED:25107946), the use of novel catheter technologies and irrigation strategies can have a significant impact on lesion characteristics and procedural safety (PUBMED:35322477; PUBMED:36852910).
Instruction: Is the Age of Patients with Iatrogenic Bile Duct Injuries Increasing? Abstracts: abstract_id: PUBMED:16698592 Clinical analysis of patients with iatrogenic bile duct injury. Background: The main cause of iatrogenic bile duct injury is misidentification of the common bile duct as the cystic duct. In this article, we summarize the experience in the treatment of 112 patients with iatrogenic bile duct injury. Method: Clinical data of these patients treated at 10 hospitals in the Songhua River area, Heilongjiang province, China from January 1978 to January 2005 were analyzed retrospectively. Results: In 55.4% of patients (62/112), iatrogenic bile duct injury was due to misidentification of the anatomy of Calot's triangle before cholecystectomy. Their diagnosis was based on clinical features, celiac puncture and imaging examination, in which ultrasonography was most sensitive, giving a diagnostic rate of 97.5%. Six types of injury were identified according to their locations, and type III damage was commonly seen (92/112). The curative rate in this group was 95.5% (107/112). Eighty-seven patients (77.7%) underwent Roux-en-Y choledochojejunostomy with a cure rate of 94.3% (82/87). Conclusion: The prevention of iatrogenic bile duct injury lies in identifying the topography of the extrahepatic bile ducts. Roux-en-Y choledochojejunostomy is usually the treatment of choice. abstract_id: PUBMED:2083927 The therapy of iatrogenic lesions of the bile duct. Forty-three patients were operated on for iatrogenic lesions of the bile duct. Only one patient had a biliary lesion which occurred in the course of distal gastric resection. All other lesions were observed during cholecystectomy. Injury of the bile duct was detected intraoperatively in sixteen cases. In 10 patients, lesions were observed in the postoperative period and in 17 patients, the post-operative diagnosis was made on the basis of symptoms of stenosis of the bile duct. Satisfactory results can be obtained by suturing the common bile duct and splinting with a T-tube where the lesion is partial and detected in the course of surgery. In the case of patients with strictures, an anastomosis (choledochojejunostomy Roux-en-Y loop) should be performed. Strictures involving the hepatic bifurcation and the right hepatic duct have a higher incidence of restenosis, and transhepatic splinting of the anastomosis can therefore produce better results. Long-term transhepatic drainage has the advantage that replacement of the drain is relatively straightforward and complete dislocation impossible. Four of our patients died postoperatively, three of multiple septic organ failure due to preoperative biliary peritonitis or cholangitis, and one of a pulmonary embolism. Satisfactory long-term results after correction of an iatrogenic lesion of the bile duct can be obtained if the corrective procedure is undertaken immediately, prior to the onset of biliary cirrhosis. abstract_id: PUBMED:17897905 Classification of iatrogenic bile duct injury. Background: Iatrogenic bile duct injury continues to be an important clinical problem, resulting in serious morbidity, and occasional mortality, to patients. The ease of management, operative risk, and outcome of bile duct injuries vary considerably, and are highly dependent on the type of injury and its location. This article reviews the various classification systems of bile duct injury.
Data Sources: A Medline, PubMed database search was performed to identify relevant articles using the keywords "bile duct injury", "cholecystectomy", and "classification". Additional papers were identified by a manual search of the references from the key articles. Results: Traditionally, biliary injuries have been classified using the Bismuth's classification. This classification, which originated from the era of open surgery, is intended to help the surgeons to choose the appropriate technique for the repair, and it has a good correlation with the final outcome after surgical repair. However, the Bismuth's classification does not encompass the whole spectrum of injuries that are possible. Bile duct injuries during laparoscopic cholecystectomy tend to be more severe than those with open cholecystectomy. Strasberg's classification made Bismuth's classification much more comprehensive by including various other types of extrahepatic bile duct injuries. Our group, Bergman et al, Neuhaus et al, Csendes et al, and Stewart et al have also proposed other classification systems to complement the Bismuth's classification. Conclusions: None of the classification systems is universally accepted as each has its own limitations. Hopefully, a universally accepted comprehensive classification system will be published in the near future. abstract_id: PUBMED:21178207 Some aspects of classification and treatment strategy of iatrogenic bile duct injury. Iatrogenic bile duct injury continues to be an important clinical problem, resulting in serious morbidity, and occasional mortality, to patients. None of the classification systems is universally accepted and used worldwide as each has its own limitations. This article reviews the various classification systems of bile duct injury. Traditionally, biliary injuries have been classified using the Bismuth's classification. This classification does not encompass the whole spectrum of injuries during laparoscopic cholecystectomy. Strasberg's classification made Bismuth's classification much more comprehensive by including various other types of extrahepatic bile duct injuries. Our own classification is simple, embraces the majority of possible variants of bile duct injuries and is convenient in clinical use. abstract_id: PUBMED:9275685 Studies of aetiology and management about iatrogenic injuries of bile duct. 74 patients with iatrogenic injuries of the bile duct were treated from June 1965 to October 1994 in our hospital. Iatrogenic extrahepatic bile duct injuries may be caused by some errors during cholecystectomy. The operator did not realize that cholecystectomy itself is a procedure with inherent and potential risks. The operator was careless, rude and arrogant in operating. The operator lacked experience and skill, and was unable to deal with difficult situations during operation. We also discussed the management of bile duct injuries, and suggested that intraoperative injuries of the extrahepatic bile duct should be reconstructed immediately, including repair or end-to-end anastomosis of the injured bile duct. Cholecystojejunostomy (Roux-en-Y technique) is the best reconstruction procedure for patients whose bile duct injuries are found after operation, or after failed repeated repair and reconstructive operations of the bile duct. Satisfactory results could be obtained.
Patients with iatrogenic bile duct injury often require a variety of interventions that may range from endoscopic procedures to complex open surgery. Injuries involving both stricture and leakage, in particular, are a clinical challenge as early measures are mandatory because of leakage. However, early open surgery is often hazardous in these cases because of ongoing infection at the point of injury and around it caused by the leakage. Therefore, these patients should be treated endoscopically and percutaneously as long as possible, although the iatrogenic, often narrow, stricture hampers these efforts. In the present paper, the clinical value of a rendezvous technique is emphasized. This technique improves the success rate for endoscopic stenting in iatrogenic bile duct injuries. In the literature, there are no earlier reports on biliary rendezvous procedures in patients with bile duct injuries. abstract_id: PUBMED:2503779 Iatrogenic bile duct injuries: avoidance, recognition, and treatment. Common bile duct injury has potentially catastrophic implications if unrecognized or inadequately managed at the time of initial surgical intervention. Every precaution necessary to avoid operative injury to the common duct should be taken, and with experience and meticulous attention to detail, this complication should rarely be encountered in the career of a well-trained general surgeon. If this unexpected occurrence ever happens, adherence to certain principles of recognition and management may preclude significant morbidity and mortality. Two patients with injured common bile ducts are discussed. One patient presented with an iatrogenic stricture due to previous operative trauma. A second patient suffered a complete transection of the common duct while undergoing a routine cholecystectomy; the transection was recognized and repaired primarily. The fundamental principles of bile duct reconstruction and biliary-enteric anastomoses are discussed. The prevention of stricture formation is explained, with specific guidelines presented to minimize this potentially lethal complication. abstract_id: PUBMED:12074081 Iatrogenic bile duct injury: the scourge of laparoscopic cholecystectomy. Background: Laparoscopic cholecystectomy (LC) has become the first-line surgical treatment of calculous gall-bladder disease and the benefits over open cholecystectomy are well known. In the early years of LC, the higher rate of bile duct injuries compared with open cholecystectomy was believed to be due to the 'learning curve' and was expected to dissipate with increased experience. The purpose of the present paper was to review a tertiary referral unit's experience of bile duct injuries induced by LC. Methods: A retrospective analysis was performed on all patients referred for management of an iatrogenic bile duct injury from 1981 to 2000. For injuries sustained at LC, details of time between LC and recognition of the injury, time from injury to definitive repair, type of injury, use of intraoperative cholangiography (IOC), definitive repair and postoperative outcome were recorded. The type of injury sustained at open cholecystectomy was similarly classified to allow the severity of injury to be compared. Results: There were 131 patients referred for management of an iatrogenic bile duct injury that occurred at open cholecystectomy (n = 62), liver resection (n = 5) and at LC (n = 64). Only 39% of bile duct injuries were recognized at the time of LC.
Following conversion to open operation, half the subsequent procedures were considered inappropriate. When the injury was not recognized during LC, 70% of patients developed bile leak/peritonitis, almost half of whom were referred, whereas the rest underwent a variety of operative procedures by the referring surgeon. The remainder developed jaundice or abnormal liver function tests and cholangitis. An IOC was performed in 43% of cases, but failed to identify an injury in two-thirds of patients. The bile duct injuries that occurred at LC were of greater severity than with open cholecystectomy. Following definitive repair, there was one death (1.6%). Ninety-two per cent of patients had an uncomplicated recovery and there was one late stricture requiring surgical revision. Conclusions: The early prediction that the rate of injury during LC would decline substantially with increased experience has not been fulfilled. Bile duct injury that occurs at LC is of greater severity than with open cholecystectomy. Bile duct injury is recognized during LC in less than half the cases. Evidence is accruing that the use of cholangiography reduces the risk and severity of injury and, when correctly interpreted, increases the chance of recognition of bile duct injury during the procedure. Prevention is the key but, should an injury occur, referral to a specialist in biliary reconstructive surgery is indicated. abstract_id: PUBMED:18098080 Late stage stenoses of bile ducts after iatrogenic bile duct injuries following cholecystectomy. Introduction: Iatrogenic bile duct injuries represent a severe complication after cholecystectomy. For the attending physician, therapy and management of these injuries are a challenge. Inadequate and delayed treatment can lead to stenoses at a late stage, which can necessitate further surgical intervention. Methods: In this study, data of 74 patients who were treated in our clinic for bile duct injuries following cholecystectomy were analysed retrospectively. Results: A total of 8 patients with late stage bile duct strictures following iatrogenic bile duct injury, including their subsequent therapy, could be identified. The data of these patients were analysed in respect of cause and strategies to prevent late stage stenoses. In 62 patients the bile duct injury occurred following laparoscopic and in 12 patients following open cholecystectomy. In 16 patients the injury was combined with a vascular lesion. The interval between primary intervention and definitive therapy was 11 days in 53 patients and 1-15 years in 21 patients. In 8 patients the reason for the re-operation after a long interval (1-15 years) was a late stage stenosis. A hepatico-jejunostomy was performed subsequently and during follow-up 5/8 patients were symptom-free; 7 patients were re-operated due to a stenosed primary biliodigestive anastomosis and 3 patients each due to atrophy of the right liver lobe and recurrent cholangitis. One patient complained of recurrent cholangitis and a further patient of symptoms due to adhesions. Discussion: If treated inadequately, bile duct injuries occurring during cholecystectomy can in the long term lead to considerable problems such as recurrent cholangitis, late stage stenoses and even secondary biliary cirrhosis. Therefore, a complex inter-disciplinary therapeutic concept aiming at timely treatment is necessary. abstract_id: PUBMED:31622318 Managing iatrogenic bile duct injuries through a multidisciplinary team approach: A SIUT case series.
Iatrogenic injury to the bile duct is one of the most serious and feared complications of cholecystectomy, with a high mortality ranging between 3% and 12%. The management of such injuries of the bile duct is far more complicated and prolonged than the procedure itself. A retrospective analysis of 36 patients with bile duct injuries (BDI) was conducted over a period of 7 years, from January 2007 to December 2014. Most of the injuries occurred during open cholecystectomy (22) rather than laparoscopic (14), and most arose during elective surgeries (34). Most injuries were identified postoperatively, in 33 (91.6%) patients, at a median of 3.0 days. Among the modalities used to diagnose and treat these patients, endoscopy was performed in 32 cases (88.8%), followed by surgery in 17 (47.2%) patients and radiology in 16 (44.4%) cases. Surgery remains the gold standard for treatment of complete transection of bile duct injuries and long term outcomes are usually good. Endoscopy and radiology have an increasing role in the diagnosis and treatment of a leaking (non-transected) bile duct injury. Answer: The provided abstracts do not contain specific information regarding the age trends of patients with iatrogenic bile duct injuries. None of the abstracts mention a change or increase in the age of patients suffering from these injuries. Therefore, based on the given data, it is not possible to determine whether the age of patients with iatrogenic bile duct injuries is increasing.
Instruction: Is cigarette smoking associated with suicidal ideation among young people? Abstracts: abstract_id: PUBMED:32257512 Cigarette smoking dose-response and suicidal ideation among young people in Nepal: a cross-sectional study. Background: Worldwide, tobacco smoking is a major risk factor for morbidity and early mortality among the adult population. The present study aimed to find out the association between current smoking and suicidal ideation among young people in Nepal. Materials And Methods: A cross-sectional questionnaire-based survey was carried out among 452 youths from Pokhara, Nepal. The present study included both genders (age 18-24 years) who were smokers as well as non-smokers. Results: Across the study period, 452 participants were identified after matching for age and sex (226 in the smoking group and 226 in the non-smoking group). The mean age of participants was 21.6±1.2 years and 58.8% were males. The overall rate of suicidal ideation in our cohort was 8.9%. Smokers were slightly more likely to report suicidal ideation than non-smokers (aOR 1.12). The risk of developing suicidal ideation was 3.56 (95% CI 1.26-10.09) times higher in individuals who smoked more than 3.5 cigarettes per week (p=0.01). Conclusion: The rate of suicidal ideation was slightly higher among smokers and a dose-response relationship was identified with the number of cigarettes smoked per week. Being aware of the link between smoking and suicidal ideation may help health care professionals working with young people to address more effectively the issues of mental well-being and thoughts about suicide. abstract_id: PUBMED:36253173 What is the contribution of smoking to the increased risk of suicide in young smokers? A systematic review. Objectives: The correlation between smoking and suicide is well documented in the general population: there is an increased risk of suicide among tobacco smokers. However, the association between smoking and suicidal behaviors (ideations, plans, attempts) in youth is poorly elucidated. This systematic review of the literature examined data on the relationship between active and passive smoking and suicidal ideation (SI), suicide planning (SP), and suicide attempts (SA) among youth in the general population. Method: Medline searches were performed for the period 1980-2020. Cross-sectional, case-control, prospective population-based studies of young people (age less than 18 years) were included in this review; studies of specific populations (patients with an identified pathology of any kind) were excluded. Results: This review included 43 studies: 23 studies on the association between active smoking and SI, SP and/or SA; three studies on the association between passive smoking and suicidal behavior, three studies on the association between smoking and suicidal behavior in young people in psychiatric hospital settings, and five studies comparing the suicidal behavior of girls and boys. Analysis of the data collected led to the conclusion that active or passive smoking is associated with suicidal behavior in young people. Smoking appears to contribute to psychopathological disorders, including depression, the use of other psychoactive substances, or psychosocial suffering, which are often associated with an increased risk of suicide in young people. The correlations between smoking and the presence of mental disorders have been highlighted; tobacco use may contribute to the development of depression, anxiety and stress.
Further studies are needed to verify the existence of a causal link between smoking and suicide. Conclusion: Smoking is associated with the risk of suicidal behavior in young people; it should be included among the criteria for assessing suicidal risk in youth. Smoking cessation, which improves psychological well-being, should be further integrated into the prevention of suicidal behavior. abstract_id: PUBMED:19374974 Individual and family correlates for cigarette smoking among Taiwanese college students. Objective: This college-based questionnaire survey aimed to explore the individual, family, and peer correlates for cigarette smoking among first-year college students. Method: The sample included 2918 first-year college students (males, 45.5%) recruited from a national university in Taiwan (participation rate, 79.1%). The participants reported on questions about various substances, attitudes toward substances, personality characteristics, psychopathology, suicidal behaviors, parenting style, family function and use of substances, and peer substance use. Results: There were 263 (9.0%; males, 70.6%) current smokers. Compared to nonsmokers, college smokers were more extraverted and neurotic, and showed less harm avoidance and more novelty seeking in their personality. They had more hostile, somatic, depressive, paranoid, and psychotic symptoms in terms of psychopathology. Smokers were more likely to use other substances, and to have suicidal ideations, wishes, plans, and attempts. Smokers perceived lower family cohesion, less care from their fathers, and less overprotection from their mothers. They were more likely to have peers and family members who also smoked or used other substances. The most strongly associated correlates were male sex, older age, other substance use, novelty seeking, suicidal ideation and attempts, sibling and peer substance use, a prosubstance attitude, and less maternal overprotection. Conclusions: Our findings support the association of cigarette use in Taiwanese young adults with several individual, family, and peer factors identified in Western studies. Intervention in cigarette use should be multifaceted, taking its correlates and the concurrent psychopathology, use of substances, and suicidality into consideration. abstract_id: PUBMED:15741485 Is cigarette smoking associated with suicidal ideation among young people? Objective: The authors examined the association between suicidal ideation in early adulthood and daily tobacco smoking in a community sample of adolescents. Method: Participants were enrolled in a longitudinal study of health and development. The factors of disadvantage, impulsiveness, stress, depressed mood, tobacco smoking, other substance use, and parental attachment were included in multivariate modelling of suicidal ideation. Results: Data on tobacco use were available for 764 participants. Early tobacco smoking was significantly predictive of later suicidal ideation, but there was no longer a significant relationship when high levels of stress and depression and low levels of parental attachment in adolescence were included in the multivariate model. Conclusions: Tobacco smoking in adolescence does not appear to elevate the risk of later suicidal ideation. abstract_id: PUBMED:24969957 Suicidal ideation and substance use among adolescents and young adults: a bidirectional relation?
Objective: To examine reciprocal associations between substance use (cigarette smoking, use of alcohol, marijuana, and other illegal drugs) and suicidal ideation among adolescents and young adults (aged 11-21 at wave 1; aged 24-32 at wave 4). Methods: Four waves of public-use Add Health data were used in the analysis (N=3342). Respondents were surveyed in 1995, 1996, 2001-2002, and 2008-2009. Current regular smoking, past-year alcohol use, past-year marijuana use, and ever use of other illegal drugs as well as past-year suicidal ideation were measured at the four waves (1995, 1996, 2001-2002, and 2008-2009). Fixed effects models with lagged dependent variables were fitted to test unidirectional associations between substance use and suicidal ideation, and nonrecursive models with feedback loops combining correlated fixed factors were conducted to examine reciprocal relations between each substance use and suicidal ideation, respectively. Results: After adjusting for the latent time-invariant effects and lagged effects of dependent variables, the unidirectional associations from substance use to suicidal ideation were consistently significant, and vice versa. Nonrecursive model results showed that use of cigarettes or alcohol increased risk of suicidal ideation, while suicidal ideation was not associated with cigarette or alcohol use. Conversely, drug use (marijuana and other drugs) did not increase risk of suicidal ideation, but suicidal ideation increased risk of illicit drug use. Conclusion: The results suggest that relations between substance use and suicidal ideation are unidirectional, with cigarette or alcohol use increasing risk of suicidal ideation and suicidal ideation increasing risk of illicit drug use. abstract_id: PUBMED:29245024 Early-onset tobacco use and suicide-related behavior - A prospective study from adolescence to young adulthood. Background: Developmental relationships between tobacco use and suicide-related behaviors (SRB) remain unclear. Our objective was to investigate the longitudinal associations between tobacco use in adolescence and SRB in adulthood. Methods: Using a prospective design, we examined whether tobacco use in adolescence is associated with SRB (intentional self-injury, suicide ideation) in young adulthood in a population-based sample of 1330 twins (626 males, 704 females). The baseline and follow-up data were collected by professionally administered semi-structured poly-diagnostic interviews at ages 14 and 22, respectively. Results: After adjusting for multiple potential confounders, those who reported early-onset of regular tobacco use had a significantly increased risk for intentional self-injury, such as cutting or burning, at age 22 (adjusted odds ratio [AOR] 4.57, 95% CI 1.93-10.8) in comparison to those who had not at all initiated tobacco use. Also, daily cigarette smoking at baseline was associated with future intentional self-injury (AOR 4.45, 95% CI 2.04-9.70). Early-onset tobacco use was associated with suicidal ideation in females (AOR 3.69, 95% CI 1.56-8.72) but not in males. Considering any SRB, baseline daily smokers (AOR 2.13, 95% CI 1.12-4.07) and females with early onset of regular tobacco use (AOR 3.97, 95% CI 1.73-9.13) had an increased likelihood. Within-family analyses among twin pairs discordant for exposure and outcome, controlling for familial confounds, showed similar, albeit statistically non-significant, associations.
Conclusion: Early-onset tobacco use in adolescence is longitudinally associated with SRB (intentional self-injury and/or suicide ideation) in young adulthood, particularly among females. Further investigation may reveal whether this association has implications for prevention of SRB in adolescence and young adulthood. abstract_id: PUBMED:19892406 Suicidal ideation among young French adults: association with occupation, family, sexual activity, personal background and drug use. Background: To assess associations among young adults between suicidal ideation in the previous year and adverse childhood events, occupation, education, tobacco use, alcohol abuse, cannabis use in the previous month, illicit drug use, sexual orientation and activity, depression, physical violence in the previous year, and lifetime forced sexual intercourse. Methods: A subsample of 4075 French adults aged 18-30 years was drawn from a random national telephone survey in 2005. Major depressive episode and alcohol abuse were assessed using CIDI-SF and AUDIT-C (score above 4). Data were analysed with logistic regressions. Results: Suicidal ideation affected 5.7% of men and 4.9% of women. Among men, depression had the highest adjusted odds ratio (ORa=8.06, 5.07-12.79), followed by homosexual intercourse (3.37, 1.62-7.04) and absence of sexual activity (2.83, 1.80-4.44); ORa between 1.6 and 2.0 were observed for living alone, daily tobacco smoking, being unemployed, serious health event concerning the father, age 26-30 and bad relationships between parents. Among women, depression had the highest ORa (7.60, 4.70-12.29), followed by lifetime experience of forced sexual intercourse (5.37, 2.89-9.96) and having consumed illicit drugs other than cannabis (4.01, 1.48-10.89); ORa between 1.7 and 2.5 were observed for living alone, being unemployed, bad relationship between parents and age 26-30. Limitations: Cross-sectional survey, sexual orientation inferred from sexual activity. Conclusion: Suicide prevention should integrate the fact that besides depression, unemployment, family history, age, and sexual activity and orientation are specific risk factors among men, whereas illicit drug use, violence and forced sexual intercourse are more important among women. abstract_id: PUBMED:30881756 Cannabis and Amphetamine Use Among Adolescents in Five Asian Countries. Introduction: There has been a global increase in illicit drug use among young people. The aim of this study was to estimate the prevalence of lifetime cannabis and amphetamine use, as well as to explore factors associated with substance use among adolescents in five Asian countries: Iraq, Kuwait, Malaysia, Mongolia, and Vietnam. Methods: 38,941 school children (mean age 15.4 years, SD=1.5) completed the cross-sectional Global School-Based Student Health Survey (GSHS). Topics covered in the questionnaire included cannabis and amphetamine use. Personal, parental, and environmental attributes were explored as predictors of cannabis and amphetamine use. Logistic regression was used to assess the contribution of potential predictors to lifetime cannabis and lifetime amphetamine use. Results: Overall, the prevalence of lifetime cannabis use was 0.9% and lifetime amphetamine use was 1.0% among research participants. Cannabis use was influenced by male gender (Kuwait, Mongolia), parental smoking habits (Kuwait, Iraq), and current cigarette smoking in all countries.
Amphetamine use was associated with suicidal ideation (Kuwait, Malaysia, Vietnam), school truancy (Malaysia, Mongolia, Vietnam), being a victim of physical assault (Kuwait, Mongolia), bullying victimization (Iraq, Malaysia, Vietnam), as well as anxiety and current cigarette use in all countries. Conclusions: Our preliminary results show the importance of personal attributes such as mental distress and environmental stressors on lifetime cannabis and lifetime amphetamine use. Future prospective studies are needed to identify causal relationships among personal attributes, parental attributes, environmental stressors, and illicit substance use. abstract_id: PUBMED:33663338 Profile of Substance Misuse among Adolescent and Young Adult Gabapentinoid Users: A Register-Based Follow-up Study of Former Adolescent Psychiatric Inpatients. Aims: Our study examined characteristics of adolescent and young adult study participants using gabapentinoids (gabapentin, pregabalin), with a previous history of adolescent psychiatric inpatient hospitalization. Particular focus was on the temporal association of age at first prescription for gabapentinoids to age at tobacco smoking initiation, regular alcohol use, diagnosis of substance dependence and prescriptions for benzodiazepines and opioids. Methods: The initial study population contained 508 adolescents (300 females, 208 males) admitted to psychiatric inpatient care in Oulu University hospital between the ages of 13 and 17 years. Register-based follow-up information on prescriptions for gabapentinoids, benzodiazepines and opioids, as well as ICD-10 diagnosis for hospital-treated substance dependence, was obtained from the Finnish national health care registers. Results: The users of gabapentinoids accounted for 9.1% of the initial study population. Of adolescence-related characteristics, suicidal ideation, suicide attempts and non-suicidal self-injury were emphasized in females using gabapentinoids. The majority of participants using gabapentinoids had benzodiazepines (80.4%) and opioids (71.7%) as comorbid drugs. Initiation of tobacco smoking and alcohol use, first prescriptions for benzodiazepines and opioids, and a diagnosis of substance dependence commonly predated first prescriptions for gabapentinoids. Conclusions: In clinical practice, the decision to prescribe gabapentinoids to adolescents or young adults must be made with caution, particularly for those with substance use problems and/or without a clinically approved indication. abstract_id: PUBMED:31205813 Prevalence of Risk and Protective Factors Associated with HIV and HCV Infections Among Male High School Students Who Have Sex with Males-Hawai'i, 2013, 2015, and 2017. Human immunodeficiency virus (HIV) and hepatitis C virus (HCV) infections are increasing among young adults, and males who have sex with males (MSM) are at high risk for both infections. Limited Hawai'i data exists on the extent to which populations, such as MSM, are engaging in behaviors that place them at increased risk for either infection. This analysis quantified the proportion of Hawai'i public high school students who are MSM and are at risk for HCV and HIV infections. Data from the 2013, 2015, and 2017 Hawai'i Youth Risk Behavior Surveys (YRBS) were combined (n=16,751) to investigate the prevalence of risk factors associated with HIV and HCV infections (eg, sexual risk behaviors, substance use) and protective factors among MSM public high school students.
Among sexually experienced male students (n=3,391), 13.1% were classified as MSM and among these, 40.3% identified as heterosexual despite reporting same-sex sexual contact. Multivariate modeling demonstrated that MSM students are significantly more likely than non-MSM students to engage in behaviors that increase their risk for HIV and HCV infections (composite risk variable; adjusted Prevalence Ratio: 1.40, 95% CI 1.15 - 1.70) and are significantly less likely to have protective factors. Evidence-based prevention strategies for reducing HIV and HCV risk behaviors while improving protective factors among sexual minority youth in Hawai'i are necessary and must address sexual behavior along with other dimensions of sexual orientation. Answer: Yes, cigarette smoking is associated with suicidal ideation among young people. Several studies have found a link between smoking and suicidal behaviors in youth. A cross-sectional study in Nepal found that the rate of suicidal ideation was slightly higher among smokers compared to non-smokers, and there was a dose-response relationship with the number of cigarettes smoked per week (PUBMED:32257512). A systematic review also concluded that active or passive smoking is associated with suicidal behavior in young people, suggesting that smoking should be included among the criteria for assessing suicidal risk in youth (PUBMED:36253173). Further evidence comes from a study among Taiwanese college students, which found that smokers were more likely to have suicidal ideations, wishes, plans, and attempts compared to non-smokers (PUBMED:19374974). Another study, however, found that while early tobacco smoking was predictive of later suicidal ideation, the relationship was not significant when factors such as high levels of stress and depression and low levels of parental attachment were included in the model (PUBMED:15741485). Additionally, a longitudinal study suggested that early-onset tobacco use in adolescence is associated with suicide-related behaviors (SRB) in young adulthood, particularly among females (PUBMED:29245024). A study among young French adults also found associations between suicidal ideation and various factors, including daily tobacco smoking (PUBMED:19892406). Overall, the evidence indicates that there is an association between cigarette smoking and suicidal ideation among young people, although the exact nature of the relationship may be influenced by various individual, familial, and environmental factors.
Instruction: Midwifery defined by physicians, nurses and midwives: the birth of a consensus? Abstracts: abstract_id: PUBMED:8313288 Midwifery defined by physicians, nurses and midwives: the birth of a consensus? Objective: To describe the form of midwifery practice preferred by physicians practising obstetrics, nurses providing maternity care and midwives. Design: Mail survey conducted in 1991. Setting: Province of Quebec. Participants: A systematic random sample of 844 physicians, 808 nurses and 92 midwives; 597, 723 and 92 respectively completed the questionnaire, for an overall response rate of 80%. Main Outcome Measures: Midwife training options, range of responsibilities, location of midwifery care, relationship to other maternity care providers and degree of autonomy. Results: Most of the physicians, nurses and midwives surveyed agreed that if midwifery were legalized, midwives should have a university degree, provide basic care to women with normal pregnancy and delivery, provide prenatal and postnatal care in hospitals and community health centres, perform delivery in hospitals and work in close collaboration with the other maternity care professionals. Disagreement existed concerning the level of university training required, the need for training in nursing first, the scope of medical intervention performed by midwives, out-of-hospital delivery, the autonomy of midwives and control over their practice. Conclusion: Some consensus on midwifery practice exists between physicians, nurses and midwives. In jurisdictions where opposition to midwives is strong, such consensus could serve as the starting point for the introduction of midwifery. abstract_id: PUBMED:29461681 United States Model Midwifery Legislation and Regulation: Development of a Consensus Document. Introduction: Midwifery is defined and regulated across all 50 US states. However, states' regulations vary markedly, creating confusion for policy makers and consumers, and can limit services to women. In 2011, the International Confederation of Midwives released Global Standards for Midwifery Education, Regulation, and Association, providing guidance for international midwifery for the first time. US organizations representing midwifery education, regulation, and professional associations (US MERA) agreed to work together on common goals. Methods: The purpose of this modified Delphi study, conducted by US MERA, was to develop a consensus document on principles of model US midwifery legislation and regulation. Expert panelists (N = 51) across maternal and child health care professions and consumer groups participated over several iterative rounds. Results: The final document establishes guiding principles for US midwifery regulation, including regulatory authority, education, qualifications, regulation, registration and licensure, standards of practice and conduct, complaints, and third-party payment for services. Discussion: As more US states recognize and license midwives of all credentials and in every practice setting, we can envision a time when equity, informed choice, safety, and seamless access to quality midwifery care will be the right of every birthing family. abstract_id: PUBMED:21854473 Identifying key performance indicators for nursing and midwifery care using a consensus approach. Aims: The aim of this study was to gain consensus on key performance indicators that are appropriate and relevant for nursing and midwifery practice in the current policy context.
Background: There is continuing demand to demonstrate effectiveness and efficiency in health and social care and to communicate this at boardroom level. Whilst there is substantial literature on the use of clinical indicators and nursing metrics, there is less evidence relating to indicators that reflect the patient experience. Design: A consensus approach was used to identify relevant key performance indicators. Methods: A nominal group technique was used comprising two stages: a workshop involving all grades of nursing and midwifery staff in two HSC trusts in Northern Ireland (n = 50), followed by a regional Consensus Conference (n = 80). During the workshop, potential key performance indicators were identified. This was used as the basis for the Consensus Conference, which involved two rounds of consensus. Analysis was based on aggregated scores that were then ranked. Results: Stage one identified 38 potential indicators and stage two prioritised the eight top-ranked indicators as a core set for nursing and midwifery. The relevance and appropriateness of these indicators were confirmed with nurses and midwives working in a range of settings and from the perspective of service users. Conclusions: The eight indicators identified do not conform to the majority of other nursing metrics generally reported in the literature. Furthermore, they are strategically aligned to work on the patient experience and are reflective of the fundamentals of nursing and midwifery practice, with the focus on person-centred care. Relevance To Clinical Practice: Nurses and midwives have a significant contribution to make in determining the extent to which these indicators are achieved in practice. Furthermore, measurement of such indicators provides an opportunity to evidence the unique impact of nursing/midwifery care on the patient experience. abstract_id: PUBMED:35181239 International consensus definition of critical thinking in midwifery practice: A Delphi study. Problem: Currently there is no discipline-specific definition of critical thinking in midwifery practice. Background: Critical thinking in midwifery practice is the cornerstone for safe, evidence based, and woman centred clinical decision-making. Available definitions of critical thinking in other disciplines do not align with midwifery practice which is distinctive, multidimensional and complex. Aim: To develop an international consensus definition of critical thinking in midwifery practice. Methods: A two round Delphi study was used. Thirty-two international midwifery experts contributed to the first round which was qualitative in nature. Twenty-one of these experts then ranked the relevance and clarity of concepts from round one. Findings: A consensus definition of critical thinking in midwifery practice was achieved. The expert panel identified and defined 14 'Habits of Mind' and 12 Skills that are the core of critical thinking in midwifery practice. Skills included: analysis, constructive application and contextualisation of best available evidence, problem solving, discriminating, predicting, evaluation of care, collecting and interpreting clinical cues, collaboration/negotiation, reflexivity, facilitating shared decision-making, communication, and transforming knowledge.
Habits of Mind included: intellectual curiosity, reflective, holistic view, intellectual integrity, flexibility, questioning/challenging, participatory, open mindedness, listening with understanding and empathy, cultural humility, woman centred, being brave, confidence, and creativity. Discussion/conclusion: This study is an international first and delineates characteristics of critical thinking in midwifery. Development of a consensus definition provides a common and shared understanding of the skills and attributes required for critical thinking in midwifery practice and can also be applied in education and research. abstract_id: PUBMED:37718556 A global comparison of nursing and midwifery's and medical efficacy in preventing COVID-19 deaths 2019-2022. Aim: To compare the roles of nurses and midwives and physicians in reducing COVID-19 deaths, measured with a case fatality ratio. Background: The roles and responsibilities of different health disciplines in responding to the COVID-19 pandemic vary. While more difficult to measure, objective assessments of the contributions of nurses and midwives and physicians can be made through statistical analysis. Methods: Population-level data are analysed with scatter plots, bivariate correlations, partial correlation and multiple linear regression models to illustrate the contributions of nurses and midwives and physicians in reducing COVID-19 deaths. The role of nurses and midwives and physicians in protecting the community against COVID-19 deaths is explored and compared using competing effects of affluence, age and urbanization. Data analysis software programs included Excel v.2016 and SPSS v.28, together with the Fisher r-to-z transformation. Results: Nurses and midwives reduce COVID-19 deaths significantly more than physicians. This difference remains while controlling for physician care, economic affluence, median age and urbanization individually or in combination. In contrast, the role of physicians in reducing COVID-19 deaths is less independent than that of nurses and midwives. Linear regression results insinuate that when nursing and physician care are collated together with other predicting factors, physicians' contribution to community protection against the COVID-19 case fatality ratio is statistically explained by nursing and midwifery care. Discussion: Unlike physicians, the nursing and midwifery workforce is bigger and located throughout all healthcare system levels and, therefore, is more accessible to the community. This is an important point in explaining the contribution of nurses and midwives to reducing COVID-19 deaths when compared with physicians. Conclusion: This study suggests that, worldwide, the nursing and midwifery workforce may play a more significant role in protecting the community against COVID-19 deaths than physicians. Implications For Health Policy, Nursing And Midwifery Practice: The findings from this study offer a unique perspective for health authorities to further understand the complementary and independent role of the nursing and midwifery workforce in respect of the healthcare team. This study suggests the importance of a broader range of healthcare services, especially during pandemics such as COVID-19. With the permission of health authorities, the nursing and midwifery workforce should have further extension of their scope of practice in situations such as pandemics due to their broader access to the community. abstract_id: PUBMED:22818392 Midwifery scope of practice among staff nurses: a grounded theory study in Gujarat, India.
Background: midwifery is a part of the nursing profession in India. The current study explores and describes the midwifery scope of practice among staff nurses. Methods: a grounded theory approach was used to develop a model. Twenty-eight service providers from the maternity sections of public health facilities, selected through purposive and theoretical sampling, were interviewed in depth. Unstructured observations in the labour wards were also used for developing the model. Findings: the midwifery practice of staff nurses was limited in scope compared with international standards of midwifery. Their practice was circumstance-driven, ranging from extended to marginal depending on the context. Their right to practice was not legally defined, but they were not specifically prohibited from practice. As a consequence, the staff nurses faced loss of skills, and deskilling, when their practice was restricted. When the scope of practice was extended, their practice was perceived as risky because it was not rightfully endorsed, the nurses having no officially recognized right to practice midwifery at that level. The clinical midwifery education of nursing and midwifery students was marginalized because the education of medical students was given priority, and the students were only exposed to the restricted practice of staff nurses. Conclusions: unclear definitions of the right to practice and the scope of practice have led to the un-utilized potential of staff nurses practising midwifery. This is detrimental because India faces an acute shortage of qualified personnel to meet the need for human resources for maternal health.
abstract_id: PUBMED:28934979 Human resources for health in Peru: recent trends (2007-2013) in the labour market for physicians, nurses and midwives. Background: Most analyses of gaps in human resources for health (HRH) do not consider training and the transition of graduates into the labour market. This study aims to explore the labour market for Peru's recent medical, nursing, and midwifery graduates as well as their transition into employment in the Ministry of Health's (MOH) system. Methods: Data from four different datasets, covering 2007-2013, were used to characterize the patterns of recently trained physicians, nurses, midwives, and postgraduate-trained physicians entering employment in the MOH system, and scenario analyses were used to describe how this rate of entry needs to adapt in order to fill current HRH shortages. Results: The number of HRH graduates increased from 2007 to 2011, but the proportions that enter employment in the MOH system 2 years later range from 8 to 45%, and less than 10% of newly trained medical specialists do so. Scenario analyses indicate that the gaps for physicians and nurses will be filled in 2027 and 2024, respectively, and that for midwives in 2017. However, if the number of HRH graduates entering the MOH system doubles, these gaps could be filled as early as 2020 for physicians and 2019 for nurses. In this latter scenario, the MOH system would still only utilize 56% of newly qualified physicians, 74% of nurses, and 66% of midwives available in the labour market. Conclusion: At 2013 training rates, Peru has the number of physicians, nurses, and midwives it needs to address HRH shortages and meet estimated HRH gaps in the national MOH system during the next decade. However, a significant number of newly qualified health professionals do not work for the MOH system within 2 years of graduation.
These analyses highlight the importance of building adequate incentive structures to improve the entry and retention of HRH in the public sector.
abstract_id: PUBMED:36705058 Nurses' and midwives' contributions to a public health response to keeping children safe from abuse and neglect - a Delphi study. Nurses and midwives can be instrumental to global efforts to address child abuse and neglect through a public health approach of prevention and early intervention. However, there is limited understanding of nurses' and midwives' roles, and no international or local guidelines to inform and evaluate their safeguarding practices. The aim of this modified Delphi study was to build consensus on the nature and scope of nursing and midwifery practice in safeguarding children in Australia. Sixty-four statements located within seven clusters were developed from a literature review and nursing and midwifery practice standards. Participants (N = 102) were nurses and midwives working with children in diverse contexts. They were asked to indicate the importance of each statement to their practice setting. Consensus (80%) was achieved on all statements in Round Two. Findings highlight that participating nurses and midwives agreed safeguarding children is a core component of their practice in diverse child-focussed settings. Findings can inform ongoing discussions about the development of nursing and midwifery practice standards and guide effective workforce preparation, education, support and resourcing. Further research on the ongoing development of nursing and midwifery roles in safeguarding children is essential to explore how to most effectively mobilise these professions to prevent child abuse and neglect.
abstract_id: PUBMED:35034693 Expanding nurses' authority - Physicians' and nurses' attitudes. Aim: To examine and compare the attitudes of physicians and registered nurses regarding extending nurses' authority. Design: This was a comparative quantitative study, which used a questionnaire that examined the attitudes of nurses and physicians regarding expanding nurses' authority. Methods: The study comprised a sample of 134 participants, nurses (62.7%) and physicians (37.3%), who filled out a self-report questionnaire that included sociodemographic data and questions designed to examine the participants' attitudes and perceptions on expanding nurses' authority. The general reliability of the questionnaire was Cronbach's α = 0.931. Participants were sampled through the convenience-snowball method. Results: Differences were found between the attitudes of physicians and nurses to expanding nurses' authority. Nurses tended to be in favor, whereas physicians tended to be against (t(86.29) = 7.713; p < 0.05). Nurses' attitudes were also more positive toward procedures related to drug administration (t(132) = 6.894; p < 0.05) and resuscitation (t(132) = 2.974; p < 0.05), compared with physicians. Conclusions: Nurses have more positive attitudes toward expanding nurses' authority than physicians do, specifically toward drug administration and resuscitation. Further study is proposed to identify factors which affect the physician-nurse relationship and attitudes on a large scale through a qualitative study.
abstract_id: PUBMED:23343241 Flight nurses in Australia: maintaining their midwifery competence - a case study. Flight nurses (FNs) in Australia care for a wide diversity of patients as sole practitioners. No studies could be located regarding how FNs maintain their midwifery competence.
The purpose of this study was to investigate how levels of competence in midwifery practice are determined amongst Australian FNs and to explore whether continuing professional development (CPD) contributes to the maintenance of FNs' midwifery clinical skills and knowledge. Data were collected using computer-mediated communication and content analysed using a thematic framework. The results of this study showed that the maintenance of current midwifery practice was complex due to the low numbers of midwifery patients cared for by FNs. CPD was identified as essential; however, this research demonstrated a lack of motivation by this small group of FNs to attend CPD, mainly due to the absence of a programme that recognises the different context and scope of their practice.
Answer: The concept of midwifery has been explored and defined through various studies, with some level of consensus emerging among physicians, nurses, and midwives themselves. A mail survey conducted in Quebec in 1991 revealed that most physicians, nurses, and midwives agreed on certain aspects of midwifery practice if it were to be legalized. They concurred that midwives should have a university degree, provide basic care for normal pregnancies and deliveries, offer prenatal and postnatal care in hospitals and community health centers, perform deliveries in hospitals, and work in close collaboration with other maternity care professionals. However, there was disagreement on the level of university training required, the necessity for nursing training first, the extent of medical interventions performed by midwives, out-of-hospital delivery, the autonomy of midwives, and control over their practice (PUBMED:8313288). In the United States, a modified Delphi study conducted by US MERA aimed to develop a consensus document on principles of model US midwifery legislation and regulation. The final document established guiding principles for US midwifery regulation, including regulatory authority, education, qualifications, regulation, registration and licensure, standards of practice and conduct, complaints, and third-party payment for services. This consensus is seen as a step towards ensuring equity, informed choice, safety, and seamless access to quality midwifery care for every birthing family (PUBMED:29461681). Furthermore, a consensus approach was used to identify key performance indicators for nursing and midwifery care, reflecting the patient experience and aligning with the fundamentals of person-centered care (PUBMED:21854473). An international consensus definition of critical thinking in midwifery practice was also developed, identifying core skills and 'Habits of Mind' essential for the discipline (PUBMED:35181239). The scope of midwifery practice among staff nurses in India was found to be limited compared to international standards, with practice being circumstance-driven and not legally defined, leading to skill loss and deskilling (PUBMED:22818392). In Peru, the transition of recent graduates into the labor market was analyzed, highlighting the importance of building adequate incentive structures to improve the entry and retention of HRH into the public sector (PUBMED:28934979).
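For illustration: the "controlling for" analysis described in PUBMED:37718556 (workforce density vs. COVID-19 case fatality ratio, net of affluence, age and urbanization) can be sketched with ordinary least squares. This is a minimal sketch, not the study's actual code; the input file and all column names are hypothetical placeholders.

```python
# Sketch of a partial-association check: regress the case fatality ratio (cfr)
# on one workforce density plus country-level controls, then fit both
# densities jointly. File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("country_indicators.csv")  # hypothetical country-level table
controls = ["gdp_per_capita", "median_age", "urban_pct"]

def fit_cfr_model(predictor):
    """OLS of the case fatality ratio on one workforce density plus controls."""
    X = sm.add_constant(df[[predictor] + controls])
    return sm.OLS(df["cfr"], X, missing="drop").fit()

nurses = fit_cfr_model("nurse_midwife_density")
physicians = fit_cfr_model("physician_density")
print(nurses.params["nurse_midwife_density"], nurses.pvalues["nurse_midwife_density"])
print(physicians.params["physician_density"], physicians.pvalues["physician_density"])

# Joint model: does the physician coefficient survive once nurse/midwife
# density enters alongside the same controls?
joint = sm.OLS(
    df["cfr"],
    sm.add_constant(df[["nurse_midwife_density", "physician_density"] + controls]),
    missing="drop",
).fit()
print(joint.summary())
```

A physician coefficient that shrinks toward zero in the joint model, while the nurse/midwife coefficient persists, is the pattern the abstract describes as the physician effect being "statistically explained" by nursing and midwifery care.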
Instruction: Are frequent callers to family physicians high utilizers?
Abstracts:
abstract_id: PUBMED:15576539 Are frequent callers to family physicians high utilizers? Purpose: Our objective was to describe patients who telephone physicians frequently after hours (frequent callers) and to categorize their medical problems and resource utilization. Methods: Charts of frequent callers were reviewed and compared with those of a systematically selected group from the same family medicine residency practice (control group). Data collected included demographic and clinical information, as well as information on utilization of office, emergency department, and hospital services. In addition, 4 family physicians reviewed the patient information and identified the primary diagnosis for frequent callers. Results: Frequent callers were predominantly female; had 3 times as many office visits, diagnoses, and medications; and had 8 times as many hospital admissions as the control group. The most common primary diagnostic categories were psychiatric disorders (36%), pain (21%), chronic illnesses (16%), pregnancy (13%), and common problems of childhood (9%). Conclusions: Frequent callers represent a unique group of patients with high utilization of health care services. Better-targeted patient education and referral to other support services may decrease the number of calls and utilization of health services. Alternatively, among high utilizers, frequent telephone calls may be a substitute for other forms of care.
abstract_id: PUBMED:27247615 Frequent callers to telephone helplines: new evidence and a new service model. Background: This paper describes a program of work designed to inform a service model to address a challenge for telephone helplines, namely frequent callers. Methods: We conducted a systematic literature review and four empirical studies that drew on different data sources: (a) routinely collected call data from Lifeline, Australia's largest telephone helpline; (b) data from surveys/interviews with Lifeline frequent callers; (c) data from the Diagnosis, Management and Outcomes of Depression in Primary Care (diamond) study; and (d) data from Australia's National Survey of Mental Health and Wellbeing. Results: Frequent callers represent 3% of callers but make 60% of calls. They are isolated and have few social supports but are not "time wasters"; they have major mental and physical health problems and are often in crisis. They make use of other services for their mental health problems. The circumstances under which they use telephone helplines vary, but current service models reinforce their calling behaviour. Conclusions: The findings point to a service model that might better serve the needs of both frequent callers and other callers. The model involves offering frequent callers an integrated, tailored service in which they are allocated a dedicated and specially trained telephone crisis supporter (TCS) and given set calling times. It also involves promoting better linkages between telephone helplines and other services that provide mental health care, particularly general practitioners (GPs) and other primary care providers. The next step is to refine and test the model.
abstract_id: PUBMED:35086356 Systematic Review of Research and Interventions With Frequent Callers to Suicide Prevention Helplines and Crisis Centers. Background: Helplines worldwide have frequent callers who may occupy a large proportion of call volume. Therapeutic gain from frequent calling has been questioned.
We conducted this review to identify the characteristics of frequent callers and to compile recommendations about how best to help them. Method: Using Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards, we searched for all empirical research in English and French from inception to May 2020 in PubMed, PsycInfo, and the CRISE library. Results: We identified 738 manuscripts and retained 27 for analyses. Nine provided no definition of frequent callers; nine mixed frequent callers with repeat callers (>1 call); nine concerned frequent callers (≥8 calls/month). The limited data suggest frequent callers are similar to other callers and often experience mental health problems, loneliness, and suicide risk. From recommendations in all 27 studies, we identified 10 suggestions to better manage and help frequent callers that merit validation. Limitations: The small number of empirical investigations and the diversity of their goals and methodologies limit generalizations. Although recommendations for helping callers may have face validity, empirical data on their effectiveness are scarce. Conclusion: Rather than focusing on reducing call frequency, we should empirically evaluate the benefits of interventions for frequent callers with different calling patterns, characteristics, and reasons for calling.
abstract_id: PUBMED:30824276 Impact of an individual plan of care for frequent and high utilizers in a large healthcare system. Introduction: "Frequent or high utilizers" are significant stressors to emergency departments (EDs) and inpatient units across the United States (US). These patients incur higher healthcare costs with ED visits and inpatient admissions. Our aims were to determine whether implementation of individualized care plans (ICPs) could 1) reduce costs, 2) reduce inpatient length of stay (LOS), and 3) reduce ED encounters throughout a large healthcare system. Methods: Thirteen EDs were included, comprising academic, community, free-standing and pediatric EDs. Data were collected from January 1, 2014 through December 31, 2017. ICPs were created for high ED utilizers, as recommended by staff input through multidisciplinary care committees at each site. Each ICP consisted of 1) specific symptom-related information with approaches to management, 2) a recent assessment from specialists, 3) a social work summary, and 4) a psychiatry summary. A Best Practice Alert was placed in the electronic medical record that could be seen at all hospitals within the system. ICPs were updated annually. Results: 626 ICPs were written: 452 initial ICPs and 174 updates. The 452 ICP patients accounted for 23,705 encounters during the four-year period; on average, an ICP patient visited the ED 52 times (14.75 encounters/year). Overall indirect and direct costs decreased 42% over the first 6 months, inpatient LOS improved from 1.9 to 0.97 days/month, and ED encounters decreased from 1.96 to 1.14. All cost and LOS measures significantly improved at 24 months post-ICP inception. Conclusion: Implementation of individualized care plans can reduce cost, inpatient LOS, and ED encounters for high utilizers.
abstract_id: PUBMED:27650484 Frequent callers in primary health care - a qualitative study with a nursing perspective. Aim: To (a) describe how telephone nurses define a frequent caller and (b) describe their experiences with calls from frequent callers to primary healthcare centres. Background: Telephone nursing has been noted to be a cost-effective method to enhance access to and use of health care.
Some patients use these services extensively and are called 'frequent callers'. Little is known about this group of callers, the reasons behind these calls, and telephone nurses' experiences and strategies in managing the calls. Design: Descriptive design with a qualitative inductive approach. Methods: Interviews were conducted with ten telephone nurses in Sweden in 2015. Data were analysed using qualitative content analysis. Results: A main theme was established, called 'Balancing between the experienced and assessed needs', which described the telephone nurses' experiences with calls made by frequent callers to primary healthcare centres and was further described in five categories with 15 subcategories. The categories described telephone nurses' definitions of frequent callers, telephone nurses' views of the underlying reasons for the calls, challenges related to frequent callers, experiences of an increased workload, and strategies used to manage and help frequent callers. Conclusion: Frequent callers were commonly encountered by the telephone nurses in this study. Their calls were experienced as complex and demanding to manage. The findings point to a need for guidelines and routines to improve the care of frequent callers. In addition, support and training in communication skills for encountering this group of callers in an optimal and safe way may be required.
abstract_id: PUBMED:24972602 Frequent callers to crisis helplines: who are they and why do they call? Objective: Frequent callers present a challenge for crisis helplines, which strive to achieve optimal outcomes for all callers within finite resources. This study aimed to describe frequent callers to Lifeline (the largest crisis helpline in Australia) and compare them with non-frequent callers, with a view to furthering knowledge about models of service delivery that might meet the needs of frequent callers. Method: Lifeline provided an anonymous dataset on calls made between December 2011 and May 2013. We assumed calls from the same (encrypted) phone number were made by the same person, and aggregated call-level data up to the person level. Individuals who made 0.667 calls per day in any period from 1 week to the full 549 days for which we had data (i.e. 4.7 calls in 7 days, 20 calls in 30 days, 40 calls in 60 days, etc.) were regarded as frequent callers. Results: Our analysis dataset included 411,725 calls made by 98,174 individuals, 2594 (2.6%) of whom met our definition of frequent callers. We identified a number of predictors of being a frequent caller, including being male or transgender, and never having been married. The odds increased with age until 55-64 years, and then declined. Suicidality, self-harm, mental health issues, crime, child protection and domestic violence issues all predicted being a frequent caller. Conclusions: Collectively, frequent callers have a significant impact on crisis lines, and solutions need to be found for responding to them that are in everybody's best interests (i.e. the frequent callers themselves, other callers, the telephone crisis supporters who staff crisis lines, and those who manage crisis lines). In striking this balance, the complex and multiple needs of frequent callers must be taken into account.
abstract_id: PUBMED:31159766 Social factors in frequent callers: a description of isolation, poverty and quality of life in those calling emergency medical services frequently.
Background: Frequent users of emergency medical services (EMS) comprise a disproportionate percentage of emergency department (ED) visits. EDs are becoming increasingly overwhelmed, and a portion of use by frequent callers of EMS is potentially avoidable. Social factors contribute to frequent use; however, few studies have examined their prevalence. This study aims to describe social isolation/loneliness, poverty, and quality of life in a sample of frequent callers of EMS in the Hamilton region, a mid-sized city in southern Ontario, Canada. Study Design: Cross-sectional quantitative study. Methods: We surveyed people who called EMS five or more times within 12 months. A mailed, self-administered survey used validated tools and focused on four major measures: demographic information, social isolation, poverty, and quality of life. Results: Responses from sixty-seven frequent EMS callers revealed that 37-49% were lonely, 14% had gone hungry in the preceding month, and 43% had difficulties making ends meet at the end of the month. For quality of life, 78% had mobility problems, 55% had difficulty with self-care, 78% had difficulty with usual activities, 87% experienced pain/discomfort, and 67% had anxiety/depression. The overall quality-adjusted life year value was 0.53 on a scale of 0 to 1. The response rate was 41.1%. Conclusions: Loneliness in our participants was more common than in Hamilton and Canada generally. Frequent EMS callers had higher rates of poverty and food insecurity than average Ontario citizens, which may also act as a barrier to accessing preventative health services. Lower quality of life may indicate chronic illness, and users who cannot access ambulatory care services consistently may call EMS more frequently. Frequent callers of EMS had high rates of social loneliness and poverty, and low quality of life, indicating a need for health service optimization for this vulnerable population.
abstract_id: PUBMED:37213782 One for all or all for one? An integrative review of research on frequent callers. Objective: Telephone health services are an increasing and integral part of health care in several countries. Callers who call repeatedly, termed 'frequent callers' in the current study, are present in all kinds of healthcare services; they often constitute a considerable proportion of the total call volume and are complicated to help. The aim was to provide a comprehensive overview of research on frequent callers at a variety of telephone health services. Methods: An integrative literature review. Literature for the period 2011-2020 was searched in CINAHL Plus, MEDLINE, APA PsycArticles, APA PsycInfo, and PubMed, resulting in the inclusion of 20 articles. Results: Studies on frequent callers (FCs) were found in the context of emergency medical services, telephone helplines, primary healthcare, and specialist medicine clinics. Frequent calling was associated with psychiatric comorbidity, and the reasons for calling were often multifaceted. Conclusion: The strategies suggested for handling calls involved an individual approach, which could be enabled through multidisciplinary work. Innovation: The main findings indicate a need for a systematic approach and guidelines to enable optimal help for FCs. Cooperation among healthcare services seems to contribute to more individualized care for FCs.
abstract_id: PUBMED:31997365 A survey of telephone nurses' experiences in their encounters with frequent callers.
Aim: The aim of this study was to describe telephone nurses' experiences of their encounters with frequent callers to Swedish Healthcare Direct. Design: A descriptive inductive design with a qualitative approach. Methods: Data collection was performed from September 2017 to June 2018. A total of 199 telephone nurses working at 10 Swedish Healthcare Direct sites in different geographical locations in Sweden answered a survey containing seven open-ended questions. Data analysis was performed from September 2018 to June 2019. The answers were analysed using content analysis. Results: The telephone nurses perceived that the encounters with frequent callers were sometimes frustrating, as they felt unable to help. According to the telephone nurses, the frequent callers called about the same issues several times, and the calls were often about loneliness and psychiatric problems. The telephone nurses were worried about missing something urgent; one of the aspects leading to this was the perception of knowing the caller. They expressed a wish to know more about frequent callers and strategies for helping them. A common care plan for frequent callers' calls was suggested. Conclusion: Telephone nurses often found it difficult to handle calls from frequent callers. However, they had a will to care for frequent callers and to learn more about them. Therefore, a common strategy, education and training for telephone nurses in answering calls from frequent callers could be beneficial for both callers and telephone nurses. Impact Statement: This study offers insights to researchers, telephone nurses and managers of telephone nursing regarding telephone nurses' experiences in handling frequent-caller calls. These findings can offer direction for the content of a possible intervention.
abstract_id: PUBMED:37821905 Challenges in recruiting frequent users of ambulance services for a community paramedic home visit program. Background: The Community Paramedicine at Home (CP@home) program is a health promotion program in which community paramedics conduct risk assessments with frequent 9-1-1 callers in their homes, with the goal of reducing the frequency of 9-1-1 calls in this vulnerable population. The effectiveness of the CP@home program was investigated through a community-based RCT conducted in four regions in Ontario, Canada. The purpose of the current recruitment study is to examine the challenges met when recruiting for a community randomized controlled trial involving high-frequency 9-1-1 callers. Methods: Eligible participants were recruited from one of four regions participating in the CP@home program and were randomly assigned to an intervention group (n = 1142) or control group (n = 1142). Data were collected during the recruitment process from the administrative databases of the four paramedic services. Whether they lived alone, their parental ethnicity, age, reason for calling 9-1-1, reason for not participating, contact method, and whether they were successfully contacted were recorded. Statistical significance was calculated using the chi-squared test and Fisher's exact test to evaluate the effectiveness of the recruitment methods used to enrol eligible participants in the CP@home program. Results: Of the people who were contacted, 48.0% answered their phone when called and 53.9% answered their door when a home visit was attempted. In total, 110 (33.1%) of the people for whom a contact attempt was successful participated in the CP@home program.
Most participants were over the age of 65, even though people as young as 18 were contacted. Older adults who called 9-1-1 for a lift assist were more likely to participate than those with any other individual reason recorded, and were most often recruited through a home visit. Conclusions: This recruitment analysis describes the challenges experienced by researchers when recruiting frequent 9-1-1 callers, who are considered a hard-to-reach population. The differences in age, contact method, and reason for calling 9-1-1 among the people contacted and the participants should be considered when recruiting this population for future research.
Answer: Yes, frequent callers to family physicians are considered high utilizers of healthcare services. The research indicates that frequent callers to family physicians tend to have a higher number of office visits, diagnoses, medications, and hospital admissions compared to a control group. Specifically, they had three times as many office visits, diagnoses, and medications, and eight times as many hospital admissions (PUBMED:15576539). Frequent callers often have complex needs, including psychiatric disorders, pain, chronic illnesses, pregnancy issues, and common problems of childhood (PUBMED:15576539). Additionally, frequent callers to telephone helplines, which can be an extension of healthcare services, represent a small percentage of callers but make a disproportionate number of calls, often due to major mental and physical health problems and a lack of social supports (PUBMED:27247615). These findings suggest that frequent callers are indeed high utilizers of healthcare resources.
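For illustration: the Lifeline study above (PUBMED:24972602) defines frequent callers by a windowed rate rule (0.667 calls/day over any period from 7 to 549 days). A minimal pandas sketch of that rule follows, assuming a call log with one row per call; the file and column names are hypothetical, and windows are anchored at observed calls, which is one reasonable reading of "any period".

```python
# Sketch of the windowed frequent-caller rule: flag a caller if, in any
# window of 7 to 549 days, they averaged >= 2/3 calls per day
# (4.7 calls in 7 days, 20 in 30, 40 in 60, ...). Names are hypothetical.
import pandas as pd

RATE = 2 / 3              # reported in the paper as 0.667 calls per day
MIN_DAYS, MAX_DAYS = 7, 549

def is_frequent(call_times: pd.Series) -> bool:
    """Check every window spanning two observed calls of one caller."""
    t = call_times.sort_values().dt.normalize().reset_index(drop=True)
    n = len(t)
    for i in range(n):
        for j in range(i, n):
            span = (t[j] - t[i]).days + 1   # inclusive window length in days
            if span > MAX_DAYS:
                break                        # longer windows only grow span
            span = max(span, MIN_DAYS)       # the rule starts at 1 week
            if (j - i + 1) >= RATE * span:
                return True
    return False

calls = pd.read_csv("lifeline_calls.csv", parse_dates=["call_time"])
flags = calls.groupby("caller_id")["call_time"].apply(is_frequent)
print(f"{flags.mean():.1%} of callers meet the frequent-caller definition")
```

With the thresholds above, a burst of 5 calls in a single week already qualifies, which matches the paper's "4.7 calls in 7 days" example once call counts are rounded up to whole calls.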
Instruction: Is rs759853 polymorphism in promoter of aldose reductase gene a risk factor for diabetic nephropathy?
Abstracts:
abstract_id: PUBMED:25885804 Is rs759853 polymorphism in promoter of aldose reductase gene a risk factor for diabetic nephropathy? A meta-analysis. Background: So far, a number of case-control or cohort studies have been carried out to investigate the relationship between the rs759853 polymorphism in the promoter of the aldose reductase (AR) gene and the risk of diabetic nephropathy (DN). However, the results have generated considerable controversy. We performed this study to clarify the linkage between this gene mutation and the risk of DN. Methods: A comprehensive literature search of electronic databases and a well-organized meta-analysis were conducted. Results: Twelve comparisons and 4,735 individuals from nine published case-control or cohort studies were ultimately included. Heterogeneity ranged from none to large; therefore, both fixed- and random-effects models were used. Significant associations were found between the AR rs759853 polymorphism and susceptibility to DN in both type 1 and type 2 diabetes in all genetic models (allele contrast: OR = 1.37, CI 1.18-1.59, P < 0.0001; additive model: OR = 1.78, CI 1.25-2.53, P = 0.01; recessive model: OR = 1.33, CI 1.08-1.63, P = 0.008; dominant model: OR = 1.52, CI 1.26-1.84, P < 0.0001; codominance model: OR = 1.30, CI 1.15-1.47, P < 0.0001). In stratified meta-analyses for type 2 diabetes by ethnicity, significant relationships were found in the allele contrast and dominant models in Caucasians, and in the allele contrast and codominance models in Asians. However, the data do not support a link between this gene mutation and the progression of DN. There was no significant publication bias. Conclusions: The evidence currently available shows that the AR rs759853 polymorphism may correlate with susceptibility to DN. However, the data do not support an association between this DNA variation and the progression of DN.
abstract_id: PUBMED:15637423 C-106T polymorphism in promoter of aldose reductase gene is a risk factor for diabetic nephropathy in type 2 diabetes patients with poor glycaemic control. Background: Excessive flux through the polyol pathway has long been thought to be involved in the pathogenesis of diabetic microvascular complications. Aldose reductase (AR) is the first and rate-limiting enzyme in the pathway, catalysing the reduction of glucose to sorbitol. A frequent C-106T polymorphism in the promoter of the AR gene has been described, which may change the expression of the gene. The aim of the study was to examine whether the C-106T polymorphism is associated with diabetic nephropathy. Material And Methods: We collected 444 patients with type 2 diabetes and divided them into three groups according to renal status: 162 patients with normoalbuminuria, 153 with microalbuminuria and 129 with persistent proteinuria. Each subject was genotyped for the C-106T polymorphism using a PCR-based RFLP protocol. Results: When the whole study population was analysed, no distortion in the genotype frequency among the study groups was observed. When we stratified the study population by HbA1c, we found that in patients with HbA1c ≥9% (median) the CT and TT genotypes were more frequent in patients with diabetic nephropathy (proteinuria and microalbuminuria) than in those with normoalbuminuria (OR 2.04, 95% CI 1.12-3.74).
Conclusion: The C-106T polymorphism in the AR gene is a risk factor for the development of diabetic nephropathy in type 2 diabetes patients with poor glycaemic control.
abstract_id: PUBMED:16545977 The -106CC genotype of the aldose reductase gene is associated with an increased risk of proliferative diabetic retinopathy in Caucasian-Brazilians with type 2 diabetes. Diabetic retinopathy is a sight-threatening chronic complication of diabetes mellitus and is the leading cause of acquired blindness in adults. The -106C>T polymorphism in the promoter region of the aldose reductase (AR) gene has been shown to be associated with susceptibility to diabetic nephropathy in type 2 diabetes, but the findings regarding the occurrence of diabetic retinopathy are conflicting. In this case-control study, we investigated whether the -106C>T polymorphism in the AR gene is involved in the development and progression of diabetic retinopathy in 579 Brazilians with type 2 diabetes (424 Caucasian- and 155 African-Brazilians). Patients underwent a clinical and laboratory evaluation consisting of a questionnaire, physical examination, assessment of diabetic complications and laboratory tests. Genotype analysis was performed using the polymerase chain reaction followed by digestion with a restriction enzyme. Logistic regression analysis was used to control for independent risk factors associated with diabetic retinopathy. There were no differences in either genotype or allele frequencies for the -106C>T polymorphism between type 2 diabetic patients with or without diabetic retinopathy in either ethnic group. However, the CC genotype was associated with an increased risk of having proliferative diabetic retinopathy in Caucasian-Brazilians with type 2 diabetes (odds ratio (OR) = 2.04; 95% confidence interval (CI) = 1.21-3.45; P = 0.007), independently of other risk factors associated with this complication. Thus, our results show that the -106CC genotype (-106C>T polymorphism) in the AR gene is related to the progression of diabetic retinopathy in Caucasian-Brazilians with type 2 diabetes.
abstract_id: PUBMED:12505748 Polymorphisms of the vascular endothelial growth factor and susceptibility to diabetic microvascular complications in patients with type 1 diabetes mellitus. There is increasing evidence implicating genetic factors in the susceptibility to diabetic microvascular complications. Recent studies suggest that increased expression of the cytokine vascular endothelial growth factor (VEGF) may play a role in the pathogenesis of diabetic complications. A number of polymorphisms in the promoter region of the VEGF gene have been identified. The aim was to investigate whether an 18 base pair (bp) deletion (D)/insertion (I) polymorphism at position -2549 in the promoter region of the VEGF gene is associated with susceptibility to diabetic microvascular complications. Two hundred and thirty-two patients with type 1 diabetes mellitus (T1DM) and 141 normal healthy controls were studied. The D/D genotype was significantly increased in those patients with nephropathy (n = 102) compared with those with no complications after 20 years' duration of diabetes (uncomplicated, n = 66) (40.2% vs. 22.7%, respectively, χ² = 5.5, P < .05). Combining polymorphisms of VEGF with those of the aldose reductase (ALR2) gene showed that 8 of the 83 nephropaths carried the VEGF I allele together with the Z+2 5'ALR2 allele, compared with 27 of the 62 uncomplicated patients (χ² = 26.7, P < .00001).
The functional role of the D/I polymorphism was examined by cloning the region into a luciferase reporter assay system and transient transfection into HepG2 cells. The construct containing the 18 bp deletion had a 1.95-fold increase in transcriptional activity compared with its counterpart containing the insert (P < .01). These results suggest that polymorphisms in the promoter region of the VEGF gene, together with those of ALR2, may be associated with the pathogenesis of diabetic nephropathy.
abstract_id: PUBMED:11874426 Diabetic nephropathy is associated with the 5'-end dinucleotide repeat polymorphism of the aldose reductase gene in Chinese subjects with Type 2 diabetes. Aims: We investigated whether the promoter dinucleotide repeat polymorphism of the aldose reductase gene (5'-ALR2), implicated in the development of nephropathy in Type 1 diabetes, was associated with diabetic nephropathy in Type 2 diabetes. Methods: In 265 Southern Chinese with Type 2 diabetes, the 5'-ALR2 polymorphism was identified in genomic DNA using polymerase chain reaction and automated fluorescent scanning. They were classified as normoalbuminuric (n = 128), microalbuminuric (n = 85) or albuminuric (n = 52) according to the mean albumin excretion rate of two 12-h overnight collections. Results: The 5'-ALR2 allele and genotype distributions differed significantly among the three groups of patients (P < 0.003 and P < 0.01, respectively). Normoalbuminuric patients had the lowest Z-2 allele frequency (17.6% vs. 28.2% and 23.1% for microalbuminuric and albuminuric patients, respectively) and the highest Z+2 allele frequency (36.7% vs. 21.2% and 23.1% in microalbuminuric and albuminuric patients, respectively). They also had the lowest Z-2/X genotype frequency (X = any allele other than Z+2): 18.8% vs. 36.5% in microalbuminuric (P < 0.01) and 38.5% in albuminuric patients (P < 0.02), respectively, but the highest Z+2/Y genotype frequency (Y = any allele other than Z-2): 50.7% vs. 27.0% and 34.6% in microalbuminuric (P < 0.001) and albuminuric patients, respectively. In a multiple logistic regression model, the Z-2/X genotype (odds ratio 3.10; P < 0.025) was an independent risk factor for diabetic nephropathy (microalbuminuria or albuminuria), together with age, mean arterial pressure and body mass index. Conclusions: The 5'-ALR2 dinucleotide repeat polymorphism is associated with the development of diabetic nephropathy in Southern Chinese with Type 2 diabetes.
abstract_id: PUBMED:15270790 Association of aldose reductase gene Z+2 polymorphism with reduced susceptibility to diabetic nephropathy in Caucasian Type 1 diabetic patients. Aims: The Z-2 allele of the (AC)n polymorphism in the aldose reductase gene (ALR2) confers increased risk of microvascular diabetic complications, whereas the Z+2 allele has been proposed to be a marker of protection. However, data are conflicting. Therefore, we investigated whether this polymorphism is associated with diabetic nephropathy and retinopathy in Type 1 diabetes mellitus in a large case-control study and a family-based analysis. Methods: A total of 431 Type 1 diabetic patients with diabetic nephropathy and 468 patients with longstanding Type 1 diabetes and persistent normoalbuminuria were genotyped for the case-control study. In addition, 102 case trios and 98 control trios were genotyped for a family-based study. Results: Thirteen different alleles were identified.
In the case-control study, the Z+2 allele frequency was significantly higher in normoalbuminuric diabetic patients than in patients with diabetic nephropathy (0.17 vs. 0.11, P = 0.008), suggesting a protective function of the Z+2 allele. No significant increase in the frequency of the putative risk allele Z-2 was found in patients with diabetic nephropathy vs. controls (0.39 vs. 0.36). No association with diabetic retinopathy was found. Although the results of the transmission of the Z-2 and Z+2 alleles in the independent family-based study were consistent with the association study, the number of informative families was limited and thus differences were not statistically significant. Conclusions: The Z+2 allele of the ALR2 promoter polymorphism is associated with reduced susceptibility to diabetic nephropathy in Danish Type 1 diabetic patients, suggesting a minor role for the polyol pathway in the pathogenesis of diabetic kidney disease. No association of the ALR2 polymorphism with diabetic retinopathy was found.
abstract_id: PUBMED:16620264 Variants in the gene encoding aldose reductase (AKR1B1) and diabetic nephropathy in American Indians. Aims: The aldose reductase gene (AKR1B1) is a strong candidate for diabetic nephropathy, and the T allele at rs759853 and the Z-2 allele at an [AC]n microsatellite are associated with diabetic kidney disease in some populations. As AKR1B1 is located on 7q35, where we have previously reported linkage to diabetic nephropathy in Pima Indians, this study examined the association of AKR1B1 variants with diabetic nephropathy in this population. Methods: AKR1B1 variants were identified by sequencing and genotyped using allelic discrimination and pyrosequencing. Genotype distributions were compared between 107 cases with diabetic end-stage renal disease and 108 control subjects with diabetes for ≥10 years and no evidence of nephropathy, and between 141 individuals with nephropathy and 416 individuals without heavy proteinuria in a family study of 257 sibships. Results: We identified 11 AKR1B1 single nucleotide polymorphisms (SNPs) and the [AC]n microsatellite polymorphism. Three SNPs were rare and two were in 100% genotypic concordance; thus, eight polymorphisms were genotyped. No variant was associated with diabetic kidney disease in the case-control or family-based study. For example, the T allele at rs759853 had an allele frequency of 0.165 in cases and 0.171 in control subjects (OR = 0.96, 95% CI 0.57-1.59, P = 0.86); in the family study its frequency was 0.140 and 0.169 in affected and unaffected individuals, respectively (OR = 0.90, 95% CI 0.53-1.54, P = 0.71). Corresponding values for the Z-2 allele at the [AC]n microsatellite were OR = 1.09 (95% CI 0.72-1.66, P = 0.67) and OR = 1.25 (95% CI 0.81-1.95, P = 0.31) in the case-control and family studies, respectively. Conclusions: Common AKR1B1 polymorphisms are unlikely to be major determinants of diabetic nephropathy in this population.
abstract_id: PUBMED:18434430 Association between (AC)n dinucleotide repeat polymorphism at the 5'-end of the aldose reductase gene and diabetic nephropathy: a meta-analysis. A meta-analysis of the association between the (AC)n dinucleotide repeat polymorphism at the 5'-end of the aldose reductase gene and the occurrence of diabetic nephropathy was conducted.
We examined eight studies consisting of ten Caucasian type 1 diabetes mellitus case-control comparisons and eight studies consisting of nine type 2 diabetes mellitus case-control comparisons that met our inclusion criterion and were available in the literature. The meta-analysis demonstrated large heterogeneity among the studies of type 1 diabetic subjects, and a significant association was observed between the (AC)n dinucleotide repeat polymorphism at the 5'-end of the aldose reductase gene and diabetic nephropathy. The Z-2 allele appeared to be a genetic risk factor for susceptibility to diabetic nephropathy, with a random-effects odds ratio (OR) of 1.40 (95% confidence interval, CI, 1.07-1.84). The Z+2 allele showed a protective effect against diabetic nephropathy, with a random-effects OR of 0.77 (95% CI 0.65-0.91). The meta-analysis, however, showed no association between the genetic polymorphism and diabetic nephropathy in type 2 diabetic subjects. Neither the risk Z-2 allele nor the protective Z+2 allele identified in type 1 diabetic subjects appeared to have an effect on nephropathy in type 2 diabetic subjects, with fixed-effects ORs of 1.09 (95% CI 0.96-1.22) and 0.88 (95% CI 0.67-1.15), respectively. The current meta-analysis demonstrated a correlation between the (AC)n dinucleotide repeat polymorphism and the occurrence of diabetic nephropathy in Caucasian type 1 diabetic subjects, in contrast to the type 2 diabetic population, in which such an association could not be demonstrated.
abstract_id: PUBMED:12871136 Polymorphisms of the aldose reductase gene and susceptibility to diabetic microvascular complications. Diabetes is a major cause of mortality and morbidity due to the long-term microvascular complications of this disease. There is now convincing evidence that genetic factors, together with elevated blood glucose, play an important role in susceptibility to diabetic nephropathy as well as retinopathy. The polyol pathway is thought to play an important role in the pathogenesis of diabetic microvascular complications. Aldose reductase is the first and rate-limiting enzyme of the polyol pathway. Polymorphisms in the promoter region, as well as elsewhere in the gene, have been associated with susceptibility to nephropathy and retinopathy as well as diabetic neuropathy. These associations have been replicated in patients with either type 1 or type 2 diabetes mellitus as well as across ethnic groups. The polymorphisms in the promoter region are also associated with expression of the gene. Although clinical trials using inhibitors of aldose reductase to treat diabetic microvascular complications have largely been unsuccessful, the identification of susceptibility genes may help in the design of future drug regimens.
abstract_id: PUBMED:11796181 (AC)n polymorphism of aldose reductase gene and diabetic microvascular complications in type 2 diabetes mellitus. Recent studies suggest that the gene encoding aldose reductase, the enzyme that converts glucose to sorbitol, may confer susceptibility to microvascular disease. The aim of this study, therefore, was to investigate the relationship between the aldose reductase gene and type 2 diabetic microvascular complications such as diabetic nephropathy and retinopathy. DNA from 127 Korean patients with type 2 diabetes was typed for an (AC)n dinucleotide repeat polymorphic marker at the 5'-end of the aldose reductase gene using polymerase chain reaction.
No significant difference in the frequency of the putative risk allele Z-2 was found between the nephropathy and retinopathy groups and the uncomplicated group (32.2% and 34.1% vs. 25.1%, respectively, P > 0.05). Similarly, no difference was found in the frequency of the putative protective allele Z+2 among any of the study groups. In conclusion, the results of the study in Korean type 2 diabetic patients do not support the hypothesis that polymorphism at the 5'-end of the aldose reductase gene contributes to susceptibility to diabetic microvascular complications.
Answer: The rs759853 polymorphism in the promoter of the aldose reductase (AR) gene has been investigated as a potential risk factor for diabetic nephropathy (DN). A meta-analysis including twelve comparisons and 4,735 individuals from nine published case-control or cohort studies found significant associations between the AR rs759853 polymorphism and susceptibility to DN in both type 1 and type 2 diabetes in all genetic models (PUBMED:25885804). This suggests that the AR rs759853 polymorphism may correlate with an increased risk of developing DN. However, the same study also noted that the data do not support an association between this DNA variation and the progression of DN once it has developed. In contrast, another study involving American Indians did not find an association between common AKR1B1 polymorphisms, including the rs759853 variant, and diabetic kidney disease in a case-control or family-based study (PUBMED:16620264). This indicates that the rs759853 polymorphism may not be a major determinant of DN in this particular population. Overall, while there is evidence to suggest that the rs759853 polymorphism may be associated with an increased risk of DN, the strength of this association may vary among different populations and the polymorphism may not necessarily influence the progression of the disease. Further research is needed to fully understand the role of this genetic variation in the development and progression of DN across diverse populations.
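For illustration: the meta-analyses in this item (PUBMED:25885804, PUBMED:18434430) pool per-study odds ratios under fixed- and random-effects models. A minimal numpy sketch of that arithmetic follows, assuming each study is summarized by an OR and its 95% CI; the input values below are placeholders, not the studies' actual data.

```python
# Inverse-variance fixed-effect and DerSimonian-Laird random-effects pooling
# of odds ratios. The per-study ORs/CIs here are illustrative placeholders.
import numpy as np

or_ci = np.array([      # columns: OR, lower 95% CI, upper 95% CI
    [1.52, 1.10, 2.10],
    [1.21, 0.85, 1.72],
    [1.78, 1.25, 2.53],
])

y = np.log(or_ci[:, 0])                                  # log odds ratios
se = (np.log(or_ci[:, 2]) - np.log(or_ci[:, 1])) / (2 * 1.96)
w = 1.0 / se**2                                          # inverse-variance weights

y_fixed = np.sum(w * y) / np.sum(w)                      # fixed-effect estimate

# Cochran's Q, I^2 and DerSimonian-Laird tau^2 quantify heterogeneity
k = len(y)
Q = np.sum(w * (y - y_fixed) ** 2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
i2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0

w_re = 1.0 / (se**2 + tau2)                              # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
ci_re = y_re + np.array([-1.96, 1.96]) / np.sqrt(np.sum(w_re))

print(f"fixed-effect OR = {np.exp(y_fixed):.2f}")
print(f"I^2 = {i2:.0f}%, tau^2 = {tau2:.3f}")
print(f"random-effects OR = {np.exp(y_re):.2f}, 95% CI {np.exp(ci_re).round(2)}")
```

When tau^2 is zero the two estimates coincide, which is why the abstract's "from none to large heterogeneity" justifies reporting both fixed- and random-effects models.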
Instruction: Do ongoing lifestyle disruptions differ across cancer types after the conclusion of cancer treatment?
Abstracts:
abstract_id: PUBMED:21174161 Do ongoing lifestyle disruptions differ across cancer types after the conclusion of cancer treatment? Introduction: Cancer interferes with participation in valued lifestyle activities (illness intrusiveness) throughout post-treatment survivorship. We investigated whether illness intrusiveness differs across life domains among survivors with diverse cancers. Intrusiveness should be highest in activities requiring physical/cognitive functioning (instrumental domain). Intrusiveness into relationship/sexual functioning (intimacy domain) should be higher in prostate, breast, and gastrointestinal cancers than in others. Methods: Cancer outpatients (N = 656; 51% men) completed the Illness Intrusiveness Ratings Scale (IIRS) during follow-up. We compared IIRS Instrumental, Intimacy, and Relationships and Personal Development (RPD) subscale and total scores across gastrointestinal, lung, lymphoma, head and neck, prostate (men), and breast cancers (women), comparing men and women separately. Results: Instrumental subscale scores (M(men) = 3.05-3.80, M(women) = 3.02-3.63) were highest for all groups except prostate cancer. Men with prostate cancer scored higher on Intimacy (M = 3.40) than Instrumental (M = 2.48) or RPD (M = 1.59), p's < .05; their Intimacy scores did not differ from those of men with gastrointestinal or lung cancer. Women collectively showed higher Instrumental (M = 3.39) than Intimacy (M = 2.49) or RPD scores (M = 2.27), p's < .001, but not the hypothesized group difference in Intimacy. Conclusions: Post-treatment survivors continue to experience some long-term interference with activities requiring physical and cognitive functioning. Sexual adjustment may be of special concern to men when treatments involve genitourinary functioning. Implications For Cancer Survivors: Ongoing monitoring with the IIRS to detect lifestyle interference throughout survivorship may enhance quality of life. Screening and intervention should target particular life domains rather than global interference.
abstract_id: PUBMED:37168365 ERBB family fusions are recurrent and actionable oncogenic targets across cancer types. Purpose: Gene fusions involving receptor tyrosine kinases (RTKs) define an important class of genomic alterations, with many successful targeted therapies now approved for ALK, ROS1, RET and NTRK gene fusions. Fusions involving the ERBB family of RTKs have been sporadically reported, but their frequency has not yet been comprehensively analyzed, and functional characterization is lacking for many types of ERBB fusions. Materials And Methods: We analyzed tumor samples submitted to Caris Life Sciences (n=64,354), as well as the TCGA (n=10,967), MSK IMPACT (n=10,945) and AACR GENIE (n=96,324) databases, for evidence of EGFR, ERBB2 and ERBB4 gene fusions. We also expressed several novel fusions in cancer cell lines and analyzed their response to EGFR and HER2 tyrosine kinase inhibitors (TKIs). Results: In total, we identified 1,251 ERBB family fusions, representing an incidence of approximately 0.7% across all cancer types. EGFR, ERBB2, and ERBB4 fusions were most frequently found in glioblastoma, breast cancer and ovarian cancer, respectively. We modeled two novel types of EGFR and ERBB2 fusions, one with a tethered kinase domain and the other with a tethered adapter protein.
Specifically, we expressed EGFR-ERBB4, EGFR-SHC1, ERBB2-GRB7 and ERBB2-SHC1 in cancer cell lines and demonstrated that they are oncogenic, regulate downstream signaling and are sensitive to small-molecule inhibition with EGFR and HER2 TKIs. Conclusions: We found that ERBB fusions are recurrent mutations that occur across multiple cancer types. We also establish that adapter-tethered and kinase-tethered fusions are oncogenic and can be inhibited with EGFR or HER2 inhibitors. We further propose a nomenclature system to categorize these fusions into several functional classes.
abstract_id: PUBMED:15996006 Context moderates illness-induced lifestyle disruptions across life domains: a test of the illness intrusiveness theoretical framework in six common cancers. The illness intrusiveness theoretical framework maintains that illness-induced lifestyle disruptions compromise quality of life in chronic life-threatening conditions and that this effect is moderated by social, psychological, and contextual factors. Considerable evidence indicates that lifestyle disruptions compromise quality of life in cancer and other diseases and that the effects differ across life domains. The hypothesis that contextual factors (e.g. age, education, income, stressful life events) moderate these effects has not been tested extensively. We investigated whether age, income, education, and/or recent stressful life events modify the experience of illness intrusiveness across three central life domains (Relationships and Personal Development, Intimacy, and Instrumental life) in six common cancers. A sample of 656 cancer outpatients with one of six common cancers (breast, prostate, lymphoma, lung, head and neck, and gastrointestinal; all n's > 100) completed the Illness Intrusiveness Ratings Scale while awaiting follow-up appointments with an oncologist. Results indicated statistically significant (all p's < 0.05) interactions involving each of the hypothesized moderator variables and the Life Domain factor. In each case, the greatest divergence was evident when illness intrusiveness involved instrumental life domains (e.g. work, finances, health, and active recreation). The findings substantiate the illness intrusiveness theoretical framework and support its relevance for people with cancer. The psychosocial impact of chronic life-threatening disease differs across life domains and depends on the context in which it is experienced.
abstract_id: PUBMED:32717103 Bayesian methods for the analysis of early-phase oncology basket trials with information borrowing across cancer types. Research in oncology has shifted its focus from the histological properties of tumors in a specific organ to specific genomic aberrations potentially shared by multiple cancer types. This motivates the basket trial, which assesses the efficacy of treatment simultaneously on multiple cancer types that have a common aberration. Although the assumption of homogeneous treatment effects seems reasonable given the shared aberration, in reality the treatment effect may vary by cancer type, and potentially only a subgroup of the cancer types responds to the treatment. Various approaches have been proposed to increase trial power by borrowing information across cancer types, which, however, tend to inflate the type I error rate. In this article, we review some representative Bayesian information borrowing methods for the analysis of early-phase basket trials.
We then propose a novel method called the Bayesian hierarchical model with a correlated prior (CBHM), which conducts more flexible borrowing across cancer types according to sample similarity. We conducted simulation studies to compare CBHM with independent analysis and three information-borrowing approaches: the conventional Bayesian hierarchical model, the EXNEX approach, and Liu's two-stage approach. Simulation results show that all information-borrowing approaches substantially improve the power of independent analysis if a large proportion of the cancer types truly respond to the treatment. Our proposed CBHM approach shows an advantage over the existing information-borrowing approaches, with power similar to that of EXNEX or Liu's approach but the potential to provide substantially better control of the type I error rate.
abstract_id: PUBMED:34515905 Medical care disruptions during the first six months of the COVID-19 pandemic: the experience of older breast cancer survivors. Purpose: Older cancer survivors required medical care during the COVID-19 pandemic, but there are limited data on medical care in this age group. Methods: We evaluated care disruptions in a longitudinal cohort of non-metastatic breast cancer survivors aged 60-98 from five US regions (n = 321). Survivors completed a web-based or telephone survey from May 27, 2020 to September 11, 2020. Care disruptions included interruptions in seeing or speaking to doctors, receiving medical treatment or supportive therapies, or filling prescriptions since the pandemic began. Logistic regression models evaluated associations between care disruptions and education, medical, psychosocial, and COVID-19-related factors. Multivariate models included age, county COVID-19 death rates, comorbidity, and post-diagnosis time. Results: There was a high response rate (n = 262, 81.6%). Survivors were 32.2 months post-diagnosis (SD 17.5, range 4-73). Nearly half (48%) reported a medical disruption. The unadjusted odds of care disruptions were higher with each year of education (OR 1.22, 95% CI 1.08-1.37, p < 0.001) and with increased depression by CES-D score (OR 1.04, CI 1.003-1.08, p = 0.033), while increased tangible support decreased the odds of disruptions (OR 0.99, 95% CI 0.97-0.99, p = 0.012). There was a trend between disruptions and comorbidities (unadjusted OR 1.13 per comorbidity, 95% CI 0.99-1.29, p = 0.07). Adjusting for covariates, more years of education (OR 1.23, 95% CI 1.09-1.39, p = 0.001) and tangible social support (OR 0.98, 95% CI 0.97-1.00, p = 0.006) remained significantly associated with having care disruptions. Conclusion: Older breast cancer survivors reported high rates of medical care disruptions during the COVID-19 pandemic, and psychosocial factors were associated with care disruptions. ClinicalTrials.gov Identifier: NCT03451383.
abstract_id: PUBMED:36651198 Lifestyle, genetic risk and incidence of cancer: a prospective cohort study of 13 cancer types. Background: Genetic and lifestyle factors are associated with cancer risk. We investigated the association of adherence to World Cancer Research Fund (WCRF) lifestyle advice with the risk of 13 types of cancer, and whether these associations differ according to genetic risk, using data from the UK Biobank. Methods: In 2006-2010, participants aged 37-73 years had their lifestyle assessed and were followed up for incident cancers until 2015-2019. Analyses were restricted to those of White European ancestry with no prior history of malignant cancer (n = 195,822).
Polygenic risk scores (PRSs) were computed for 13 cancer types and these cancers combined ('overall cancer'), and a lifestyle index was calculated from WCRF recommendations. Associations with cancer incidence were estimated using Cox regression, adjusting for relevant confounders. Additive and multiplicative interactions between lifestyle index and PRSs were assessed. Results: There were 15 240 incident cancers during the 1 926 987 person-years of follow-up (median follow-up = 10.2 years). After adjusting for confounders, the lifestyle index was associated with a lower risk of overall cancer [hazard ratio per standard deviation increase (95% CI) = 0.89 (0.87, 0.90)] and of eight specific cancer types. There was no evidence of interactions on the multiplicative scale. There was evidence of additive interactions in risks for colorectal, breast, pancreatic, lung and bladder cancers, such that the recommended lifestyle was associated with greater change in absolute risk for persons at higher genetic risk (P < 0.0003 for all). Conclusions: The recommended lifestyle has beneficial associations with most cancers. In terms of absolute risk, the protective association is greater for higher genetic risk groups for some cancers. These findings have important implications for persons most genetically predisposed to those cancers and for targeted strategies for cancer prevention. abstract_id: PUBMED:36090472 Characteristics associated with healthcare disruptions during the COVID-19 pandemic for women in the United States at high risk for breast cancer. Delays in healthcare, including breast cancer screening, were documented during the coronavirus disease 2019 (COVID-19) pandemic. However, no studies have examined the impact of COVID-19 on healthcare among women at high (≥20 % lifetime) risk for breast cancer. This study fills that gap. Between August 2020 and January 2021, high-risk women (N = 225) living in the United States (US) completed an online survey assessing COVID-related healthcare disruptions. Descriptive statistics characterized the frequency of breast cancer screening (mammogram and breast magnetic resonance imaging [MRI]) since the beginning of the COVID-19 pandemic. Multivariable linear regression analysis with backward selection examined demographic characteristics associated with COVID-related healthcare disruptions. Since March 2020, 40 % of participants had received a mammogram and 12 % had received a screening breast MRI. On average, participants reported low levels of COVID-related healthcare disruptions (M = 1.97 on a 0-4 scale, higher = more disruptions). Participants who were younger (β = -0.21, p = 0.002) and not working (β = 0.18, p = 0.009) reported more COVID-related healthcare disruptions. Compared to non-Hispanic White participants, those from any other racial or ethnic group reported fewer COVID-related healthcare disruptions (β = -0.15, p = 0.020). Although few high-risk women received breast cancer screening after the declaration of the COVID-19 pandemic, they reported overall low levels of COVID-related healthcare disruptions. Results identify subgroups of high-risk women whose healthcare may have been more affected by the pandemic. Efforts to encourage US women at high risk for breast cancer to return to routine preventive care (including breast cancer screening) may need to be targeted towards women who are younger, not working, and non-Hispanic White. abstract_id: PUBMED:35242223 Journeys: understanding access, affordability and disruptions to cancer care in India. 
Background: Much of the global cancer burden is in low- and middle-income countries (LMICs). Along with the high incidence of cancer, most LMICs have unevenly distributed health care resources. This study is a qualitative exploration of the journey of patients accessing cancer care in India and their caregivers. Methods: The study followed a cross-sectional qualitative design. Participants were recruited by stratified purposive sampling, and all common cancers in India as reported by the GLOBOCAN database were included in the study. Consenting patients and their caregivers were interviewed using in-depth interview techniques. The data was analysed using principles of qualitative content analysis. Results: Cancer patients (n = 100) and their caregivers (n = 48) were interviewed for the study. The six themes that emerged were related to a) the journey of patients to access care, b) the psychological journey of patients, c) stigma of cancer patients, d) decision-making and adherence to treatment, e) economic costs of cancer care and its impact and f) modifiers to accessing cancer care. Conclusions: Planning and policymaking in the future of cancer care delivery need to consider the views expressed by the cancer patients and their caregivers as regards to access, adherence and disruptions to cancer care in India. Future policies will hopefully address some of the difficulties faced by patients. abstract_id: PUBMED:37670311 Socioeconomic status and lifestyle patterns in the most common cancer types-community-based research. Introduction: As the global burden of chronic cancer increases, its correlation to lifestyle, socioeconomic status (SES) and health equity becomes more important. The aim of the present study was to provide a snapshot of the socioeconomic and lifestyle patterns for different cancer types in patients at a Nordic tertiary cancer clinic. Materials And Methods: In a descriptive observational study, questionnaires addressed highest-attained educational level, occupational level, economy, relationship status, exposures, and lifestyle habits. The questionnaire was distributed to all cancer patients attending the cancer clinic. Treating physicians added further information about the cancer disease, including primary origin, pathology report, TNM-classification and stage. Results: Patients with lung cancer had the lowest SES, and patients with gastrointestinal (GI) cancer, other cancer types and prostate cancer had the second, third and fourth lowest SES, respectively. However, breast cancer patients had the highest SES. Lifestyle and exposure patterns differed among the major cancer types. Lung cancer patients reported the highest proportion of unfavourable lifestyle and exposure patterns, and patients with GI cancer, prostate cancer and other cancer types had the second, third and fourth highest proportion of unfavourable lifestyle and exposure patterns, respectively. The most favourable exposure and lifestyle patterns were observed in breast cancer patients. Conclusions: The present study indicated significant socioeconomic and lifestyle differences among cancer types at a Nordic cancer centre, with differences in lifestyle being more prominent than socioeconomic differences. abstract_id: PUBMED:24311950 Making lifestyle changes after colorectal cancer: insights for program development. 
Background: Healthy lifestyle behaviours may improve outcomes for people with colorectal cancer (CRC), but the intention to take action and to change those behaviours may vary with time and resource availability. We aimed to estimate the prevalence of current lifestyle behaviours in people with and without CRC in our community, and to identify their desire to change and their resource preferences. Methods: A mixed-methods survey was completed by people diagnosed with CRC who were pre-treatment (n = 54), undergoing treatment (n = 62), or done with treatment for less than 6 months (n = 67) or for more than 6 months (n = 178), and by people without cancer (n = 83). Results: Current lifestyle behaviours were similar in all groups, with the exception of vigorous physical activity levels, which were significantly lower in the pre-treatment and ongoing treatment respondents than in cancer-free respondents. Significantly more CRC respondents than respondents without cancer had made lifestyle changes. Among the CRC respondents, dietary change was the change most frequently made (39.3%), and increased physical activity was the change most frequently desired (39.1%). Respondents wanted to use complementary and alternative medicine (CAM), reading materials, self-efficacy, and group activities to make future changes. Conclusions: Resources for lifestyle change should be made available for people diagnosed with CRC, and should be tailored to address physical activity, CAM, and diet. Lifestyle programs offered throughout the cancer trajectory and beyond treatment completion might be well received by people with CRC. Answer: Yes, ongoing lifestyle disruptions do differ across cancer types after the conclusion of cancer treatment. The study reported in PUBMED:21174161 found that post-treatment survivors continue to experience some long-term interference with activities requiring physical and cognitive functioning. Specifically, the study found that instrumental subscale scores, which reflect activities requiring physical/cognitive functioning, were highest for all groups, except prostate cancer. Men with prostate cancer scored higher on intimacy than instrumental or relationships and personal development (RPD), indicating that sexual adjustment may be of special concern to men when treatments involve genitourinary functioning. Women collectively showed higher instrumental than intimacy or RPD scores, but the hypothesized group difference in intimacy was not observed. The study also suggests that ongoing monitoring with the Illness Intrusiveness Ratings Scale (IIRS) to detect lifestyle interference throughout survivorship may enhance quality of life, and that screening and intervention should target particular life domains rather than global interference. This indicates that the type of cancer and the domain of life affected by the treatment can lead to different patterns of lifestyle disruptions among survivors (PUBMED:21174161).
Instruction: Poor agreement between commercial ELISAs for plasma fetuin-A: An effect of protein glycosylation? Abstracts: abstract_id: PUBMED:20420817 Poor agreement between commercial ELISAs for plasma fetuin-A: An effect of protein glycosylation? Background: Fetuin-A is a circulating inhibitor of ectopic calcification. Low plasma levels have been associated in some studies with increased vascular calcification, aortic stiffness and mortality in patients with Chronic Kidney Disease (CKD). However, there are other studies examining the association of fetuin-A with vascular parameters and mortality, which do not show these associations. These conflicting data may be explained by methodological differences. Methods: We compared plasma fetuin-A measurements made with two widely used commercial fetuin-A ELISA kits (Biovendor, Modrice, Czech Republic; Epitope Diagnostics Inc., San Diego, US) in samples from patients with and without CKD. We evaluated the effect of differences in fetuin-A glycosylation status on assay specificity. Results: Deming regression analysis showed poor agreement between methods (for the CKD cohort: y = -0.05 + 2.52x, S(y|x) = 0.099 g/L, R² = 0.694). The Epitope Diagnostics kit demonstrated significant positive bias and greater specificity for deglycosylated fetuin-A relative to the Biovendor assay. Conclusion: The apparently contradictory nature of reports of the association of fetuin-A with biological variables may reflect differences in the specificity of different ELISA methods for glycosylated plasma fetuin-A. abstract_id: PUBMED:26555091 Human plasma protein N-glycosylation. Glycosylation is the most abundant and complex protein modification, and can have a profound structural and functional effect on the conjugate. The oligosaccharide fraction is recognized to be involved in multiple biological processes, and to affect proteins' physical properties, and has consequently been labeled a critical quality attribute of biopharmaceuticals. Additionally, due to recent advances in analytical methods and analysis software, glycosylation is targeted in the search for disease biomarkers for early diagnosis and patient stratification. Biofluids such as saliva, serum or plasma are of great use in this regard, as they are easily accessible and can provide relevant glycosylation information. Thus, as the assessment of protein glycosylation is becoming a major element in clinical and biopharmaceutical research, this review aims to convey the current state of knowledge on the N-glycosylation of the major plasma glycoproteins alpha-1-acid glycoprotein, alpha-1-antitrypsin, alpha-1B-glycoprotein, alpha-2-HS-glycoprotein, alpha-2-macroglobulin, antithrombin-III, apolipoprotein B-100, apolipoprotein D, apolipoprotein F, beta-2-glycoprotein 1, ceruloplasmin, fibrinogen, immunoglobulin (Ig) A, IgG, IgM, haptoglobin, hemopexin, histidine-rich glycoprotein, kininogen-1, serotransferrin, vitronectin, and zinc-alpha-2-glycoprotein. In addition, the less abundant immunoglobulins D and E are included because of their major relevance in immunology and biopharmaceutical research. Where available, the glycosylation is described in a site-specific manner. In the discussion, we put the glycosylation of individual proteins into perspective and speculate how the individual proteins may contribute to a total plasma N-glycosylation profile determined at the released glycan level.
abstract_id: PUBMED:24027776 Quantitative analysis of aberrant protein glycosylation in liver cancer plasma by AAL-enrichment and MRM mass spectrometry. A lectin-coupled mass spectrometry (MS) approach was employed to quantitatively monitor aberrant protein glycosylation in liver cancer plasma. To do this, we compared the difference in the total protein abundance of a target glycoprotein between hepatocellular carcinoma (HCC) plasmas and hepatitis B virus (HBV) plasmas, as well as the difference in lectin-specific protein glycoform abundance of the target glycoprotein. Capturing the lectin-specific protein glycoforms from a plasma sample was accomplished by using a fucose-specific Aleuria aurantia lectin (AAL) immobilized onto magnetic beads via a biotin-streptavidin conjugate. Following tryptic digestion of both the total plasma and its AAL-captured fraction of each HCC and HBV sample, targeted proteomic mass spectrometry was conducted quantitatively by a multiple reaction monitoring (MRM) technique. From the MRM-based analysis of the total plasmas and AAL-captured fractions, differences between HCC and HBV plasma groups in fucosylated glycoform levels of target glycoproteins were confirmed to arise from both the change in the total protein abundance of the target proteins and the change incurred by aberrant fucosylation on target glycoproteins in HCC plasma, even when no significant change occurs in the total protein abundance level. Combining the MRM-based analysis method with the lectin-capturing technique proved to be a successful means of quantitatively investigating aberrant protein glycosylation in cancer plasma samples. Additionally, it was elucidated that the differences between HCC and control groups in the fucosylated biomarker candidates A1AT and FETUA mainly originated from an increase in fucosylation levels on these target glycoproteins, rather than an increase in the total protein abundance of the target glycoproteins. abstract_id: PUBMED:27134828 Global serum glycoform profiling for the investigation of dystroglycanopathies & Congenital Disorders of Glycosylation. The Congenital Disorders of Glycosylation (CDG) are an expanding group of genetic disorders which encompass a spectrum of glycosylation defects of proteins and lipids, including N- & O-linked defects; among the latter are the muscular dystroglycanopathies (MD). Initial screening of CDG is usually based on the investigation of the glycoproteins transferrin and/or apolipoprotein CIII. These biomarkers do not always detect complex or subtle defects present in older patients; therefore, there is a need to investigate additional glycoproteins in some cases. We describe a sensitive 2D-Differential Gel Electrophoresis (DIGE) method that provides a global analysis of the serum glycoproteome. Patient samples from PMM2-CDG (n = 5), CDG-II (n = 7), MD and known complex N- & O-linked glycosylation defects (n = 3) were analysed by 2D DIGE. Using this technique we demonstrated characteristic changes in mass and charge in PMM2-CDG and in charge in CDG-II for α1-antitrypsin, α1-antichymotrypsin, α2-HS-glycoprotein, ceruloplasmin, and α1-acid glycoproteins 1&2. Analysis of the samples with known N- & O-linked defects identified a lower molecular weight glycoform of C1-esterase inhibitor that was not observed in the N-linked glycosylation disorders, indicating that the change is likely due to affected O-glycosylation. In addition, we could identify abnormal serum glycoproteins in LARGE and B3GALNT2-deficient muscular dystrophies.
The results demonstrate that the glycoform pattern varies for some CDG patients: not all glycoproteins are consistently affected, and analysis of more than one protein is warranted in complex cases. 2D DIGE is an ideal method to investigate the global glycoproteome and is a potentially powerful tool and secondary test for aiding the complex diagnosis and subclassification of CDG. The technique has further potential in monitoring patients for future treatment strategies. In an era of shifting emphasis from gel-based to mass-spectrometry-based proteomics techniques, we demonstrate that 2D-DIGE remains a powerful method for studying global changes in post-translational modifications of proteins. abstract_id: PUBMED:31097672 Glycoproteogenomics: A Frequent Gene Polymorphism Affects the Glycosylation Pattern of the Human Serum Fetuin/α-2-HS-Glycoprotein. Fetuin, also known as α-2-HS-glycoprotein (gene name: AHSG), is one of the more abundant glycoproteins secreted into the bloodstream. There are two frequently occurring alleles of human AHSG, resulting in three genotypes (AHSG*1, AHSG*2, and heterozygous AHSG1/2). The backbone amino acid sequences of fetuin coded by the AHSG*1 and AHSG*2 genes differ in two amino acids, including one known O-glycosylation site (aa position 256). Although fetuin levels have been extensively studied, the originating genotype is often ignored in such analyses. As fetuin has been suggested repeatedly as a potential biomarker for several disorders, the question of whether the gene polymorphism affects the fetuin profile is of great interest. In this work, we describe detailed proteoform profiles of fetuin, isolated from the serum of 10 healthy and 10 septic individuals, and investigate potential glycoproteogenomics correlations, e.g. how gene polymorphisms affect glycosylation. We established an efficient method for fetuin purification from individuals' serum using ion-exchange chromatography. Subsequently, we performed hybrid mass spectrometric approaches integrating data from native mass spectra and peptide-centric MS analysis. Our data reveal a crucial effect of the gene polymorphism on the glycosylation pattern of fetuin. Moreover, we clearly observed increased fucosylation in the samples derived from the septic patients. Our serum proteoform analysis, targeted at one protein obtained from 20 individuals, exposes the wide variability in proteoform profiles, which should be taken into consideration when using fetuin as a biomarker. Importantly, as shown here, quantitative proteoform profiles focused on a single protein or a few proteins can already provide ample data to classify individuals by genotype and disease state. abstract_id: PUBMED:36206263 Post-translational modifications glycosylation and phosphorylation of the major hepatic plasma protein fetuin-A are associated with CNS inflammation in children. Fetuin-A is a liver-derived plasma protein showing its highest serum concentrations in utero and in preterm infants and neonates. Fetuin-A is also present in cerebrospinal fluid (CSF). The origin of CSF fetuin-A, blood-derived via the blood-CSF barrier or synthesized intrathecally, is presently unclear. Fetuin-A prevents ectopic calcification by stabilizing calcium and phosphate as colloidal calciprotein particles, mediating their transport and clearance. Thus, fetuin-A plays a suppressive role in inflammation. Fetuin-A is a negative acute-phase protein under investigation as a biomarker for multiple sclerosis (MS).
Here we studied the association of pediatric inflammatory CNS diseases with fetuin-A glycosylation and phosphorylation. Paired blood and CSF samples from 66 children were included in the study. Concentration measurements were performed using a commercial human fetuin-A/AHSG ELISA. Of 60 pairs, 23 pairs were analyzed by SDS-PAGE following glycosidase digestion with PNGase-F and Sialidase-AU. Phosphorylation was analyzed in 43 pairs by Phos-Tag™ acrylamide electrophoresis following alkaline phosphatase digestion. Mean serum and CSF fetuin-A levels were 0.30 ± 0.06 mg/ml and 0.644 ± 0.55 μg/ml, respectively. This study showed that serum fetuin-A levels decreased in inflammation, corroborating its role as a negative acute-phase protein. Blood-CSF barrier disruption was associated with elevated fetuin-A in CSF. A strong positive correlation was found between the CSF fetuin-A/serum fetuin-A quotient and the CSF albumin/serum albumin quotient, suggesting predominantly transport across the blood-CSF barrier rather than intrathecal fetuin-A synthesis. Sialidase digestion showed increased asialofetuin-A levels in serum and CSF samples from children with neuroinflammatory diseases. Desialylation enhanced hepatic fetuin-A clearance via the asialoglycoprotein receptor, thus rapidly reducing serum levels during inflammation. Phosphorylation of fetuin-A was more abundant in serum samples than in CSF, suggesting that phosphorylation may regulate fetuin-A influx into the CNS. These results may help establish fetuin-A as a potential biomarker for neuroinflammatory diseases. abstract_id: PUBMED:29966421 Similar Albeit Not the Same: In-Depth Analysis of Proteoforms of Human Serum, Bovine Serum, and Recombinant Human Fetuin. Fetuin, also known as alpha-2-Heremans Schmid glycoprotein (AHSG), is among the most abundant glycoproteins secreted into the bloodstream. In blood, fetuins exhibit functions as carriers of metals and small molecules. Bovine fetuin, which harbors 3 N-glycosylation sites and a suggested half dozen O-glycosylation sites, has often been used as a model glycoprotein to test novel analytical workflows in glycoproteomics. Here we characterize and compare fetuin in depth, using protein from three different biological sources: human serum, bovine serum, and recombinant human fetuin expressed in HEK-293 cells, with the aim of elucidating similarities and differences between these proteins and the post-translational modifications they harbor. Combining data from high-resolution native mass spectrometry and glycopeptide-centric LC-MS analysis, we qualitatively and quantitatively gather information on fetuin protein maturation, N-glycosylation, O-glycosylation, and phosphorylation. We provide direct experimental evidence that both the human serum and part of the recombinant proteins are processed into two chains (A and B) connected by a single interchain disulfide bridge, whereas bovine fetuin remains a single-chain protein. Although two N-glycosylation sites, one O-glycosylation site, and a phosphorylation site are conserved from bovine to human, the stoichiometry of the modifications and the specific glycoforms they harbor are quite distinct. Comparing serum and recombinant human fetuin, we observe that the serum protein harbors a much simpler proteoform profile, indicating that the recombinant protein is not ideally engineered to mimic human serum fetuin.
Comparing the proteoform profiles and post-translational modifications of human and bovine serum fetuin, we observe that, although the gene structures of these two proteins are alike, they represent quite distinct proteins when their glycoproteoform profile is also taken into consideration. abstract_id: PUBMED:29212317 Targeted Measurements of O- and N-Glycopeptides Show That Proteins in High Density Lipoprotein Particles Are Enriched with Specific Glycosylation Compared to Plasma. High density lipoprotein (HDL) particles are believed to be protective due to their inverse correlation with the prevalence of cardiovascular diseases. However, recent studies show that in some conditions such as heart disease and diabetes, HDL particles can become dysfunctional. Great attention has been directed toward HDL particle composition because the relative abundances of HDL constituents determine HDL's functional properties. A key factor to consider when studying the structure and composition of plasma particles is protein glycosylation. Here, we profile the O- and N-linked glycosylation of HDL-associated proteins, including the truncated form of Apo CIII, and their glycan heterogeneity in a site-specific manner. Apolipoprotein CIII, fetuin A, and alpha 1 antitrypsin are glycoproteins associated with lipoproteins and are implicated in many cardiovascular and other disease conditions. A targeted method (UHPLC-QQQ) was used to measure the glycoprotein concentrations and site-specific glycovariations of the proteins in human plasma and compared with HDL particles isolated from the same plasma samples. The proteins found in the plasma are differentially glycosylated compared to those isolated in HDL. The results of this study suggest that glycosylation may play a role in protein partitioning in the blood, with possible functional implications. abstract_id: PUBMED:33499061 Detection and Characterization of Phosphorylation, Glycosylation, and Fatty Acid Bound to Fetuin A in Human Blood. The hepatokine fetuin A (Fet A) has been associated with diverse pathological states such as insulin resistance, type 2 diabetes, macrovascular disease, and systemic ectopic and vascular calcification. Fet A may also play a role in tumor growth and metastasis. The biological activity of Fet A may be affected by various modifications, including phosphorylation, O- and N-glycosylation and fatty acid binding. We developed an antibody-based assay for the detection of Fet A phosphorylated at serine 312. The fatty acid pattern was determined by gas chromatography. Using the antibody, we found that the phosphorylation was stable in human plasma or serum at room temperature for 8 h. We observed that Fet A is present in several glycosylation forms in human plasma, but the extent of Ser312 phosphorylation was not associated with glycosylation. The phosphorylation pattern did not change during an oral glucose tolerance test (0-120 min). We further found that human Fet A preferentially binds saturated fatty acids (>90%) at the expense of mono- and polyunsaturated fatty acids. Our results indicate that different molecular species of Fet A are present in human plasma and that these different modifications may determine the different biological effects of Fet A. abstract_id: PUBMED:28287093 HDL Glycoprotein Composition and Site-Specific Glycosylation Differentiates Between Clinical Groups and Affects IL-6 Secretion in Lipopolysaccharide-Stimulated Monocytes.
The goal of this pilot study was to determine whether HDL glycoprotein composition affects HDL's immunomodulatory function. HDL were purified from healthy controls (n = 13), subjects with metabolic syndrome (MetS) (n = 13), and diabetic hemodialysis (HD) patients (n = 24). Concentrations of HDL-bound serum amyloid A (SAA), lipopolysaccharide binding protein (LBP), apolipoprotein A-I (ApoA-I), apolipoprotein C-III (ApoC-III), α-1-antitrypsin (A1AT), and α-2-HS-glycoprotein (A2HSG); and the site-specific glycovariations of ApoC-III, A1AT, and A2HSG were measured. Secretion of interleukin 6 (IL-6) in lipopolysaccharide-stimulated monocytes was used as a prototypical assay of HDL's immunomodulatory capacity. HDL from HD patients were enriched in SAA, LBP, ApoC-III, di-sialylated ApoC-III (ApoC-III2) and desialylated A2HSG. HDL that increased IL-6 secretion were enriched in ApoC-III, di-sialylated glycans at multiple A1AT glycosylation sites and desialylated A2HSG, and depleted in mono-sialylated ApoC-III (ApoC-III1). Subgroup analysis on HD patients who experienced an infectious hospitalization event within 60 days (HD+) (n = 12), vs. those with no event (HD-) (n = 12) showed that HDL from HD+ patients were enriched in SAA but had lower levels of sialylation across glycoproteins. Our results demonstrate that HDL glycoprotein composition, including the site-specific glycosylation, differentiate between clinical groups, correlate with HDL's immunomodulatory capacity, and may be predictive of HDL's ability to protect from infection. Answer: Yes, the poor agreement between commercial ELISAs for plasma fetuin-A measurements may indeed be an effect of protein glycosylation. A study comparing plasma fetuin-A measurements with two widely-used commercial ELISA kits found poor agreement between methods, with one kit showing significant positive bias and greater specificity for deglycosylated fetuin-A relative to the other assay. This suggests that the contradictory nature of reports on the association of fetuin-A with biological variables may reflect differences in the specificity of different ELISA methods for glycosylated plasma fetuin-A (PUBMED:20420817). Protein glycosylation is a complex and abundant modification that can significantly affect protein structure and function. It is involved in multiple biological processes and can influence the physical properties of proteins. Glycosylation is also targeted in the search for disease biomarkers for early diagnosis and patient stratification (PUBMED:26555091). The glycosylation pattern of proteins, including fetuin-A, can be altered in disease states, which may affect their detection by assays that are sensitive to glycosylation status. For instance, in liver cancer plasma, differences in fucosylated glycoform levels of target glycoproteins, including fetuin-A, were confirmed to arise from changes in total protein abundance and aberrant fucosylation (PUBMED:24027776). Moreover, gene polymorphisms can affect the glycosylation pattern of fetuin-A, as seen in a study that found a crucial effect of gene polymorphism on the glycosylation pattern of fetuin-A (PUBMED:31097672). Additionally, post-translational modifications such as glycosylation and phosphorylation of fetuin-A are associated with CNS inflammation in children, further indicating the importance of these modifications in the biological function and detection of fetuin-A (PUBMED:36206263). 
In summary, the variability in the glycosylation of fetuin-A can lead to discrepancies in its measurement by different ELISA kits, which may not account for the diversity in glycosylation patterns and could result in poor agreement between assays.
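For readers who want to see how the method-comparison statistic cited above is computed, below is a minimal Deming regression sketch in Python. The data are synthetic and purely illustrative (the deming helper, the simulated kit readings, and all parameter values are our assumptions, not the study's data); only the target fit, y = -0.05 + 2.52x, is taken from the PUBMED:20420817 abstract.

    import numpy as np

    def deming(x, y, delta=1.0):
        # Deming regression for method comparison. delta is the assumed
        # ratio of the two assays' measurement-error variances;
        # delta = 1 gives orthogonal regression.
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        sxx = np.var(x, ddof=1)
        syy = np.var(y, ddof=1)
        sxy = np.cov(x, y, ddof=1)[0, 1]
        slope = (syy - delta * sxx
                 + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
                 ) / (2 * sxy)
        intercept = y.mean() - slope * x.mean()
        return intercept, slope

    # Synthetic data: kit B reads roughly 2.5x kit A, echoing the reported
    # CKD-cohort fit y = -0.05 + 2.52x (fetuin-A in g/L; values made up).
    rng = np.random.default_rng(seed=0)
    true_level = rng.uniform(0.1, 0.6, size=60)          # latent fetuin-A
    kit_a = true_level + rng.normal(0.0, 0.02, size=60)  # assay A + noise
    kit_b = -0.05 + 2.52 * true_level + rng.normal(0.0, 0.05, size=60)

    b0, b1 = deming(kit_a, kit_b)
    print(f"Deming fit: y = {b0:.2f} + {b1:.2f}x")

A slope near 1 with an intercept near 0 would indicate agreement between the two kits; the slope near 2.5 recovered here reproduces the proportional bias that makes fetuin-A values from the two ELISAs non-interchangeable.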
Instruction: Does a third year of emergency medicine training make a difference? Abstracts: abstract_id: PUBMED:25825161 Implementing a third-year emergency medicine medical student curriculum. Background: Emergency medicine (EM) is commonly introduced in the fourth year of medical school because of a perceived need to have more experienced students in the complex and dynamic environment of the emergency department. However, there is no evidence supporting the optimal time or duration for an EM rotation, and a number of institutions offer third-year rotations. Objective: A recently published syllabus provides areas of knowledge, skills, and attitudes that third-year EM rotation directors can use to develop curricula. This article expands on that syllabus by providing a comprehensive curricular guide for the third-year medical student rotation with a focus on implementation. Discussion: Included are consensus-derived learning objectives, discussion of educational methods, considerations for implementation, and information on feedback and evaluation as proposed by the Clerkship Directors in Emergency Medicine Third-Year Curriculum Work Group. External validation results, derived from a survey of third-year rotation directors, are provided in the form of a content validity index for each content area. Conclusions: This consensus-derived curricular guide can be used by faculty who are developing or revising a third-year EM medical student rotation and provide guidance for implementing this curriculum at their institution. abstract_id: PUBMED:11421051 Does a third year of emergency medicine training make a difference? Historical cohort study of Queen's University graduates. Objective: To describe outcomes of a third-year residency (PGY-3) program in family medicine/emergency medicine in terms of its graduates' practice characteristics and their self-assessed preparedness for practising emergency medicine. Design: A questionnaire was sent to graduates of Queen's University's family medicine residency programs. Setting: Recent graduates' practices. Participants: All 30 graduates of Queen's University's Family Medicine/Emergency Medicine Program (PGY-3s) from 1988 to 1997 and 90 matched controls chosen randomly from among the 250 graduates of the 2-year family medicine residency program (PGY-2s) during the same period. Six of the 120 were excluded. Response rate was 89%. Main Outcome Measures: Current practice of family and emergency medicine, leadership activities in emergency medicine, self-assessment of preparedness to practise and to lead others at the end of training, self-report of frequency of emergency care situations in subsequent practice for which physicians felt unprepared by their training, and catchment population and "rurality" of location of current practice. Results: Compared with controls, more PGY-3s practiced and took leadership roles in emergency medicine in their hospitals and communities. At the end of their training, PGY-3s reported higher levels of preparedness for practicing and providing leadership in emergency medicine. Both groups reported the same frequency of encountering emergency situations in subsequent practice for which they felt inadequately prepared. Both groups practised in communities of similar size and location. Conclusion: Graduates of Queen's University's third-year emergency medicine program appear to practise in accordance with their extra training. 
abstract_id: PUBMED:27307472 The value of emergency medicine placements for postgraduate doctors: views of Foundation Year 2 doctors and training leads in the emergency department (ED). Objectives: To examine the delivery of postgraduate training in the emergency medicine setting and its impact on postgraduate doctor (Foundation Year 2) performance and competence. Methods: A national study in four emergency departments (EDs) in England between 2009 and 2010 was undertaken. Semistructured interviews with ED training leads (TLs) and focus groups with Foundation Year 2 (F2) doctors were carried out in each ED. Interviews and focus group data were analysed to compare the perspectives of F2 doctors and TLs on the delivery of training and performance and confidence of F2 doctors. Results: Interviews were carried out with eight TLs and focus groups with 30 F2s. F2 doctors and EDTLs agreed that ED was a valuable environment for F2 doctors to develop their competence, with exposure to a broad range of patients and the opportunity to make decisions about clinical care. Diverging views existed around competence and performance of F2s. F2 doctors had anxieties about decision-making (particularly discharging patients) and required regular feedback to feel confident in their care. TLs recognised a need for more supervision and support for F2 doctors but this was challenging in a busy, performance-led service. Conclusions: Emergency medicine placements were important in the development of confident and competent F2 doctors, particularly in the context of less clinical exposure in other specialty placements. However, there are competing tensions between elements of postgraduate learning and service delivery within emergency medicine that require addressing to enable trainees to optimally develop knowledge and skills in this environment. abstract_id: PUBMED:31594743 Three- vs. Four-Year Emergency Medicine Training Programs. Postgraduate training in emergency medicine (EM) varies in length among different programs. This fact creates a dilemma for applicants to the specialty of EM and prevents EM educators from reaching a consensus regarding the optimal length of training. Historically, EM training existed in the postgraduate year (PGY) 1-3, 2-4, and 1-4 formats, until the PGY 2-4 program became obsolete in 2011-2012. Currently, three-quarters of EM programs follow the PGY 1-3 format. In this article, we clarify for the applicants the main differences between the PGY 1-3 and PGY 1-4 formats. We also discuss the institutional, personal, and graduate considerations that explain why an institution or an individual would choose one format over the other. abstract_id: PUBMED:23871480 Sports medicine training and practice opportunities for emergency physicians. Background: There is growing interest among emergency physicians to seek additional training in Sports Medicine (SM) and to add it to their clinical practice. This presents unique training and practice management issues. The majority of Primary Care SM fellowship programs list that they will accept emergency physicians, and approximately one-third have already had an emergency physician as an SM fellow. Objectives: The objective of this article is to provide an overview of the key elements for emergency physicians to consider as they pursue SM career goals. Discussion: Training needs such as continuity of care as it pertains to the athlete, SM skills development, and practice management are reviewed. 
Practice challenges such as malpractice insurance and billing issues are discussed. Examples of several practice models are presented. Evolving trends in SM practice and training opportunities for emergency physicians are discussed as well. Conclusions: Sports Medicine is a viable career option for emergency physicians and may complement their skills set in the management of acute injuries. Practice and training opportunities will continue to evolve as this pathway into the practice of SM gains further recognition. abstract_id: PUBMED:30939625 An example of public health training in emergency medicine. There is growing recognition of the vital roles that Emergency Physicians can play in public health. Accordingly, there is a need for emergency medicine trainees to have opportunities to acquire relevant public health skillsets. This article provides an example of such a possibility. It describes the experience of an emergency medicine trainee in undertaking a 6 month training rotation at a Public Health Unit. The example illustrates the feasibility and benefits of a public health rotation for interested trainees, and offers recommendations for establishing such rotations. There are challenges, including eligibility requirements and funding. However, possible approaches exist at the individual and college levels to facilitate such training opportunities so that emergency medicine trainees can acquire necessary and valuable public health skills. abstract_id: PUBMED:32833035 Intensive care and emergency medicine as part of training in internal medicine Intensive care and emergency medicine are extremely important for the treatment of patients with acute and severe medical conditions. While other internal medicine specialties are progressively focused on specific organ systems or diseases, rotations of residents into the intensive care unit (ICU) or the emergency department (ED) are an important part of their training in general internal medicine. Here they can acquire many of the diagnostic and therapeutic skills required in their curriculum. All internal medicine residents in Germany therefore must complete at least a 6 month rotation in the ICU and the ED. For training programs in general internal medicine, the German Society for Medical Intensive Care and Emergency Medicine (Deutsche Gesellschaft für Internistische Intensivmedizin und Notfallmedizin) asks the German State Medical Councils to accept rotations in the ICU and the ED for 15 months each. After adding an ICU or ED fellowship, these internists would be ideally qualified for much needed attending and leadership positions in the ICUs and EDs and would have a better perspective for a scientific career in these fields. In addition, ICU and ED fellowships of internists from other specialties should be supported. abstract_id: PUBMED:35103444 Medical training programs: comparisons between family and community medicine, internal medicine, intensive care medicine, anesthesiology and critical care, and emergency medicine. Objectives: To analyze whether current medical residency training programs in Spain foster the acquisition of knowledge and skills that enable specialists to provide emergency and urgent medical care. Material And Methods: We studied documents related to the content of residency programs for the following 4 specialties in effect in June 2021: internal medicine, intensive care medicine, anesthesiology and critical care, and family and community medicine. 
Program descriptions were obtained from the website of the Spanish Ministry of Health and the professional associations for each specialty. The associations' unofficial drafts and proposed updates were also studied. For emergency and urgent medicine, we studied drafts and updates of proposals presented to the Ministry of Health by the Spanish Society of Emergency Medicine (SEMES) in 2010 and an updated version presented in 2021. The proposals were drafted by the SEMES committee to promote specialization in emergency medicine. Results: Existing residency programs treat the management of urgent events from diverse points of view related to hospital settings (internal medicine focusing on admitted patients, anesthesiology on patients in perioperative settings, and intensive care on those in their specialized units) or to primary care centers (family medicine), where they are an extension of routine care. The residents' rotations in emergency medicine vary. In anesthesiology, intensive care medicine, and family medicine, training occurs while residents are on call: in the first year for trainees in anesthesiology and intensive care medicine, but throughout the 4-year residency (with time decreasing each year) for trainees in family medicine. Internal medicine offers an initial 3-month rotation in emergency medicine in the early phase and then assigns on-call shifts, without specifying a number. The proposed program submitted by SEMES includes wide-ranging contact with emergencies in various hospital settings (observation ward, trauma care, and pediatric care). Training in out-of-hospital emergencies and in emergency and urgent care resource coordination centers is also included. Conclusion: Currently available residency training in Spain does not provide appropriate preparation for practicing emergency medicine. Emergency medicine has its own body of theory and settings for practice that differ from those of other specialties. Therefore, at present there is a training gap in Spain that should be filled by creating a specialization in emergency medicine. abstract_id: PUBMED:30778849 Why our doctors are leaving Irish emergency medicine training. Background: In recent years, attrition from Emergency Medicine (EM) training in Ireland has increased. Australian data illustrate that increasing numbers of Irish-trained doctors are embarking on EM training in Australia. This has implications for EM in Ireland, particularly for Emergency Departments already under strain. An adequate supply of qualified specialist EM doctors is essential to provide high-quality patient care. Aims: The aim of this study is to gain insights into the reasons for attrition from EM training in Ireland. Methods: EM trainees who exited EM training in Ireland in 2011-2016 were invited to complete a survey which included quantitative and free-text questions. Results: Of 43 doctors who had exited EM training, 71% responded, and although some respondents spoke positively about the speciality, overall their feedback illustrated levels of frustration and dissatisfaction with EM training in Ireland. Respondents exited their EM training programme due to a lack of training received, despite being formally registered on an Irish EM training scheme. The other factors raised included dissatisfaction with the general working conditions in EM in Ireland, with respondents highlighting heavy workloads, high work intensity, stress, staff shortages, and poor work-life balance.
Conclusions: Our findings indicate the need to improve training and working conditions in Emergency Medicine in Ireland. These improvements are necessary to reduce attrition and improve retention of EM staff. abstract_id: PUBMED:36721434 Clinical Outcomes Following the Implementation of a Novel One-Year Training Program in Emergency Medicine in Karachi, Pakistan. Background: Most Emergency Departments (EDs) in low- and middle-income countries (LMICs), particularly in Pakistan, are staffed by physicians not formally trained in Emergency Medicine (EM). As of January 2022, there were only 13 residency training programs in EM throughout all of Pakistan. Therefore, an intermediate solution, a one-year training program in EM, was developed to build capacity. Objective: To determine the impact of a novel training program in EM on clinical metrics and outcomes. Methods: The first cohort of a novel, one-year training program, the Certification Program in Emergency Medicine (CPEM), completed the program in June 2019. The program consisted of two arms: CPEM-Clinical (CPEM-C), which included physicians from the Indus Hospital and Health Network (IHHN) ED; and CPEM-Didactic (CPEM-D), which included physicians from EDs across Karachi. Both groups participated in weekly conferences, including didactics, small-group discussions, workshops, and journal clubs. CPEM-C learners also received clinical mentorship from local and international faculty. Mortality, length of stay (LOS), and time-to-evaluation, as well as metrics in four key areas (patients at risk for cardiovascular disease/acute coronary syndrome, sepsis, respiratory illness, and intra-abdominal trauma), were assessed before and after the initial cohort at IHHN and compared with other groups in IHHN. Findings And Conclusions: More than 125,000 patients were seen from July to December 2017 (pre-CPEM) and July to December 2019 (post-CPEM). Overall, there were significant improvements in all clinical metrics and outcomes, with the exception of LOS and time-to-evaluation, and a trend toward improved mortality. In comparing CPEM graduates to other groups in IHHN ED, most metrics and outcomes significantly improved or trended toward improvement, including mortality. Implementation of a medium-duration, intensive EM training program can help improve patient care and the development of EM as a new specialty in lower-resource settings. Answer: Yes, a third year of emergency medicine (EM) training does make a difference. A historical cohort study of Queen's University graduates who completed a third-year residency program in family medicine/emergency medicine showed that, compared with controls, more graduates of the third-year program practiced and took leadership roles in emergency medicine in their hospitals and communities. They also reported higher levels of preparedness for practicing and providing leadership in emergency medicine at the end of their training. However, both groups reported encountering emergency situations in subsequent practice for which they felt inadequately prepared at the same frequency, indicating that while the third year enhances preparedness, it may not cover all eventualities encountered in practice (PUBMED:11421051).
Furthermore, the implementation of a novel one-year training program in EM in Karachi, Pakistan, showed significant improvements in clinical metrics and outcomes, suggesting that additional training in EM can positively impact patient care and the development of EM as a specialty in lower-resource settings (PUBMED:36721434). These findings support the notion that extending the duration of EM training can enhance the competence and confidence of physicians in emergency settings, which is crucial given the dynamic and complex nature of emergency departments.
Instruction: Do family physicians, emergency department physicians, and pediatricians give consistent sport-related concussion management advice? Abstracts: abstract_id: PUBMED:24925947 Do family physicians, emergency department physicians, and pediatricians give consistent sport-related concussion management advice? Objective: To identify differences and gaps in recommendations to patients for the management of sport-related concussion among FPs, emergency department physicians (EDPs), and pediatricians. Design: A self-administered, multiple-choice survey was e-mailed to FPs, EDPs, and pediatricians. The survey had been assessed for content validity. Setting: Two community teaching hospitals in the greater Toronto area in Ontario. Participants: Two hundred seventy physicians, including FPs, EDPs, and pediatricians, were invited to participate. Main Outcome Measures: Identification of sources of concussion management information, usefulness of concussion diagnosis strategies, and whether physicians use common terminology when explaining cognitive rest strategies to patients after sport-related concussions. Results: The response rate was 43.7%. Surveys were completed by 70 FPs, 23 EDPs, and 11 pediatricians. In total, 49% of FP, 52% of EDP, and 27% of pediatrician respondents reported no knowledge of any consensus statements on concussion in sport, and 54% of FPs, 86% of EDPs, and 78% of pediatricians never used the Sport Concussion Assessment Tool, version 2. Only 49% of FPs, 57% of EDPs, and 36% of pediatricians always advised cognitive rest. Conclusion: This study identified large gaps in the knowledge of concussion guidelines and implementation of recommendations for treating patients with sport-related concussions. Although some physicians recommended physical and cognitive rest, a large proportion failed to consistently advise this strategy. Better knowledge transfer efforts should target all 3 groups of physicians. abstract_id: PUBMED:27303008 Are Canadian clinicians providing consistent sport-related concussion management advice? Objective: To compare the knowledge and use of recommendations for the management of sport-related concussion (SRC) among sport and exercise medicine physicians (SEMPs) and emergency department physicians (EDPs) to assess the success of SRC knowledge transfer across Canada. Design: A self-administered, multiple-choice survey accessed via e-mail by SEMPs and EDPs. The survey had been assessed for content validity. Setting: Canada. Participants: The survey was completed between May and July 2012 by SEMPs who had passed the diploma examination of the Canadian Academy of Sport and Exercise Medicine and by EDPs who did not hold this diploma. Main Outcome Measures: Knowledge and identification of sources of concussion management information, use of concussion diagnosis strategies, and whether physicians use common and consistent terminology when explaining cognitive rest strategies to patients after an SRC. Results: There was a response rate of 28% (305 of 1085). The SEMP and EDP response rates were 41% (147 of 360) and 22% (158 of 725), respectively. Of the responses, 41% of EDPs and 3% of SEMPs were unaware of any consensus statements on concussion in sport; 74% of SEMPs used the Sport Concussion Assessment Tool, version 2 (SCAT2), "usually or always," whereas 88% of EDPs never used the SCAT2. When queried about how cognitive rest could best be achieved after an SRC, no consistent answer was documented. 
Conclusion: Differences and a lack of consistency in the implementation of recommendations for SRC patients were identified for SEMPs and EDPs. It appears that the SCAT2 is used more in the SEMP setting than in the emergency context. Further knowledge transfer efforts and research should address the barriers to achieving more consistent advice given by physicians who attend SRC patients. abstract_id: PUBMED:29167908 Pilot single-centre cross-sectional study to determine emergency physicians' knowledge and management of sports concussion: an experience from Singapore. Introduction: Sports concussion remains challenging to manage despite changes to policy and practice since the 2012 International Consensus Conference on Concussion in Sport. Emergency physicians (EPs) are usually the first line of medical care for athletes in amateur and youth collision sports. This single-centre cross-sectional study aimed to establish EPs' understanding and management of concussion in Singapore. Methods: An anonymised, 17-item online questionnaire was sent to EPs requesting for information on their clinical experience, training, exposure to concussion cases in the emergency department (ED) and assessed knowledge of the condition. Results: Out of 65 EPs, 52 (80%) responded, 25 (48.1%) of whom were medical officers. Over 90% had not received formal training in concussion management, and 73.1% regularly assessed concussion. 40 (76.9%) EPs recognised that loss of consciousness was not essential for diagnosis and only 24 (46.2%) knew the most common symptom. 26 (50.0%) incorrectly reported that they would perform brain imaging. Among those who indicated onward referral, 29 (55.8%) would refer concussed patients to neurosurgery. There were no significant differences between clinical grade or training in concussion and positive responses for definition, imaging modality or most common symptom of concussion. Conclusion: Concussion is a common presentation to EDs in Singapore. However, understanding of the condition, its clinical diagnosis, investigation and onward management is limited. Although EPs reported training in concussion, it is likely to be insufficient. Commencing relevant education programmes for undergraduate and postgraduate medical students may enable progressive acquisition of knowledge and thereby improve patient management in the future. abstract_id: PUBMED:29521080 Concussion in sport: the importance of accurate and reliable discharge advice. Concussion in sport is a common presentation in urgent and emergency care settings, so it is essential that nurses have a full understanding of the condition. Most patients who attend an emergency department with concussion are discharged and discharge advice must be well-informed and evidence based. This article outlines the normal anatomy of the brain and the pathophysiology of concussion, and discusses the guidelines on returning to sport following this injury. abstract_id: PUBMED:23178923 Sport concussion knowledge base, clinical practises and needs for continuing medical education: a survey of family physicians and cross-border comparison. Context: Evolving concussion diagnosis/management tools and guidelines make Knowledge Transfer and Exchange (KTE) to practitioners challenging. Objective: Identify sports concussion knowledge base and practise patterns in two family physician populations; explore current/preferred methods of KTE. Design: A cross-sectional study. Setting: Family physicians in Alberta, Canada (CAN) and North/South Dakota, USA. 
Participants: CAN physicians were recruited by mail: 2.5% response rate (80/3154); US physicians through a database: 20% response rate (109/545). Intervention/Instrument: Online survey. Main and Secondary Outcome Measures: Diagnosis/management strategies for concussions, and current/preferred KTE. Results: Main reported aetiologies: sports/recreation (52.5% CAN); organised sports (76.5% US). Most physicians used clinical examination (93.8% CAN, 88.1% US); far fewer used the Sport Concussion Assessment Tool (SCAT1/SCAT2) and balance testing. More US physicians initially used concussion-grading scales (26.7% vs 8.8% CAN, p=0.002); computerised neurocognitive testing (19.8% vs 1.3% CAN; p<0.001) and Standardised Assessment of Concussion (SAC) (21.8% vs 7.5% CAN; p=0.008). Most prescribed physical rest (83.8% CAN, 75.5% US), while fewer recommended cognitive rest (47.5% CAN, 28.4% US; p=0.008). Return-to-play decisions were based primarily on clinical examination (89.1% US, 73.8% CAN; p=0.007); US physicians relied more on neurocognitive testing (29.7% vs 5.0% CAN; p<0.001) and recognised guidelines (63.4% vs 23.8% CAN; p<0.001). One-third of Canadian physicians received KTE from colleagues, websites and medical school training. Leading KTE preferences included Continuing Medical Education (CME) courses and online CME. Conclusions: Existing published recommendations regarding diagnosis/management of concussion are not always translated into practice, particularly the recommendation for cognitive rest; this calls for enhanced, innovative CME initiatives. abstract_id: PUBMED:30049779 Concussion management in combat sports: consensus statement from the Association of Ringside Physicians. Various organisations and experts have published numerous statements and recommendations regarding different aspects of sports-related concussion, including definition, presentation, treatment, management and return-to-play guidelines.1-7 To date, there have been no written consensus statements specific for combat sports regarding management of combatants who have suffered a concussion or for return to competition after a concussion. In combat sports, head contact is an objective of the sport itself. Accordingly, management and treatment of concussion in combat sports should, and must, be more stringent than for non-combat sports counterparts. The Association of Ringside Physicians (an international, non-profit organisation dedicated to the health and safety of the combat sports athlete) sets forth this consensus statement to establish management guidelines that ringside physicians, fighters, referees, trainers, promoters, sanctioning bodies and other healthcare professionals can use in the ringside setting. We also provide guidelines for the return of a combat sports athlete to competition after sustaining a concussion. This consensus statement does not address the management of moderate to severe forms of traumatic brain injury, such as intracranial bleeds, nor does it address the return to competition for combat sports athletes who have suffered such an injury. These more severe forms of brain injuries are beyond the scope of this statement. This consensus statement does not address neuroimaging guidelines in combat sports. abstract_id: PUBMED:31326205 Recommendations for the Emergency Department Prevention of Sport-Related Concussion. Sport-related concussion refers to the subset of concussive injuries occurring during sport activities.
Similar to concussion from nonsport mechanisms, sport-related concussion is associated with significant morbidity, including migrainous headaches, disruption in normal daily activities, and long-term depression and cognitive deficits. Unlike nonsport concussions, sport-related concussion may be uniquely amenable to prevention efforts to mitigate these problems. The emergency department (ED) visit for sport-related concussion represents an opportunity to reduce morbidity by timely diagnosis and management using best practices, and through education and counseling to prevent a subsequent sport-related concussion. This article provides recommendations to reduce sport-related concussion disability through primary, secondary, and tertiary preventive strategies enacted during the ED visit. Although many recommendations have a solid evidence base, several research gaps remain. The overarching goal of improving sport-related concussion outcome through enactment of ED-based prevention strategies needs to be explicitly studied. abstract_id: PUBMED:26584436 Concussion Management Practice Patterns Among Sports Medicine Physicians. Objective: The primary purpose of this study was to examine concussion management practice patterns among sports medicine physicians in the United States. Design: Cross-sectional study using a web-based survey. Participants: Members of the American Medical Society for Sports Medicine (AMSSM). Main Outcome Measures: We distributed a questionnaire to physician members of the AMSSM assessing the current practices for evaluating and managing concussions sustained during sports. Specifically, we asked respondents about their use of management guidelines, medications, balance assessments, neuropsychological tests, and return-to-play strategies. Results: Of the 3591 members emailed, 425 (11.8%) responded. Ninety-seven percent of respondents reported basing current management of sport-related concussion on a published set of criteria, with a majority (91.9%) following the guidelines provided by the Fourth International Conference on Concussion in Sport. Seventy-six percent of respondents reported using medication beyond 48 hours postinjury. Acetaminophen was reported as the most commonly administered medication, although tricyclic antidepressants and amantadine were also commonly administered. Vitamins, minerals, and dietary supplements were also reported as commonly administered. Most respondents reported using a form of neuropsychological testing (87.1%). A majority of respondents (88.6%) reported allowing athletes to return to competition after concussion only once the athlete becomes symptom free and completes a return-to-play protocol. Conclusions: Most sports medicine physicians seem to use recently developed guidelines for concussion management, regularly use medications and neuropsychological testing in management strategies, and follow established return-to-play guidelines. Clinical Relevance: Sports medicine physicians seem to have clinical expertise in the management of sport-related concussion. abstract_id: PUBMED:24925932 Family physicians can champion sport-related concussion management: it's about time. N/A abstract_id: PUBMED:34288793 Management of sport-related concussion in emergency departments in England: a multi-center study. Objective: To establish the current knowledge of sport-related concussion (SRC) management of clinicians in emergency departments in the North West of England.
Methods: A cross-sectional, multi-center study design was used and included emergency department clinicians (EDCs) across 15 centers in the North West of England. A 22-question survey was issued with questions focused on the 5th International Conference on Concussion Consensus Statement. Absolute numbers and percentages were presented, and comparisons in knowledge of guidelines and confidence between EDCs were made using the Kruskal-Wallis test. Results: Of an estimated 300 EDCs, 111 (37%) responded. Thirty (27%) were aware of the guidelines, whilst 63 (57%) had heard of a graduated return-to-sport protocol. Physical rest was advised by 106 (95%) respondents, with 68 (61%) advising cognitive rest and 47 (42%) providing written advice. There was no difference (p > .05) in awareness of SRC guidelines or confidence between clinician grades, with most receiving no SRC training. Conclusions: There is a lack of knowledge amongst EDCs in the North West of England in managing and providing discharge advice to patients with SRC. This is likely due to the limited awareness of guidelines and training, and therefore further education delivery on SRC management guidelines is required. Answer: Based on the provided abstracts, it appears that family physicians, emergency department physicians (EDPs), and pediatricians do not consistently give the same sport-related concussion management advice. Several studies have identified gaps in knowledge and differences in the implementation of concussion management recommendations among these groups of physicians. For instance, a study with participants from two community teaching hospitals in the greater Toronto area found that a significant number of family physicians (FPs), EDPs, and pediatricians reported no knowledge of any consensus statements on concussion in sport. Moreover, a large proportion of these physicians did not always advise cognitive rest, which is a key recommendation for managing sport-related concussions (PUBMED:24925947). A study comparing sport and exercise medicine physicians (SEMPs) and EDPs in Canada revealed that a substantial number of EDPs were unaware of consensus statements on concussion in sport, and there was no consistent answer documented when queried about how cognitive rest could best be achieved after a sport-related concussion (SRC). This indicates a lack of consistency in the advice given by physicians to SRC patients (PUBMED:27303008). A survey of family physicians in Alberta, Canada, and North/South Dakota, USA, also highlighted that existing recommendations regarding the diagnosis and management of concussion were not always translated into practice, particularly the recommendation for cognitive rest. This suggests a need for enhanced and innovative continuing medical education initiatives to improve knowledge transfer and practice consistency (PUBMED:23178923). In summary, the evidence suggests that there is inconsistency in the sport-related concussion management advice given by family physicians, emergency department physicians, and pediatricians, which may be due to gaps in knowledge, differences in the use of concussion management tools, and variability in the implementation of guidelines across different medical specialties.
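A note on the statistics quoted in these surveys: the cross-border comparisons above (e.g., cognitive rest recommended by 47.5% of Canadian vs 28.4% of US respondents, p=0.008) are standard two-proportion contrasts. The Python sketch below shows how such a comparison can be approximately reproduced; the counts are back-calculated from the reported percentages and response totals (80 CAN, 109 US), and the authors' exact denominators and choice of test are not stated, so this only approximates the published p-value.

from scipy.stats import chi2_contingency

# Illustrative counts reconstructed from PUBMED:23178923's reported percentages;
# these are approximations, not the study's raw data.
can_yes, can_n = 38, 80   # ~47.5% of 80 Canadian respondents advising cognitive rest
us_yes, us_n = 31, 109    # ~28.4% of 109 US respondents
table = [[can_yes, can_n - can_yes],
         [us_yes, us_n - us_yes]]
chi2, p, dof, expected = chi2_contingency(table)  # Yates-corrected chi-squared for a 2x2 table
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")          # p lands near the reported 0.008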
Instruction: Do ethnic patterns in cryptorchidism reflect those found in testicular cancer? Abstracts: abstract_id: PUBMED:23665269 Do ethnic patterns in cryptorchidism reflect those found in testicular cancer? Purpose: There are established variations in testicular cancer incidence between ethnic groups within countries. It is currently unclear whether the occurrence of cryptorchidism-a known risk factor for testicular cancer-follows similar patterns. In New Zealand Māori have unusually high rates of testicular cancer compared to individuals of European ancestry. We hypothesized that ethnic trends in the incidence of cryptorchidism would reflect those for testicular cancer in this setting. Materials And Methods: We followed 318,441 eligible male neonates born in New Zealand between 2000 and 2010 for the incidence of orchiopexy confirmed cryptorchidism and the incidence of known risk factors for cryptorchidism (low birth weight, short gestation, small size for gestational age) using routine maternity, hospitalization and mortality records. Logistic regression was used to calculate odds ratios for the presence of known risk factors for cryptorchidism by ethnic group. Poisson regression was used to calculate relative risk of cryptorchidism by ethnicity, adjusted for risk factors. Results: Ethnic patterns of cryptorchidism incidence in New Zealand closely mirrored those previously observed for testicular cancer. Māori had higher rates of cryptorchidism than all other ethnic groups (adjusted RR 1.2 [95% CI 1.11-1.3]), with Pacific (0.89 [0.8-0.99]) and Asian groups (0.68 [0.59-0.79]) having the lowest rates (European/other, referent). Conclusions: Since the principal risk factors for cryptorchidism are present in utero, the results of the current study strengthen the likelihood that the ethnic patterning of testicular cancer is at least partly due to prenatal risk factors. abstract_id: PUBMED:26566182 Ethnic patterns of hypospadias in New Zealand do not resemble those observed for cryptorchidism and testicular cancer: evidence of differential aetiology? It has been proposed that hypospadias, cryptorchidism, poor semen quality and testicular cancer might share common prenatal causes. We have previously demonstrated similar ethnic patterns for the incidence of testicular cancer and cryptorchidism - a known risk factor for testicular cancer. If the underlying exposure(s) that cause hypospadias, cryptorchidism and testicular cancer are shared, then we would expect the incidence relationship between ethnic groups to follow the same pattern across all three conditions. We followed a birth cohort of 318 345 eligible male neonates born in New Zealand between 2000-2010, and linked routinely collected maternity records with inpatient hospitalization and mortality records through to 2011. We searched hospitalization records for diagnoses of hypospadias, and used mortality records for censoring. We used Poisson regression methods to compare the relative risk of hypospadias between ethnic groups, adjusting for perinatal risk factors and total person time. We observed that European/Other children had the highest risk of hypospadias, with Māori, Pacific and Asian boys having around 40% lower risk of disease compared with this group (adjusted relative risk [RR]: Māori 0.62, 95% CI 0.55-0.70; Pacific 0.62, 95% CI 0.53-0.72; Asian 0.57, 95% CI 0.47-0.69). This contrasts substantially with our previous observations for cryptorchidism and testicular cancer, where Māori males have the greatest risk. 
Our observations suggest that - at least in New Zealand - the exposures that drive the development of hypospadias may differ from those that drive the development of cryptorchidism and/or testicular cancer. abstract_id: PUBMED:34736726 Referral patterns for undescended testis: A 7 year comparative analysis of primary care providers. Introduction: Cryptorchidism, or undescended testis (UDT), is identified in 1% of boys by one year of age and carries long term risks of infertility and testicular neoplasia. In 2014, the American Urological Association (AUA) released a guideline statement stating that patients with UDT should be referred to a urologist by 6 months of age in order to facilitate timely surgical correction. This study is the follow-up to a 2010 study assessing referral patterns to our university center from primary care providers. Objective: In this new study, we aim to identify changes in referral patterns in response to the establishment of the 2014 AUA guidelines and to understand how our referring physicians stay abreast of current knowledge regarding UDT. Study Design: A 9-question anonymous survey regarding UDT referral patterns was sent to providers who had previously referred a patient to our pediatric urology practice. The results were categorized by specialty and were compared to the similar survey from 2010. Results: Surveys were sent to 500 physicians with 138 (27.6%) responses received. Less than half of respondents reported that they would refer a boy with unilateral or bilateral palpable UDT by 6 months of age (37.0% and 38.4% respectively). This was not significantly different than the 2010 survey (p = 0.68 and 0.27 respectively). Two-thirds of physicians would refer a patient with unilateral nonpalpable UDT within the recommended time frame (68.8%); this was also unchanged from 2010 (p = 0.87). There was an improvement in respondents who would refer immediately for bilateral nonpalpable testes from 49.8% in 2010 to 53.6% in 2017 (p = 0.01). Residency training was most commonly cited as the primary source of knowledge regarding UDT although 89.3% of respondents citing this were >5 years removed from residency training. Discussion: Delayed referral patterns were reported by the majority of providers for palpable UDT and by greater than one-third of providers for nonpalpable UDT. There was minimal change in referral patterns between 2010 and 2017 despite the release of the AUA cryptorchidism guidelines in 2014. In both 2010 and 2017, residency training was identified as the primary source of knowledge regarding management of UDT. Conclusion: These findings suggest an unmet need for education regarding contemporary management of UDT for the primary care physicians in our community. abstract_id: PUBMED:29572060 A Survey of the Current Practice Patterns of Contralateral Testis Fixation in Unilateral Testicular Conditions. Objective: To query the current contralateral testis fixation (CTF) practice patterns among pediatric urologists in different clinical situations that could result in monorchism. Methods: An online survey was sent to members of the Urology Section of the American Academy of Pediatrics. The survey included questions addressing CTF practice patterns in 14 clinical scenarios. Responses were anonymously submitted, blindly reviewed, and analyzed. Results: Among 53 respondents, 62.3% had academic appointments and 73.6% had an exclusive pediatric urology practice. All participants agreed on CTF necessity in testicular torsion beyond the neonatal period.
CTF was advocated by 84.9% in prenatal torsion, 96.2% in postnatal torsion, and 94.3% in delayed torsion presentation. Emergent intervention was favored by 64.4% in prenatal and 98% in postnatal torsion. Only 1 participant (1.9%) preferred CTF with a unilateral testicular tumor and 5 (9.4%) in trauma substantiating an orchiectomy. There was less consensus on CTF in torsed undescended testis (79.3% in prepubertal and 81.13% in postpubertal), testicular nubbin in a child (40.4%), palpable atrophic undescended testis (13.2%), and unilateral bell-clapper anomaly (47.2%). In situations other than torsion, lack of strong evidence was the commonest reason not to perform CTF. Conclusion: The majority of responding pediatric urologists currently performs CTF in neonatal torsion. Although there is a general consensus on CTF in testicular torsion outside the neonatal period, CTF remains controversial in other clinical situations, warranting further research. The decision for CTF should involve patients, parents, and treating physicians. abstract_id: PUBMED:22552759 Practice variation and clinical confusion regarding undescended testes and retractile testes among primary care respondents: a multi-regional survey study in the United States. Purpose: Although previous studies have looked at referral patterns and indications for orchiopexy, this is the first attempt at quantifying the primary care provider knowledge base of cryptorchidism and its implications in a large multi-state setting in the United States. Methods: Primary care providers (PCPs) (n = 3,000) were invited to complete a web-based survey regarding cryptorchidism. We evaluated: practice setting; training; knowledge of cryptorchidism; relationship to pediatric urologists and surgeons; understanding of the relationship between cryptorchidism, infertility and testis cancer. Results: Seventeen percent (n = 453) of PCPs responded to the survey. Fifty-three percent indicated that they had minimal to no exposure to pediatric urology during training. Two-thirds refer patients with retractile testes to surgical specialists. Practice setting was associated with the type of information families received about the impact of UDT on fertility and malignancy with rural practices being more likely to counsel that unilateral UDT imparted a high risk of malignancy (RR 1.5; 95 % CI 1.2, 1.9) and bilateral UDT resulted in likely infertility (RR 2.1; 95 % CI 1.5, 2.8). Conclusions: This study underscores the need for increased evidence-based recommendations, as well as improved training and education of PCPs in the US who deal with disorders of testicular descent. abstract_id: PUBMED:26692683 Intra-abdominal seminoma found incidentally during trauma workup in a man with bilateral cryptorchidism. Bilateral cryptorchidism is a rare occurrence and seminoma is the most common germ cell tumor found in undescended testes when they occur. We present the case of a patient with bilateral cryptorchidism who presented to our trauma center after a motor vehicle collision and was found incidentally to have a 17-cm intra-abdominal mass. The mass was subsequently biopsied and proven to be seminoma. The patient completed three cycles of bleomycin/etoposide/cisplatin chemotherapy and successfully underwent a postchemo retroperitoneal lymph node dissection with no viable residual tumor or positive lymph nodes found in the surgical specimen. He also had an orchiopexy of the contralateral testicle. 
The patient recovered fully and has been found to be recurrence-free four months postoperatively. We highlight the importance of cisplatin-based chemotherapy and extensive tumor resection as the mainstay of initial cancer control. abstract_id: PUBMED:2875786 Incidence and descriptive features of testicular cancer among United States whites, blacks, and Hispanics, 1973-1982. This is a descriptive epidemiologic report based on over 3000 incident testicular cancer cases occurring among residents of the US and Puerto Rico, as reported to the Surveillance, Epidemiology and End Results (SEER) program of the National Cancer Institute for the years 1973 through 1982. White men had significantly higher incidence rates than their New Mexico Hispanic, black, and Puerto Rican Hispanic counterparts with weighted risk ratios of 1.36, 4.62, and 4.80, respectively. Ethnic differences in incidence were least evident at the extremes of age. Although the distribution of histologic subtypes did not differ across ethnic strata, morphologic expression was related to age at diagnosis. There was a predominance of right-sided tumor involvement in each ethnic group in childhood (less than 15 years of age), but not in the oldest age categories or among tumors presenting in cryptorchid testes. Never married men appeared to be at greater risk of developing nonseminoma testicular cancer than their married counterparts. Analysis of ethnic secular trends, using data from comparable geographic areas, showed a consistent increase in incidence among young men for all three ethnic groups. The contrast between the rate differences and the homogeneity of descriptive parameters across ethnic strata suggest the impact of quantitative rather than qualitative differences in environmental etiology. abstract_id: PUBMED:18499177 Incidental testicular lesions found during infertility evaluation are usually benign and may be managed conservatively. Purpose: Hypoechoic lesions on scrotal ultrasonography are often considered germ cell tumors and radical orchiectomy is recommended. We retrospectively reviewed the findings at 1 center in men with ultrasonographically detected testicular lesions found during evaluation of severe male infertility. Materials And Methods: A total of 145 men with nonobstructive azoospermia at 1 center underwent ultrasonographic analysis before diagnostic or therapeutic testicular biopsy. Mean age was 34 +/- 0.6 years (range 21 to 63). All men were azoospermic. Mean serum follicle-stimulating hormone was 25 IU/l. Of the men 26% had a history of cryptorchidism and 3 patients had a history of testis tumor. No other risk factors for testis cancer were identified for any patient. All sonographic lesions were followed with serial ultrasound examinations or were biopsied/excised. All men had tumor markers tested and the results were negative. Results: Of 145 men referred for azoospermia who underwent ultrasonographic analysis before biopsy 49 (34%) showed a focal sonographic abnormality. A hypoechoic lesion was seen in 20 patients (14%), a hyperechoic lesion was seen in 10 (7%) and a heterogeneous appearance to a region of testicular parenchyma was seen in 19 patients (13%). Some lesions classified as hypoechoic demonstrated hyperechoic or heterogeneous interior components. Two of the patients with hypoechoic lesions were lost to followup. Of the remaining 18 patients 11 had lesions less than 5 mm in greatest diameter and all of these were confirmed to be benign. 
Only 1 patient had a seminoma, and that patient had an inguinal testis with a mass detected on routine ultrasound. All other patients with hyperechoic or heterogeneous areas on ultrasound with subsequent tissue diagnoses were found to have benign lesions. Conclusions: Men with severe infertility who are found to have incidental testicular lesions and negative tumor markers, especially lesions less than 5 mm, may be initially observed with serial scrotal ultrasound examinations. Enlarging lesions or those of greater dimension should be considered for histological examination. abstract_id: PUBMED:15965655 Cryptorchidism: an indicator of testicular dysgenesis? Cryptorchidism is a common ailment of new-born boys, affecting 1-9% of full term boys at birth. Cryptorchidism has been associated with an increased risk of testicular cancer and reduced fertility. Aetiology of cryptorchidism remains obscure in most cases. Familial occurrence suggests a heritable susceptibility to cryptorchidism; however, seasonal variation in the incidence of cryptorchidism suggests that environmental factors also contribute. Testicular descent is characterised by androgen-dependent regression of cranial suspensory ligament and androgen + insulin-like hormone 3 (Insl3)-dependent gubernacular outgrowth. Even though hormonal defects are rarely detected in patients, both hypo- and hypergonadotropic hormonal patterns have been associated with cryptorchidism. Moreover, cryptorchid boys have significantly reduced serum androgen bioactivity at 3 months of age when normal boys have a strong surge of reproductive hormones. Defects in Insl3 action cause cryptorchidism in male mice, and over-expression in female mice causes ovarian descent. Defects in leucine-rich repeat-containing G-protein-coupled receptor 8/G-protein-coupled receptor affecting testis descent (LGR8/GREAT), the receptor for Insl3, manifest the same phenotype as Insl3 knockout mutants. Even though mutations found in Insl3 and LGR8/GREAT genes are not a common cause of cryptorchidism in patients, it remains to be resolved whether low Insl3 levels during development are associated with cryptorchidism. Cryptorchidism may reflect foetal testicular dysgenesis that may later manifest as subfertility or testicular cancer. abstract_id: PUBMED:16466550 Ethnic differences in occurrence of TDS - genetics and/or environment? It has been hypothesized that poor semen quality, testis cancer, undescended testis and hypospadias are symptoms of one underlying entity, the so-called testicular dysgenesis syndrome (TDS). TDS was suggested to result from disruption of embryonal programming and gonadal development during foetal life; as an aetiological factor, an impact of adverse environmental factors such as hormone disrupters, probably acting upon a susceptible genetic background, was proposed. Extensive studies considering the risk of TDS-related diseases in Denmark compared with Finland showed higher sperm counts and lower risk of cryptorchidism and testicular cancer among Finns. However, when comparing these two populations, the question arises to what degree this difference might be due to a discrepancy in genetic background. A more obvious example of the impact of genetic factors on the risk of TDS concerns Afro-Americans having significantly lower incidence of testicular cancer when compared with Caucasians living in the USA.
A yet unexplored scenario is a possible interaction between genetic and environmental/lifestyle-related factors, certain genotypes making individuals more susceptible to adverse exogenous exposures. Studying such interactions has biological, epidemiological and public health-related implications. It will help us to understand the background for the defects in male reproductive organs, facilitate proper design of epidemiological studies and add to identifying individuals susceptible to certain environmental and lifestyle-related hazards. Such 'susceptibility genes' need to be identified, those involved in the synthesis, action and metabolism of sex steroids being strong candidates. Answer: Yes, ethnic patterns in cryptorchidism do reflect those found in testicular cancer, at least in the context of New Zealand. A study conducted in New Zealand found that the incidence of cryptorchidism closely mirrored the previously observed patterns for testicular cancer. Specifically, Māori males had higher rates of cryptorchidism than all other ethnic groups, with Pacific and Asian groups having the lowest rates when compared to Europeans/others. This suggests that the ethnic patterning of testicular cancer may be partly due to prenatal risk factors, as the principal risk factors for cryptorchidism are present in utero (PUBMED:23665269). However, it is important to note that while cryptorchidism and testicular cancer may share common prenatal causes, this does not necessarily extend to other related conditions such as hypospadias. In New Zealand, European/Other children had the highest risk of hypospadias, which contrasts with the observations for cryptorchidism and testicular cancer where Māori males have the greatest risk. This indicates that the exposures driving the development of hypospadias may differ from those driving the development of cryptorchidism and/or testicular cancer (PUBMED:26566182). The relationship between cryptorchidism and testicular cancer is further supported by the fact that cryptorchidism is a known risk factor for testicular cancer. For instance, bilateral cryptorchidism is a rare occurrence, and seminoma, a type of testicular cancer, is the most common germ cell tumor found in undescended testes (PUBMED:26692683). Additionally, cryptorchidism has been associated with an increased risk of testicular cancer and reduced fertility, and it may reflect fetal testicular dysgenesis that could later manifest as subfertility or testicular cancer (PUBMED:15965655). Ethnic differences in the occurrence of testicular dysgenesis syndrome (TDS) related diseases, which include cryptorchidism and testicular cancer, have been observed, with genetic factors potentially playing a role in the risk of TDS. For example, Afro-Americans have a significantly lower incidence of testicular cancer compared to Caucasians living in the USA, suggesting that genetic factors may influence the risk of TDS-related diseases (PUBMED:16466550).
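For readers unfamiliar with the effect measures quoted above, the adjusted relative risks (e.g., RR 1.2, 95% CI 1.11-1.3 for Māori vs European/other) come from Poisson regression with covariate adjustment. The minimal Python sketch below shows only how a crude (unadjusted) relative risk and its 95% confidence interval are computed from cohort counts; the counts are hypothetical, chosen to illustrate the arithmetic, so the interval will not match the adjusted one reported in the paper.

import math

# Hypothetical cohort counts (not the studies' data): cryptorchidism cases / live births.
cases_a, n_a = 240, 20000   # comparison group
cases_b, n_b = 800, 80000   # reference group

rr = (cases_a / n_a) / (cases_b / n_b)                        # crude relative risk
se_log_rr = math.sqrt(1/cases_a - 1/n_a + 1/cases_b - 1/n_b)  # standard error of log(RR)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")             # RR = 1.20 with these counts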
Instruction: Pancreatic injury in damage control laparotomies: Is pancreatic resection safe during the initial laparotomy? Abstracts: abstract_id: PUBMED:19054513 Pancreatic injury in damage control laparotomies: Is pancreatic resection safe during the initial laparotomy? Objectives: While damage control (DC) techniques such as the rapid control of exsanguinating haemorrhage and gastrointestinal contamination have improved survival in severely injured patients, the optimal pancreatic injury management strategy in these critically injured patients requiring DC is uncertain. We sought to characterise pancreatic injury patterns and outcomes to better determine optimal initial operative management in the DC population. Materials And Methods: A two-centre, retrospective review of all patients who sustained pancreatic injury requiring DC in two urban trauma centres during 1997-2004 revealed 42 patients. Demographics and clinical characteristics were analysed. Study groups based on operative management (pack+/-drain vs. resection) were compared with respect to clinical characteristics and hospital outcomes. Results: The 42 patients analysed were primarily young (32.8+/-16.2 years) males (38/42, 90.5%) who suffered penetrating (30/42, 71.5%) injuries of the pancreas and other abdominal organs (41/42, 97.6%). Of the 12 patients who underwent an initial pancreatic resection (11 distal pancreatectomies, 1 pancreaticoduodenectomy), all distal pancreatectomies were performed in their entirety during the initial laparotomy while pancreaticoduodenectomy reconstruction was delayed until subsequent laparotomy. Comparing the pack+/-drain and resection groups, no difference in mechanism, vascular injury, shock, ISS, or complications was revealed. Mortality was substantial (packing only, 70%; packing with drainage, 25%; distal pancreatectomy, 55%; pancreaticoduodenectomy, 0%) in the study population. Conclusions: The presence of shock or major vascular injury dictates the extent of pancreatic operative intervention. While pancreatic resection may be required in selected damage control patients, packing with pancreatic drainage effectively controls both haemorrhage and abdominal contamination in patients with life-threatening physiological parameters and may lead to improved survival. Increased mortality rates in patients who were packed without drainage suggest that packing without drainage is ineffective and should be abandoned. abstract_id: PUBMED:26038043 Damage control laparotomy and delayed pancreatoduodenectomy for complex combined pancreatoduodenal and venous injuries. Background: This single-centre study evaluated the efficacy of damage control surgery and delayed pancreatoduodenectomy and reconstruction in patients who had combined severe pancreatic head and visceral venous injuries. Methods: Prospectively recorded data of patients who underwent an initial damage control laparotomy and a subsequent pancreatoduodenectomy for severe pancreatic injuries were evaluated to assess optimal operative sequencing. Results: During the 20-year study period, 312 patients were treated for pancreatic injuries of whom 14 underwent a pancreatoduodenectomy. Six (five men, one woman, median age 20, range 16-39 years) of the 14 patients were in extremis with exsanguinating venous bleeding and non-reconstructable AAST grade 5 pancreatoduodenal injuries and underwent a damage control laparotomy followed by delayed pancreatoduodenectomy and reconstruction when stable.
Blood loss during the initial DCS was greater than at the subsequent laparotomy and definitive procedure (5456 ml, range 2318-7665 ml vs 1250 ml, range 850-3600 ml; p < 0.01). The mean total fluid administered in the operating room was 11,150 ml, range 8450-13,320 vs 6850 ml, range 3350-9020 ml (p < 0.01). The mean operating room time was 113 min, range 90-140 vs 335 min, range 260-395 min (p < 0.01). During the second laparotomy, five patients had a pylorus-preserving pancreatoduodenectomy and one a standard Whipple resection. Four of the six patients survived. Two patients died in hospital, one of multi-organ failure (MOF) and coagulopathy and the other of intra-abdominal sepsis and multi-organ failure. Median duration of intensive care was 6 days (range 1-20 days) and median duration of hospital stay was 29 days (range 1-94 days). Conclusion: Damage control laparotomy and delayed secondary pancreatoduodenectomy is a life-saving procedure in the small cohort of patients who have dire pancreatic and vascular injuries. When used appropriately, the staged resection and reconstruction allows survival in a previously unsalvageable group of patients who have severe physiological derangement. abstract_id: PUBMED:10955067 Management of pancreatic trauma in damage-control laparotomy Major pancreatic trauma challenges the trauma surgeon with diagnostic problems and choices of treatment modalities. The most important determinant guiding management is the integrity of the main pancreatic duct. The preoperative and intraoperative assessment and treatment of pancreatic injury may be difficult, especially when concurrent severe injuries are present. There are alternative approaches in the management of the traumatized pancreas when ductal injury is not obvious during initial exploratory laparotomy. abstract_id: PUBMED:28323788 Traumatic colon injury in damage control laparotomy-A multicenter trial: Is it safe to do a delayed anastomosis? Background: Delayed colonic anastomosis after damage control laparotomy (DCL) is an alternative to colostomies during a single laparotomy (SL) in high-risk patients. However, literature suggests increased colonic leak rates up to 27% with DCL, and various reported risk factors. We evaluated our regional experience to determine if delayed colonic anastomosis was associated with worse outcomes. Methods: A multicenter retrospective cohort study was performed across three Level I trauma centers encompassing traumatic colon injuries from January 2006 through June 2014. Patients with rectal injuries or mortality within 24 hours were excluded. Patient and injury characteristics, complications, and interventions were compared between SL and DCL groups. Regional readmission data were utilized to capture complications within 6 months of index trauma. Results: Of 267 patients, 69% had penetrating injuries, 21% underwent DCL, and the mortality rate was 4.9%. Overall, 176 received primary repair (26 in DCL), 90 had resection and anastomosis (28 in DCL), and 26 had a stoma created (10 end colostomies and 2 loop ileostomies in DCL). Thirty-five of 56 DCL patients had definitive colonic repair subsequent to their index operation. DCL patients were more likely to be hypotensive; require more resuscitation; and suffer acute kidney injury, pneumonia, adult respiratory distress syndrome, and death. Five enteric leaks (1.9%) and three enterocutaneous fistulas (ECF, 1.1%) were identified, proportionately distributed between DCL and SL (p = 1.00, p = 0.51).
No difference was seen in intraperitoneal abscesses (p = 0.13) or surgical site infections (SSI, p = 0.70) between cohorts. Among SL patients, pancreas injuries portended an increased risk of intraperitoneal abscesses (p = 0.0002), as did liver injuries in DCL patients (p = 0.06). Conclusions: DCL was not associated with increased enteric leaks, ECF, SSI, or intraperitoneal abscesses despite nearly two-thirds having delayed repair. Despite this being a multicenter study, it is underpowered, and a prospective trial would better demonstrate risks of DCL in colon trauma. Level Of Evidence: Therapeutic study, level IV. abstract_id: PUBMED:37369739 Treatment of pancreatic injuries after blunt abdominal trauma Pancreatic injuries resulting from blunt abdominal trauma are uncommon but carry a high risk of morbidity and mortality for patients. Prompt diagnosis and management are critical to optimize patient outcomes. This review article provides an overview of the different types of pancreatic injuries and the various management strategies available, based on the severity of the injury. In unstable patients with a positive focused assessment with sonography for trauma (FAST), immediate trauma laparotomy is required. Stable patients should be assessed with contrast-enhanced computed tomography (CT) imaging. Low-grade injuries can be managed with irrigation and drainage. In cases of left-sided ductal injury below the level of the portal vein, left-sided pancreatic resection is often necessary. Higher grade injuries to the pancreatic head need to be evaluated in the context of other accompanying injuries, where damage control may be required. Pancreaticoduodenectomy is a rare intervention and is usually only required in the later course in these cases. abstract_id: PUBMED:28602224 General surgeon management of complex hepatopancreatobiliary trauma at a level I trauma center. Background: The impact of general surgeons (GS) taking trauma call on patient outcomes has been debated. Complex hepatopancreatobiliary (HPB) injuries present a particular challenge and often require specialized care. We predicted no difference in the initial management or outcomes of complex HPB trauma between GS and trauma/critical care (TCC) specialists. Materials And Methods: A retrospective review of patients who underwent operative intervention for complex HPB trauma from 2008 to 2015 at an ACS-verified level I trauma center was performed. Chart review was used to obtain variables pertaining to demographics, clinical presentation, operative management, and outcomes. Patients were grouped according to whether their index operation was performed by a GS or TCC provider and compared. Results: 180 patients met inclusion criteria. The GS (n = 43) and TCC (n = 137) cohorts had comparable patient demographics and clinical presentations. Most injuries were hepatic (73.3% GS versus 72.6% TCC) and TCC treated more pancreas injuries (15.3% versus GS 13.3%; P = 0.914). No significant differences were found in HPB-directed interventions at the initial operation (41.9% GS versus 56.2% TCC; P = 0.100), damage control laparotomy with temporary abdominal closure (69.8% versus 69.3%; P = 0.861), LOS, septic complications or 30-day mortality (13.9% versus 10.2%; P = 0.497). TCC were more likely to place an intraabdominal drain than GS (52.6% versus 34.9%; P = 0.043). Conclusions: We found no significant differences between GS and TCC specialists in initial operative management or clinical outcomes of complex HPB trauma. 
The frequent and proper use of damage control laparotomy likely contributes to these findings. abstract_id: PUBMED:28871350 Approach to liver, spleen and pancreatic injuries including damage control surgery of terrorist attacks Background: Terrorist attacks have reached Europe, with more and more attacks on civilians. Derived from war surgery experience and from lessons learned from major incidents, it seems mandatory for every surgeon to improve understanding of the special circumstances of trauma following a terrorist attack and its management. Method: A short literature review is followed by outlining the damage control surgery (DCS) principle for each organ system with practical comments from the perspective of a specialized hepatobiliary (HPB) surgery unit. Conclusion: Every surgeon has to become familiar with the new entities of blast injuries and terrorist attack trauma. This concerns not only the medical treatment but also tailoring surgical treatment with a view to a lack of critical resources under these circumstances. For liver and pancreatic trauma, simple treatment strategies are a key to success. abstract_id: PUBMED:33795904 Pancreatic damage control: the pancreas is simple don't complicate it. Pancreatic trauma is a rare but potentially lethal injury because it is often associated with other abdominal organ or vascular injuries. Usually, it has a late clinical presentation which in turn complicates the management and overall prognosis. Due to the overall low prevalence of pancreatic injuries, there has been a significant lack of consensus among trauma surgeons worldwide on how to appropriately and efficiently diagnose and manage them. The accurate diagnosis of these injuries is difficult due to the organ's anatomical location and the fact that signs of pancreatic damage are usually of delayed presentation. The current surgical trend has been moving towards organ preservation in order to avoid complications secondary to exocrine and endocrine function loss and/or potential implicit post-operative complications including leaks and fistulas. The aim of this paper is to propose a management algorithm of patients with pancreatic injuries via an expert consensus. Most pancreatic injuries can be managed with a combination of hemostatic maneuvers, pancreatic packing, parenchymal wound suturing and closed surgical drainage. Distal pancreatectomies with the inevitable loss of significant amounts of healthy pancreatic tissue must be avoided. General principles of damage control surgery must be applied when necessary followed by definitive surgical management when and only when appropriate physiological stabilization has been achieved. It is our experience that viable un-injured pancreatic tissue should be left alone when possible in all types of pancreatic injuries accompanied by adequate closed surgical drainage with the aim of preserving primary organ function and decreasing short and long term morbidity. abstract_id: PUBMED:33863629 A case-control study of risk factors for survival after laparotomy in patients with pancreatic trauma. Background: Pancreatic trauma results in significant morbidity and mortality. However, few studies have investigated the postoperative prognostic factors in patients with pancreatic trauma. Material And Methods: A retrospective study was conducted on consecutive patients with pancreatic trauma who underwent surgery in a national referral trauma center. Clinical data were retrieved from the electronic medical system.
Univariate and binary logistic regression analyses were performed to identify perioperative clinical parameters that may predict mortality. Results: A total of 150 patients underwent laparotomy due to pancreatic trauma during the study period. 128 (85.4%) patients survived and 22 (14.6%) patients died due to pancreatic injury (10 patients died of recurrent intra-abdominal active hemorrhage and 12 died of multiple organ failure). Univariate analysis showed that age, hemodynamic status, and Injury Severity Score (ISS) as well as postoperative serum levels of C-reactive protein (CRP), procalcitonin, albumin, creatinine and the volume of intraoperative blood transfusion remained strongly predictive of mortality (P < 0.05). Binary logistic regression analysis showed that the independent risk factors for prognosis after pancreatic trauma were age (P = 0.010), preoperative hemodynamic instability (P = 0.015), postoperative CRP ≥154 mg/L (P = 0.014), and postoperative serum creatinine ≥177 μmol/L (P = 0.027). Conclusions: In this single-center retrospective study, we demonstrated that preoperative hemodynamic instability, severe postoperative inflammation (CRP ≥154 mg/L) and acute renal failure (creatinine ≥177 μmol/L) were associated with a significant risk of mortality after pancreatic trauma. abstract_id: PUBMED:15784225 Traumatic pancreatic injuries If an emergency laparotomy is necessary, a damage control laparotomy may be useful. If hemodynamic stability is achieved during the laparotomy, the severity depends on the existence of a ductal injury and an associated duodenal lesion. Surgical indications and techniques are described in these different cases. If no laparotomy is performed, the location and type of injury are assessed by CT scan, magnetic resonance cholangiopancreatography or ERCP. Injury of the pancreatic duct is the main determinant of prognosis and treatment indications. Non-operative treatment in the case of ductal injury remains controversial. Answer: The safety and outcomes of pancreatic resection during the initial laparotomy in the context of damage control (DC) techniques for severely injured patients have been explored in several studies. According to a retrospective review of patients who sustained pancreatic injury requiring DC, pancreatic resection may be required in selected damage control patients. However, packing with pancreatic drainage effectively controls both hemorrhage and abdominal contamination in patients with life-threatening physiological parameters and may lead to improved survival. The study found that increased mortality rates in patients who were packed without drainage suggest that this approach is ineffective and should be abandoned (PUBMED:19054513). Another study evaluated the efficacy of damage control surgery and delayed pancreatoduodenectomy in patients with severe pancreatic head and visceral venous injuries. The study concluded that damage control laparotomy followed by delayed pancreatoduodenectomy is a life-saving procedure in a small cohort of patients with dire pancreatic and vascular injuries. When used appropriately, the staged resection and reconstruction allow survival in a previously unsalvageable group of patients who have severe physiological derangement (PUBMED:26038043). In general, the management of pancreatic trauma in damage control laparotomy is challenging, and the integrity of the main pancreatic duct is a critical determinant guiding management.
The preoperative and intraoperative assessment and treatment of pancreatic injury may be difficult, especially when concurrent severe injuries are present (PUBMED:10955067). The literature suggests that pancreatic injuries resulting from blunt abdominal trauma are uncommon but carry a high risk of morbidity and mortality. Prompt diagnosis and management are critical to optimize patient outcomes. In unstable patients, immediate trauma laparotomy is required, while stable patients should be assessed with contrast-enhanced computed tomography (CT) imaging. Low-grade injuries can be managed with irrigation and drainage, while higher-grade injuries to the pancreatic head may require damage control and potentially pancreaticoduodenectomy in the later course (PUBMED:37369739). In summary, pancreatic resection during the initial laparotomy can be safe and necessary in selected patients undergoing damage control laparotomy. However, the decision to perform resection should be based on the patient's physiological status, the extent of the injury, and the presence of other abdominal organ or vascular injuries. Packing with drainage is an effective alternative in managing hemorrhage and contamination, especially in patients with life-threatening conditions.
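The risk-factor analysis in PUBMED:33863629 (univariate screening followed by binary logistic regression) is a common design, and a minimal sketch may help readers see what "independent risk factor" means operationally. The Python example below fits a logistic model to synthetic data; the variable names echo the study's predictors, but the data, coefficients, and resulting odds ratios are invented for illustration only.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 150  # same order of magnitude as the study's cohort
age = rng.normal(45, 15, n)
unstable = rng.binomial(1, 0.3, n)   # preoperative hemodynamic instability (0/1)
crp_high = rng.binomial(1, 0.4, n)   # postoperative CRP >= 154 mg/L (0/1)
# Assumed "true" model, used only to simulate outcomes:
lin = -4 + 0.03 * age + 1.2 * unstable + 1.0 * crp_high
death = rng.binomial(1, 1 / (1 + np.exp(-lin)))

X = sm.add_constant(np.column_stack([age, unstable, crp_high]))
res = sm.Logit(death, X).fit(disp=0)
odds_ratios = np.exp(res.params)     # exponentiated coefficients = odds ratios
conf = np.exp(res.conf_int())        # 95% CIs on the odds-ratio scale
for name, or_, (lo, hi) in zip(["intercept", "age", "unstable", "crp_high"], odds_ratios, conf):
    print(f"{name}: OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")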
Instruction: Do Doppler studies enhance surveillance of uncomplicated monochorionic diamniotic twins? Abstracts: abstract_id: PUBMED:25792571 Do Doppler studies enhance surveillance of uncomplicated monochorionic diamniotic twins? Objectives: To determine whether isolated abnormal Doppler indices before 28 weeks predict adverse pregnancy outcomes in uncomplicated monochorionic diamniotic (MCDA) twins. Methods: A retrospective cohort study of MCDA twin pregnancies receiving antenatal testing at a single center between 2007 and 2013 was conducted. Sonographic surveillance, including Doppler velocimetric studies of the umbilical artery, ductus venosus, and middle cerebral artery of each twin, was initiated by 28 weeks and repeated at least every 2 weeks. All pregnancies were deemed "uncomplicated" at initial sonography, without evidence of polyhydramnios, oligohydramnios, intrauterine growth restriction, twin growth discordance of at least 20%, structural or chromosomal anomalies, or unclear chorionicity. Pregnancies were divided into 2 groups: those with isolated Doppler abnormalities before 28 weeks and those with normal Doppler indices. The primary outcome was a composite including twin-twin transfusion syndrome, intrauterine growth restriction of more than 1 twin, growth discordance of at least 20%, preterm delivery before 34 weeks for fetal indications, or demise of more than 1 fetus. Results: Ninety-six patients were included, with 22 (22.9%) having isolated Doppler abnormalities before 28 weeks. The incidence of the primary outcome did not differ between groups (36.4% versus 28.4%; P = .47). The abnormal Doppler group underwent a greater number of sonographic examinations (15 versus 10; P= .001) and more antenatal admissions for fetal concerns (50.0% versus 12.2%; P < .001). Conclusions: Isolated Doppler abnormalities are commonly encountered in uncomplicated MCDA pregnancies before 28 weeks yet are not clearly predictive of twin-specific complications. Doppler abnormalities were associated with increased sonographic surveillance and antenatal hospitalizations, suggesting an influence on physician practice patterns. Data may not support Doppler studies before 28 weeks for routine MCDA twin monitoring. abstract_id: PUBMED:33345886 Renal artery Doppler studies in the assessment of monochorionic, diamniotic twin pregnancies with and without twin-twin transfusion syndrome. Background: In monochorionic, diamniotic twin pregnancies complicated by twin-twin transfusion syndrome, imbalanced fetofetal blood shunting and discordant activation of the renin-angiotensin system result in a hypovolemic, oliguric donor twin and a hypervolemic, polyuric recipient twin. Renal artery Doppler assessments have been reported to predict oligohydramnios in singleton pregnancies; however, their value in assessing monochorionic, diamniotic twin gestations, including pregnancies complicated by twin-twin transfusion syndrome, remains unstudied. Objective: This investigation evaluated whether renal artery Doppler parameters are different between sets of monochorionic, diamniotic twins with and without twin-twin transfusion syndrome. Study Design: We conducted a prospective study of women with monochorionic, diamniotic twin pregnancies complicated by twin-twin transfusion syndrome and women with gestational-age matched monochorionic, diamniotic twin control pregnancies without twin-twin transfusion syndrome. 
For each twin, renal artery Doppler assessment was performed from a coronal view of the fetus, and pulsatility index, resistive index, and peak systolic velocity were measured. In twin pairs complicated by twin-twin transfusion syndrome, renal artery Doppler pulsatility index, resistive index, and peak systolic velocity intertwin ratios were calculated by comparing paired recipient-to-donor values. Similar ratios were calculated for control twin pairs. We compared each measurement between donors and controls and between recipients and controls. Intertwin ratios were compared between twin-twin transfusion syndrome cases and monochorionic, diamniotic controls. Our primary outcome was a comparison of renal artery Doppler pulsatility index measurement between donor fetuses in twin-twin transfusion syndrome cases and matched control twins. Secondary outcomes included comparisons of other renal artery Doppler parameters between twin-twin transfusion syndrome donor twins and controls and between twin-twin transfusion syndrome recipients and controls and comparisons of the renal artery Doppler parameter ratios between twin-twin transfusion syndrome and non-twin-twin transfusion syndrome twin pair groups. Finally, we performed a subgroup analysis involving pregnancies with twin-twin transfusion syndrome that underwent fetoscopic laser therapy and resulted in twin survivors to evaluate if laser therapy influenced renal artery Doppler findings. Results: We recruited 12 donors and 12 controls. Renal artery Doppler pulsatility index and resistive index did not differ between donors and controls. However, donor renal artery Doppler peak systolic velocity (median, 14.24 [11.51-15.36]) was significantly lower than that of controls (median, 24.79 [22.46-27.03]; P<.0004). When recipient twins were compared with controls, there was no difference in renal artery Doppler pulsatility index, renal artery Doppler resistive index, or renal artery Doppler peak systolic velocity. Renal artery Doppler peak systolic velocity ratios were significantly higher among cases with twin-twin transfusion syndrome (median, 1.70 [1.53-2.02]) than controls (median, 1.06 [1.03-1.15]; P=.002). Applying a renal artery Doppler peak systolic velocity ratio cutoff of >1.35 for the diagnosis of twin-twin transfusion syndrome correctly classified 11 of 12 cases with twin-twin transfusion syndrome and all 12 controls without twin-twin transfusion syndrome (92% sensitivity and 100% specificity). Among donors with twin-twin transfusion syndrome with double survivors after undergoing laser therapy (n=4), prelaser renal artery Doppler peak systolic velocity ratios (mean, 1.46; standard deviation, 0.37) were significantly higher than postlaser renal artery Doppler peak systolic velocity ratios (mean, 0.87; standard deviation=0.37; P=.02). Postlaser intertwin renal artery Doppler peak systolic velocity ratios did not differ from control twin pairs (P=.125). Conclusion: Renal artery Doppler peak systolic velocity ratios differ between monochorionic, diamniotic twins with twin-twin transfusion syndrome and those without twin-twin transfusion syndrome. This effect seems to be driven by altered donor twin renal artery Doppler peak systolic velocity. Laser therapy may normalize renal artery Doppler peak systolic velocity ratios and donor twin renal artery Doppler values to those observed in monochorionic, diamniotic twins without twin-twin transfusion syndrome. 
Further study is warranted to explore whether renal artery Doppler peak systolic velocity has utility as a tool for twin-twin transfusion syndrome prediction, diagnosis, or staging, and for assessing fetal response to laser therapy. abstract_id: PUBMED:34649016 Monochorionic diamniotic twin pregnancies. About 1 in 5 twin pregnancies are monochorionic diamniotic. In these twins, transfusion imbalances may develop because of the vascular anastomoses that are invariably present. The most severe imbalance is twin-twin transfusion syndrome, which presents as a significant amniotic fluid difference diagnosed on ultrasound scan, usually before 26 weeks. Another transfusion imbalance is twin anemia polycythemia sequence, which is a severe hemoglobin difference occurring mainly after 26 weeks in previously uncomplicated pregnancies. Fetoscopic laser coagulation of the anastomoses addresses the cause of these transfusion imbalances. Besides transfusion imbalances, the monochorionic placenta may be unevenly divided, or one part may be dysfunctional, leading to an intertwin growth difference of more than 20%. Here, laser ablation of the anastomoses will usually not cure the problem, but it aims to protect the appropriately growing twin if the growth-restricted cotwin dies. In monochorionic twins, structural anomalies are twice as common as in singleton pregnancies, probably because of a teratogenic effect of the embryonic splitting or because of transfusion imbalances. In contrast, chromosomal anomalies seem less common, possibly because they more likely result in an early demise. Selective reduction is feasible in monochorionic pregnancies but carries higher risks of loss, miscarriage, and early birth than selective reduction in a dichorionic setting. Finally, if one monochorionic twin dies spontaneously, its cotwin may acutely exsanguinate into the demised cotwin, which may result in the loss of both twins or brain damage in the surviving twin. This update addresses the pathophysiology, diagnosis, and management of these common complications in monochorionic diamniotic twin pregnancies. abstract_id: PUBMED:28579783 Developmental and behavioral outcomes of uncomplicated monochorionic diamniotic twins born in the third trimester. Background: Relatively little is known about the neurodevelopmental and behavioral outcomes of monochorionic diamniotic (MCDA) twin pregnancies where there are no antenatal complications peculiar to monochorionicity or prematurity. Methods: Twenty-two MCDA twins (44 children) with an average age of 4.3 years, and with no antenatal complications detected by 28 weeks of gestation, were recruited from a feto-maternal unit database. Parents completed a battery of neurodevelopmental and behavioral assessment questionnaires. Results: Eighteen children (41%) were identified as having developmental or behavioral concerns, predominantly of mild severity, which in turn were associated with a lower birth weight of medium effect size (Cohen's d=0.59). Conclusion: MCDA twins delivered in the third trimester with no antenatal monochorionic complications in the first two trimesters appear to be at risk for subtle neurodevelopmental difficulties, associated with a lower birth weight. Ongoing developmental surveillance of these children during preschool age is indicated for early identification and intervention. abstract_id: PUBMED:28191153 Spontaneous septostomy in monochorionic diamniotic twins resulting in cord entanglement and fetal demise.
Monoamniotic twins are known to be at risk of cord entanglement, and have an elevated perinatal morbidity and mortality associated with this. We present a case of cord entanglement in a monochorionic diamniotic (MCDA) twin pregnancy thought to be due to spontaneous septostomy. Unfortunately, the cord entanglement was detected postpartum, and ultimately resulted in the demise of one of the twins. Literature review reveals this as a recognised, albeit rare, phenomenon. abstract_id: PUBMED:34847869 Reference values for fetal Doppler-based cardiocirculatory indices in monochorionic-diamniotic twin pregnancy. Background: Placental anastomoses in monochorionic diamniotic (MCDA) twin pregnancy have a major impact on fetal circulation. This study was designed to define reference ranges of cardiac and vascular Doppler indices in MCDA twin pregnancies. Methods: This cross-sectional study included 442 uncomplicated MCDA twin fetuses undergoing Doppler ultrasonography at 18-35 weeks of gestation. Left and right myocardial performance index (LV-MPI, RV-MPI), E/A ratio of atrioventricular valves, pulsatility indices of umbilical artery, middle cerebral artery (MCA), and ductus venosus (DV), cerebroplacental ratio, peak systolic velocity of MCA, S/a ratio of DV, and early diastolic filling time of ductus venosus (DV-E) were evaluated under standardized settings. Equation models relating Doppler indices to gestational age (GA) were fitted. After adjustment for GA, the correlations between MPI and fetal heart rate (FHR), and between MPI and DV indices were analyzed. Results: Estimated centiles of Doppler indices were derived as a function of GA, being distinct in values from those of singletons. There was no correlation between GA-adjusted MPI and FHR. DV-E was inversely related to LV-MPI. Conclusions: MCDA twins showed significant changes in some Doppler indices throughout gestation with quantitative differences from singletons, emphasizing the importance of MC twin-specific reference values for clinical application. Further adjustment of MPI for FHR was unnecessary. DV-E is a vascular index indirectly representing fetal diastolic function. abstract_id: PUBMED:33182282 Uncomplicated Monochorionic Twins: Two Normal Hearts Sharing One Placenta. Cardiovascular dysfunction has been reported in complicated monochorionic diamniotic (MCDA) pregnancies; however, little is known about whether hemodynamic changes occur in uncomplicated MCDA twins. A prospective observational study was conducted including 100 uncomplicated MCDA twins matched by gestational age to 200 low-risk singletons. Echocardiography was performed at 26-30 weeks gestation and cord blood B-type natriuretic peptide (BNP) was measured at delivery. In both groups, z-scores for echocardiographic parameters were within normal ranges; however, the monochorionic group had larger atrial areas (mean (standard deviation) right atria-to-heart ratio: 17.0 (2) vs. 15.9 (1); p = 0.018; left atria-to-heart ratio: 17.0 (3) vs. 15.8 (2); p < 0.001) and signs of concentric hypertrophy (right relative wall thickness: 0.66 (0.12) vs. 0.56 (0.11); p < 0.001; left relative wall thickness: 0.69 (0.14) vs. 0.58 (0.12); p < 0.001). Longitudinal function was increased in twins, leading to higher tricuspid annular plane systolic excursion (6.9 mm (0.9) vs. 5.9 mm (0.7); p < 0.001) and mitral annular plane systolic excursion (4.9 mm (0.8) vs. 4.4 mm (1.1); p < 0.001). BNP levels at birth were also higher in MCDA twins (median [interquartile range]: 20.81 pg/mL [16.69-34.01] vs.
13.14 pg/mL [9.17-19.84]; p < 0.001). Thus, uncomplicated MCDA fetuses have normal cardiac shape and function, but signs of cardiac adaptation were identified by echocardiographic and biochemical parameters, when compared with singletons. abstract_id: PUBMED:33000485 Spontaneous septostomy in a monochorionic-diamniotic twin gestation. Monochorionic twin gestations are associated with a greater incidence of neonatal morbidity and mortality when compared with their dichorionic counterparts. In turn, monochorionic-monoamniotic (MCMA) gestations carry greater risks compared with monochorionic-diamniotic (MCDA) gestations. While the true incidence of spontaneous septostomy of the dividing membranes (SSDM) in MCDA twins is unknown, SSDM has been demonstrated to be associated with increased morbidity and mortality, due to functional transition from an MCDA gestation to an MCMA gestation. We report a case of SSDM in a mid-trimester MCDA gestation, review the literature, and describe how to identify and manage this complication. abstract_id: PUBMED:27257402 Perinatal Outcomes of Monochorionic-Diamniotic Twin Pregnancies Uncomplicated at 28 Weeks of Gestation. We examined the prevalence of specific perinatal complications of monochorionic-diamniotic twin pregnancies in cases without any abnormal findings until the second trimester of pregnancy. This was a retrospective cohort study performed at a tertiary perinatal center in Tokyo, Japan. There were 88 cases of uncomplicated monochorionic-diamniotic twin pregnancies at 28 weeks of gestation. In five of them (5.7%), there were serious complications associated with placental circulatory imbalance between the twins during the third trimester of pregnancy. Two cases were complicated by twin-twin transfusion syndrome, two cases were complicated by twin anemia-polycythemia sequence, and one case was complicated by acute twin-twin transfusion syndrome. In the five cases, no abnormal ultrasonographic findings or symptoms were recognized one or two weeks prior to the diagnosis. Fifty-eight cases (65.9%) were delivered at term uneventfully. Serious complications due to placental circulatory imbalance between twins occurred in about 6% of cases during the third trimester of pregnancy. abstract_id: PUBMED:32576252 Repeated triplets complicated by monochorionic diamniotic twins following assisted reproduction: a case report and literature review. Background: Monochorionic twinning involves numerous maternal and fetal complications; triplets complicated by a monochorionic pair are at further increased risk. Here, we report a case of repeated triplets complicated by monochorionic diamniotic twins with successful pregnancy outcomes in a woman using autologous oocytes. Case Presentation: A 30-year-old female underwent embryo transfer in both fresh and frozen embryo cycles with autologous oocytes. The two cycles were confirmed by transvaginal ultrasound to result in successful clinical pregnancies of triplets complicated by monochorionic twinning. The first pregnancy resulted in a singleton delivery after a selective reduction of the monochorionic pair. The subsequent pregnancy resulted in a dichorionic diamniotic twin pregnancy after the heartbeat of one of the monochorionic twin fetuses stopped at 43 days after embryo transfer. Both of the pregnancies ended with successful live births.
Conclusions: Our case report of repeated triplets with monochorionic twins suggests the potential causes and risk factors of monochorionic twinning in assisted reproduction and raises concern regarding the timing of multifetal pregnancy reduction. Answer: Doppler studies are a common tool used in the surveillance of monochorionic diamniotic (MCDA) twin pregnancies due to the unique risks associated with this type of twin gestation. However, the effectiveness of Doppler studies in enhancing surveillance for uncomplicated MCDA twins is not entirely clear. A retrospective cohort study found that isolated Doppler abnormalities before 28 weeks in uncomplicated MCDA twin pregnancies were not predictive of twin-specific complications, such as twin-twin transfusion syndrome, intrauterine growth restriction, or preterm delivery. The study concluded that data may not support the use of Doppler studies before 28 weeks for routine monitoring of MCDA twin pregnancies, as these abnormalities were associated with increased sonographic surveillance and antenatal hospitalizations without a clear benefit in predicting adverse outcomes (PUBMED:25792571). In contrast, another study suggested that renal artery Doppler peak systolic velocity ratios could differentiate between MCDA twins with and without twin-twin transfusion syndrome, potentially serving as a diagnostic tool. This study found that donor twins in twin-twin transfusion syndrome cases had significantly lower peak systolic velocity compared to controls, and that laser therapy could normalize these values. However, further research is warranted to determine the utility of renal artery Doppler in predicting, diagnosing, or staging twin-twin transfusion syndrome and in assessing fetal response to laser therapy (PUBMED:33345886). Overall, while Doppler studies are commonly used in the surveillance of MCDA twins, their role in uncomplicated cases is less certain. Some Doppler indices may not be predictive of complications, while others may have potential diagnostic value, particularly in the context of twin-twin transfusion syndrome. Clinicians may need to balance the benefits of increased surveillance against the potential for unnecessary interventions and hospitalizations. Further research is needed to clarify the role of Doppler studies in the routine monitoring of uncomplicated MCDA twin pregnancies.
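The centile derivation described in PUBMED:34847869 — fitting equation models between Doppler indices and gestational age (GA) and reading off estimated centiles — can be illustrated with a short sketch. Everything below is illustrative only: the data are simulated, and the polynomial model orders are arbitrary assumptions, not the forms used in that study; the sketch merely shows the general mean-and-spread approach to building GA-specific reference ranges.

```python
# Illustrative sketch of GA-based reference centiles (simulated data; model
# forms are assumptions, not those of PUBMED:34847869).
import numpy as np

rng = np.random.default_rng(0)

# Simulated cohort: gestational age (weeks) and a generic Doppler index
ga = rng.uniform(18, 35, 442)
index = 1.2 - 0.02 * ga + rng.normal(0, 0.08, ga.size)

# Step 1: fit the mean as a function of GA (quadratic chosen arbitrarily)
mean_coef = np.polyfit(ga, index, deg=2)
fitted_mean = np.polyval(mean_coef, ga)

# Step 2: model the residual SD against GA; for a normal variable the mean
# absolute deviation equals sd * sqrt(2/pi), hence the scaling factor
abs_resid = np.abs(index - fitted_mean)
sd_coef = np.polyfit(ga, abs_resid * np.sqrt(np.pi / 2), deg=1)

def centile(ga_weeks, z):
    """z = +/-1.645 gives the 95th/5th centile under a normality assumption."""
    mu = np.polyval(mean_coef, ga_weeks)
    sd = np.polyval(sd_coef, ga_weeks)
    return mu + z * sd

for week in (20, 26, 32):
    print(week, round(centile(week, -1.645), 3), round(centile(week, 1.645), 3))
```

Modelling the spread as a function of GA, rather than assuming it constant, is what lets the centile curves widen or narrow across gestation.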
Instruction: Can Western based online prostate cancer risk calculators be used to predict prostate cancer after prostate biopsy for the Korean population? Abstracts: abstract_id: PUBMED:23549812 Can Western based online prostate cancer risk calculators be used to predict prostate cancer after prostate biopsy for the Korean population? Purpose: To assess the predictive value of the European Randomized Screening of Prostate Cancer Risk Calculator (ERSPC-RC) and the Prostate Cancer Prevention Trial Risk Calculator (PCPT-RC) in the Korean population. Materials And Methods: We retrospectively analyzed the data of 517 men who underwent transrectal ultrasound guided prostate biopsy between January 2008 and November 2010. Simple and multiple logistic regression analyses were performed to compare the results of prostate biopsy. Areas under the receiver operating characteristic curves (AUC-ROC) and calibration plots were prepared for further analysis to compare the risk calculators and other clinical variables. Results: Prostate cancer was diagnosed in 125 (24.1%) men. For prostate cancer prediction, the area under the curve (AUC) of the ERSPC-RC was 77.4%. This result was significantly greater than the AUCs of the PCPT-RC and the prostate-specific antigen (PSA) (64.5% and 64.1%, respectively, p<0.01), but not significantly different from the AUC of the PSA density (PSAD) (76.1%, p=0.540). When the results of the calibration plots were compared, the ERSPC-RC plot was more constant than that of PSAD. Conclusion: The ERSPC-RC was better than PCPT-RC and PSA in predicting prostate cancer risk in the present study. However, the difference in performance between the ERSPC-RC and PSAD was not significant. Therefore, the Western based prostate cancer risk calculators are not useful for urologists in predicting prostate cancer in the Korean population. abstract_id: PUBMED:30924451 Phi-based risk calculators performed better in the prediction of prostate cancer in the Chinese population. Risk prediction models including the Prostate Health Index (phi) for prostate cancer have been well established and evaluated in the Western population. The aim of this study is to build phi-based risk calculators in a prostate biopsy population and evaluate their performance in predicting prostate cancer (PCa) and high-grade PCa (Gleason score ≥7) in the Chinese population. We developed risk calculators based on 635 men who underwent initial prostate biopsy. Then, we validated the performance of prostate-specific antigen (PSA), phi, and the risk calculators in an additional observational cohort of 1045 men. We observed that the phi-based risk calculators (risk calculators 2 and 4) outperformed the PSA-based risk calculator for predicting PCa and high-grade PCa in the training cohort. In the validation study, the area under the receiver operating characteristic curve (AUC) for risk calculators 2 and 4 reached 0.91 for predicting PCa and 0.92 for predicting high-grade PCa; the AUC values were better than those for risk calculator 1 (PSA-based model with an AUC of 0.81 and 0.82, respectively) (all P < 0.001). Such superiority was also observed in the stratified population with PSA ranging from 2.0 ng ml-1 to 10.0 ng ml-1. Decision curves confirmed that a considerable proportion of unnecessary biopsies could be avoided by applying phi-based risk calculators.
In this study, we showed that, compared to risk calculators without phi, phi-based risk calculators exhibited superior discrimination and calibration for PCa in the Chinese biopsy population. Applying these risk calculators also considerably reduced the number of unnecessary biopsies for PCa. abstract_id: PUBMED:35813248 External Validation of the Prostate Biopsy Collaborative Group Risk Calculator and the Rotterdam Prostate Cancer Risk Calculator in a Swedish Population-based Screening Cohort. Background: External validation of risk calculators (RCs) is necessary to determine their clinical applicability beyond the setting in which these were developed. Objective: To assess the performance of the Rotterdam Prostate Cancer RC (RPCRC) and the Prostate Biopsy Collaborative Group RC (PBCG-RC). Design Setting And Participants: We used data from the prospective, population-based STHLM3 screening study, performed in 2012-2015. Participants with prostate-specific antigen ≥3 ng/ml who underwent systematic prostate biopsies were included. Outcome Measurements And Statistical Analysis: Probabilities for clinically significant prostate cancer (csPCa), defined as International Society of Urological Pathology grade ≥2, were calculated for each participant. External validity was assessed by calibration, discrimination, and clinical usefulness for both original and recalibrated models. Results And Limitations: Out of 5841 men, 1054 (18%) had csPCa. Distribution of risk predictions differed between RCs; median risks for csPCa using the RPCRC and PBCG-RC were 3.3% (interquartile range [IQR] 2.1-7.1%) and 20% (IQR 15-28%), respectively. The correlation between RC risk estimates on individual level was moderate (Spearman's r = 0.55). Using the RPCRC's recommended risk threshold of ≥4% for finding csPCa, 36% of participants would get concordant biopsy recommendations. At 10% risk cut-off, RCs agreed in 23% of cases. Both RCs showed good discrimination, with areas under the curves for the RPCRC of 0.74 (95% confidence interval [CI] 0.72-0.76) and the PBCG-RC of 0.70 (95% CI 0.68-0.72). Calibration was adequate using the PBCG-RC (calibration slope: 1.13 [95% CI 1.03-1.23]), but the RPCRC underestimated the risk of csPCa (calibration slope: 0.73 [0.68-0.79]). The PBCG-RC showed a net benefit in a decision curve analysis, whereas the RPCRC showed no net benefit at clinically relevant risk threshold levels. Recalibration improved clinical benefit, and differences between RCs decreased. Conclusions: Assessment of calibration is essential to ensure the clinical value of risk prediction tools. The PBCG-RC provided clinical benefit in its current version online. On the contrary, the RPCRC cannot be recommended in this setting. Patient Summary: Predicting the probability of finding prostate cancer on biopsy differed between two assessed risk calculators. After recalibration, the agreement of the models improved, and both were shown to be clinically useful. abstract_id: PUBMED:29594016 Head-to-head comparison of prostate cancer risk calculators predicting biopsy outcome. Background: Multivariable risk calculators (RCs) predicting prostate cancer (PCa) aim to reduce unnecessary workup (e.g., MRI and biopsy) by selectively identifying those men at risk for PCa or clinically significant PCa (csPCa) (Gleason ≥7). The lack of an adequate comparison makes choosing between RCs difficult for patients, clinicians and guideline developers. 
We aim to perform a head-to-head comparison of seven well known RCs predicting biopsy outcome. Methods: Our study comprised 7,119 men from ten independent contemporary cohorts in Europe and Australia, who underwent prostate biopsy between 2007 and 2015. We evaluated the performance of the ERSPC RPCRC, Finne, Chun, ProstataClass, Karakiewicz, Sunnybrook, and PCPT 2.0 (HG) RCs in predicting the presence of any PCa and csPCa. Performance was assessed by discrimination, calibration and net benefit analyses. Results: A total of 3,458 (48%) PCa were detected; 1,784 (25%) men had csPCa. No particular RC stood out predicting any PCa: pooled area under the ROC-curve (AUC) ranged between 0.64 and 0.72. The ERSPC RPCRC had the highest pooled AUC 0.77 (95% CI: 0.73-0.80) when predicting csPCa. Decision curve analysis (DCA) showed limited net benefit in the detection of csPCa, but that can be improved by a simple calibration step. The main limitation is the retrospective design of the study. Conclusions: No particular RC stands out when predicting biopsy outcome on the presence of any PCa. The ERSPC RPCRC is superior in identifying those men at risk for csPCa. Net benefit analyses show that a multivariate approach before further workup is advisable. abstract_id: PUBMED:28046017 Development and External Validation of the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer: Comparison with Two Western Risk Calculators in an Asian Cohort. Purpose: We developed the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer (KPCRC-HG) that predicts the probability of prostate cancer (PC) of Gleason score 7 or higher at the initial prostate biopsy in a Korean cohort (http://acl.snu.ac.kr/PCRC/RISC/). In addition, KPCRC-HG was validated and compared with internet-based Western risk calculators in a validation cohort. Materials And Methods: Using a logistic regression model, KPCRC-HG was developed based on the data from 602 previously unscreened Korean men who underwent initial prostate biopsies. Using 2,313 cases in a validation cohort, KPCRC-HG was compared with the European Randomized Study of Screening for PC Risk Calculator for high-grade cancer (ERSPCRC-HG) and the Prostate Cancer Prevention Trial Risk Calculator 2.0 for high-grade cancer (PCPTRC-HG). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC) and calibration plots. Results: PC was detected in 172 (28.6%) men, 120 (19.9%) of whom had PC of Gleason score 7 or higher. Independent predictors included prostate-specific antigen levels, digital rectal examination findings, transrectal ultrasound findings, and prostate volume. The AUC of the KPCRC-HG (0.84) was higher than that of the PCPTRC-HG (0.79, p<0.001) but not different from that of the ERSPCRC-HG (0.83) on external validation. Calibration plots also revealed better performance of KPCRC-HG and ERSPCRC-HG than that of PCPTRC-HG on external validation. At a cut-off of 5% for KPCRC-HG, 253 of the 2,313 men (11%) would not have been biopsied, and 14 of the 614 PC cases with Gleason score 7 or higher (2%) would not have been diagnosed. Conclusions: KPCRC-HG is the first web-based high-grade prostate cancer prediction model in Korea. It had higher predictive accuracy than PCPTRC-HG in a Korean population and showed similar performance with ERSPCRC-HG in a Korean population. This prediction model could help avoid unnecessary biopsy and reduce overdiagnosis and overtreatment in clinical settings. 
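The development-and-validation workflow described in PUBMED:28046017 — a logistic regression on clinical predictors, judged by AUC on a separate cohort — can be sketched in a few lines. The following is a minimal illustration on synthetic data: the predictor set mirrors the abstract's description (PSA, DRE, TRUS finding, prostate volume), but the coefficients, cohort sizes and data-generating process are invented for the example.

```python
# Minimal sketch of building and scoring a biopsy risk calculator on
# synthetic data (predictor names follow PUBMED:28046017; values invented).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.lognormal(1.5, 0.6, n),      # PSA (ng/mL)
    rng.integers(0, 2, n),           # abnormal DRE (0/1)
    rng.integers(0, 2, n),           # suspicious TRUS lesion (0/1)
    rng.normal(45, 15, n),           # prostate volume (mL)
])
# Synthetic outcome: risk rises with PSA, DRE and TRUS, falls with volume
logit = -3 + 0.25 * X[:, 0] + 1.0 * X[:, 1] + 0.8 * X[:, 2] - 0.03 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# Split into a development and a validation cohort, as the studies above do
X_dev, X_val, y_dev, y_val = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

risk = model.predict_proba(X_val)[:, 1]        # individualized risk estimate
print("validation AUC:", round(roc_auc_score(y_val, risk), 3))
```

External validation, as in the abstracts above, amounts to repeating only the last two lines on a cohort the model never saw, and additionally checking calibration rather than discrimination alone.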
abstract_id: PUBMED:36160831 Augmenting prostate magnetic resonance imaging reporting to incorporate diagnostic recommendations based upon clinical risk calculators. Risk calculators have offered a viable tool for clinicians to stratify patients at risk of prostate cancer (PCa) and to mitigate the low sensitivity and specificity of screening prostate specific antigen (PSA). While initially based on clinical and demographic data, incorporation of multiparametric magnetic resonance imaging (MRI) and the validated prostate imaging reporting and data system suspicion scoring system has standardized and improved risk stratification beyond the use of PSA and patient parameters alone. Biopsy-naïve patients with lower risk profiles for harboring clinically significant PCa are often subjected to uncomfortable, invasive, and potentially unnecessary prostate biopsy procedures. Incorporating risk calculator data into prostate MRI reports can broaden the role of radiologists, improve communication with clinicians primarily managing these patients, and help guide clinical care in directing the screening, detection, and risk stratification of PCa. abstract_id: PUBMED:35941246 External validation of two mpMRI-risk calculators predicting risk of prostate cancer before biopsy. Purpose: Risk calculators (RC) aim to improve prebiopsy risk stratification. Their latest versions now include multiparametric magnetic resonance imaging (mpMRI) findings. For their implementation into clinical practice, critical external validations are needed. Methods: We retrospectively analyzed the patient data of 554 men who underwent ultrasound-guided targeted and systematic prostate biopsies at 2 centers. We validated the mpMRI-RCs of Radtke et al. (RC-R) and Alberts et al. (RC-A), previously shown to predict prostate cancer (PCa) and clinically significant PCa (csPCa). We assessed these RCs' prediction accuracy by analyzing the receiver-operating characteristics (ROC) curve and evaluated their clinical utility using Decision Curve Analysis (DCA), including Net-Benefit and Net-Reduction curves. Results: We found that the Area Under the ROC Curve (AUC) for predicting PCa was 0.681 [confidence interval (CI) 95% 0.635-0.727] for RC-A. The AUCs for predicting csPCa were 0.635 (CI 95% 0.583-0.686) for RC-A and 0.676 (CI 95% 0.627-0.725) for RC-R. For example, at a risk threshold of 12%, RC-A needs to assess 334 and RC-R 500 patients to detect one additional true positive PCa or csPCa patient, respectively. At the same risk threshold of 12%, RC-A only needs to assess 6 and RC-R 16 patients to detect one additional true negative PCa or csPCa patient. Conclusion: The mpMRI-RCs, RC-R and RC-A, are robust and valuable tools for patient counseling. Although they do not improve PCa and csPCa detection rates by a clinically meaningful margin, they aid in avoiding unnecessary prostate biopsies. Their implementation could reduce overdiagnosis and reduce PCa screening morbidity. abstract_id: PUBMED:30698162 Liquid Biopsy Potential Biomarkers in Prostate Cancer. Prostate cancer (PCa) is the second most common cancer in men worldwide with an incidence of 14.8% and a mortality of 6.6%. Shortcomings in comprehensive medical check-ups in low- and middle-income countries lead to delayed detection of PCa and are causative of high numbers of advanced PCa cases at first diagnosis. 
The performance of available biomarkers is still insufficient, and limited applicability, including logistical and financial burdens, impedes comprehensive implementation into health care systems. There is broad agreement on the need for new biomarkers to improve (i) early detection of PCa, (ii) risk stratification, (iii) prognosis, and (iv) treatment monitoring. This review focuses on liquid biopsy tests distinguishing high-grade significant (Gleason score (GS) ≥ 7) from low-grade indolent PCa. Available biomarkers still lack performance in risk stratification of biopsy naïve patients. However, biomarkers with high negative predictive values may help to reduce unnecessary biopsies. Risk calculators using integrative scoring systems clearly improve decision-making for invasive prostate biopsy. Emerging biomarkers have the potential to substitute for PSA and improve the overall performance of risk calculators. Until then, PSA should be used and may be replaced whenever enough evidence has accumulated for better performance of a new biomarker. abstract_id: PUBMED:38478041 Incorporating PHI in decision making: external validation of the Rotterdam risk calculators for detection of prostate cancer. Purpose: External validation of existing risk calculators (RC) to assess the individualized risk of detecting prostate cancer (PCa) in prostate biopsies is needed to determine their clinical usefulness. The objective was to externally validate the Rotterdam Prostate Cancer RCs 3 and 4 (RPCRC-3/4) and that incorporating PHI (RPCRC-PHI) in a contemporary Spanish cohort. Methods: Multicenter prospective study that included patients suspected of harboring PCa. Men who attended the urology consultation were tested for PHI before prostate biopsy. To evaluate the performance of the prediction models, discrimination (receiver operating characteristic (ROC) curves), calibration and net benefit [decision curve analysis (DCA)] were calculated. These analyses were carried out for detection of any PCa and clinically significant (cs)PCa, defined as ISUP grade ≥ 2. Results: Among the 559 men included, 337 (60.28%) and 194 (34.7%) were diagnosed with PCa and csPCa, respectively. RPCRC-PHI had the best discrimination ability for detection of PCa and csPCa with AUCs of 0.85 (95%CI 0.82-0.88) and 0.82 (95%CI 0.78-0.85), respectively. Calibration plots showed that RPCRC-3/4 underestimates the risk of detecting PCa, showing the need for recalibration. In DCA, RPCRC-PHI shows the highest net benefit compared to biopsying all men. Conclusions: The RPCRC-PHI performed properly in a contemporary clinical setting, especially for prediction of csPCa. abstract_id: PUBMED:31451332 Predicting Biopsy Outcomes During Active Surveillance for Prostate Cancer: External Validation of the Canary Prostate Active Surveillance Study Risk Calculators in Five Large Active Surveillance Cohorts. Background: Men with prostate cancer (PCa) on active surveillance (AS) are followed through regular prostate biopsies, a burdensome and often unnecessary intervention, not without risks. Identifying men at a low risk of disease reclassification may help reduce the number of biopsies.
Objective: To assess the external validity of two Canary Prostate Active Surveillance Study Risk Calculators (PASS-RCs), which estimate the probability of reclassification (Gleason grade ≥7 with or without >34% of biopsy cores positive for PCa) on a surveillance biopsy, using a mix of months since last biopsy, age, body mass index, prostate-specific antigen, prostate volume, number of prior negative biopsies, and percentage (or ratio) of positive cores on last biopsy. Design, Setting, And Participants: We used data up to November 2017 from the Movember Foundation's Global Action Plan (GAP3) consortium, a global collaboration between AS studies. Outcome Measurements And Statistical Analysis: External validity of the PASS-RCs for estimating reclassification on biopsy was assessed by calibration, discrimination, and decision curve analyses. Results And Limitations: Five validation cohorts (Prostate Cancer Research International: Active Surveillance, Johns Hopkins, Toronto, Memorial Sloan Kettering Cancer Center, and University of California San Francisco), comprising 5105 men on AS, were eligible for analysis. The individual cohorts comprised 429-2416 men, with a median follow-up between 36 and 84 mo, in both community and academic practices mainly from western countries. Abilities of the PASS-RCs to discriminate between men with and without reclassification on biopsy were reasonably good (area under the receiver operating characteristic curve values 0.68 and 0.65). The PASS-RCs were moderately well calibrated, and had a greater net benefit than most default strategies between a predicted 10% and 30% risk of reclassification. Conclusions: Both PASS-RCs improved the balance between detecting reclassification and performing surveillance biopsies by reducing unnecessary biopsies. Recalibration to the local setting will increase their clinical usefulness and is therefore required before implementation. Patient Summary: Unnecessary prostate biopsies while on active surveillance (AS) should be avoided as much as possible. The ability of two calculators to selectively identify men at risk of progression was tested in a large cohort of men with low-risk prostate cancer on AS. The calculators were able to prevent unnecessary biopsies in some men. Usefulness of the calculators can be increased by adjusting them to the characteristics of the population of the clinic in which the calculators will be used. Answer: Based on the provided abstracts, Western-based online prostate cancer risk calculators may not be optimal for predicting prostate cancer after prostate biopsy for the Korean population. Specifically, a study analyzed the performance of the European Randomized Screening of Prostate Cancer Risk Calculator (ERSPC-RC) and the Prostate Cancer Prevention Trial Risk Calculator (PCPT-RC) in a Korean cohort and found that while the ERSPC-RC performed better than the PCPT-RC and PSA alone, it was not significantly different from the PSA density (PSAD) in predicting prostate cancer risk. The study concluded that Western-based prostate cancer risk calculators are not useful for urologists in predicting prostate cancer in the Korean population (PUBMED:23549812). Additionally, a separate study developed and externally validated the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer (KPCRC-HG), which showed higher predictive accuracy than the PCPTRC-HG in a Korean population and similar performance with the ERSPCRC-HG. 
This suggests that region-specific risk calculators, such as the KPCRC-HG, may be more appropriate for use in the Korean population (PUBMED:28046017). In contrast, phi-based risk calculators were found to perform better in predicting prostate cancer in the Chinese population, indicating that incorporating additional biomarkers like the Prostate Health Index (phi) could improve the performance of risk calculators in Asian cohorts (PUBMED:30924451). Overall, while Western-based risk calculators have been validated and used extensively in Western populations, their applicability to the Korean population may be limited, and the development of region-specific calculators or the incorporation of additional biomarkers may provide better predictive accuracy for prostate cancer risk in this group.
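Several of the abstracts above judge calculators by decision curve analysis. The quantity they report, net benefit, has a simple closed form: at a risk threshold t, NB = TP/n − (FP/n) · t/(1−t), compared against the default strategies of biopsying everyone or no one. The sketch below computes it on synthetic data; the prevalence, risk model and thresholds are invented for illustration, and the formula itself is the standard one from the decision-curve literature.

```python
# Sketch of the net-benefit calculation behind decision curve analysis
# (standard formula; synthetic data and thresholds for illustration only).
import numpy as np

def net_benefit(y_true, risk, threshold):
    """Net benefit of biopsying everyone whose predicted risk >= threshold."""
    n = len(y_true)
    biopsy = risk >= threshold
    tp = np.sum(biopsy & (y_true == 1))   # cancers found
    fp = np.sum(biopsy & (y_true == 0))   # unnecessary biopsies
    return tp / n - (fp / n) * threshold / (1 - threshold)

rng = np.random.default_rng(1)
y = rng.random(1000) < 0.25                  # ~25% cancer prevalence (assumed)
risk = np.clip(0.25 + 0.3 * (y - 0.5) + rng.normal(0, 0.15, 1000), 0.01, 0.99)

for t in (0.05, 0.10, 0.20):
    print(f"threshold {t:.2f}: model {net_benefit(y, risk, t):+.3f}, "
          f"biopsy-all {net_benefit(y, np.ones(1000), t):+.3f}")
```

A calculator is clinically useful at a given threshold only if its net benefit exceeds both the biopsy-all and the biopsy-none (zero) strategies, which is exactly the comparison the PBCG-RC and RPCRC validations above perform.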
Instruction: Should extended disability be an exclusion criterion for tertiary rehabilitation? Abstracts: abstract_id: PUBMED:27866422 Mapping the rehabilitation interventions of a community stroke team to the extended International Classification of Functioning, Disability and Health Core Set for Stroke. Purpose: The aim of this study was to evaluate whether the Extended International Classification of Functioning, Disability and Health Core Set for Stroke captured the interventions of a community stroke rehabilitation team situated in a large city in New Zealand. It was proposed that the results would identify the contribution of each discipline, and the gaps and differences in service provision to Māori and non-Māori. Applying the Extended International Classification of Functioning, Disability and Health Core Set for Stroke in this way would also inform whether this core set should be adopted in New Zealand. Method: Interventions were retrospectively extracted from 18 medical records and linked to the International Classification of Functioning, Disability and Health and the Extended International Classification of Functioning, Disability and Health Core Set for Stroke. The frequencies of linked interventions and the health discipline providing the intervention were calculated. Results: Analysis revealed that 98.8% of interventions provided by the rehabilitation team could be linked to the Extended International Classification of Functioning, Disability and Health Core Set for Stroke, with more interventions for body function and structure than for activities and participation; no interventions for emotional concerns; and limited interventions for community, social and civic life. Results support previous recommendations for additions to the EICSS. Conclusions: The results support the use of the Extended International Classification of Functioning, Disability and Health Core Set for Stroke in New Zealand and demonstrate its use as a quality assurance tool that can evaluate the scope and practice of a rehabilitation service. Implications for Rehabilitation: The Extended International Classification of Functioning, Disability and Health Core Set for Stroke appears to represent the stroke interventions of a community stroke rehabilitation team in New Zealand. As a result, researchers and clinicians may have increased confidence to use this core set in research and clinical practice. The Extended International Classification of Functioning, Disability and Health Core Set for Stroke can be used as a quality assurance tool to establish whether a community stroke rehabilitation team is meeting the functional needs of its stroke population. abstract_id: PUBMED:9794056 Should extended disability be an exclusion criterion for tertiary rehabilitation? Socioeconomic outcomes of early versus late functional restoration in compensation spinal disorders. Study Design: A prospective cohort design with two groups of patients representing short-term or long-term disability (n = 497) who were selected from a larger cohort (n = 938) of consecutively treated spinal disorder patients with chronic compensation injuries. Objectives: To prospectively evaluate the impact of length of spinal disability on socioeconomic outcomes of medically directed rehabilitation.
Summary Of Background Data: Despite an increasing tendency of managed care organizations to limit rehabilitation services for disabled workers with chronic spinal disorders, there has been a surprising lack of prospective research evaluating the impact of length of disability on objective socioeconomic treatment outcomes. Although only approximately 10% of all patients with spinal disorders are disabled beyond 4 months, they account for nearly 80% of all workers' compensation expenditures. Little is known about whether relatively early intervention improves outcomes after chronicity has been established or whether any predictors distinguish between these groups. Methods: Two comparison groups of functional restoration tertiary treatment graduates were identified from the same community referral pool. The "long-term disabled" group involved a minimum of 18 months of disability (n = 252). This group was compared with a "short-term disabled" group (n = 245) with no more than 8 months since injury, but considered chronic based on a minimum of 4 months after injury. The long-term disabled group showed significantly higher rates of pretreatment surgery than the short-term disabled group (P < 0.001). All patients were evaluated prospectively with specific physical, psychological, and occupational measurements. They also underwent a structured interview 1 year after treatment evaluating work status, health care use, and recurrent injury. Results: The short-term disabled group showed statistically higher return to work (P < 0.001) and work retention (P < 0.05) relative to the long-term disabled group. However, health care use and recurrent lost time injury claims were low in both groups and did not differ significantly. No predictors of outcome were found among the prospectively collected physical performance or psychosocial variables. Conclusions: This study suggests that early tertiary nonoperative care, once patients with chronic spinal disorders are identified as having potentially high-cost chronic pain and disability, is efficacious in achieving goals of better work return and work retention. Such early rehabilitation may also prevent significant indemnity expense, as well as some late surgical interventions sought by progressively more desperate patients. However, individuals with long-term disability achieve respectable work return and retention rates, while faring no worse on other socioeconomic outcomes that represent major "cost drivers" to the workers' compensation system. Early intervention is not a panacea or a necessary condition for the successful rehabilitation of workers with disabling chronic spinal disorders. abstract_id: PUBMED:25481421 Defining the principles of musculoskeletal disability and rehabilitation. Disability is strongly associated with musculoskeletal conditions such as arthritis, low back pain and other soft tissue and joint disorders. The burden of these conditions may become exponentially high in the absence of rehabilitation. To understand disability, the ICF (International Classification of Functioning, Disability and Health) provides the framework to disentangle the different domains that comprise disability. Disability in ICF terms is defined as an impairment of body functions and body structures, limited activity and restricted participation, and can be influenced by environmental and personal factors. The ICF can provide the domains of disability pertinent to individuals with musculoskeletal conditions by using ICF Core Sets.
Musculoskeletal-related disability is amenable to rehabilitation and there is evidence to suggest the effectiveness of multidisciplinary forms of rehabilitation programs. Community-based programs as an extension of rehabilitation also have evidence of improving clinical and quality-of-life outcomes in people with musculoskeletal conditions. abstract_id: PUBMED:34771851 A Reliable Fracture Angle Determination Algorithm for Extended Puck's 3D Inter-Fiber Failure Criterion for Unidirectional Composites. Determination of the fracture angle and maximum exposure value of extended Puck's 3D inter-fiber failure (IFF) criterion is of great importance for predicting the failure mechanism of unidirectional fiber-reinforced composites. In this paper, a reliable semi-analytical algorithm (RSAA) is presented for finding the fracture angle and corresponding exposure value for the extended Puck's failure criterion. One hundred million cases were tested in Python to verify the accuracy of the present and other algorithms, using strength-value-stress-state combinations more universal than those in the previous literature. The reliability of previous algorithms is discussed and counterexamples are provided for illustration. The statistical results show RSAA is adequate for implementation in extended Puck's criterion and much more reliable than previous algorithms. RSAA can correctly predict the results with a probability of over 99.999%. abstract_id: PUBMED:23110397 Rehabilitation time before disability pension. Background: The decision to grant a disability pension is usually the end of a long process of medical examinations, treatment and rehabilitation attempts. This study investigates to what extent the rehabilitation time prior to disability pension is associated with characteristics of the individual or the local employment and welfare office, measured as municipality variance. Methods: A study of 2,533 40- to 42-year-olds who received disability pension over a period of 18 years. The logarithm of the rehabilitation time before granting a disability pension was analysed with multilevel regression. Results: The rehabilitation time before a disability pension was granted ranged from 30 to 5,508 days. Baseline health characteristics were only moderately associated with rehabilitation time. Younger people and people with unemployment periods had longer rehabilitation time before a disability pension was granted. There were only minor differences in rehabilitation time between men and women and between different levels of education. Approximately 2% of the total variance in rehabilitation time could be attributed to the municipality of residence. Conclusions: There is a higher threshold for granting a disability pension to younger persons and those who are expecting periods of unemployment, which is reflected in the extended rehabilitation requirements for these groups. The longer rehabilitation period for persons with psychiatric disorders might reflect a lack of common knowledge about the working capacity of, and suitably fitted rehabilitation programs for, people with psychiatric disorders. abstract_id: PUBMED:37573067 Public sector rehabilitation services for physical disabilities in tertiary care hospitals of Karachi, Pakistan: A qualitative study. Background: Around the globe there are 2.4 billion people in need of rehabilitation.
Disability and rehabilitation have not gained much importance in the public health arena, possibly due to limited research and awareness regarding the significance of rehabilitation. This study aimed to provide an in-depth understanding of the barriers and facilitators to the provision and utilization of rehabilitation services in public sector tertiary care hospitals of Karachi, Pakistan. Methodology: This qualitative exploratory study was conducted from August to September 2015 in two tertiary care hospitals of Karachi. The criterion for hospital selection included the availability of functional rehabilitation services. Twenty-four key informant interviews were conducted with health professionals, including doctors, physiotherapists, occupational therapists, prosthetists, orthotists, as well as patients and caregivers. Qualitative content analysis was performed using a consensual qualitative research approach. Responses from key informant interviews were coded into free nodes and then categorized into themes. Results: Two themes emerged from the data: constraints to the provision of rehabilitation services, and barriers to the utilization of rehabilitation services. Basic infrastructure was available at both study sites; however, a lack of structured guidelines for referring patients to these services, a lack of information-sharing and guidance to the patients, and a limited supply of equipment (treatment modalities) hampered the provision of services. The barriers to rehabilitation services included cost pertaining to transport, environmental barriers, lack of support from the employer, and strain on and lack of support from caregivers. Conclusion: Rehabilitation services require attention and investment in research and allocation of funds to strengthen the service delivery system. Hospitals, alongside the department of health, need to develop a strategic plan to set future directions and standards of available rehabilitation services. abstract_id: PUBMED:27374002 Between exclusion and emancipation: Foucault's ethics and disability. The aim of the study was to demonstrate how Foucault's ethics, which we understand as a tension between exclusion and emancipation, helps both critically reassess two disability models that prevail in the contemporary literature concerning disability, that is the medical model and the social one, and support and inspire an ethical project of including people with disabilities in spheres of life from which they have been excluded by various power/knowledge regimes. We claim, following Foucault, that such a project should be informed by critical reflection on exclusion-generating forms of knowledge about people with disabilities and focused on individual ethical actions fostering self-realization and emancipation of people with disability. abstract_id: PUBMED:33527144 Five years after development of the national disability, health and rehabilitation plan for Ukraine: Achievements and challenges. In 2015, technical consultation to support development of the National Disability, Health and Rehabilitation Plan in Ukraine was carried out by the Rehabilitation Advisory Team of the International Society of Physical and Rehabilitation Medicine. Recommendations for actions and projects to improve rehabilitation services within the healthcare system in Ukraine were developed, proposed and implemented. 
The achievements in the subsequent 5 years include establishing, training and enabling employment at health-care facilities for new rehabilitation professionals (physical and rehabilitation medicine physicians, physical therapists, occupational therapists), commencing implementation of the International Classification of Functioning, Disability and Health (ICF), and increasing rehabilitation knowledge among Ukrainian communities and decision-makers. The main challenges include inappropriate healthcare legislation for developing modern rehabilitation services, gaps in the clinical environment and quality control for training rehabilitation professionals, and the slow pace of implementation of the ICF. A general facilitator is the ongoing healthcare reform in Ukraine, especially its continuation to secondary and tertiary healthcare levels. Future high-priority activities will include amendments to basic healthcare legislation, and introducing a bio-psycho-social approach for the provision of rehabilitation services at all healthcare levels and all rehabilitation treatment phases, starting with the most debilitating health conditions. It will be important to continue collaboration with European and international partners. abstract_id: PUBMED:34821573 Diagnosis, rehabilitation and development of disability 2000-2019 in Iceland Introduction: The disability assessment standard based on medically recognized illnesses or disabilities was introduced in Iceland in 1999. The aim of this study is to examine the development of Social Insurance Administration (Tryggingastofnun ríkisins, TR) rulings regarding rehabilitation and disability pensions over a twenty-year period, since its introduction. Material And Methods: All registered diagnoses in the medical certificates of TR due to the approved rehabilitation or disability pension were examined in the period 2000-2019. The gender distribution and age distribution of these applicants and the development in their numbers during the period are described. At the same time, costs as a percentage of government expenditure are examined. Results: The number of younger rehabilitation pensioners has increased rapidly in recent years, at the same time as the relative increase in disability pensioners has slowed slightly. Mental and musculoskeletal disorders are by far the most common types of illness leading to disability. Mental illnesses differ in terms of age distribution and increase over time. The proportion of individuals aged 18-66 with a 75% disability assessment has increased by a third during the period, from about 6% to 8%. The gender distribution of disability pensioners remains similar, with women accounting for 62% in total. Women are much more likely to receive a disability pension due to musculoskeletal disorders than men, and men are somewhat more likely to suffer from mental illness. Total payments to rehabilitation and disability pensioners continue to grow as a proportion of central government expenditure. Conclusion: The number of rehabilitation pensioners has increased significantly since 2018, at the same time as the number of disability pensioners has decreased, and there are indications that rehabilitation results in a lower number of new disability pensioners. Mental and musculoskeletal disorders are by far the most common types of illness leading to disability.
A slightly lower proportion of disabled people have a psychiatric diagnosis as a first diagnosis in the period 2000-2019 compared to those with a valid disability assessment in 2005, but the proportion of musculoskeletal disorders is slightly higher. Nevertheless, mental illnesses differ in age distribution and increase over time. abstract_id: PUBMED:10627966 Medical rehabilitation in long-term disability Unlabelled: Based on data from a prospective observational cohort study, the following article attempts to answer two key questions of rehabilitation research in Germany: (1) What are the utilization rates of rehabilitation measures in patients with long-term work-incapacity due to low-back pain? (2) In this group of patients: does participation in rehabilitation programs correlate with return to work? Patients with long-term work-incapacity due to low-back pain report a severely impaired health status (measured by FFbH, SF-36, numeric rating scale for pain intensity) which calls for comprehensive medical treatment and rehabilitation. 40.5% of study patients received first-time medical rehabilitation between three months and one year after the beginning of long-term work-incapacity, 20.5% between one year and two years after the beginning of long-term work-incapacity. These figures were not influenced by age. Considering the severity of impairment, we judge these rates as being quite low. There was no positive correlation found between utilization of medical rehabilitation measures or any other medical treatment and return to work. A positive correlation was observed for utilization of first-time medical rehabilitation between three months and one year after the beginning of long-term work-incapacity and functional capacity. This effect was demonstrable for patients 50 years of age or younger. Conclusions: (1) In our cohort, utilization of medical rehabilitation measures in patients with low-back pain and long-term work-incapacity is quite low. This suggests a considerable amount of underutilization. (2) In order to avoid underutilization, the process of accessing medical rehabilitation within the German health care system should be critically reviewed. (3) In the given context of high unemployment rates and the German early pensioning system, the use of "return to work" as the main criterion to judge effectiveness of rehabilitation measures remains debatable. Answer: Based on the abstracts provided, it appears that extended disability should not necessarily be an exclusion criterion for tertiary rehabilitation. The study described in PUBMED:9794056 found that early tertiary nonoperative care is efficacious in achieving better work return and work retention for patients with chronic spinal disorders, suggesting that early rehabilitation can be beneficial. However, it also noted that individuals with long-term disability still achieve respectable work return and retention rates and do not fare worse on other socioeconomic outcomes. This indicates that even with extended disability, patients can still benefit from rehabilitation services. Furthermore, PUBMED:23110397 highlights that there is a higher threshold for granting a disability pension to younger persons and those who are expecting periods of unemployment, which is reflected in the extended rehabilitation requirements for these groups. This suggests that extended disability does not preclude the need for rehabilitation but rather points to a need for tailored rehabilitation programs.
Additionally, PUBMED:25481421 emphasizes that musculoskeletal-related disability is amenable to rehabilitation and that there is evidence supporting the effectiveness of multidisciplinary rehabilitation programs. This further supports the notion that extended disability should not be an exclusion criterion for tertiary rehabilitation, as rehabilitation can be effective for a range of disabilities. In summary, the evidence from the abstracts suggests that extended disability should not automatically exclude individuals from receiving tertiary rehabilitation, as rehabilitation can lead to positive outcomes even for those with long-term disabilities.
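The municipality-variance result in PUBMED:23110397 ("approximately 2% of the total variance in rehabilitation time could be attributed to the municipality of residence") is a variance-partition coefficient from a multilevel model. The sketch below reproduces the idea with a balanced one-way ANOVA estimator on simulated log rehabilitation times; the number of municipalities, group sizes and variance components are all invented so that the share comes out near 2%.

```python
# Illustrative variance partitioning: share of variance at the group level
# (simulated data; sizes and variances are assumptions, not study values).
import numpy as np

rng = np.random.default_rng(7)
n_munis, per_muni = 100, 25
muni_effect = rng.normal(0, 0.10, n_munis)            # between-municipality SD
log_time = muni_effect[:, None] + rng.normal(0, 0.70, (n_munis, per_muni))

group_means = log_time.mean(axis=1)
msb = per_muni * group_means.var(ddof=1)              # mean square between
msw = log_time.var(axis=1, ddof=1).mean()             # mean square within

# Method-of-moments: E[MSB] = sigma_w^2 + k * sigma_b^2 for balanced groups
var_between = max((msb - msw) / per_muni, 0.0)
icc = var_between / (var_between + msw)
print(f"share of variance at municipality level: {icc:.1%}")
```

In the study itself this quantity came from a multilevel regression with covariates rather than a raw ANOVA, but the interpretation of the ~2% figure is the same: individual-level variation dwarfs between-office variation.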
Instruction: Monitoring electrical skin conductance: a tool for the assessment of postoperative pain in children? Abstracts: abstract_id: PUBMED:19672172 Monitoring electrical skin conductance: a tool for the assessment of postoperative pain in children? Background: Monitoring changes in electrical skin conductance has been described as a potentially useful tool for the detection of acute pain in adults. The aim of this study was to test the method in pediatric patients. Methods: A total of 180 postoperative pediatric patients aged 1-16 yr were included in this prospective, blinded observational study. After arrival in the recovery unit, pain was assessed by standard clinical pain assessment tools (1-3 yr: Face Legs Activity Cry Consolability Scale, 4-7 yr: Revised Faces Scale, 8-16 yr: Visual Analogue Scale) at various time points during their stay in the recovery room. The number of fluctuations in skin conductance per second (NFSC) was recorded simultaneously. Results: Data from 165 children were used for statistical analysis, and 15 patients were excluded. The area under the Receiver Operating Characteristic curve for predicting moderate to severe pain from NFSC was 0.82 (95% confidence interval 0.79-0.85). Over all age groups, an NFSC cutoff value of 0.13 was found to distinguish between no or mild versus moderate or severe pain with a sensitivity of 90% and a specificity of 64% (positive predictive value 35%, negative predictive value 97%). Conclusions: NFSC accurately predicted the absence of moderate to severe pain in postoperative pediatric patients. The measurement of NFSC may therefore provide an additional tool for pain assessment in this group of patients. However, more research is needed to prospectively investigate the observations made in this study and to determine the clinical applicability of the method. abstract_id: PUBMED:21510069 Monitoring of skin conductance in anesthesiology and intensive care The purpose of the study was to examine the efficacy and safety of continuous postoperative analgesia delivered through a local anesthetic wound irrigation catheter in newborns. The study was conducted during the postoperative period of 33 full-term newborns. In the study group (n = 15), analgesia was carried out through the wound catheter with infusion of local anesthetic; in the control group (n = 18), a continuous intravenous infusion of promedol was used. The effectiveness of postoperative analgesia was rated by the CRIES postoperative pain scale, monitoring of skin conductance, cardiointervalography and monitoring of blood hormone levels. The study showed that the use of a wound catheter for continuous postoperative analgesia with local anesthetics provides an effective level of analgesia in newborns, comparable with the administration of opioid analgesics. There were no severe postoperative complications. The method of analgesia by administering a local anesthetic through the wound catheter can be recommended for postoperative analgesia in newborns, especially during major surgeries. abstract_id: PUBMED:25829906 Postoperative pain assessment using four behavioral scales in Pakistani children undergoing elective surgery. Background: Several measurement tools have been used for assessment of postoperative pain in pediatric patients. Self-report methods have limitations in younger children, and parent, nurse or physician assessment can be used as a surrogate measure.
These tools should be tested in different cultures as pain can be influenced by sociocultural factors. The objective was to assess the inter-rater agreement on four different behavioral pain assessment scales in our local population. Materials And Methods: This prospective, descriptive, observational study was conducted in Pakistan. American Society of Anesthesiologists I and II children, 3-7 years of age, undergoing elective surgery were enrolled. Four pain assessment scales were used: Children's Hospital of Eastern Ontario Pain Scale (CHEOPS), Toddler Preschool Postoperative Pain Scale (TPPPS), objective pain scale (OPS), and Face, Legs, Activity, Cry, Consolability (FLACC). At 15 and 60 min after arrival in the postanesthesia care unit (PACU), each child evaluated his/her postoperative pain by self-reporting and was also independently assessed by the PACU nurse, PACU anesthetist and the parent. The sensitivity and specificity of the responses of the four pain assessment scales were compared to the response of the child. Results: At 15 min, sensitivity and specificity were >60% for doctors and nurses on the FLACC, OPS, and CHEOPS scales, and on the FLACC and CHEOPS scales for the parents. Parents showed poor agreement on OPS and TPPPS. At 60 min, sensitivity was poor on the OPS scale by all three observers. Nurses showed a lower specificity on the FLACC tool. Parents had poor specificity on CHEOPS, and the rate of false negatives was high with TPPPS. Conclusions: We recommend the use of the FLACC scale for assessment by parents, nurses, and doctors in Pakistani children aged between 3 and 7. abstract_id: PUBMED:17060329 Monitoring of skin conductance to assess postoperative pain intensity. Background: Pain is known to alter the electrogalvanic properties of the skin. The aim of this pilot study was to investigate the influence of postoperative pain on skin conductance (SC) readings. Methods: After obtaining ethical approval and written informed consent, 25 postoperative patients were asked to quantify their level of pain on a numeric rating scale (NRS, 0-10) at different time points in the recovery room. As a parameter of SC, the number of fluctuations within the mean SC per second (NFSC) was recorded. Simultaneously, the NRS was obtained from patients by a different observer who was blinded to the NFSC values. Results: Data from 110 readings of 25 patients (14 female, 11 male; 21-67 yr) were included. NFSC showed a significant correlation with the NRS (r=0.625; P<0.01), whereas heart rate and blood pressure showed no or very weak correlation with the NRS. NFSC was significantly different between patients with no (NRS=0), mild (NRS=1-3), moderate (NRS=4-5) and severe (NRS=6-10) pain (no: 0.047, mild: 0.089, moderate: 0.242, severe: 0.263; P<0.0001). Post hoc, a cut-off value for NFSC (0.1) was calculated above which a pain score >3 on the NRS was predicted with sensitivity of 89% and specificity of 74%. Conclusions: The severity of postoperative pain significantly influences SC. Using cut-off values, NFSC may prove a useful tool for pain assessment in the postoperative period. abstract_id: PUBMED:22562934 Impact of surgical intervention and postoperative pain on electrical skin resistance at acupuncture points: an exploratory study. Objectives: One theory about acupuncture suggests that pathological processes can cause measurable changes in electrical skin resistance (ESR) at acupuncture points (APs).
Although the theory has yet to be proven, ESR measurements (ESRMs) form a frequently used part of contemporary acupuncture. The aim of this study was to test the so-called 'electrical responsiveness' of APs in the setting of a defined operative trauma. Methods: ESRMs (n=424) were performed at the APs and surrounding skin of GB34 and ST38 in 163 participants using an impedance meter array developed for the purpose of ESRMs. For each group, the percentage of measurements with a significantly different ESR between the APs and the surrounding skin was calculated, and the groups were compared with each other. Measurements of four groups were compared: healthy control subjects (n=30) and patients after ophthalmic (n=29), hip (n=42) and shoulder (n=30) surgery. The influence of postoperative pain intensity was also assessed. Results: Group comparison showed no significant differences for ST38. The ESRMs at GB34 had a significantly higher percentage of measurements with an increased ESR after ophthalmic (23.2%) and hip (22.2%) surgery, but not after shoulder surgery (7.5%). Subgroup analysis showed that an increase in pain intensity tended to lead to a decrease in the number of APs with ESR changes. Conclusion: These results suggest that reactive changes in ESR at APs might exist. Pain and alertness seem to have an impact on ESR at APs. However, the current data do not allow for conclusions to be drawn concerning the clinical use of ESRMs. abstract_id: PUBMED:19183409 Monitoring of sympathetic tone to assess postoperative pain: skin conductance vs surgical stress index. The number of fluctuations in skin conductance per second has been described as a potential tool for monitoring postoperative pain. More recently, the surgical stress index has shown promising correlations with intra-operative painful stimuli. We compared both methods for their ability to assess postoperative pain, in 100 postoperative patients who were also asked to quantify their level of pain at different time points in the recovery room. The number of fluctuations per second and surgical stress index were significantly different between pain scoring ≤ 5/10 and > 5/10 on a numeric rating scale (mean (SE) number of fluctuations per second 0.12 (0.02) vs 0.21 (0.03), respectively; p = 0.017, and surgical stress index 57 (1.4) vs 64 (1.9) points, respectively; p = 0.001). Both number of fluctuations in skin conductance per second and surgical stress index identified timepoints with moderate to severe pain with only moderate sensitivity and specificity. abstract_id: PUBMED:29402577 High-voltage electric stimulation of the donor site of skin grafts accelerates the healing process. A randomized blinded clinical trial. Introduction: Severe burns benefit from skin grafting, and grafting surgery is of great importance in the treatment of these injuries. As a result, there is formation of an additional wound at the donor site, which is painful and susceptible to infection. However, the therapeutic approach to these problems at donor sites for skin grafting is insufficiently explored in the literature. Aim: To evaluate electrical stimulation of the donor sites of burn patients treated by grafting surgery. Methods: This work evaluated 30 donor sites of cutaneous graft burn patients treated with high-voltage electrical stimulation. Subjects were randomized into two groups: electrical stimulation (GES), treated with electrostimulation (50 min, 100 Hz, twin pulses of 15 µs, monophasic), and the sham group (GS), treated by the same procedures but without current.
Pain was assessed by visual analog scale daily before and after the electrical stimulation. The time elapsed until complete epithelization was evaluated (the time until the primary dressing detached spontaneously). Skin temperature was measured by thermography. The characteristics of donor sites were qualitatively evaluated using images and the plug-in CaPAS® (Carotid Plaque Analysis Software). Results: The results showed a significant decrease in pain, which was absent on the third day in the GES and the sixth day in the GS. The time in days until the primary dressing detached spontaneously decreased in the GES group (4.7±0.2) compared to the GS group (7.0±1.3) (p<0.05). Donor site healing characteristics such as vascularization, pigmentation, height, the quantity of crust formed, irregularities, and the quality of healing were better in the GES; moreover, homogeneity and inertia of the images confirmed higher healing quality. Conclusion: As a result of the study, the technology shows promise and merits a larger study with objective assessments and different physical variables. abstract_id: PUBMED:27266751 Children's nurses' post-operative pain assessment practices. Pain assessment is crucial to achieving optimal pain management in children. Pain that is insufficiently controlled can have extensive short- and long-term repercussions. Many studies continue to report that children experience unnecessary post-operative pain when they are in hospital. The purpose of this literature review was to explore post-operative pain assessment practices used by children's nurses. A literature search of databases was undertaken and inclusion criteria identified. Four themes emerged: pain assessment tools; behavioural cues; documentation; and communication between child, parent/carer and nurse. The findings showed that pain assessment tools were inadequately used, that children's behavioural cues were misinterpreted, and that there was inconsistency in the documentation of pain scores and in communication about pain scores between children, parent/carer and nurse. Addressing the key issues identified from the articles reviewed can help improve nursing practice and care. abstract_id: PUBMED:17845649 The assessment of postoperative pain by monitoring skin conductance: results of a prospective study. The number of fluctuations of skin conductance per second correlates with postoperative pain. The aim of this prospective study was to test the cut-off value for the number of fluctuations of skin conductance per second obtained from a previous study. Seventy-five patients were asked to quantify their level of pain on a numeric rating scale (0-10) in the recovery room. The number of fluctuations of skin conductance per second was recorded simultaneously. The number of fluctuations of skin conductance per second was different between patients with no (0.07), mild (0.16), moderate (0.28) and severe pain (0.33); p < 0.001. The tested cut-off value for the number of fluctuations of skin conductance per second (0.1) distinguished a numeric rating scale ≤ 3 from > 3 with 88.5% sensitivity and 67.7% specificity. The number of fluctuations of skin conductance per second may be a useful means of assessing postoperative pain. abstract_id: PUBMED:10437441 Use of Faces Scale in children's self-assessment of pain. In this study, 3- to 8-year-old children's pain assessment and management after tonsillectomies were evaluated on two otological wards. The sample consisted of 80 children, 40 from each ward, and the response rate was 92%.
The groups were compared with each other: on one ward, nurses used the Faces Scale in children's self-assessment of pain, while the other served as a comparison ward. The data were collected using measurements: 1. background of children, 2. intensity of children's pain after the operation observed by their parents, 3. parents' satisfaction with children's assessment and management of pain, 4. children's self-assessment of pain with the Faces Scale and 5. documentation of children's pain management. The data were analysed by cross-tabulations, mean values, t-test, Pearson's correlation and content analysis. 40% of children had severe or intolerable pain after the operation as observed by their parents. Children's self-assessment of pain and parents' observations correlated highly (r = .74, p < .05). Children had less pain (the difference was statistically significant) on the ward where the Faces Scale was used in pain assessment. The parents were also more satisfied with their children's pain assessment and management on this ward. However, there were differences in pain management practices between the two wards. Nurses need more education to be able to assess and manage children's pain adequately. Answer: Monitoring electrical skin conductance has been explored as a potential tool for assessing postoperative pain in children. A study involving 180 postoperative pediatric patients aged 1-16 years found that the number of fluctuations in skin conductance per second (NFSC) could accurately predict the absence of moderate to severe pain. The study suggested an NFSC cutoff value of 0.13 to distinguish between no or mild pain versus moderate or severe pain, with a sensitivity of 90% and a specificity of 64% (PUBMED:19672172). Another study confirmed that NFSC showed a significant correlation with the numeric rating scale (NRS) for pain, indicating that the severity of postoperative pain significantly influences skin conductance (PUBMED:17060329). Moreover, the number of fluctuations in skin conductance per second has been compared with the surgical stress index for their ability to assess postoperative pain, with both methods showing moderate sensitivity and specificity in identifying moderate to severe pain (PUBMED:19183409). However, while these findings are promising, more research is needed to prospectively investigate the clinical applicability of the method and to determine its effectiveness across different age groups and cultural settings, as pain assessment can be influenced by sociocultural factors (PUBMED:25829906). In conclusion, monitoring electrical skin conductance shows potential as an additional tool for pain assessment in postoperative pediatric patients, but further studies are required to establish its reliability and practicality in clinical settings.
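To make the cutoff logic in the answer above concrete, the sketch below applies an NFSC threshold to hypothetical recovery-room observations and computes sensitivity and specificity against the pain reference standard (NRS > 3). The 0.1 cutoff follows PUBMED:17845649; the patient values are invented for illustration only, so the resulting numbers will not match the published ones.

```python
# Minimal sketch of cutoff-based pain classification with NFSC.
# Cutoff 0.1 follows PUBMED:17845649; all patient data are hypothetical.

def evaluate_cutoff(nfsc_values, nrs_scores, cutoff=0.1, pain_threshold=3):
    """Predict 'pain present' when NFSC exceeds the cutoff, then compare
    against the reference standard (NRS > pain_threshold)."""
    tp = fp = tn = fn = 0
    for nfsc, nrs in zip(nfsc_values, nrs_scores):
        predicted = nfsc > cutoff
        actual = nrs > pain_threshold
        if predicted and actual:
            tp += 1
        elif predicted:
            fp += 1
        elif actual:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Hypothetical recovery-room observations: (NFSC, NRS pain score)
observations = [(0.07, 0), (0.16, 4), (0.28, 6), (0.33, 8), (0.09, 2), (0.12, 2)]
nfsc, nrs = zip(*observations)
sens, spec = evaluate_cutoff(nfsc, nrs)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```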
Instruction: Probably benign breast masses diagnosed by sonography: is there a difference in the cancer rate according to palpability? Abstracts: abstract_id: PUBMED:25038810 Benign core biopsy of probably benign breast lesions 2 cm or larger: correlation with excisional biopsy and long-term follow-up. Purpose: To evaluate the accuracy of benign core biopsy of probably benign breast lesions (category 3) 2 cm or larger on the basis of excisional biopsy and long-term follow-up. Methods: We retrospectively reviewed 146 category 3 lesions 2 cm or larger in 146 patients, which were diagnosed as benign by ultrasound (US)-guided core biopsy. Patients were initially diagnosed as benign at core needle biopsy and then followed up with excisional biopsy (surgical excision, n=91; US-guided vacuum-assisted excision, n=35) or breast ultrasonography (n=20). Results: Of the 126 patients who underwent surgical excision or US-guided vacuum-assisted excision, 114 patients were diagnosed with benign lesions, 10 patients with borderline lesions (benign phyllodes tumor), and two patients with malignant phyllodes tumors. The probabilities of lesions being benign, borderline and malignant were 91.8% (134/146), 6.8% (10/146), and 1.4% (2/146), respectively. Of 13 patients who had growing masses on follow-up ultrasonography, three (23.1%) were non-benign (two benign phyllodes tumors and one malignant phyllodes tumor). Conclusion: US-guided core needle biopsy of probably benign breast masses 2 cm or larger was accurate (98.6%) enough to rule out malignancy. However, it was difficult to rule out borderline lesions even when they were diagnosed as benign. abstract_id: PUBMED:18855166 Palpable breast masses with probably benign morphology at sonography: can biopsy be deferred? Background: When a palpable breast mass is detected, a biopsy is usually performed even if the mass reveals probably benign morphologic features on imaging, as there are relatively few data reporting the outcome of such breast masses. Purpose: To determine the negative predictive value of sonographic evaluation for palpable breast masses with probably benign morphology, and to assess whether follow-up may be an acceptable alternative to immediate biopsy. Material And Methods: Of the 1399 sonograms of palpable masses from January 2004 to September 2005, there were 397 patients with masses of probably benign morphology. This study included 274 of these patients (age range 12-64 years, mean age 34 years) with 312 palpable masses that were pathologically confirmed by fine-needle aspiration (n=7), ultrasound (US)-guided core needle biopsy (n=180), or surgical biopsy (n=125). The false-negative rate, negative predictive value (NPV), and 95% confidence interval (CI) were calculated using the SPSS statistical software package for Windows, version 12.0. A P value <0.05 was considered statistically significant. Results: Of the 312 masses, there were 310 benign lesions and two malignancies, resulting in a false-negative rate of 0.6% (NPV 99.4%, P value=0.0432, 95% CI 0.0-1.5%). Conclusion: The negative predictive value of sonography for palpable breast masses with probably benign morphology is high (99.4%). Therefore, short-term imaging follow-up can be an acceptable alternative to immediate biopsy, similar to the management of nonpalpable probably benign lesions (BI-RADS category 3). abstract_id: PUBMED:29460095 Diagnosis, characteristics, and treatment of breast carcinomas within benign fibroepithelial tumors.
Background: Breast carcinoma arising in a benign fibroepithelial tumor is rare, and is usually discovered incidentally during examination of the breast mass. Methods: We evaluated the clinicopathological features, treatment, and prognosis of seven women with breast carcinomas within benign fibroepithelial tumors, diagnosed and treated at a single institution between 2011 and 2015. Results: Seven women, aged 21-64 years, visited our hospital complaining of a breast mass detected by breast self-examination or screening examination. All patients had well-demarcated movable breast masses. Ultrasonography showed circumscribed masses suggesting benign tumors in all cases, and mammography revealed well-defined, high-density masses, with or without calcification. The masses progressed in two patients. A preoperative histological diagnosis of carcinoma was made following needle biopsy in four patients. Tumorectomy, breast-conserving surgery, and mastectomy were performed in three, two, and two patients, respectively. One patient underwent lymph node dissection and another underwent sentinel node biopsy. Histologically, the masses were diagnosed as invasive ductal carcinoma, ductal carcinoma in situ or lobular carcinoma in situ, arising in a benign phyllodes tumor, fibroadenoma, or fibroadenomatoid mastopathy. Lymph node metastasis was detected in two patients. There was no recurrence in any of the patients. Conclusions: Although a carcinoma within a preexisting benign fibroepithelial tumor is extremely rare, it is important to be aware of the possibility of invasive and metastatic disease. abstract_id: PUBMED:32870394 Can Follow-up be Avoided for Probably Benign US Masses with No Enhancement on MRI? Objectives: To assess whether no enhancement on pre-treatment MRI can rule out malignancy of additional US mass(es) initially assessed as BI-RADS 3 or 4 in women with newly diagnosed breast cancer. Methods: This retrospective study included consecutive women from 2010-2018 with newly diagnosed breast cancer; at least one additional breast mass (distinct from index cancer) assigned a BI-RADS 3 or 4 on US; and a bilateral contrast-enhanced breast MRI performed within 90 days of US. All malignant masses were pathologically proven; benign masses were pathologically proven or defined as showing at least 2 years of imaging stability. Incidence of malignant masses and NPV were calculated on a per-patient level using proportions and exact 95% CIs. Results: In 230 patients with 309 additional masses, 140/309 (45%) masses did not enhance while 169/309 (55%) enhanced on MRI. Of the 140 masses seen in 105 women (mean age, 54 years; range 28-82) with no enhancement on MRI, all had adequate follow-up and 140/140 (100%) were benign, of which 89/140 (63.6%) were pathologically proven and 51/140 (36.4%) demonstrated at least 2 years of imaging stability. Pre-treatment MRI demonstrating no enhancement of US mass correlate(s) had an NPV of 100% (95% CI 96.7-100.0). Conclusions: All BI-RADS 3 and 4 US masses with a non-enhancing correlate on pre-treatment MRI were benign. The incorporation of MRI, when ordered by the referring physician, may decrease unnecessary follow-up imaging and/or biopsy if the initial US BI-RADS assessment and management recommendation were to be retrospectively updated.
Key Points: • Of 309 BI-RADS 3 or 4 US masses with a corresponding mass on MRI, 140/309 (45%) demonstrated no enhancement whereas 169/309 (55%) demonstrated enhancement • All masses classified as BI-RADS 3 or 4 on US without enhancement on MRI were benign • MRI can rule out malignancy in non-enhancing US masses with an NPV of 100%. abstract_id: PUBMED:31561317 What you see is not always what you get: Radiographic-pathologic discordance among benign breast masses. The differential diagnosis for benign breast masses is broad and ranges from common lesions like fibroadenomas to rare masses like breast hamartomas. Fibroadenomas are proliferative benign masses made up of fibroglandular tissue. Hamartomas are neoplasms composed of different tissues that are endogenous to the area where they originate. Breast hamartomas specifically are rare, benign, slow-growing tumours composed of fibrotic stroma, adipose, glandular tissue, and epithelial components. Both lesions are painless, firm, and are typically palpable on clinical exam. Given their similarities in composition, diagnosing these masses can be challenging, but may be confirmed with ultrasonography, mammogram, computed tomography, magnetic resonance imaging, or via histological specimen. Once diagnosed, surgical excision is the preferred treatment option. We present a 33-year-old woman with a large left breast mass that gradually increased in size and provide a review of the current literature regarding the challenge of distinguishing between breast fibroadenomas and hamartomas. abstract_id: PUBMED:36340228 Classic illustrations of benign and malignant phyllodes breast tumors in two patients. Phyllodes tumors, World Health Organization fibroepithelial tumors, are classified as benign, borderline, or malignant based on histopathology. Phyllodes must be distinguished from benign fibroadenomas, also WHO fibroepithelial tumors. The distinction of phyllodes from fibroadenomas can be challenging clinically, as these tumors may mirror one another. Here, we present 2 cases, classic clinical and imaging examples of benign and malignant phyllodes, to review the current epidemiology, classification, diagnosis, and treatment of phyllodes tumors. abstract_id: PUBMED:27009935 Benign and Malignant Breast Disease at Rwanda's First Public Cancer Referral Center. Background: Breast cancer incidence is rising in low- and middle-income countries. Understanding the distribution of breast disease seen in clinical practice in such settings can guide early detection efforts and clinical algorithms, as well as support future monitoring of cancer detection rates and stage. Patients And Methods: We conducted a retrospective medical record review of 353 patients who presented to Butaro Cancer Center of Excellence in Rwanda with an undiagnosed breast concern during the first 18 months of the cancer program. Results: Eighty-two percent of patients presented with a breast mass. Of these, 55% were diagnosed with breast cancer and 36% were diagnosed with benign disease. Cancer rates were highest among women 50 years and older. Among all patients diagnosed with breast cancer, 20% had stage I or II disease at diagnosis, 46% had locally advanced (stage III) disease, and 31% had metastatic disease. Conclusion: After the launch of Rwanda's first public cancer referral center and breast clinic, cancer detection rates were high among patients presenting with an undiagnosed breast concern.
These findings will provide initial data to allow monitoring of changes in the distribution of benign and malignant disease and of cancer stage as cancer awareness and services expand nationally. Implications For Practice: The numbers of cases and deaths from breast cancer are rising in low-income countries. In many of these settings, health care systems to address breast problems and efficiently refer patients with symptoms concerning for cancer are rudimentary. Understanding the distribution of breast disease seen in such settings can guide early detection efforts and clinical algorithms. This study describes the characteristics of patients who came with a breast concern to Rwanda's first public cancer referral center during its first 18 months. More than half of patients with a breast mass were diagnosed with cancer; most had late-stage disease. Monitoring changes in the types of breast disease and cancer stages seen in Rwanda will be critical as breast cancer awareness and services grow. abstract_id: PUBMED:23250772 Evaluation of the psychological and biological changes of patients diagnosed with benign and malignant breast tumors. Purpose: Posttraumatic stress disorder (PTSD) is a severe anxiety disorder that develops after exposure to any incident or circumstance that results in psychological trauma. In this study we compared the psychological and physiological changes between patients with malignant and benign breast tumors. Methods: We selected 150 Chinese women with a breast mass, aged 20 to 45 years, from the Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital between 2009 and 2011 for this study; 30 healthy participants were enrolled into the control group. All subjects were examined and had their tumor mass aspirated for diagnosis. Equal numbers of patients with benign and malignant tumors were recruited. Patients with malignant tumors presented with low grade, minimal tumor invasion and non-involved lymph nodes. Questionnaires regarding anxiety, depression and PTSD were administered 2 hours before disclosure of the diagnostic result and 1 month after the diagnosis. Serum levels of IL-6, TNF-α, cortisol and high-sensitivity C-reactive protein before and after diagnosis were investigated and compared. The number of occurrences of oral ulcerative mucositis was also recorded. Results: All patients experienced a certain degree of anxiety and their biomarkers were elevated compared with the normal reference range before the pathological report was disclosed. However, 1 month after the operation, the benign tumor group showed significantly lower levels of biomarkers and anxiety scores than patients with a malignant breast tumor. The results were consistent throughout 12 months of study. Conclusion: Study subjects with a benign tumor returned to their normal condition after being diagnosed, while patients with a malignant tumor suffered from a certain degree of PTSD or depression. abstract_id: PUBMED:32062112 Benign phyllodes tumor with hemorrhagic cyst in a 14-year-old girl: A case report. Introduction: Mammary phyllodes tumors account for less than 0.3-0.5% of all breast tumors. Only a few pediatric cases have been reported. Here we report a benign phyllodes tumor that occurred in a 14-year-old girl. Presentation Of Case: A left breast mass with a bleeding cyst of about 9 cm was resected after the tumor had rapidly increased in size over the course of 2 months. Imaging examinations suggested an intracystic tumor.
Histopathological examination, however, revealed a benign phyllodes tumor without cytomorphism or mitotic figures. The tumor was accompanied by partial bleeding, which was considered to be the cause of the rapid increase in size of the hemorrhagic cyst. The patient remains under observation, and local recurrence has not occurred as of 3 years after surgery. Conclusion: Due to the rarity of the condition in children and adolescents, an individualized, case-based approach is recommended. abstract_id: PUBMED:19409529 Long-term follow-up results for ultrasound-guided vacuum-assisted removal of benign palpable breast mass. Background: This study was conducted to evaluate the long-term follow-up results from ultrasound-guided vacuum-assisted removal (US-VAR) of palpable benign breast mass and to identify patient characteristics that are correlated with recurrence or residual lesions. Methods: US-VAR was performed on 95 benign, palpable breast masses that underwent subsequent intervention or at least a 2-year follow-up. During the follow-up period, we reviewed the lesion and patient's characteristics, and then determined whether the presence of recurrence or residual lesions was associated with any of the characteristics. Results: Six lesions (6.3%) underwent subsequent intervention due to the recurrence of palpability. The remaining 89 lesions underwent imaging follow-up (range, 24 to 60 months; mean, 35 months). Among these lesions, 32.6% (31 of 95 masses) showed sonographically visible, nonpalpable residual lesions. The remaining 61.1% (58 masses) showed no evidence of residual lesion. The initial size of the lesion at VAR was the only characteristic correlated with recurrence (P = .017; odds ratio, 1.238). Conclusion: Our long-term follow-up results show that US-VAR may be a useful alternative to surgical excision in the management of palpable breast masses. Answer: The available abstracts suggest that there is a high negative predictive value (NPV) for sonography in the evaluation of probably benign breast masses, regardless of palpability. According to the study reported in PUBMED:18855166, the NPV of sonography for palpable breast masses with probably benign morphology was found to be high (99.4%). This indicates that short-term imaging follow-up can be an acceptable alternative to immediate biopsy, similar to the management of nonpalpable probably benign lesions (BI-RADS category 3). However, the abstracts do not provide a direct comparison of cancer rates between palpable and nonpalpable probably benign breast masses diagnosed by sonography. They do emphasize the overall high accuracy of sonography in diagnosing probably benign breast masses and suggest that follow-up rather than immediate biopsy might be a reasonable approach in certain cases (PUBMED:18855166, PUBMED:32870394). It is important to note that while sonography has a high NPV, there is still a small risk of malignancy. For instance, in the study with 146 category 3 lesions 2 cm or larger, the probability of lesions being benign was 91.8%, with a small percentage being borderline (6.8%) or malignant (1.4%) (PUBMED:25038810). This indicates that while the majority of probably benign breast masses diagnosed by sonography are indeed benign, there is still a need for careful follow-up and consideration of biopsy in certain cases, especially when masses show growth on follow-up ultrasonography. 
In summary, the abstracts do not provide a direct comparison of cancer rates between palpable and nonpalpable probably benign breast masses diagnosed by sonography, but they do suggest that sonography has a high NPV for probably benign breast masses in general. The decision to defer biopsy in favor of follow-up should be made on a case-by-case basis, taking into account the specific characteristics of the mass and the patient's clinical context.
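As a worked check of the figures quoted in this answer, the snippet below recomputes the negative predictive values directly from the counts reported in the abstracts. Only the counts come from the source; the helper function itself is an illustrative sketch.

```python
# Worked check of the negative predictive values quoted above; the counts
# are taken directly from the abstracts, the helper is illustrative.

def npv(true_negatives, false_negatives):
    """NPV = true negatives / all negative test results."""
    return true_negatives / (true_negatives + false_negatives)

# PUBMED:18855166: 312 palpable probably benign masses, 310 benign, 2 malignant
print(f"Sonography NPV: {npv(310, 2):.1%}")  # -> 99.4%

# PUBMED:25038810: 146 category 3 lesions, 2 malignant phyllodes tumors
print(f"Core biopsy accuracy in excluding malignancy: {npv(144, 2):.1%}")  # -> 98.6%
```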
Instruction: Weight-bearing radiographs in thoracolumbar fractures: do they influence management? Abstracts: abstract_id: PUBMED:15129074 Weight-bearing radiographs in thoracolumbar fractures: do they influence management? Study Design: Prospective observational study. Objective: Our objective was to compare supine and erect (weight-bearing) radiographs in patients with thoracolumbar fractures without a neurologic deficit and to determine whether the erect radiographs alter the deformity and the management plan. Summary Of Background Data: Nonoperative treatment for thoracolumbar fracture without a neurologic deficit is safe and effective. There are some guidelines in the literature that provide objective standards to identify the patients that are suitable for nonoperative treatment. These guidelines are based on measurements on supine radiographs. The role of weight-bearing radiographs in influencing the management plan of these injuries has not been explored. Methods: Fractures between T11 and L2 in 28 patients were considered suitable for nonoperative treatment initially. Radiographic measurements included anterior and posterior vertebral body heights, interpedicular distance, and the Cobb angle on the supine and erect radiographs. A change in the treatment from the initial nonoperative management plan, based on the radiographic findings, was recorded. Results: The mean supine Cobb angle of 11 degrees increased to 18 degrees on weight-bearing films. The mean anterior vertebral compression increased from 34% to 46%. No change was noted in the posterior vertebral heights or the interpedicular distance. Seven of the 28 patients were subjected to surgical stabilization based on these findings. Conclusion: Performing erect radiographs in patients with thoracolumbar fractures without a neurologic deficit provides additional information and did alter the management plan in a significant proportion (25%) of our patients. abstract_id: PUBMED:38430387 Variability in amount of weight-bearing while performing weight-bearing radiographs for assessing stability of ankle fractures. Purpose: Weight-bearing (WB) radiographs are commonly used to judge the stability of type B fibula fractures and guide the choice of treatment. Stable fractures can be treated conservatively, and unstable fractures surgically. This raises the question of how much weight patients actually put on the fractured ankle while a WB radiograph is taken. The current study gives insight into the actual amount of WB during WB radiographs. Methods: In this retrospective cohort study, 57 patients with a type B fibula fracture with a medial clear space (MCS) < 6 mm on the regular mortise (RM) view who underwent a WB radiograph were included. We designed a ramp with a scale in the plateau where the radiographs were taken. Total body weight (TBW) and the amount of WB on the fractured limb were measured. Results: The mean WB on the fractured limb was 49 (13-110) kg and the mean TBW was 79 (45-128) kg, yielding a mean WB percentage of 63%. The mean MCS on the RM radiograph was 3.0 mm, compared to 2.9 mm on the WB radiograph. The mean superior clear space (SCS) was 3.2 mm on the RM view, compared to 3.2 mm on the WB radiograph as well. The average fibular dislocation was 1.5 mm on the RM radiograph, compared to 1.6 mm on the WB radiograph. Conclusion: There is large variability in the amount of weight-bearing on the ankle when a WB radiograph is made. This is important to keep in mind when assessing the radiographs and deciding on the treatment course.
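The per-patient calculation behind the 63% figure in the preceding abstract is simple arithmetic, sketched below. The individual patient values are hypothetical; only the formula (weight on the fractured limb divided by total body weight) and the reported ranges come from the abstract. Note that the mean of the per-patient percentages need not equal the ratio of the group means (49/79 ≈ 62%), which is consistent with the reported mean of 63%.

```python
# Per-patient weight-bearing percentage, as measured in PUBMED:38430387.
# Patient values below are hypothetical; only the formula and the reported
# ranges (WB 13-110 kg, TBW 45-128 kg) come from the abstract.

patients = [
    {"wb_kg": 49.0, "tbw_kg": 79.0},    # near the reported group means
    {"wb_kg": 13.0, "tbw_kg": 45.0},    # low end of the reported ranges
    {"wb_kg": 110.0, "tbw_kg": 128.0},  # high end of the reported ranges
]

percentages = [100 * p["wb_kg"] / p["tbw_kg"] for p in patients]
for p, pct in zip(patients, percentages):
    print(f"WB {p['wb_kg']:.0f} kg / TBW {p['tbw_kg']:.0f} kg -> {pct:.0f}%")

# Mean of per-patient percentages (the quantity the study reports)
print(f"mean per-patient WB: {sum(percentages) / len(percentages):.0f}%")
```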
abstract_id: PUBMED:30780193 Utility of weight-bearing radiographs compared to computed tomography scan for the diagnosis of subtle Lisfranc injuries in the emergency setting. Objective: The goal of the present study was to compare the diagnostic yield of weight-bearing radiographs with non-weight-bearing computed tomography (CT) scan for subtle Lisfranc (LF) injuries in the ED. Methods: Retrospective analysis of patients presenting with a suspected LF injury over a 2-year period from 2016 to 2017 who had both weight-bearing bilateral foot radiographs and CT scan. Information extracted included patient demographics, radiographic findings of metatarsal fractures, fleck sign and measurements of diastasis between the medial cuneiform and second metatarsal expressed as a ratio of the uninjured side, clinical findings and treatment modality. Radiologist reports were recorded as either positive, negative or equivocal for an LF injury. Results: A total of 117 patients were included with a mean age of 38 years and 54% (n = 63) women. When initial weight-bearing radiographs were positive, 54% of subsequent CT scans were reported as either equivocal or negative. Of the patients who had a negative or equivocal weight-bearing radiograph, only 12% had a positive CT scan. The mean diastasis ratio in patients undergoing surgery was 1.53 (95% confidence interval 1.41-1.65) compared to 1.11 (95% confidence interval 1.07-1.16) for patients with injuries not requiring operative intervention (P < 0.001). There was a statistically significant correlation between a positive weight-bearing radiograph and surgical treatment (R = 0.339, P < 0.001), and between diastasis measurements and surgical treatment (R = 0.576, P < 0.001). Conclusions: CT provides limited benefit in the diagnosis and initial management of suspected subtle LF injuries in the ED. We advocate for the use of bilateral weight-bearing radiographs as a first-line investigation. abstract_id: PUBMED:25298555 Acetabular roof arc angles and anatomic biomechanical superior acetabular weight bearing area. Background: In acetabular fractures, involvement of the superior articular weight-bearing area and the stability of the hip are assessed by the medial, anterior and posterior acetabular roof arc angles. Previous clinical, biomechanical and anatomic studies of the superior articular surface of the acetabulum have reported different values for these angles. The anatomic biomechanical superior acetabular weight bearing area (ABSAWBA) of the femoral head can be identified as radiographic subchondral bone density at the superior acetabular dome. A fracture that passes through the ABSAWBA creates traumatic hip arthritis. Therefore, the acetabular roof arc angles of the ABSAWBA were studied in order to determine whether the acetabular roof arc angles recommended in previous studies avoid ABSAWBA involvement. Methods: The ABSAWBA of the femoral head was identified; 68 acetabular fractures and 13 isolated pelvic fractures without unstable pelvic ring injury were enrolled. Acetabular roof arc angles were measured on anteroposterior, obturator and iliac oblique view radiographs of the normal contralateral acetabulum using a programmatic automation controller digital system and measurement tools. Results: Average medial, anterior and posterior acetabular roof arc angles of the ABSAWBA of 94 normal acetabula were 39.09 (7.41), 42.49 (8.15) and 55.26 (10.08) degrees, respectively.
Conclusions: Medial, anterior and posterior acetabular roof arc angles of less than 39°, 42° and 55°, respectively, involve the ABSAWBA of the femoral head. Application of the study results showed that the 45°, 45° and 62° values from the previous studies are the most appropriate medial, anterior and posterior acetabular roof arc angles without involvement of the ABSAWBA, respectively. abstract_id: PUBMED:38290382 The additional value of weight-bearing radiographs in assessing stability of isolated type B fibular fractures. Introduction: In isolated type B fibular fractures, the decision whether to operate or treat conservatively is principally based on congruency and stability of the ankle joint. The purpose of the current study is to examine the additional diagnostic value of the weight-bearing radiograph (WBR) in assessing stability of potentially unstable type B fibular fractures. Materials And Methods: In this retrospective cohort study, patients who presented with an isolated type B fracture of the fibula were selected. Unstable fractures were directly planned for surgery. Other patients underwent additional weight-bearing imaging 4-7 days after initial trauma, on which definitive treatment was based. The medial clear space (MCS) was compared between the WBR and the conventional radiograph. Results: A total of 70 patients were included in the period January 2018-December 2021. The average MCS on the conventional mortise radiograph was 3.56 mm compared to 3.02 mm on the WBR (p<0.05). The superior clear space (SCS) was equal between both groups (respectively 3.12 mm and 3.08 mm, p = 0.44). All 70 patients were initially treated conservatively with immediate weight-bearing in a brace or soft cast. One patient required open repair and internal fixation due to non-union. Discussion: Different techniques for the assessment of deep deltoid ligament integrity exist, such as MRI, ultrasonography and different kinds of stress test radiographs. As fractured ankles do not need immediate fixation and operative treatment can be postponed, a stepped approach to the assessment of deep deltoid ligament (DDL) integrity can be of assistance in assessing ankle stability. In isolated type B fibular fractures, a stepped approach using the weight-bearing radiograph can be of additional value in assessing stability. It is low cost, prevents unnecessary surgery and contributes to quick weight-bearing non-operative treatment. abstract_id: PUBMED:34331545 The additional value of weight-bearing and gravity stress ankle radiographs in determining stability of isolated type B ankle fractures. Goal: The aim of this study is to investigate whether weight-bearing and gravity stress radiographs have additional value in predicting concomitant deep deltoid ligamentous (DDL) injury in cases of isolated Weber type B fibular fractures. This may help to make the clinically relevant distinction between unstable fractures and fractures that can be treated conservatively. Methods: In this prospective cohort study, 90 patients with an isolated type B ankle fracture, without a medial or posterior fracture, and a medial clear space (MCS) < 6 mm on the regular mortise (RM) view were included. In all patients, an additional gravity stress (GS) view and an MRI scan were performed. Furthermore, in 51 patients, an additional weight-bearing (WB) radiograph was performed.
The MCS and superior clear space (SCS) measurements of these radiographs were compared with MRI findings to measure sensitivity and specificity in excluding deep deltoid ligament (DDL) rupture. Results: The mean MCS on the RM view was 3.32 mm (1.73-5.93) compared to 4.75 mm (2.33-10.40) on the GS view and 3.18 mm (1.93-6.9) on the WB radiograph. MRI showed a high-grade or complete deltoid ligament tear in 25 (28%) patients. Using an MCS cut-off value of ≥ SCS + 2 mm, the RM view showed 0% sensitivity and 97% specificity in diagnosing a DDL rupture. Both the GS view (with MCS ≥ SCS + 3 mm as cut-off value) and the WB radiograph (with cut-off value MCS ≥ SCS + 2 mm) showed 6% sensitivity and 100% specificity. Conclusion: The gravity stress and weight-bearing radiograph can accurately exclude DDL injury. They might have extra value in addition to the conventional mortise view in assessing the stability of isolated type B ankle fractures. This helps in deciding whether patients should be selected for operative or safe conservative treatment. abstract_id: PUBMED:29102370 Characteristics of postoperative weight bearing and management protocols for tibial plateau fractures: Findings from a scoping review. Objective: To identify and describe the characteristics of existing practices for postoperative weight bearing and management of tibial plateau fractures (TPFs), identify gaps in the literature, and inform the design of future research. Methods: Seven electronic databases and clinical trial registers were searched from inception until November 17th 2016. Studies were included if they reported on the surgical management of TPFs, had a mean follow-up time of ≥1 year and provided data on postoperative management protocols. Data were extracted and synthesized according to study demographics, patient characteristics and postoperative management (weight bearing regimes, immobilisation devices, exercises and complications). Results: 124 studies were included involving 5156 patients with TPFs. The mean age across studies was 45.1 years (range 20.8-72; 60% male), with a mean follow-up of 34.9 months (range 12-264). The most frequent fracture types were AO/OTA classification 41-B3 (29.5%) and C3 (25%). The most commonly reported non-weight bearing time after surgery was 4-6 weeks (39% of studies), with a further 4-6 weeks of partial weight bearing (51% of studies), resulting in 9-12 weeks before full weight bearing status was recommended (55% of studies). Loading recommendations for initial weight bearing were most commonly toe-touch/<10 kg (28%), 10-20 kg (33%) and progressive (39%). Time to full weight bearing was positively correlated with the proportion of fractures of AO/OTA type C (r=0.465, p=0.029) and Schatzker type IV-VI (r=0.614, p<0.001). Similar rates of rigid (47%) and hinged (58%) braces were reported, most frequently for 3-6 weeks (43% of studies). Complication rates averaged 2% of patients (range 0-26%) for abnormal varus/valgus and 1% (range 0-22%) for non-union or delayed union. Conclusions: Postoperative rehabilitation for TPFs most commonly involves significant non-weight bearing time before full weight bearing is recommended at 9-12 weeks. Partial weight bearing protocols and brace use were varied. Type of rehabilitation may be an important factor influencing recovery, with future high-quality prospective studies required to determine the impact of different protocols on clinical and radiological outcomes.
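The view-specific cut-off rule reported in the gravity stress/weight-bearing study above (PUBMED:34331545) can be expressed as a small decision function, sketched below. The margins (2 mm for the mortise and weight-bearing views, 3 mm for the gravity stress view) come from that abstract; the dictionary keys, function name, and example measurements are hypothetical choices for illustration.

```python
# View-specific cut-off rule from PUBMED:34331545: suspect a deep deltoid
# ligament (DDL) rupture when MCS >= SCS + margin. Margins come from the
# abstract; the example measurements are hypothetical.

CUTOFF_MARGIN_MM = {"mortise": 2.0, "gravity_stress": 3.0, "weight_bearing": 2.0}

def ddl_rupture_suspected(view, mcs_mm, scs_mm):
    """Apply the rule MCS >= SCS + view-specific margin."""
    return mcs_mm >= scs_mm + CUTOFF_MARGIN_MM[view]

# Hypothetical patient: MCS 4.8 mm and SCS 3.1 mm on the gravity stress view
print(ddl_rupture_suspected("gravity_stress", 4.8, 3.1))  # False: 4.8 < 6.1
```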
abstract_id: PUBMED:29312872 Weight bearing cone beam CT scan versus gravity stress radiography for analysis of supination external rotation injuries of the ankle. For AO 44-B2 ankle fractures of uncertain stability, the current diagnostic standard is to obtain a gravity stress radiograph, but some have advocated for the use of weight-bearing radiographs. The primary aim was to compare measures of medial clear space (MCS) on weight-bearing cone beam computed tomography (CBCT) scans versus gravity stress radiographs for determining the state of stability of ankle fractures classified as AO SER 44-B2 or Weber B. The secondary aim was to evaluate the details offered by CBCT scans with respect to other findings that may be relevant to patient care. Nine patients were enrolled in this cross-sectional study between April 2016 and February 2017 if they had an AO SER 44-B2 fracture of uncertain stability, had a gravity stress radiograph, and were able to undergo CT scan within seven days. The width of the MCS was measured at the level of the talar dome on all radiographs and at the mid-coronal slice on CT. Wilcoxon signed-ranks tests were used to compare MCS between initial radiographs, gravity stress radiographs and weight-bearing CBCT scans. MCS on weight-bearing CBCT scans (1.41±0.41 mm) was significantly less than on standard radiographs (3.28±1.63 mm, P=0.004) and gravity stress radiographs (5.82±1.93 mm, P=0.02). There was no statistically significant difference in MCS measured on standard radiographs versus gravity stress radiographs (P=0.11). Detailed review of the multiplanar CT images revealed less than perfect anatomical reduction of the fractures, with residual fibular shortening, posterior displacement, and fracture fragments in the incisura as typical findings. Similar to weight-bearing radiographs, weight-bearing CBCT scans can predict stability of AO 44-B2 ankle fractures by showing restoration of the MCS, and might be used to indicate patients for non-operative treatment. None of the fractures imaged in this study were perfectly reduced, however, and further clinical research is necessary to determine if any of the detailed weight-bearing CBCT findings are related to patient outcomes. abstract_id: PUBMED:33993209 Optimal management of older people with frailty non-weight bearing after lower limb fracture: a scoping review. Background: Patients with lower limb fractures who are non-weight bearing are at risk of the complications of the associated immobility and disability, particularly people with frailty, but there is a lack of clarity about what constitutes optimal care for such patients. A scoping literature review was conducted to explore what evidence is available for the management of this patient group. Methods: MEDLINE (PubMed), CINAHL, EMBASE and the Cochrane databases of published literature and the HMIC and SIGLE sites for grey literature were searched for primary research studies and expert reports, using an iterative approach initially including the key term 'non-weight bearing'. All study types were included. Analysis was by narrative synthesis. Results: No papers were identified from a search using the key phrase 'non-weight bearing'. With this term removed, 11 indirectly relevant articles on lower limb fractures were retrieved from the searches of the electronic databases, comprising three observational studies, five non-systematic review articles, a systematic review, an opinion piece and a survey of expert opinion that had relevance to restricted weight bearing patients.
The observational studies indicated that depression, cognition and nutrition affect outcome and hence have indirect relevance to management. The non-systematic review articles emphasised the importance of maintaining strength and range of movement during immobilisation and advised an orthogeriatric model of care. Fourteen UK and 97 non-UK guidelines relevant to fragility fractures, falls and osteoporosis management were found in the grey literature, but none made specific recommendations regarding the management of any period of non-weight bearing. Discussion: These findings provide a summary of the evidence base that can be used in the development of a clinical guideline for these patients, but it is not sufficient. We propose that a guideline should be developed for these patients using an expert consensus process. abstract_id: PUBMED:35219132 Conservative management of Jones fractures with immediate weight-bearing in a walking boot demonstrates healing. Jones fractures, or proximal metatarsal fractures at the level of the fourth and fifth intermetatarsal junction, have a high risk for nonunion due to a vascular watershed region. Classically, treatment consists of weight bearing restrictions in a cast or surgical fixation. Some studies have assessed immediate weight bearing following a Jones fracture. Due to conflicting results, the most appropriate treatment method remains unclear. This study analyzes outcomes after treating adults with acute Jones fractures non-operatively without weight bearing restrictions in a walking boot. This study hypothesizes that patients will not require future operative intervention following functional treatment. A retrospective review of 55 adult patients who sustained acute, closed Jones fractures was conducted. 47 were treated weight bearing as tolerated (WBAT) in a walking boot and eight were treated non-weight bearing (NWB) in a cast. They were followed radiographically by an orthopedic surgeon for an average of 6.4 and 15.5 months, respectively. Three patients in each group (6.4% WBAT, 37.5% NWB) developed painful nonunion leading to surgical fixation. Thirty (66.7%) patients in the WBAT group demonstrated radiographic union on final radiographs. Only two (13.3%) of the 15 patients with partial union were seen at least six months from the time of injury, one of whom had ongoing pain but declined surgery. The remaining 13 patients were asymptomatic at their final clinic appointment. Controversy still exists as to the best treatment methodology for acute Jones fractures. Due to a lack of clear guidelines, it can be difficult for the multiple medical specialties involved to evaluate and treat this injury. Our study suggests that non-operative management of minimally displaced Jones fractures, in the adult, low-demand population, without weight bearing restrictions in a walking boot offers similar outcomes to cast immobilization with weight bearing restrictions, resulting in bony union or asymptomatic fibrous nonunion. Answer: Yes, weight-bearing radiographs in thoracolumbar fractures do influence management. A prospective observational study found that erect (weight-bearing) radiographs provided additional information compared to supine radiographs in patients with thoracolumbar fractures without a neurologic deficit. The study observed that the mean supine Cobb angle of 11 degrees increased to 18 degrees on weight-bearing films, and the mean anterior vertebral compression increased from 34% to 46%.
No change was noted in the posterior vertebral heights or the interpedicular distance. Based on these findings, 7 out of the 28 patients initially considered suitable for nonoperative treatment were subjected to surgical stabilization. This indicates that performing erect radiographs can alter the management plan in a significant proportion (25%) of patients with thoracolumbar fractures without a neurologic deficit (PUBMED:15129074).
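A minimal sketch of how erect measurements could trigger a change of plan is given below. The 30-degree kyphosis and 50% anterior height-loss thresholds are commonly cited instability criteria used here purely for illustration; the study itself does not state which cutoffs drove the seven conversions to surgery, so both the thresholds and the outlier patient are assumptions.

```python
# Illustrative flagging of erect (weight-bearing) measurements against
# commonly cited instability thresholds (kyphosis > 30 degrees, anterior
# height loss > 50%). These cutoffs are assumptions for illustration; the
# study does not specify which criteria prompted surgical conversion.

def exceeds_thresholds(cobb_deg, compression_pct, cobb_max=30, compression_max=50):
    return cobb_deg > cobb_max or compression_pct > compression_max

measurements = {
    "supine mean": (11, 34),    # values reported in the abstract
    "erect mean": (18, 46),     # values reported in the abstract
    "erect outlier": (32, 55),  # hypothetical patient crossing both cutoffs
}
for label, (cobb, compression) in measurements.items():
    print(f"{label}: Cobb {cobb} deg, compression {compression}% -> "
          f"flagged: {exceeds_thresholds(cobb, compression)}")
```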
Instruction: Is total knee replacement more painful than total hip replacement? Abstracts: abstract_id: PUBMED:30362363 Smartphone apps for total hip replacement and total knee replacement surgery patients: a systematic review. Purpose: The aim of this study was to critically examine the content of smartphone apps produced for patients undergoing total hip replacement and total knee replacement surgery. Materials and methods: A systematic search was conducted across the five most popular smartphone app stores: iTunes; Google Play; Windows Mobile; Blackberry App World; and Nokia Ovi. Apps were identified for screening if they: targeted total hip replacement or total knee replacement patients; were free of charge; and were in English. App quality assessment was conducted independently by three reviewers using the Mobile App Rating Scale. Results: 15 apps met the inclusion criteria. On the 5-point Mobile App Rating Scale, the mean overall app quality score was 3.1. Within the Mobile App Rating Scale, the "Aesthetics" subscale had the most inconsistency across all apps, generating the highest and lowest mean scores (4.7 and 1, respectively). The "functionality" subscale had the highest mean score (3.8) among the four subscales, and the "information" subscale had the lowest mean score (2.7). Conclusion: This study highlights that despite a wide range of apps currently available to total hip replacement and total knee replacement patients, there is significant variability in their quality. Future efforts should be made to develop apps in partnership with patients, to improve the content, interactivity and relevance of apps. Implications for Rehabilitation: Clinicians and rehabilitation professionals should consider that patients are using smartphone apps to supplement their rehabilitation and be aware of currently available apps. Although some apps show promise, at present there is no openly available app which is clinically recommended for rehabilitation use following total hip replacement and total knee replacement. Rehabilitation professionals tailor recovery programmes to individual patient needs following total hip replacement and total knee replacement, and future apps should also consider personal needs. We recommend that patients are involved in all stages of designing and testing future total hip replacement and total knee replacement rehabilitation apps. abstract_id: PUBMED:34659763 Safety of apixaban in Indian patients undergoing elective total knee replacement or total hip replacement surgery: A multi-center, phase-IV study. Background: Venous thromboembolism is a significant source of morbidity and mortality following total hip replacement and total knee replacement. Apixaban has been proven to be efficacious without increased risk of bleeding in phase-III trials in patients undergoing total knee replacement and total hip replacement. Due to the paucity of data on the safety of apixaban in Indian patients, this phase-IV study was conducted to evaluate the safety of apixaban in patients undergoing total knee replacement and total hip replacement. Methods: In this non-comparative phase-IV clinical trial, patients undergoing elective total knee replacement or total hip replacement surgery, or a revision of at least one component of total knee replacement or total hip replacement, were enrolled. The eligible patients were given the approved dosage of apixaban 12 to 24 h after completion of skin wound closure.
The primary safety outcome was the composite of the International Society on Thrombosis and Haemostasis-defined major bleeding and clinically relevant non-major bleeding events at the end of the treatment. The secondary efficacy endpoint was the composite of venous thromboembolism/all-cause death at the end of the treatment. Results: A total of 498 patients received apixaban prophylaxis therapy. Six (1.2%) bleeding adverse events were observed during the treatment period. Only one bleeding event was adjudicated as an International Society on Thrombosis and Haemostasis-defined clinically relevant non-major bleeding event (moderate severity). There were no fatal bleeding events and no deaths following the treatment. One venous thromboembolism event, that is, symptomatic distal left leg DVT, was reported in a total knee replacement patient and was adjudicated during the treatment period. Conclusion: Apixaban demonstrated a favorable safety profile for venous thromboembolism prevention in Indian patients undergoing total knee replacement or total hip replacement. abstract_id: PUBMED:31442079 Total hip and knee replacement and return to work: clinicians' perspectives. Purpose: An ageing workforce means that our understanding of return to work following total hip and knee replacement is of increasing importance. The purpose of this qualitative study was to explore the views and experiences of clinicians in treating working patients undergoing total hip or knee replacement. Materials And Methods: We conducted semi-structured interviews in primary and secondary National Health Service care using framework methodology. A total of 40 interviews were conducted. Participants included 12 hospital-based Allied Health Professionals and nurses, 12 orthopedic surgeons, and 16 General Practitioners. Data were analyzed thematically. Results: A key theme concerned participants' perceptions and experiences around the process and practice of listing working patients for total hip or knee replacement. Four sub-themes were identified: the perceived likelihood of listing employed patients for surgery, expectations and outcomes of surgery, the impact of work issues, and referral procedures and waiting lists. Conclusions: Decisions around listing working patients for total hip and knee replacement are complex and difficult. Clinicians need to consistently consider patients' work issues, and to be supported in this by appropriate commissioning and service delivery decisions. Further research is indicated to better understand the work-related expectations and anticipated outcomes of both patients and clinicians, and the optimum timing of surgery to maintain and improve patients' work performance. Implications for rehabilitation: Clinicians need to consistently consider patients' work issues, and current evidence, in their consultations and decisions regarding total hip and knee replacement. Clinical practice should reflect the growing proportion of working patients undergoing total hip and knee replacement, and routinely measure work outcomes. Changes are required at commissioning and service levels to support clinicians in changing their practice with this patient population.
Objective: To assess the readability of the information provided within total hip replacement and total knee replacement apps to understand more about the impact this could have on patients. Method: A systematic search was conducted across the 5 most popular smartphone app stores: iTunes, Google Play, Windows Mobile, Blackberry App World, and Nokia Ovi. Apps were identified for screening if they: targeted total hip replacement or total knee replacement patients; were free of charge; and were in English. App readability assessment was conducted independently by 3 reviewers using the Gunning Fog Index, the Flesch Reading Ease Score, and the Flesch-Kincaid Grade Level. Results: Fifteen apps met the inclusion criteria. Only one app was found "easy to read" (My THR). Conclusion: Findings suggest that the information provided is written at a level which is difficult for patients to comprehend. App developers should engage patients in the design process of their apps, in order to enhance patient experience and for the potential impact of these innovative health technologies to be truly realized. abstract_id: PUBMED:26712651 Impact of Tranexamic Acid in Total Knee and Total Hip Replacement. Objective: To evaluate the net clinical benefit of tranexamic acid use in patients undergoing total knee or total hip replacement. Methods: This is a retrospective study of patients undergoing total knee or total hip replacement. The primary outcome was the net clinical benefit of tranexamic acid use. Secondary outcomes included length of stay, incidence of venous thromboembolism, change in hemoglobin, and number of units of blood transfused. Results: Four hundred and six patients were screened for inclusion and 327 patients met inclusion criteria; 174 patients received tranexamic acid versus 153 patients who received usual care. Tranexamic acid demonstrated a positive net clinical benefit versus usual care (40.8% vs 13.7%, P < .01) but did not affect length of stay (3.39 vs 3.37 days, respectively, P = .76). Venous thromboembolism was comparable between groups (2.3% vs 0.7%, P = .38). Average change in hemoglobin and need for transfusion were lower in the treatment group versus the usual care group, respectively (3.46 vs 4.26 mg/dL, P < .01). Conclusion: Tranexamic acid demonstrated a significant benefit in decreasing change in hemoglobin as well as the need for blood transfusion, with no increase in the risk of venous thromboembolism in patients undergoing total knee or total hip replacement. abstract_id: PUBMED:34932241 Impact of a new post-operative care model in a rural hospital after total hip replacement and total knee replacement. Problem: The predicted global trend of increasing total hip replacement and total knee replacement numbers leads to a direct and growing impact on health care services. Models of care including 'fast-track' mobilisation after total hip replacement and total knee replacement have been reported to reduce length of stay. This has not been verified in rural settings. Setting: Armidale Rural Referral Hospital. Key Measures For Improvement: The new post-operative care model included early discharge planning, with or without Day 0 mobilisation, with the aim of decreasing hospital length of stay without affecting complication rates, compared to the conventional model of care. Strategies For Change: Consistent communication and planning for early discharge occurred before and throughout admission, together with Day 0 mobilisation.
Effects Of Change: There was a significantly lower median length of stay following implementation of the new post-operative care model (3.24 vs 2.29 days [P < .01]). There was no statistically significant difference in complications or readmissions following the change. Those who were allocated to mobilise on Day 0 had a lower median length of stay than those who did not (2.40 vs 2.27 days, P = .03). Lessons Learnt: Our results indicate that the new post-operative care model is safe and feasible for total knee replacement or total hip replacement patients in a rural setting and might reduce length of stay without compromising clinical outcomes. abstract_id: PUBMED:34574900 Rehabilitation Process Issues and Functional Performance after Total Hip and Knee Replacement. (1) Background: to ensure satisfactory outcomes in patients who have undergone total hip or knee replacement, it is crucial to prioritize the postoperative rehabilitation process and its management rather than the successful surgery alone. The goal of our study was to investigate the outcomes of the rehabilitation process after total hip or knee replacement, including local orthopedic follow-up, communication with local health authorities, patients' satisfaction regarding rehabilitation, and their functional performance after the surgery. (2) Methods: the study included 523 patients who underwent total hip replacement, and 650 patients who underwent total knee replacement. All patients were surveyed via formalized phone interviews containing questions related to postoperative rehabilitation parameters and outcomes. (3) Results: in the postsurgical period, nearly 70% of patients had regular local orthopedic follow-up. Lack of the latter at the local level was indicated by approximately 10% of patients, and the rest of the respondents noted the practice of sporadic follow-up. Half of the patients reported pain of varying severity. Good physical activity (e.g., ability to walk unassisted within their residential district) was acknowledged by about a quarter of patients. Ability to walk unassisted was reported by just 54.5% of patients. (4) Conclusions: the disproportion between generally high patient satisfaction with rehabilitation management (>80%) and the low feedback level on the part of local health authorities (9.4%) demonstrated a lack of communication between the key parties involved in the rehabilitation process. abstract_id: PUBMED:24482624 Coupling device and distal femoral replacement for periprosthetic supracondylar femur fractures with an ipsilateral total knee and hip replacement. Background: Periprosthetic supracondylar fractures in the setting of an ipsilateral total knee and hip replacement are a rare but serious orthopedic challenge. Description Of Technique: In this unique situation, we consider insertion of a custom coupling device with a distal femoral replacement an appropriate surgical solution that allows for stability and preserves hip and knee functionality, while not relying on the poor healing potential of the remaining intercalary femoral bone. Patients And Methods: We report the outcomes of two patients who underwent this custom coupling procedure. Results: These custom devices resulted in successful outcomes with restoration of ambulatory function and range of motion at greater than 14 months' follow-up.
The results indicate that this technique may be considered as an alternative to more invasive procedures including total femoral replacement and complete femoral allograft constructs. abstract_id: PUBMED:11125689 Is total knee replacement more painful than total hip replacement? Background: During their use in pain management, patient-controlled analgesia (PCA) devices are capable of registering the course of treatment at patient request, the conditions of drug delivery, and the total amount of drug given. Patients can determine their need for medication to their own satisfaction, treatment forced on them by the bias of the health care personnel is avoided, and patient safety is further warranted. In pain relief with this device, the number of requests for analgesia and the dose of analgesic used can be easily measured. Therefore, it is more objective to compare the pain intensity among different types of operation when a PCA device is used. Using PCA morphine consumption as a parameter, we attempted to elucidate the difference in intensity of pain associated with total hip and total knee replacements by comparing their morphine requirements. Methods: In this prospective cohort study, 50 patients who underwent either total hip replacement (THR, n = 24) or total knee replacement (TKR, n = 26) were enrolled. After recovery from general anesthesia, when the patients first complained of intense pain in the recovery room, morphine was given intravenously in titration with a calculated loading dose over 30 min to achieve acceptable analgesia (VAS ≤ 3), followed by morphine PCA at a 1 mg bolus with a lockout interval of 6 min. The patients were then followed for 48 h. During and at the end of the course, the data relevant to pain score, total dose, demand, delivery, and adverse effects were recorded for assessment. Results: With the use of PCA, the pain scores were similar in both surgical groups over the 48 h observation. Total consumption of morphine in THR was 13.2 ± 8.1 mg as against 19.7 ± 5.7 mg in TKR on postoperative day 1, and 25.2 ± 12.7 mg as against 34.1 ± 13.9 mg on postoperative day 2 (P < 0.05, t-test). The demand/delivery ratio was not statistically different between the 2 groups at 24 and 48 h (t-test). Minor adverse effects were seen in both groups but the differences were not significant. Conclusions: Using PCA morphine consumption as a parameter, we can distinguish the magnitude of pain intensity between 2 major orthopedic surgeries. The deeper and more extensive operation wound in total hip replacement does not mean that it is a more painful procedure than total knee replacement. Several speculations are proposed. abstract_id: PUBMED:38274925 Current Evidence and Expert Opinion on Thromboprophylaxis After Total Knee and Hip Replacement. An effective anticoagulant provides a balance between the risk for venous thromboembolism (VTE) and bleeding and is crucial in achieving optimal clinical outcomes in patients undergoing total hip replacement (THR) and total knee replacement (TKR) surgeries. We performed a review of the literature on thromboprophylaxis for patients undergoing total hip or knee replacement. This review article summarizes current guidelines and evidence for anticoagulation along with the expert opinion about pharmacological VTE prophylaxis, particularly non-Vitamin K antagonist oral anticoagulants (NOACs), for patients after total hip or knee replacement.
Aspirin for VTE prophylaxis after TKR/THR has been controversial, and most of the evidence comes from observational research. Although the guidelines do not recommend any specific thromboprophylaxis agent, available evidence suggests that NOACs are as effective as low molecular weight heparins (LMWHs) in preventing VTE. Oral administration and the lack of dose monitoring make NOACs easy to use in outpatient settings in cases where treatment compliance is challenging. They can be used for two weeks after TKR and five weeks after THR, or for up to six weeks after either procedure, to cover the at-risk period for VTE post-discharge. Owing to the lack of evidence from head-to-head comparisons of NOACs, an anticoagulant with better efficacy and safety may be suggested in special patient populations (elderly patients, obese patients, or those with renal dysfunction). The expert opinion on pharmacological VTE prophylaxis provided in this article could address some gaps in the management of anticoagulation in patients with total hip or knee replacement. Answer: According to a study that used patient-controlled analgesia (PCA) morphine consumption as a parameter to compare the intensity of pain associated with total hip and total knee replacements, total knee replacement (TKR) was associated with higher morphine consumption than total hip replacement (THR). Specifically, on postoperative day 1, the total consumption of morphine in THR was 13.2 ± 8.1 mg compared to 19.7 ± 5.7 mg in TKR, and on postoperative day 2, it was 25.2 ± 12.7 mg for THR versus 34.1 ± 13.9 mg for TKR. These results suggest that TKR may be associated with greater pain intensity than THR (PUBMED:11125689).
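To make the day-1 comparison in the answer above concrete, the sketch below re-derives a two-sample t-test from the summary statistics reported in PUBMED:11125689 (13.2 ± 8.1 mg, n = 24 THR vs 19.7 ± 5.7 mg, n = 26 TKR). The abstract states only "P < 0.05, t-test", so the exact variant used is an assumption; both the pooled-variance and Welch forms are shown.

```python
# Re-check of the reported day-1 PCA morphine comparison (PUBMED:11125689)
# from summary statistics alone. The paper's exact t-test variant is not
# stated, so both pooled-variance and Welch versions are computed.
from scipy.stats import ttest_ind_from_stats

pooled = ttest_ind_from_stats(mean1=13.2, std1=8.1, nobs1=24,   # THR
                              mean2=19.7, std2=5.7, nobs2=26,   # TKR
                              equal_var=True)
welch = ttest_ind_from_stats(mean1=13.2, std1=8.1, nobs1=24,
                             mean2=19.7, std2=5.7, nobs2=26,
                             equal_var=False)

print(f"pooled t-test: t = {pooled.statistic:.2f}, p = {pooled.pvalue:.4f}")
print(f"Welch t-test:  t = {welch.statistic:.2f}, p = {welch.pvalue:.4f}")
```

Either variant gives |t| ≈ 3.3 with p < 0.01, consistent with the significance level reported in the abstract.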
Instruction: Does parental anxiety cause biases in the processing of child-relevant threat material? Abstracts: abstract_id: PUBMED:23536411 Does parental anxiety cause biases in the processing of child-relevant threat material? Objectives: Anxiety leads to biases in processing personally relevant information. This study set out to examine whether anxious parents also experience biases in processing child-relevant material. Design And Methods: Ninety parents were assigned to a control condition or received a social anxiety or child-related anxiety induction. They completed a task examining attentional biases in relation to child-threat words and social-threat words, and a task examining ability to categorize emotion in children's faces and voices. Results: There was a trend indicating group differences in attentional bias towards social-threat words, and this appeared only in the social anxiety condition, not in the child anxiety or control conditions. For child-threat words, attentional bias was present in the child anxiety condition, but not the social anxiety or control conditions. In the emotion recognition task, there was no difference between the control and child anxiety conditions, but parents in the social anxiety condition were more likely to erroneously label children's faces and voices as sad. Conclusions: Parents' anxious biases may spill over into their child's world. Practitioner Points: Parents' anxious biases may spill over into their child's world. Anxious parents may have attentional biases towards threats in their children's environment. Anxious parents may over-attribute negative emotion to children. abstract_id: PUBMED:34581933 Attention Biases to Threat in Infants and Parents: Links to Parental and Infant Anxiety Dispositions. Parent-to-child transmission of information processing biases to threat is a potential causal mechanism in the family aggregation of anxiety symptoms and traits. This study is the first to investigate the link between infants' and parents' attention bias to dynamic threat-relevant (versus happy) emotional expressions. Moreover, the associations between infant attention and anxiety dispositions in infants and parents were explored. Using a cross-sectional design, we tested 211 infants in three age groups: 5-to-7-month-olds (n = 71), 11-to-13-month-olds (n = 73), and 17-to-19-month-olds (n = 67), and 216 parents (153 mothers). Infant and parental dwell times to angry and fearful versus happy facial expressions were measured via eye-tracking. The parents also reported on their anxiety and stress. Ratings of infant temperamental fear and distress were averaged across both parents. Parents and infants tended to show an attention bias for fearful faces, with marginally longer dwell times to fearful versus happy faces. Parents dwelled longer on angry versus happy faces, whereas infants showed an avoidant pattern with longer dwell times to happy versus angry expressions. There was a significant positive association between infant and parent attention to emotional expressions. Parental anxiety dispositions were not related to their own or their infant's attention bias. No significant link emerged between infants' temperament and attention bias. We conclude that an association between parental and infant attention may already be evident in the early years of life, whereas a link between anxiety dispositions and attention biases may not hold in community samples.
abstract_id: PUBMED:19117552 Do anxious parents' interpretive biases towards threat extend into their child's environment? Anxiety disorders are known to run in families [Turner, S. M., Beidel, D. C., & Costello, A. (1987). Psychopathology in the offspring of anxiety disorder patients. Journal of Consulting and Clinical Psychology, 55(2), 229-235] and environmental factors may largely account for the concordance between parental and child anxieties. Cognitive psychology models emphasise the importance of interpretive biases towards threat in the maintenance of anxiety, and it is well established that anxious adults and children display similar interpretive biases and that these biases in anxious parents and their children are correlated. This raises the question of whether anxious cognitions/cognitive style may be transmitted from parent to child. We propose that this is more likely if anxious parents demonstrate interpretive biases not only about potential threats in their own environment but also about potential threats in their child's environment. Forty parents completed a recognition memory measure of interpretation bias adapted from Eysenck, Mogg, May, Richards, and Mathews (1991) [Bias in interpretation of ambiguous sentences related to threat in anxiety. Journal of Abnormal Psychology, 100(2), 144-150] to measure biases in response to potentially threat-provoking situations involving themselves and their child. The interpretive biases demonstrated by parents were similar across situations involving themselves and their children. As expected, parental interpretive biases were further modified by anxiety, with higher levels of parental anxiety associated with more negative interpretive biases about situations in their own and their child's environment, although this association was significantly stronger for potentially threat-provoking situations in their own environment. These results are consistent with parents' interpretive biases extending beyond their own environment into their child's environment, although future research should continue to consider the mechanisms by which anxious parents may transmit fear cognitions to their children.
abstract_id: PUBMED:35138476 Does irritability predict attention biases toward threat among clinically anxious youth? Aberrant threat processing is a known cognitive characteristic of anxiety disorders and irritability. Youth with more severe symptomatology show greater allocation of attention towards threat relative to neutral stimuli. Although irritability contributes to poorer outcomes among anxious youths, irritability has not been considered as a contributing factor to threat processing in anxiety disorders. Thus, the current study examined the role of irritability in predicting attention biases for threat among clinically anxious youth. Our study included 84 clinically anxious youth (M = 9.31 years old, SD = 2.44) who completed a dot-probe task to determine attention biases. Anxiety disorders were assessed using semi-structured diagnostic interviews. Well validated measures were used to assess the severity of anxiety and irritability symptoms via child- and parent-report, respectively. Findings indicated that more severe irritability predicted greater attention biases toward threat among clinically anxious youth, covarying for age, anxiety severity, and the number of comorbid diagnoses. At a trend-level, anxiety severity also predicted attention bias for threat. Among clinically anxious youth, irritability severity was the strongest predictor of attention bias toward threat. Findings point to the salience of irritability, and to some extent anxiety severity, in relation to threat processing among youth with clinical anxiety disorders. abstract_id: PUBMED:25328258 Parental and Family Factors as Predictors of Threat Bias in Anxious Youth. The present study examined the relative predictive value of parental anxiety, parents' expectation of child threat bias, and family dysfunction on child's threat bias in a clinical sample of anxious youth. Participants (N = 488) were part of the Child/Adolescent Anxiety Multi-modal study (CAMS), ages 7-17 years (M = 10.69; SD = 2.80). Children met diagnostic criteria for generalized anxiety disorder, separation anxiety and/or social phobia. Children and caregivers completed questionnaires assessing child threat bias, child anxiety, parent anxiety and family functioning. Child age, child anxiety, parental anxiety, parents' expectation of child's threat bias and child-reported family dysfunction were significantly associated with child threat bias. Controlling for child's age and anxiety, regression analyses indicated that parents' expectation of child's threat bias and child-reported family dysfunction were significant positive predictors of child's self-reported threat bias. Findings build on previous literature by clarifying parent and family factors that appear to play a role in the development or maintenance of threat bias and may inform etiological models of child anxiety. abstract_id: PUBMED:35218453 Intergenerational Transmission of Anxious Information Processing Biases: An Updated Conceptual Model. Anxiety disorders are globally one of the most prevalent and disabling forms of psychopathology in adults and children. Having a parent with an anxiety disorder multiplies the risk of anxiety disorders in the offspring, although the specific mechanisms and processes that play a role in this intergenerational transmission remain largely unknown. According to information processing theories, threat-related biases in cognitive processing are a causal mechanism in the development and maintenance of anxiety. 
These theories propose that individuals with anxiety are more likely to cognitively process novel stimuli in their environment as threatening. Creswell and colleagues proposed a theoretical model that highlighted the role of these cognitive biases as a mechanism in the intergenerational transmission of anxiety (Creswell et al., in Hadwin, Field (eds) Information processing biases and anxiety: a developmental perspective, Wiley, pp 279-295, 2010). This model postulated significant associations between (1) parents' and children's threat-related cognitive biases, (2) parents' threat-related cognitive biases in their own and their child's environment, (3) parents' threat-related cognitive biases and parenting behaviors that convey anxiety risk to the offspring (e.g., modeling of fear and verbal threat information transmission), and (4) parenting behaviors and child threat-related biases. This theoretical review collated the recent empirical work testing these four core hypotheses of the model. Building on the reviewed empirical work, an updated conceptual model focusing on threat-related attention and interpretation is proposed. This updated model incorporates the links between cognition and anxiety in parents and children and addresses the potential bidirectional nature of parent-child influences. abstract_id: PUBMED:22906122 Maternal anxiety and cognitive biases towards threat in their own and their child's environment. Cognitive biases are known to play an important role in anxiety. In this study we investigate whether maternal anxiety is associated with biases in interpretation, attention, and catastrophic processing about self-referent stimuli that may signal potential threat in the mother's own environment. We also investigate whether maternal anxiety is associated with biases about stimuli that their own child may encounter or to child-related stimuli more broadly. Three hundred mothers with a child aged 6 to 10 years participated. All participants completed a trait anxiety measure and an ambiguous sentences task to assess interpretation bias for self- and child-referent situations. A subset of the sample completed a catastrophizing interview about a self- (n = 194) or child-referent (n = 99) worry topic and an attentional dot-probe task (n = 99) with general threat and child threat stimuli. Maternal anxiety was not significantly associated with an attentional bias for general or child threat stimuli but was significantly associated with a bias for threat interpretations of both self- and child-referent situations. Higher maternal anxiety was also significantly associated with generating more catastrophic outcomes to both a self-referent and child-referent hypothetical worry situation. We consider whether maternal cognitive biases, which extend to influence how mothers process potential threats in their child's world, may be an important mechanism through which intergenerational transmission of anxiety could occur.
However, so far, there is little consensus in the literature concerning the types of biases most relevant to health anxiety (i.e. facilitation, difficulty in disengaging, or avoidance), and whether biases occur towards generally threatening or specific stimuli. The current study aimed to investigate the presence of all three types of biases in relation to both general-threat and health-related threat pictures in clinical participants with illness anxiety disorder as compared to participants with low levels of health anxiety. The results showed a larger difficulty-in-disengagement bias for health-related threatening stimuli than for general-threatening stimuli in all participants, regardless of group. No other significant effects were identified. Thus, attention biases follow a similar pattern in illness anxiety and low-anxiety participants. abstract_id: PUBMED:30767154 A Longitudinal Study on the Relations Among Fear-Enhancing Parenting, Cognitive Biases, and Anxiety Symptoms in Non-clinical Children. This longitudinal study explored the relations between fear-enhancing parenting behaviors (modeling and threat information transmission) and children's cognitive biases and anxiety symptoms at three subsequent time points over a one-year period. Participants were 216 children aged 7-12 years (114 boys and 102 girls), and their mothers (n = 199) and/or fathers (n = 117). At each time point, children and parents completed the Parental Enhancement of Anxious Cognitions scale, which measures parental modeling and threat information transmission. Furthermore, children filled in a measure of anxiety disorder symptoms. In addition, confirmation bias and interpretation bias were measured by means of a number of computerized tasks. The results yielded support for a circular model in which cognitive biases enhanced anxiety symptoms, which in turn promoted cognitive biases, at each of the three time points. However, no evidence was found for longitudinal effects of cognitive biases on anxiety or vice versa. In contrast to what we expected, cognitive biases and anxiety appeared to promote parental modeling and threat information rather than the other way around. These findings extend research on the relations between parenting behaviors, cognitive biases, and childhood anxiety symptoms, and suggest valuable leads for assessment and intervention. Answer: Yes, parental anxiety does appear to cause biases in the processing of child-relevant threat material. Studies have shown that anxious parents may experience attentional biases towards threats in their children's environment (PUBMED:23536411). Specifically, parents with child-related anxiety demonstrated an attentional bias for child-threat words, whereas this was not present in parents with social anxiety or in control conditions (PUBMED:23536411). Additionally, anxious parents have been found to over-attribute negative emotion to children (PUBMED:23536411). Further research indicates that anxious parents interpret potential threats in their child's environment more negatively, suggesting that their interpretive biases extend beyond their own environment (PUBMED:19117552). This is consistent with the idea that anxious cognitions or cognitive styles may be transmitted from parent to child (PUBMED:19117552). Moreover, maternal anxiety has been associated with biases in interpretation and catastrophic processing about stimuli that the child may encounter (PUBMED:22906122).
The intergenerational transmission of anxious information processing biases has been proposed as a mechanism in the development and maintenance of anxiety, with significant associations postulated between parents' and children's threat-related cognitive biases (PUBMED:35218453). Infant and parental attention to emotional expressions have also been shown to be positively associated, although in that community sample parental anxiety dispositions were not related to their own or their infant's attention bias (PUBMED:34581933). In summary, the evidence suggests that parental anxiety can influence the processing of child-relevant threat material, potentially contributing to the intergenerational transmission of anxiety disorders (PUBMED:23536411; PUBMED:19117552; PUBMED:22906122; PUBMED:35218453; PUBMED:34581933).
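Several of the abstracts above (e.g., PUBMED:35138476 and PUBMED:22906122) quantify attention bias with the dot-probe task. As a purely illustrative aside, the sketch below shows the conventional bias-score computation: mean reaction time when the probe replaces the neutral stimulus minus mean reaction time when it replaces the threat stimulus. The data are invented, and no specific study's scoring pipeline is implied.

```python
# Conventional dot-probe bias score (illustrative only; invented data).
# A positive score means faster responses when the probe appears where the
# threat stimulus was, i.e. attention was drawn toward threat (vigilance);
# a negative score is conventionally read as avoidance.
from statistics import mean

probe_at_neutral_rt_ms = [512, 498, 530, 541, 505]  # incongruent trials
probe_at_threat_rt_ms = [478, 485, 471, 490, 482]   # congruent trials

bias_score_ms = mean(probe_at_neutral_rt_ms) - mean(probe_at_threat_rt_ms)
print(f"attention bias toward threat: {bias_score_ms:.1f} ms")
```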
Instruction: Can dietary interventions change diet and cardiovascular risk factors? Abstracts: abstract_id: PUBMED:9314790 Can dietary interventions change diet and cardiovascular risk factors? A meta-analysis of randomized controlled trials. Objectives: This study evaluated the effectiveness of dietary advice in primary prevention of chronic disease. Methods: A meta-analysis was conducted of 17 randomized controlled trials of dietary behavior interventions of at least 3 months' duration. Results were analyzed as changes in reported dietary fat intakes and biomedical measures (serum cholesterol, urinary sodium, systolic and diastolic blood pressure) in the intervention group minus changes in the control group at 3 to 6 months and 9 to 18 months of follow-up. Results: After 3 to 6 months, mean net changes in each of the five outcomes favored intervention. For dietary fat as a percentage of food energy, the change was -2.5% (95% confidence interval [CI] = -3.9%, -1.1%). Mean net changes over 9 to 18 months were as follows: serum cholesterol, -0.22 (95% CI = -0.39, -0.05) mmol/L; urinary sodium, -45.0 (95% CI = -57.1, -32.8) mmol/24 hours; systolic blood pressure, -1.9 (95% CI = -3.0, 0.8) mm Hg; and diastolic blood pressure, -1.2 (95% CI = -2.6, 0.2) mm Hg. Conclusions: Individual dietary interventions in primary prevention can achieve modest improvements in diet and cardiovascular disease risk status that are maintained for 9 to 18 months. abstract_id: PUBMED:31817107 Dietary Interventions for Gout and Effect on Cardiovascular Risk Factors: A Systematic Review. Gout is one of the most prevalent inflammatory rheumatic diseases. It is preceded by hyperuricemia and associated with an increased risk for cardiovascular disease, both related to unhealthy diets. The objective of this systematic review is to better define the most appropriate diet addressing both disease activity and traditional cardiovascular risk factors in hyperuricemic patients. We included clinical trials with patients diagnosed with hyperuricemia or gout, investigating the effect of dietary interventions on serum uric acid (SUA) levels, gout flares and, if available, cardiovascular risk factors. Eighteen articles were included, which were too heterogeneous to perform a meta-analysis. Overall, the risk of bias of the studies was moderate to high. We distinguished four groups of dietary interventions: calorie restriction and fasting, purine-low diets, Mediterranean-style diets, and supplements. Overall, fasting resulted in an increase of SUA, whilst small (SUA change +0.3 to -2.9 mg/dL) but significant effects were found after low-calorie, purine-low, and Mediterranean-style diets. Studies investigating the effect on cardiovascular risk factors were limited and inconclusive. Since Mediterranean-style diets/DASH (Dietary Approach to Stop Hypertension) have been shown to be effective for the reduction of cardiovascular risk factors in other at-risk populations, we recommend further investigation of such diets for the treatment of gout. abstract_id: PUBMED:19307070 Cardiovascular disease prevention in women: Impact of dietary interventions. In Western countries, cardiovascular disease is the leading cause of death in women. Prevalence of risk factors and clinical presentation are different in men and women, and it is important to take these differences into account in both prevention and treatment. A healthy diet can substantially reduce cardiovascular risk.
This article summarises the evidence of the effects of dietary interventions in women and discusses gender differences where data are available. For a number of specific dietary factors, there is some evidence of gender-specific effects. However, overall risk estimates are not very different from the ones observed in men, and, at this stage, the evidence is insufficient to advocate gender-specific dietary guidelines. Thus, a heart-healthy diet is the same for both men and women. However, dietary advice might be more effective in women because of their stronger beliefs in the value of healthy eating. More efforts are needed to further increase awareness of cardiovascular risk and the benefits of a healthy diet. Thus, dietary counselling should be an integral part of cardiovascular risk management in women. abstract_id: PUBMED:38331867 Diet in secondary prevention: the effect of dietary patterns on cardiovascular risk factors in patients with cardiovascular disease: a systematic review and network meta-analysis. Background: Improving dietary habits is a first-line recommendation for patients with cardiovascular disease (CVD). It is unclear which dietary pattern most effectively lowers cardiovascular risk factors and what the short- and long-term effects are. Therefore, this network meta-analysis compared the effects of popular dietary patterns on cardiovascular risk factors in patients with established CVD. Methods: A systematic search of PubMed, Embase, the Cochrane library, SCOPUS and Web of Science was conducted up to 1 April 2023. Randomized controlled trials (RCTs) comparing the effect of popular dietary patterns (Mediterranean, moderate carbohydrate, low glycemic index, low-fat and minimal dietary intervention) on cardiovascular risk factors (body weight, systolic blood pressure, lipids) in CVD populations were selected. A random-effects network meta-analysis was performed. Results: Seventeen RCTs comprising 6,331 participants were included. The moderate carbohydrate diet had the most beneficial effect on body weight (-4.6 kg, 95%CrI -25.1; 15.8) and systolic blood pressure (-7.0 mmHg, 95%CrI -16.8; 2.7) compared to minimal intervention. None of the included dietary patterns had a favorable effect on low-density lipoprotein cholesterol. After 12 months, the effects were attenuated compared to those at < 6 months. Conclusions: In this network meta-analysis of 17 randomized trials, potentially clinically relevant effects of dietary interventions on CV risk factors were observed, but there was considerable uncertainty due to study heterogeneity, low adherence, or actual diminished effects in the medically treated CVD population. It was not possible to select optimal dietary patterns for secondary CVD prevention. Given recent clinical trials demonstrating the potential of dietary patterns to significantly reduce cardiovascular event risk, it is likely that these effects are achieved through alternative physiological pathways.
Indeed, the American Heart Association dietary guidelines have expanded beyond nutrients to dietary patterns, which comprise not only single food items but also behavioral or cultural habits of specific populations. The aim of our narrative review is to analyze the most frequently adopted dietary patterns in children and adolescents and to evaluate their effect on cardiovascular risk factors and their role in cardiovascular risk prevention. The literature review showed that children cannot be considered little adults: nutritional intervention must always ensure adequate growth and neurodevelopment before reaching the proposed goals; therefore, dietary patterns considered heart-healthy for adult subjects might not be suitable for pediatric patients. The Mediterranean diet, DASH diet, Nordic diet, and some plant-based diets seem to be the most promising dietary patterns in terms of cardiovascular health in the developmental age, even if further studies are needed to better standardize and analyze their effect on growing individuals. abstract_id: PUBMED:30141315 Dietary interventions in the treatment of metabolic syndrome as a cardiovascular disease risk-inducing factor. A review. Metabolic syndrome (MetS) is a concept which refers to a simultaneous occurrence of clinically significant cardiovascular disease (CVD) risk factors that increase the risk of atherosclerosis and type 2 diabetes as well as their vascular complications. The metabolic syndrome is a complex disorder; therefore, its treatment should be multifactorial and intensive. MetS occurs due to a combination of genetic and environmental factors. Each of the MetS components is a well-known risk factor for atherosclerosis. Lifestyle modifications such as increasing physical activity, introducing a well-balanced diet, and reducing body mass are associated with a reduced occurrence of MetS and its individual components. It is necessary to implement proper dietary processes, a physical training program, and pharmacological treatment. The treatment of MetS should begin with weight loss, which affects the occurrence of abdominal obesity, as well as increased physical activity, leading to increased tissue sensitivity to insulin. It is recommended to introduce a low-energy, individually balanced diet that will lead to a 7-10% weight loss over the course of 6-12 months. Patients are also advised to quit smoking and limit the consumption of salt and alcohol. abstract_id: PUBMED:30518065 Dietary Composition and Cardiovascular Risk: A Mediator or a Bystander? The role of nutrition in the pathogenesis of cardiovascular disease has long been debated. The established notion of the deleterious effects of fat has recently come under question, with numerous studies demonstrating the benefits of low-carbohydrate, high-fat diets in terms of obesity, diabetes, dyslipidemia, and metabolic derangement. Monounsaturated and polyunsaturated fatty acids, especially n-3 PUFAs (polyunsaturated fatty acids), are the types of fat that favor metabolic markers and are key components of the Mediterranean Diet, which is considered an ideal dietary pattern with great cardioprotective effects. Beyond macronutrients, however, micronutrients like polyphenols, carotenoids, and vitamins act on molecular pathways that affect oxidative stress, endothelial function, and lipid and glucose homeostasis.
In relation to these metabolic markers, new insights into the human gut microbiome are constantly being revealed: its composition is altered by even small dietary changes, and different microbial populations are associated with adverse cardiovascular outcomes, making the microbiome a target for potential new treatment interventions. This review aims to present the most recent data concerning different dietary patterns at both the macro- and micronutrient level and their association with atherosclerosis, obesity, and other risk factors for cardiovascular disease. abstract_id: PUBMED:11603652 Dietary effects on cardiovascular disease risk factors: beyond saturated fatty acids and cholesterol. Hypercholesterolemia represents a significant risk for cardiovascular disease (CVD). While diet intervention remains the initial choice for the prevention and treatment of CVD, the nature of the dietary modification remains controversial. For example, reducing calories from total fat without decreasing saturated fat intake results in insignificant changes in low density lipoprotein cholesterol (LDL-C). Similarly, diet interventions that focus solely on lowering dietary cholesterol and saturated fat intake not only decrease LDL-C, but also high density lipoprotein cholesterol (HDL-C), and therefore may not improve the lipoprotein profile. This brief review summarizes dietary interventions that lower LDL-C without affecting HDL-C levels. These interventions include soy protein, soluble fiber, soy lecithin and plant sterols. This review also includes some of the reported dietary interventions, such as polyphenols, isoflavones, folic acid and vitamins B6 and B12, which reduce the risk of CVD without changes in lipoprotein cholesterol. abstract_id: PUBMED:36436878 Dietary Factors, Dietary Patterns, and Cardiovascular Disease Risk in Representative Japanese Cohorts: NIPPON DATA80/90. Diet is one of the most important factors affecting healthy life expectancy through the onset of cardiovascular disease (CVD) risk as well as various chronic diseases. Because dietary habits and disease structure differ depending on the country, region, and/or race, evidence from each population is required. NIPPON DATA80/90 is a long-term cohort study of a representative Japanese population that participated in national nutrition surveys. Among the many findings of this cohort study, a dietary pattern with higher intake of fruits, vegetables, fish (n-3 polyunsaturated fatty acids), and dietary fiber and lower intake of salt as well as a lower sodium-to-potassium ratio was found to be associated with a lower risk of CVD mortality. The results from our cohort study would be useful for effectively preventing CVD. This article reviews the published studies from the NIPPON DATA80/90 to highlight the significant findings that may be used to develop risk prevention strategies for CVD. abstract_id: PUBMED:7852683 Secular trends in diet and risk factors for cardiovascular disease: the Framingham Study. Objective: In this study we examined changes in dietary intake and risk factors for cardiovascular disease that occurred over three decades in a US-population-based sample. Design: Secular trends in dietary profiles and risk factors were studied in cross-sectional samples of subjects from the Framingham Study in 1957-1960, 1966-1969, and 1984-1988. Results: Dietary levels of cholesterol appeared to have declined considerably, whereas macronutrient and fatty acid intakes appeared to change only slightly.
Men appeared to increase their saturated fat intakes from 16.4% in 1966-1969 to 17.0% in 1984-1988 (P < .01). In spite of relatively stable mean total fat intake levels, 35% to 60% of Framingham Study men and women reported decreased consumption of higher-fat animal products over the 10-year period between 1974-1978 and 1984-1988. Framingham subjects who reported modifying their diets by substituting lower-fat foods for high-fat items between 1974-1978 and 1984-1988 were more likely to achieve the guidelines of the National Cholesterol Education Program and Healthy People 2000 for dietary fat and cholesterol intake and for serum total cholesterol level. Levels of systolic and diastolic blood pressure, total and low-density lipoprotein cholesterol, and cigarette smoking were also lower in 1984-1988 than in earlier times. Compared with 1957-1960, mean body mass index and prevalence rates of overweight and hypertension were higher in 1984-1988, despite higher levels of reported physical activity. Conclusions: The observed secular trends in diet and risk factor levels for cardiovascular disease in the Framingham population are important to guide the development and implementation of population-based strategies for promoting cardiovascular health, including nutrition interventions. Answer: Yes, dietary interventions can change diet and cardiovascular risk factors. A meta-analysis of 17 randomized controlled trials found that individual dietary interventions in primary prevention can achieve modest improvements in diet and cardiovascular disease risk status that are maintained for 9 to 18 months (PUBMED:9314790). Dietary interventions such as calorie restriction, purine-low diets, and Mediterranean-style diets have shown small but significant effects on serum uric acid levels and are recommended for further investigation for the treatment of gout, which is associated with cardiovascular disease (PUBMED:31817107). In women, a healthy diet can substantially reduce cardiovascular risk, and dietary counseling should be an integral part of cardiovascular risk management (PUBMED:19307070). A network meta-analysis of 17 randomized trials indicated that dietary interventions have potentially clinically relevant effects on cardiovascular risk factors, although there is considerable uncertainty due to study heterogeneity and low adherence (PUBMED:38331867). In pediatric patients, certain dietary patterns like the Mediterranean diet, DASH diet, and Nordic diet are promising in terms of cardiovascular health, although more studies are needed to standardize their effects on growing individuals (PUBMED:37630854). For metabolic syndrome, which increases the risk of cardiovascular disease, proper dietary interventions are crucial and should include weight loss, increased physical activity, and a balanced diet (PUBMED:30141315). Different dietary patterns and micronutrients have been associated with atherosclerosis, obesity, and other cardiovascular disease risk factors, with recent data highlighting the role of the human gut microbiome in cardiovascular outcomes (PUBMED:30518065). Dietary interventions that include soy protein, soluble fiber, soy lecithin, and plant sterols can lower LDL-C without affecting HDL-C levels, and other interventions like polyphenols and vitamins have been shown to reduce cardiovascular disease risk (PUBMED:11603652). 
In a Japanese cohort, a dietary pattern with higher intake of fruits, vegetables, fish, and dietary fiber and lower intake of salt was associated with a lower risk of cardiovascular disease mortality (PUBMED:36436878). Finally, secular trends in the Framingham Study showed that dietary changes, such as decreased consumption of higher-fat animal products and increased consumption of lower-fat foods, were associated with improved levels of risk factors for cardiovascular disease (PUBMED:7852683).
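As an aside on the arithmetic behind the "mean net change" figures quoted above (PUBMED:9314790), the sketch below pools trial-level net changes (intervention-group change minus control-group change) with fixed-effect inverse-variance weighting and a normal-approximation 95% CI. The trial values are invented for illustration, and the meta-analysis's actual weighting scheme is not stated in the abstract.

```python
# Fixed-effect inverse-variance pooling of "net change" estimates, in the
# sense of PUBMED:9314790: (post - pre) in the intervention group minus
# (post - pre) in the control group. Trial values are hypothetical.
import math

# (net change in dietary fat, % of food energy; standard error) per trial
trials = [(-3.1, 1.2), (-1.8, 0.9), (-2.6, 1.5)]

weights = [1.0 / se ** 2 for _, se in trials]          # w_i = 1 / SE_i^2
pooled = sum(w * d for (d, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"pooled net change: {pooled:.2f}% (95% CI {low:.2f} to {high:.2f})")
```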
Instruction: Particular cutaneous side effects with etoposide-containing courses: is VP16 or etoposide phosphate responsible? Abstracts: abstract_id: PUBMED:15526203 Particular cutaneous side effects with etoposide-containing courses: is VP16 or etoposide phosphate responsible? Purpose: Etoposide is commercially available in France in two different pharmaceutical forms: VP16 and its phosphate ester (etoposide phosphate, EP). EP shows better chemical and physical properties and is said to be less toxic, but is five times more expensive than VP16. Some criteria were defined for the use of each form in the Paediatric Oncohaematology Department in Hopital Sud in Rennes. As some particular cutaneous side effects were observed during treatment with etoposide-based courses in this department, a retrospective study was initiated. The aims of this work were to determine the side effects (especially cutaneous toxicity), whether the pharmaceutical formulation of etoposide had any influence on the toxicity of the drug, and whether the observed side effects resulted from etoposide alone or from particular antineoplastic drug associations. Methods: Five types of etoposide-containing protocols were chosen: NB 97 and NB 99 (neuroblastoma), FRALLE 93 (acute lymphoid leukaemia), LAME 91 (acute myeloid leukaemia), OS 94 (osteosarcoma), Ewing 97 and Euro-Ewing 99 (Ewing sarcoma). The medical files of 36 children (88 EP courses, 25 VP16 courses) included in these protocols were analysed on the basis that if a child showed a side effect during a course, the child had to have recovered from that side effect before the beginning of the next course. Results: Apart from classical side effects (haematological and digestive toxicities etc.), two particular cutaneous side effects were observed: (1) palmar-plantar eruptions and nail inflammations, and (2) irritation of the anal area and anal fissures. Those side effects were observed with three of the studied protocols: NB 97, OS 94 and Ewing sarcoma treatments. Conclusions: No striking differences in toxicity appeared between the two etoposide formulations, but this retrospective study seemed to confirm the appearance of particular cutaneous and anal side effects especially with two associations: (1) etoposide-ifosfamide (OS 94 and Ewing 97), and (2) etoposide-ifosfamide-Adriamycin-vincristine (VIDE course of the Euro-Ewing 99 protocol). abstract_id: PUBMED:35435506 EDP-mitotane in children: reassuring evidence of reversible side-effects and neurotoxicity. Adrenocortical carcinoma affects one in 5 million children each year. Since the prognosis for children older than 4 years is limited, clinicians often choose aggressive treatment with etoposide, doxorubicin, cisplatin (EDP) and mitotane after resection. However, little is known about the impact of EDP-mitotane in children. We provide an overview of case reports and case series listing side-effects and neurotoxicity of EDP-mitotane in children. Fourteen studies were identified describing a range of gastro-intestinal, endocrine, developmental and neuropsychological side-effects. Neurotoxicity included motor and speech delay, decreased concentration and lower school performance. These side-effects appear to be reversible after mitotane discontinuation. We have added our own experience with a 10-year-old girl with advanced adrenocortical carcinoma treated with EDP and 2 years of mitotane after irradical resection.
She developed an impactful, but reversible, decrease in cognitive development measured by a standardized neuropsychological assessment before, during and after mitotane therapy. This decrease was mostly measurable in terms of decreased processing speed and concentration and a significant drop in school performance. Combined with fatigue and insecurity, this caused problems in short-term memory and the need to change her school type. In conclusion, EDP-mitotane is associated with several side-effects including neurotoxicity in pediatric cases, all reversible after mitotane discontinuation. abstract_id: PUBMED:34606174 Comparing efficacy and side effects of two systemic chemotherapy regimens for eye-preserving therapy in children with retinoblastoma. Background: Eye-preserving therapy in retinoblastoma comprises systemic chemotherapy, but studies analyzing the efficacy of different chemotherapy regimens are scarce. Methods: The efficacy and side effects of two different eye-preserving chemotherapy regimens containing either vincristine, etoposide, and carboplatin (VEC) or cyclophosphamide, vincristine, etoposide, and carboplatin (CyVEC) were compared in a prospective non-interventional observational study including children diagnosed with retinoblastoma between 2013 and 2019 in Germany and Austria. Event-free eye survival (EFES) and overall eye survival (OES) of all 164 eyes treated with both regimens and risk factors were investigated. Results: The EFES after VEC (2-year EFES 72.3%) was higher than after CyVEC (2-year EFES 50.4%) (p(log-rank) < .001). The OES did not differ significantly between the two treatment groups (p(log-rank) = .77; 2-year OES VEC: 82.1% vs. CyVEC: 84.8%). Advanced International Classification of Retinoblastoma (ICRB) group was prognostic for a lower EFES (p(log-rank) < .0001; 2-year EFES ICRB A/B/C 71.3% vs. ICRB D/E 43.0%) and OES (p(log-rank) < .0001; 2-year OES ICRB A/B/C 93.1% vs. ICRB D/E 61.5%). The multivariate analysis showed that age at diagnosis older than 12 months and ICRB A/B/C were associated with better EFES. No second malignancies or ototoxicities were reported after a median follow-up of 3.1 years after diagnosis of retinoblastoma (range 0.1-6.9 years). Conclusions: Despite omitting cyclophosphamide, the EFES was higher after VEC chemotherapy, which contains higher doses of carboplatin than CyVEC. The major risk factor for enucleation was advanced ICRB tumor grouping. Randomized clinical trials on efficacy and side effects of eye-preserving chemotherapy are required to tailor treatment protocols for retinoblastoma patients. abstract_id: PUBMED:10079650 Physical and chemical stability of etoposide phosphate solutions. Objective: To evaluate the physical and chemical stability of etoposide phosphate solutions over 7 days at 32 °C and 31 days at 4 °C and 23 °C: (1) at etoposide concentrations of 0.1 and 10 mg/mL as phosphate in 0.9% sodium chloride injection and 5% dextrose injection and (2) at etoposide concentrations of 10 and 20 mg/mL as phosphate in bacteriostatic water for injection packaged in plastic syringes. Design: Test samples of etoposide phosphate were prepared in polyvinyl chloride (PVC) bags of the two infusion solutions at etoposide concentrations of 0.1 and 10 mg/mL as phosphate. Additional test samples were prepared in bacteriostatic water for injection containing benzyl alcohol 0.9% at etoposide concentrations of 10 and 20 mg/mL as phosphate and were packaged in 5 mL plastic syringes.
Evaluations for physical and chemical stability were performed initially; after 1 and 7 days of storage at 32 °C; and after 1, 7, 14, and 31 days of storage at 4 °C and 23 °C. Physical stability was assessed using visual observation in normal light and using a high-intensity monodirectional light beam. Turbidity and particle content were measured electronically. Chemical stability of the drug was evaluated by using a stability-indicating high-performance liquid chromatographic (HPLC) analytic technique. Results: All samples were physically stable throughout the study. Little or no change in particulate burden and haze level was found. In the intravenous infusion solutions, little or no loss of etoposide phosphate occurred in any of the samples throughout the study period. The 10 and 20 mg/mL samples in bacteriostatic water for injection repackaged in syringes were also stable throughout the study, exhibiting a maximum of 6% or 7% loss after 31 days of storage at 23 °C and less than 4% in 31 days at 4 °C. Conclusion: Etoposide phosphate prepared as intravenous admixtures of etoposide 0.1 and 10 mg/mL as phosphate in 5% dextrose injection and 0.9% sodium chloride injection in PVC bags and as etoposide 10 and 20 mg/mL as phosphate in bacteriostatic water for injection packaged in plastic syringes is physically and chemically stable for at least 7 days at 32 °C and 31 days at 4 °C and 23 °C. This new water-soluble phosphate-ester formulation of etoposide solves the precipitation problems associated with the old organic solvent and surfactant-based formulation. abstract_id: PUBMED:23740378 Hypersensitivity reactions to etoposide phosphate. Etoposide phosphate is a prodrug of intravenous etoposide recommended for use in patients with demonstrated allergy to etoposide. We have previously published a case series comprising six patients who were successfully treated with etoposide phosphate following prior etoposide hypersensitivity. In this new paper we now present the cases of two patients who had allergic reactions to both etoposide and etoposide phosphate. As such, we suggest that whilst most patients with etoposide hypersensitivity can safely be treated with etoposide phosphate, a small number are at risk of an additional allergic reaction to etoposide phosphate. Patients being treated for the first time with etoposide phosphate after etoposide allergy should receive their first dose under medical supervision. abstract_id: PUBMED:28101356 A case report of primary cutaneous natural killer/T-cell lymphoma. Primary cutaneous extranodal natural killer/T-cell lymphoma, nasal type (ENKL-NT) is a relatively rare disease associated with aggressive tumor-cell behavior and poor prognosis. Progress in immunohistochemistry has improved the identification of ENKL-NT. The present case study reported on a 64-year-old female patient presenting with several red nodular lesions on the neck that had developed over four months. Cutaneous biopsy revealed tumor cells positive for CD3, CD56, CD5, and CD8 and negative for CD2, CD34, CD7, CD20, and Granzyme B. A computed tomography scan and bone marrow biopsy did not show any abnormalities, and a diagnosis of primary cutaneous ENKL-NT was made. After treatment with chemotherapy regimens comprising cyclophosphamide, doxorubicin, vincristine and prednisone (CHOP), followed by high-dose treatment with methotrexate, dexamethasone, ifosfamide, etoposide and L-asparaginase (SMILE), the patient succumbed to the disease.
The present study showed that immunohistochemical analysis, together with the recognition of atypical lymphoid cells showing angiocentricity, is crucial for the correct diagnosis of ENKL-NT. The prognosis of primary cutaneous ENKL-NT remains poor. As the CHOP regimen is not completely effective, high-dose chemotherapy, such as the SMILE regimen, is required. abstract_id: PUBMED:8932836 Phase I study of high-dose etoposide phosphate in man. Etoposide is a widely used cytotoxic agent with a broad spectrum of activity in human malignancies. This agent has been incorporated into many transplant regimens, although toxicity occurs because of its poor water solubility and toxic excipients. Etoposide phosphate, a water-soluble prodrug of etoposide, has been studied at conventional dosages in man and shown to have advantages over the parent compound. We have extended our previous experience with this new agent to evaluate the levels needed in transplantation protocols. This phase I study of intravenous high-dose etoposide phosphate over 2 h on days 1 and 2 was designed to determine whether or not the dose linearity between the amount of etoposide phosphate administered to patients and the generation of etoposide in vivo, as seen with conventional dosages of this agent, would be present at transplant-dose levels. In addition, the toxicities of these dose levels with the short infusion schedule were defined. A conservative dose escalation scheme was chosen based upon prior knowledge of etoposide. Thirty-one patients (19 male, 12 female) with CALGB performance status 0-1 with a variety of solid tumors entered this study. The patients were treated with dose levels of etoposide phosphate given as the etoposide-equivalent doses of 250, 500, 750, 1000, 1200, 1400, and 1600 mg/m²/day in 250-400 ml of normal saline given as an intravenous infusion over 2 h on days 1 and 2 every 28 days. After the maximal tolerated dose level was determined on this schedule, additional patients received etoposide phosphate as a 4 h infusion on both days in an attempt to reduce toxicities. G-CSF (5 micrograms/kg/day) was administered subcutaneously to all patients from day 3 until the WBC was ≥ 10,000/µL. Nonhematologic toxicity was considered to be dose limiting. Serial plasma samples for pharmacokinetics were obtained from patients on day 1 of cycle 1. For the 2 h infusion, the maximum tolerated dose of etoposide phosphate was 1000 mg/m²/day × 2 with dose-limiting mucositis. In the small number of patients studied, the maximum tolerated dose for the 4 h infusion was reached at 1400 mg/m²/day of drug, again due to mucositis. Other toxicities, despite the rapid infusion schedule, were modest, with transient mild headache being most common. At the highest doses etoposide phosphate was efficiently and rapidly dephosphorylated to etoposide. Etoposide generated by dephosphorylation of etoposide phosphate had plasma disposition curves characteristic of etoposide administered parenterally. One partial response occurred in a patient with small cell lung cancer. Etoposide phosphate can be rapidly infused in modest fluid volumes at dosages required for transplantation protocols with minimal acute side-effects. On a 2 h schedule, mucositis becomes the dose-limiting nonhematologic toxicity. Mucositis seems to correlate with peak dose levels of the drug rather than total drug administered. On a 4 h infusion schedule given sequentially for 2 days, the maximum tolerated dosage could be increased 40% compared to the 2 h schedule.
The relative ease of administration and the rapid conversion of this prodrug into etoposide should make it useful in high-dose therapy settings. abstract_id: PUBMED:27233689 Transdermal Delivery of Etoposide Phosphate I: In Vitro and In Vivo Evaluation. Cancer chemotherapy frequently requires long periods of multiple intravenous infusions, which often results in patients opting out of treatment. The main purpose of this study was to investigate the feasibility of delivering one such anticancer agent, etoposide phosphate (ETP), transdermally using iontophoresis and a combination of iontophoresis/microporation. The iontophoresis conditions for ETP were first optimized in vitro and then tested in vivo in a rabbit model. Both ETP and its active form etoposide (VP) were quantified in dermis (via microdialysis sampling) and in plasma, with a specially developed high-performance liquid chromatography method. In vitro, the amount of total etoposide permeated and the steady-state flux increased (p < 0.05) with increasing iontophoretic current density (100-400 μA/cm²). At 300 μA/cm², microporation/iontophoresis further improved both parameters by 2- and 2.8-fold, respectively. In vivo, exposure increased proportionally to current density in plasma, whereas dermal concentration dropped significantly at the highest current density. Microporation led to a 50% increase in Cmax and AUClast values in both skin and plasma. In conclusion, a mild current density (300 μA/cm²) and a small surface area (10.1 cm²) achieved and maintained the minimum effective concentration for the entire duration of electrical current delivery; microporation further increased the plasma concentrations at the same current density. abstract_id: PUBMED:27181628 Chemotherapeutic treatment is associated with Notch1 induction in cutaneous T-cell lymphoma. The Notch pathway is important for survival of cutaneous T-cell lymphoma (CTCL) cells. We investigated the effect of chemotherapy (doxorubicin, etoposide, and gemcitabine) and radiation modalities on Notch signaling in CTCL cell lines. Chemotherapy induced Notch1 expression at the mRNA and protein level in MyLa2000 and Hut78. Upregulation of well-established Notch targets supported the functional activity of Notch1. Transfection of Notch1 siRNA into MyLa2000 cells was not able to suppress the effects of chemotherapy on Notch1 activation significantly. Notch1 knockdown in combination with doxorubicin, etoposide, or gemcitabine compared to chemotherapy alone decreased cell viability by 12, 20, and 26%, respectively (p < 0.05). Additionally, X-rays (in MyLa2000 but not SeAx) and psoralen plus UVA (PUVA) (in MyLa2000, Hut78, and SeAx) increased the expression of Notch1 family members. Our results indicate that CTCL cells activate the Notch pathway in vitro in response to chemotherapy and radiation modalities as a possible protective mechanism. abstract_id: PUBMED:28064300 Primary Cutaneous T Cell Lymphoma (Gamma Delta subtype). Primary cutaneous T-cell lymphoma, gamma-delta subtype, is an extremely rare entity among the cutaneous T-cell lymphomas. Our case provides an insight into clinical behavior and treatment response with a feasible, effective combination chemotherapy. We believe this will be of great interest to clinicians when facing this difficult clinical entity. We present a case of a 66-year-old Malay man with a three-week history of rapidly growing skin nodules and plaques which spread throughout his body.
He was commenced on combination chemotherapy gemcitabine, etoposide, and carboplatin with near complete remission on completion of the second cycle, but he defaulted. He relapsed within a month and he progressed despite treatment with the same regimen. He was salvaged with fludarabine, cytarabine, and vinblastine combination chemotherapy but progressed with brain metastasis and died. However, more investigations and studies need to be done in this relatively unknown rare entity. A rare lymphoma registry might be of help to better understand and treat similar conditions. Answer: The retrospective study conducted in the Paediatric Oncohaematology Department in Hopital Sud in Rennes aimed to determine the side effects, particularly cutaneous toxicity, of etoposide-containing treatments and whether the pharmaceutical formulation of etoposide (VP16 or etoposide phosphate, EP) influenced the drug's toxicity. The study analyzed medical files of 36 children who underwent 88 EP courses and 25 VP16 courses as part of various treatment protocols. The results showed two particular cutaneous side effects: palmar-plantar eruptions and nail inflammations, as well as irritation of the anal area and anal fissures. These side effects were observed with three of the studied protocols: NB 97, OS 94, and Ewing sarcoma treatments. The study concluded that no striking differences in toxicity appeared between the two etoposide formulations. However, it seemed to confirm the appearance of particular cutaneous and anal side effects, especially with two drug associations: etoposide-ifosfamide (OS 94 and Ewing 97), and etoposide-ifosfamide-Adriamycin-vincristine (VIDE course of the Euro-Ewing 99 protocol) (PUBMED:15526203). In summary, the study did not find significant differences in the cutaneous side effects between VP16 and etoposide phosphate formulations, suggesting that the particular cutaneous side effects observed may be more related to the drug combinations used in the treatment protocols rather than the specific form of etoposide administered.
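The "no striking differences" conclusion above rests on comparing toxicity rates between 88 EP and 25 VP16 courses. A minimal sketch of how such a comparison could be tested on a 2x2 table; the event counts below are invented purely for illustration and are not the study's data:

```python
from scipy.stats import fisher_exact

# Hypothetical counts: courses with palmar-plantar eruption, by formulation.
ep_with, ep_without = 12, 76      # 88 EP courses (invented split)
vp16_with, vp16_without = 4, 21   # 25 VP16 courses (invented split)

table = [[ep_with, ep_without], [vp16_with, vp16_without]]
odds_ratio, p_value = fisher_exact(table)   # exact test suits these small counts
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")  # large p -> no detectable difference
```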
Instruction: Effects of hypertrophy on regional action potential characteristics in the rat left ventricle: a cellular basis for T-wave inversion? Abstracts: abstract_id: PUBMED:9323099 Effects of hypertrophy on regional action potential characteristics in the rat left ventricle: a cellular basis for T-wave inversion? Background: In cardiac hypertrophy, ECG T-wave changes imply an abnormal sequence of ventricular repolarization. We investigated the hypothesis that this is due to changes in the normal regional differences in action potential duration. We assessed the contribution of potassium- and calcium-dependent currents to these differences. Both the altered sequence of ventricular repolarization and the underlying cellular mechanisms may contribute to the increased incidence of ventricular arrhythmias in hypertrophy. Methods And Results: Rats received daily isoproterenol injections for 7 days. Myocytes were isolated from basal subendocardial (endo), basal midmyocardial (mid), and apical subepicardial (epi) regions of the left ventricular free wall. Action potentials were stimulated with patch pipettes at 37 degrees C. The ratio of heart weight to body weight and mean cell capacitance are increased by 22% and 18%, respectively, in hypertrophy compared with controls (P<.001). Normal regional differences in action potential duration at 25% repolarization (APD25) are reduced in hypertrophy (control: endo, 11.4+/-0.9 ms; mid, 8.2+/-0.9 ms; epi, 5.1+/-0.4 ms; hypertrophy: endo, 11.6+/-0.9 ms; mid, 10.4+/-0.8 ms; epi, 7.8+/-0.6 ms). The regional differences in APD25 are still present in 3 mmol/L 4-aminopyridine. Hypertrophy affects APD75 differently, depending on the region of origin of myocytes (ANOVA P<.05). APD75 is shortened in subendocardial myocytes but is prolonged in subepicardial myocytes (control: endo, 126+/-7 ms; epi, 96+/-10 ms; hypertrophy: endo, 91+/-6 ms; epi, 108+/-7 ms). These changes in APD75 are altered by intracellular calcium buffering. Conclusions: Normal regional differences in APD and the changes observed in hypertrophy are only partially explained by differences in I(to1). In hypertrophy, the normal endocardial/epicardial gradient in APD75 appears to be reversed. This may explain the T-wave inversion observed and will have implications for arrhythmogenesis. abstract_id: PUBMED:10081616 Effects of BDF 9148 on the action potentials and contractions of left ventricles from normo- and hypertensive rats. 1. The aim of the present study was to test the hypothesis that responses to BDF 9148, which prolongs the opening of sodium channels, are reduced in the spontaneously hypertensive rat (SHR) left ventricle in the presence of hypertrophy and failure. 2. We studied the effects of BDF 9148 on the action potentials and contractions of left ventricles from 5-week-old prehypertensive, 14-week-old hypertensive, 6- and 12-month-old hypertension-associated hypertrophy and 18-month-old hypertension-induced heart failure SHR and age-matched Wistar-Kyoto normotensive (WKY) rats. 3. Action potentials and left ventricular contractions did not alter in the early stages of hypertension (14-week-old SHR). The diastolic membrane potential did not change with hypertension-associated hypertrophy, but there was a reduction in amplitude and a prolongation of action potentials in the left ventricles of 6-18-month-old SHR.
Cardiac stimulation responses and maximum contractions to 10(-6) mol/L isoprenaline were reduced at 6 months, whereas the maximum contractions to 10(-2) mol/L CaCl2 were only reduced in left ventricles of 18-month-old SHR. 4. At concentrations ranging from 10(-7) to 3 x 10(-6) mol/L, BDF 9148 increased the amplitude and prolonged the duration of action potentials and augmented the force in WKY rat left ventricles. The augmenting effects of BDF 9148 at 3 x 10(-6) mol/L were smaller than at 10(-6) mol/L, possibly because the high concentration of BDF 9148 was also blocking calcium channels. Similar effects were observed with BDF 9148 in the early stages of hypertension (14-week-old SHR). 5. In the presence of persistent hypertension-associated hypertrophy of the SHR left ventricle at ≥6 months, the effects of BDF 9148 on action potentials and contractions were significantly reduced to a small extent. This impairment of the response to BDF 9148 may reflect the reduced contractility of the SHR left ventricle and/or it may indicate that the response to the opening of sodium channels is altered from 6 months of age. 6. In summary, most of the response of BDF 9148 is maintained in the presence of hypertrophy and failure. Thus, BDF 9148 may have some potential for the treatment of heart failure. abstract_id: PUBMED:11033114 Regional electrophysiological effects of left ventricular hypertrophy in isolated rabbit hearts under normal and ischaemic conditions. Objectives: Left ventricular hypertrophy (LVH) has been reported to produce differential electrophysiological effects in isolated epicardial and endocardial cells. This study aimed to examine regional electrophysiological effects of LVH in normal and ischaemic conditions in the whole heart. Methods: LVH was secondary to perinephritis-induced hypertension. Monophasic action potential duration (MAPD(90)), effective refractory period (ERP) and conduction delay were measured in paced, isolated working rabbit hearts either at one right ventricular and two left ventricular sites (apical and basal epicardium) or at three left ventricular sites (apical and basal epicardium, apical endocardium). The hearts were subjected to 30 min of regional ischaemia and 15 min of reperfusion. Results: In non-ischaemic conditions, LVH produced uniform prolongation of MAPD(90) and ERP in the left ventricular epicardium, but not in the endocardium. After coronary artery occlusion, LVH significantly increased ischaemia-induced transepicardial dispersion of repolarisation, but not refractoriness. LVH did not affect arrhythmogenesis in either non-ischaemic or ischaemic conditions. Conclusions: Differential effects of LVH on epicardial and endocardial electrophysiological parameters are also observed in the whole heart. In addition, the sensitivity of hypertrophied myocardium to ischaemia is increased and leads to an increase in ischaemia-induced dispersion of repolarisation. However, neither dispersion of refractoriness nor arrhythmogenesis are affected by LVH in non-ischaemic or ischaemic conditions in this experimental model. abstract_id: PUBMED:10323483 The effects of lignocaine and tetrodotoxin on the action potentials and contractions of left ventricles from normo- and hypertensive rats. The objective was to test the hypothesis that the effects of the sodium channel blockers lignocaine and tetrodotoxin are modified in the presence of hypertension-induced hypertrophy.
We describe the effects of lignocaine and tetrodotoxin on the action potentials and contractions of left ventricles isolated from 6-month-old Wistar Kyoto (WKY) rats and spontaneously hypertensive rats (SHRs). The upstroke velocity, amplitude, and overshoot of the action potential were reduced; action potentials were prolonged; and the contractions were reduced on the hypertrophied left ventricles of the SHRs. Lignocaine and tetrodotoxin reduced the upstroke velocity, amplitude, and overshoot and prolonged the left ventricular action potentials. These effects of lignocaine and tetrodotoxin on the SHR were less than those on the WKY left ventricle, possibly because the action potential was already modified by hypertrophy. Lignocaine also reduced the left ventricular contractions and the concentrations producing this reduction were lower for the hypertrophied than those for the normal left ventricle. Tetrodotoxin at 3 x 10(-6)-10(-5) M caused similar attenuation of the WKY and SHR left ventricle contractions. Our study shows that the effects of lignocaine on contraction are enhanced in the hypertrophied left ventricle of the SHR, which suggests that the binding is increased or the access of lignocaine to the receptor is enhanced in hypertrophy. In contrast, the effects of tetrodotoxin on contractions are similar, and thus the binding or access of tetrodotoxin to the receptor is not altered in the hypertrophied left ventricle of the SHR. abstract_id: PUBMED:8143304 Prolongation by captopril of action potential duration in the normal and hypertrophied rat ventricle: direct action or inhibition of the local angiotensin converting enzyme? Objective: The aims were: (1) to study the acute effects of captopril on the action potential characteristics of ventricular fibres from the normal rat, (2) to compare the effects of captopril with those of perindoprilat, a non-thiol angiotensin I converting enzyme (ACE) inhibitor, (3) to determine the electrophysiological properties of the peptide substrates of converting enzyme, bradykinin and angiotensin I, and (4) to investigate whether the effects of captopril occurring in the healthy heart also occur in two models of ventricular hypertrophy. Methods: Action potentials were recorded with the standard glass microelectrode technique in right ventricular preparations excised from rat hearts and superfused under baseline conditions and with drug containing or peptide containing Tyrode solution. Ventricular hypertrophy was induced in response to hypertension (unilaterally nephrectomised, DOCA-salt model) or 4 week old left ventricular infarction. Results: In preparations from normal rat hearts, captopril increased action potential duration in a concentration dependent fashion [EC50 = 3.5 x 10(-8) M; maximum effect = 44(SEM 5.1)% prolongation at 10(-5) M for action potential duration at 90% repolarisation, APD90]. Perindoprilat similarly caused a dose dependent increase in action potential duration, but with 100 times greater potency [EC50 = 3.1 x 10(-10) M; maximum effect = 71(11)% prolongation at 10(-5) M for APD90]. SQ 14,534, a stereoisomer of captopril with one hundredth the ACE inhibitor potency, had no significant effect on action potential duration at 10(-5) M. Angiotensin I and bradykinin caused concentration dependent prolongation of action potential, but angiotensin II (10(-6) M) had no effect. 
Captopril (10(-5) M) had no significant effect in the hypertrophied right ventricle from DOCA-salt hypertensive rats, but significantly increased APD90 [39(4.9)%] in right ventricular preparations from rats with 4 week old anterior left ventricular infarction. Conclusions: In the rat, captopril prolongs action potential duration, an effect possibly due to local accumulation of bradykinin and angiotensin I. abstract_id: PUBMED:12644424 Effects of halothane on action potential configuration in sub-endocardial and sub-epicardial myocytes from normotensive and hypertensive rat left ventricle. Background: Halothane shortens ventricular action potential duration (APD), as a consequence of its inhibitory effects on a variety of membrane currents, an effect that is greater in sub-endocardial than sub-epicardial myocytes. In hypertrophied ventricle, APD is prolonged as a consequence of electrical remodelling. In this study, we compared the effects of halothane on transmural APD in myocytes from normal and hypertrophied ventricle. Methods: Myocytes were isolated from the sub-endocardium and sub-epicardium of the left ventricle of spontaneously hypertensive (SHR) and normotensive Wistar-Kyoto (WKY) rats. Action potentials were recorded before, during, and after a 1-min exposure to 0.6 mM halothane and APD measured from the peak of the action potential to repolarization at -50 mV (APD(-50 mV)). Data are presented as mean (SEM). Results: In WKY myocytes, halothane reduced APD(-50 mV) from 21 (2) to 18 (2) ms (P<0.001, n=15) in sub-epicardial myocytes but abbreviated APD(-50 mV) to a greater extent in sub-endocardial myocytes (37 (4) to 28 (3) ms; P<0.001, n=14). In SHR myocytes, APD(-50 mV) values were prolonged compared with WKY and APD(-50 mV) was reduced by halothane from 36 (6) to 27 (4) ms (P<0.016) and from 77 (10) to 38 (4) ms (P<0.001) in sub-epicardial and sub-endocardial myocytes, respectively. Conclusions: In the SHR, hypertrophic remodelling was not homogeneous; APD(-50 mV) was prolonged to a greater extent in sub-endocardial than sub-epicardial cells. Halothane reduced APD to a greater extent in sub-endocardium than sub-epicardium in both WKY and SHR but this effect was larger proportionately in SHR myocytes. The transmural gradient of repolarization was reduced in WKY and effectively abolished in SHR by halothane, which might disturb normal ventricular repolarization. abstract_id: PUBMED:9087578 Modulation of electrical heterogeneity by compensated hypertrophy in rat left ventricle. Modulation of the regional distribution of the action potential by left ventricular hypertrophy and the role of the L-type Ca2+ current (I(Ca)) and transient outward current (I(to)) in the action potential duration (APD) were investigated in normal and hypertrophied rat ventricular myocytes from the apex (A), septum (S) and left ventricular free wall (FW) by using whole cell current- and voltage-clamp techniques. Hypertrophy was induced by abdominal aortic constriction. In control cells, the APD measured at 20% repolarization (APD20) assumed the shortest values in the A and the longest in the S, whereas FW cells showed intermediate values. Hypertrophy significantly prolonged the APD20 and increased APD variability within the A and FW regions but did not modify the APD in S cells. 
Analysis of the APD, I(Ca), and I(to) at the instant of 20% repolarization in the same cell showed that in control cells the shortest APD20 was associated with a prominent I(to) in the A and FW, whereas the long APD20 was identified with a lower I(to) in S myocytes. Hypertrophy-induced prolongation of APD20 was associated with a reduction in the I(to) in the A and FW. Significant correlations could be established between the APD20 and the "net current," defined as the algebraic addition of I(to) and I(Ca) in the A and FW control groups but not in the control S or hypertrophied cells whatever their origin. Our results indicate that interregional APD heterogeneity is lost while intraregional APD variability is increased in the A and FW during the hypertrophic process. These effects are largely due to a change in the balance between the I(Ca) and I(to), which is a major contributing factor to the heterogeneity of the initial phase of repolarization in the normal rat ventricle. abstract_id: PUBMED:9261715 Abnormal action potential conduction in isolated human hypertrophied left ventricular myocardium. Introduction: Cardiac hypertrophy is associated with an increased incidence of arrhythmias that result from altered action potential configuration or propagation velocity. These variables were measured in isolated preparations of human left ventricular myocardium and correlated with the degree of hypertrophy. Methods And Results: Cardiac mass was estimated by echocardiography and cell diameter was measured from fixed isolated specimens; the two variables correlated significantly. Action potential duration was measured under field stimulation but was independent of the degree of hypertrophy; however, the duration was longer in septal preparations (405 +/- 12 msec, 37 degrees C, 1-Hz stimulation) than in papillary muscles (342 +/- 11 msec). Conduction velocity decreased progressively as cell diameter increased both in septal and papillary muscle preparations. Cable analysis showed that the variation of conduction velocity could be accounted for adequately by an increase of the intracellular resistivity of the preparations. Conclusion: The data suggest that conduction defects occur in a progressive manner in human hypertrophy, which would provide an important substrate for dysrhythmias in human left ventricular hypertrophy and could result from a decrease of electrical coupling between adjacent myocardial cells. abstract_id: PUBMED:11780772 The action of isoprenaline on the electrophysiological properties of hypertrophied left ventricular myocytes. The electrophysiological effects of the beta-agonist, isoprenaline, on hypertrophied left ventricular myocardium were measured to understand better the arrhythmic effects of beta-stimulation on the hypertrophied heart. Left ventricular hypertrophy was induced in guinea-pigs by constriction of the thoracic aorta. An age-matched sham-operated group served as controls. Isolated myocytes were held under voltage- and current clamp and the effect of isoprenaline on the L-type Ca2+ current, I(Ca), a Cl- current, I(Cl), and action potential morphology were measured. Cardiac growth was mirrored by cellular hypertrophy. I(Ca) and I(Cl) current density were reduced as myocyte hypertrophy progressed. The augmentation of I(Ca) and I(Cl) by isoprenaline was also reduced in hypertrophy, but no other characteristics of the two currents, or the dose-dependency of the action of isoprenaline were a function of cardiac growth.
Isoprenaline prolonged the action potential, but to a smaller extent in hypertrophied myocytes. This difference in action potential prolongation was abolished by glibenclamide. The changes to I(Ca) and I(Cl) in hypertrophy would not tend to increase triggered activity in this situation. Under maximum inotropic stimulation hypertrophied myocytes show action potential changes which are consistent with intracellular ATP depletion, and which could enhance the likelihood of re-entrant circuits. A simple diffusion model for oxygen is constructed to demonstrate the possibility of cellular hypoxia in hypertrophied myocytes. abstract_id: PUBMED:11469438 Reduced anisotropy of action potential conduction in left ventricular hypertrophy. Introduction: The aim of this study was to determine if anisotropic action potential conduction was altered during development of left ventricular hypertrophy (LVH). Methods And Results: Isolated guinea pig left ventricular preparations from hearts that had developed LVH were used to measure conduction velocity in longitudinal and transverse orientations to the fiber direction. A variable degree of LVH was induced by placing a ring around the ascending aorta for 50 to 250 days. Results were compared with an age-matched control group that underwent a similar operation but with no ring placement. LVH was measured as the heart-to-body-weight ratio (HBR), which correlated with an increase of mean myocyte cross-sectional area. Longitudinal conduction velocity (LCV) declined progressively as HBR increased (mean +/- SD: sham vs LVH: HBR 3.74 +/- 0.30 g/kg vs 4.53 +/- 0.52 g/kg; LCV 72.8 +/- 15.5 vs 63.6 +/- 11.1 cm/sec). Mean transverse conduction velocity (TCV) was greater in LVH compared with control (20.5 +/- 4.7 cm/sec vs 25.4 +/- 8.1 cm/sec), but there was no significance in the trend as a function of HBR. The anisotropic ratio (LCV/TCV) significantly declined as HBR increased. The time constant of the foot of the action potential was smaller in the transverse compared with the longitudinal dimension. There was no influence of hypertrophy. Conclusion: The decrease of LCV and reduction of the anisotropic conduction ratio suggest remodeling of the tissue in LVH. The consequences for the generation of arrhythmias are discussed. Answer: The effects of hypertrophy on regional action potential characteristics in the rat left ventricle have been studied to understand the cellular basis for T-wave inversion observed in cardiac hypertrophy. In hypertrophic conditions, the normal regional differences in action potential duration (APD) at 25% repolarization (APD25) are reduced, and the normal endocardial/epicardial gradient in APD75 appears to be reversed, which may explain the T-wave inversion seen in electrocardiograms (ECGs) and could have implications for arrhythmogenesis (PUBMED:9323099). Hypertrophy affects APD75 differently depending on the region of origin of myocytes, with APD75 being shortened in subendocardial myocytes but prolonged in subepicardial myocytes. These changes in APD75 are altered by intracellular calcium buffering, suggesting that the altered sequence of ventricular repolarization and the underlying cellular mechanisms may contribute to the increased incidence of ventricular arrhythmias in hypertrophy (PUBMED:9323099). In addition, hypertrophy-induced prolongation of APD20 is associated with a reduction in the transient outward current (I(to)) in the apex and free wall regions of the rat ventricle. 
This change in the balance between the L-type Ca2+ current (I(Ca)) and I(to) is a major contributing factor to the heterogeneity of the initial phase of repolarization in the normal rat ventricle, and the loss of interregional APD heterogeneity and increased intraregional APD variability during the hypertrophic process are largely due to these changes (PUBMED:9087578). Overall, the studies suggest that hypertrophy leads to significant alterations in the electrophysiological properties of the left ventricle, which can disrupt the normal sequence of ventricular repolarization and potentially lead to T-wave inversion and increased susceptibility to arrhythmias.
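The APD25/APD75 measures that recur throughout this record are simply times from the action potential peak to 25% and 75% repolarization toward the resting potential. A minimal sketch of that computation on a synthetic trace; the exponential repolarization below is a crude, invented stand-in for a real rat ventricular recording:

```python
import numpy as np

def apd(t_ms, v_mv, percent):
    """Time from the action potential peak until the voltage has fallen
    `percent`% of the way from the peak back to the resting potential."""
    v_rest = v_mv[0]
    i_peak = int(np.argmax(v_mv))
    v_target = v_mv[i_peak] - (percent / 100.0) * (v_mv[i_peak] - v_rest)
    after = np.where(v_mv[i_peak:] <= v_target)[0]   # first sample at/below target
    return t_ms[i_peak + after[0]] - t_ms[i_peak]

# Synthetic AP: instantaneous upstroke from -80 mV, exponential repolarization.
t = np.linspace(0, 200, 2001)                                    # ms
v = np.where(t == 0, -80.0, -80.0 + 120.0 * np.exp(-t / 40.0))   # mV
print(f"APD25 = {apd(t, v, 25):.1f} ms, APD75 = {apd(t, v, 75):.1f} ms")
```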
Instruction: Agitation in the morning: symptom of depression in dementia? Abstracts: abstract_id: PUBMED:22591584 Correlates of agitation and depression in nursing home residents with dementia. Background: The purpose of this study was to investigate the relationship between dementia severity, age, gender, and prescription of psychotropics, and syndromes of agitation and depression in a sample of nursing home residents with dementia. Methods: The Cohen-Mansfield Agitation Inventory (CMAI) was administered to residents with dementia (N = 304) of 18 nursing homes. Agitation symptoms were clustered using factorial analysis. Depression was estimated using the Dementia Mood Assessment Scale (DMAS). Dementia severity was assessed categorically using predefined cut-off scores derived from the Mini-Mental State Examination (MMSE). The relationship between agitation and its sub-syndromes, depression, and dementia severity was calculated using χ2-statistics. Linear regression analyses were used to calculate the effect of dementia severity and psychotropic prescriptions on agitation and depression, controlling for age and gender. Results: Increasing stages of dementia severity were associated with higher risk for physically aggressive (p < 0.001) and non-aggressive (p < 0.01) behaviors, verbally agitated behavior (p < 0.05), and depression (p < 0.001). Depressive symptoms were associated with physically aggressive (p < 0.001) and verbally agitated (p < 0.05) behaviors, beyond the effects of dementia severity. Prescription of antipsychotics was correlated with depression and all agitation sub-syndromes except hiding and hoarding. Conclusions: Dementia severity is a predictor for agitation and depression. Beyond that, depression increased with dementia severity, and the severity of depression was associated with both physically and verbally aggressive behaviors, indicating that, in advanced stages of dementia, depression in some patients might underlie aggressive behavior. abstract_id: PUBMED:20635806 Impact of underlying depression on treatment of neuropsychiatric symptoms in older adults with dementia. This article examines the moderating effect of depression on interdisciplinary treatment approaches for behaviors in dementia. A secondary analysis of data collected on tailored treatment of 105 long-term care residents with dementia found a significant relationship between treatment and passivity (p < 0.001), treatment and agitation (p = 0.001), and the mediating effect of change in passivity on change in agitation (p < 0.001). The moderating effect of depression was found to be a significant factor. For participants with depression and agitation, a significant change in passive behavior was related to a significant change in agitated behavior. Thus, by focusing treatment on passivity, both types of neuropsychiatric behaviors improved. The implications of thoroughly assessing not only a behavior problem such as agitation but also other neuropsychiatric symptoms that complicate the delivery of the intervention are discussed. abstract_id: PUBMED:24108179 Current treatment of depression and agitation in the elderly -- clinical use of trazodone Due to their increasing frequency, mental disorders among the elderly have special importance in clinical practice. In this article we summarize the characteristics, diagnostic problems and modern treatments of mental disorders (especially depression) in old age.
Because this period of life brings particular somatic and psychological features, it may be difficult to find the most effective and well-tolerated treatment, especially in cases of comorbid dementia or agitated behaviour. In this article we review the therapeutic experience with the SARI antidepressant trazodone. Clinical studies and everyday practice indicate that trazodone, owing to its multifunctional receptor profile, can be particularly effective in the treatment of depression accompanied by severe insomnia and anxiety. Owing to its anxiolytic and sleep-normalising effects and well-tolerated side-effect profile, trazodone is clinically useful not only in the treatment of depression in the elderly, but also in cases of serious comorbidity with dementia or agitated behaviour. We also illustrate the possibilities of using trazodone in everyday practice with two case reports. Furthermore, we review the aspects of complex therapy that facilitate the successful treatment of depression in the elderly and the restoration of quality of life. abstract_id: PUBMED:25024328 Depression in mild cognitive impairment is associated with progression to Alzheimer's disease: a longitudinal study. Background: Behavioral and psychological signs and symptoms of dementia (BPSD) belong to the core symptoms of dementia and are also common in mild cognitive impairment (MCI). Objective: This study aims to contribute to the understanding of the prognostic role of BPSD in MCI for the progression to dementia due to Alzheimer's disease (AD). Methods: Data were generated through an ongoing prospective longitudinal study on BPSD. Assessment was performed by means of the Middelheim Frontality Score, Behave-AD, Cohen-Mansfield Agitation Inventory, Cornell Scale for Depression in Dementia (CSDD), and Geriatric Depression Scale 30-questions (GDS-30). Cox proportional hazard models were used to test the hypothesis that certain BPSD in MCI are predictors of developing AD. Results: The study population consisted of 183 MCI patients at baseline. At follow-up, 74 patients were stable and 109 patients progressed to AD. The presence of significant depressive symptoms in MCI as measured by the CSDD (HR: 2.06; 95% CI: 1.23-3.44; p = 0.011) and the GDS-30 (HR: 1.77; 95% CI: 1.10-2.85; p = 0.025) were associated with progression to AD. The severity of depressive symptoms as measured by the GDS-30 was a predictor for progression too (HR: 1.06; 95% CI: 1.01-1.11; p = 0.020). Furthermore, the severity of agitated behavior, especially verbal agitation and the presence of purposeless activity, was also associated with progression, whereas diurnal rhythm disturbances were associated with no progression to AD. Conclusion: Depressive symptoms in MCI appear to be predictors for progression to AD. abstract_id: PUBMED:23255479 Prevalence and associated behavioral symptoms of depression in mild cognitive impairment and dementia due to Alzheimer's disease. Background: Mild cognitive impairment (MCI) is a clinical concept that categorizes subjects who are in an intermediate cognitive state between normal aging and dementia. The aims of this study are to determine the prevalence of significant depressive symptoms in MCI and Alzheimer's disease (AD) patients and to characterize the behavior associated with significant depressive symptoms in MCI and AD patients.
Methods: A cross-sectional analysis of baseline data from a prospective, longitudinal study on behavioral symptoms of dementia and MCI was performed. The study population consisted of 270 MCI and 402 AD patients. Behavioral assessment was performed by means of Middelheim Frontality Score, Behavioral Pathology in Alzheimer's Disease Rating Scale (Behave-AD) and Cohen-Mansfield Agitation Inventory. The presence of significant depressive symptoms was defined as a Cornell Scale for Depression in Dementia total score >7. Results: The prevalence of significant depressive symptoms in AD patients (25%) was higher compared with MCI patients (16%) (p = 0.005). Patients with significant depressive symptoms showed an increased severity of frontal lobe symptoms, behavioral symptoms and agitation (Middelheim Frontality Score, Behave-AD and Cohen-Mansfield Agitation Inventory total scores; p < 0.001). Also, most of the individual frontal lobe and behavioral symptoms were more prevalent and severe, resulting in higher Behave-AD global scores. Mild cognitive impairment patients with depressive symptoms showed more severe behavioral symptoms and more severe verbally agitated behavior than AD patients without depressive symptoms (p < 0.001). Conclusions: Frontal lobe and behavioral symptoms are more prevalent and severe in MCI and AD patients with significant depressive symptoms as compared with patients without depressive symptoms. abstract_id: PUBMED:12801159 Comorbidity of psychopathological domains in community-dwelling persons with Alzheimer's disease. In this post hoc analysis of baseline responses to the CERAD Behavior Rating Scale for Dementia in a clinical trial of interventions for agitation in Alzheimer's disease (AD), the authors investigated the distribution of, and relationships between, agitation, depression, and psychosis in 148 individuals with AD. Prevalence of depressive symptoms was highest (78.4%), followed by agitation (77.6%) and psychotic symptoms (69.3%); 51.1% of the sample had symptoms in all 3 domains. Cross-sectionally, psychotic symptoms were most closely associated with Mini-Mental State Examination (MMSE) scores, while agitation was less so. Depressive symptoms were relatively consistently prevalent across MMSE levels. After controlling for the presence of agitated symptoms, psychosis and depression were significantly associated, but neither symptoms of psychosis nor of depression were associated with agitation when depression or psychosis, respectively, was controlled for. Significant psychopathological comorbidity should be considered in the design of clinical trials targeting particular psychopathology in this disease population. abstract_id: PUBMED:31429131 Modelling factors of urinary incontinence in institutional older adults with dementia. Aims And Objectives: This study applied structural equation modelling to explore the relationships among agitated behaviours, depression, cognitive function and activities of daily living, as well as associations between these factors and urinary incontinence. Background: A high prevalence of urinary incontinence is found among institutional older adults with dementia. People with urinary incontinence suffer from increased financial burden and social isolation and experience reduced quality of life. Design: Cross-sectional correlational research. The study complied with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement.
Methods: In total, 226 older adults with dementia were recruited through convenience sampling at 15 long-term care facilities in southern Taiwan. The urinary incontinence frequency, agitated behaviours, depression, cognitive function and activities of daily living were evaluated using bladder records, the Cohen-Mansfield Agitation Inventory, the Cornell Scale for Depression in Dementia, the Mini-Mental State Examination and the Barthel Index, respectively. Results: Activities of daily living performance was found to be significantly associated with urinary incontinence; however, age, cognitive function, depression and agitated behaviours were not significantly related to urinary incontinence. Age did not have effects on any of the variables tested in this model, whereas activities of daily living performance was significantly associated with cognitive function and depression. Results further showed that cognitive function and depression were mediators between activities of daily living and agitated behaviours. Conclusion: Enhanced activities of daily living independency directly reduced urinary incontinence, improved cognitive function, decreased degrees of depression and indirectly reduced agitated behaviours. Relevance To Clinical Practice: The findings could serve as a valuable reference for long-term care facilities in providing effective urinary incontinence care and prevention to older adults with dementia. abstract_id: PUBMED:34454923 Feasibility of Group-Based Multiple Virtual Reality Sessions to Reduce Behavioral and Psychological Symptoms in Persons Living in Residential Aged Care. Objectives: To assess the feasibility of using group-based fully immersive virtual reality (VR) across multiple sessions to reduce behavioral and psychological symptoms (BPSs), including depression, anxiety, and agitated behaviors, in cognitively diverse aged care residents. Design: A 6-session feasibility trial was conducted within a residential aged care facility using convenience sampling to recruit N = 25 residents of varying cognitive capacity. Groups of 5 residents viewed 360-degree videos on a wireless head-mounted display to provide fully immersive VR experiences. Setting And Participants: Half of the participants recruited from the 160-bed facility had a diagnosis of dementia (48%), whereas assessment with the Psychogeriatric Assessment Scale for cognitive impairment revealed that 64% experienced cognitive impairment (mild 20%, moderate 16%, and severe 28%). Additionally, 32% of participants had an existing anxiety or depression diagnosis. Measures: The Cornell Scale for Depression in Dementia, Generalized Anxiety Disorder 7-item, and Cohen Mansfield Agitation Inventory-Short were used to assess changes in persisting BPS pre- to postintervention period. The Person-Environment Apathy Rating apathy subscale, Observed Emotions Rating Scale, and a visual analog scale (Smileometer) were used to assess immediate mood responses from residents at every VR session. VR tolerability and resident feedback was also recorded. Results: Pleasure (z = -5.892, P < .001) and general alertness (z = -2.455, P = .014) of participants improved at VR sessions, whereas apathy diminished (z = -5.275, P < .001). Compared to baseline, post-intervention depression was significantly lowered (z = -2.60, P = .009), whereas agitation increased (z = -2.98, P = .003). No significant changes in anxiety were observed. The quality of 360-degree videos and the device used did not induce any major VR-related negative side effects. 
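The paired z- and P-values just reported (e.g., depression lowered, z = -2.60, P = .009) are typical of signed-rank comparisons of pre- versus post-intervention scores. A minimal sketch, assuming SciPy's Wilcoxon signed-rank test and invented scores rather than the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Invented pre/post depression scores for 25 residents (not the study's data).
pre = rng.integers(4, 16, size=25)
post = np.clip(pre - rng.integers(1, 5, size=25), 0, None)  # scores improve post-intervention

stat, p = wilcoxon(pre, post)   # paired signed-rank test on the pre/post differences
print(f"W = {stat}, p = {p:.4f}")
```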
Conclusions And Implications: Overall group-based VR reduced depressive symptoms and apathy, and induced a positive emotional response in most residents, with few observed side effects. Results indicate feasibility of group-based VR technological innovation within RAC. abstract_id: PUBMED:10085474 Effect of melatonin in selected populations of sleep-disturbed patients. In an open pilot study on the efficacy of melatonin in the treatment of sleep disorders, patients with sleep disturbances alone, patients with sleep disturbances and signs of depression and patients with sleep disorders and dementia received 3 mg melatonin p.o. for 21 days, at bed time. After 2-3 days of treatment, melatonin significantly augmented sleep quality and decreased the number of awakening episodes in patients with sleep disturbances associated or not with depression. Estimates of next-day alertness improved significantly only in patients with primary insomnia. Agitated behavior at night (sundowning) decreased significantly in dementia patients. In a second retrospective study, 14 Alzheimer's disease (AD) patients received 9 mg melatonin daily for 22-35 months. A significant improvement of sleep quality was found, while there were no significant differences between initial and final neuropsychological evaluation (Functional Assessment Tool for AD, Mini-Mental). The results indicate that melatonin can be useful to treat sleep disturbances in elderly insomniacs and AD patients. abstract_id: PUBMED:26412509 A personalized multimedia device to treat agitated behavior and improve mood in people with dementia: A pilot study. Agitated behaviors and dysphoric moods in nursing home residents with dementia may be a response to a lack of personalized, meaningful activity and stimulation. To address this deficiency, a personal computer was adapted to play favorite music and display photographs, movies and messages that were selected or made by family members. The system (called Memory Box) is accompanied by a simplified interface to help people with dementia access material independently. The system's ability to reduce agitation, and improve symptoms of depression and anxiety, was tested by means of an eight-week randomized, single-blinded, cross-over trial comparing Memory Box with a control condition that offered equivalent contact with research staff. Eleven nursing home residents with mild to severe dementia and persistent, daily agitated behaviors completed the study. Outcome measures included ratings of anxiety, depression and agitated behavior made by knowledgeable staff members in collaboration with researchers. Memory Box was well utilized and highly rated by residents, families and staff members. There were significant reductions in depressive and anxiety symptoms during the course of the intervention. The system shows promise as a tool to assist families and nursing home staff to improve the wellbeing of cognitively impaired older people with agitated behaviors. Answer: The relationship between agitation and depression in dementia is complex and multifaceted. The provided abstracts suggest that agitation can be a symptom of depression in individuals with dementia, and that the two conditions often co-occur and can influence each other. For instance, one study found that increasing stages of dementia severity were associated with higher risk for physically aggressive and non-aggressive behaviors, verbally agitated behavior, and depression. 
Depressive symptoms were associated with physically aggressive and verbally agitated behaviors beyond the effects of dementia severity, indicating that depression might underlie aggressive behavior in advanced stages of dementia (PUBMED:22591584). Another study highlighted that depressive symptoms in mild cognitive impairment (MCI) are predictors for progression to Alzheimer's disease (AD), suggesting a link between depression and the worsening of cognitive and behavioral symptoms (PUBMED:25024328). Furthermore, the presence of significant depressive symptoms in MCI and AD patients was associated with an increased severity of frontal lobe symptoms, behavioral symptoms, and agitation. MCI patients with depressive symptoms showed more severe behavioral symptoms and more severe verbally agitated behavior than AD patients without depressive symptoms (PUBMED:23255479). This indicates that depression can exacerbate agitation in dementia patients. Additionally, a study on the treatment of neuropsychiatric symptoms in older adults with dementia found that for participants with depression and agitation, a significant change in passive behavior was related to a significant change in agitated behavior. By focusing treatment on passivity, both types of neuropsychiatric behaviors improved (PUBMED:20635806). While these studies do not specifically address whether agitation in the morning is a symptom of depression in dementia, they do establish a clear association between depression and agitation in dementia patients. It is plausible that agitation experienced in the morning, or at any other time of day, could be a manifestation of underlying depression in individuals with dementia. However, further research would be needed to confirm the timing and specific patterns of agitation related to depressive symptoms in dementia.
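As an aside on the survival-analysis machinery behind the hazard ratios cited above (PUBMED:25024328): a Cox proportional hazards fit reports exp(coef) as the hazard ratio for each covariate. A minimal sketch, assuming the lifelines package and a fully synthetic cohort (the column names and data below are invented):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 183  # the abstract followed 183 MCI patients

# Synthetic cohort (illustrative only): a depression flag that shortens time to progression.
depressed = rng.binomial(1, 0.4, n)
months = rng.exponential(scale=36.0 / (1.0 + depressed), size=n).clip(1, 60)
progressed = rng.binomial(1, 0.6, n)

df = pd.DataFrame({"depressed": depressed, "months": months, "progressed_to_ad": progressed})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="progressed_to_ad")
print(cph.summary[["coef", "exp(coef)", "p"]])  # exp(coef) is the hazard ratio, cf. HR 2.06 above
```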
Instruction: Is inferior dissection of the rectus sheath necessary during Pfannenstiel incision for lower segment Caesarean section? Abstracts: abstract_id: PUBMED:16621227 Is inferior dissection of the rectus sheath necessary during Pfannenstiel incision for lower segment Caesarean section? A randomised controlled trial. Objective: The objective was to determine the benefit of non-dissection of the rectus sheath inferiorly in a Pfannenstiel incision during an elective Caesarean section with regard to operative blood loss and post-operative pain. Design: A randomised controlled trial. Setting: The Royal Free Teaching Hospital in London. Population: One hundred and twenty women who underwent elective Caesarean section delivery. Method: Women were randomised to have dissection or non-dissection of the rectus sheath inferiorly during an elective Caesarean section. Main Outcome Measures: Estimated blood loss during the operation, as well as the difference between pre- and post-operative haemoglobin levels. Post-operative pain was assessed by the visual analogue scale, a verbal rating scale and the use of post-operative analgesia. Patient satisfaction was assessed by a verbal rating scale. Results: There was no significant difference in the estimated blood loss during the procedure between the two groups; however, the mean difference between the pre-operative and post-operative haemoglobin was significantly smaller in the study group (p=0.05). There was a statistically significant difference in both the visual analogue scale (p-value=0.03) and the verbal rating scale scores (p-value=0.02) for pain between the two groups, with lower scores for the study group. There was no overall difference in the verbal rating scale scores for patient satisfaction. Conclusion: Non-dissection of the rectus sheath inferiorly in Pfannenstiel incisions during Caesarean section procedures is associated with a significant reduction in the post-operative pain as well as a smaller drop in post-operative haemoglobin. abstract_id: PUBMED:34321988 Pfannenstiel incision for surgical excision of a huge pelvi-abdominal cystadenoma: a case report. A 56-year-old post-menopausal woman, with 3 previous caesarean sections (CSs), presented to the emergency department with abdominal distension, without abdominal pain, tenderness, and/or rigidity. The abdominal examination of the studied woman showed a mobile, pelvi-abdominal mass 4 fingers breadth above the umbilicus. Magnetic resonance imaging study of the mass showed a large, well-defined, multi-locular cystic mass measuring 25.5 x 21 cm, which was most probably a right ovarian cystadenoma. The studied woman signed a written consent form for total abdominal hysterectomy and bilateral salpingo-oophorectomy, after the normal tumour markers, and pre-operative investigations. Under general anaesthesia, an elliptical Pfannenstiel skin incision was done to remove the old CSs scars, followed by opening of the patient's anterior abdominal wall in layers. Total abdominal hysterectomy and unilateral left SO were done first, to deliver the ovarian mass easily and intact outside the abdomen after the uterus. Due to failure to deliver the mass outside the abdomen after removal of the uterus, the right infundibulopelvic ligament was ligated behind the mass, while the mass was still inside the abdomen. A longitudinal midline incision in the upper flap of the rectus sheath (not involving the skin) was added to deliver the excised right ovarian mass outside the abdomen. 
The right ovarian mass was successfully delivered intact outside the abdomen after the added longitudinal midline incision. This report highlights that the midline vertical incision is not the standard abdominal incision. Moreover, the transverse Pfannenstiel incision is cosmetically better, and should be routinely used to avoid unnecessary vertical abdominal incision. abstract_id: PUBMED:37829600 Can the Pfannenstiel skin incision length be adjusted according to the fetal head during elective cesarean delivery? Objective: The study aims to determine whether the Pfannenstiel skin incision can be adjusted according to the fetal head's occipitofrontal diameter (OFD) during primary cesarean delivery. Background: One hundred and fourteen eligible nulliparous women who delivered at term by cesarean section, in whom the Pfannenstiel skin incision was performed according to the OFD of the fetal head between June 2017 and September 2021, were included. Excluded cases were non-vertex presentations, all emergency cesarean sections, severe preeclampsia, women in an active phase of the first stage of labor and second stage of labor, placenta previa and low-lying placenta, multiple pregnancies, and uncontrolled gestational diabetes mellitus. Results: Among 114 eligible nulliparous women, the mean OFD was 116.1 ± 7.2 (99-138) mm, and the measurement of the Pfannenstiel skin incision length, performed according to the OFD, was found to be 122.8 ± 9.2 (100-155) mm. The difference between OFD and Pfannenstiel incision remained within 10 mm in 90 (82.5%), 10-20 mm in 17 (15.5%), and more than 20 mm in two women (1.8%). This technique was successful in 109 (95.6%) out of 114 women without extending the skin incision. In five women, the skin incision needed to be extended up to 38 mm. In 10 women (8.7%), the rectus abdominis muscle was cut partially to deliver the fetal head. The mean fetal umbilical artery pH was 7.33 ± 0.05. No neonatal hypoxia was encountered in the study. Conclusion: The Pfannenstiel skin incision can be adjusted according to the OFD with minimal margins of error. This technique may provide better cosmetic results by avoiding unnecessarily prolonged incisions with similar newborn outcomes. Clinical Trial Registration: Clinicaltrials.gov, identifier [NCT05632796]. abstract_id: PUBMED:21846453 Partial herniation through the peritoneal defect of a Pfannenstiel incision: a rare complication of non-closure. Background: The Pfannenstiel incision is used almost invariably in Caesarean section. With Caesarean section rates increasing, the Pfannenstiel is a commonly performed incision. The prevailing recommendation is not to close the peritoneum when closing a Pfannenstiel incision, and peritoneal non-closure does not appear to statistically influence postoperative complication rates. Case: A 33-year-old woman presented with severe, intermittent lower abdominal pain one year after a Caesarean section. Laparoscopy showed a hernial defect at the Pfannenstiel incision, between the left rectus abdominis muscle and the anterior rectus sheath. Mesh repair was performed with uncomplicated postoperative outcome. Conclusion: The current case illustrates that complications specific to non-closure of the peritoneum do arise. We advocate that laparoscopy should be considered for any patient with persistent, severe, or atypical pain following a Pfannenstiel incision. abstract_id: PUBMED:24592067 A randomized comparative study on modified Joel-Cohen incision versus Pfannenstiel incision for cesarean section.
Objective: Pfannenstiel incision is the most commonly used incision for cesarean section, but may not be the best. This study compared the modified Joel-Cohen incision with the Pfannenstiel incision to evaluate whether techniques to open the abdomen might influence operative time, and maternal and neonatal outcomes. Material And Methods: In a randomized comparative trial, 302 women with gestational age >34 weeks, requiring cesarean section, were randomly assigned to either modified Joel-Cohen incision or Pfannenstiel incision for entry into the peritoneal cavity. The primary outcome measure was total time required for performing the operation and secondary outcome measures were baby extraction time, number of haemostatic procedures used in the abdominal wall, postoperative morbidity, postoperative hospital stay and neonatal outcome. Results: Mean total operative time was significantly less in the modified Joel-Cohen group as compared to the Pfannenstiel group (29.81 vs 32.67 min, p<0.0001, 95%CI=2.253 to 3.467). Time taken to deliver the baby and haemostatic procedures required during operation were also significantly less in the modified Joel-Cohen group as compared to the Pfannenstiel group. Requirement of strong analgesics was higher in the Pfannenstiel group (53.64% vs 21.85%, p<0.0001). There was no statistically significant difference in the incidence of postoperative wound complications but postoperative stay in hospital was significantly less in the modified Joel-Cohen group (p=0.002). Neonatal outcomes were similar in both groups. Conclusion: The modified Joel-Cohen incision for entry into peritoneal cavity during cesarean section is associated with reduced mean total operative and baby extraction times with less postoperative pain and shorter hospital stay, which may be beneficial and cost effective. abstract_id: PUBMED:25276647 The influence of Pfannenstiel incision scarring on deep inferior epigastric perforator. Background: Abdominal wall free flaps are used most frequently in autologous breast reconstruction, and these flaps require intact and robust deep inferior epigastric perforator (DIEP) vessels. Pfannenstiel incisions are often present during preoperative visits for breast reconstruction and could potentially signal compromised blood supply to the lower abdominal wall. In this study, we compared the number of DIEP vessels between patients with and without Pfannenstiel incisions undergoing autologous breast reconstruction. Methods: A retrospective review of medical records was performed for patients with (study) and without (control) Pfannenstiel incisions (n=34 for each group) between June 2010 and July 2013. In addition to patient demographics, number of caesarian sections, and outcomes of free flap reconstruction, abdominal wall vasculature was compared using the preoperative computed tomography angiographic data between the groups. For each patient, vessels measuring greater than 1 mm were counted and divided into four sections of the lower abdominal wall. Results: The mean number of perforator vessels was 10.6 in the study group and 11.4 in the control group, which was not statistically different (P=0.575). Pfannenstiel incisions with history of repeat caesarian sections were not associated with decreased number of perforator vessels. Conclusions: Pfannenstiel scars are associated with neither a change in the number of DIEP vessels nor decreased viability of a free transverse rectus abdominis myocutaneous and DIEP flap.
Lower abdominal free flaps based on DIEP vessels appear safe even in patients who have had multiple caesarian sections through Pfannenstiel incisions. abstract_id: PUBMED:35645231 Scar Endometriosis: A Rare Cause of Abdominal Pain. Scar endometriosis or incisional endometriosis is the presence of endometrial tissues with glands in the previous incision or scar. Its overall estimated incidence after cesarean section and after hysterectomy is 0.03-0.4% and 1.08-2%, respectively. The patient presents with non-specific symptoms such as cyclical abdominal pain at the site of a previous surgical incision and scar and an abdominal lump with a cyclical increment in size, which is tender. The diagnosis is made only after the surgical excision with confirmation by histopathological analysis. We present the case of a 31-year-old female complaining of cyclical abdominal pain and a lump on the right side of a Pfannenstiel incision for five months. She had undergone two Lower Segment Caesarean Sections (LSCSs); the last surgery was eight months prior. Surgical excision was planned with the corresponding clinical features and radiological data. After the surgical excision, the sample was sent for histopathological examination, and scar endometriosis was diagnosed. abstract_id: PUBMED:26376528 Two techniques of pyramidalis muscle dissection in Pfannenstiel incision for cesarean section. Objectives: The aim of the study was to compare two techniques of pyramidalis muscle dissection during cesarean section. Material And Methods: A total of 108 patients undergoing a cesarean section were randomly allocated to group I (N = 57), with the pyramidalis muscle left attached to the rectus muscles, and group II (N = 51), with preservation of the connection between the pyramidalis muscle and the rectus sheath. Results: There were no statistically significant differences between the groups regarding surgery duration, blood loss and postoperative pain. After three months, patients from group II more frequently reported paresthesia in the scar region (47.1 vs. 28.1%; p = 0.041), but their self-assessment of the abdominal appearance and presence of the bulging below the wound were comparable with group I. Conclusions: Neither of the two techniques of pyramidalis muscle dissection appears to be superior to the other. The technique leaving the pyramidalis muscle attached to the fascia gave more frequent paresthesia during a 3-month follow-up. abstract_id: PUBMED:24592105 Caesarean deliveries by Pfannenstiel versus Joel-Cohen incision: A randomised controlled trial. Objective: This study was designed to compare the Pfannenstiel versus Joel-Cohen incisions during caesarean deliveries. Material And Methods: Women undergoing caesarean deliveries (n=153) were randomly assigned to the conventional Pfannenstiel or the Joel-Cohen incision. The outcome measures included postoperative pain, requirement for analgesics, operative time and other postoperative data. Results: Maternal age, parity, gestational age and indications for caesarean delivery were similar across groups. Total operative time, postoperative recovery duration, time to get out of bed, to walk straight without support, to detect audible intestinal sounds and to pass gases or stools were shorter in the Joel-Cohen group. Postoperative haematocrit decreases and estimated intraoperative blood loss were similar between the two techniques. Moderate and severe pain at 6, 12 and 18 hours postoperatively was less frequent after the Joel-Cohen technique.
Conclusion: Joel-Cohen incision in the non-scarred abdomen may provide a faster technique for caesarean section with less postoperative pain and probably early postoperative recovery in our circumstances. abstract_id: PUBMED:37064074 Strangulated interstitial interparietal hernia following lower segment caesarean section: a case report. Interparietal hernias are rare and occur most commonly in the long-term postoperative setting following abdominopelvic surgery. This case report describes the development of a strangulated interparietal hernia through a defect in the posterior rectus sheath 4 days post-lower segment caesarean section (LSCS) in a 36-year-old female. Whilst she had an acute abdomen, she did not have a palpable incisional hernia. Her interparietal hernia was identified via a computed tomography scan of her abdomen and pelvis. She underwent a midline laparotomy where ischaemic small bowel was found within her hernia. This was resected and she had an unremarkable recovery. This case is the first of its kind and highlights the diagnostic challenge of interparietal hernias in acute postoperative patients as well as the importance of having a low threshold for suspicion for complicated interparietal hernias in patients with an acute abdomen following LSCS. Answer: Inferior dissection of the rectus sheath during a Pfannenstiel incision for lower segment Caesarean section is not necessary and may be associated with benefits when not performed. A randomised controlled trial found that non-dissection of the rectus sheath inferiorly is associated with a significant reduction in post-operative pain and a smaller drop in post-operative haemoglobin levels, without a significant difference in estimated blood loss during the procedure (PUBMED:16621227). This suggests that avoiding inferior dissection of the rectus sheath can be beneficial for patient outcomes in terms of pain and blood loss management.
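The trial behind this answer compared a continuous outcome (haemoglobin drop) and ordinal pain scores between two arms of 60 women each. A minimal sketch of the corresponding two-sample tests on invented data (a t-test for the Hb drop, a rank-based test for the VAS pain scores, since VAS is ordinal):

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(2)

# Invented outcomes for 60 women per arm (not the trial's data).
hb_drop_dissect = rng.normal(1.4, 0.6, 60)     # g/dL fall in haemoglobin
hb_drop_nodissect = rng.normal(1.1, 0.6, 60)
t, p_t = ttest_ind(hb_drop_dissect, hb_drop_nodissect)
print(f"t-test on Hb drop: t = {t:.2f}, p = {p_t:.3f}")

vas_dissect = rng.integers(2, 9, 60)           # 0-10 visual analogue pain scores
vas_nodissect = rng.integers(1, 8, 60)
u, p_u = mannwhitneyu(vas_dissect, vas_nodissect)
print(f"Mann-Whitney on VAS: U = {u:.0f}, p = {p_u:.3f}")
```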
Instruction: Does a Simplified 6-Joint Ultrasound Index Correlate Well Enough With the 28-Joint Disease Activity Score to Be Used in Clinical Practice? Abstracts: abstract_id: PUBMED:27219303 Does a Simplified 6-Joint Ultrasound Index Correlate Well Enough With the 28-Joint Disease Activity Score to Be Used in Clinical Practice? Objective: Ultrasound (US) has become an important tool in the management of rheumatoid arthritis (RA) but it is time consuming in clinical practice. We compared 3 US indices (with different numbers of joints) with disease activity measured by the 28-Joint Disease Activity Score (DAS28) in order to find the most parsimonious index still useful in clinical practice. Methods: Sixty consecutive RA patients were included. The DAS28 score was calculated by the attending rheumatologist, and later in the day, they underwent US examination by another rheumatologist trained in US (bilateral gray-scale and power Doppler examination of the wrist and metacarpophalangeal and proximal interphalangeal joints). Three different US indices were constructed: index A (22 joints), index B (10 joints), and index C (6 joints). Results: All 3 US indices were significantly higher in patients with active disease versus inactive disease (P < 0.05 for all 3). Ultrasound index C showed the best correlation with DAS28 (rho = 0.5020, P < 0.0001) and a very good discriminative value for moderate to high disease activity (DAS28 >3.2) and for absence of remission (DAS28 >2.6) (areas under receiver operating characteristic curve = 0.75 and 0.80, respectively). A cutoff value of 3 in US index C showed sensitivity of 88.89% and specificity of 66.67% for absence of remission. Correlation between the 3 US indices was excellent. Conclusions: A US index of 6 joints (both wrists and second and third metacarpophalangeal joints bilaterally) correlated well with disease activity measured by DAS28 and may be used to evaluate RA patients in daily practice. abstract_id: PUBMED:23754243 Test-retest reliability of the disease activity score 28 CRP (DAS28-CRP), the simplified disease activity index (SDAI) and the clinical disease activity index (CDAI) in rheumatoid arthritis when based on patient self-assessment of tender and swollen joints. Composite disease activity scores are frequently used in daily practice as tools for treatment decisions in patients with rheumatoid arthritis (RA). If reliable, patient-reported disease activity may be time saving in the busy clinic. The objective was to examine the test-retest reliability of the Disease Activity Score 28 CRP (DAS28-CRP) with four variables (4v) and three variables (3v), the Simplified Disease Activity Index (SDAI) and the Clinical Disease Activity Index (CDAI) when based on patient self-assessment of tender and swollen joints and to examine the agreement between these scores and physician-derived scores. Thirty out-clinic RA patients with stable disease were included. A joint count was performed two times 1 week apart by the patient and by an experienced physician. Test-retest reliability was expressed as the least significant difference (LSD), as the LSD in percent of the mean score (%LSD) and as intra-individual coefficients of variation (CVi). Mean scores based on physician vs. patient joint counts (visit 1) were: DAS28-CRP(4v) 3.5 ± 1.0 vs. 3.6 ± 1.1 (not significant (NS)), DAS28-CRP(3v) 3.4 ± 0.9 vs. 3.5 ± 0.9 (NS), SDAI 14.2 ± 9.4 vs. 14.1 ± 9.4 (NS) and CDAI 13.4 ± 9.3 vs. 13.3 ± 9.4 (NS).
The LSDs (%LSD) for duplicate assessments of patient-derived scores (visit 2 vs. 1) were: DAS28-CRP(4v) 0.8 (23.2), DAS28-CRP(3v) 0.9 (25.2), SDAI 8.3 (59.9) and CDAI 8.4 (63.8). Similar LSDs were found for differences between duplicate assessments of physician-derived scores and for differences between physician and patient-derived scores. CVis for SDAI and CDAI were significantly higher than for DAS28-CRP(4v) and DAS28-CRP(3v) (p < 0.005). Patient- and physician-derived scores agreed closely on group level. On the individual level, the LSDs between patient- and physician-derived scores were considerable but corresponded to both patient and physician intra-observer LSDs. Thus, scores based on patient-performed joint counts may be an alternative to traditional physician-derived scores in patients with stable disease. abstract_id: PUBMED:34175145 Correlation of Ultrasound Synovitis Joint Count with Disease Activity and Its Longitudinal Variation with Treatment Response to Etanercept in Rheumatoid Arthritis. More research is needed into rheumatoid arthritis (RA), and ultrasound (US) synovitis is a promising factor for assisting in the management of RA; however, related research is extremely limited. The goal of this study was to evaluate the correlation of US synovitis joint count with clinical features, and its longitudinal changes with treatment response to etanercept in RA. We consecutively enrolled 117 people with active RA being treated with etanercept. US synovitis joint count was evaluated in 28 joints at baseline (W0), week 4 (W4), week 12 (W12) and week 24 (W24) after initiation of etanercept treatment. The mean (±standard deviation), median, inter-quartile range, and total range of the US synovitis joint count at W0 were 9.3 ± 4.0, 9.0, 7.0-11.0 and 2.0-21.0, respectively. US synovitis joint count was positively associated with tenderness joint count, swollen joint count, erythrocyte sedimentation rate, 28-joint Disease Activity Score based on erythrocyte sedimentation rate and Health Assessment Questionnaire-Disability Index score. Then participants were categorized into response and non-response groups according to their response status at W24. Further analyses showed that US synovitis joint count gradually decreased from W0 to W24, and displayed a more notable declining trend in the response group compared with the non-response group. In addition, US synovitis joint count at W0 and W4 was similar between groups, but at W12 and W24 it was markedly decreased in the response group compared with the non-response group. In conclusion, US synovitis joint count correlates with disease activity, and its longitudinal decrease is associated with treatment response to etanercept in RA. abstract_id: PUBMED:37844912 Clinical disease activity index applicability in lupus arthropathy: Unraveling underdiagnosed joint activity. Introduction: Lupus arthropathy (LA) ranges from arthralgia and non-deforming arthritis to severe forms such as Jaccoud-type deformities and mutilating arthritis. Considering the evolving concept of LA, measuring arthritis activity in lupus patients may require a more practical and sensitive tool other than the classical composite scores. Methods: In this cross-sectional study, we evaluated the articular pattern of a sample of SLE patients which were divided into those that scored in articular domain on Systemic Lupus Erythematosus Disease Activity Index 2000 (SLEDAI-2K) and those with activity arthritis using the Clinical Disease Activity Index (CDAI).
We then analyzed the association between CDAI and arthritis by SLEDAI-2K as well as its association with the presence or not of Jaccoud-type arthropathy (JA). Results: A total of 127 patients with SLE were evaluated. According to SLEDAI-2K, 17 (13.4%) patients scored in its joint criteria and 32 patients (25.19%) were considered to have some articular activity by CDAI. A total of 16 patients (50%) who scored some activity on CDAI did not score in articular domain of SLEDAI-2K. Also, the presence of Jaccoud-type arthropathy was significantly associated with arthritis activity according to the CDAI score (p = .014) but not with SLEDAI-2K joint criteria (p = .524). Conclusion: The CDAI was not directly associated with the presence of arthritis by the joint criteria of SLEDAI-2K and the presence of JA was significantly associated with the CDAI but not with arthritis at SLEDAI-2K. abstract_id: PUBMED:25371736 Application of high frequency color Doppler ultrasound in the monitoring of rheumatoid arthritis treatment. The aim of the present study was to explore the use of high frequency color Doppler ultrasound to measure synovial thickness and blood flow to assess the therapeutic value of the recombinant human tumor necrosis factor (TNF) II receptor antibody fusion protein in rheumatoid arthritis (RA) treatment. A total of 36 clinically-diagnosed patients with RA were treated with methotrexate tablets or the recombinant TNF-receptor antibody fusion protein for 24 weeks. Joint synovial thickness and synovial blood flow integrity were monitored by high frequency color Doppler in the second metacarpophalangeal joint in one hand. The correlation of the erythrocyte sedimentation rate, C-reactive protein (CRP) and 28-joint disease activity score (DAS28) with the ultrasound parameters were analyzed. Metacarpophalangeal second joint 2 (MCP2) synovial thickness, wrist joint synovial thickness and MCP2 synovial blood flow, prior and subsequent to the treatment, have a high correlation with DAS28 (P<0.05), and the MCP2 synovial blood flow integral has a strong correlation with CRP. Evaluating the wrist joint synovial thickness and synovial integrity of the second metacarpophalangeal joint using high frequency ultrasound detection can effectively evaluate the disease status in patients with RA. This procedure is potentially valuable as a means of evaluating the curative effects of RA treatments. abstract_id: PUBMED:27692392 A new musculoskeletal ultrasound scoring system (US10) of the hands and wrist joints for evaluation of early rheumatoid arthritis patients. Objective: To propose a novel ultrasound scoring system for hand and wrist joints (US10) for evaluation of patients with early rheumatoid arthritis (RA) and to correlate the US10 with clinical, laboratory and functional variables. Methods: Forty-eight early RA patients underwent clinical and laboratory evaluations as well as blinded ultrasound (US) examinations at baseline, three, six and 12 months. The proposed US10 system involved the assessment of the wrist, second and third metacarpophalangeal and proximal interphalangeal joints. The score consisted of inflammation parameters (synovial proliferation [SP], power Doppler [PD] and tenosynovitis [TN]) and joint damage parameters (bone erosion [BE] and cartilage damage [CD]). SP, PD, BE and CD were scored qualitatively (0-1) and semi-quantitatively (grades 0-3). Tenosynovitis was scored as presence/absence.
The evaluation also involved the 28-Joint Disease Activity Score (DAS28), Health Assessment Questionnaire (HAQ) and C-reactive protein level (CRP). Results: Mean duration of symptoms was 7.58±3.59 months. Significant correlations (p<0.05) were found between inflammation parameters and CRP at baseline and between the changes in these variables throughout the study. Significant correlations (p<0.05) were found between DAS28 score and both PD and TN at baseline and between the changes in DAS28 score and both SP and TN throughout the follow up. Moreover, significant correlations were found between the changes in inflammation parameter scores and HAQ score throughout the follow up. Conclusion: The proposed US10 scoring system proved to be a useful tool for monitoring inflammation and joint damage in early RA patients, demonstrating significant correlations with longitudinal changes in disease activity and functional status. abstract_id: PUBMED:19962620 The Simplified Disease Activity Index and Clinical Disease Activity Index to monitor patients in standard clinical care. Rheumatoid arthritis (RA) disease activity plays a central role in causing disability directly and via indirect effects mediated through joint damage, a major sequel of persistent active disease. Evaluation of RA disease activity is therefore important to predict the outcome and effectiveness of therapeutic interventions during follow-up. However, disease activity assessment is among the greatest challenges in the care of patients with RA. The authors regard measurement of activity as an essential element in following the fate of joint diseases such as RA. This evaluation can be facilitated by the use of reduced joint counts and simple indices, such as the Simplified Disease Activity Index (SDAI) and the Clinical Disease Activity Index (CDAI). These scores are validated outcomes for RA and allow the assessment of actual disease activity, response to therapy, and achievement of particular states such as remission. The simplicity of these scores enables patients to understand the level of their disease activity, as assessed by the rheumatologist, and to correlate increments and decrements of disease activity directly with all aspects of the disease. Moreover, remission criteria of CDAI and SDAI are currently the most stringent. abstract_id: PUBMED:33159378 Joint damage in rheumatoid arthritis: Lessons learned from an extended 36-joint ultrasonography and 28-joint Disease Activity Score assessments. Aim: To study joint damage severity in rheumatoid arthritis (RA) patients classified using ultrasound power Doppler (PD) and gray-scale (GS) joint inflammation outcomes and the 28-joint Disease Activity Score (DAS28). Method: Ultrasound erosion scores were compared between (a) patients in group 1 (PD positive and GS ≥ median score), group 2 (PD negative and GS ≥ median score) and group 3 (PD positive and GS < median score) vs group 4 (PD negative and GS < median score) and (b) patients with high, moderate and low DAS28 scores vs those in DAS28 remission. Comparative analyses were performed using the 2-sample Student's t test. Results: There were 1080 joints and 1800 joint recesses from 36 joints scanned in 30 RA adult patients (mean DAS28, 3.58; mean disease duration, 70.3 months) in this cross-sectional study.
The mean and 95% CI ultrasound erosion scores were significantly higher (P = .026) for groups 1 (9.75, 6.69-12.81) vs 4 (3.4, 1.11-5.69) with a difference (95% CI) of 6.35 (0.78-11.83), but not significantly different (P values all > .05) for (a) groups 2 and 3 vs 4 and (b) patients with high, moderate and low DAS28 scores vs those in DAS28 remission. Conclusion: Severity of ultrasound-detected bone erosions was significantly greater when both positive PD and a greater degree of GS joint inflammation were present in RA. This association was not observed when either component was absent. Single time point ultrasound joint inflammation assessment - and not DAS28 - is reflective of joint damage severity in RA patients. abstract_id: PUBMED:24924839 Interleukin-23 and its correlation with disease activity, joint damage, and functional disability in rheumatoid arthritis. The purpose of this study was to compare the serum interleukin (IL)-23 levels between rheumatoid arthritis (RA) patients and healthy controls and to determine the correlation of IL-23 levels with disease activity, joint damage and functional disability in RA. Serum samples were obtained from 45 patients with RA and 45 healthy controls. The enzyme-linked immunosorbent assay method was used for quantitative analysis of IL-23. All the RA patients were assessed for disease activity based on the 28-joint disease activity score, joint damage based on modified Sharp score, and functional ability using the Health Assessment Questionnaire-Disability Index. The mean serum IL-23 level was much higher among the RA patients (24.50 ± 13.98 pg/mL) compared to the controls (5.98 ± 3.40 pg/mL; p < 0.01). There was a significant positive relationship between IL-23 levels and disease activity and questionnaire scores (p = 0.003 and 0.020, respectively). On logistic regression analysis, IL-23 levels were significantly higher in patients with moderate to high disease activity (p = 0.008, odds ratio = 1.073, 95% confidence interval = 1.019-1.130) and patients with significant functional disability (p = 0.008, odds ratio = 1.085, 95% confidence interval = 1.021-1.153). RA patients have significantly higher levels of serum IL-23. The IL-23 levels correlate well with disease activity and functional disability but not with radiographic joint damage. abstract_id: PUBMED:17678828 The Simplified Disease Activity Index (SDAI) and Clinical Disease Activity Index (CDAI) to monitor patients in standard clinical care. Rheumatoid arthritis (RA) disease activity plays a central role in causing disability both directly and via indirect effects mediated through joint damage, a major sequel of persistent active disease. Evaluation of RA disease activity is therefore important to predict the outcome and effectiveness of therapeutic interventions during follow-up. However, disease activity assessment is among the greatest challenges in the care of patients with RA. We regard measurement of activity as an essential element of following the fate of joint diseases like RA. This evaluation can be facilitated by the use of reduced joint counts and simple indices, such as the Simplified Disease Activity Index (SDAI) and the Clinical Disease Activity Index (CDAI). These scores are validated outcomes for RA and allow the assessment of: actual disease activity, response to therapy, and achievement of particular states such as remission.
The simplicity of these scores enables patients to understand the level of their disease activity, as assessed by the rheumatologist, and to correlate increments and decrements of disease activity directly with all aspects of the disease. Answer: Yes, a simplified 6-joint ultrasound index correlates well enough with the 28-joint Disease Activity Score (DAS28) to be used in clinical practice. A study compared three ultrasound (US) indices with different numbers of joints to the DAS28 and found that the 6-joint US index (index C), which includes both wrists and the second and third metacarpophalangeal joints bilaterally, showed the best correlation with DAS28 (rho = 0.5020, P < 0.0001). This index also demonstrated a very good discriminative value for moderate to high disease activity (DAS28 >3.2) and for the absence of remission (DAS28 >2.6), with areas under the receiver operating characteristic curve of 0.75 and 0.80, respectively. A cutoff value of 3 in the US index C showed a sensitivity of 88.89% and specificity of 66.67% for the absence of remission. The correlation between the three US indices was excellent, suggesting that the 6-joint US index can be effectively used to evaluate RA patients in daily practice (PUBMED:27219303). A short computational sketch of these composite indices follows below.
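For readers unfamiliar with the composite scores discussed above, the sketch below implements the standard published formulas for the 4-variable DAS28-CRP, the CDAI and the SDAI, plus the cutoff-of-3 rule the first abstract reports for the 6-joint ultrasound index. The formulas are the widely used ones; the example inputs are invented, and the per-joint gray-scale/power Doppler scoring behind the 6-joint index is simplified to a single summed value here.

from math import sqrt, log

def das28_crp(tjc28, sjc28, crp_mg_l, patient_global_0_100):
    # 4-variable DAS28-CRP: tender/swollen counts over 28 joints,
    # CRP in mg/L, patient global assessment on a 0-100 mm VAS.
    return (0.56 * sqrt(tjc28) + 0.28 * sqrt(sjc28)
            + 0.36 * log(crp_mg_l + 1) + 0.014 * patient_global_0_100 + 0.96)

def cdai(tjc28, sjc28, pga_0_10, ega_0_10):
    # Clinical Disease Activity Index: a simple sum, no lab value needed.
    return tjc28 + sjc28 + pga_0_10 + ega_0_10

def sdai(tjc28, sjc28, pga_0_10, ega_0_10, crp_mg_dl):
    # Simplified Disease Activity Index: CDAI plus CRP in mg/dL.
    return cdai(tjc28, sjc28, pga_0_10, ega_0_10) + crp_mg_dl

# Invented example patient
das28 = das28_crp(tjc28=4, sjc28=2, crp_mg_l=12.0, patient_global_0_100=35)
in_remission = das28 <= 2.6      # remission threshold used in the abstract
us_index_c = 4                   # hypothetical summed 6-joint ultrasound score
flags_active = us_index_c >= 3   # cutoff of 3 (treated here as >= 3) flagged absence of remission
print(f"DAS28-CRP = {das28:.2f}, remission = {in_remission}, US index flags active = {flags_active}")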
Instruction: Can ultrasonography improve on radiographic assessment in osteoarthritis of the hands? Abstracts: abstract_id: PUBMED:18037626 Can ultrasonography improve on radiographic assessment in osteoarthritis of the hands? A comparison between radiographic and ultrasonographic detected pathology. Objectives: Ultrasonography (US) is used in rheumatology to assess small joints in inflammatory arthritis. Recently there has been some investigation into the utility of US in osteoarthritis (OA), however there has been little comparison of US to other imaging modalities in OA. This study aimed to compare the detection of osteophytosis and joint space narrowing (JSN) by US and conventional radiography (CR) in OA of the hand. Methods: Subjects with OA of the hand underwent US and CR examination of the small joints of both hands to identify osteophytosis and joint space narrowing. Results: 1106 joints of 37 patients were imaged with US and CR. US detected osteophytosis in 448 joints, compared to CR that detected osteophytosis in 228 joints (approximately 30% fewer joints). Where osteophytosis was detected by US but not CR, this was usually proximal to the joint line. Joint space narrowing was detected in 450 joints by US, but only 261 joints by CR. The distribution of US and CR detected osteoarthritis changes in this cohort was consistent with population studies of radiographic hand OA, although metacarpophalangeal (MCP) involvement was higher than might be expected. Conclusions: US detected more osteophytosis and joint space narrowing than CR in OA of the hand. Involvement of MCP joints was more common than would be expected from population radiographic studies. The increased detection of OA structural pathology by US may make this a useful tool for hand OA research. abstract_id: PUBMED:32655702 Hand X-ray examination in two planes is not required for radiographic assessment of hand osteoarthritis. Aims: Radiographic imaging is essential in the diagnosis of hand osteoarthritis (HOA); however, it is unknown whether a multiplanar examination would add essential information to dorso-palmar (dp) views alone. This study evaluated whether an additional radiographic view would aid clinicians in the diagnostic process of HOA. Methods: The dp radiographs of both hands from 159 HOA patients were assessed according to the scores described by Kellgren and Lawrence (K/L). In oblique view images, structures similar to classic osteophytes (OPs) were found, namely bony proliferations on the dorsal and/or ventral margins of joints, and were documented as dorsal/ventral OPs (dvOPs). Function and pain were assessed by applying standardised read-out systems. Logistic regression analysis and Mann-Whitney tests were implemented. Results: The presence of dvOPs was associated with the degree of joint damage; however, dp views were sufficient to estimate radiographic changes. Only a few joints showed dvOPs as the only structural alteration; nevertheless, in almost all cases, classical radiographic OA changes were found in dp views of other joints of the same or the contralateral hand. The presence of dvOPs did not affect joint function or pain according to established scores, but was associated with radiographic progression in distal interphalangeal joints. Conclusion: This is the first study to confirm that additional radiographic planes, oblique/lateral views, are not necessary in the diagnostic process in HOA in daily clinical practice.
Nevertheless, the presence of dvOPs reflects more severe joint damage and is associated with radiographic progression in HOA; hence, oblique/lateral views could be a useful tool for academic purposes. abstract_id: PUBMED:12215868 Prevalence of radiographic osteoarthritis of the hands in Saudi Arabia. We studied the radiographic evidence of osteoarthritis (OA) of the hands in 300 patients (167 male, 133 female) attending 14 primary care facilities for a variety of medical conditions. Radiographic OA was seen in 30.3% of males and 36.3% of females. Most commonly affected were the metacarpophalangeal joints, followed by the distal interphalangeal and carpometacarpal joints. abstract_id: PUBMED:8816556 Kashin-Beck disease: radiographic appearance in the hands and wrists. Purpose: To review the radiographic changes of Kashin-Beck disease (KBD) in the hands and wrists and provide a method for the diagnosis, staging, and evaluation of the effectiveness of prevention and treatment of this disease. To perform radiographic-histologic correlation to facilitate an early diagnosis of KBD. Materials And Methods: Hand and wrist radiographs in 300 children and five autopsy cases with KBD from an endemic area of China were analyzed for radiographic changes in the epiphyseal plates, bone ends, epiphyses, and carpal bones. Histopathologic sections of the hand and wrist from five autopsies were studied for comparison with corresponding radiographs. Results: The chief histopathologic changes in KBD included chondronecrosis, secondary repair, and remodeling processes. These histopathologic stages occurred in isolation or simultaneously in a patient. The radiographic and histopathologic changes within cartilage were related specifically to the different sites of cartilage. Conclusion: Radiographic changes in the hands and wrists in patients with KBD result from atrophy and necrosis of chondrocytes and their subsequent repair processes in the deep layer of growing cartilage. abstract_id: PUBMED:31771037 Assessment of knee osteoarthritis by ultrasonography and its association with knee pain. Background: Pain is a significant cause of disability in knee osteoarthritis. Conventional radiography is widely used in the assessment of knee osteoarthritis, however radiographic findings do not correlate well with pain. Ultrasonography can be used to evaluate the soft tissue structures of the knee that can be related to pain. Objective: To evaluate pain-related soft tissue structures of the knee with ultrasonography. Methods: This cross-sectional study included a total of 198 knees from 99 patients with knee osteoarthritis. Knee pain and functional status were evaluated by performing visual analogue scale (VAS) and Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC). On the ultrasound, cartilaginous thickness, knee effusion and presence of Baker's cyst were assessed and the correlation to pain was investigated. Results: Baker's cyst was significantly more frequent in symptomatic knees (13.9%) compared to asymptomatic knees (2.5%). Patients with Baker's cyst had a significantly more limited degree of knee flexion, significantly higher resting VAS pain scores and worse WOMAC scores compared to patients without Baker's cyst. In log-linear analysis, presence of Baker's cyst increased the risk of pain by 2.94 times.
Conclusion: Ultrasound as a modality that is easily accessible, inexpensive and without radiation exposure is helpful to demonstrate factors related to pain in knee osteoarthritis by allowing assessment of soft tissue structures. abstract_id: PUBMED:27942107 Relationship between atherosclerosis and knee osteoarthritis as graded by radiography and ultrasonography in females. [Purpose] The aim of this study was to assess the relationship between atherosclerosis and knee osteoarthritis grade in women as assessed by both ultrasonography and radiography. [Subjects and Methods] Seventy women diagnosed with knee osteoarthritis were classified into two groups according to cartilage grading/radiographic grading. Patients with Kellgren-Lawrence grades 1 and 2 were included in group 1, while those with Kellgren-Lawrence grades 3 and 4 were included in group 2. Patients with cartilage grades 1-3 were included in group 1, while those with cartilage grades 4-6 were included in group 2. Patients were clinically assessed using a visual analog scale and the Western Ontario and McMaster Universities Arthritis Index. Radiographic osteoarthritis grade was scored using the Kellgren and Lawrence grading system. Using ultrasonography, symptomatic knees were graded and evaluated for distal femoral cartilage thickness. Carotid intima-media thickness and serum lipid levels were measured to assess atherosclerosis. [Results] Carotid intima-media thickness measurements were higher in group 2 than in group 1 as determined by the Kellgren-Lawrence and cartilage grading systems. Carotid intima-media thickness measurements were positively correlated with both the ultrasonographic cartilage grade and the Kellgren-Lawrence grade. [Conclusion] The results of this study suggest that assessment of osteoarthritis by ultrasonography was successful and comparable to assessment with radiography. We showed a correlation between atherosclerosis and ultrasonographic knee osteoarthritis grade. abstract_id: PUBMED:30725363 An Assessment of Hand Erosive Osteoarthritis: Correlation of Radiographic Severity with Clinical, Functional and Laboratory Findings. Introduction: The present study aimed (a) to evaluate the clinical and radiographic characteristics of hand erosive osteoarthritis (EOA) in a group of consecutive patients, (b) to correlate the severity of radiographic involvement with clinical and laboratory findings and (c) to associate the levels of pain and functional impairment with some radiographic findings. Methods: Patients with EOA were consecutively enrolled. Inclusion criteria required the American College of Rheumatology (ACR) criteria for hand osteoarthritis and the presence of at least one joint in "E" or "R" phase according to Verbruggen-Veys. For each patient, demographic and clinical data were collected including evaluation of pain and function with the Australian Canadian Osteoarthritis Hand Index (AUSCAN) scale and Health Assessment Questionnaire (HAQ). Laboratory parameters and plain radiography of both hands were also collected. Each radiograph was evaluated in accordance with the Verbruggen-Veys classification and scored with the Kallman score. Results: During the study period 60 patients (M/F 13:47) with EOA were enrolled. More severe radiographic disease ("E" or "R") was often found at II and III distal interphalangeal (IP) joints. In addition, Kallman score, presence of osteophytes, erosions and joint space narrowing correlated significantly with duration of symptoms, AUSCAN, pain and active joints.
More severe radiographic involvement was associated with AUSCAN and with the presence of ankylosis only at proximal IP joints. Conclusion: The present study showed that EOA is characterised by a significant correlation between radiographic involvement and some clinical characteristics of the disease. However, an impairment of joint function was mainly associated with radiological proximal IP joint involvement, but not with other symptoms such as pain. abstract_id: PUBMED:31624483 Clinical Assessments of Hand Function in First Carpometacarpal Osteoarthritis Do Not Appear to Correlate with Radiographic Findings. Background: Thumb carpometacarpal (first CMC) osteoarthritis (OA), a degenerative process affecting hand use, is typically assessed by clinical examination and radiographs. This assessment determines treatment, but it may not reflect functional limitations. Questions/purposes: We aimed to explore the relationship between measures of hand function and radiographs in individuals with and without first CMC OA. Methods: We designed a cross-sectional, observational pilot study, enrolling five patients with first CMC OA (nine thumbs with modified Eaton-Littler grades ranging from 1 to 4, using retrospective radiographic data) and nine healthy controls. They underwent evaluation of hand function using four patient-reported outcome measures (PROMs)-the Patient-Specific Functional Scale (PSFS); the Patient-Rated Wrist/Hand Evaluation (PRWHE); the Disabilities of the Arm, Shoulder, and Hand (DASH); and the Modified Score for the Assessment and Quantification of Chronic Rheumatoid Affections of the Hands (M-SACRAH)-and one performance measure, the Arthritis Hand Function Test (AHFT). Spearman's ρ with 95% subject clustered bootstrapped confidence intervals was calculated to assess for correlations between radiographic findings and measures of hand function. Results: Only the DASH work score showed strong positive correlation with radiographic OA grade, with PSFS, PRWHE, M-SACRAH, and AHFT scores demonstrating low to moderate correlations. Notable differences were found between patients and control subjects in median scores of the DASH, PSFS, PRWHE, and M-SACRAH, as well as in the grip, pinch, and button scores of the AHFT. Conclusion: While only the DASH work score strongly correlated with radiographic grade of first CMC OA, several measures detected considerable differences in functional hand use between patients and control subjects. The findings of this pilot study suggest that hand function scores be considered in addition to radiographs when determining severity of first CMC OA. The findings can also inform the design of a larger, powered study. abstract_id: PUBMED:17908659 Sonographic imaging of meniscal subluxation in patients with radiographic knee osteoarthritis. Background/purpose: This study was undertaken to describe the sonographic features of meniscal subluxation in the weight-bearing position and to determine any association between meniscal subluxation and radiographic osteoarthritis. Methods: In total, 238 knees with symptoms were examined successfully with weight-bearing anteroposterior and lateral radiographs and high resolution ultrasonography. The radiographs were examined to determine whether participants had radiographic osteoarthritis, graded using the Kellgren-Lawrence Scale.
The degree of subluxation of the medial meniscus in each knee was measured using high resolution ultrasound with a 10-MHz linear transducer, at the level of the medial collateral ligament in weight-bearing condition. The degree of subluxation was compared in knees with the presence or absence of radiographic osteoarthritis using Student's t test. Additional analysis between knees with early and advanced radiographic osteoarthritis was also performed. Results: Meniscal subluxation for knees with (n = 141) and without (n = 97) radiographic signs of osteoarthritis were 4.3 ± 1.9 mm and 0.7 ± 0.6 mm, respectively. The difference was highly significant (p < 0.001). After age adjustment, the medial meniscal subluxation of age-matched subjects were 4.8 ± 1.7 mm for knees with radiographic osteoarthritis (n = 43) and 1.0 ± 0.8 mm for knees without such changes (n = 43). The difference between the two groups was still significant (p < 0.001). The greatest meniscal subluxation was seen in knees with advanced radiographic signs of osteoarthritis; no knee with osteoarthritic changes on radiographs had an undisplaced meniscus. Conclusion: Meniscal subluxation is a prominent feature on weight-bearing sonographic imaging in patients with radiographic osteoarthritis and could be considered as a risk factor for the development of knee osteoarthritis. By using musculoskeletal ultrasonography, one can detect this occult meniscal derangement early before the appearance of radiographic signs of osteoarthritis. abstract_id: PUBMED:35905573 Intra-articular triamcinolone hexacetonide injections in hands osteoarthritis ‒ A double-blinded randomized controlled trial with a one year follow-up. Objective: to evaluate the effectiveness of triamcinolone hexacetonide (TH) intra-articular injection (IAI) in hand interphalangeal joints (IP) of osteoarthritis (OA) patients to improve pain and joint swelling; improve function, goniometry, and grasping force, and assess IAI influence on radiographic evolution over 1-year. Methods: A randomized, double-blind study. 60 patients who underwent IAI at the most symptomatic IP joint were randomly assigned to receive TH+Lidocaine (LD) (TH/LD group) or just LD (LD group). Patients were assessed blindly for 1-year, at baseline and 1, 4, 8, 12, and 48 weeks. The following variables were assessed: articular pain and swelling, AUSCAN and COCHIN functional questionnaires, grip and pinch strength, goniometry, perception of improvement, acetaminophen consumption, and simple radiography. Repeated-measures ANOVA test was used to analyze the intervention response. Results: Sixty patients completed the follow-up. There were nine missed assessments. 97% were women; mean age of 61 years (SD = 8.2), and approximately 5 years of disease (SD = 3.6). Half of the patients presented radiographic classification Kellgren and Lawrence (KL) grades I and II, and the other half grades III and IV. The two groups evolved similarly at 48 weeks. The TH/LD group had a better evaluation in joint swelling and acetaminophen consumption (p = 0.04 and p < 0.001, respectively) at 48 weeks. Radiographically there was no statistical difference between groups (KL, p = 0.564; erosive OA, p = 0.999; worsening, p = 0.573). Conclusion: IAI of hand IP joints in OA is effective for the improvement of joint swelling and decrease of analgesic consumption and does not influence the radiographic evolution of the injected joint. Answer: Yes, ultrasonography can improve on radiographic assessment in osteoarthritis of the hands.
A study comparing the detection of osteophytosis and joint space narrowing (JSN) by ultrasonography (US) and conventional radiography (CR) in osteoarthritis (OA) of the hand found that US detected more osteophytosis and JSN than CR. Specifically, US detected osteophytosis in 448 joints, compared to CR which detected it in 228 joints, and US detected JSN in 450 joints, while CR detected it in only 261 joints. The increased detection of OA structural pathology by US suggests that it may be a useful tool for hand OA research (PUBMED:18037626). Additionally, ultrasonography has been shown to be helpful in demonstrating factors related to pain in knee osteoarthritis by allowing assessment of soft tissue structures, which are not typically evaluated by CR (PUBMED:31771037). Moreover, ultrasonography has been successful and comparable to radiography in assessing osteoarthritis and has shown a correlation between atherosclerosis and ultrasonographic knee osteoarthritis grade (PUBMED:27942107). Furthermore, sonographic imaging has been used to detect meniscal subluxation in patients with radiographic knee osteoarthritis, which could be considered as a risk factor for the development of knee osteoarthritis. This suggests that musculoskeletal ultrasonography can detect occult meniscal derangement early before the appearance of radiographic signs of osteoarthritis (PUBMED:17908659). In summary, ultrasonography has been demonstrated to detect more pathological changes associated with OA than conventional radiography and can assess soft tissue structures related to pain, which are not visible on radiographs. Therefore, ultrasonography can indeed improve on radiographic assessment in osteoarthritis of the hands.
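The head-to-head counts in this answer (osteophytosis in 448 joints on US vs. 228 on CR, out of the same 1106 joints) are paired binary readings: each joint was imaged with both modalities. A standard significance test for such paired detections is McNemar's test on the discordant pairs. The abstracts do not report the full 2x2 table, so the discordant counts below are hypothetical placeholders chosen only to be consistent with the published totals; the sketch uses the Python standard library alone.

from math import erfc, sqrt

def mcnemar_p(b, c):
    # Continuity-corrected McNemar test, 1 degree of freedom.
    # b = joints positive on US but negative on CR
    # c = joints positive on CR but negative on US
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of chi-square(1): P(X > x) = erfc(sqrt(x/2))
    return erfc(sqrt(chi2 / 2))

# Hypothetical split consistent with 448 US-positive and 228 CR-positive joints
p = mcnemar_p(b=230, c=10)
print(f"McNemar p-value: {p:.2e}")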
Instruction: Does preterm birth influence the response to growth hormone treatment in short, small for gestational age children? Abstracts: abstract_id: PUBMED:19067725 Does preterm birth influence the response to growth hormone treatment in short, small for gestational age children? Objective: To investigate whether prematurity has an independent influence on the response to GH treatment in short, small for gestational age (SGA) children. Design: A longitudinal 3-year GH study. Patients: A total of 392 prepubertal non-GH-deficient, short SGA children, comprising 138 preterm (< 36 weeks) and 254 term (≥ 36 weeks) children. Measurements: Height, weight, head circumference, skinfolds and serum IGF-I and IGFBP-3 levels were measured before start of GH treatment and after 6 months, 1, 2 and 3 years of treatment. Results: Preterm short SGA children were significantly lighter and shorter at birth after correction for gestational age than term short SGA children (P < 0.001). At start of GH treatment, preterm children were significantly shorter than term children when height was corrected for target height (TH). Preterm children were also significantly leaner as shown by a lower body mass index (BMI) standard deviation score (SDS) and a lower sum of four skinfolds SDS. Prematurity had no influence on childhood IGF-I and IGFBP-3 levels. The response to GH treatment was similar for preterm and term SGA children. Conclusions: Within a population of short SGA children, prematurity is associated with a smaller size for gestational age and a shorter height corrected for TH and leaner phenotype in childhood. The response to GH treatment is similar for preterm and term short SGA children. abstract_id: PUBMED:21430029 Growth hormone treatment in short children born prematurely--data from KIGS. Context: Children born prematurely with growth failure might benefit from GH treatment. Objectives: The aim was to evaluate the first year growth response to GH treatment in short children born prematurely and to identify predictors of the growth response. Design/patients: A total of 3215 prepubertal children born prematurely who were on GH treatment were selected from KIGS (The Pfizer International Growth Database), a large observational database. They were classified according to gestational age as preterm (PT; 33 to no more than 37 wk) and very preterm (VPT; <33 wk), and according to birth weight as appropriate for gestational age [AGA; between -2 and +2 sd score (SDS)] and small for gestational age (SGA; -2 SDS or below). Results: Four groups were identified: PT AGA (n = 1928), VPT AGA (n = 629), PT SGA (n = 519), and VPT SGA (n = 139). GH treatment was started at a median age of 7.5, 7.2, 6.7, and 6.0 yr, respectively. After the first year of GH treatment, all four groups presented a significant increase in weight gain and height velocity, with a median increase in height SDS higher than 0.6. Using multiple stepwise regression analysis, 27% of the variation in height velocity could be explained by the GH dose, GH peak during provocative test, weight and age at GH start, adjusted parental height, and birth weight SDS. The first year growth response of the children born PT and SGA could be estimated by the SGA model published previously. Conclusion: Short children born prematurely respond well to the first year of GH treatment. Long-term follow-up is needed. abstract_id: PUBMED:27803030 MANAGEMENT OF ENDOCRINE DISEASE: Growth and growth hormone therapy in short children born preterm.
Approximately 15 million babies are born preterm across the world every year, with less than 37 completed weeks of gestation. Survival rates increased during the last decades with the improvement of neonatal care. With premature birth, babies are deprived of the intense intrauterine growth phase, and postnatal growth failure might occur. Some children born prematurely will remain short at later ages and adult life. The risk of short stature increases if the child is also born small for gestational age. In this review, the effects of being born preterm on childhood growth and adult height and the hormonal abnormalities possibly associated with growth restriction are discussed, followed by a review of current information on growth hormone treatment for those who remain with short stature during infancy and childhood. abstract_id: PUBMED:19094073 The effect of growth hormone treatment on metabolic and cardiovascular risk factors is similar in preterm and term short, small for gestational age children. Context: We previously reported that short, small for gestational age (SGA) children who were born preterm have a lower body fat percentage and a higher blood pressure, insulin secretion and disposition index than short SGA children born at term. Whether preterm birth also influences these parameters during GH treatment is unknown. Objective: To compare blood pressure, insulin sensitivity, beta-cell function and body composition during 4 years of GH treatment, between preterm and term short SGA children. Patients: A total of 404 prepubertal non-GH-deficient short SGA children were divided into 143 preterm (< 36 weeks) and 261 term children. Outcome Measures: Height, blood pressure (n = 404), body composition measured by dual energy X-ray absorptiometry (DXA) (n = 138) and insulin sensitivity and beta-cell function calculated from a frequent sampling intravenous glucose tolerance test (FSIGT) with tolbutamide (n = 74) or from the homeostasis model assessment of insulin resistance (HOMA-IR) (n = 204). Results: In preterm and term children, GH treatment resulted in a similar decrease in systolic and diastolic blood pressure, body fat percentage, limb fat/total fat ratio and insulin sensitivity, and a similar increase in insulin secretion and disposition index. Lean body mass (LBM) corrected for gender and height increased in term children and did not change in preterm children. Multiple regression analysis revealed that this difference in GH effect on LBM was not associated with gestational age. Conclusion: The effect of GH treatment on metabolic and cardiovascular risk factors is similar in preterm and term short, SGA children. abstract_id: PUBMED:9401573 Nutritional management and growth hormone treatment of preterm infants born small for gestational age. In severe cases of intrauterine growth retardation, elective preterm delivery may provide the possibility for nutritional intervention to prevent some of the long-term consequences of the catabolic condition in utero. Neonatal nutritional management is aimed at providing a high protein intake of up to 4 g/kg/day in order to obtain the rapid increase in protein that is seen in normally growing infants during the early postnatal period. Unfortunately, due to impaired production of urea, high plasma levels of ammonia, which may be rate limiting with respect to an optimal gain in protein, are often observed in the preterm infant born small for gestational age (SGA).
In an attempt to stimulate protein synthesis in preterm infants born SGA, growth hormone (GH) treatment was given to seven such infants during the early postnatal period. The infants received daily subcutaneous injections of GH (1.0 IU/kg/day) from postnatal day 7 until a body weight of 2000 g was reached (postnatal week 7-8). A further seven preterm infants born SGA were studied as controls. GH treatment had no significant effects on growth, body composition, net protein gain and glucose metabolism. Furthermore, plasma levels of insulin-like growth factor I (IGF-1) and IGF-binding protein-3 revealed a normal developmental increase and were not significantly altered by GH treatment. These results may be explained by a relative GH insensitivity or resistance during this period of early preterm life. abstract_id: PUBMED:33786659 Incidence of and risk factors for short stature in children with chronic kidney disease: results from the KNOW-Ped CKD. Background: Preserving optimal growth has long been a significant concern for children with chronic kidney disease (CKD). We aimed to examine the incidence of and risk factors for short stature in Asian pediatric patients with CKD. Methods: We analyzed growth status by height, weight, and body mass index (BMI) standard deviation scores (SDSs) for 432 participants in the KoreaN cohort study for Outcome in patients With Pediatric Chronic Kidney Disease. Results: The median height, weight, and BMI SDSs were -0.94 (interquartile range (IQR) -1.95 to 0.05), -0.58 (IQR -1.46 to 0.48), and -0.26 (IQR -1.13 to 0.61), respectively. A high prevalence of short stature (101 of 432 patients, 23.4%) and underweight (61 of 432 patients, 14.1%) was observed. In multivariable logistic regression analysis, CKD stages 4 and 5 (adjusted odds ratio (aOR) 2.700, p = 0.001), onset before age 2 (aOR 2.928, p < 0.0001), underweight (aOR 2.353, p = 0.013), premature birth (aOR 3.484, p < 0.0001), LBW (aOR 3.496, p = 0.001), and low household income (aOR 1.935, p = 0.030) were independent risk factors associated with short stature in children with CKD. Conclusions: Children with CKD in Korea were shorter and had lower body weight and BMI than the general population. Short stature in children with CKD was most independently associated with low birth weight, followed by premature birth, onset before age 2, CKD stages 4 and 5, underweight, and low household income. Among these, underweight is the only modifiable factor. Therefore, we suggest children with CKD should be carefully monitored for weight, nutritional status, and body composition to achieve optimal growth. abstract_id: PUBMED:37025401 Racial differences in the associations between adiposity, placental growth hormone and inflammatory cytokines in pregnant women. Background: The prevalence of obesity among women of child-bearing age has contributed to an increased risk of pregnancy complications with a disproportional impact on women of lower socioeconomic status and among certain racial groups. In particular, socio-demographic and historical factors have resulted in higher rates of premature births and small-for-gestational age infants among Black women, which may be associated with placental function during pregnancy. The current study investigated the influence of maternal pre-pregnancy adiposity and race on the associations between inflammatory proteins, placental growth hormone (PGH), and infant birthweight. This information was collected for a subsample of 109 participants (Black, n = 39 vs.
White, n = 70) from the Brain and Early Experiences (BEE) study. Methods: Serum samples were acquired late in the second trimester to assess PGH levels, C-reactive protein (CRP), interleukin 6 (IL-6), interleukin 8 (IL-8), and interleukin-1 receptor antagonist (IL-1Ra). Participant questionnaire responses provided information on pre-pregnancy BMI, health, race, educational attainment, and infant birthweight. Bivariate correlations and multiple linear regression models were utilized to evaluate associations by race between preconception adiposity, inflammatory markers and PGH. Results: After controlling for covariates including maternal age and education, gestational age, and fetal sex, regression models indicated that pre-pregnancy BMI was negatively associated with PGH (β=-0.42, p<0.05) and IL-8 was positively associated with PGH (β=0.35, p<0.05) among the Black mothers only; neither were significantly associated with PGH in the White mothers. When extending models to birth outcomes, BMI was positively associated with birthweight corrected for gestational age (BWz) (β=0.24, p<0.05) and educational attainment was negatively associated with BWz (β=0.28, p<0.05) for infants of White women. In contrast, neither variable was predictive of BWz for infants of Black mothers. Conclusion: Future work is needed to investigate racial differences in the association between adiposity and placental functioning, which are likely to contribute to differential effects on pregnancy outcomes and fetal growth. abstract_id: PUBMED:31647006 Growth of HIV-uninfected children born to HIV-infected mothers in Guangdong, China: an 18-month longitudinal follow-up study. Background: To evaluate the early growth (weight and length) of HIV-exposed uninfected (HEU) children from the prevention of mother-to-child transmission (PMTCT) of human immunodeficiency virus (HIV) program in Guangdong Province, China. Methods: A total of 731 HEU children were longitudinally followed up at 7 time points, with anthropometric measurement conducted of weight and length (supine) in the first 18 months. Z scores were calculated, with and without adjustment for gestational age. Results: A total of 708 HEU children were included in the final follow-up cohort, and 105 (14.83%) children completed all 7 follow-up visits. The mean of adjusted weight-for-age Z scores in these children was above zero and showed a decreasing trend in 18 months. The mean of adjusted length-for-age Z scores showed a decreasing trend and was above zero in the first 12 months; this declined to under zero at age 18 months. The proportion of underweight was 0.28-2.19% and that of stunting was 0.71-4.63% at each follow-up month-age. Slower growth in HEU children was associated with no sustained food subsidy after 6 months, mothers' hemoglobin content less than 100 g/L during pregnancy, preterm birth, and low birth weight (p < 0.05). Conclusions: HEU children could catch up to WHO growth standards in first 18 months in Guangdong; however, growth declined after 12 months, and these children need sustained nutritional support. abstract_id: PUBMED:24488742 Symmetrical and asymmetrical growth restriction in preterm-born children. Objective: To determine how symmetric (proportionate; SGR) and asymmetric (disproportionate; AGR) growth restriction influence growth and development in preterms from birth to 4 years.
Methods: This community-based cohort study of 810 children comprised 86 SGR, 61 AGR, and 663 non-growth restricted (NGR) preterms, born in 2002 and 2003. Symmetrical growth restriction was defined as a birth weight below the 16th percentile (-1 SD) compared with full-terms and a head circumference (HC) z score not exceeding the infant's birth weight z score by >1 SD. Asymmetric growth restriction was defined as a HC z score exceeding the birth weight z score by >1 SD as a proxy of brain sparing. Developmental delay was assessed by the Ages and Stages Questionnaire at 4 years. Results: Longitudinal gains in weight and height were similar for SGR and AGR children and less compared with NGR children. At age 4, z scores for weight were -1.1 for SGR and -0.7 for AGR children vs -0.3 for NGR children. z scores for height were -0.8 and -0.5 vs. -0.2. HC gains were 2 cm more in SGR children, but at 1 year, HC z scores were -0.2 vs. 0.2 (AGR) and 0.1 (NGR). Developmental delay increased with odds ratios of 2.5 (95% confidence interval 1.1-6.0) for SGR and 2.1 (95% confidence interval 0.7-5.9) for AGR. Conclusions: Weight and height gains were similar for AGR and SGR children but poorer compared with NGR children. SGR children caught up on HC. Developmental delay was more likely in growth-restricted preterms independent of HC at birth. abstract_id: PUBMED:20889672 Influence of preterm birth and small birth size on serum anti-Müllerian hormone levels in young adult women. Background/objectives: Preterm birth has been associated with reduced reproduction rates, and controversies remain regarding the effect of being born small for gestational age (SGA) on ovarian function. Recent findings in young men showed no effect of preterm and SGA birth on testis function. We hypothesised that follicle pool size in young adult women is also not affected by preterm and SGA birth. Design/methods: In 279 young women of the PROGRAM/PREMS study, aged 18-24 years, the influence of gestational age, birth length and birth weight on serum levels of anti-Müllerian hormone (AMH) was analysed with multiple regression modelling. Additionally, AMH levels were analysed in preterm- versus term-born females and in three subgroups: females born SGA with either short stature or catch-up growth (SGA-CU), and females born term and appropriate for gestational age with normal stature (AGA controls). Results: Preterm and SGA birth did not affect AMH and other hormone levels. Older age at menarche and oral contraceptive pill use (OC-use) were related to lower AMH levels, and maternal smoking during gestation was related to higher AMH levels. After correction for maternal smoking, lower socioeconomic status (SES) was associated with lower AMH levels. In subgroup comparisons, SGA-CU women showed higher AMH levels than AGA controls, also after adjustment for several factors. Conclusion: Preterm and SGA birth did not affect AMH levels. Factors associated with serum AMH levels were OC-use, age at menarche, maternal smoking during gestation and SES. We conclude that preterm- and/or SGA-born females are not likely to have a reduced follicle pool size. Answer: Preterm birth does not appear to have an independent influence on the response to growth hormone (GH) treatment in short, small for gestational age (SGA) children.
A study investigating this found that preterm short SGA children were significantly lighter and shorter at birth after correction for gestational age than term short SGA children, and at the start of GH treatment, preterm children were significantly shorter than term children when height was corrected for target height (TH). However, the response to GH treatment was similar for preterm and term SGA children, indicating that prematurity does not affect the efficacy of GH treatment in promoting growth in these children (PUBMED:19067725). Additionally, another study from the Pfizer International Growth Database (KIGS) involving 3215 prepubertal children born prematurely and on GH treatment showed that after the first year of GH treatment, all groups presented a significant increase in weight gain and height velocity. The study concluded that short children born prematurely respond well to the first year of GH treatment (PUBMED:21430029). Furthermore, a review on growth and GH therapy in short children born preterm discussed that some children born prematurely will remain short in later ages and adult life, especially if they are also born SGA. The review suggests that GH treatment can be considered for those who remain with short stature during infancy and childhood, although it does not specifically compare the response to GH treatment between preterm and term children (PUBMED:27803030). In summary, the available evidence suggests that preterm birth does not influence the response to GH treatment in short SGA children, as both preterm and term SGA children show similar responses to GH therapy in terms of growth outcomes.
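Several of these studies express growth as standard deviation scores (SDS, i.e., z-scores against age- and sex-specific reference data) and model the first-year response with multiple linear regression (the KIGS analysis explained 27% of the variance in height velocity this way). A minimal sketch of both steps follows; the reference means/SDs and patient data are invented, and the regression shown is plain ordinary least squares rather than the published stepwise KIGS prediction model.

import numpy as np

def height_sds(height_cm, ref_mean_cm, ref_sd_cm):
    # z-score of a child's height against an age/sex-specific reference
    # (reference values would come from national growth charts; invented here).
    return (height_cm - ref_mean_cm) / ref_sd_cm

# Change in height SDS over the first treatment year (invented values)
sds_before = height_sds(117.5, 126.3, 5.5)   # about -1.6
sds_after = height_sds(126.5, 130.6, 5.8)    # about -0.7
delta_sds = sds_after - sds_before           # about +0.9 SDS

# Toy OLS regression of first-year height velocity (cm/yr) on candidate
# predictors such as GH dose, peak GH at provocative test, weight SDS and
# age at GH start; 50 synthetic patients, purely illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
beta_true = np.array([1.2, -0.8, 0.5, -0.6])
y = 8.0 + X @ beta_true + rng.normal(scale=0.5, size=50)
X1 = np.column_stack([np.ones(50), X])       # add intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(f"delta SDS = {delta_sds:.2f}, fitted coefficients = {coef.round(2)}")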
Instruction: Does the effect of gender modify the relationship between deprivation and mortality? Abstracts: abstract_id: PUBMED:22846597 Does the effect of gender modify the relationship between deprivation and mortality? Background: In this study we propose improvements to the method of elaborating deprivation indexes. First, in the selection of the variables, we incorporated a wider range of both objective and subjective measures. Second, in the statistical methodology, we used a distance indicator instead of the standard aggregating method principal component analysis. Third, we propose another methodological improvement, which consists in the use of a more robust statistical method to assess the relationship between deprivation and health responses in ecological regressions. Methods: We conducted an ecological small-area analysis based on the residents of the Metropolitan region of Barcelona in the period 1994-2007. Standardized mortality rates, stratified by sex, were studied for four mortality causes: tumor of the bronquial, lung and trachea, diabetes mellitus type II, breast cancer, and prostate cancer. Socioeconomic conditions were summarized using a deprivation index. Sixteen socio-demographic variables available in the Spanish Census of Population and Housing were included. The deprivation index was constructed by aggregating the above-mentioned variables using the distance indicator, DP2. For the estimation of the ecological regression we used hierarchical Bayesian models with some improvements. Results: At greater deprivation, there is an increased risk of dying from diabetes for both sexes and of dying from lung cancer for men. On the other hand, at greater deprivation, there is a decreased risk of dying from breast cancer and lung cancer for women. We did not find a clear relationship in the case of prostate cancer (presenting an increased risk but only in the second quintile of deprivation). Conclusions: We believe our results were obtained using a more robust methodology. First off, we have built a better index that allows us to directly collect the variability of contextual variables without having to use arbitrary weights. Secondly, we have solved two major problems that are present in spatial ecological regressions, i.e. those that use spatial data and, consequently, perform a spatial adjustment in order to obtain consistent estimators. abstract_id: PUBMED:25706042 Regional Deprivation in Germany: Nation-wide Analysis of its Association with Mortality Using the German Index of Multiple Deprivation (GIMD) Background: Deprivation indices are increasingly being used to assess the effects of contextual factors on health. In Germany, the recently developed 'German Index of Multiple Deprivation (GIMD)' integrates various dimensions of regional deprivation. We aim to assess the validity of the GIMD through a recalculation using more recent rural and urban district level data and by analysing its association with mortality at the national level. Methods: We calculated a new version of the GIMD based on data from 2007 to 2010 for all 412 rural and urban districts in Germany. Mortality was quantified using indirectly standardised mortality ratios (SMRs). Correlation analyses and Poisson regression analyses were used to assess the association between the GIMD scores and total mortality, as well as premature mortality (&lt; 65 years). 
Results: Correlation analyses showed a positive association between the GIMD and both total mortality (p<0.001) and premature mortality (p<0.001). In the Poisson regression analyses, rural and urban districts in the quintile with the highest deprivation showed a significantly elevated risk of total mortality (RR: 1.29; 95% CI: 1.28-1.30) as well as premature mortality (RR: 1.50; 95% CI: 1.47-1.53), compared to the districts in the lowest quintile. Conclusion: The association between regional deprivation and mortality has already been shown for the federal state of Bavaria. Using more recent data, this relationship could be confirmed here for Germany as a whole. The GIMD has been shown to be able to effectively assess regional deprivation. Concerning public health policy, the significant, positive and stable association between regional deprivation and mortality indicates an increased need for health care provision particularly in the most deprived districts. Further studies should examine, for example, whether and how the allocation of districts to quintiles of regional deprivation changes over time, and how this affects mortality. abstract_id: PUBMED:30253698 Heat wave-related mortality in Sweden: A case-crossover study investigating effect modification by neighbourhood deprivation. Aims: The present study aimed to investigate if set thresholds in the Swedish heat-wave warning system are valid for all parts of Sweden and if the heat-wave warning system captures a potential increase in all-cause mortality and coronary heart disease (CHD) mortality. An additional aim was to investigate whether neighbourhood deprivation modifies the relationship between heat waves and mortality. Methods: From 1990 until 2014, in 14 municipalities in Sweden, we collected data on daily maximum temperatures and mortality for the five warmest months. Heat waves were defined according to the categories used in the current Swedish heat-wave warning system. Using a case-crossover approach, we investigated the association between heat waves and mortality in Sweden, as well as a modifying effect of neighbourhood deprivation. Results: On a national as well as a regional level, heat waves significantly increased both all-cause mortality and CHD mortality by approximately 10% and 15%, respectively. While neighbourhood deprivation did not seem to modify heat wave-related all-cause mortality, heat wave-related CHD mortality did seem to be modified by neighbourhood deprivation. Conclusions: It may not be appropriate to assume that heat waves in Sweden will have the same impact in a northern setting as in a southern, or that the impact of heat waves will be the same in affluent and deprived neighbourhoods. When designing and implementing heat-wave warning systems, neighbourhood, regional and national information should be incorporated. abstract_id: PUBMED:34244310 Mortality disparities and deprivation among people with intellectual disabilities in England: 2000-2019. Background: The effect of policy initiatives and deprivation on mortality disparities in people with intellectual disabilities is not clear. Methods: An electronic health record observational study of linked primary care data in England from the Clinical Practice Research Datalink and the Office for National Statistics deaths data from 2000 to 2019 was undertaken.
All-cause and cause-specific mortality for people with intellectual disabilities were calculated by gender and deprivation status (index of multiple deprivation quintile) using direct age-standardised mortality rates (all years) and ratios (SMR; 2000-2009 vs 2010-2019). Results: Among 1.0 million patients (n = 33,844 with intellectual disability; n = 980,586 general population without intellectual disability), differential mortality was consistently higher in people with intellectual disabilities and there was no evidence of attenuation over time. There was a dose-response relationship between all-cause mortality and lower deprivation quintile in the general population which was not observed in people with intellectual disabilities. Cause-specific SMR were consistent in both the 2000-2009 and 2010-2019 calendar periods, with a threefold increased risk of death in both males and females with intellectual disabilities (SMR ranges: 2.91-3.51). Mortality was highest from epilepsy (SMR ranges: 22.90-52.74) and aspiration pneumonia (SMR ranges: 19.31-35.44). SMRs were disproportionately high for people with intellectual disabilities living in the least deprived areas. Conclusions: People with intellectual disabilities in England continue to experience significant mortality disparities and there is no evidence that the situation is improving. Deprivation indicators may not be effective for targeting vulnerable individuals. abstract_id: PUBMED:22020751 Regional deprivation and mortality in Bavaria. Development of a community-based index of multiple deprivation Introduction: Deprivation indices are valuable instruments for the analysis of regional effects on health. They may also be used as a surrogate when individual socioeconomic data are not available. These regional deprivation indices are integral parts of the public health discussion in the United Kingdom. In Germany, however, the discussion on this topic has just begun. Our aim was to develop a small-area based, multidimensional Index of Multiple Deprivation (IMD) for Germany, based on an established British method. Methods: We chose the German state of Bavaria with its n = 2,056 communities as a model region. We used official sociodemographic, socioeconomic and environmental data and created a Bavarian Index of Multiple Deprivation (BIMD). In order to test the applicability of the BIMD in epidemiological analyses we then performed a correlation analysis (Spearman's correlation coefficient) as well as a Poisson regression using data on premature mortality (< 65 years) and on total mortality (all age groups). Results: The correlation analysis showed a positive and significant association between regional deprivation and mortality. The Poisson regression showed a clear gradient, i.e., we found a stepwise increase of mortality risk with increasing regional deprivation. Compared with communities in the lowest deprivation quintile, communities in the highest deprivation quintile showed a clearly higher mortality risk, both for premature mortality [RR 1.49 (95% CI: 1.42 - 1.57)] and for total mortality [RR 1.21 (95% CI: 1.18 - 1.25)]. Discussion: Using this new index, we could demonstrate for communities in Bavaria that higher regional deprivation is associated with higher mortality. This Index of Multiple Deprivation is a new and potentially useful tool for epidemiological and public health related studies in Germany. abstract_id: PUBMED:32901404 How Does Deprivation Affect Early-Age Mortality?
Patterns of Socioeconomic Determinants of Neonatal and Postneonatal Mortality in Bolivia. Three mechanisms related to household living standards might affect early-age mortality: the absolute level of deprivation, its level relative to the average of the community, and the inequality in the distribution of deprivation within communities. A large body of literature has explored the effect of the absolute level of deprivation, but little research has examined the association between relative deprivation and early-age mortality, and findings related to deprivation inequality are inconsistent. Using 2008 Bolivian Demographic and Health Survey data, this study explores patterns of association between the three factors and mortality occurring in the neonatal and postneonatal periods. Because household-level deprivation might capture some unmeasured characteristics at the community level, such as area-specific investments, this study decomposes household-level deprivation into its between- and within-community components. The results show that after possible confounders are controlled for, community-level absolute deprivation is a significant predictor of neonatal and postneonatal mortality. Relative deprivation and deprivation inequality are not associated with early-age mortality. These findings are specific to a context of widespread deprivation and low inequality within communities; the role of the distribution of deprivation might be more important in countries in which basic needs are met within a larger proportion of the population. This study helps identify crucial sectors of development related to living standards and deprivation inequality in order to tackle neonatal and postneonatal mortality. abstract_id: PUBMED:36467247 Men's reactions to gender inequality in the workplace: From relative deprivation on behalf of women to collective action. Over recent years, the role of men as women's allies in the struggle for gender equality has become increasingly important. Previous research has shown that often men do not fight gender inequalities as they fail to recognize the severity of discrimination against women (e.g., in hiring). In this study (N = 427), we examined whether men who experienced relative deprivation on behalf of women (a form of relative deprivation that stems from the awareness that women hold a less privileged position in society) were more motivated to engage in collective action to support gender equality in the workplace. The findings showed that men's feelings of deprivation on behalf of women were associated with a greater willingness to engage in collective action for gender equality. This relationship was sequentially mediated by two emotional reactions related to deprivation (increased guilt about gender inequalities and decreased fear of a potential backlash) and the moral conviction of acting for gender equality. These results suggest that men's awareness of gender inequality at work is an important antecedent to their acting in solidarity with women and that emotions and moral conviction are two psychological processes that turn cognition into behavior. Action to reduce gender inequalities should make men more sensitive to seeing that they hold a privileged position in society and to recognizing the pervasive and harmful nature of women's deprivation.
Background: Social and environmental inequalities in public health are recognized as global problems of our time. From the point of view of the theory of deprivation, social and environmental determinants are identified as deprivation indicators, which help to detect health inequality. Indices are one of the most practical and powerful tools for measuring the level of deprivation. Objectives: The aims of our study are (1) to develop a Russian deprivation index to measure the levels of deprivation and (2) to analyze its associations with total and infant mortality. Material And Methods: Deprivation indicators were obtained from the Federal State Statistics Service of Russia. All mortality data were taken from the official website of the Federal Research Institute for Health Organization and Informatics of Ministry of Health of the Russian Federation from 2009 to 2012. Principal components analysis with varimax rotation was used to (1) select suitable deprivation indicators and (2) create the index. A Spearman's correlation was run to determine the relationship of deprivation with all-cause and infant mortality. Ordinary least squares (OLS) regression was used to assess the relationship between deprivation and infant mortality. Development of the index and statistical analysis were carried out using R and SPSS software. Results: There is not a statistically significant correlation between deprivation and all-cause mortality. OLS regression showed a significant relationship between deprivation and infant mortality (p = 0.02). For every one-unit increase in the index score, the infant mortality rate increases by about 20%. abstract_id: PUBMED:31811513 Non-compensatory aggregation method to measure social and material deprivation in an urban area: relationship with premature mortality. Health inequalities exist between nations, regions, and even smaller units. In societies where social and economic structures change rapidly and continuously, analysis of health socioeconomic determinants plays a fundamental role to provide proper policy answers. This study aims to measure accurately two different conceptions of deprivation by developing two different indexes using aggregation methods that are non-compensatory among sub-indicators. The proposed indicators are compared with premature mortality to verify deprivation's effect on health status. The results show that materially deprived areas are not necessarily socially deprived and vice versa. Material deprivation has a positive statistical co-graduation with premature mortality, while social deprivation has no association with premature mortality. abstract_id: PUBMED:26649014 Mortality in Sepsis and its relationship with Gender. Background And Objective: Sepsis remains a leading cause of death across the world, carrying a mortality rate of 20-50%. Women have been reported to be less likely to suffer from sepsis and to have a lower risk of mortality from sepsis compared to men. The objective of this study was to determine the relationship between gender and mortality in sepsis, and compare cytokine profiles of male and female patients. Methods: This was a prospective case series on 97 patients admitted with sepsis. Clinical and microbiological data were gathered, blood samples were collected for cytokine (IL-10, IL-6 and TNFα) levels and patients were followed up for clinical outcome. Results: There were 54% males and 46% females, with no significant difference in age or comorbidities between genders.
Respiratory tract infection was the commonest source of sepsis, and was more common in females (60%) compared to males (39%) (p=0.034). Males had a higher mortality (p=0.048, RR 1.73) and plasma IL-6 levels (p=0.040) compared to females. Mean IL-6 plasma level was significantly (p<0.01) higher in patients who died than in those who recovered. Conclusion: Our study shows that males with sepsis have a 70% greater mortality rate, and mortality is associated with a higher IL-6 plasma level. Answer: Yes, the effect of gender does modify the relationship between deprivation and mortality. The study conducted in the Metropolitan region of Barcelona found that at greater deprivation, there is an increased risk of dying from diabetes for both sexes and of dying from lung cancer for men. However, at greater deprivation, there is a decreased risk of dying from breast cancer and lung cancer for women. No clear relationship was found in the case of prostate cancer (PUBMED:22846597). Additionally, the study on heat wave-related mortality in Sweden found that while neighbourhood deprivation did not seem to modify heat wave-related all-cause mortality, heat wave-related coronary heart disease (CHD) mortality did seem to be modified by neighbourhood deprivation (PUBMED:30253698). Furthermore, the study on mortality disparities among people with intellectual disabilities in England revealed that deprivation indicators may not be effective for targeting vulnerable individuals, suggesting that the relationship between deprivation and mortality may be different for this population (PUBMED:34244310). In the context of gender inequality in the workplace, a study of men's reactions to gender inequality showed that feelings of deprivation on behalf of women were associated with a greater willingness to engage in collective action for gender equality (PUBMED:36467247). These findings indicate that gender can indeed modify the relationship between deprivation and mortality, with different effects observed for different causes of mortality and in different contexts.
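To make the notion of "effect modification" used in this answer concrete, the sketch below fits a Poisson model of death counts with a deprivation-by-sex interaction and person-years as the exposure offset, in the spirit of the ecological regressions described above. All data values, column names, and the simulated gradient are synthetic assumptions, not figures from the cited studies.

```python
# Illustrative sketch of testing effect modification: a Poisson regression
# of death counts on deprivation quintile, sex, and their interaction.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "quintile": np.tile(np.arange(1, 6), 2),        # 1 = least, 5 = most deprived
    "sex": np.repeat(["female", "male"], 5),
    "person_years": rng.integers(80_000, 120_000, 10),
})
# Simulate a steeper deprivation gradient in men than in women
base_rate = 0.002 * (1 + 0.10 * (df["quintile"] - 1) * (df["sex"] == "male")
                       + 0.04 * (df["quintile"] - 1) * (df["sex"] == "female"))
df["deaths"] = rng.poisson(base_rate * df["person_years"])

model = smf.glm(
    "deaths ~ quintile * sex",          # main effects plus quintile:sex interaction
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["person_years"]),  # person-years at risk as exposure
).fit()
print(model.summary())
# A significant quintile:sex coefficient would indicate that sex modifies
# the deprivation-mortality gradient, which is the question posed above.
```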
Instruction: Child sexual abuse examination: is there a need for routine screening for N. gonorrhoeae? Abstracts: abstract_id: PUBMED:16202938 Child sexual abuse examination: is there a need for routine screening for N gonorrhoeae and C trachomatis? Purpose: To determine the yield of routine vaginal cultures for Neisseria gonorrhoeae and Chlamydia trachomatis from girls evaluated following sexual abuse. Methods: Retrospective chart review evaluating results of cultures that were obtained from 2,008 prepubertal girls seen within 72 hours following an assault over two periods of 3 years each. Results: It was found that only 16 (0.8%) of cultures were positive for either gonorrhea or chlamydia. All but one of the prepubertal girls who had positive vaginal cultures for sexually transmitted disease had signs of acute vulvovaginitis. Conclusion: Routine vaginal cultures in asymptomatic pre-pubertal girls have a very low yield. Prospective studies are required to change current protocols for the evaluation of child victims of sexual abuse. abstract_id: PUBMED:8795782 Child sexual abuse examination: is there a need for routine screening for N. gonorrhoeae? Purpose: To determine the yield of routine vaginal cultures for Neisseria gonorrhoeae from asymptomatic girls following sexual abuse. Methods: Cultures were obtained from 865 prepubertal girls seen within 72 hours following an assault. Adult females (n = 2,743), age 18 or older, who were treated according to the same protocol constitute the control group. Data were analyzed using chi-square analysis and comparison of proportions. Results: It was found that 165 (4.6%) cultures were positive for N. gonorrhoeae. Of the 2,743 adults, 153 were positive (5.6%) compared with 12 of the 865 prepubertal girls (1.4%). The differences are statistically significant. All 12 prepubertal children who had positive vaginal cultures for N. gonorrhoeae had signs of acute vulvovaginitis. Conclusion: Routine vaginal cultures in asymptomatic prepubertal girls may not be indicated as part of the initial evaluation. abstract_id: PUBMED:12523225 Chlamydia trachomatis and Neisseria gonorrhoeae infections in sexually abused children in Jutland Introduction: At the Department of Forensic Medicine, University of Aarhus, Denmark, examinations are performed of children who are suspected of having been sexually abused. The medical forensic investigation aims at documenting sexual abuse if there are any physical findings. The presence of certain sexually transmitted diseases, which cannot be explained otherwise, may be such documentation. This article focuses on Chlamydia trachomatis and Neisseria gonorrhoeae, and the risk for the sexually abused child to contract these diseases. Material And Methods: Retrospectively, we went through the Institute's files from 1996 to 2000 concerning child abuse. Information about age, sex, cultures for Chlamydia trachomatis and for Neisseria gonorrhoeae, and the results of these cultures was registered. If the child had been cultured, the suspected perpetrator's age and gender were registered as well. The children were all between 0 and 15 years of age, and in all cases the suspicion of sexual abuse had led to reporting to the police. Results: 295 girls and 41 boys were examined in the period from 1996 to 2000. A total of 100 cultures for Chlamydia trachomatis and 105 cultures for Neisseria gonorrhoeae were performed.
Among the 111 children who were examined for Chlamydia trachomatis and/or Neisseria gonorrhoeae there were no positive results. In 102 cases the suspected perpetrator was known to be one or more men, and only in one case was it a woman. The mean age of the suspected perpetrators was 35.4 years. Discussion: The prevalence of Neisseria gonorrhoeae and Chlamydia trachomatis should reflect the prevalence in the group of perpetrators. Neisseria gonorrhoeae is uncommon outside Copenhagen, the capital of Denmark, whereas Chlamydia trachomatis is quite common. Against this background, we had expected some of the children to be infected with Chlamydia trachomatis. Therefore, the risk for sexually abused children to be infected with Chlamydia trachomatis or Neisseria gonorrhoeae is very low, depending on the kind of abuse and the perpetrator's possible risk behaviour, age, and gender. As a conclusion, only children with a clear indication should be tested for these sexually transmitted diseases. abstract_id: PUBMED:17374888 Non-cultural detection and molecular genotyping of Neisseria gonorrhoeae from a piece of clothing. Isolation of Neisseria gonorrhoeae is currently the gold standard for the definitive diagnosis of gonorrhoea and for use in medico-legal cases in the UK. Molecular detection methods are used increasingly but are untested as evidence of infection in a court of law. An isolate of N. gonorrhoeae was obtained from a child and an article of clothing from an adult male who was suspected of sexual abuse of the child. Biochemical and immunological tests were used to confirm the isolate as N. gonorrhoeae. Amplification by PCR using two targets, cppB and ompIII, was used both as further confirmation of the isolate and to detect the presence of gonococcal-specific DNA from the clothing. The relationship of the gonococcal DNA from the child and the adult was investigated using genotyping (N. gonorrhoeae multi-antigen sequence typing; NG-MAST), including a nested PCR for the por gene. Both samples were indistinguishable by NG-MAST and shared the same sequence type, 403. This is the first report of molecular detection and genotyping of N. gonorrhoeae on an article of clothing, which resulted in conviction of the man for sexual assault. abstract_id: PUBMED:24801359 The usefulness of Neisseria gonorrhoeae strain typing by Pulse-Field Gel Electrophoresis (PFGE) and DNA detection as the forensic evidence in child sexual abuse cases: a case series. Diagnosis of alleged child sexual abuse can be made from history in conjunction with physical examination, psychosocial evaluation, and laboratory investigations. Sexually transmitted infection associated with sexual abuse is found in 5% of victims, with Neisseria gonorrhoeae being the most common organism. Identification of sexually transmitted disease, particularly N. gonorrhoeae infection, can be useful for the diagnosis of sexual abuse and thus the initiation of the child protection process. Polymerase Chain Reaction (PCR) is a newer diagnostic assay with a higher sensitivity compared with the conventional culture method. In addition, N. gonorrhoeae strain typing can also be used to identify the abuser. In this case series, we present the application of N. gonorrhoeae strain typing (PFGE technique) to identify the abuser, and the confirmation of gonococcal vaginitis by PCR technique. abstract_id: PUBMED:12418541 Screening tests to detect Chlamydia trachomatis and Neisseria gonorrhoeae infections--2002.
Since publication of CDC's 1993 guidelines (CDC, Recommendations for the prevention and management of Chlamydia trachomatis infections, 1993. MMWR 1993;42[No. RR-12]:1-39), nucleic acid amplification tests (NAATs) have been introduced as critical new tools to diagnose and treat C. trachomatis and Neisseria gonorrhoeae infections. NAATs for C. trachomatis are substantially more sensitive than previous tests. When using a NAAT, any sacrifice in performance when urine is substituted for a traditional swab specimen is limited, thus reducing dependence on invasive procedures and expanding the venues where specimens can be obtained. NAATs can also detect both C. trachomatis and N. gonorrhoeae organisms in the same specimen. However, NAATs are usually more expensive than previous tests, making test performance from an economic perspective a key consideration. This report updates the 1993 guidelines for selecting laboratory tests for C. trachomatis with an emphasis on screening men and women in the United States. (In this report, screening refers to testing persons in the absence of symptoms or signs indicating C. trachomatis or N. gonorrhoeae infection.) In addition, these guidelines consider tests from an economic perspective and expand the previous guidelines to address detection of N. gonorrhoeae as well as C. trachomatis infections. Because of the increased cost of NAATs, certain laboratories are modifying manufacturers' procedures to improve test sensitivity without incurring the full cost associated with screening with a NAAT. Such approaches addressed in these guidelines are pooling of specimens before testing with a NAAT and additional testing of specimens whose non-NAAT test result is within a gray zone. This report also addresses the need for additional testing after a positive screening test to improve the specificity of a final diagnosis. To prepare these guidelines, CDC staff identified pertinent concerns, compiled the related literature published during 1990 or later, prepared tables of evidence, and drafted recommendations. Consultants, selected for their expertise or disciplinary and organizational affiliations, reviewed the draft recommendations. These final guidelines are the recommendations of CDC staff who considered contributions from scientific consultants. These guidelines are intended for laboratorians, clinicians, and managers who must choose among the multiple available tests, establish standard operating procedures for collecting and processing specimens, interpret test results for laboratory reporting, and counsel and treat patients. abstract_id: PUBMED:6406989 Pharyngeal gonorrhea in young children. Among 16,100 visits to a pediatric clinic, 103 children (0.6%) fulfilled criteria for child abuse and 16 of the 103 patients (15.5%) suffered sexual abuse. Throat cultures were obtained from 13 children and were positive for Neisseria gonorrhoeae in 7 patients (54%). Ages of infected children ranged from 2.5 to 9 years with a mean age of 4.9 years. N. gonorrhoeae was also isolated from cervical (four) and rectal (one) cultures in four of seven patients tested. No child had symptoms related to the oropharynx. All children were treated with a single dose of 100,000 units aqueous procaine penicillin G per kg intramuscularly and 25 mg probenecid per kg orally. Three of the seven children required retreatment for persistently positive pharyngeal cultures. All cases could be traced to sexual abuse by a male contact. Routine culturing of the oropharynx for N.
gonorrhoeae in abused children and their siblings may uncover some unsuspected sexual abuse cases. abstract_id: PUBMED:7827668 Indications for Neisseria gonorrhoeae cultures in children with suspected sexual abuse. Objective: To determine the clinical predictors of Neisseria gonorrhoeae infection in children examined for sexual abuse. Design: Retrospective review of a prospective management plan. Setting: A 240-bed children's hospital with 36,000 emergency department visits per year. Intervention: In 1988, a Pediatric Emergency Medicine Department protocol was introduced for the examination of children who present with complaints suggestive of sexual abuse. Results: From January 1990 through December 1991, the records of all children less than 12 years of age examined for suspected sexual abuse were reviewed. Vaginal/urethral, oral, and rectal cultures for N gonorrhoeae were performed in 316 children. Seven children (2.2%) had a total of 12 positive cultures: seven vaginal/urethral, four rectal, and one oral. Evidence of vaginal/urethral discharge on physical examination was the best predictor of N gonorrhoeae infection (sensitivity, 100%; specificity, 88%; positive predictive value, 16%; and negative predictive value, 100%). Historical and physical evidence of discharge was significantly associated with N gonorrhoeae infection (P < .0006 and P < .000001, respectively). Conclusions: Children less than 12 years of age examined for sexual abuse who did not have evidence on physical examination of vaginal or urethral discharge were found to have a 100% probability of having negative vaginal/urethral, oral, and rectal N gonorrhoeae cultures. These findings do not support the practice of obtaining cultures for N gonorrhoeae routinely in all children who present for evaluation of possible sexual abuse. abstract_id: PUBMED:1300358 Penicillinase-producing Neisseria gonorrhoeae conjunctivitis on some Nigerian children. During a 21-month study of bacterial conjunctivitis among 121 children in two health care centres in Calabar, Nigeria, a total of 90 (74.4%) cases were culturally confirmed. Neonates had the highest age-specific attack rates with 48 (53.3%) cases. Neisseria gonorrhoeae, the predominant pathogen, was recovered from 32 (35.6%) infections; 21 (65.6%) of them from neonates. Cultures of genital swabs of consenting parents of infected neonates as well as those of three female children aged 2-12 years with concurrent vulvo-vaginitis yielded N. gonorrhoeae. Younger women, mostly primi-gravidae, were more frequently found to have benefited from peri-natal health care services than older multi-gravidae. Nevertheless, such access to health care services did not appear to influence the frequency of gonococcal conjunctivitis in neonates from the two maternal groups (P < 0.01). Sexual abuse and contaminated fomites were the possible modes of gonococcal infection transmission to older children. Overall, 22 (68.8%) strains of gonococci were resistant to penicillin; 19 (59.4%) were penicillinase-producing Neisseria gonorrhoeae (PPNG), while 5 (15.6%) had chromosomally-mediated resistance. All isolates were sensitive to erythromycin. This study recommends a review of gonorrhoea surveillance in pregnancy to include routine examination of cervical swabs just before delivery. abstract_id: PUBMED:23791118 Child maltreatment: screening and anticipatory guidance. Child maltreatment is a problem of epidemic proportions in the United States.
Given the numbers of children affected by child maltreatment and the dire consequences that can develop, prompt identification of child maltreatment is crucial. Despite support of the implementation and development of protocols for child maltreatment screening by professional organizations such as the National Association of Pediatric Nurse Practitioners and American Academy of Pediatrics, little is available in the literature regarding the screening practices of pediatric nurse practitioners and other pediatric health care providers. This Continuing Education article will help pediatric nurse practitioners incorporate this vital screening intervention into their practice. Practical examples of when and how to incorporate screening questions and anticipatory guidance for discipline practices, crying, intimate partner violence (domestic violence), physical abuse, and sexual abuse will be discussed. Answer: Based on the provided abstracts, there appears to be a debate regarding the necessity of routine screening for Neisseria gonorrhoeae in cases of child sexual abuse examination. Some studies suggest that routine screening may not be indicated for asymptomatic prepubertal girls. For instance, a retrospective chart review found that only 0.8% of cultures were positive for either gonorrhea or chlamydia, and nearly all prepubertal girls with positive cultures had signs of acute vulvovaginitis (PUBMED:16202938). Another study found that only 1.4% of prepubertal girls had positive cultures for N. gonorrhoeae, and all of them had signs of acute vulvovaginitis, suggesting that routine vaginal cultures in asymptomatic prepubertal girls may not be necessary as part of the initial evaluation (PUBMED:8795782). Similarly, a retrospective study in Denmark found no positive results for N. gonorrhoeae or Chlamydia trachomatis among the children examined for sexual abuse, leading to the conclusion that only children with a clear indication should be tested for these sexually transmitted diseases (PUBMED:12523225). However, other studies highlight the importance of screening and the use of molecular detection methods. For example, molecular genotyping of N. gonorrhoeae from a piece of clothing led to the conviction of a man for sexual assault (PUBMED:17374888), and strain typing by Pulse-Field Gel Electrophoresis (PFGE) and DNA detection was useful as forensic evidence in child sexual abuse cases (PUBMED:24801359). The use of nucleic acid amplification tests (NAATs) has been introduced as a more sensitive method for diagnosing C. trachomatis and N. gonorrhoeae infections, which could be considered in the context of child sexual abuse examinations (PUBMED:12418541). In summary, while some evidence suggests that routine screening for N. gonorrhoeae in asymptomatic prepubertal girls may have a low yield, other studies emphasize the importance of screening, particularly with the use of more sensitive molecular detection methods. The decision to screen may depend on the presence of symptoms, the likelihood of infection based on the circumstances of the abuse, and the availability of sensitive diagnostic tests.
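The predictive values quoted from PUBMED:7827668 follow directly from sensitivity, specificity, and prevalence, which is worth seeing worked out: at the study's low prevalence, even a fairly specific sign yields a modest positive predictive value. The sketch below is a generic Bayes calculation, not code from the study; only the three input figures are taken from the abstract.

```python
# Worked example: predictive values implied by PUBMED:7827668
# (sensitivity 100%, specificity 88%, prevalence 7/316 ~ 2.2%).
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (PPV, NPV) for a test applied at a given prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

ppv, npv = predictive_values(sensitivity=1.00, specificity=0.88,
                             prevalence=7 / 316)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")
# -> PPV ~16%, NPV 100%, matching the values reported in the abstract
```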
Instruction: Birth weight and polycystic ovary syndrome in adult life: is there a causal link? Abstracts: abstract_id: PUBMED:25790331 Birth weight and polycystic ovary syndrome in adult life: is there a causal link? Objectives: Several studies have demonstrated associations of birth weight with metabolic and reproductive abnormalities in adults. The aim of this study was to investigate the birth weight in women with PCOS and its correlation with clinical and biochemical characteristics of the syndrome. Materials And Methods: We studied 288 women with PCOS according to the NIH criteria and 166 women with normal cycle and without clinical hyperandrogenism. Birth weight and anthropometric characteristics were recorded, and levels of serum androgens, SHBG, insulin and fasting glucose were measured. Results: Birth weight data were available for 243/288 women with PCOS and for 101/166 age- and BMI-matched controls. No differences were found (p > 0.05) in birth weight between women with PCOS and normal controls. Birth weight of PCOS women was negatively correlated with DHEAS levels (p = 0.031, r = -0.143) and positively correlated with waist circumference (p < 0.001, r = 0.297) and body mass index (BMI) (p = 0.040, r = 0.132). Birth weight of controls was negatively correlated with SHBG levels (p = 0.021, r = -0.234). Women from both groups were further divided into 6 categories according to birth weight (A. <2.500 gr, B. 2.501-3.000 gr, C. 3.001-3.500 gr, D. 3.501-4.000 gr, E. 4.001-4.500 gr, F. >4.500 gr). No statistically significant differences were observed in the distribution percentages between PCOS women and controls (A. 7% vs 7.9%, B. 26.8% vs 20.8%, C. 39.1% vs 48.5%, D. 21.4% vs 20.8%, E. 4.9% vs 2%, F. 0.8% vs 0%; in all comparisons, p > 0.05). Conclusions: Women with PCOS do not differ from controls in birth weight distribution. However, birth weight may contribute to subtypes of the syndrome that are characterized by adrenal hyperandrogenism and central obesity. abstract_id: PUBMED:28277130 Low birth-weight is a PCOS risk factor for Southern-Italian women. Polycystic ovary syndrome (PCOS) is a heterogeneous endocrine and metabolic disorder, characterized by chronic anovulation/oligomenorrhea, hyperandrogenism and insulin-resistance. Initial studies on the genetic basis of this disorder suggested both a maternal and paternal pattern of inheritance. Birth weight has become a main factor to consider when investigating the etiology of an adult-onset disease, since it is a simple indicator of intrauterine adverse conditions. The aim of our study is to validate "low birth-weight" as a risk factor for the development of PCOS. We performed a retrospective study on 188 PCOS patients and 185 controls, showing that the Southern-Italian PCOS population has a significantly lower birth weight compared with the control population; interestingly, PCOS women with low and normal-low weight at birth show higher levels of androgens. Taken together, our results highlight the importance of the "birth-weight" factor for PCOS anamnesis. abstract_id: PUBMED:23200688 Birth weight and polycystic ovary syndrome in adult life: a register-based study on 523,757 Danish women born 1973-1991. Objective: To study the association between birth weight and polycystic ovary syndrome (PCOS) in adult life in Danish women born 1973-1991. Design: Register study. Setting: Data were extracted from the Danish Medical Birth Register and the Danish National Patient Register (NPR).
Patient(s): All female children born of Danish mothers in Denmark between 1973 and 1991 were included (n = 523,757) and followed for a total of 4,739,547 person-years at risk. Intervention(s): None. Main Outcome Measure(s): Information on birth weight was extracted from the Danish Medical Birth Register. The cohort was followed up in the NPR for PCOS diagnoses from age 15 years until the end of 2006. Furthermore, information on maternal diabetes diagnoses was extracted from the NPR. Result(s): The risk of PCOS was significantly increased in women with birth weight ≥4,500 g (incidence rate ratio, 1.57; 95% confidence interval 1.21-2.03) compared to women with birth weight 3,000-3,499 g. All women with birth weight ≥4,500 g were born large for gestational age and a birth weight of 4,500 g represented the 98.5th percentile of the birth weights. Women born of mothers diagnosed with diabetes were at increased risk of PCOS. In these women the risk of PCOS increased with decreasing birth weight. Conclusion(s): The risk of PCOS was increased in women born with birth weight ≥4,500 g. In women of diabetic mothers we found an increased risk of PCOS, which was inversely related to birth weight. abstract_id: PUBMED:27326419 Low birth weight may increase body fat mass in adult women with polycystic ovarian syndrome. Background: Women with polycystic ovarian syndrome (PCOS), the commonest endocrine disorder, are known to have a specific type of adiposity. Birth weight is among the contributors reported to be responsible for this diversity. Objective: We aimed to compare the relation between birth weight and body fat mass (BFM)/body lean mass (BLM) in PCOS patients and their age- and body mass index (BMI)-matched normal controls. Materials And Methods: In this case-control study, a total of 70 reproductive-aged women diagnosed with PCOS and 70 age- and BMI-matched healthy women without hirsutism and/or ovulatory dysfunction were recruited; the control group had no polycystic ovaries on ultrasonographic scans. A detailed history of birth weight was taken and was divided into the following categories: <2,500 (low birth weight, LBW) and 2,500-4,000 (normal birth weight; NBW). Results: LBW prevalence was higher in women with PCOS than in controls (19.3% (27) vs. 15.7% (22)). Body fat and lean mass (BFM, BLM) were also increased in adult women with PCOS who were born underweight compared with their normal-birth-weight counterparts (19.8±9.05 vs. 12.9±4.5, p=0.001, and 48.9±6.9 vs. 43.2±5.8, p=0.004, respectively). Conclusion: Fetal birth weight influences adult obesity, BFM, and BLM. This impact is different among women with and without PCOS. abstract_id: PUBMED:27222928 Developmental origins of polycystic ovary syndrome (PCOS): a case-control study comparing birth weight in women with PCOS and a control group. Evidence from various epidemiological studies and experimental animal studies has linked adverse intrauterine circumstances with health problems in adult life. This field of investigation is known as Developmental Origins of Health and Disease (DOHaD). Studies investigating the relation between developing polycystic ovary syndrome (PCOS) in adulthood and birth weight have yielded inconsistent results: PCOS is described more often in women with low birth weight and high birth weight, while other studies have failed to establish any relation.
In this retrospective case-control study, we evaluated whether women diagnosed with PCOS had lower birth weight compared to women with a regular menstrual cycle (controls). Binary logistic regression models were used to analyze the data and correct for known confounders. In total, 65 women with PCOS and 96 controls were recruited for this purpose. The average birth weight of PCOS women (3357 g) did not differ from the average birth weight of controls (3409 g). Mean age at menarche differed significantly between groups, 13.7 years and 12.8 years (p = 0.006), respectively, for PCOS women and controls. In conclusion, we could not confirm the effect of adverse intrauterine conditions, reflected in birth weight, on developing PCOS. abstract_id: PUBMED:15802312 Birth weight in offspring of mothers with polycystic ovarian syndrome. Background: A relationship between reduced fetal growth and the polycystic ovary syndrome (PCOS) has been proposed in girls with PCOS. However, the birth weight in the offspring of PCOS mothers has not been systematically investigated. The aim of this study was to establish the birth weight of newborns of mothers with PCOS and to compare it with a control group of newborns of normal women matched by age and weight at the beginning of pregnancy. Methods: The birth weight of 47 infants born from singleton pregnancies in women with well-documented PCOS was compared with 180 infants born from singleton pregnancies in healthy controls. Results: The prevalence of small for gestational age (SGA) infants was significantly higher in the PCOS group compared to the control group (12.8% versus 2.8%, respectively; P<0.02). Moreover, SGA infants born to PCOS mothers were smaller than those born to control mothers (P<0.05). The prevalence of large for gestational age infants (LGA) was similar in both groups, but birth length of LGA newborns was greater in PCOS women than controls (P<0.05). Conclusions: PCOS mothers showed a significantly higher prevalence of SGA newborns, which cannot be completely attributed to pregnancy complications, and seems to be more related to the PCOS condition of the mother. abstract_id: PUBMED:25637475 Insulin resistance and hyperandrogenism have no substantive association with birth weight in adolescents with polycystic ovary syndrome. Objective: To assess whether birth weight influences the metabolic and hormonal profile of adolescents with polycystic ovary syndrome (PCOS). Design: Retrospective study. Setting: University outpatient clinic. Patient(s): One hundred seventy consecutive adolescents 12 to 19 years of age with PCOS, 15 of whom were small for gestational age (SGA), and 75 healthy age-matched female adolescents as controls. Intervention(s): Physical evaluations, fasting blood samples for measuring endocrine and metabolic parameters, and an oral glucose tolerance test. Main Outcomes Measure(s): Physical, endocrine, and metabolic features. Result(s): The birth weights of adolescents with PCOS as well as those with hyperinsulinemia or insulin resistance were similar to those of the control group. The PCOS SGA adolescents had basal insulin (15.93 ± 7.16 μU/mL vs. 10.97 ± 5.79 μU/mL) and homeostasis model assessment of insulin resistance values (3.2 ± 1.54 vs. 2.19 ± 1.28) that were statistically significantly higher than in the control group. The mean levels of total testosterone in the SGA adolescents with PCOS were above the upper limit of the normal range (0.80 ng/mL).
Conclusion(s): Low birth weight may influence the appearance of hyperandrogenism and insulin resistance in a portion of adolescents with PCOS, but only 9% of the adolescents with PCOS in this study were SGA. In the majority of adolescents with PCOS, hyperinsulinemia and hyperandrogenism are related to factors other than birth weight alone. abstract_id: PUBMED:37351103 Fetal genome predicted birth weight and polycystic ovary syndrome in later life: a Mendelian randomization study. Associations between lower birth weight and higher polycystic ovary syndrome (PCOS) risk have been reported in previous observational studies; however, the causal relationship is still unknown. Based on decomposed fetal and maternal genetic effects on birth weight (n = 406,063), we conducted a two-sample Mendelian randomization (MR) analysis to assess potential causal relationships between fetal genome predicted birth weight and PCOS risk using a large-scale genome-wide association study (GWAS) including 4,138 PCOS cases and 20,129 controls. To further eliminate the maternally transmitted or non-transmitted effects on fetal growth, we performed a secondary MR analysis by utilizing genetic instruments after excluding maternally transmitted or non-transmitted variants, which were identified in another birth weight GWAS (n = 63,365 parent-offspring trios from an Icelandic birth register). Linkage disequilibrium score regression (LDSR) analysis was conducted to estimate the genetic correlation. We found little evidence to support a causal effect of fetal-genome-determined birth weight on the risk of developing PCOS (primary MR analysis, OR: 0.86, 95% CI: 0.52 to 1.43; secondary MR analysis, OR: 0.86, 95% CI: 0.54 to 1.39). In addition, a marginally significant genetic correlation (rg = -0.14, se = 0.07) between birth weight and PCOS was revealed via LDSR analysis. Our findings indicated that observed associations between birth weight and future PCOS risk are more likely to be attributable to genetic pleiotropy driven by the fetal genome rather than a causal mechanism. abstract_id: PUBMED:34211435 Metabolomic Profiling of Pregnancies With Polycystic Ovary Syndrome Identifies a Unique Metabolic Signature and Potential Predictive Biomarkers of Low Birth Weight. Background: Polycystic ovary syndrome (PCOS) is a complex syndrome with clinical features of an endocrine/metabolic disorder. Various metabolites show significant association with PCOS; however, studies comparing the metabolic profile of pregnant women with and without PCOS are lacking. In this study, metabolomics analysis of blood samples collected from PCOS women and age- and BMI-matched controls in the second trimester of pregnancy was performed to identify metabolic differences between the two groups and determine their association with pregnancy outcome. Methods: Sixteen PCOS and fifty-two healthy women in their second trimester underwent targeted metabolomics of plasma samples using tandem mass spectrometry with the Biocrates MxP® Quant 500 Kit. Linear regression models were used to identify the metabolic alterations associated with PCOS, followed by enrichment and Receiver Operating Characteristic (ROC) analyses to determine the best indicators of pregnancy outcomes. Results: PCOS women had lower-birth-weight babies compared to healthy controls. As a group, systolic blood pressure (SBP) at both the second trimester and at delivery negatively correlated with birth weight.
Regression models indicated significant increases in the triglycerides C20:4_C34:3 and C18:2_C38:6 in the PCOS group [false discovery rate (FDR) <0.05]. Enrichment analysis revealed significant elevations in triglycerides containing arachidonic acid, linoleic acid and palmitic acid in the PCOS group. A number of indicators of baby birth weight were identified including SBP at delivery, hexosylceramide (d18:2/24:0), ceramide (d18:0/24:1) and serine, with an AUC for all predictors combined for low birth weight (≤2500 grams) of 0.88 (95% CI: 0.75-1.005, p<0.001). Conclusions: PCOS pregnancies resulted in babies with a lower birth weight, marked by a unique metabolic signature that was enriched with specific triglycerides and unsaturated fatty acids. The functional significance of these associations needs further investigation. abstract_id: PUBMED:14507847 Birth weight and age at menarche in patients with polycystic ovary syndrome or diminished ovarian reserve, in a retrospective cohort. Background: Few studies have investigated the association between subfertility in women and factors in early life such as birth weight and age at menarche, and most have produced contradictory results. In the present study, this association was investigated among women undergoing artificial reproductive techniques (ART), including IVF for reason of polycystic ovary syndrome (PCOS) or diminished ovarian reserve. Herein, PCOS included oligomenorrhoea and at least one additional symptom such as hyperandrogenism, hirsutism or polycystic ovaries on ultrasound. In most patients this was concomitant with elevated serum LH levels. Diminished ovarian reserve was defined as receiving a donated oocyte or having a low response to ovarian hyperstimulation. Methods: Among a retrospective cohort of 26,428 women diagnosed with subfertility between 1980 and 1995, three study groups and one reference group were defined using data from medical records. Women were included in the first group if diagnosed as having PCOS (n = 265). In order to define diminished ovarian reserve capacity, two groups were selected: (i) women receiving a donated oocyte (n = 98); and (ii) women having a low response (three follicles or less) to ovarian hyperstimulation in both their first and second IVF cycles (n = 351). Women with tubal obstruction formed the reference group (n = 957). In a logistic regression model, the effect of birth weight and age at menarche was examined. Information on both variables was obtained from mailed questionnaires. Results: Birth weight did not differ significantly between the study groups and the reference group. However, PCOS patients were significantly older at menarche [OR 3.31 (2.18-5.04)]. Women receiving a donated oocyte and low responders were significantly younger at menarche [OR 2.67 (1.35-5.29) and OR 2.01 (1.26-3.20) respectively]. Conclusion: The fetal origins hypothesis, the association between intrauterine growth retardation and disease in adult life, could not be confirmed, though a relationship between timing of menarche and PCOS and a diminished ovarian reserve was identified. Further investigation of the effect of birth weight on fertility outcome in a prospective setting is strongly advised. Answer: The relationship between birth weight and polycystic ovary syndrome (PCOS) in adult life has been the subject of various studies, but the evidence is mixed and does not conclusively establish a causal link. Some studies have found associations between birth weight and PCOS.
For instance, a study on Southern-Italian women indicated that those with PCOS had a significantly lower birth weight compared to controls, suggesting that "low birth-weight" could be a risk factor for the development of PCOS (PUBMED:28277130). Another study found that low birth weight may increase body fat mass in adult women with PCOS, implying a potential influence of birth weight on the physical manifestations of PCOS (PUBMED:27326419). Additionally, a Danish register-based study reported an increased risk of PCOS in women with a birth weight of ≥4,500 g (PUBMED:23200688). However, other studies have not found a significant difference in birth weight between women with PCOS and controls. One study concluded that women with PCOS do not differ from controls in birth weight distribution, although birth weight may contribute to subtypes of the syndrome characterized by adrenal hyperandrogenism and central obesity (PUBMED:25790331). Another study could not confirm the effect of adverse intrauterine conditions, reflected in birth weight, on developing PCOS (PUBMED:27222928). Similarly, a study on offspring of mothers with PCOS found no systematic difference in birth weight compared to controls, although there was a higher prevalence of small for gestational age (SGA) infants in the PCOS group (PUBMED:15802312). A Mendelian randomization study found little evidence to support a causal effect of fetal genome-determined birth weight on the risk of developing PCOS, suggesting that observed associations might be due to genetic pleiotropy rather than a causal mechanism (PUBMED:37351103). Another study found no substantive association between insulin resistance, hyperandrogenism, and birth weight in adolescents with PCOS (PUBMED:25637475). In summary, while some studies suggest a potential association between birth weight and PCOS, the evidence is inconsistent, and a direct causal link has not been firmly established. Further research, including prospective studies and investigations into the underlying mechanisms, is needed to clarify the relationship between birth weight and PCOS in adult life.
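To illustrate the causal-inference machinery behind the Mendelian randomization result cited above (PUBMED:37351103), the sketch below computes per-SNP Wald ratios and combines them with the fixed-effect inverse-variance-weighted (IVW) estimator, the standard first-line two-sample MR method. The per-SNP effect sizes are made-up placeholders, not the study's actual GWAS estimates.

```python
# Minimal IVW sketch for a two-sample MR analysis; all numbers are
# synthetic placeholders, not data from PUBMED:37351103.
import numpy as np

# beta_bw: SNP effects on birth weight (exposure); beta_pcos: SNP effects
# on PCOS log-odds (outcome); se_pcos: standard errors of outcome effects.
beta_bw = np.array([0.030, 0.025, 0.040, 0.022, 0.035])
beta_pcos = np.array([-0.010, 0.004, -0.006, 0.002, -0.008])
se_pcos = np.array([0.012, 0.011, 0.015, 0.010, 0.013])

# Per-SNP Wald ratios and their first-order standard errors
wald = beta_pcos / beta_bw
se_wald = se_pcos / np.abs(beta_bw)

# Fixed-effect inverse-variance-weighted meta-analysis of the Wald ratios
weights = 1.0 / se_wald**2
ivw = np.sum(weights * wald) / np.sum(weights)
se_ivw = np.sqrt(1.0 / np.sum(weights))
or_est, or_lo, or_hi = np.exp([ivw, ivw - 1.96 * se_ivw, ivw + 1.96 * se_ivw])
print(f"IVW OR per unit of birth weight = {or_est:.2f} "
      f"(95% CI {or_lo:.2f}-{or_hi:.2f})")
# A confidence interval spanning 1 would echo the paper's null finding
# (OR 0.86, 95% CI 0.52-1.43).
```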
Instruction: Perioperative antibiotics for surgical site infection in pancreaticoduodenectomy: does the SCIP-approved regimen provide adequate coverage? Abstracts: abstract_id: PUBMED:23664266 Perioperative antibiotics for surgical site infection in pancreaticoduodenectomy: does the SCIP-approved regimen provide adequate coverage? Introduction: The Joint Commission Surgical Care Improvement Project (SCIP) includes performance measures aimed at reducing surgical site infections (SSI). One measure defines approved perioperative antibiotics for general operative procedures. However, there may be a subset of procedures not adequately covered with the use of approved antibiotics. We hypothesized that piperacillin-tazobactam is a more appropriate perioperative antibiotic for pancreaticoduodenectomy (PD). Methods: In collaboration with hospital epidemiology and the Division of Infectious Diseases, we retrospectively reviewed records of 34 patients undergoing PD between March and May 2008 who received SCIP-approved perioperative antibiotics and calculated the SSI rate. After changing our perioperative antibiotic to piperacillin-tazobactam, we prospectively reviewed PDs performed between June 2008 and March 2009 and compared the SSI rates before and after the change. Results: For 34 patients from March through May 2008, the SSI rate for PD was 32.4 per 100 cases. Common organisms from wound cultures were Enterobacter and Enterococcus (50.0% and 41.7%, respectively), and these were cefoxitin-resistant. From June 2008 through March 2009, 106 PDs were performed. During this period, the SSI rate was 6.6 per 100 surgeries, 80% lower than during March through May 2008 (relative risk, 0.204; 95% confidence interval [CI], 0.086-0.485; P = .0004). Conclusion: Use of piperacillin-tazobactam as a perioperative antibiotic in PD may reduce SSI compared with the use of SCIP-approved antibiotics. Continued evaluation of SCIP performance measures in relationship to patient outcomes is integral to sustained quality improvement. abstract_id: PUBMED:35933790 Prophylactic Perioperative Antibiotics in Open Pancreaticoduodenectomy: When Less Is More and When It Is Not. A National Surgical Quality Improvement Program Propensity-Matched Analysis. Introduction: We hypothesized that first-generation cephalosporins (G1CEP) provide adequate antimicrobial coverage for pancreaticoduodenectomy (PD) when no biliary stent is present but might be inferior to second-generation cephalosporins or broad-spectrum antibiotics (G2CEP/BS) in decreasing surgical-site infection (SSI) rates when a biliary stent is present. Methods: The National Surgical Quality Improvement Program 2014-2019 was used to select patients who underwent elective open PD. We divided the population into no-stent versus stent groups based on the status of biliary drainage and then divided each group into G1CEP versus G2CEP/BS subgroups based on the choice of perioperative antibiotics. We matched the subgroups using propensity scores and analyzed postoperative outcomes. Results: Of 39,779 cases, 6,245 were selected; 2821 in the no-stent (45.2%) versus 3424 (54.8%) in the stent group. G1CEP were the antibiotics of choice in 2653 (42.5%) versus G2CEP/BS in 3592 (57.5%) cases. In the no-stent group, we matched 1129 patients between G1CEP and G2CEP/BS.
There was no difference in SSI-specific complications (20.3% versus 21.0%; P = 0.677), general infectious complications (25.7% versus 26.9%; P = 0.503), PD-specific complications, overall morbidity, length of stay, or mortality. In the stent group, we matched 1244 pairs. G2CEP/BS had fewer SSI-specific complications (19.9% versus 26.6%; P < 0.001), collections requiring drainage (9.6% versus 12.9%; P = 0.011), and general infectious complications (28.5% versus 34.1%; P = 0.002) but no difference in overall morbidity, mortality, length of stay, and readmission rates. Conclusions: G2CEP/BS are associated with reduced rates of SSI-specific and infectious complications in stented patients undergoing open elective PD. In patients without prior biliary drainage, G1CEP seems to provide adequate antimicrobial coverage. abstract_id: PUBMED:18831304 Antibiotics and perioperative infections. Surgical site infections remain a significant contributor to postoperative morbidity and mortality. It is estimated that 500,000 patients suffer from this complication annually. Among other interventions, appropriate administration of prophylactic antibiotics has been shown to decrease the risk of perioperative infections. The goal of prophylactic antibiotic administration is to decrease the risk of contamination of the wound from skin flora in the case of clean procedures, and to add coverage of organisms that are anticipated to contaminate the surgical field, as in open bowel procedures. The purpose of this review is to summarize the guiding principles of perioperative antibiotic administration including selection, timing, redosing, and discontinuation. In addition, special topics including likely organisms for classes of surgical procedures, endocarditis prophylaxis, and management strategies for patients with allergies will be reviewed. abstract_id: PUBMED:29192749 Adequate Administration of Perioperative Antibiotics in Orthopedic Patients: Quality Assessment Study. The Centers for Disease Control and Prevention (CDC) estimate that nearly 500,000 surgical site infections (SSIs) occur annually.[3] SSIs remain a substantial cause of morbidity and mortality among hospitalized patients. For this reason, we developed a quality assessment study with a perioperative antibiotic protocol that included antibiotic type, weight-dependent dose, and intraoperative re-dosing regimen for the proper administration of prophylactic antibiotics prior to orthopedic surgery. We restricted our study to knee replacement and hip replacement surgeries. All records from September 2014 to June 2015 were taken into consideration (78 cases). Before February 2015 (protocol implementation date), only 33% (13 of 39 cases) were correctly dosed. After protocol establishment, 44% (17 of 39 cases) were correctly dosed. Although our p-value suggests no statistical significance, there was an increasing trend of adequate antibiotic administration. In conclusion, we need to further educate our nursing staff and implement monthly audits of our cases to correct our mistakes and ensure that this aspect of practice will remain in the forefront of perioperative orthopedic surgery care. abstract_id: PUBMED:36358179 Perioperative Antibiotics in Appendicitis-Do We Need to Adjust Therapy for the Elderly? A Matched Pair Analysis. (1) Background: Perioperative antibiotics for acute complicated appendicitis are a standard of care. While there are plenty of trials for pediatric patients, data for elderly patients are scarce.
The goal of our study was to evaluate whether elderly patients carry more resistant bacteria and thus have less favorable outcomes after an appendectomy that may warrant intensified perioperative antibiotic treatment. (2) We present a retrospective single-center matched pair (139 patients each) analysis of perioperative and microbiological outcomes of an elderly appendicitis cohort (i.e., older than 60 years) compared with a younger adult cohort (i.e., ≤60 years). Both groups were matched one for one according to gender, duration of symptoms, C-reactive protein at presentation and whether they presented with uncomplicated or complicated appendicitis. (3) Results: After matching, complicated appendicitis was present in 76.3% of both groups. Elderly patients more frequently received preoperative diagnostic CT (p < 0.001) than the young. Both operative strategy (laparoscopic appendectomy in 92.1% each) and duration of surgery (57 vs. 56 min) were equal in both groups. Postoperative antibiotics were prescribed in ~57% for a median of 3 days in both groups and antibiotic selection was similar. The incidence of surgical site infections was higher in the young (12.2% vs. 7.9%) yet not statistically significant. There was no difference in culture positivity or bacterial spectrum and the elderly cohort did not present with increased resistant bacterial isolates. (4) Conclusions: While overall resistant bacterial strains were rare, perioperative outcomes between the young and the elderly did not differ and warranted neither longer nor intensified antibiotic treatment. abstract_id: PUBMED:31487057 Physiologically based pharmacokinetic evaluation of cefuroxime in perioperative antibiotic prophylaxis. Aims: Adequate plasma concentrations of antibiotics during surgery are essential for the prevention of surgical site infections. We examined the pharmacokinetics of 1.5 g cefuroxime administered during induction of anaesthesia with follow-up doses every 2.5 hours until the end of surgery. We built a physiologically based pharmacokinetic model with the aim of ensuring adequate antibiotic plasma concentrations in a heterogeneous population. Methods: A physiologically based pharmacokinetic model (PK-Sim®/MoBi®) was developed to investigate unbound plasma concentrations of cefuroxime. Blood samples from 25 thoracic surgical patients were analysed with high-performance liquid chromatography. To evaluate optimized dosing regimens, physiologically based pharmacokinetic model simulations were conducted. Results: Dosing simulations revealed that a standard dosing regimen of 1.5 g every 2.5 hours reached the pharmacokinetic/pharmacodynamic target for Staphylococcus aureus. However, for Escherichia coli, >50% of the study participants did not reach predefined targets. Effectiveness of cefuroxime against E. coli can be improved by administering a 1.5 g bolus immediately followed by a continuous infusion of 3 g cefuroxime over 3 hours. Conclusion: The use of cefuroxime for perioperative antibiotic prophylaxis to prevent staphylococcal surgical site infections appears to be effective with standard dosing of 1.5 g preoperatively and follow-up doses every 2.5 hours. In contrast, if E. coli is relevant in surgeries, this dosing regimen appears insufficient. With our derived dose recommendations, we provide a solution for this issue. abstract_id: PUBMED:16047986 Wise use of perioperative antibiotics.
NOSOCOMIAL SURGICAL SITE INFECTIONS (SSIs), especially those caused by antibiotic-resistant organisms, are a major concern for perioperative nurses. Administering the correct antibiotic at its effective dose within the optimal time frame can help prevent SSIs. THE SIGNIFICANT RELATIONSHIP between microbial resistance and overuse of antibiotics justifies an examination of practices and a move away from widespread antibiotic administration. NEW ANTIBIOTIC GUIDELINES and research support recommendations for antibiotic administration for specific cardiac, colorectal, gynecologic, ophthalmologic, orthopedic, and urinary surgical procedures. Perioperative nurses can help protect their patients from nosocomial SSIs by using this clinical evidence. abstract_id: PUBMED:22020573 Perioperative antimicrobial prophylaxis Perioperative antimicrobial prophylaxis (AMP) is an important but not the only procedure to prevent surgical site infections. The effectiveness of AMP to prevent surgical site infections has been proven in numerous studies during the last decades and is part of national and international guidelines. The choice of the antibiotic as well as the duration, time point and mode of application strongly impact on the effectiveness of the prophylaxis. This article provides an overview on recommended antibiotics for AMP and their activity against the expected bacterial pathogens in elective surgery. Furthermore, the current spectrum of microorganisms most frequently isolated from surgical site infections and alternative antibiotic strategies are discussed. abstract_id: PUBMED:19209807 Surgical site infection in pancreas surgery and the use of perioperative antibiotics Objective: To analyze the pathogen profiles and the use of perioperative antibiotics after surgical site infection (SSI) in pancreas surgery. Method: We retrospectively analyzed the pathogen profiles and the use of perioperative antibiotics in 48 SSI patients who received pancreas operations from Jan 2001 to Dec 2005. Results: In total, 79 strains of pathogens (74 strains of bacteria) were isolated, including 17 strains of methicillin-resistant Staphylococcus aureus (MRSA), 17 strains of enterococcus species, 12 strains of Escherichia species, and 5 strains of enterobacteriaceae. Combinations of third-generation cephalosporins and antianaerobic agents were the main prophylactic therapies, with an average duration of 4.7 ± 1.2 days. Most SSI episodes happened 6-14 days postoperatively (median 9 days). Celiac infections were most common (n=39). Therapeutic antibiotics were mainly combinations of third-generation cephalosporins and antianaerobic agents, glycopeptides and carbapenems. Conclusions: SSI commonly occurs on postoperative days 6-14, with MRSA and enterococci as the common pathogens of SSI after pancreas surgery. Antibiotic resistance should be considered when applying prophylactic therapies. abstract_id: PUBMED:34065712 A Pilot Randomised Clinical Trial Comparing a Short-Term Perioperative Prophylaxis Regimen to a Long-Term Standard Protocol in Equine Colic Surgery. Background: For surgical interventions classified as clean or clean-contaminated, including laparotomy, guidelines in human and veterinary medicine recommend a short-term perioperative antibiotic prophylaxis (PAP). In equine colic surgery, however, PAP commonly exceeds 24 h.
Objectives: The aim of this study was to compare a single-shot to a 5-day lasting PAP considering surgical site infections (SSI) and other adverse effects probably associated with the particular antimicrobial regimen. Study Design: The study was designed as a randomised non-inferiority pilot study including horses subjected to colic surgery while receiving one of two distinct PAP regimens. Methods: All horses (n = 67) included in the study received the standard physical examination before and after surgery. Colic surgery was performed according to the current standard of the clinic. Horses were randomly assigned to two groups, receiving either the "single-shot" or the "5-day lasting" antibiotic prophylaxis. The "single-shot" group (n = 30) received penicillin and gentamicin only once before and, if needed, during surgery, whereas the "5-day lasting" group (n = 37) received antibiotics for five days. In addition to the standard laboratory examinations, serum amyloid A and fibrinogen were determined preoperatively and during five days after surgery. SSI, postoperative colitis and haemolytic anaemia were classified as postoperative complications potentially related to antibiotic use. Results: The outcome of this preliminary non-inferiority clinical trial showed that the occurrence of postoperative adverse events (i.e., SSI, postoperative colitis and haemolytic anaemia) lacked significant differences between the study groups. Main limitations: The main limitations of this study are the limited group sizes and our inability to blind the study. Conclusions: Single-shot PAP seems to be an alternative approach considering the 5-day lasting protocol commonly used in equine abdominal surgery. However, a proper hygiene management together with a close clinical and laboratory monitoring of the equine patient is indispensable. Answer: The SCIP-approved regimen for perioperative antibiotics may not provide adequate coverage for surgical site infections (SSI) in pancreaticoduodenectomy (PD). A study that retrospectively reviewed records of patients undergoing PD who received SCIP-approved perioperative antibiotics found a high SSI rate of 32.4 per 100 cases. The common organisms from wound cultures were Enterobacter and Enterococcus, which were resistant to cefoxitin, a SCIP-approved antibiotic. After changing the perioperative antibiotic to piperacillin-tazobactam, the SSI rate significantly decreased to 6.6 per 100 surgeries, which was an 80% reduction compared to the previous period. This suggests that piperacillin-tazobactam may be a more appropriate perioperative antibiotic for PD than the SCIP-approved regimen (PUBMED:23664266). Another study using the National Surgical Quality Improvement Program data analyzed the effectiveness of first-generation cephalosporins (G1CEP) versus second-generation cephalosporins or broad-spectrum antibiotics (G2CEP/BS) in patients undergoing open PD. It was found that in patients without prior biliary drainage, G1CEP provided adequate antimicrobial coverage. However, in patients with a biliary stent, G2CEP/BS were associated with reduced rates of SSI-specific and infectious complications, indicating that the SCIP-approved regimen might not be sufficient for stented patients (PUBMED:35933790). 
Overall, while SCIP-approved antibiotics may be adequate for some surgical procedures, the evidence suggests that for PD, especially in the presence of biliary stents, alternative antibiotics such as piperacillin-tazobactam or broad-spectrum antibiotics may provide better coverage and reduce the risk of SSIs (PUBMED:23664266; PUBMED:35933790).
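A note on the arithmetic behind the headline figure: the 80% reduction and the 0.204 relative risk follow directly from the two incidence proportions. The sketch below (standard-library Python only) reproduces them; the event counts are back-solved from the reported rates (32.4 per 100 of 34 PDs gives 11 SSIs; 6.6 per 100 of 106 gives 7 SSIs), so they are a reconstruction rather than published counts.

import math

def relative_risk(events_new, n_new, events_old, n_old, z=1.96):
    # Relative risk of the new regimen versus the old, with a log-scale Wald CI.
    p_new = events_new / n_new
    p_old = events_old / n_old
    rr = p_new / p_old
    # Standard error of log(RR) for two independent binomial proportions.
    se = math.sqrt(1 / events_new - 1 / n_new + 1 / events_old - 1 / n_old)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Back-solved counts: 11/34 SSIs with SCIP-approved antibiotics, 7/106 with piperacillin-tazobactam.
rr, lower, upper = relative_risk(7, 106, 11, 34)
print(f"RR = {rr:.3f} (95% CI {lower:.3f}-{upper:.3f})")  # RR = 0.204 (95% CI 0.086-0.485)

Running this recovers the abstract's reported interval, which is a useful sanity check on the back-solved counts.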
Instruction: Nursing students' prosocial motivation: does it predict professional commitment and involvement in the job? Abstracts: abstract_id: PUBMED:24862915 Nursing students' prosocial motivation: does it predict professional commitment and involvement in the job? Aim: This study investigated how prosocial motivation reported by nursing students in their final year of academic studies relates to career commitment and job involvement three years after graduation. Background: Most studies investigating nurses' prosocial motivation for choosing the nursing profession examine only their prosocial motivation for entering nursing training; they do not investigate whether this motivation is associated with job involvement or commitment to the profession. Design: A longitudinal survey design was used. Method: The present longitudinal study included 160 nurses. In their final academic year of spring 2007, the nurses received a questionnaire about their motivation for entering nursing. Three years after graduation, spring 2010, they received another questionnaire about their level of job involvement and career commitment. Results: The results showed that prosocial motivation measured in their last academic year was related to career commitment three years after graduation, but unrelated to job involvement. Conclusion: The results indicated that prosocial motivation is important in identifying with the profession but not necessarily for personal involvement in the job. The study gives important knowledge on how a commonly reported motivation for entering nursing relates to the nurses' attitudes about their work life. abstract_id: PUBMED:29870870 The effect of nursing internship program on burnout and professional commitment. Background: Professional commitment is defined as a belief in and acceptance of the values of the chosen profession, an effort to actualize these values, and a desire for self-improvement. Nurses' professional and organizational commitment are influenced by factors such as job stress, job satisfaction and burnout. Objective: This study was conducted to determine the effect of a nursing internship program on professional commitment and burnout of senior nursing students. Design: A quasi-experimental study with a pretest and posttest without control group design was used. Methods: Students who were attending the nursing internship program and agreed to participate were included in the study. The sample consisted of 101 students. Data were collected with a questionnaire, the Burnout Measure short version and the Nursing Professional Commitment Scale. Results: After the nursing internship, 77.2% were pleased to study nursing, 83.2% were pleased to be a senior student, 55.4% did not have any intention to change their profession, 81.2% wanted to work as nurses, and 82.2% were planning career advancement in nursing. Of the students, 34.7% and 43.6% were found to experience burnout before and after the nursing internship, respectively (p < 0.05). When the students' mean pre-test and post-test scores on the nursing professional commitment scale were compared, a significant difference was found between mean scores on the total score and "maintaining membership" subscale (p < 0.05). Conclusion: This study revealed that after the nursing internship, burnout and professional commitment levels of the students increased. abstract_id: PUBMED:28252193 Professional commitment: Does it buffer or intensify job demands?
The purpose of this study is to investigate whether professional commitment can be seen as a moderator in the relationship between job demands and emotional exhaustion among Norwegian nurses. Inspired by the job demands-resources model, this study explores whether having a strong commitment to the nursing profession can be seen as a resource that buffers the effect of job demands on emotional exhaustion or, conversely, intensifies the impact of job demands. A survey that comprised Norwegian nurses who had graduated three years previously (N = 388) was conducted. Multiple regression was performed to test the hypothesis. The results provide support for a buffering effect; thus, individuals with a higher degree of professional commitment conveyed a weaker association between job demands and emotional exhaustion compared with nurses with a lower degree of commitment. Developing a better understanding of the potential buffering effect of professional commitment is of great interest. The present study is the first to utilize professional commitment as a resource within the job demands-resources framework. abstract_id: PUBMED:35568458 Commitment, job satisfaction and personality: A cross sectional study of generational cohorts in nursing students. Background: The current domestic nursing workforce is the most generationally diverse in history, with a wide range of life experiences. Purpose: The purpose of this study was to investigate generational differences in organizational commitment, job satisfaction and personality traits in a sample of nursing students preparing for the workforce. Methods: This cross-sectional study used data collected from pre-licensure final semester nursing students and degree completion students (n = 313) from three different schools about job satisfaction, personality traits and organizational commitment. Results: Findings showed the scores between the generations were not significantly different for job satisfaction (p = 0.86) and organizational commitment (p = 0.40). Significant differences were found between the generations for the personality measures of agreeableness (p = 0.02), conscientiousness (p = 0.01), and emotional stability (p = 0.00). Discussion/conclusions: Understanding generational differences in nursing students entering the workforce can serve academia and practice partners to focus curriculum and resources appropriately to prepare the next generation of nurses for the challenges in a time of crisis. abstract_id: PUBMED:37690440 Professional commitment and related factors among nursing undergraduates: A latent profile analysis. Objective: To identify different subgroups of nursing professional commitment among nursing undergraduates and to analyze the related factors of the different subgroups. Methods: A cross-sectional study using the professional commitment scale with 430 nursing undergraduates was conducted. Latent profile analysis was used to explore the nursing students' professional commitment subgroups, and multivariate logistic regression was used to analyze the related factors of nursing professional commitment. Results: The overall mean score for professional commitment in nursing was 75.07 ± 14.715. There were three different subgroups of professional commitment characteristics, namely the "high professional commitment group" (20.465%), the "medium professional commitment group" (66.279%), and the "low professional commitment group" (13.256%).
Dissatisfaction with clinical placement (OR = 3.556, p = 0.008) and self-enrollment (OR = 0.186, p = 0.002) were significantly related factors for the low-level and medium-level nursing commitment groups. Students' origin in western China (OR = 1.869, p = 0.042) significantly influenced the medium-level specialty commitment group. Conclusions: Most nursing undergraduates have been categorized in the "medium professional commitment group". Students who were dissatisfied with their clinical experience and chose the nursing profession involuntarily were the main factors for low-level and medium-level nursing professional commitment. The clinical practice environment should be further improved to enhance the clinical experience satisfaction of nursing undergraduates; at the same time, the professional value cultivation of students whose choice of nursing profession was not made by themselves should be strengthened to improve the level of professional commitment of nursing undergraduates. abstract_id: PUBMED:27125166 Professional commitment and attributional style of medical-college nursing students in China: A cross-sectional study. Background: The persistent shortage of nurses has become a common phenomenon around the world. Nursing students' professional commitment affects their decision to choose nursing as a professional career. However, there is little knowledge about the professional commitment of nursing students, especially its relationship with attributional style. Objectives: To explore the professional commitment of nursing students and to identify the role of attributional style in nursing students' professional commitment. Design: Cross-sectional study design. Settings And Participants: This study was conducted at three different medical colleges in Jinan, Shandong Province, China and included 1230 nursing students. A total of 1223 valid questionnaires were returned, yielding a response rate of 99%. Methods: The Professional Commitment Scale and the Multidimensional-Multiattributional Causality Scale (MMCS) were used to investigate nursing students' professional commitment and attributional style. Basic demographic information about the nursing students was collected. Stepwise regression analysis was used to analyze the data. Results: The mean scores on the Professional Commitment Scale were 3.54 ± 5.29 (mean ± SD). Professional commitment of first-year students was higher than that of second-year students. Their residence affected their affective commitment, ideal commitment, and continuance commitment, which were the sub-dimensions of professional commitment. Effort and ability attribution positively influenced professional commitment; however, context attribution negatively influenced it. Conclusions: It is imperative to take effective measures to improve nursing students' professional commitment. Our findings indicated that attributional style had a significant effect on professional commitment; thus, effort and ability attribution need to be emphasized. abstract_id: PUBMED:34507259 Impact of exemplar education program on the professional commitment of nursing students: A quasi-experimental study. Background: High attrition rates and nursing workforce shortages have been a prevailing global challenge. Professional commitment is a known key factor for attracting and retaining the nursing workforce.
Objective: To design, implement and evaluate the impact of a one-week structured education program, underpinned by the exemplar education strategy, on the professional commitment of undergraduate nursing students. Design: A quasi-experimental study was designed with pre- and post-intervention measurements. The intervention was a structured, four-module education program underpinned by the exemplar education strategy. Setting: Department of Nursing, Qianjiang College, Hangzhou Normal University, Zhejiang, China. Participants: 152 third-year nursing students who attended the education program and consented to participate were enrolled in the research. Methods: The intervention, the education program, was delivered over 16 h in one week of an 18-week semester. It was sequenced in four modules: expert lectures (3 h each), visiting a hospital (3 h), alumni salon (3 h), and submitting a reflection report (4 h). Professional commitment was measured by the Nursing Professional Commitment Scale (NPCS) administered both pre- and post-intervention. Results: The total professional commitment measurement score was significantly improved after the intervention. There were statistically significant differences before and after intervention in the four dimensions of professional commitment, namely, belief in goals and values, willingness to make an effort, desire to stay in the profession, and intrinsic positive value of work. Conclusion: The one-week professional education program underpinned by the exemplar education strategy effectively improved the level of professional commitment of the undergraduate nursing students. The program can be transferred into the curriculum to improve nursing students' professional commitment. The methodology can be applied in other aspects of nursing professional development. abstract_id: PUBMED:28322647 Career Decision-Making Self-Efficacy and Professional Commitment Among Master Nursing Students. Developing countries face a shortage of nurses with higher education, such as those with a master's degree. However, few studies have investigated the interaction between career decision and professional commitment (PC) of nursing students, especially for postgraduates. This study performed correlation analysis for career decision-making self-efficacy (CDMSE) and PC of 545 nursing postgraduate students from nursing schools at 19 universities or colleges throughout mainland China, who came from different regions, and possessed different years of study and different types of degrees as well as part-time job experiences. Data reliability and validity were confirmed for both the Career Decision-Making Self-Efficacy Scale for university students (CDMSE-R) and the Professional Commitment Questionnaire (PCQ). We found positive correlations between factors of PC and CDMSE. The score of CDMSE was different between regions, degree types, grades, and part-time job experiences. According to our results, we suggest nursing education or career advisory services should promote the PC of nursing students according to factors of their CDMSE scores. abstract_id: PUBMED:36152589 The level and influencing factors of graduating nursing students' professional commitment from the perspective of Ecological Systems Theory: A cross-sectional study. Background: Increased professional commitment is essential for relieving the nursing workforce shortage, which is exacerbated by the COVID-19 pandemic. The professional commitment of graduating nursing students is a powerful predictor of their work commitment.
However, limited information is available regarding graduating nursing students' professional commitment. Existing studies investigating nursing students' professional commitment are limited by their lack of theoretical foundation. Objectives: To investigate the level of graduating nursing students' professional commitment and its multilevel influencing factors from the perspective of the Ecological Systems Theory in the early days following the COVID-19 outbreak. Design: A descriptive cross-sectional study. Setting: Three educational institutions in Shanghai, China. Participants: 513 nursing students who were graduating with an associate or bachelor's degree. Methods: The independent variables were measured by a self-designed questionnaire. The dependent variable, professional commitment, was measured by the Nursing Professional Commitment Scale. Hierarchical regression analyses, which allowed the independent variables to be entered in order, were performed to identify the significant predictor variables of the professional commitment and its dimensions. Results: The level of professional commitment was 100.15 ± 20.35 (score ranged between 34 and 136). The individual factors (degree, whether they had received a scholarship during the past academic years, ΔR² = 0.142), family factors (parents and siblings' attitudes towards one's majoring in nursing, ΔR² = 0.153), educational factors (academic faculty's belief in the nursing profession, leaders' emphasis on the nursing profession, satisfaction with clinical instructors' role modeling, ΔR² = 0.097), and social factors (reason for majoring in nursing, perceived nurse-patient relationship, ΔR² = 0.153) were significant predictors of the graduating nursing students' professional commitment (R² = 47.6%, F = 32.277, p < 0.001). Conclusions: The graduating nursing students had a moderate level of professional commitment, which needed to be improved. Nursing educators should comprehensively consider various factors to understand the relationship between individuals and environmental systems, and implement targeted interventions to shape the positive professional values of nursing students. abstract_id: PUBMED:37681475 Occupational stigma consciousness and professional commitment among Chinese nursing students: Path analysis of survey data. Pervasive stigma exists around nursing in Chinese society and can affect nursing students' commitment to the profession. Based on the resource conservation theory, this study aimed to investigate the relationship between occupational stigma consciousness and psychological empowerment in nursing students' professional commitment in China. Data were collected from November 2021 to May 2022, and 1189 nursing students participated in the study. Occupational stigma consciousness, professional commitment, and psychological empowerment were assessed using a questionnaire. The path analysis indicated that occupational stigma consciousness negatively predicted professional commitment (β = -0.127, p < 0.001) and psychological empowerment (β = -0.329, p < 0.001), and psychological empowerment mediated the relationship between occupational stigma consciousness and professional commitment. To resist occupational stigma consciousness and improve nursing students' professional commitment, nursing policies and education should focus on improving psychological empowerment. Answer: The relationship between nursing students' prosocial motivation and their professional commitment and job involvement has been explored in several studies.
According to the findings from these studies, prosocial motivation reported by nursing students in their final year of academic studies is related to career commitment three years after graduation, but it is not necessarily related to job involvement (PUBMED:24862915). This suggests that while prosocial motivation may be important for identifying with the nursing profession, it does not automatically translate into personal involvement in the job. Another study found that after a nursing internship, students' professional commitment levels increased, although burnout also increased (PUBMED:29870870). This indicates that practical experience in the field can enhance commitment to the profession, despite the potential for increased stress. Professional commitment can also act as a buffer against job demands, reducing the association between job demands and emotional exhaustion (PUBMED:28252193). This suggests that a strong professional commitment can serve as a protective factor for nurses in the workplace. Additionally, a study on generational cohorts of nursing students found no significant differences between generations in terms of job satisfaction and organizational commitment, although there were differences in personality traits (PUBMED:35568458). This implies that generational factors may not significantly impact professional commitment and job satisfaction among nursing students. A cross-sectional study identified different subgroups of professional commitment among nursing undergraduates and found that dissatisfaction with clinical placement and involuntary choice of the nursing profession were significant factors for lower levels of professional commitment (PUBMED:37690440). This highlights the importance of positive clinical experiences and voluntary career choice in fostering professional commitment. In summary, prosocial motivation in nursing students does predict professional commitment to some extent, but its impact on job involvement is less clear. Other factors, such as practical experience, professional commitment as a buffer against job demands, generational differences, and satisfaction with clinical experiences, also play significant roles in shaping professional commitment and potentially job involvement.
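The buffering hypothesis tested in PUBMED:28252193 is, operationally, a negative interaction term in a multiple regression of emotional exhaustion on job demands and professional commitment. The following sketch illustrates the idea on simulated data; the variable names, effect sizes, and use of statsmodels are illustrative assumptions, not details taken from the study.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 388  # cohort size borrowed from the Norwegian study; the data below are simulated
demands = rng.normal(size=n)
commitment = rng.normal(size=n)
# Build in a buffering effect: demands raise exhaustion less when commitment is high.
exhaustion = 0.5 * demands - 0.2 * commitment - 0.3 * demands * commitment + rng.normal(size=n)
df = pd.DataFrame({"demands": demands, "commitment": commitment, "exhaustion": exhaustion})

# 'demands * commitment' expands to both main effects plus their interaction.
model = smf.ols("exhaustion ~ demands * commitment", data=df).fit()
print(model.params)  # a negative 'demands:commitment' coefficient indicates buffering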
Instruction: Prevalence and predictors of ocular manifestations of RA: is there a need for routine screening? Abstracts: abstract_id: PUBMED:17457954 Prevalence and predictors of ocular manifestations of RA: is there a need for routine screening? Background: People with rheumatoid arthritis (RA) have an increased risk for eye problems caused by associated conditions or medication side-effects. Consequent visual impairment may increase the risk of falls or difficulties self-administering medications. Aims: The aim of the present study was to estimate the prevalence and predictors of ocular manifestations and visual impairment in a local sample of people with RA. The Visual Functioning Questionnaire (VFQ-25) was evaluated as a screening tool in people with RA. Methods: Seventy-five participants with RA attended a visual screening clinic. Demographic, medication and disease characteristics were recorded, a full ophthalmological assessment was performed by an expert ophthalmologist and the VFQ-25, Health Assessment Questionnaire and Self-administered Comorbidity Questionnaire were completed. Results: Twenty-nine participants (38.7%) had impaired visual acuity; this was correctable by appropriate refraction in 93.1%. The prevalence of cataracts was 22.7% and this was predicted by older age and steroid use. An abnormal Schirmer's test, suggesting dry eyes, was seen in 70.7% of participants but only 12.0% were using artificial tears. Answers to the VFQ-25 suggested misinterpretation of questions as relating to disability attributed to arthritis rather than caused by visual impairment. Conclusions: Visual impairment does not appear to be particularly prevalent in RA, obviating the need for a screening programme. Utilization of the VFQ-25 as a screening tool in RA requires further consideration. The high prevalence of cataracts in participants on steroids emphasizes the need to prescribe the minimum required dose. The high prevalence of dry eyes suggests that a Schirmer's test should be performed regularly, with prescription of artificial tears if required. abstract_id: PUBMED:30049101 Ocular manifestations in rheumatoid arthritis: 24 cases in Abidjan Objective: To determine the prevalence and the main ocular manifestations in rheumatoid arthritis in Abidjan. Patients And Methods: Prospective and descriptive study of 24 patients with rheumatoid arthritis fulfilling the criteria of the American College of Rheumatology, seen from September 2003 to August 2004 in the department of rheumatology at the University Hospital of Cocody. The patients underwent an eye examination including visual acuity, slit-lamp examination, ocular fundus, color vision and visual field. Results: Ocular manifestations were observed in 9 of 24 patients, representing a prevalence of 37.5%. There were three cases of decreased visual acuity, three of cataract, two of keratitis and one of anterior uveitis. No abnormality of the fundus, color vision or visual field was found. These manifestations were found between 5 and 10 years (four cases) and after 10 years (five cases) of evolution of the disease. The duration of rheumatoid arthritis influenced the occurrence of ocular manifestations (P = 0.00). Conclusion: Ocular manifestations in rheumatoid arthritis are rare in our practice and mainly affected visual acuity, the ocular adnexa and the anterior segment of the eye. abstract_id: PUBMED:32121664 Ocular Manifestations in Patients with Philadelphia-Negative Myeloproliferative Neoplasms.
The major complications of Philadelphia-negative (Ph-Negative) myeloproliferative neoplasms (MPNs) are thrombosis, haemorrhage and leukemic transformation. As systemic and haematological diseases, MPNs have the potential to affect many tissues and organs. Some complications lead to the diagnosis of MPNs, but other signs and symptoms are often misdiagnosed or neglected as signs of MPN disease. Therefore, we reviewed the current literature to investigate and delineate the clinical manifestations seen in the eyes of Ph-negative MPN patients. We found that ocular manifestations are common among patients with MPNs. The most frequently described manifestations are due to the consequences of haematological abnormalities causing microvascular disturbances and hyperviscosity. More serious and vision-threatening complications, such as thrombotic events in the eyes, have been repeatedly reported as well. These ocular symptoms may precede more serious extraocular complications. Accordingly, combined ophthalmological and haematological management has the potential to discover these diseases earlier and prevent morbidity and mortality in these patients. Furthermore, routine ophthalmological screening of all newly diagnosed MPN patients may be a preventive approach for early diagnosis and timely treatment of the ocular manifestations. abstract_id: PUBMED:33520133 Ocular Manifestations of COVID-19: A Systematic Review and Meta-analysis. Several studies have reported the characteristics of Coronavirus disease 2019 (COVID-19), yet there is a gap in our understanding of the ocular manifestations of COVID-19. In this systematic review and meta-analysis, we investigated the prevalence of ocular manifestations in COVID-19 patients. We searched Pubmed, Embase, Scopus, Web of Science, and medRxiv from December 1, 2019 to August 11, 2020. Two independent reviewers screened the articles, abstracted the data, and assessed the quality of included studies in duplicate. Thirty-eight studies were eligible after screening of 895 unique articles, with a total of 8,219 COVID-19 patients (55.3% female; n = 3,486 out of 6,308 patients). Using data extracted from cross-sectional studies, we performed random-effects meta-analyses to estimate the pooled prevalence of ocular symptoms along with 95% confidence interval (CI). The prevalence of ocular manifestations was estimated to be 11.03% (95% CI: 5.71-17.72). In the studies that reported the details of observed ocular symptoms, the most common ocular manifestations were dry eye or foreign body sensation (n = 138, 16%), redness (n = 114, 13.3%), tearing (n = 111, 12.8%), itching (n = 109, 12.6%), eye pain (n = 83, 9.6%) and discharge (n = 76, 8.8%). Moreover, conjunctivitis had the highest rate among reported ocular diseases in COVID-19 patients (79 out of 89, 88.8%). The results suggest that approximately one out of ten COVID-19 patients show at least one ocular symptom. Attention to ocular manifestations, especially conjunctivitis, can increase the sensitivity of COVID-19 detection among patients. abstract_id: PUBMED:36620710 Ocular Manifestations in Patients with Sensorineural Hearing Loss. Identification of ocular manifestations in patients with sensorineural hearing loss (SNHL) can have a large impact on the outcome and treatment of pediatric patients. Due to the common co-incidence of ocular manifestations and SNHL in children, both ophthalmologic and hearing loss screening and routine examinations must be conducted to minimize adverse outcomes and worsening of pathology.
Early evaluation and diagnosis are imperative for intervention and further development of the patient. Co-incidence requires a thorough evaluation that includes a comprehensive history, examination, and diagnostic testing. In this article, a literature review was conducted to analyze the presentations of various diseases and syndromes, such as Alport Syndrome, Waardenburg Syndrome, Norrie Disease, Usher Disease, Stickler Syndrome, Marfan Syndrome, Congenital Rubella, and Hereditary Optic Neuropathies. We divided the various ocular pathologies into anterior and posterior segment presentations and associated systemic findings for better understanding. Additionally, this review aims to include an update on the management of patients with both ocular and hearing loss manifestations. abstract_id: PUBMED:33802255 Ocular Manifestations in Psoriasis Screening (OcMaPS) Questionnaire: A Useful Tool to Reveal Misdiagnosed Ocular Involvement in Psoriasis. Psoriasis is an immune-mediated, chronic inflammatory disease, which mainly affects the skin, although it has systemic pathological effects. Comorbidities of psoriasis include ocular disorders, which are often nonspecific or mildly symptomatic. The aim of this study was to show the importance of ocular-disease screening in psoriatic patients using the Ocular Manifestations in Psoriasis Screening (OcMaPS) questionnaire. Patients suffering from moderate-to-severe psoriasis referred to our outpatient clinic were consecutively enrolled. Each patient was asked to complete a screening questionnaire (OcMaPS). Patients reporting ocular symptoms were referred for an ophthalmological examination. A total of 372 patients were enrolled in the study. Ocular symptoms were detected in 39 patients (10.5%), and 37 patients were referred to ophthalmological examination, which confirmed the presence of ocular manifestation in 30 patients. There were three cases (10%) of uveitis, 14 (46.6%) of dry eye and 13 (43.3%) of cataract, in progress or already treated with surgery. In the remaining seven patients, no ocular manifestations were found. Ocular manifestations in psoriatic patients are not rare. It is important to be aware of ocular symptoms in psoriatic patients and to screen them (with a consultation or the OcMaPS questionnaire), which leads to earlier diagnosis and treatment. abstract_id: PUBMED:36147389 An observational study on ocular manifestations in patients of psoriasis in a tertiary care hospital in western India. Background: The aim of the study is to observe the ocular manifestations in patients with psoriasis. Methods: All cases of psoriasis diagnosed by the dermatology department of this tertiary care hospital were included in this study. Relevant details of the history pertaining to disease duration, type of psoriasis, and treatment undertaken, including ocular symptoms, were obtained. Disease severity was quantified using the PASI score. Complete ocular examination, including intraocular pressure, Schirmer I and II tests, and Tear Film Breakup Time (TBUT), was carried out for all the patients. Results: Of 126 patients with psoriasis, ocular manifestations were seen in 76 patients (60.3%). Dry eyes (27%) and blepharitis (15.9%) were the most common ocular manifestations. Uveitis was seen in 3.2% of the patients, of whom 75% had HLA-B27-positive psoriatic arthritis, which was statistically significant (p = 0.001). There was no statistical correlation between duration of the disease and ocular manifestations (p = 0.077, chi-square test).
The ocular manifestations were significantly more common in patients with PASI scores above 10 than in those with lower scores (p = 0.028). Conclusions: In our study, the prevalence of ocular manifestations was 60.3%, which increased with increasing PASI score. Dry eyes and blepharitis were the most common manifestations. Hence, routine ocular examination is recommended in patients with psoriasis. abstract_id: PUBMED:35459223 Prevalence of ophthalmological manifestations in pediatric and adolescent populations with Down syndrome: a systematic review of the literature. Background: Down syndrome (DS) is a chromosomal anomaly that is characterized by an extra chromosome 21. Ophthalmological manifestations have a high prevalence in patients with DS. Purpose: To review the scientific evidence and estimate the prevalence of ophthalmological manifestations in the pediatric population with DS. Data Sources: Electronic databases including MEDLINE, Cochrane Library, EMBASE, ScienceDirect, and LILACS. Study Eligibility Criteria: Published observational studies with available and original data were included. Articles were excluded if the study design was a review, letter to the editor, case report, case series, or systematic review and if the subjects had ophthalmological manifestations secondary to other conditions. Participants And Interventions: Pediatric and adolescent population with DS and with ophthalmological evaluation. Study Appraisal And Synthesis Methods: A data collection form was designed in Excel. Five reviewers extracted relevant details about the design and results of each study. The quality of the studies was assessed by applying the tools for systematic reviews of prevalence and incidence from the Joanna Briggs Institute. We calculated the weighted prevalence of ophthalmological manifestations, considering only the studies reporting the measurement of each manifestation. Results: Twenty-two articles (from 15 countries, published during 1994-2020) were included in the present systematic review. Ocular manifestations were observed in 85% of the studied pediatric and adolescent populations with DS. The most frequent ones were slanting fissures, oblique fissures, epicanthus, and epiblepharon. Conclusion: The ocular manifestations in the pediatric and adolescent populations with DS are varied, and some can irreversibly affect visual development. Screening of the pediatric population with DS should be conducted from the first months of age and continued annually. Systematic Review Registration: PROSPERO CRD42019127717. abstract_id: PUBMED:25364370 Ocular manifestations in a child with systemic brucellosis. Brucellosis is a zoonotic disease with widespread prevalence. It presents with a wide range of manifestations, often with non-specific clinical signs and symptoms. Brucellosis also may cause different manifestations in the eyes, such as uveitis, keratitis, conjunctivitis and neuro-ophthalmic defects. Ocular brucellosis is rare among children. Herein, we present a 7-year-old girl with systemic and ocular brucellosis. After treatment with systemic steroid and antibiotics, her signs and symptoms disappeared. Since early treatment is important in preventing permanent visual loss and the other complications of ocular brucellosis, examination of the eyes in brucellosis patients must not be overlooked by clinicians working in this field. abstract_id: PUBMED:37223599 Ocular Manifestations of Spondyloarthritis.
Introduction: Ocular manifestations are seen in several autoimmune disorders including spondyloarthritis (SpAs). Though acute anterior uveitis (AAU) is the hallmark of SpAs, episcleritis and scleritis are also seen. Genetic and geographical factors impact the prevalence of AAU; however, available evidence supports that HLA-B27 positivity is closely associated with it. Objective: The current narrative review is focused on clinical features of AAU and its management. Methods: For the purpose of this narrative review, the literature search covered MEDLINE, Google Scholar and EMBASE databases and included articles published in English from January 1980 till April 2022 using the following keywords: "Ankylosing spondylitis", "Spondyloarthritis", "Eye manifestations", "Ocular", "Uveitis" and "Arthritis". Conclusion: Patients with SpA may suffer from different ocular complications, with uveitis being the most common one. Biological therapy is a promising medical strategy enabling therapeutic goals to be achieved with minimal adverse effects. An effective management strategy for patients with AAU associated with SpA could be formulated by the collaboration between ophthalmologists and rheumatologists. Answer: The prevalence of ocular manifestations in patients with rheumatoid arthritis (RA) varies across studies. In one study, 38.7% of participants with RA had impaired visual acuity, which was correctable by appropriate refraction in 93.1% of cases. The prevalence of cataracts was 22.7%, predicted by older age and steroid use. An abnormal Schirmer's test, suggesting dry eyes, was seen in 70.7% of participants, but only 12.0% were using artificial tears. However, the study concluded that visual impairment does not appear to be particularly prevalent in RA, suggesting that there may not be a need for a routine screening program. The study also noted that the utilization of the Visual Functioning Questionnaire (VFQ-25) as a screening tool in RA requires further consideration (PUBMED:17457954). Another study in Abidjan found ocular manifestations in 37.5% of patients with RA, with the main manifestations affecting visual acuity, the ocular adnexa, and the anterior segment of the eye. These manifestations were more common between 5 and 10 years of disease evolution and after 10 years. The study suggested that the duration of RA influenced the occurrence of ocular manifestations (PUBMED:30049101). Given these findings, while ocular manifestations are not uncommon in RA, the need for routine screening may depend on various factors, including the age of the patient, duration of the disease, and steroid use. The high prevalence of dry eyes suggests that regular performance of a Schirmer's test with prescription of artificial tears if required could be beneficial. However, the decision to implement routine screening programs should be based on a careful consideration of the prevalence and predictors of ocular manifestations in the specific patient population, as well as the potential benefits and costs associated with such a program.
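For readers unfamiliar with how a pooled prevalence such as the 11.03% figure in PUBMED:33520133 is obtained, the sketch below implements DerSimonian-Laird random-effects pooling of logit-transformed prevalences. The study counts fed to it are hypothetical placeholders (the review's 38 study-level datasets are not reproduced in the abstract), and logit pooling is one common choice among several.

import math

def pooled_prevalence(events, totals, z=1.96):
    # DerSimonian-Laird random-effects pooling of logit-transformed prevalences.
    y, v = [], []
    for e, n in zip(events, totals):
        e, n = e + 0.5, n + 1.0  # continuity correction for small cells
        p = e / n
        y.append(math.log(p / (1 - p)))   # logit prevalence
        v.append(1 / e + 1 / (n - e))     # within-study variance
    w = [1 / vi for vi in v]              # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)   # between-study variance
    w_re = [1 / (vi + tau2) for vi in v]      # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    expit = lambda x: 1 / (1 + math.exp(-x))  # back-transform to a proportion
    return expit(mu), expit(mu - z * se), expit(mu + z * se)

# Hypothetical study-level counts, standing in for the review's 38 studies.
print(pooled_prevalence([12, 30, 5, 60], [150, 200, 90, 400]))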
Instruction: TVs in the bedrooms of children: does it impact health and behavior? Abstracts: abstract_id: PUBMED:21130109 TVs in the bedrooms of children: does it impact health and behavior? Objectives: To (1) determine socio-demographic characteristics associated with a TV in the bedroom (BTV) and (2) examine the relationship of BTV, independent of total viewing time, with social and behavioral characteristics. Methods: Children 6-17 years from the 2007 US National Survey of Children's Health were included (n=48,687). BTV, daily TV viewing time, demographic, behavioral and social outcomes (community involvement, social skills, health habits and status, and family) were examined using logistic regression, and adjusted for total viewing time. Results: Overall prevalence of BTV was 49.3% in American children. Older age, non-Hispanic Black (71.3%), Hispanics (56.3%), higher level of poverty (>56.2%), non-two-parent biological family structure (>62.6%), Midwest (47.1%), Northeast (46.7%), South Atlantic (56.4%) and South Central (59.8%) region of the country were associated with higher odds of BTV. Female gender (52.7%) and residence in Alaska (33.0%) were associated with lower prevalence of BTV. BTV was associated with higher prevalence of exhibiting problematic social behaviors (29%) and overweight status (44%). BTV was significantly associated with lower prevalence of regular family meals (13%), engagement in school (16%), participation in extracurricular activities (31%), regularly sleeping enough (20%), and participation in community service (25%) after adjustment for total viewing time. Conclusions: BTV appears associated with more social and behavioral indices than previously reported, in addition to total viewing time. abstract_id: PUBMED:19420181 Televisions in the bedrooms of racial/ethnic minority children: how did they get there and how do we get them out? The purpose of this study was to describe the prevalence of TVs in the bedrooms of an urban, largely racial/ethnic minority population of children and parents' reasons for putting the TV in their child's room. The authors surveyed 200 parents of children aged 2 to 13 years in a primary care clinic; 57% of the children were non-Hispanic black, 33% were Hispanic. Sixty-seven percent of all children had a TV in the room where they slept; high rates of TVs were present in bedrooms of black (70%) and Hispanic (74%) children compared with white children (22%). The top 3 reasons parents cited for putting a TV in the room where their child sleeps were (a) to keep the child occupied so that the parent could do other things around the house, (b) to help the child sleep, and (c) to free up the other TVs so that other family members could watch their shows. abstract_id: PUBMED:34932978 Airborne bacterial and fungal concentrations and fungal diversity in bedrooms of infant twins under 1 year of age living in Porto. Exposure to airborne microorganisms has been linked to the development of health detriments, particularly in children. Microbial pollution can constitute a relevant health concern indoors, where levels of airborne microorganisms may be especially increased. This work aimed to characterize the airborne bacterial levels, and fungal concentration and diversity, to which twins are exposed in their bedrooms (n = 30) during the first year of life. Bacterial and fungal levels varied widely across the studied bedrooms, with 10% of the rooms presenting values exceeding the national limit for both indoor bacterial and fungal counts.
Cladosporium was the predominant genus, but Penicillium, Aspergillus, Alternaria, Trichoderma and Chrysonilia were also identified in the samples collected. In addition, two toxicogenic species, A. flavus and T. viride, were identified at counts that exceeded the established limit (12 CFU/m3) in 3 and 7% of the bedrooms surveyed, respectively. Based on indoor-to-outdoor concentration ratios, outdoor air seemed to be the main contributor to the total load of fungi found indoors, while airborne bacteria appeared to be mainly linked to indoor sources. Higher indoor nitrogen dioxide levels were negatively correlated with indoor fungi concentrations, whereas particulate matter and volatile organic compound concentrations were associated with an increase in fungal prevalence. In addition, rooms with small carpets or located near outdoor agriculture sources presented significantly greater total fungal concentrations. Multiple linear regression models showed that outdoor levels were the single significant predictor identified, explaining 38.6 and 53.6% of the Cladosporium sp. and total fungi counts, respectively. The results also suggest the existence of additional factors contributing to the airborne biological load in infants' bedrooms that deserve further investigation. Findings stress the need to investigate possible interactive effects between chemical and biological air pollutants to accurately understand the health risk that the assessed levels can represent to infants. abstract_id: PUBMED:34686133 Loneliness in older parents: marital transitions, family and social connections, and separate bedrooms for sleep. Objectives: This research innovatively analyzed the marital transitions (i.e., divorce and widowhood) of older Taiwanese parents, their sleep problems and spouse-specific characteristics (i.e., separate bedrooms for sleep and marital relationships) as well as their social and family connections, all of which were simultaneously reflected in emotional and social domains of loneliness. Methods: Data are from 1645 older parents from Northern Taiwan. Loneliness was assessed by a De Jong-Gierveld short scale with emotional and social domains. We conducted multivariate logistic regression to examine the associations of marital transitions and family/social connections regarding sleep problems and psychological well-being with loneliness in social and emotional domains. Besides sleep problems and individual socioeconomic status, we included data on couples' sleeping arrangements and marital relationships. Results: Social loneliness was significantly associated with being divorced (AOR = 1.80, 95% CI 1.13-2.86) and living alone (AOR = 1.50, 95% CI 1.02-2.23). In contrast, strong family cohesion and frequent weekly contact with friends were associated with lower social loneliness. Married parents who slept in separate bedrooms were more likely than bed-sharing couples to feel emotional and social loneliness, even after adjusting for their sleep problems. Furthermore, satisfactory spousal relationships significantly decreased the magnitude of associations in the social domain. Discussion: Our findings support significant associations between loneliness in later life and major marital transitions, family and social connections and sleep problems, which differ in social and emotional domains. Independent of relationship satisfaction, separate bedrooms relate to higher risks of emotional loneliness in older adults.
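Several abstracts in this set report adjusted odds ratios from multivariate logistic regression (e.g., AOR = 1.80 for divorce in the loneliness study above). As a sketch of how such AORs and their confidence intervals are derived, assuming statsmodels is available; the predictors, effect sizes, and simulated data are illustrative assumptions only:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1645  # sample size borrowed from the Taiwanese cohort; the data are simulated
df = pd.DataFrame({
    "divorced": rng.integers(0, 2, n),
    "lives_alone": rng.integers(0, 2, n),
    "age": rng.normal(75, 6, n),
})
# Generate a binary loneliness outcome with known effects on the log-odds scale.
log_odds = -1.0 + 0.6 * df["divorced"] + 0.4 * df["lives_alone"] + 0.01 * (df["age"] - 75)
df["lonely"] = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

fit = smf.logit("lonely ~ divorced + lives_alone + age", data=df).fit(disp=0)
aor = np.exp(fit.params)        # adjusted odds ratios
ci = np.exp(fit.conf_int())     # 95% CIs, exponentiated onto the OR scale
print(pd.concat([aor.rename("AOR"), ci], axis=1))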
abstract_id: PUBMED:29320346 Media Devices in Parents' and Children's Bedrooms and Children's Media Use. Objectives: The American Academy of Pediatrics advises having no media devices in children's bedrooms. We examined the link between media devices in parents' and children's bedrooms and children's media use. Methods: Ninety parent-child dyads participated in a community-based healthy weight management study targeting 8-to-12-year-olds with body mass index (BMI)-for-age ≥75th percentile. Parents and children reported the number of media devices in their bedrooms and hours spent using media devices on weekdays and weekend days. Results: Most children (61%) and parents (92%) had at least one media device in their bedrooms. The numbers of devices in parents' and children's bedrooms were positively correlated. Children with no bedroom media devices reported less weekday media use compared to children with bedroom devices. A similar non-significant pattern was found for children's weekend media use. Conclusions: Study findings indicate similar media devices in the bedrooms of parents and children and a significant association between media devices in children's bedrooms and their weekday media use. Efforts to reduce media in parent bedrooms may enhance interventions targeting reduction of media use among children, especially those with higher BMI. abstract_id: PUBMED:37690435 Is teacher violence a form of betrayal trauma? Relationship with mental health problems among young adults. Background: Childhood trauma is one of the most preventable risk factors for mental health problems. Considering the substantial amount of time that young people spend in school during their early years, it is important to understand the potential impacts of teachers' behaviors on students' mental health. Objectives: This study examined the relationship between exposure to teacher violence and mental health problems. Participants And Setting: An international sample of young adults aged 18 to 24 (N = 283). Methods: Participants completed self-report measures of childhood trauma, exposure to teacher violence, depressive symptoms, post-traumatic stress, and alcohol misuse. Results: Exposure to teacher violence could be reliably and validly measured using the Teacher Violence Scale (TVS). Current mental health problems - including depressive symptoms, post-traumatic stress, and alcohol misuse - were associated with exposure to teacher violence during high school years, but not with childhood non-betrayal trauma. Conclusions: Our findings expand the application of the betrayal trauma theory to school settings and point to the importance of preventing and managing teacher violence. It is important to provide more support and training to teachers and enhance monitoring measures in schools. More research on the prevalence and correlates of exposure to teacher violence is needed. We also provided first evidence supporting the reliability and validity of the English version of the TVS to facilitate future research. abstract_id: PUBMED:30987948 Television use and its effects on sleep in early childhood. Objectives: The purpose of this study was to investigate the impacts of television (TV) viewing and bedroom TV presence on young children's sleep as measured by actigraphy. Design: Analyses of covariance were run to examine differences in sleep duration and quality among children based on the presence of TVs in their bedrooms and the amount of TV watched.
Setting: Participants were recruited from preschools in Massachusetts; recordings were ambulatory (at home and in everyday environments). Participants: Participants were 470 children between 33 and 71 months of age (M = 51.02). Measurements: Children were instructed to wear an actigraph watch for 16 days. Caregivers reported demographic information, completed behavior questionnaires, and answered questions regarding their child's TV use. Results: Children who watched more TV and had TVs in their bedroom displayed significantly shorter sleep duration and worse sleep, but they also napped significantly longer in the daytime. Nonetheless, total 24-hour sleep was shorter for those who watched more TV and had TVs in their bedroom compared to those who did not have TVs in their bedrooms or watch TV frequently. Children who had TVs in their bedrooms watched TV later at night, watched more adult TV programs, and had higher negative affect than children without TVs in their bedrooms. Conclusions: These findings suggest that TV use in young children does impact sleep duration and quality as measured by actigraphy, and daytime napping does not offset these negative impacts. abstract_id: PUBMED:24966505 To Evaluate the Accuracy of Saline Infusion Sonohysterography (SIS) for Evaluation of Uterine Cavity Abnormalities in Patients with Abnormal Uterine Bleeding. Objective: To evaluate the accuracy of transvaginal sonography (TVS) and saline infusion sonohysterography (SIS) for detection of uterine cavity abnormalities in patients with abnormal uterine bleeding (AUB), taking hysteroscopy as the gold standard. Methods: This was a prospective study done in the department of Obstetrics and Gynecology of a tertiary care academic hospital. Sixty premenopausal and postmenopausal women who presented with AUB underwent TVS, SIS, and hysteroscopy. The presence of focal abnormality and the type of abnormality, i.e., polyp, submucous myoma, and endometrial hyperplasia, were noted. The results of TVS and SIS were compared with hysteroscopy. Results: On hysteroscopy, 76.67% (n = 46) of patients were diagnosed with intracavitary abnormalities. SIS showed sensitivity, specificity, PPV, and NPV of 89.1, 100, 100, and 73.7%, respectively. In comparison, TVS showed sensitivity, specificity, PPV, and NPV of 43.48, 78.57, 86.96, and 29.73%, respectively. Conclusions: SIS was found to be more sensitive and specific than TVS in detection of intracavitary abnormalities (a worked 2x2 reconstruction of these figures follows the answer below). abstract_id: PUBMED:28790632 FOLLICULAR MONITORING: COMPARISON OF TRANSABDOMINAL AND TRANSVAGINAL SONOGRAPHY. Transabdominal sonography (TAS) and transvaginal sonography (TVS) were compared for follicular monitoring in 73 patients with infertility. These patients were referred for follicular monitoring in normal and stimulated cycles from the infertility clinic. Patient compliance and acceptance were excellent for the TVS technique as compared to the TAS technique. The overall resolution of ovarian and follicular anatomy was much better in TVS. With TAS only 35.6% had good visualisation of follicles as compared to 80.8% (p<0.05) in TVS. TVS offered the advantage of a much better detection rate of smaller follicles (<10 mm) and higher accuracy in assessing the number of follicles. abstract_id: PUBMED:37240796 The Utility of Rectal Water Contrast Transvaginal Ultrasound for Assessment of Deep Bowel Endometriosis. Deep infiltrating endometriosis (DIE) is characterized by the presence of endometrial tissue outside the uterine cavity that infiltrates at least 5 mm deep below the peritoneal layer.
Imaging examinations are the first-choice methods to detect DIE. The aim of this study is to assess whether rectal water contrast transvaginal sonography (RWC-TVS) can be a useful tool for the estimation of the size of deep bowel endometriotic nodules. This retrospective study includes 31 patients subjected to RWC-TVS who underwent surgery due to deep bowel endometriosis between January 2021 and December 2022. Nodule dimensions measured via ultrasound were compared to those of histopathological samples taken after surgery. In total, 52% of patients had endometriosis limited only to the intestines, 19% had endometriotic nodules located at the uterosacral ligaments and posterior vaginal fornix, 6% at the anterior compartment, and 13% at a different location. Additionally, 6% of patients had nodules at more than two locations. In all but one case, the intestinal nodules could be seen on RWC-TVS images. The largest nodule dimension measured via RWC-TVS and the size of the equivalent histopathological sample correlated (R = 0.406, p = 0.03). Thus, RWC-TVS allows for the detection of DIE and a moderate estimation of nodule size, and should be practiced during the diagnostic process. Answer: Yes, having TVs in the bedrooms of children does impact their health and behavior. The presence of a TV in the bedroom (BTV) is associated with a higher prevalence of exhibiting problematic social behaviors and overweight status in children. Additionally, BTV is significantly associated with a lower prevalence of regular family meals, engagement in school, participation in extracurricular activities, regularly sleeping enough, and participation in community service, even after adjusting for total viewing time (PUBMED:21130109). Furthermore, children with TVs in their bedrooms tend to watch TV later at night, watch more adult TV programs, and have higher negative affect compared to those without TVs in their bedrooms. These children also display significantly shorter sleep duration and worse sleep quality, and while they may nap longer in the daytime, their total 24-hour sleep is shorter compared to those without bedroom TVs (PUBMED:30987948). Moreover, the presence of media devices in children's bedrooms is linked to increased weekday media use, suggesting that efforts to reduce media in parent bedrooms may enhance interventions targeting the reduction of media use among children, especially those with higher body mass index (BMI) (PUBMED:29320346). In summary, the evidence suggests that having a TV in a child's bedroom can negatively affect their social behaviors, health habits, sleep patterns, and overall engagement in healthy activities.
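As a side note on the diagnostic-accuracy figures quoted in the SIS/TVS abstract above (PUBMED:24966505), the reported percentages can be checked by back-calculating the underlying 2x2 tables. This is a minimal worked reconstruction: the patient counts below are inferred from the stated prevalence (46 of 60 women hysteroscopy-positive) and the reported sensitivities and specificities, rounded to whole patients; they are not given explicitly in the abstract.

% Back-calculated 2x2 cells: TP = true positive, FN = false negative,
% FP = false positive, TN = true negative (hysteroscopy as gold standard).
\[
\text{SIS: } TP = 41,\; FN = 5,\; FP = 0,\; TN = 14 \;\Rightarrow\; \mathrm{PPV} = \tfrac{41}{41} = 100\%,\quad \mathrm{NPV} = \tfrac{14}{14+5} \approx 73.7\%
\]
\[
\text{TVS: } TP = 20,\; FN = 26,\; FP = 3,\; TN = 11 \;\Rightarrow\; \mathrm{PPV} = \tfrac{20}{20+3} \approx 86.96\%,\quad \mathrm{NPV} = \tfrac{11}{11+26} \approx 29.73\%
\]

Both rows reproduce the published PPV and NPV values, a quick consistency check confirming that the reported accuracy statistics all refer to the same 60-patient cohort.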
Instruction: Does Instruction to Eliminate Coffee, Tea, Alcohol, Carbonated, and Artificially Sweetened Beverages Improve Lower Urinary Tract Symptoms? Abstracts: abstract_id: PUBMED:26727685 Does Instruction to Eliminate Coffee, Tea, Alcohol, Carbonated, and Artificially Sweetened Beverages Improve Lower Urinary Tract Symptoms?: A Prospective Trial. Purpose: Common advice for lower urinary tract symptoms (LUTS) such as frequency, urgency, and related bother includes elimination of potentially irritating beverages (coffee, tea, alcohol, and carbonated and/or artificially sweetened beverages). The purpose of this study was to determine compliance with standardized instruction to eliminate these potentially irritating beverages, whether LUTS improved after instruction, and whether symptoms worsened with partial reintroduction. Design: The 3-phase fixed sequence design was (1) baseline, (2) eliminate the potentially irritating beverages listed above, and (3) reintroduce at 50% of baseline volume, with a washout period between each 3-day phase. We asked participants to maintain total intake volume by swapping in equal amounts of nonpotentially irritating beverages (primarily water). Subjects And Setting: The study sample comprised 30 community-dwelling women recruited through newspaper advertisement. Methods: Quantification measures included 3-day voiding diaries and detailed beverage intake, and LUTS questionnaires completed during each phase. Results: During Phase 2, we found a significant reduction in potentially irritating beverages, but complete elimination was rare. Despite protocol demands, total beverage intake was not stable; mean (± standard deviation) daily total intake volume dropped by 6.2 ± 14.9 oz (P = .03) during Phase 2. In Phase 3, the volume of total beverage intake returned to baseline, but the intake of potentially irritating beverages also returned to near baseline rather than the 50% requested by the protocol. Despite this incomplete adherence to study protocols, women reported reductions in symptoms of urge, inability to delay voiding, and bother during both phases (P ≤ .01). The number of voids per day decreased on average by 1.3 and 0.9 voids during Phases 2 and 3, respectively (P = .002 and P = .035). Conclusions: Education to reduce potentially irritating beverages resulted in improvement in LUTS. However, eliminating potentially irritating beverages was difficult to achieve and maintain. Study findings do not allow us to determine whether LUTS improvement was attributable to intake of fewer potentially irritating beverages, reduced intake of all beverages, the effect of self-monitoring, or some combination of these factors. abstract_id: PUBMED:28444711 Are we justified in suggesting change to caffeine, alcohol, and carbonated drink intake in lower urinary tract disease? Report from the ICI-RS 2015. Aims: There is increasing evidence that diet may have a significant role in the development of lower urinary tract symptoms. While fluid intake is known to affect lower urinary tract function, the effects of alcohol, caffeine, carbonated drinks, and artificial sweeteners are less well understood, and evidence from epidemiological studies is mixed and sometimes contradictory. The aim of this paper is to appraise the available evidence on the effect of caffeine, alcohol, and carbonated drinks on lower urinary tract function and dysfunction, in addition to suggesting proposals for further research.
Methods: Literature review based on a systematic search strategy using the terms "fluid intake," "caffeine," "alcohol," "carbonated" and "urinary incontinence," "detrusor overactivity," "Overactive Bladder," "OAB." Results: In addition to fluid intake, there is some evidence to support a role of caffeine, alcohol, and carbonated beverages in the pathogenesis of OAB and lower urinary tract dysfunction. Although some findings are contradictory, others clearly show an association of the ingestion of caffeine, carbonated drinks, and alcohol with symptom severity. Conclusions: Given the available evidence, lifestyle interventions and fluid modification may have an important role in the primary prevention of lower urinary tract symptoms. However, more research is needed to determine the precise role of caffeine, carbonated drinks, and alcohol in the pathogenesis and management of these symptoms. The purpose of this paper is to stimulate that research. abstract_id: PUBMED:23722012 Intake of caffeinated, carbonated, or citrus beverage types and development of lower urinary tract symptoms in men and women. Evidence to substantiate recommendations for restriction of caffeinated or acidic beverages as self-management for lower urinary tract symptoms (LUTS) is limited. We examined longitudinal and acute associations between beverage intake and LUTS in the Boston Area Community Health (BACH) cohort (n = 4,144) between 2002 and 2010. Multivariable models tested associations between baseline intakes and progression of LUTS at 5-year follow-up, between follow-up intakes and International Prostate Symptom Scores at follow-up, and between 5-year intake changes and LUTS progression. Greater coffee or total caffeine intake at baseline increased the odds of LUTS progression in men (coffee: >2 cups/day vs. none, odds ratio = 2.09, 95% confidence interval: 1.29, 3.40, P-trend = 0.01; caffeine: P-trend < 0.001), particularly storage symptoms. Women who increased coffee intake by at least 2 servings/day during follow-up (compared with categories of decreased or unchanged intakes) had 64% higher odds of progression of urgency (P = 0.003). Women with recently increased soda intake, particularly caffeinated diet soda, had higher symptom scores, urgency, and LUTS progression. Citrus juice intake was associated with 50% lower odds of LUTS progression in men (P = 0.02). Findings support recommendations to limit caffeinated beverage intake for LUTS, and in men, they suggest benefits of citrus juice consumption. Further clinical research is warranted, particularly of the precise role of sodas containing artificial sweeteners in bladder sensations and urological function.
The lifestyle variables were evaluated for confounding by multiple logistic regression, controlling for age, relative weight and professional education, with those classified as having BPH analysed as the dependent variable. Results: The prevalence of surgery for BPH increased with age from 15% at 65 years to 41% at 80 years. There was a strong inverse association between alcohol intake and men treated surgically for BPH or in 'watchful waiting' for surgical intervention, but a positive correlation with coffee consumption and, although not significantly, with the number of cigarettes smoked. Nevertheless, those who had never smoked had a slightly greater risk of BPH than current smokers. The body mass index and professional education were not associated with the risk of BPH. Conclusions: Given the opposite effects of coffee and moderate alcohol consumption, together with the increased risk for clinical BPH in men with coronary heart disease, coffee constituents, which increase the serum concentration of low-density lipoprotein cholesterol, may be involved in the pathophysiology of BPH. Further epidemiological studies are needed to evaluate whether avoiding coffee intake reduces the risk of BPH. abstract_id: PUBMED:35435467 Fluids affecting bladder urgency and lower urinary symptoms: results from a randomized controlled trial. Introduction And Hypothesis: Caffeinated, alcoholic, artificially sweetened, carbonated, and acidic beverages are pervasive and consumed in large quantities. Reputedly, these beverages are "irritating to the bladder" and result in heightened void frequency, but prior studies lack control for intake volume. We tested the null hypothesis that women recruited from the community who demonstrate overactive bladder symptoms will show no difference by group in void frequency when one group is instructed to replace the listed beverages by substituting non-irritants (emphasis on water or milk) and the other group is instructed in healthy eating. Methods: This was a parallel-group randomized controlled trial design with a three-period fixed sequence (baseline and 2 and 6 weeks post-baseline). We recruited 105 community women with overactive bladder symptoms. Inclusion Criteria: >7 voids per day or 2 voids per night, daily intake of ≥16 oz. (473 ml) of beverages containing the ingredients listed above, and ≥32 oz. (946 ml) of total fluid intake. Stratified randomization was conducted. The primary outcome was average daily void frequency on a 3-day diary. Results: Participants were 86% white, mean (SD) age was 46.6 (17.6) years, and baseline void frequency was 9.2 (2.9) voids per day. At 2 and 6 weeks, the estimated average (SD) difference in void frequency between group 1 and group 2 was -0.46 (0.57) and -0.31 (0.57) voids per day (p > 0.05); the null hypothesis was not rejected. Conclusions: Reducing potentially irritating beverages while maintaining total fluid intake volume was not predictive of void frequency. Further research on the type and volume of beverage intake is recommended. abstract_id: PUBMED:34825923 Overactive bladder syndrome in nulliparous female university students: prevalence and risk factors including waterpipe smoking. Introduction And Hypothesis: The objective was to assess the prevalence of symptoms of overactive bladder syndrome (OAB) among healthy nulliparous female university students, and to evaluate the correlation of these symptoms with a variety of factors, including waterpipe (WP) smoking. Methods: This is a cross-sectional study.
A questionnaire was administered to evaluate symptoms of OAB in healthy nulliparous female university students. Variables assessed included body mass index (BMI), cigarette smoking, WP smoking, consumption of alcohol, coffee, and tea with and without artificial sweeteners, soft drinks, and energy drinks. Adjusted odds ratios were calculated to determine the correlation of these variables with OAB symptoms. Results: A total of 767 out of 2,900 females responded to the questionnaire. Bothersome frequency was reported in 32.3% of the women, and nocturia in 47.5%. Urgency and urgency urinary incontinence (UUI) were present in 25.5% and 24.7% of the study participants, respectively. Urgency was associated with WP smoking (p value 0.048). Bothersome frequency was associated with artificial sweeteners used with coffee and tea (p value 0.013). UUI was associated with cigarette smoking (p value 0.034) and elevated BMI (p value < 0.001). OAB symptoms were not found to be significantly associated with soft drink and energy drink consumption. A lower prevalence of nocturia (p value 0.009) and urgency was associated with alcohol consumption (p value 0.017). More than two-thirds (69.2%) of WP smokers expressed readiness to decrease WP smoking if this would improve their lower urinary tract symptoms (LUTS). Conclusion: Overactive bladder is common in healthy young nulliparous women and is associated with multiple risk factors, including WP smoking. abstract_id: PUBMED:28969404 Effect of green tea catechins in patients with high-grade prostatic intraepithelial neoplasia: Results of a short-term double-blind placebo controlled phase II clinical trial. Background And Study Objective: Several studies suggest a protective role of green tea catechins against prostate cancer (PCa). In order to evaluate the efficacy of green tea catechins for chemoprevention of PCa in patients with high-grade prostate intraepithelial neoplasia (HG-PIN), we performed a phase II clinical trial. Methods: Sixty volunteers with HG-PIN were enrolled to carry out a double-blind randomized placebo-controlled phase II clinical trial. The treated group took 600 mg of green tea catechins (Categ Plus®) daily for 1 year. Patients were screened at 6 and 12 months through prostatic biopsy and measurements of prostate-specific antigen (PSA). Results: Despite the statistically significant reduction of PSA observed in subjects who received green tea catechins for 6 and 12 months, we did not find any statistically significant difference in PCa incidence between the experimental groups either after 6 or after 12 months. However, throughout the one-year follow-up we observed very limited adverse effects induced by green tea catechins and a nonsignificant improvement in lower urinary tract symptoms and quality of life. Conclusions: Despite the small number of patients enrolled in our study and the relatively short duration of the intervention, our findings seem to argue against the efficacy of green tea catechins. However, the results of our clinical study, mainly because of its low statistical power, suggest that the effectiveness of green tea catechins should be evaluated both in a larger cohort of men and in a longer trial. abstract_id: PUBMED:24883106 A green and black tea extract benefits urological health in men with lower urinary tract symptoms. Objectives: The objective of this study was to examine the effects of a green and black tea extract blend [AssuriTEA Men's Health (AMH)] in men with lower urinary tract symptoms (LUTS).
Methods: In this randomized, double-blind, placebo-controlled study, 46 men aged 30-70 with an American Urologic Association symptom score (AUAss) of at least 8 and up to 24 were randomized to 500 mg AMH, 1000 mg AMH, or placebo daily for 12 weeks. Measurements were taken at baseline (BL), week 6 and week 12 for AUAss, simple uroflowmetry, postvoid residual volume (PVR), C-reactive protein (CRP), the Short-Form 36 Health Survey (SF-36), and the International Index of Erectile Function (IIEF). Results: A total of 40 subjects completed the study. AUAss decreased 34.5% from BL to week 12 in the 1000 mg AMH group (p = 0.008). At week 12, CRP increased in the 500 mg AMH (p = 0.003) and placebo (p = 0.012) groups from their BL levels but not in the 1000 mg group. Average urine flow (Qmean) increased in the 500 mg (p = 0.033) and 1000 mg AMH (p = 0.002) groups versus placebo. PVR decreased in the 1000 mg AMH group (p = 0.034) from BL at week 6. Treatment group effects were observed for the physical functioning and sexual desire domains of the SF-36 and IIEF (p = 0.051 and p = 0.005, respectively). AMH was well tolerated. Conclusions: Oral administration of AMH improved LUTS and quality of life in as little as 6 weeks. abstract_id: PUBMED:19683313 Alcohol consumption is associated with a decreased risk of benign prostatic hyperplasia. Purpose: While some studies have indicated that alcohol consumption is associated with a decreased risk of benign prostatic hyperplasia, others have not. We evaluated associations of alcohol consumption with benign prostatic hyperplasia and male lower urinary tract symptoms. Materials And Methods: We performed a meta-analysis of published studies pertaining to alcohol intake, benign prostatic hyperplasia and lower urinary tract symptoms. We analyzed abstracted data with random effects models to obtain pooled odds ratios of adjusted effects estimates. Results: A total of 19 studies (120,091 men) met selection criteria, and of these, 14 revealed a significantly decreased likelihood of benign prostatic hyperplasia or lower urinary tract symptoms with increased alcohol intake. Sixteen studies were eligible for pooled analyses, of which 12 used benign prostatic hyperplasia as the primary outcome. We stratified total alcohol intake by grams per day into 6 strata. Alcohol intake was associated with a significantly or marginally significantly decreased likelihood of benign prostatic hyperplasia in all 6 strata (p values 0.08, 0.01, <0.001, 0.02, 0.001 and <0.001, respectively). Compared to no alcohol intake, an alcohol intake of 36 g daily or greater was associated with a 35% decreased likelihood of benign prostatic hyperplasia (OR 0.65, 95% CI 0.58-0.74, p < 0.001). Of the 4 studies that used lower urinary tract symptoms as the primary outcome, 3 demonstrated a significantly increased likelihood of lower urinary tract symptoms with alcohol consumption. Conclusions: Alcohol consumption is associated with a decreased likelihood of benign prostatic hyperplasia but not of lower urinary tract symptoms. Further studies are needed to determine the mechanisms by which alcohol modifies the risk of benign prostatic hyperplasia. abstract_id: PUBMED:31875256 Fluids affecting bladder urgency and lower urinary symptoms (FABULUS): methods and protocol for a randomized controlled trial.
Introduction And Hypothesis: We present the design of a randomized controlled trial, Fluids Affecting Bladder Urgency and Lower Urinary Symptoms (FABULUS), with the purpose of testing the common clinical advice of treating overactive bladder by eliminating potentially irritating beverages (PIBs) that are caffeinated, artificially sweetened, citric, or alcoholic. The primary hypothesis is that women taught to reduce PIBs will show lower void frequency compared with a control group instructed in diet/exercise recommendations. Secondary outcomes include change in urgency symptoms and volume per void. Methods: We report the methods for FABULUS and discuss how challenges presented in the literature and in a prior proof-of-concept feasibility trial are addressed by strengthening study design, procedures, and instruments. We introduce the concept of standardized automated tutorials for assisting participants in compliance from study start to finish. The tutorials contain a detailed explanation of the study, including tips for complying with the extensive diary requirements, with parallel tutorials for the intervention and control groups to ensure consistency in the format and timing of instructional content. The intervention tutorial on eliminating PIBs places emphasis on maintaining steady fluid intake volume, as fluctuations have been a confounder in prior work. Results: Study results promise to inform about both the tutorial approach and specific PIB reduction for effectively treating overactive bladder. Conclusions: OAB can have a negative impact on quality of life, and current medical treatments carry costs and side-effect risks. If simple lifestyle changes can improve or prevent these bladder symptoms, multiple medical and public health advances could result. Answer: The instruction to eliminate coffee, tea, alcohol, carbonated, and artificially sweetened beverages is commonly given to individuals with lower urinary tract symptoms (LUTS) such as frequency, urgency, and related bother. The evidence on whether this advice leads to improvement in LUTS is mixed. A prospective trial found that education to reduce potentially irritating beverages resulted in improvement in LUTS, with women reporting a reduction in symptoms of urge, inability to delay voiding, and bother during both phases of the study in which they were instructed to eliminate and then partially reintroduce these beverages. The number of voids per day also decreased. However, complete elimination of these beverages was difficult to achieve and maintain, and the study could not determine whether the improvement was due to fewer irritating beverages, reduced intake of all beverages, the effect of self-monitoring, or a combination of these factors (PUBMED:26727685). Another study, which was a literature review, suggested that there is some evidence to support a role of caffeine, alcohol, and carbonated beverages in the pathogenesis of overactive bladder (OAB) and LUTS. The evidence from epidemiological studies is mixed and sometimes contradictory, and more research is needed to determine the precise role of these substances in the management of LUTS (PUBMED:28444711). A longitudinal study within the Boston Area Community Health (BACH) cohort found that greater coffee or total caffeine intake increased the odds of LUTS progression in men, particularly storage symptoms. Women who increased coffee intake had higher odds of progression of urgency.
The study supported recommendations to limit caffeinated beverage intake for LUTS and suggested benefits of citrus juice consumption for men (PUBMED:23722012). In contrast, a randomized controlled trial found that reducing potentially irritating beverages while maintaining total fluid volume intake was not predictive of void frequency, suggesting that the type and volume of beverage intake may not be as impactful as previously thought (PUBMED:35435467). Overall, while there is some evidence that reducing intake of potentially irritating beverages can improve LUTS, the evidence is not conclusive, and more research is needed to fully understand the relationship between these beverages and LUTS. Additionally, individual adherence to such dietary changes can be challenging, and the improvements observed may be influenced by other factors such as self-monitoring and overall fluid intake (PUBMED:26727685; PUBMED:28444711; PUBMED:23722012). Despite the mixed evidence, some studies have shown associations between specific beverages and LUTS. For instance, a study on benign prostatic hyperplasia (BPH) found an inverse association between alcohol intake and men treated surgically for BPH, but a positive correlation with coffee consumption. This suggests that the constituents of coffee, which can increase serum concentration of low-density lipoprotein cholesterol, may be involved in the pathophysiology of BPH. The study concluded that further epidemiological studies are needed to evaluate whether avoiding coffee intake reduces the risk of BPH (PUBMED:12410741). Another study found that alcohol consumption is associated with a decreased risk of BPH but not of lower urinary tract symptoms. The meta-analysis included 19 studies and indicated that increased alcohol intake was associated with a significantly decreased likelihood of BPH. However, alcohol consumption was not associated with a decreased likelihood of lower urinary tract symptoms; indeed, most of the studies with LUTS as the primary outcome showed an increased likelihood with alcohol consumption (PUBMED:19683313). In the context of overactive bladder syndrome (OAB) in nulliparous female university students, the prevalence of OAB symptoms was found to be associated with multiple risk factors, including waterpipe smoking. The study did not find a significant association with soft drink and energy drink consumption, but bothersome frequency was associated with artificial sweeteners used with coffee and tea (PUBMED:34825923). Additionally, a study on the effects of green tea catechins in patients with high-grade prostatic intraepithelial neoplasia did not find a statistical difference in prostate cancer incidence between the experimental groups after 6 or 12 months. However, the study observed limited adverse effects and a nonsignificant improvement in lower urinary tract symptoms and quality of life, suggesting that further research with a larger cohort and longer trial duration is needed (PUBMED:28969404). In summary, while there is some evidence to suggest that eliminating coffee, tea, alcohol, carbonated, and artificially sweetened beverages may improve LUTS, the results are not definitive, and adherence to such dietary changes can be difficult.
Instruction: Investigating febrile UTI in infants: is a cystogram necessary? Abstracts: abstract_id: PUBMED:19620025 Investigating febrile UTI in infants: is a cystogram necessary? Objective: Current imaging recommendations for investigating any infantile febrile urinary tract infection (UTI) are ultrasound scan (US), micturating cystourethrogram (MCUG) and dimercaptosuccinic acid (DMSA) scan. The aim of this retrospective cohort study was to determine the need and indications for MCUG in the investigation of a first febrile infantile UTI, as doubts have been raised over its benefit. Patients And Methods: Information on 427 infants who had undergone US, MCUG and DMSA following a first febrile UTI was prospectively recorded. The infants were divided into two groups: A (354) with normal renal US and B (73) with abnormal US. DMSA findings were correlated with findings on MCUG. Main outcome measures were incidence of recurrent UTIs, change in management or intervention as a result of MCUG, and outcome at discharge. Results: Only 21/354 (6%) infants in Group A had both scarring on DMSA and vesicoureteric reflux (VUR), predominantly low-grade on MCUG. In Group B (abnormal US), 23/73 (32%) had scarring on DMSA and vesicoureteric reflux, predominantly high-grade on MCUG. Of the infants with non-scarred kidneys, 73% had dilating reflux. Successful conservative treatment was performed in 423 infants, and 4 infants in Group B required surgery. Conclusion: We recommend US and DMSA in all infantile febrile UTI cases. Where US is normal, MCUG should be reserved for those cases with abnormal DMSA. Where US is abnormal, MCUG should be performed irrespective of findings on the DMSA scan. A randomized prospective study is necessary to evaluate this further. abstract_id: PUBMED:36518361 Actinotignum schaalii Can Be an Uropathogen of "Culture-Negative" Febrile Urinary Tract Infections in Children with Urinary Tract Abnormalities. Accurate diagnosis and treatment of febrile urinary tract infections (UTI) during childhood are important for the prevention of renal parenchymal damage and functional loss, and detection of underlying diseases related to chronic kidney disease (CKD). Actinotignum schaalii (A. schaalii)-related febrile UTI in children is rare, and its incidence and risk factors remain unclear. A 3-year-old boy with a history of UTI presented with fever and vomiting. Although the culture of his urine specimen in air was negative, A. schaalii was observed under a 5% carbon dioxide (CO₂) culture condition, as well as an anaerobic one. A diagnosis of febrile UTI was made, and he recovered with antibiotic therapy. He was found to have CKD associated with vesicoureteral reflux (VUR) after further investigations. A. schaalii is one of the causative agents of febrile UTI in children with urinary tract abnormalities. Although cultures incubated in air can be negative, urine culture under 5% CO₂ and anaerobic conditions is useful for diagnosis. Our case is the youngest and the first known case of A. schaalii-related febrile UTI associated with VUR in children. abstract_id: PUBMED:25657771 Renal function in children with febrile convulsions. Objective: Febrile convulsions (FC) are the most frequent seizure disorder in children. Some studies have detected serum electrolyte disturbances in patients with FC. This study determines serum electrolytes, renal function tests, and the frequency of urinary tract infection in hospitalized children with FC.
Materials & Methods: In this descriptive, cross-sectional study, we evaluated 291 children with FC admitted to the Neurology ward of Ali-Asghar Children's Hospital from 2008 to 2013. Data were recorded on age, sex, type (simple, complex), and recurrence of seizures, family history of FC and epilepsy, serum electrolytes, renal function tests, and urinary tract infections. Results: A total of 291 patients with a diagnosis of FC were admitted to our center. Of these 291 patients, 181 (62.2%) were male. The mean age was 24.4 ± 14.6 months. There were simple, complex, and recurrent FCs in 215 (73.9%), 76 (26.1%) and 61 (21%) of patients, respectively. Urinary tract infections (UTI) were found in 13 (4.5%) patients, more often in females (p-value = 0.03) and in those under 12 months of age (p-value = 0.003). Hyponatremia, hypocalcemia, and hypokalemia were detected in 32 (11%), 16 (5.5%), and 4 (1.4%) of cases, respectively. Twenty-four (8.2%) patients had a glomerular filtration rate less than 60 ml/min/1.73 m². There were no abnormalities in serum magnesium, BUN, and creatinine levels. Conclusion: During FCs, mild changes may occur in renal function, but a serum electrolyte evaluation is not necessary unless patients are dehydrated. In children with FC, urinary tract infections should be ruled out. abstract_id: PUBMED:24366989 Accuracy of early DMSA scan for VUR in young children with febrile UTI. Objective: To evaluate the accuracy of an acute (99m)Tc-dimercaptosuccinic acid (DMSA) scan in predicting dilating vesicoureteral reflux (VUR) among young children with a febrile urinary tract infection (UTI). Methods: The medical records of children (≤ 2 years of age) presenting with febrile UTI between January 2000 and December 2011 were retrospectively reviewed. Results: A total of 523 children were included in this study, of whom 397 children (75.9%) had abnormal DMSA results and 178 children (34.0%) were identified as having VUR on micturating cystourethrography (MCU). Among all the patients, the number of children with dilating VUR was 151 (28.9%). The rate of abnormal results on DMSA for the dilating VUR group was significantly higher than the rates for the non-VUR and low-grade VUR groups (P < .01). In the <6 months age group and ≥ 6 months age group, the sensitivities of DMSA in predicting dilating VUR were 96.15% and 100.0%, respectively, the negative predictive values were 97.26% and 100.0%, respectively, and the negative likelihood ratios were 0.0911 and 0.0000, respectively. Conclusion: For children ≤ 2 years of age with a febrile UTI, an acute DMSA scan is valuable in the exclusion of dilating VUR. The likelihood of the presence of dilating VUR on MCU is rather low when the result of DMSA is negative. DMSA should be conducted to assess the need for an MCU. abstract_id: PUBMED:37228436 Biomarkers for febrile urinary tract infection in children. Background: The current reference standard for pediatric urinary tract infection (UTI) screening, the leukocyte esterase (LE) dipstick test, has suboptimal accuracy. The objective of this study was to compare the accuracy of novel urinary biomarkers to that of the LE test. Methods: We prospectively enrolled febrile children who were evaluated for UTI based on their presenting symptoms. We compared the accuracy of urinary biomarkers to that of the LE test. Results: We included 374 children (50 with UTI, 324 without UTI, ages 1-35 months) and examined 35 urinary biomarkers.
The urinary biomarkers that best discriminated between febrile children with and without UTI were urinary neutrophil gelatinase-associated lipocalin (NGAL), IL-1β, CXCL1, and IL-8. Of all examined urinary biomarkers, urinary NGAL had the highest accuracy, with a sensitivity of 90% (CI: 82-98) and a specificity of 96% (CI: 93-98). Conclusion: Because the sensitivity of the urinary NGAL test is slightly higher than that of the LE test, it can potentially reduce missed UTI cases. Limitations of using urinary NGAL over LE include increased cost and complexity. Further investigation is warranted to determine the cost-effectiveness of urinary NGAL as a screening test for UTI. abstract_id: PUBMED:30484290 Preoperative Th1/Th2 and related cytokines: Prediction value in postoperative febrile UTI after ureteroscopy in patients with ureteral calculi. Background: The topic of whether preoperative Th1/Th2 cells and their related factors have a predictive value for postoperative febrile urinary tract infection (UTI) in patients with ureteral calculi has not been explored. Objectives: The objective of this study was to investigate the role of preoperative Th1/Th2 cells and related cytokines in the prediction of postoperative febrile UTI after ureteroscopy in patients with ureteral calculi. Material And Methods: One hundred sixty patients who underwent ureteroscopic pneumatic lithotripsy in the Affiliated Hospital of Hangzhou Normal University (China) were recruited and divided into a febrile UTI group (n = 78) and a non-UTI group (n = 82). Flow cytometry was used to detect the proportions of Th1 and Th2 cells (Th1% and Th2%). Detection of Th1/Th2 cell-related cytokines was conducted using enzyme-linked immunosorbent assay (ELISA). Quantitative real-time polymerase chain reaction (qRT-PCR) was applied to measure the expression of T-bet and GATA3. Results: Compared with patients in the non-UTI group, those in the febrile UTI group had significantly increased proportions of Th2 cells, levels of Th2 cytokines (interleukin (IL)-4, IL-10 and IL-5), and mRNA expression of the Th2-associated transcription factor GATA3 (all p < 0.05). In addition, the Th1/Th2 ratio of the febrile UTI group was significantly lower than that of the non-UTI group (p < 0.001). Receiver operating characteristic (ROC) curve analysis showed that the accuracy rate of Th2%, Th1/Th2 ratio, and IL-4, IL-10 and IL-5 levels for the diagnosis of postoperative febrile UTI in patients with ureteral calculi was 90.63%, 85.00%, 72.50%, 87.50%, and 91.88%, respectively, and their combined diagnostic sensitivity was 97.4% with specificity as high as 100%. Conclusions: Perioperative Th2 dominance was correlated with the risk of postoperative febrile UTI after ureteroscopy in patients with ureteral calculi, which can provide clinical guidance for the development of individualized treatment. abstract_id: PUBMED:34251495 Biomarkers for febrile urinary tract infection in children. Background: The sensitivity and specificity of the leukocyte esterase test for the diagnosis of urinary tract infection (UTI) are suboptimal. Recent studies have identified markers that appear to more accurately differentiate children with and without UTI. The objective of this study was to determine the accuracy of these markers, which included CCL3, IL-8, CXCL1, TNF-alpha, IL-6, IFN-gamma, IL-17, IL-9, IL-2, and NGAL, in the diagnosis of UTI.
Methods: This was a prospective cross-sectional study to compare inflammatory proteins between urine samples from febrile children with a UTI, matched febrile controls without a UTI, and asymptomatic healthy controls. Results: We included 192 children (75 with febrile UTI, 69 febrile controls, and 48 asymptomatic healthy controls). Urinary proteins that best discriminated between febrile children with and without UTI were NGAL, a protein that exerts a local bacteriostatic role in the urinary tract through iron chelation; CCL3, a chemokine involved in leukocyte recruitment; and IL-8, a cytokine involved in neutrophil recruitment. Levels of these proteins were generally undetectable in asymptomatic healthy children. Conclusions: NGAL, CCL3, and IL-8 may be useful in the early diagnosis of UTI. Trial Registration: ClinicalTrials.gov (NCT01391793). abstract_id: PUBMED:30279715 The Relation between Urinary Tract Infection and Febrile Seizure. Objectives: Febrile seizure is the most common type of seizure among children. Identification of factors involved in febrile seizure is highly critical. The present study was conducted to determine the association between children's urinary tract infection and febrile seizure. Materials & Methods: In this case-control study, 165 children with simple febrile seizure (case group) were compared with 165 children with fever and without seizure (control group) in terms of urinary tract infection (UTI) in Qazvin, central Iran in 2015-2016. The age of the children was between 6 months and 5 years. Results: Among 165 children with febrile seizure, 25 (15.2%) had urinary tract infection. In the control group, only 2 patients (1.2%) had UTI. There was a significant difference between the two groups regarding urinary tract infection (P=0.001). Among 25 children with UTI in the case group, 17 children (68%) had acute pyelonephritis, and the remaining 8 children (32%) had cystitis. The two patients with UTI in the control group had cystitis (P=0.055). Conclusion: Urinary tract infection could be a risk factor for febrile seizure. Therefore, all patients with febrile seizure should be examined for urinary tract infection. abstract_id: PUBMED:36625359 A new recommendation for febrile urinary tract infection in children aged 2-24 months: Tepecik UTI Guideline-2. Aim: Urinary tract infections (UTIs) represent a common febrile illness in infancy. The study compared two UTI guidelines in terms of the number of imaging studies, presence of parenchymal damage and radiation exposure in patients with the first febrile UTI between 2 and 24 months of age. Method: The results of Tepecik UTI Guideline-1 used until 2012 (Group 1, n = 105) were retrospectively compared with Tepecik UTI Guideline-2 (Group 2) used after 2013. In Group 1, urinary tract ultrasonography (US), dimercaptosuccinic acid (DMSA) and voiding cystourethrography (VCUG) were performed in all patients. In Group 2, if the US result was abnormal, patients were evaluated with VCUG and DMSA. If the US was normal, only DMSA was performed. If the DMSA was abnormal, VCUG was then performed (n = 43, 40.9%). Results: The abnormal VCUG detection rate was 69.2% in Group 1 and 30.8% in Group 2 (p = 0.09). The sensitivity and specificity of US in the diagnosis of vesicoureteral reflux (VUR) were 15.9% and 96.7% in Group 1 and 61.5% and 70.5% in Group 2, respectively.
Abnormal DMSA findings were observed in 33.3% (Group 1) and 66.7% (Group 2) of subjects, respectively (p > 0.05). The median radiation exposure (500 mrem) of patients in Group 1 was statistically significantly higher than that of patients in Group 2 (200 mrem) (p < 0.001). Conclusion: The VCUG should not be the first examination to be considered in such patients. We think that Tepecik UTI Guideline-2 reduces unnecessary invasive procedures and radiation exposure without missing VUR in the management of children with UTI at 2-24 months of age. Prospective follow-up studies are needed before this recommendation is adopted. abstract_id: PUBMED:37878097 Duration of antibiotic therapy in febrile neutropenia. Febrile neutropenia is a frequent complication of cancer treatment and is associated with an increased risk of morbidity and mortality. However, standardization in the management of neutropenic fever has led to a reduction in its complications. The duration and type of antibiotic therapy differ according to guidelines, particularly in cases of febrile neutropenia of unknown origin. Continuation of antibiotic treatment until resolution of neutropenia has historically been recommended but exposes patients to prolonged periods of broad-spectrum antibiotic therapy. In a context of growing resistance to antibiotics, optimization of antibiotic use is necessary, particularly in this frail patient population, faced with the consequences of repeated exposure to broad-spectrum antibiotic therapy. Answer: Investigating febrile urinary tract infections (UTIs) in infants often involves a combination of imaging studies, including ultrasound scans (US), micturating cystourethrogram (MCUG), and dimercaptosuccinic acid (DMSA) scans. The necessity of a cystogram, specifically MCUG, in the investigation of a first febrile infantile UTI has been questioned due to concerns about its benefits and the potential for invasive procedures to cause distress and additional radiation exposure to young patients. A retrospective cohort study aimed to determine the need and indications for MCUG in the investigation of a first febrile infantile UTI. The study divided infants into two groups based on renal US results: Group A with normal US and Group B with abnormal US. The findings suggested that MCUG might not be necessary for all cases. Only 6% of infants in Group A with normal US had both scarring on DMSA and vesicoureteric reflux (VUR), predominantly low-grade, on MCUG. In contrast, in Group B with abnormal US, 32% had scarring on DMSA and VUR, predominantly high-grade, on MCUG. The study recommended US and DMSA for all infantile febrile UTI cases, reserving MCUG for cases with abnormal DMSA when US is normal, and performing MCUG irrespective of DMSA findings when US is abnormal (PUBMED:19620025). Another study, the Tepecik UTI Guideline-2, suggested that VCUG, a type of cystogram, should not be the first examination considered in patients with the first febrile UTI between 2 and 24 months of age. The guideline recommended performing VCUG only if US was abnormal or if DMSA was abnormal following a normal US. This approach aimed to reduce unnecessary invasive procedures and radiation exposure without missing significant cases of VUR (PUBMED:36625359). In conclusion, while a cystogram can be an important tool in the investigation of febrile UTI in infants, its use may be more selective, based on the results of other less invasive imaging studies such as US and DMSA scans.
This selective approach aims to minimize unnecessary procedures and radiation exposure while still identifying significant cases of VUR that may require further intervention.
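To make the quantitative case for this selective approach concrete, here is a minimal worked example of Bayesian updating after a negative DMSA scan, using figures reported in PUBMED:24366989. Two assumptions are made purely for illustration: the whole-cohort prevalence of dilating VUR (28.9%) is taken as the pre-test probability, and the negative likelihood ratio from the under-6-months subgroup (0.0911) is applied to it; the abstract reports these quantities separately.

% Post-test probability of dilating VUR after a negative DMSA scan.
% Assumed pre-test probability: 0.289 (cohort prevalence); assumed LR-: 0.0911.
\[
\text{pre-test odds} = \frac{0.289}{1 - 0.289} \approx 0.41
\]
\[
\text{post-test odds} = \text{pre-test odds} \times LR^{-} \approx 0.41 \times 0.0911 \approx 0.037
\]
\[
\text{post-test probability} = \frac{0.037}{1 + 0.037} \approx 3.6\%
\]

A post-test probability in the low single digits is consistent with the abstract's reported negative predictive values of 97.26-100%, and it is the quantitative sense in which a negative DMSA makes a routine cystogram low-yield.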
Instruction: Adjuvant Treatment and Clinical Trials in Elderly Patients With Endometrial Cancer: A Time for Change? Abstracts: abstract_id: PUBMED:26745698 Adjuvant Treatment and Clinical Trials in Elderly Patients With Endometrial Cancer: A Time for Change? Objectives: The aims of the study were to evaluate the gap between recommended and received adjuvant therapy in elderly patients with endometrial cancer (EC) and to determine the percent of women 70 years and older who would meet enrollment criteria for representative Gynecologic Oncology Group (GOG) trials. Methods And Materials: An institutional review board-approved retrospective chart review of all EC cases from a tertiary care institution from 2005 to 2010 was performed. Clinical, surgical, and pathologic data were abstracted from electronic medical records. Gynecologic Oncology Group protocols 249, 209, and 229L were selected as representative national EC trials. Patients were evaluated for eligibility by each protocol's criteria. Results: Twenty-six percent (280/1064) of patients with EC were older than 70 years. More than 60% (181/280) of elderly patients with EC were recommended to undergo adjuvant therapy. By therapy type, 64% (48/75) of elderly patients who were recommended adjuvant radiation received it, 53% (49/92) of elderly patients who were recommended combination chemotherapy and radiation received it, and 29% (4/14) of elderly patients who were recommended chemotherapy received it. In evaluating enrollment criteria for GOG 249, 30% (40/134) of pathologically eligible patients would have been eliminated on medical clearance grounds; for GOG 209, 31% (26/86) would have been eliminated, and for GOG 229L, 9% (4/45) would have been eliminated purely for medical reasons. Conclusions: More adjuvant treatment is recommended in elderly patients because of a higher incidence of advanced disease and aggressive histopathology. Approximately half of the elderly patients who were recommended treatment actually received it. In addition, clinical trial data are limited for elderly patients because approximately one third of the women aged 70 years and older who met pathologic enrollment criteria for trials were excluded because of complex medical disease. abstract_id: PUBMED:31233482 Adjuvant treatment for endometrial cancer. Purpose Of Review: This article reviews and interprets studies on adjuvant treatment of endometrial cancer published during the last 18 months. Recent Findings: For patients with intermediate- and high-intermediate-risk endometrial cancer, vaginal brachytherapy remains the adjuvant therapy of choice. New molecular markers might help to define patients in this group for whom observation only is sufficient and women who might have benefitted from external beam radiotherapy. Preliminary results from large randomized controlled trials have shown that in early-stage, high-risk endometrial cancer the addition of chemotherapy to external beam radiotherapy (EBRT) did not improve survival. The combination of vaginal brachytherapy with three courses of chemotherapy resulted in similar progression-free and overall survival (3 years) as EBRT. In stage III high-risk endometrial cancer, the addition of chemotherapy to EBRT improved failure-free survival but not overall survival (immature data). Chemotherapy alone had the same efficacy concerning progression-free and overall survival (immature data).
Summary: Three large randomized clinical trials on the role of adjuvant radio- and/or chemotherapy have so far provided only immature results. Discussions about changes in clinical practice should be postponed until mature data from all three trials are available. The impact of new molecular markers for risk stratification will be assessed in ongoing RCTs. abstract_id: PUBMED:8730625 Adjuvant progestagen therapy for the treatment of endometrial cancer: review and meta-analyses of published randomised controlled trials. Objective: To assess the efficacy of adjuvant progestagen therapy for the treatment of endometrial cancer. Design: Computerised literature review by MEDLINE, backed up by a manual search of 16 journals published before January 1995, to identify relevant published randomised controlled trials. Trials were classified according to methodological quality criteria. Odds ratios and 95% confidence intervals (CIs) were calculated where data allowed. Setting: Institute of Epidemiology and Health Service Research, University of Leeds. Subjects: Patients presenting with endometrial cancer. Intervention: Adjuvant progestagen therapy. Outcomes: Death (overall, cancer-related and intercurrent) and relapse. Results: Six randomised controlled trials comparing adjuvant progestagen with no adjuvant therapy in the treatment of endometrial cancer were identified in the literature prior to January 1995. Three trials had an acceptable method of randomisation. Meta-analysis demonstrated that overall survival was not improved by adjuvant progestagen therapy and may even have been adversely affected. Furthermore, this treatment was not associated with any reduction in relapse or death from endometrial cancer. Conclusion: Current evidence does not support use of adjuvant progestagen therapy in the primary treatment of endometrial cancer. abstract_id: PUBMED:21495211 Adjuvant chemotherapy for endometrial cancer: unproven. High-risk endometrial cancer (EC), only 15% of all EC, mainly affects elderly women, often with significant comorbid diseases. Because patients with high-risk EC are at increased risk of distant metastases and EC death, the use of adjuvant chemotherapy has been investigated in several trials. Trials comparing radiotherapy and chemotherapy have not shown a survival difference. A first trial comparing combinations of chemotherapy and radiotherapy with radiotherapy alone suggested a progression-free survival benefit. Toxicity and quality of life data are lacking. The role of adjuvant chemotherapy for endometrial carcinoma remains unproven. High-risk EC remains the challenge for further research. abstract_id: PUBMED:16277879 Defining the roles of aromatase inhibitors in the adjuvant treatment of early-stage breast cancer. The widespread use of tamoxifen has led to significant improvements in survival for postmenopausal women with early-stage hormone receptor-positive breast cancer; however, approximately 30% of patients die despite receiving tamoxifen as adjuvant treatment. In addition, concerns exist regarding tamoxifen-associated side effects, including endometrial cancer and thromboembolic disease. The development of the third-generation aromatase inhibitors (AIs; anastrozole, exemestane, and letrozole) therefore represented a welcome potential alternative to tamoxifen.
Several clinical trials have demonstrated the superiority of AIs over tamoxifen in the adjuvant treatment of postmenopausal women with hormone-sensitive breast cancer, but these trials differ in their design and in the characteristics of their patient populations. This review discusses the different designs of the primary adjuvant, switching, extended adjuvant, and sequencing trials that are investigating the use of AIs in the adjuvant treatment of breast cancer and provides direction regarding how the data from these trials could be used to guide treatment choice. This review also demonstrates why one should not extrapolate results from clinical trials to clinical situations that differ from the clinical trial, or from clinical trials investigating a particular AI to clinical situations involving another AI. abstract_id: PUBMED:20975364 Adjuvant chemotherapy for endometrial cancer: unproven. High-risk endometrial cancer (EC), only 15% of all EC cases, mainly affects elderly women, often with significant comorbid diseases. Because patients with high-risk EC are at increased risk of distant metastases and EC death, the use of adjuvant chemotherapy has been investigated in several trials. Trials comparing radiotherapy and chemotherapy have not shown a survival difference. A first trial comparing combinations of chemotherapy and radiotherapy with radiotherapy alone suggested a progression-free survival benefit. Toxicity and quality-of-life data are lacking. The role of adjuvant chemotherapy for endometrial carcinoma remains unproven. High-risk EC remains the challenge for further research. abstract_id: PUBMED:15701884 Endocrine therapy trials of aromatase inhibitors for breast cancer in the adjuvant and prevention settings. The recent past has witnessed the appearance of substantial data relating to endocrine therapy of breast cancer. In the adjuvant therapy setting in early breast cancer, several large, well-conducted, randomized, double-blind clinical trials have provided evidence for the value of the third-generation aromatase inhibitors (AI) anastrozole, exemestane, and letrozole. The three major studies to date [i.e., Arimidex, tamoxifen alone, or in combination (ATAC), International Exemestane Study (IES), and letrozole after 5 years of tamoxifen (MA.17)] evaluated three different populations of women from the standpoints of duration of prior tamoxifen and thus time since the treatment of the primary breast cancer. A consistent pattern of improvement in disease-free survival was seen whether the control arm was tamoxifen (ATAC and IES) or placebo following tamoxifen (MA.17). From a toxicity standpoint, the major findings with the AIs were a decreased incidence of thromboembolic events and endometrial cancers but an increase in musculoskeletal complaints and potential for decreasing bone density. The last issue should be clarified with ongoing studies addressing the impact of the three AIs on bone density and fractures. In summary, based on ATAC, IES, and MA.17, respectively, the following conclusions can be drawn relating to postmenopausal women with hormone receptor-positive early breast cancer: anastrozole is a reasonable choice for initial endocrine adjuvant therapy, exemestane should be considered for women who have received 2 to 3 years of tamoxifen, and letrozole should be considered for those who have completed about 5 years of tamoxifen.
In the prevention setting, tamoxifen has been evaluated in multiple trials involving >28,000 women and, despite clear evidence of benefit, the level of acceptance of this agent by women seems to be low. Two recently developed prevention trials, IBIS 2 and MAP.3, involve the study of aromatase inhibitors against a placebo control rather than tamoxifen. Whereas the recent adjuvant trials have established the value of the third-generation aromatase inhibitors in early-stage breast cancer, the marked reductions in contralateral breast cancers seen in these trials suggest they will be of value in the prevention setting in women at increased risk of developing the disease. abstract_id: PUBMED:20418109 The evolving role of adjuvant therapy in endometrial cancer. Extra-fascial total hysterectomy and bilateral salpingo-oophorectomy with or without lymph node dissection is the initial treatment for endometrial cancer. Unresolved scientific controversy exists regarding the selection of patients who may benefit from lymphadenectomy, the magnitude of such benefit, and the role of adjuvant therapy. External pelvic irradiation has been shown to reduce loco-regional recurrences without improving survival. Meta-analyses of randomized trials indicate that external pelvic irradiation offers a significant benefit in terms of survival only in high-risk disease (i.e. stage Ic grade 3). Intermediate risk patients (i.e. stage Ib grade 3 disease), therefore, may be treated with adjuvant intravaginal brachytherapy alone to avoid the risk of side effects associated with pelvic irradiation. Overall, patients with clinically early endometrial cancer develop relapses in less than 20% of cases, mostly at distant sites. Randomized trials comparing adjuvant external pelvic irradiation versus adjuvant chemotherapy have shown conflicting clinical results. Chemotherapy seems to prevent or delay distant spread more than radiotherapy, while radiotherapy appears to prevent or delay local relapses more than chemotherapy, although these trends fail to achieve statistical significance. Recent evidence from a randomized trial indicates that sequential external pelvic irradiation with or without brachytherapy and platinum-based chemotherapy result in significantly better progression-free survival than radiotherapy alone in patients with high-risk endometrial cancer. Reliable surgical/pathological variables predictive of high risk of distant failure may be used to identify a subset of patients suitable for randomized trials of adjuvant chemotherapy with or without external irradiation. abstract_id: PUBMED:20975363 Adjuvant chemotherapy in endometrial cancer. The indications for adjuvant therapy in endometrial cancer are briefly reviewed. The importance of systemic adjuvant therapy is emphasized. A short summary of randomized studies on adjuvant chemotherapy versus radiotherapy and on adjuvant sequential chemotherapy plus radiotherapy versus radiotherapy alone is given. On the basis of the present results from randomized trials, a combination of adjuvant radiotherapy and platinum-based chemotherapy seems to be most effective. abstract_id: PUBMED:17485907 Current clinical trials of endocrine therapy for breast cancer. Breast cancer is a hormone-dependent cancer like prostate cancer and endometrial cancer. Estrogen plays important roles in the development and progression of breast cancer. Endocrine therapy is the treatment of choice for estrogen receptor- and/or progesterone receptor-positive breast cancer.
Endocrine therapy has been used for several purposes, including chemoprevention, preoperative treatment, postoperative adjuvant treatment and treatment for recurrent diseases. A large number of clinical trials have provided evidence showing the clinical benefits of various endocrine therapies for the treatment of breast cancer. The current status and recent advances in endocrine therapy for breast cancer are reviewed based on the results of current clinical trials. Future perspectives of endocrine therapy are also discussed. Answer: The abstracts provided highlight several aspects of adjuvant treatment and clinical trials in elderly patients with endometrial cancer, suggesting that there may indeed be a time for change. A retrospective chart review from a tertiary care institution revealed that more than 60% of elderly patients with endometrial cancer were recommended to undergo adjuvant therapy, but approximately half of those recommended for treatment actually received it. This gap in care is particularly concerning given the higher incidence of advanced disease and aggressive histopathology in elderly patients. Furthermore, about one-third of women aged 70 years and older who met pathologic enrollment criteria for clinical trials were excluded due to complex medical disease, indicating that clinical trial data for this population are limited (PUBMED:26745698). Recent findings in adjuvant treatment for endometrial cancer suggest that vaginal brachytherapy remains the adjuvant therapy of choice for patients with intermediate and high intermediate risk. However, large randomized controlled trials have shown that the addition of chemotherapy to external beam radiotherapy did not improve survival in early-stage, high-risk endometrial cancer. In stage III high-risk endometrial cancer, chemotherapy added to external beam radiotherapy improved failure-free survival but not overall survival. The role of new molecular markers for risk stratification is being assessed in ongoing trials, and discussions about changes in clinical practice should await mature data from all three trials (PUBMED:31233482). A meta-analysis of published randomized controlled trials found that adjuvant progestagen therapy did not improve overall survival and was not associated with any reduction in relapse or death from endometrial cancer (PUBMED:8730625). Similarly, the role of adjuvant chemotherapy for endometrial carcinoma remains unproven, with trials comparing radiotherapy and chemotherapy not showing a survival difference (PUBMED:21495211, PUBMED:20975364). In summary, the evidence suggests that there is a need for change in the approach to adjuvant treatment and clinical trials for elderly patients with endometrial cancer. This includes addressing the under-treatment of elderly patients, re-evaluating the efficacy of current adjuvant therapies, and ensuring that clinical trials are inclusive of this population to generate relevant data for evidence-based decision-making.
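The odds ratios and 95% confidence intervals quoted in the meta-analytic abstracts above (e.g. PUBMED:8730625) are typically derived from each trial's 2x2 outcome table via the Woolf log method. A minimal sketch in Python; the event counts below are hypothetical and are not taken from any cited trial:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-method) 95% CI.
    a/b = events/non-events in the treated arm,
    c/d = events/non-events in the control arm."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical trial: 40 deaths / 160 survivors on adjuvant progestagen
# versus 32 deaths / 168 survivors on no adjuvant therapy
print(odds_ratio_ci(40, 160, 32, 168))
```

Per-trial estimates like this are what a meta-analysis then weights and pools.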
Instruction: Mortality by ethnic group to 2006: is extending census-mortality linkage robust? Abstracts: abstract_id: PUBMED:22854361 Mortality by ethnic group to 2006: is extending census-mortality linkage robust? Objective: To update trends in mortality by ethnic group from the New Zealand Census-Mortality Study (NZCMS), by additionally linking 2004-06 mortality records to the 2001 Census. To investigate possible bias from this extended linkage, especially for Pacific and Asian people who emigrate more frequently. Methods: Anonymous and probabilistic record linkage of 2004-06 mortality records with the 2001 Census was undertaken. Age-standardised 1-74 year old mortality rates by sex and age group, and for all-cause and selected causes of death, were calculated using the direct method for first 30 months post 2001 Census (2001-03) and second 30 months (2003-06). Results: Observed all-cause mortality rates continued to fall in 2003-06 compared to previous periods, but more so for Pacific (18.3% and 21.7% for males and females for 2003-06 compared to 2001-04, respectively) and Asian (22.2%, 16.7%), than for Maori (13.2%, 14.2%) and European/Other (13.0%, 10.4%). Observed rate ratios for Maori compared to European/Other were 2.43 (95% CI 2.31-2.57) for males and 2.72 (2.56-2.89) for females, the same (males) and slightly less (7%, females) than in 2001-03. Declines in cardiovascular disease (CVD) and injury mortality were the main drivers of all-cause mortality rate reductions for all ethnic groups. Relative inequalities in CVD between Maori and European/Other remain high (three to four-fold relative risks), but reduced by 8% for both males and females from 2001-03 to 2003-06, which in turn means that absolute inequalities closed by as much as 20%. Conclusion: We suspect that analyses comparing mortality rates over time within one of the closed NZCMS cohorts (e.g. 2001-03 compared to 2003-06) are prone to bias due to our inability to censor people when they migrate out of New Zealand. This limitation means mortality rates in the NZCMS are increasingly underestimated with time since census night, particularly for Pacific and Asian people. However, previously published NZCMS trends remain valid as the duration of follow-up (3 years) is short, and cohorts were not split by time since census. Nevertheless, it is safe to conclude that mortality rates continued to decline from 2001-03 to 2003-06 for all four ethnic groups. All-cause mortality inequalities for Maori compared to European/Other over this time were probably stable in relative terms and decreasing in absolute terms, but cardiovascular disease (CVD) inequalities probably decreased in both absolute and relative terms. abstract_id: PUBMED:10777988 Anonymous linkage of New Zealand mortality and Census data. Background: The New Zealand Census-Mortality Study (NZCMS) aims to investigate socio-economic mortality gradients in New Zealand, by anonymously linking Census and mortality records. Objectives: To describe the record linkage method, and to estimate the magnitude of bias in that linkage by demographic and socio-economic factors. Methods: Anonymous 1991 Census records, and mortality records for decedents aged 0-74 years on Census night and dying in the three-year period 1991-94, were probabilistically linked using Automatch. Bias in the record linkage was determined by comparing the demographic and socio-economic profile of linked mortality records to unlinked mortality records.
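The "direct method" named in the NZCMS methods above weights each age stratum's death rate by a fixed standard population, so rates for different groups and periods are compared on a common age structure. A minimal sketch with made-up strata and weights (illustrative only, not the WHO or Segi standard):

```python
# Direct age-standardisation: weight stratum-specific rates by a fixed
# standard population so rates are comparable across groups and periods.
deaths   = [12, 45, 160, 520]          # deaths per age stratum (hypothetical)
persons  = [90e3, 80e3, 60e3, 40e3]    # person-years at risk (hypothetical)
standard = [0.40, 0.30, 0.20, 0.10]    # standard population weights, sum to 1

rates = [d / n for d, n in zip(deaths, persons)]
asr = sum(w * r for w, r in zip(standard, rates))
print(f"age-standardised rate: {asr * 1e5:.1f} per 100,000 person-years")
```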
Results: 31,635 of 41,310 (76.6%) mortality records were linked to one of 3,373,896 Census records. The percentage of mortality records linked to a Census record was lowest for 20-24 year old decedents (49.0%) and highest for 65-69 year old decedents (81.0%). By ethnic group, 63.4%, 57.7%, and 78.6% of Maori, Pacific, and decedents of other ethnic groups, respectively, were linked. Controlling for demographic factors, decedents from the most deprived decile of small areas were 8% less likely to be linked than decedents from the least deprived decile, and male decedents from the lowest occupational class were 6% less likely to be linked than decedents from the highest occupational class. Conclusion: The proportion and accuracy of mortality records linked was satisfactorily high. Future estimates of the relative risk of mortality by socio-economic status will be modestly under-estimated by 5-10%. abstract_id: PUBMED:18797482 Ethnic counts on mortality and census data (mostly) agree for 2001-2004: New Zealand Census-Mortality Study update. Background: The New Zealand Census-Mortality Study (NZCMS) previously demonstrated substantial undercounting of Maori and Pacific deaths on mortality data relative to census data for the 1980s and 1990s. The recent linkage of 2001-04 mortality data to 2001 census data allows us to determine whether any such 'numerator-denominator' bias persists. Methods: 2001 census anonymously and probabilistically linked to 3 years of subsequent mortality data (82,404 eligible mortality records), allowing a comparison of ethnicity recording. Results: Using a 'total' definition of ethnicity, there was a close agreement of census and mortality counts: 7419 Maori on the 2001 census compared to 7536 Maori according to mortality data--a census to mortality ratio of 0.98; Pacific--2451 and 2493, ratio 0.98; Asian--1236 and 1215, ratio 1.02; non-Maori non-Pacific non-Asian--73,089 and 72,051, ratio 1.01. Using a 'sole' definition of Maori ethnicity, census counts were only 86% of mortality counts, indicating that mortality data is not recording as many people with two or more ethnic groups as would be expected based on census data. This 'sole' bias was more pronounced in the South Island. Conclusion: There is now little bias in ethnic group counts between census and mortality data for a 'total' definition of ethnic group. Calculations of mortality rates by ethnicity using unlinked census and mortality data and a total definition of ethnicity should be unbiased. These results strongly support using the census definition of ethnicity on all health datasets. abstract_id: PUBMED:34629363 Estimation of Socioeconomic Inequalities in Mortality in Japan Using National Census-linked Longitudinal Mortality Data. Background: We aimed to develop census-linked longitudinal mortality data for Japan and assess their validity as a new resource for estimating socioeconomic inequalities in health. Methods: Using deterministic linkage, we identified, from national censuses for 2000 and 2010 and national death records, persons and deceased persons who had unique personal identifiers (generated using sex, birth year/month, address, and marital status). For the period 2010-2015, 1,537,337 Japanese men and women aged 30-79 years (1.9% in national census) were extracted to represent the sample population. This population was weighted to adjust for confounding factors. We estimated age-standardized mortality rates (ASMRs) by education level and occupational class. 
The slope index of inequality (SII) and relative index of inequality (RII) by educational level were calculated as inequality measures. Results: The reweighted sample population's mortality rates were somewhat higher than those of the complete registry, especially in younger age-groups and for external causes. All-cause ASMRs (per 100,000 person-years) for individuals aged 40-79 years with high, middle, and low education levels were 1,078 (95% confidence interval [CI], 1,051-1,105), 1,299 (95% CI, 1,279-1,320), and 1,670 (95% CI, 1,634-1,707) for men, and 561 (95% CI, 536-587), 601 (95% CI, 589-613), and 777 (95% CI, 745-808) for women, respectively, during 2010-2015. SII and RII by educational level increased among both sexes between 2000-2005 and 2010-2015, which indicates that mortality inequalities increased. Conclusion: The developed census-linked longitudinal mortality data provide new estimates of socioeconomic inequalities in Japan that can be triangulated with estimates obtained with other methods. abstract_id: PUBMED:12316710 Differential mortality and fertility in Sri Lanka by ethnic group. "Using data from the publication by the Registrar-General [of Sri Lanka] of deaths by age, sex and ethnic group for the census year 1981, as well as births by maternal age and ethnic group for the same year, this study examines variation in mortality by ethnic group over the post-war period. It also examines differential fertility by ethnic group over the same period in the light of new data that have become available." abstract_id: PUBMED:20858236 Establishing a follow-up of the Swiss MONICA participants (1984-1993): record linkage with census and mortality data. Background: To assess the feasibility and quality of an anonymous linkage of 1) MONICA (MONItoring of trends and determinants in CArdiovascular disease, three waves between 1984 and 1993) data with 2) census and mortality records of the Swiss National Cohort in order to establish a mortality follow-up until 2008. Many countries lack general population cohorts because they have failed to provide follow-up information for health surveys. Methods: Record linkage procedures were used in a multi-step approach. Kaplan-Meier curves from our data were contrasted with the survival probabilities expected from life tables for the general population, age-standardized mortality rates from our data with those derived from official cross-sectional mortality data. Cox regression models were fit to investigate the influence of covariates on survival. Results: 97.8% of the eligible 10,160 participants (25-74y at baseline) could be linked to a census (1990: 9,737; 2000: 8,749), mortality (1,526, 1984-2008) and/or emigration record (320, 1990-2008). Linkage success did not differ by any key study characteristic. Results of survival analyses were robust to linkage step or certainty of a correct link. Loss to follow-up between 1990 and 2000 amounted to 4.7%. MONICA participants had lower mortality than the general population, but similar mortality patterns (e.g. variation by educational level, marital status or region). Conclusions: Using anonymized census and death records allowed an almost complete mortality follow-up of MONICA study participants of up to 25 years. Lower mortality compared to the general population was in line with a presumable 'healthy participant' selection in the original MONICA study.
Apart from that, the derived data set reproduced known mortality patterns and showed only negligible potential for selection bias introduced by the linkage process. Anonymous record linkage was feasible and provided robust results. It can thus provide valuable information when no cohort study is available. abstract_id: PUBMED:20720601 Ethnic counts on mortality and census data 2001-06: New Zealand census-mortality study update. Aim: To provide an update for the assessment of discrepancies in ethnicity counts in the 2001 census and mortality data for the 2004-2006 period. Methods: 2001 census anonymously and probabilistically linked to 5 years of subsequent mortality data (135,849 eligible mortality records), allowing a comparison of ethnicity recording for the years 2001-2004 and 2004-2006. Results: Using a total definition of ethnicity, census and mortality counts agree reasonably well in 2004-06 and resemble comparisons in 2001-04, except at younger ages where counts for Pacific and Asian ethnicities are up to a third less for mortality data. Due to multiple ethnicities being more commonly recorded on census data, sole ethnicity counts are generally greater on mortality than census data, particularly for Maori ethnicity. Conclusion: Similar to 2001-2004, there is little bias in ethnic group counts between census and mortality data when using total ethnicity. Calculations of mortality rates by ethnicity using unlinked census and mortality data and a total definition of ethnicity should be unbiased. These results support ongoing use of the census definition of ethnicity on all health datasets. abstract_id: PUBMED:31581296 Education inequalities in adult all-cause mortality: first national data for Australia using linked census and mortality data. Background: National linked mortality and census data have not previously been available for Australia. We estimated education-based mortality inequalities from linked census and mortality data that are suitable for international comparisons. Methods: We used the Australian Bureau of Statistics Death Registrations to Census file, with data on deaths (2011-2012) linked probabilistically to census data (linkage rate 81%). To assess validity, we compared mortality rates by age group (25-44, 45-64, 65-84 years), sex and area-inequality measures to those based on complete death registration data. We used negative binomial regression to quantify inequalities in all-cause mortality in relation to five levels of education ['Bachelor degree or higher' (highest) to 'no Year 12 and no post-secondary qualification' (lowest)], separately by sex and age group, adjusting for single year of age and correcting for linkage bias and missing education data. Results: Mortality rates and area-based inequality estimates were comparable to published national estimates. Men aged 25-84 years with the lowest education had age-adjusted mortality rates 2.20 [95% confidence interval (CI): 2.08‒2.33] times those of men with the highest education. Among women, the rate ratio was 1.64 (1.55‒1.74). Rate ratios were 3.87 (3.38‒4.44) in men and 2.57 (2.15‒3.07) in women aged 25-44 years, decreasing to 1.68 (1.60‒1.76) in men and 1.44 (1.36‒1.53) in women aged 65-84 years. Absolute education inequalities increased with age. One in three to four deaths (31%) was associated with less than Bachelor level education. Conclusions: These linked national data enabled valid estimates of education inequality in mortality suitable for international comparisons.
The magnitude of relative inequality is substantial and similar to that reported for other high-income countries. abstract_id: PUBMED:28461712 Black-White Disparities in Adult Mortality: Implications of Differential Record Linkage for Understanding the Mortality Crossover. Mortality rates among black individuals exceed those of white individuals throughout much of the life course. The black-white disparity in mortality rates is widest in young adulthood, and then rates converge with increasing age until a crossover occurs at about age 85 years, after which black older adults exhibit a lower mortality rate relative to white older adults. Data quality issues in survey-linked mortality studies may hinder accurate estimation of this disparity and may even be responsible for the observed black-white mortality crossover, especially if the linkage of surveys to death records during mortality follow-up is less accurate for black older adults. This study assesses black-white differences in the linkage of the 1986-2009 National Health Interview Survey to the National Death Index through 2011 and the implications of racial/ethnic differences in record linkage for mortality disparity estimates. Match class and match score (i.e., indicators of linkage quality) differ by race/ethnicity, with black adults exhibiting less certain matches than white adults in all age groups. The magnitude of the black-white mortality disparity varies with alternative linkage scenarios, but convergence and crossover continue to be observed in each case. Beyond black-white differences in linkage quality, this study also identifies declines over time in linkage quality and even eligibility for linkage among all adults. Although linkage quality is lower among black adults than white adults, differential record linkage does not account for the black-white mortality crossover. abstract_id: PUBMED:21670323 Ethnic mortality differentials in Lithuania: contradictory evidence from census-linked and unlinked mortality estimates. Background: This study examines discrepancies between census and death registry information in the reporting of the ethnicity of the deceased in Lithuania and shows how these reporting differences influence estimates of mortality inequality by ethnicity. Methods: This study uses a census-linked dataset provided by Statistics Lithuania. The data include all deaths and population exposures between 1 July 2001 and 31 December 2004. The information on the ethnicity of the deceased was available from both the census and the death records. The Poisson regression was applied (1) to measure the effects of socio-demographic variables on the misreporting of ethnicity on death records and (2) to estimate mortality rate ratios by ethnicity based on census-linked and unlinked data. Results: The death-record-based information on ethnicity under-reports the deaths of people of Russian, Polish and other ethnicities and over-reports the deaths of people of Lithuanian ethnicity. This leads both to the underestimation of mortality in the three ethnic minority groups and to biased mortality rate ratios. The misreporting is higher in death records for women, persons younger than 80 years, divorced persons, urban residents and those dying from ill-defined causes. Conclusion: Studies based on unlinked data may provide biased estimates of ethnic mortality differences. 
Answer: The extension of census-mortality linkage to include 2004-06 mortality records linked to the 2001 Census in New Zealand aimed to update trends in mortality by ethnic group. However, this extended linkage raised concerns about possible bias, particularly for Pacific and Asian people who have higher emigration rates. The study found that all-cause mortality rates continued to decline for all four ethnic groups (Pacific, Asian, Maori, and European/Other) from 2001-03 to 2003-06, with the most significant reductions observed for Pacific and Asian groups. Despite the decline in mortality rates, relative inequalities in cardiovascular disease (CVD) between Maori and European/Other remained high, although there was a reduction in both absolute and relative terms (PUBMED:22854361). The study acknowledged the potential for bias in the New Zealand Census-Mortality Study (NZCMS) due to the inability to censor individuals when they migrate out of New Zealand. This limitation means that mortality rates in the NZCMS are increasingly underestimated with time since the census night, especially for Pacific and Asian people. However, the previously published NZCMS trends were considered valid as the follow-up duration was short (3 years), and cohorts were not split by time since the census. The study concluded that mortality rates likely continued to decline for all ethnic groups, with all-cause mortality inequalities for Maori compared to European/Other probably stable in relative terms and decreasing in absolute terms, but CVD inequalities likely decreased in both absolute and relative terms (PUBMED:22854361). In summary, extending census-mortality linkage to include more recent mortality records is robust in terms of continuing to show declines in mortality rates for all ethnic groups. However, there is a potential bias in the NZCMS due to migration, particularly affecting Pacific and Asian populations, which may lead to an underestimation of mortality rates over time. Despite this, the trends within the closed cohorts remain valid, and the study provides valuable insights into the mortality rates and inequalities among different ethnic groups in New Zealand up to 2006.
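The numerator-denominator bias checks discussed above reduce to comparing ethnicity counts for the same linked deaths as recorded on the two datasets. A short sketch using the 2001-04 total-ethnicity counts reported in PUBMED:18797482; ratios near 1.00 indicate little bias:

```python
# Census vs mortality ethnicity counts for the same linked deaths
# (total-ethnicity counts from the 2001-04 NZCMS comparison above).
counts = {
    "Maori":   (7419, 7536),
    "Pacific": (2451, 2493),
    "Asian":   (1236, 1215),
    "Other":   (73089, 72051),
}
for group, (census, mortality) in counts.items():
    print(f"{group:8s} census/mortality ratio = {census / mortality:.2f}")
# Ratios near 1.00 support calculating unlinked mortality rates by
# ethnicity with a 'total' definition of ethnic group.
```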
Instruction: Carotid endarterectomy in the elderly: are the patient characteristics, the early outcome, and the predictors the same as those in younger patients? Abstracts: abstract_id: PUBMED:10805883 Carotid endarterectomy in octogenarians: comparison with patient characteristics and outcomes in younger patients. Objectives: Carotid endarterectomy has been shown to be of clear benefit to selected patients. However, recent trials of carotid endarterectomy versus best medical therapy have excluded octogenarians, and some authors have suggested that carotid endarterectomy would have an unfavorable cost-benefit relationship in octogenarians. We compared patients and results for carotid endarterectomy in octogenarians and younger patients. Methods: We reviewed the results for 582 primary carotid endarterectomies (90 in octogenarians and 492 in younger patients) performed in 528 patients between February 1, 1985, and January 31, 1998 (all data were collected prospectively for the most recent 301 carotid endarterectomies). Conventional surgical technique was used with general anesthesia, selective shunting, and selective patching. Main outcome measures were perioperative and late ipsilateral stroke and death. Results: The two groups were similar with respect to indications for carotid endarterectomy and patient characteristics, except that octogenarians were more likely to have histories of congestive heart failure or hypertension and less likely to have histories of smoking or chronic lung disease. Carotid endarterectomy was performed for asymptomatic disease in 27% of the octogenarians and 33% of the younger patients (P = .31). Stenosis was ≥80% in 90% of the octogenarians and 78% of the younger patients (P = .014). Perioperative strokes, all of which were ipsilateral, occurred in one octogenarian (1.1%) and eight younger patients (1.6%, P = 1.00). No octogenarians and two younger patients died within 30 days of surgery (P = 1.00). Length of stay and direct costs associated with carotid endarterectomy were similar for octogenarians and younger patients. Late strokes occurred in two octogenarians (one ipsilateral) and four younger patients (two ipsilateral). Life table estimates of freedom from ipsilateral stroke at 2 years were 98% and 97% for octogenarians and younger patients, respectively (log-rank P = .69), and life table estimates of patient survival at 4 years were 81% and 89% for octogenarians and younger patients, respectively (P = .11). Octogenarians represented an increasing fraction of the carotid endarterectomies performed during the study period. Conclusions: Octogenarians selected for carotid endarterectomy were similar to younger patients with respect to indications for carotid endarterectomy and comorbidities. Early mortality, early and late neurologic outcome, complications, and resource utilization were similar for the two groups, and more than 75% of octogenarians survived 4 years after undergoing carotid endarterectomy. Cost-benefit analyses for carotid endarterectomy, which are highly sensitive to expected patient survival, might not be pertinent to individual patient situations. Intellectually intact octogenarians without unusually severe comorbidities are good candidates for and should be offered the benefits of carotid endarterectomy. abstract_id: PUBMED:17445605 Carotid endarterectomy in the elderly: are the patient characteristics, the early outcome, and the predictors the same as those in younger patients?
Background: The study was conducted to detect possible differences in patient characteristics, in early outcome, and in predictors for 30-day combined stroke and death rate between younger and older patients undergoing CEA. Methods: Between March 1988 and November 2005, we recorded prospectively the data from every patient who underwent carotid artery surgery. Cardiovascular risk factors, comorbidities, anesthetic and surgical techniques, outcome, and survival were reviewed and compared between the <75-year and ≥75-year age groups. Results: We performed 1028 procedures in 742 patients. A total of 286 carotid endarterectomies were done in 213 patients 75 years or older. The number of cardiovascular risk factors per patient decreased with age. The risk factors tobacco use, hyperlipidemia, and the combination tobacco, hypertension, and hyperlipidemia were predominant in the younger group. Hypertension as the only risk factor and the combination hypertension and hyperlipidemia were more prevalent in the older group. The perioperative combined minor and major stroke and death rate was 2.8% in both groups. Attendant predictors for adverse outcome in the ≥75 age group were female sex and the combination of the risk factors diabetes, tobacco use, and hypertension. Conclusions: The type, the number, and the combinations of cardiovascular risk factors and the predictors for adverse outcome are different in the 2 age groups. Our observations indicate that people 75 years or older with a small number of cardiovascular risk factors can undergo CEA with the same risk of perioperative stroke and death as the younger patients. abstract_id: PUBMED:33242214 Predictors of outcome events and 6-year mortality after carotid endarterectomy and carotid stenting in patients with carotid artery stenosis. Aim: The aim of our study was to evaluate the results of CEA and CAS in patients with carotid artery stenosis, and their effect on long-term mortality and morbidity, as well as to identify predictors of long-term mortality in a single-centre observational study. Clinical Rationale: While data on short-term morbidity and mortality after carotid endarterectomy (CEA) and carotid stenting (CAS) is robust, there is only a limited amount of literature on long-term mortality and its predictors five years-plus post these procedures. Material And Methods: Consecutive patients with symptomatic and asymptomatic internal carotid artery stenosis treated with CEA or CAS in a single centre in eastern Slovakia between 2012 and 2014 were included. We recorded basic sociodemographic data, the presence of co-morbidities and periprocedural complications. Clinical and sonographic follow-up was performed three and 12 months after the procedures. Patient survival data and any stroke data was obtained at the end of a six-year follow-up. Results: We included 259 patients after CEA (mean age 67.4 ± 8.5, 64.5% men) and 321 after CAS (mean age 66.9 ± 8.4, 73.5% men). We did not identify a statistically significant difference in short-term or long-term mortality, survival times, or the presence of short-term or long-term complications between the CEA and CAS groups. Predictors of long-term mortality included age and diabetes mellitus in both cohorts. Repeated interventions were related to increased mortality only in the CAS cohort. Conclusions: The results of our study show that long-term mortality does not differ between CEA and CAS. abstract_id: PUBMED:31175797 Early carotid endarterectomy in symptomatic patients.
Objective: Stroke is a big social problem. The expediency of surgical treatment is justified as the chances for complete recovery from the neurological deficit increase. The aim: To analyze available sources of scientific information on the timing of surgical treatment of patients with acute neurological deficit. Materials and Methods: We analyzed 41 English publications in PubMed for 5 years from 2012 to 2017, with the keywords "urgent carotid endarterectomy" and "early carotid endarterectomy". Conclusions: In the classic course of stroke or TIA, the best treatment results are obtained when the CEA is performed 3-7 days after the marker event. CEA should be performed as soon as possible to restore brain revascularization and prevent the fatal progression of the neurological deficit after crescendo TIA or stroke-in-evolution, despite the fact that the risk of complications and disability after surgery may be higher than 6%. abstract_id: PUBMED:9845656 Determinants of outcome after carotid endarterectomy. Background: The efficacy of carotid endarterectomy for selected patients has been evaluated with randomized controlled clinical trials. The generalizability of these studies to average surgical practice remains an important public health concern. Objective: The objective of the study was to determine the predictors of outcome after carotid endarterectomy on a regional basis. Patients And Methods: The study was designed as a retrospective cohort study and included all consecutive patients presenting for carotid endarterectomy at the 8 University of Toronto-affiliated hospitals in the period from January 1, 1994, to December 31, 1996. The main outcome measure was 30-day postoperative stroke or death rate. Results: During the study interval, 1280 primary carotid endarterectomies were performed. The overall combined stroke and death rate was 6.3% for all patients who underwent endarterectomy (4.0% for patients who were asymptomatic). The significant predictors of poor outcome were the following: presenting symptoms (odds ratio, 1.74; 95% confidence interval [CI], 0.96, 3.12), low surgeon volume (<6 cases per year; odds ratio, 3.98; 95% CI, 1.65, 9.58), and left-sided surgery (odds ratio, 1.72; 95% CI, 1.07, 2.76). Conclusion: These data suggest that adoption of the recommendations of the symptomatic carotid endarterectomy trials is appropriate. However, endarterectomy for asymptomatic lesions remains of uncertain benefit on a regional basis and must be individualized to the experience of the specific surgeon. The surgeon volume/outcome relationship that is identified in this study suggests a need for a minimum volume threshold for this procedure. abstract_id: PUBMED:26449599 The influence of diabetes mellitus on early outcome following carotid endarterectomy. Background: There are few studies that look at the influence of diabetes mellitus on early outcome following carotid endarterectomy (CEA). Those available have reported conflicting results, with some showing poor outcome and others similar outcome to those without diabetes mellitus. Objective: To assess the influence of diabetes mellitus on early outcome following CEA. Methods: Clinical data on patients who had CEA over a 5-year period were acquired from a prospectively maintained computerised database. They were divided into two groups, namely diabetics and non-diabetics. Results: Two hundred and sixty-four charts were analysed.
There were no significant differences in patient demographics and risk factors for atherosclerosis between the two groups. The majority (71%) of patients had CEA for symptomatic carotid disease. Carotid shunting was performed selectively, and significantly more diabetic patients had CEA under the protection of a carotid shunt (p=0.0469). Postoperative strokes, transient ischaemic attacks and deaths were not significantly different between the two groups. Conclusions: Diabetes mellitus had no influence on the early surgical outcome following carotid endarterectomy. abstract_id: PUBMED:22220451 Comparison between carotid stenting and carotid endarterectomy in early outcome. Carotid artery stenting (CAS) is a widely used method in prevention of stroke for carotid artery stenosis as an alternative to surgical treatment. Initial studies reveal higher morbidity and mortality rates for CAS than acceptable standards for carotid endarterectomy (CEA). The aim of this study was to compare results in a series of CAS with a concurrent risk-matched group of CEA patients. The study included two groups of 50 patients with internal carotid artery stenosis. We compared early outcome (30 days after procedure) in risk-matched groups of patients who underwent these procedures. Post procedural complications were equally frequent in both groups. There was no significant difference in perioperative complication rates (P = 0.871). Comparison of these two methods shows that CAS and CEA are competitive methods for treatment of carotid artery stenosis. In particular, for symptomatic patients at high risk for surgery, CAS is an alternative treatment. abstract_id: PUBMED:26016220 Early carotid endarterectomy in symptomatic patients - our experience Carotid endarterectomy is the gold standard as a therapeutic regimen for patients with high grade symptomatic stenosis of the internal carotid artery (ACI). This study analyzed the effect of early carotid endarterectomy in patients undergoing an operative procedure 2-3 weeks after the development of ischemic neurologic symptoms, considering the frequency and type of complications in the postoperative period. Patients included in this study were those with significant symptomatic ACI stenosis (70%-99%), which caused ischemic stroke or transient ischemic attacks (TIA). Patients with ischemic stroke were operated within twenty days of the initial neurologic event, whilst in those with symptoms of TIA, surgery was performed immediately after diagnostic work-up. In all cases, carotid endarterectomy was performed under general anesthesia with the use of protective intraluminal shunt. In the vast majority of cases, tucking or Kunlin's sutures of the distal intima were applied. All procedures were performed between January 2008 and October 2012, and the total number of patients was 69. All patients underwent the same follow up program. Follow up carotid ultrasound was performed routinely on postoperative day 7 and at 1, 3, 6 and 12 months. In this study, 27 (39%) patients suffered minor ipsilateral stroke and 42 (61%) patients had TIA symptoms with verified significant ACI stenosis. Postoperative complications were observed in four (5.26%) patients. Two (2.63%) patients developed ischemic stroke after the procedure and two (2.63%) patients developed ACI restenosis in the late postoperative period and were treated by endovascular stenting.
In conclusion, we found that early carotid endarterectomy was of greater benefit than delayed endarterectomy, which is in keeping with the published studies. The leading observation was that in selected patients, early carotid endarterectomy was not associated with a higher risk of postoperative complications in comparison with delayed endarterectomy and could be performed safely. abstract_id: PUBMED:24837913 Safety of early endarterectomy in patients with symptomatic carotid artery stenosis: an international multicenter study. Background And Purpose: Although the latest recommendations suggest that carotid endarterectomy (CEA) should be performed in symptomatic carotid artery stenosis (sCAS) patients within 2 weeks of the index event, only a minority of patients undergo surgery within the recommended time-frame. The aim of this international multicenter study was to prospectively evaluate the safety of early CEA in patients with sCAS in everyday clinical practice settings. Methods: Consecutive patients with non-disabling acute ischaemic stroke (AIS) or transient ischaemic attack (TIA) due to sCAS (≥ 70%) underwent early (≤ 14 days) CEA at five tertiary-care stroke centers during a 2-year period. Primary outcome events included stroke, myocardial infarction (MI) or death occurring during the 30-day follow-up period and were defined according to the International Carotid Stenting Study criteria. Results: A total of 165 patients with sCAS [mean age 69 ± 10 years; 69% men; 70% AIS; 6% crescendo TIA; 8% with contralateral internal carotid artery (ICA) occlusion] underwent early CEA (median elapsed time from symptom onset 8 days). Urgent CEA (≤ 2 days) was performed in 20 cases (12%). The primary outcomes of stroke and MI were 4.8% [95% confidence interval (CI) 1.5%-8.1%] and 0.6% (95% CI 0%-1.8%). The combined outcome event of non-fatal stroke, non-fatal MI or death was 5.5% (95% CI 2.0%-9.0%). Crescendo TIA, contralateral ICA occlusion and urgent CEA were not associated (P > 0.2) with a higher 30-day stroke rate. Conclusions: Our findings indicate that the risk of early CEA in consecutive unselected patients with non-disabling AIS or TIA due to sCAS is acceptable when the procedure is performed within 2 weeks (or even within 2 days) from symptom onset. abstract_id: PUBMED:33161214 Carotid endarterectomy should be performed early or delayed in patients with symptomatic carotid stenosis? Background: To date, the literature directly comparing early carotid endarterectomy (CEA) and delayed CEA in patients with symptomatic carotid stenosis (CS) is limited. We aimed to evaluate the efficacy and safety of early CEA and delayed CEA in patients with symptomatic CS by performing a meta-analysis. Material And Method: PubMed, Cochrane Library (last searched in May 2020) and relevant websites such as Web of Science and EMBASE (1990 to May 2020) were searched. All meta-analyses of eligible results were conducted using STATA version 12.0 (Stata Corporation, College Station, Texas, USA). Results: A total of 7 articles were included in the study, hailing from New Scotland, Chicago, Sweden, UK, Italy, and France. In this study, early CEA meant that the procedure was performed within the first 14 days or first 30 days, and delayed CEA meant that the procedure was performed more than 14 days or 30 days after symptom occurrence. Referring to the latter early CEA group and delayed CEA group, there were three publications.
The results illustrated that the early CEA group was not associated with a higher incidence of stroke (OR = 0.77, 95% CI: 0.273-2.170; P = 0.620). No statistically significant difference was found in the incidence of postoperative 30-day mortality or of combined stroke or mortality. Meanwhile, referring to the former early CEA group and delayed CEA group, there were six articles. The results demonstrated that the early CEA group was associated with a higher rate of postoperative 30-day mortality (RD = 0.010, 95% CI: 0.002 to 0.019; P = 0.022). Conclusion: The meta-analysis of these related studies suggests that, compared to the delayed CEA group, early CEA performed in the acute post-stroke phase resulted in a higher risk of postoperative mortality. Therefore, delayed CEA was safer than early CEA for patients with symptomatic CS. Answer: Carotid endarterectomy (CEA) in the elderly does show some differences in patient characteristics, early outcomes, and predictors when compared to younger patients. In a study comparing octogenarians to younger patients undergoing CEA, it was found that octogenarians were more likely to have histories of congestive heart failure or hypertension and less likely to have histories of smoking or chronic lung disease. Despite these differences, the early mortality, early and late neurologic outcomes, complications, and resource utilization were similar for both groups. More than 75% of octogenarians survived 4 years after undergoing CEA, suggesting that age alone should not be a deterrent to offering CEA to elderly patients (PUBMED:10805883). Another study that compared patients under 75 years of age with those 75 years or older found that the number of cardiovascular risk factors per patient decreased with age. The perioperative combined minor and major stroke and death rate was 2.8% in both groups. However, the predictors for adverse outcomes were different between the two age groups, with female sex and the combination of risk factors such as diabetes, tobacco use, and hypertension being attendant predictors in the older group (PUBMED:17445605). Furthermore, a study that evaluated the results of CEA and carotid stenting (CAS) found that predictors of long-term mortality included age and diabetes mellitus in both cohorts, indicating that these factors are important considerations regardless of the patient's age (PUBMED:33242214). In summary, while there are some differences in patient characteristics and predictors of outcomes between elderly and younger patients undergoing CEA, the early outcomes such as perioperative stroke and death rates appear to be similar. Elderly patients with fewer cardiovascular risk factors can undergo CEA with the same risk of perioperative stroke and death as younger patients, suggesting that CEA can be a viable option for selected elderly patients.
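The pooled estimates quoted above (OR = 0.77 for stroke; RD = 0.010 for 30-day mortality) come from standard inverse-variance pooling of per-study effects; the meta-analysis itself used STATA 12.0. A minimal fixed-effect sketch in Python with hypothetical per-study values, not the actual study data:

```python
import math

def pool_fixed(estimates, ses):
    """Inverse-variance fixed-effect pooling of per-study estimates
    (log-ORs or risk differences) with their standard errors."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Hypothetical log-ORs for 30-day stroke, early vs delayed CEA
log_ors = [math.log(0.9), math.log(0.6), math.log(0.8)]
ses = [0.45, 0.60, 0.50]
pooled, lo, hi = pool_fixed(log_ors, ses)
print(f"pooled OR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")
```

Risk differences pool the same way, only without the log transform; a random-effects model would additionally widen the weights for between-study heterogeneity.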
Instruction: Seven basic dimensions of personality pathology and their clinical consequences: Are all personalities equally harmful? Abstracts: abstract_id: PUBMED:26096533 Seven basic dimensions of personality pathology and their clinical consequences: Are all personalities equally harmful? Objectives: Dimensional pathology models are increasingly being accepted for the assessment of disordered personalities, but their ability to predict negative outcomes is yet to be studied. We examine the relative clinical impact of seven basic dimensions of personality pathology through their associations with a wide range of clinical outcomes. Methods: A sample of 960 outpatients was assessed through a 7-factor model integrating the Cloninger, the Livesley, and the DSM taxonomies. Thirty-six indicators of clinical outcome covering three areas - dissatisfaction, functional difficulties, and clinical severity - were also assessed. The unique contribution of each personality dimension to clinical outcome was estimated through multiple regressions. Results: Overall, personality dimensions explained 17.6% of the variance of clinical outcome, but varied substantially in terms of their unique contributions. Negative Emotionality had the greatest impact in all areas, contributing 43.9% of the explained variance. The remaining dimensions led to idiosyncratic patterns of clinical outcomes but had a comparatively minor clinical impact. A certain effect was also found for combinations of dimensions such as Negative Emotionality × Impulsive Sensation Seeking, but most interactions were clinically irrelevant. Conclusions: Our findings suggest that the most relevant dimensions of personality pathology are associated with very different clinical consequences and levels of harmfulness. Practitioner Points: The relative clinical impact of seven basic dimensions of personality pathology is examined. Negative Emotionality (Neuroticism) is 6-14 times as harmful as other pathological dimensions. The remaining dimensions and their interactions have very specific and comparatively minor clinical consequences. Limitations: We examine only a handful of clinical outcomes. Our results may not be generalizable to other clinical or life outcomes. Our variables are self-reported and hence susceptible to bias. Our design does not allow us to establish causal relationships between personality and clinical outcomes. abstract_id: PUBMED:17214592 The big seven model of personality and its relevance to personality pathology. Proponents of the Big Seven model of personality have suggested that Positive Valence (PV) and Negative Valence (NV) are independent of the Big Five personality dimensions and may be particularly relevant to personality disorder. These hypotheses were tested with 403 undergraduates who completed a Big Seven measure and markers of the Big Five and personality pathology. Results revealed that PV and NV incrementally predicted personality pathology dimensions beyond those predicted by multiple markers of the Big Five. However, factor analyses suggested that PV and NV might be best understood as specific, maladaptive aspects of positive emotionality and low agreeableness, respectively, as opposed to independent factors of personality. Implications for the description of normal and abnormal personality are discussed. abstract_id: PUBMED:19817631 Dimensions of personality pathology in adolescents: relations to DSM-IV personality disorder symptoms. 
The aim of the present study was to relate and compare two approaches to personality pathology in adolescents. Dimensions of personality pathology, assessed by the Dimensional Assessment of Personality Pathology-Basic Questionnaire for Adolescents (DAPP-BQ-A; Tromp & Koot, 2008), were related to DSM-IV personality disorder (PD) symptoms in 168 adolescents referred for mental health services. Correlational analyses revealed that the DAPP-BQ-A higher- and lower-order dimensions were related to PD symptoms in predictable ways. Regression analyses showed that for all but three PDs (Schizoid, Schizotypal, and Passive-Aggressive), lower-order dimensions accounted for unique variance, after controlling for gender, age, and co-occurring PD symptoms. It is concluded that dimensional assessment may provide valuable information on adolescent personality pathology, and facilitate the study of developmental antecedents of adult personality pathology. abstract_id: PUBMED:34276482 How Our Caregivers Shape Who We Are: The Seven Dimensions of Attachment at the Core of Personality. Psychology defines personality as the stable traits of an individual, and cognitive research suggests that a set of core beliefs is at the root of these traits. From this perspective, two major questions remain unanswered: (1) What are the core beliefs that make up personality? (2) How are they acquired? An interesting answer is provided by attachment theory, according to which attachment is at the basis of personality. The current theoretical formulation, however, does not sufficiently clarify the relationship between the two. Adopting a cognitive-clinical approach, we put forward a novel version of attachment theory, arguing that it can better account for the relationship between attachment and personality, thereby providing more convincing answers to questions (1) and (2). In particular, we propose that: (A) attachment information is acquired over seven dimensions; (B) the acquisition of each dimension is induced by a specific caregiving feature and (C) realized through a specific acquisition mechanism - imprinting. In a nutshell, we propose an Attachment-Personality Model (APM) according to which seven attachment dimensions constitute the knowledge core of personality. We finally discuss the significant implications of the model, especially its clinical application in terms of conception, assessment, and treatment of mental disorders. The model can be empirically tested, and we suggest three ways to do that. abstract_id: PUBMED:19072681 Dimensions of personality pathology in adolescents: psychometric properties of the DAPP-BQ-A. This study aimed to contribute to the dimensional approach to personality pathology by addressing the applicability of a personality pathology questionnaire, originally developed for adults, in adolescent samples. The psychometric properties of the Dimensional Assessment of Personality Pathology-Basic Questionnaire for Adolescents (DAPP-BQ-A) were studied in two samples including 170 adolescents referred for mental health services and 1,628 nonreferred adolescents, respectively. Factor analysis resulted in a strong replication of the original structure, retaining four factors (Emotional Dysregulation, Dissocial Behavior, Inhibitedness, and Compulsivity), which could be further organized into a two-dimensional structure with factors identifiable as Internalizing and Externalizing, suggesting a possible link between personality and psychopathology.
Internal consistency and test-retest reliability proved to be satisfactory for all lower-order dimensions, with the exception of Intimacy Problems. Several of these dimensions showed considerable promise in differentiating nonreferred adolescents, referred adolescents without and referred adolescents with a personality disorder. The present findings underscore the need for a developmental perspective on personality pathology. Promising aspects of the dimensional approach to personality pathology in adolescence are discussed. abstract_id: PUBMED:15465966 Dimensions of personality pathology: an alternative to the five-factor model. Objective: Researchers have advocated replacing the DSM-IV classification of personality disorders with an alternative diagnostic system based on the five-factor model. This study evaluates the clinical comprehensiveness of the five-factor model and addresses the broader question of how many factors, and which factors, are necessary to understand personality pathology. Method: A national sample of 530 psychiatrists and clinical psychologists used the Shedler-Westen Assessment Procedure (SWAP-200) to provide detailed psychological descriptions of patients with personality disorder diagnoses. The SWAP-200 is a 200-item instrument designed to capture the richness and complexity of clinical observations while also providing quantifiable data for research. We used factor analysis to identify dimensions of personality relevant to understanding personality pathology. Results: The five-factor structure replicated in a content-restricted subset of 60 SWAP-200 items. However, factor analysis of the full SWAP-200 yielded a conceptually richer factor solution that did not resemble the five-factor model. The analysis identified 12 clinically relevant personality dimensions labeled psychological health, psychopathy, hostility, narcissism, emotional dysregulation, dysphoria, schizoid orientation, obsessionality, thought disorder, oedipal conflict (histrionic sexualization), dissociation, and sexual conflict. Conclusions: The five-factor model represents a sound distillation of the personality constructs used by laypersons. However, it omits key clinical constructs and may not capture the complexity of personality syndromes seen in clinical practice. The SWAP-200 factors may provide a framework for studying personality pathology that is both empirically grounded and clinically relevant. abstract_id: PUBMED:1742707 Dimensions of personality pathology. The reliability of DSM-III-R diagnoses of personality disorders is poor and their validity has yet to be established. There is little evidence that the features of personality pathology cluster into these diagnostic entities. For these reasons, it is important to explore alternative ways of classifying personality disorders. In this preliminary study, reliable scales were developed to assess 100 personality dimensions which were systematically developed. The factorial structure underlying the dimensions was evaluated in a heterogeneous sample of 110 subjects from the general population. Sixteen components, accounting for 81.4% of the variance, were retained for rotation to oblique structure. The components were labelled social avoidance, narcissism, insecure attachment, compulsive behaviours, interpersonal disesteem, mobility, anxiousness, conduct problems, stimulus seeking, identity disturbance, self-harm, rejection, diffidence, and hypersensitivity. 
Two components were not interpreted because they only had one or two salient loadings. abstract_id: PUBMED:20860863 Contrasting prototypes and dimensions in the classification of personality pathology: evidence that dimensions, but not prototypes, are robust. Background: DSM-5 may mark the shift from a categorical classification of personality pathology to a dimensional system. Although dimensional and categorical conceptualizations of personality pathology are often viewed as competing, it is possible to develop categories (prototypes) from combinations of dimensions. Robust prototypes could bridge dimensions and categories within a single classification system. Method: To explore prototype structure and robustness, we used finite mixture modeling to identify empirically derived personality pathology prototypes within a large sample (n=8690) of individuals from four settings (clinical, college, community, and military), assessed using a dimensional measure of normal and abnormal personality traits, the Schedule for Nonadaptive and Adaptive Personality (SNAP). We then examined patterns of convergent and discriminant external validity for prototypes. Finally, we investigated the robustness of the dimensional structure of personality pathology. Results: The resulting prototypes were meaningful (externally valid) but non-robust (sample dependent). By contrast, factor analysis revealed that the dimensional structures underlying specific traits were highly robust across samples. Conclusions: We interpret these results as further evidence of the fundamentally dimensional nature of an empirically based classification of personality pathology. abstract_id: PUBMED:10629431 Six basic dimensions of personality and a seventh factor of generalized dysfunctional personality: a diathesis system covering all personality disorders. The five-factor model (FFM) of personality has recently been claimed to be comprehensive for all dimensionally conceived personality disorders (PDs). This assumption is refuted on the grounds of a principal component analysis conducted for three basic factor inventories (FFPI, NEO-FFI, Hamburg Personality Inventory), 22 explorative facet scales for normal personality and a questionnaire inventory (Inventory of Clinical Personality Accentuations) of 24 PD scales. Seven substantial orthogonal Varimax factors were found explaining 73% of the total variance. The five factors of the FFM were replicated. In addition, a new basic factor 'risk and competition seeking' (basic dimension 'risk', BD-R) emerged replicating findings of Andresen. The massive seventh factor was substantially loaded by all 24 PD scales and interpreted as 'general dysfunctional personality' (GDP). Most of the highly loaded scales describe varying forms of cognitive aberration. Together with GDP, the six-factor model now explains about 73% of all PD scales, thus, a virtually comprehensive descriptive diathesis system is achieved. Without GDP and BD-R, only 25% can be accounted for by the FFM. Five of the six normal dimensions will be loaded with positive and negative signs by some PDs. Neuroticism (BD-N) is still unique regarding its positive substantial correlations with almost all PD scales. BD-N is also the only normal factor which is correlated with GDP at about 0.50. A geometric model for the relationships between basic and clinical factors is presented. abstract_id: PUBMED:30106357 Moving beyond categories and dimensions in personality pathology assessment and diagnosis. 
It has been suggested that a dimensional model of personality pathology should be adopted for the development and refinement of personality disorder classification. In this article, the advantages and challenges of moving toward a dimensional model are briefly reviewed. However, it is suggested that although categories and dimensions are valuable frameworks for personality pathology diagnosis, an expansion beyond categories and dimensions is needed to address the shortcomings seen in current diagnostic systems. Ideas and examples are offered for how this might occur. Declaration of interest: None. Answer: No, not all personalities are equally harmful. According to the study (PUBMED:26096533), the seven basic dimensions of personality pathology have varying degrees of clinical impact. The dimension of Negative Emotionality was found to have the greatest impact across all areas of clinical outcome, contributing 43.9% of the explained variance. This suggests that Negative Emotionality (or Neuroticism) is significantly more harmful than other pathological dimensions, which have more specific and comparatively minor clinical consequences. The study indicates that while personality dimensions explained 17.6% of the variance of clinical outcome overall, there was substantial variation in the unique contributions of each dimension. Therefore, it can be concluded that not all personalities are equally harmful, with some dimensions having a more pronounced negative impact on clinical outcomes than others.
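As a side note on the variance figures quoted in the answer above: the 43.9% contribution of Negative Emotionality is a share of the explained variance, not of the total outcome variance. Assuming a simple proportional partition (an assumption made here for illustration, not stated in the source), the two published figures can be combined as in the following sketch.

    # Illustrative arithmetic only; assumes Negative Emotionality's 43.9% share
    # partitions the 17.6% of explained outcome variance proportionally.
    total_explained = 0.176          # variance of clinical outcome explained by all dimensions
    neg_emotionality_share = 0.439   # share of that explained variance from Negative Emotionality

    neg_emotionality_total = total_explained * neg_emotionality_share
    print(f"~{neg_emotionality_total:.1%} of total outcome variance")  # prints ~7.7%

On this reading, Negative Emotionality alone would account for roughly 7.7% of the total variance in clinical outcome, consistent with the answer's characterization of it as the single most harmful dimension.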
Instruction: Could personality traits modify the response to uncorrected high hyperopia? Abstracts: abstract_id: PUBMED:16213400 Could personality traits modify the response to uncorrected high hyperopia? Purpose: We aimed to measure temperament and investigate personality in children with high hyperopia considering that these could modify the individual response to uncorrected high hyperopia. Methods: Fifteen children (age range, 5 to 12 years) with orthotropia and ametropic amblyopia in the presence of uncorrected high hyperopia were identified (group 1). Among the children with refractive accommodative esotropia, 15 children (age range, 5 to 12 years) were enrolled to form group 2. We measured temperament using the Children's Behavior Questionnaire (CBQ) Short Form. The Children's Apperception Test (CAT-H) was also administered to all subjects. Results: Regarding the temperament scales measured by the CBQ, themes of discomfort, fear, and shyness were more dominant in group 2; high-intensity pleasure and smiling-laughter were dominant in group 1. However, the difference was statistically significant only for the fear scale (P = 0.045). CAT-H results revealed that aggression toward the parents was the most commonly encountered behavioral pattern in both groups. The children in group 1 were more likely to express a passive-aggressive behavioral pattern. Obstinacy and anal period characteristics dominated in the children in group 2. Themes of narcissistic injury were more frequently expressed by these children. Conclusion: The temperament scales and personality traits could play a role in the modification of the individual response to uncorrected high hyperopia. This finding deserves more research in a larger study group. abstract_id: PUBMED:35295061 Uncorrected refractive error in a university community. Aims And Background: To determine the prevalence and pattern of uncorrected refractive error among staff of a Nigerian university. Patient And Methods: A cross-sectional study of consecutive staff of the University of Nigeria, Nsukka, Nigeria, who presented for a 10-day free eye screening program. Demographic data and data on eye care awareness and knowledge were obtained with a combination of self-administered and interviewer-administered questionnaires in the English language. Clinical examinations included visual acuity taken at 6 m with Snellen's chart; noncontact tonometry; pen-torch eye examination; direct ophthalmoscopy; and autorefraction and subjective refraction. Results: One thousand and eighty-three subjects aged 18-82 years (mean = 44.1 ± 12.15 years) comprising 568 females (52.4%) and 515 males (47.6%) were screened. Eighty-nine subjects (8.3%) were visually impaired and five subjects (0.5%) were blind. Three hundred and fifty-six subjects were diagnosed with refractive error (356/1083; 32.9%), out of which 149 subjects (41.9%) were uncorrected. The prevalence of uncorrected refractive error in this study population was 13.8%. Astigmatism was the commonest refractive error, whereas hypermetropia and hypermetropic astigmatism were the commonest spherical and astigmatic errors, respectively. Conclusion: A significant proportion of the staff of this Nigerian university still lives with uncorrected refractive error and its attendant consequences. Regular eye checks should be done by the staff of our universities, and efforts to raise eye care awareness should be intensified among the populace, including apparently enlightened communities.
abstract_id: PUBMED:35978677 Estimation of the Prevalence of Uncorrected Refractive Error and Other Ocular Morbid Conditions in School Children of Industrial Area in a Non-metro City in India. Purpose This study aims to estimate the prevalence of uncorrected refractive error and ocular morbid conditions in school-going children of the Pimpri Chinchwad Municipal Corporation (PCMC) industrial belt. Methods Ocular examination was done in a well-equipped mobile clinic on school premises in the presence of a school teacher using visual acuity (VA) charts, an autorefractometer, a retinoscope, and a handheld slit lamp. For the age group of 5-6 years, Lea symbols and HOTV charts were used, and for the age group of >7 years, Snellen's chart was used. A detailed anterior segment examination was done to assess lid position, the presence of any lid swelling, conjunctival congestion, conjunctival xerosis, corneal opacity, and lens opacity, and findings of previous eye surgery were noted. Spectacle correction was given to these students if they were found to have a significant refractive error. Children requiring intervention other than refractive correction were referred to a tertiary hospital. Results A total of 3,054 school children were examined. Most were in the age group of 11-15 years (2,448 (80.2%)), with a mean age of 12.45 ± 2.022 years; 1,470 (48.1%) were male children. A total of 368 (12.04%) children had uncorrected refractive error. Myopia was seen in 204 (6.68%) children, hypermetropia in 16 (0.52%) children, and astigmatism in 148 (4.85%) children. On classification, simple myopic astigmatism (SMA) was found in 73 (2.39%) children, compound myopic astigmatism (CMA) in 38 (1.24%) children, simple hypermetropic astigmatism (SHA) in 13 (0.34%) children, and compound hypermetropic astigmatism (CHA) in 16 (0.52%) children. Moreover, 121 children had ocular morbid conditions. Ocular morbidity with decreased vision was seen in 52 (1.7%) children with preexisting refractive error and 12 (0.39%) with amblyopia, and strabismus was seen in eight (0.26%) children. Five (0.16%) children had a lens disorder, and five (0.16%) had no improvement with glasses despite a normal anterior segment. Conclusion There was a high prevalence of uncorrected refractive error. Early detection of uncorrected refractive error and ocular morbidity will improve overall performance in school-going children. abstract_id: PUBMED:37180530 Risk Factors for Uncorrected Refractive Error: Persian Cohort Eye Study - Rafsanjan Center. Purpose: To determine the prevalence of visually significant uncorrected refractive error (URE) in Rafsanjan and investigate the related factors. URE is the leading cause of visual impairment (VI), which causes the second-highest number of years lived with disability. URE is a preventable health problem. Methods: In this cross-sectional study, participants from Rafsanjan aged 35-70 years were enrolled between 2014 and 2020. Demographic and clinical characteristics data were gathered, and eye examination was performed. Visually significant URE was defined as present if habitual visual acuity (HVA; visual acuity with present optical correction) was >0.3 logMAR in the best eye and the visual acuity of that eye showed >0.2 logMAR improvement after the best correction. Logistic regression was used to determine the association between predicting variables (age, sex, wealth, education, employment, diabetes, cataract, and refractive error characteristics) and the outcome (URE).
Results: Among the 6991 participants of the Rafsanjan subcohort of the Persian Eye Cohort, 311 (4.4%) had a visually significant URE. Diabetes was significantly more prevalent in the participants with visually significant URE, at 18.7% versus 13.1% in patients without significant URE (P = 0.004). In the final model, each year of increase in age was associated with 3% higher odds of URE (95% confidence interval [CI]: 1.01-1.05). In comparison to low hyperopia, participants with low myopia had 5.17 times higher odds of visually significant URE (95% CI: 3.38-7.93). However, antimetropia decreased the risk of visually significant URE (95% CI: 0.02-0.37). Conclusion: Policymakers should pay special attention to elderly patients with myopia to effectively reduce the prevalence of visually significant URE. abstract_id: PUBMED:18524197 The use of levodopa in the treatment of bilateral amblyopia secondary to uncorrected high hypermetropia. The standard treatment of bilateral amblyopia secondary to uncorrected high bilateral hypermetropia has been spectacles and time. The patient described here failed to show adequate improvement in visual acuity after 18 months of standard treatment. Visual acuity improved dramatically when the patient began taking levodopa, dropped precipitously when the medication was discontinued, and returned quickly to post-treatment levels when levodopa was restarted. Visual acuity remained stable when the medication was slowly tapered and discontinued. This report suggests that dopamine may play a role in the pathophysiology of bilateral amblyopia secondary to uncorrected high bilateral hypermetropia. abstract_id: PUBMED:34778316 Improving Access to Refractive Services in Adults: A Health Examination Center-Based Model. Purpose: To assess the potential of a health examination center-based screening model in improving services for uncorrected refractive error. Methods: Individuals aged ≥18 years undergoing routine physical examinations at a tertiary hospital in northeast China were invited. Presenting visual acuity, noncycloplegic autorefraction, noncontact tonometry, fundus photography, and slit-lamp examination were performed. Refractive error was defined as having a spherical equivalent ≤ -0.75 D or ≥ +1 D, and uncorrected refractive error was considered as refractive error combined with presenting visual acuity < 6/12 in the better eye. Costs for the screening were assessed. Results: A total of 5,284 participants (61 ± 14 years) were included. The overall prevalence of myopia and hyperopia was 38.7% (95% CI, 37.4-40.0%) and 23.5% (95% CI, 22.3-24.6%), respectively. The prevalence of uncorrected refractive error was 7.85% (95% CI, 7.13-8.58%). Women (p < 0.001 and p = 0.003), those aged ≥ 70 years (p < 0.001 and p = 0.003), and those with myopia (p < 0.001 and p < 0.001) were at higher risk of uncorrected refractive error and uncorrected refractive error-related visual impairment. The spectacle coverage rate was 70.6% (95% CI, 68.2-73.0%). The cost to identify a single case of refractive error and uncorrected refractive error was US$3.2 and US$25.2, respectively. Conclusion: The prevalence of uncorrected refractive error is high in urban Chinese adults. Health examination center-based refractive error screening is able to provide an efficient and low-cost model to improve refractive services in China. abstract_id: PUBMED:998715 Values, personality, physical characteristics, and refractive error.
A questionnaire containing: (a) a questionnaire of physical characteristics associated with psychosomatic symptoms, (b) questions to determine the presence of myopia or hypermetropia but not the amount of error, (c) a shortened version of the Gough Adjective Check List, and (d) the Heist and Yonge Omnibus Personality Inventory was administered to 782 introductory psychology students attending Washington State University. This group consisted of 297 self-identified myopes (117 males, 180 females), 439 emmetropes (207 males, 232 females), and 46 hypermetropes (15 males, 31 females). Results indicated that myopes and nonmyopes differed from one another with respect to their value systems, personality, and physical characteristics as assessed by the components of the questionnaire. Differences in these characteristics were consistently found by all assessment techniques to exist most often between myopes and hypermetropes and between emmetropes and hypermetropes rather than between myopes and emmetropes. In general, myopes exhibited a personality pattern of introversion, whereas hypermetropes maintained a pattern described as extroverted. abstract_id: PUBMED:17020900 Inferring myopia over the lifecourse from uncorrected distance visual acuity in childhood. Aim: To report the usefulness of uncorrected distance visual acuity (DVA) at 16 years to "screen" for myopia status and to assess the lifetime risk of myopia, based on a national birth cohort. Methods: 1867 members of the 1958 British birth cohort for whom there were data on acuity at 16 years had autorefraction, as part of a biomedical survey, at 45 years. Reduced uncorrected DVA at age 16 years (6/12 or worse in both eyes) was compared with adult refraction (spherical equivalent). Results: Only a quarter of individuals in the population studied who had developed myopia by 45 years of age had reduced acuity at 16 years of age. Notably, half of all adults with moderate myopia (-2.99 to -5.99) and 31% (11/35) with severe myopia (≥ -6) had good uncorrected DVA in both eyes at 16 years of age. Thus, sensitivities were low, ranging from 16% for all myopia (cut-off point spherical equivalent -0.5) to 69% for severe myopia (cut-off point spherical equivalent -6). However, a high (91%) lifetime probability of primary myopia (spherical equivalent ≥ -0.5) given a reduced uncorrected DVA at 16 years was found. Conclusion: In this population, reduced uncorrected DVA in childhood is an inaccurate and inappropriate intermediate "phenotype" for capturing adult myopia status. However, our findings support assessment of DVA in secondary school children as an effective method of identifying refractive error (both myopia and hypermetropia). abstract_id: PUBMED:36628479 Prevalence of refractive errors in Nepalese children and adults: a systematic review with meta-analysis. Clinical Relevance: Country-specific estimates of the prevalence of refractive errors are important to formulate national eye health policies for refractive care services. Background: The purpose of this study was to systematically synthesise available literature and estimate the prevalence of refractive errors in the Nepalese population. Methods: PubMed, Scopus, and Web of Science databases were systematically searched for articles on refractive errors and presbyopia published in English language until 27 September 2022. Population and school-based quantitative, cross-sectional prevalence studies and Rapid Assessment of Avoidable Blindness survey repository data were included.
The quality of the included studies was assessed using the Newcastle-Ottawa scale adapted for cross-sectional studies. Data extraction was performed with consensus among the reviewers. Meta-analysis of the prevalence was performed using a random-effects model to estimate the pooled proportions. Results: A total of 38 studies with 101 701 participants were included: 18 studies in children (n = 31 596) and 20 in adults (n = 70 105). In children, the estimated pooled prevalence of overall refractive errors was 8.4% (95% CI: 4.8 to 12.9), with myopia, hypermetropia and astigmatism prevalent in 7.1% (95% CI: 3.7 to 11.4), 1.0% (95% CI: 0.7 to 1.3) and 2.2% (95% CI: 0.9 to 3.9), respectively. In adults, the prevalence of refractive errors, uncorrected refractive errors, and uncorrected presbyopia was 11.2% (95% CI: 8.0 to 14.9), 7.3% (95% CI: 5.4 to 9.5) and 78.9% (95% CI: 69.1 to 87.3), respectively. Conclusions: The pooled prevalence of refractive errors is relatively low, while uncorrected refractive errors and presbyopia are high in the Nepalese population, suggesting a need for better access to refractive care services in the country. The paucity of quality evidence on the prevalence of refractive errors, particularly in children, indicates a need for a well-designed population-based study to accurately estimate the current prevalence of refractive errors. abstract_id: PUBMED:26182727 PREVALENCE OF UNCORRECTED REFRACTIVE ERRORS IN ADULTS AGED 30 YEARS AND ABOVE IN A RURAL POPULATION IN PAKISTAN. Background: Uncorrected refractive errors are a leading cause of visual disability globally. This population-based study was done to estimate the prevalence of uncorrected refractive errors in adults aged 30 years and above in the village of Pawakah, Khyber Pakhtunkhwa (KPK), Pakistan. Methods: It was a cross-sectional survey in which 1000 individuals were included randomly. All the individuals were screened for uncorrected refractive errors, and those whose visual acuity (VA) was found to be less than 6/6 were refracted. In those in whom refraction was found to be unsatisfactory (i.e., a best corrected visual acuity of <6/6), further examination was done to establish the cause of the subnormal vision. Results: A total of 917 subjects participated in the survey (response rate 92%). The prevalence of uncorrected refractive errors was found to be 23.97% among males and 20% among females. The prevalence of visually disabling refractive errors was 6.89% in males and 5.71% in females. The prevalence was seen to increase with age, with maximum prevalence in the 51-60 years age group. Hypermetropia (10.14%) was found to be the commonest refractive error, followed by myopia (6.00%) and astigmatism (5.6%). The prevalence of presbyopia was 57.5% (60.45% in males and 55.23% in females). Poor affordability was the commonest barrier to the use of spectacles, followed by unawareness. Cataract was the commonest reason for impaired vision after refractive correction. The prevalence of blindness was 1.96% (1.53% in males and 2.28% in females) in this community, with cataract as the commonest cause. Conclusions: Despite being the most easily avoidable cause of subnormal vision, uncorrected refractive errors still account for a major proportion of the burden of decreased vision in this area. Effective measures for the screening and affordable correction of uncorrected refractive errors need to be incorporated into the health care delivery system. Answer: Yes, personality traits could potentially modify the response to uncorrected high hyperopia.
A study that aimed to measure temperament and investigate personality in children with high hyperopia found that certain temperament scales and personality traits might play a role in how individuals respond to uncorrected high hyperopia. In the study, children with refractive accommodative esotropia (group 2) exhibited more dominant themes of discomfort, fear, and shyness, while children with orthotropia and ametropic amblyopia in the presence of uncorrected high hyperopia (group 1) showed dominant themes of high-intensity pleasure and smiling-laughter. The difference in the fear scale was statistically significant. Additionally, aggression toward parents was a common behavioral pattern in both groups, with group 1 children more likely to express a passive-aggressive behavioral pattern, while obstinacy and anal period characteristics were more dominant in group 2 children. Themes of narcissistic injury were more frequently expressed by children in group 2. These findings suggest that temperament and personality traits could influence how children with uncorrected high hyperopia cope with their condition, although more research is needed in a larger study group to confirm these observations (PUBMED:16213400).
Instruction: Is endoscopic nodular gastritis associated with premalignant lesions? Abstracts: abstract_id: PUBMED:26087803 Is endoscopic nodular gastritis associated with premalignant lesions? Background: Nodularity on the gastric mucosa is occasionally seen in general practice. There is no consensus about the association of nodular gastritis and histological premalignant lesions. This study was designed to investigate the prevalence of histological premalignant lesions in dyspeptic patients with endoscopic nodular gastritis. Methods: Consecutive patients with endoscopic nodular gastritis were compared with an age- and sex-matched control group. Endoscopic nodular gastritis was defined as a miliary nodular appearance of the gastric mucosa on endoscopy. Biopsy samples of stomach tissue were examined for the presence of atrophic gastritis, intestinal metaplasia, and dysplasia. The presence of Helicobacter pylori infection was determined by histology. Results: From 5366 evaluated patients, a total of 273 patients with endoscopic nodular gastritis and 1103 participants as the control group were enrolled. H. pylori infection was detected in 87.5% of the patients with endoscopic nodular gastritis, whereas 73.8% of the control group were positive for H. pylori (p < 0.001). The prevalence of incomplete intestinal metaplasia (p = 0.016) and dysplasia (p < 0.001) in patients with endoscopic nodular gastritis was significantly higher than in the control group. Atrophic gastritis and complete intestinal metaplasia were also more frequent in patients with endoscopic nodular gastritis than in the control group. Conclusion: Dysplasia, incomplete intestinal metaplasia and H. pylori infection are significantly more frequent in patients with endoscopic nodular gastritis. Although further studies are needed before a clear conclusion can be reached, we suggest that endoscopic nodular gastritis might serve as a premalignant lesion and could be biopsied in all patients for the possibility of histological premalignancy, in addition to H. pylori infection. abstract_id: PUBMED:29946429 The evolving role of endoscopy in the diagnosis of premalignant gastric lesions. Gastric adenocarcinoma is a disease that is often detected late, at a stage when curative treatment is unachievable. This must be addressed through changes in our approach to the identification of patients at increased risk by improving the detection and risk assessment of premalignant changes in the stomach, including chronic atrophic gastritis and intestinal metaplasia. Current guidelines recommend utilising random biopsies in a pathology-led approach in order to stage the extent and severity of gastritis and intestinal metaplasia. This random method is poorly reproducible and prone to sampling error, and fails to acknowledge recent advances in our understanding of the progression to gastric cancer as a non-linear, branching evolutionary model. Data suggest that recent advances in endoscopic imaging modalities, such as narrow band imaging, can achieve a high degree of accuracy in the stomach for the diagnosis of these premalignant changes. In this review, we outline recent data to support a paradigm shift towards an endoscopy-led approach to diagnosis and staging of premalignant changes in the stomach. High-quality endoscopic interrogation of the chronically inflamed stomach mucosa, supported by targeted biopsies, will lead to more accurate risk assessment, with reduced rates of under- or missed diagnoses.
abstract_id: PUBMED:32363734 Genetic polymorphisms in the cag pathogenicity island of Helicobacter pylori and risk of stomach cancer and high-grade premalignant gastric lesions. Helicobacter pylori (Hp) infects the stomach of about half of the human population and is strongly associated with the risk of gastric cancer (GC) and its premalignant precursors. The cag pathogenicity island (cagPAI) is a region of the Hp genome encoding key molecular machinery involved in the infection process. Following a sequencing study, we selected 50 genetic polymorphisms located in seven cagPAI genes and tested their associations with the risk of advanced gastric premalignant lesions and GC in 1220 subjects from various Latin American populations showing the whole spectrum of phenotypes from gastritis to GC. We found that three polymorphisms of cagA are associated with the risk of advanced gastric premalignant lesions (incomplete intestinal metaplasia [i.e., Type 2 and 3] or dysplasia), and that six polymorphisms located in cagA, cagL and cagI were associated with risk of GC. When corrected for multiple testing, none of the associations were statistically significant. However, scores built by integrating the individual polymorphisms were significantly associated with the risk of advanced gastric premalignant lesions and GC. These results have the potential of establishing markers for risk stratification in the general population, in view of targeting Hp eradication to high-risk population groups. abstract_id: PUBMED:30159051 Association between VEGF-634G>C Gene Polymorphism with Gastric Premalignant Lesions and Serum VEGF Levels in Helicobacter pylori Gastritis Patients. Aim: To evaluate the association of the VEGF-634G>C gene polymorphism with premalignant gastric lesions as well as the level of VEGF. Methods: This cross-sectional study included patients with H. pylori gastritis at Haji Adam Malik General Hospital, Permata Bunda General Hospital, and Universitas Sumatera Utara Hospital, Medan, Indonesia. Detection of H. pylori infection was made using positive results of 14C-UBT, rapid urease test, and/or immunohistochemistry. Gastric premalignant lesion diagnosis was made when one or more of the following were present: chronic atrophic gastritis, intestinal metaplasia, or dysplasia. Real-time polymerase chain reaction (RT-PCR) was used to examine the VEGF-634G>C gene polymorphism. Additionally, serum samples of patients with H. pylori gastritis were obtained to determine the level of circulating VEGF. Data were analysed using SPSS version 22. Results: A total number of 87 patients with H. pylori gastritis were included in this study. Of all participants, 26 patients (29.9%) showed gastric premalignancy. There was a significant association between the GG+GC genotype of VEGF-634G>C and gastric premalignant lesions (P = 0.003; OR (CI 95%) = 6.07 (1.88-41.71)). The VEGF-634G>C polymorphism also showed an association with VEGF serum levels (P = 0.005). Patients with the GG+GC genotype were 3.16 times more likely to have high VEGF levels than those with the CC genotype. Conclusion: The VEGF-634G>C polymorphism, in particular the GG+GC genotype, was associated with an increased risk of gastric premalignant transformation as well as with high VEGF levels in patients with H. pylori gastritis. abstract_id: PUBMED:35657091 Dynamic change of circulating innate and adaptive lymphocyte subtypes during a cascade of gastric lesions.
According to the Correa model, intestinal-type gastric cancer (GC) is preceded by premalignant lesions, including chronic gastritis, intestinal metaplasia and dysplasia. However, the dynamic change of the innate and adaptive immune response during this process has not been studied comprehensively. In this study, we performed a comprehensive trajectory analysis of circulating innate lymphoid cell (ILC) and adaptive Th lymphocyte subtypes in patients spanning a cascade of gastric lesions. An increased circulating ILC2 frequency was found in the gastritis, premalignant and GC groups, whereas ILC2s were further decreased in the GC group compared with the premalignant group. Moreover, ILC3 levels were higher in the gastritis, premalignant lesion and GC stages compared with healthy controls. Furthermore, up-regulated T follicular helper (Tfh) cell proportions were detected in the gastritis and premalignant stages. In conclusion, by analyzing circulating ILC and Th cell frequencies and key cytokine production or immunoglobulin levels, we demonstrated the potential involvement of ILC3 and Tfh cells in gastric diseases. These findings will help in understanding the immunologic mechanisms in both GC and the premalignant process and may point to potential therapeutic targets to prevent GC development. abstract_id: PUBMED:31435167 Helicobacter pylori and cytokine gene variants as predictors of premalignant gastric lesions. Gastric cancer remains the third leading cause of mortality from cancer worldwide and carries a poor prognosis, due largely to late diagnosis. The importance of the interaction between Helicobacter pylori (H. pylori) infection, the main risk factor, and host-related genetic factors has been studied intensively in recent years. The genetic predisposition for non-hereditary gastric cancer is difficult to assess, as neither the real prevalence of premalignant gastric lesions in various populations nor the environmental risk factors for cancer progression are clearly defined. For non-cardiac intestinal-type cancer, identifying the factors that modulate the progression from inflammation toward cancer is crucial in order to develop preventive strategies. The role of cytokines and their gene variants has been questioned in regard to non-self-limiting H. pylori gastritis and its evolution to gastric atrophy and intestinal metaplasia; the literature now includes various and non-conclusive results on this topic. The influence of the majority of cytokine single nucleotide polymorphisms has been investigated for gastric cancer but not for preneoplastic gastric lesions. Among the investigated gene variants, only IL10 T-819C, IL-8-251, IL-18RAP 917997, IL-22 rs1179251, IL1-B-511, IL1-B-3954, IL4R-398 and IL1RN were identified as predictors of premalignant gastric lesion risk. One of the most important limiting factors is the inhomogeneity of the studies (e.g., the lack of data on concomitant H. pylori infection, methods used to assess preneoplastic lesions, and source population). Testing the modifying effect of H. pylori infection upon the relationship between cytokine gene variants and premalignant gastric lesions, or even testing the interaction between H. pylori and cytokine gene variants in multivariable models adjusted for potential covariates, could increase the generalizability of results. abstract_id: PUBMED:27956984 Two Atypical Cases of Nodular Gastritis: A Poorly Differentiated Gastric Adenocarcinoma and a Pseudo-Low Grade Gastric MALT Lymphoma.
Nodular gastritis is a Helicobacter pylori-related gastritis with endoscopically proven gooseflesh skin-like nodularity in the gastric antrum. Although an association between nodular gastritis and gastric malignancies has been suggested, there is neither a treatment strategy nor a treatment guideline for this condition because of its relative rarity. We have recently experienced two cases of diffuse-type nodular gastritis invading both the antrum and corpus of the stomach with atypical findings that required specific treatments in two young females. The first patient was diagnosed with a suspicious low grade gastric mucosa associated lymphoid tissue (MALT) lymphoma lesion on a diffuse-type nodular gastritis, and was cured by H. pylori eradication. The second patient was diagnosed with a signet cell type gastric cancer on a diffuse-type nodular gastritis, and was cured by surgical resection. When considering the nature and significance of these gastric lesions, a link between nodular gastritis and gastric malignancy should be considered, especially in young women who have diffuse-type nodular gastritis involving both the antrum and corpus of the stomach. abstract_id: PUBMED:30860506 Helicobacter pylori, nodular gastritis and premalignant stomach lesions: a case-control study. Introduction: Nodular gastritis (NG) is a type of gastritis strongly related to Helicobacter pylori and may be a risk factor for gastric cancer. It is a highly prevalent pathology in children infected with H. pylori. In Colombia there are no studies on this entity, and for this reason we decided to carry out the present investigation. Materials And Methods: Case-control study. Cases: endoscopic and histological nodular gastritis; controls: chronic gastritis without lymphoid follicles on histology. Population: adults older than 18 years who underwent an upper gastrointestinal endoscopy and signed informed consent. All patients were biopsied according to the OLGA system. Results: We included 344 patients, 172 in each group. The cases were on average 10 years younger than the controls (40.9 vs 50.9 years, p = 0.045). H. pylori was found in 91.9% of cases vs 47.8% of controls (p < 0.001). Lymphoid follicles were more frequent in the antrum than in the body (60.5% vs 4.7%, p < 0.00001). OLGA stage II was found in 6.4% of cases versus 1.2% of controls (p = 0.01); OLGA III was similar between groups. There was no OLGA IV in any patient. One gastric cancer was found among the cases. Conclusions: Patients with nodular gastritis are younger than controls. 92% of the cases had H. pylori. Recommendations: It is recommended that this infection be investigated and eradicated in patients with this type of gastritis. abstract_id: PUBMED:34306861 Factors for Negative Result in Serum Anti-Helicobacter pylori IgG Antibody Test in Adult Subjects With Nodular Gastritis: A Single-center Study. Aim: Nodular gastritis has been demonstrated to be strongly associated with Helicobacter pylori infection. The present retrospective study was performed to elucidate factors related to a negative serum antibody test result in adults with nodular gastritis. Materials And Methods: We investigated 116 H. pylori-positive subjects endoscopically diagnosed with nodular gastritis and subjected to a serum anti-H. pylori immunoglobulin G (IgG) antibody test. The degree of gastric mucosal atrophy and the presence of spotty redness in the gastric body and fornix were carefully determined by observations of endoscopic images. Results: Of the 116 investigated subjects, 108 were positive and 8 were negative on the serum anti-H. pylori IgG antibody test.
The degree of gastric mucosal atrophy was mild in seven of the eight seronegative cases. Serum pepsinogen II levels were significantly lower in patients with negative antibody test findings than in those who tested positive, while the pepsinogen I/II ratio tended to be higher in test-negative subjects. Only 1 of 69 subjects with spotty redness was negative in serum anti-H. pylori IgG antibody testing, while 7 of 47 without spotty redness were negative. Multiple logistic regression analysis of subjects with a negative test result revealed that the absence of spotty redness shown by endoscopy was a significant risk factor. Conclusion: The absence of spotty redness, which may reflect the degree of gastric body inflammation, is a significant factor indicating increased risk for a negative serum anti-H. pylori IgG antibody test result in subjects with nodular gastritis. abstract_id: PUBMED:26182135 A case of collagenous gastritis resembling nodular gastritis in endoscopic appearance. A 25-year-old Japanese female was referred to our clinic for the investigation of moderate iron-deficiency anemia and epigastralgia. Endoscopic examination showed diffuse mucosal nodules in the gastric body resembling nodular gastritis, but this pattern was not observed in the antrum. Histology of the gastric biopsies taken from the gastric body showed mild atrophic mucosa with chronic active inflammation. Some of the biopsy specimens showed deposition of patchy, band-like subepithelial collagen. Four years later, the patient showed no clinical symptoms or signs. A follow-up endoscopic examination showed similar findings, which mimicked pseudopolyposis or a cobblestone-like appearance. The biopsy specimens from the depressed mucosa between the nodules revealed a thickened subepithelial collagen band with no improvement, which led to a diagnosis of collagenous gastritis. Treatment with oral administration of proton-pump inhibitors and histamine-2-receptor antagonists had proved ineffective. To make a correct diagnosis of collagenous gastritis, we should determine the characteristic endoscopic findings and take biopsies from the depressed mucosa between the nodules. Answer: Yes, endoscopic nodular gastritis is associated with premalignant lesions. A study designed to investigate the prevalence of histological premalignant lesions in dyspeptic patients with endoscopic nodular gastritis found that dysplasia, incomplete intestinal metaplasia, and Helicobacter pylori infection were significantly more frequent in patients with endoscopic nodular gastritis compared to a control group. Atrophic gastritis and complete intestinal metaplasia were also more frequent in patients with endoscopic nodular gastritis (PUBMED:26087803). This suggests that endoscopic nodular gastritis might serve as a premalignant lesion and could be biopsied in all patients for the possibility of histological premalignancy, in addition to H. pylori infection.
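To put the H. pylori figures from PUBMED:26087803 in perspective, an odds ratio can be reconstructed from the reported group sizes (273 patients with nodular gastritis, 1103 controls) and positivity rates (87.5% vs 73.8%). The sketch below is illustrative only: the counts are rounded back-calculations from the published percentages, not data taken directly from the paper.

    # Illustrative only: 2x2 counts reconstructed from percentages in PUBMED:26087803.
    import math

    cases_total, controls_total = 273, 1103
    cases_pos = round(0.875 * cases_total)        # ~239 H. pylori-positive cases
    controls_pos = round(0.738 * controls_total)  # ~814 H. pylori-positive controls

    a, b = cases_pos, cases_total - cases_pos            # cases: exposed / unexposed
    c, d = controls_pos, controls_total - controls_pos   # controls: exposed / unexposed

    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf standard error of ln(OR)
    ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR ~ {odds_ratio:.2f} (95% CI ~ {ci_low:.2f}-{ci_high:.2f})")  # ~2.50 (1.70-3.66)

An odds ratio of roughly 2.5 for H. pylori positivity in nodular gastritis versus controls is consistent with the highly significant difference (p < 0.001) reported in the abstract.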
Instruction: Does retained-seed priming drive the evolution of serotiny in drylands? Abstracts: abstract_id: PUBMED:23345416 Does retained-seed priming drive the evolution of serotiny in drylands? An assessment using the cactus Mammillaria hernandezii. Premise Of The Study: Serotinous plants retain their seeds for a long time. In deserts, retained seeds undergo hydration-dehydration cycles and thus may become primed. Priming enhances germination and seedling vigor. We test the hypothesis that serotiny evolves because it provides a site protected from predators in which seeds can become primed. Rainfall-cued dispersal of primed seeds may enhance this effect. Methods: We tested this hypothesis with Mammillaria hernandezii through protein-content analyses; field and laboratory germination experiments with primed, unprimed, and retained seeds; and fitness estimations from demographic models. Key Results: Hydration-dehydration cycles induced priming, enhancing germination. Artificial priming and retention in the parent plant for 1 yr induced similar changes in seed protein patterns, suggesting that priming occurs naturally while seeds are retained. Under field conditions, germination of seeds retained for 1 yr more than doubled that of seeds of the same cohort that were not primed or that remained buried for 1 yr. The first seeds to germinate died rapidly. Serotinous plants whose seeds underwent priming had higher fitness than those whose seeds were in the soil seed bank or that did not experience priming. Conclusions: Priming in soil seed banks may be costly because of high predation, so seed protection during priming is sufficient to promote the evolution of serotiny. Bet hedging contributes to this process. Rapid germination of primed seeds that respond to brief rainfall events is disadvantageous because such rainfall is insufficient for seedling survival. Serotinous species counteract this cost by cueing dispersal with heavy precipitation. abstract_id: PUBMED:24222682 Fire structures pine serotiny at different scales. Premise Of The Study: Serotiny (delayed seed release with the consequent accumulation of a canopy seedbank) confers fitness benefits in environments with crown-fire regimes. Thus, we predicted that serotiny level should be higher in populations recurrently subjected to crown-fires than in populations where crown-fires are rare. In addition, under a high frequency of fires, space and resources are recurrently available, permitting recruitment around each mother to follow the seed rain shadow. Thus, we also predicted spatial aggregation of serotiny within populations. Methods: We compared serotiny, considering both the proportion and the age of serotinous cones, in populations living in contrasting fire regimes for two iconic Mediterranean pine species (Pinus halepensis, P. pinaster). We framed our results by quantitatively comparing the strength of the fire-serotiny relationship with previous studies worldwide. Key Results: For the two species, populations living under high crown-fire recurrence regimes had a higher serotiny level than those populations where the recurrence of crown-fires was low. For P. halepensis (the species with higher serotiny), populations in high fire recurrence regimes had higher fine-scale spatial aggregation of serotiny than those inhabiting low fire recurrence systems. The strength of the observed fire-serotiny relationship in P. halepensis is among the highest in published literature. 
Conclusions: Fire regime shapes serotiny level among populations, and in populations with high serotiny, recurrent fires maintain a significant spatial structure for this trait. Consequently, fire has long-term evolutionary implications at different scales, emphasizing its prominent role in shaping the ecology of pines. abstract_id: PUBMED:32607207 Enhanced seed defenses potentially relax selection by seed predators against serotiny in lodgepole pine. Serotiny, the retention of seeds in a canopy seed bank until high temperatures cause seeds to be released, is an important life history trait for many woody plants in fire-prone habitats. Serotiny provides a competitive advantage after fire but increases vulnerability to predispersal seed predation, due to the seeds being retained in clusters in predictable locations for extended periods. This creates opposing selection pressures. Serotiny is favored in areas of high fire frequency, but is selected against by predispersal seed predators. However, predation also selects for cone traits associated with seed defense that could reduce predation on serotinous cones and thereby relax selection against serotiny. This helps explain the elevated defenses in highly serotinous species. However, whether such interactions drive variation in seed defenses within variably serotinous populations has been studied rarely. We investigated the effects of phenotypic selection exerted by red squirrel (Tamiasciurus hudsonicus) predation on Rocky Mountain lodgepole pine (Pinus contorta latifolia) seeds. Squirrels preferentially harvested cones with more and larger seeds, indicating a preference for a higher food reward. We found evidence for stronger selection on trees with serotinous cones, which presumably accounts for the elevated defenses of and lower predation on serotinous compared to non-serotinous cones. Lower levels of predation on serotinous cones in turn lessen selection against serotiny by squirrels. This has important implications because the frequency of serotiny in lodgepole pine has profound consequences for post-fire communities and ecosystems widespread in the Rocky Mountains. abstract_id: PUBMED:30637553 Cell cycle inhibitors improve seed storability after priming treatments. Seed priming is a treatment that controls seed water content to partially activate germination processes such as metabolism but prevents full germination of the seeds. The treatment is well known to enhance seed performance, including germination, but sometimes reduces seed storability or longevity as a side effect. Toward developing a novel priming technique that can maintain seed longevity for a longer time period, chemicals that suppress seed deterioration under a controlled condition were screened from 80 known biologically active compounds contained in the RIKEN NPDepo authentic library, using Arabidopsis thaliana seeds. Seeds primed with mimosine, a cell cycle inhibitor, retained a higher survival rate after a controlled deterioration treatment compared to seeds primed without the chemical. In addition, other cell cycle inhibitors such as aphidicolin, hydroxyurea and oryzalin had similar effects on seed storability after priming. Our results suggest that progression of the cell cycle during priming is an important checkpoint that determines the storability of seeds after the treatment. abstract_id: PUBMED:27620182 Disentangling plasticity of serotiny, a key adaptive trait in a Mediterranean conifer.
Premise Of The Study: Serotiny, the maintenance of ripe seeds in closed fruits or cones until fire causes dehiscence, is a key adaptive trait of plants in fire-prone ecosystems, but knowledge of phenotypic plasticity for cone retention in woody plants is extremely scarce. On the basis of published literature and our field observations, we hypothesized that increased aridity might decrease the aerial seed bank as a plastic response, not necessarily adaptive. Methods: We used a Pinus halepensis common garden replicated in three contrasted sites (mild, cold, and dry) to separate population differentiation from phenotypic plasticity of cone serotiny and canopy cone bank (CCB). Differences in growth among trees of the same provenance allowed us to include size effect as a proxy of ontogenetic age for the same chronological age of the trees. Key Results: Tree size had a strong negative effect on serotiny, but serotiny degree differed among trial sites even after accounting for size effects. As hypothesized, serotiny was lower at the harsh (dry and cold) sites compared with the mild site. Genetic variation for size-dependent cone serotiny and significant population × site interaction were confirmed, the latter implying different plasticity of serotiny among populations. Population differentiation for CCB showed an ecotypic trend, with positive correlation with temperature oscillation (continentality) and negative correlation with summer rainfall. Conclusions: Growth-limiting environments exacerbated the precocious release of seeds, contrary to the ecotypic trend found for the aerial cone bank, suggesting a counter-gradient plasticity. This plastic response is potentially maladaptive under a scenario of frequent wildfires. abstract_id: PUBMED:30484944 Evolutionary history of fire-stimulated resprouting, flowering, seed release and germination. Fire has shaped the evolution of many plant traits in fire-prone environments: fire-resistant tissues with heat-insulated meristems, post-fire resprouting or fire-killed but regenerating from stored seeds, fire-stimulated flowering, release of on-plant-stored seeds, and germination of soil-stored seeds. Flowering, seed release and germination fit into three categories of response to intensifying fire: fire not required, weakly fire-adapted or strongly fire-adapted. Resprouting also has three categories but survival is always reduced by increasing fire intensity. We collated 286 records for 20 angiosperm and two gymnosperm families and 50 trait assignments to dated phylogenies. We placed these into three fire-adapted trait types: those associated with the origin of their clade and the onset of fire-proneness [primary diversification, contributing 20% of speciation events over the last 120 million years (My)], those originating much later coincident with a change in the fire regime (secondary diversification, 30%), and those conserved in the daughter lineage as already adapted to the fire regime (stabilisation, 50%). All four fire-response types could be traced to >100 My ago (Mya), with pyrogenic flowering slightly younger because of its dependence on resprouting. There was no evidence that resprouting was always an older trait than either seed storage or non-sprouting throughout this period, with either/both ancestral or derived in different clades and times. Fire-adapted traits evolved slowly in the Cretaceous, 120-65 Mya, and rapidly but fitfully in the Cenozoic, 65-0 Mya, peaking over the last 20 My.
The four trait types climaxed at different times, with the peak in resprouter speciation over the last 5 My attributable to fluctuating growing conditions and increasing savanna grasslands unsuitable for non-sprouters. All experienced a trough in the 40-30-Mya period following a reduction in world temperatures and oxygen levels and expected reduced fire activity. Thick bark and serotiny arose in the Mid-Cretaceous among extant Pinaceae. Heat-stimulated germination of hard seeds is ancestral in the 103-My-old Fabales. Smoke-(karrikin)-stimulated germination of non-hard seeds is even older, and includes the 101-My-old Restionaceae-Anarthriaceae. A smoke/karrikin response is detectable in some fire-free lineages that prove to have a fire-prone ancestry. Among clades that are predominantly fire-prone, absence of fire-related traits is the advanced condition, associated either with increased fire frequency (loss of serotiny and soil storage), or migration to fire-free habitats (loss of thick bark, pyrogenic flowering, serotiny or soil storage). Protea (Africa) and Hakea (Australia) illustrate the importance of stabilisation processes between resprouting/non-sprouting in accounting for speciation events over the last 20 My and highlight the frequent interchange possible between these two traits. Apart from Pinus, most ancestral trait reconstruction relative to fire has been conducted on predominantly Southern Hemisphere clades and this needs to be redressed. Despite these limitations, it is clear that fire has had a profound effect on fire-related trait evolution worldwide, and set the platform for subsequent evolution of many non-fire-related traits. Genetics of the triggering mechanisms remain poorly understood, except the karrikin system for smoke-stimulated germination. We exhort biologists to include fire-proneness and fire-related traits in their thinking on possible factors controlling the evolution of plants. abstract_id: PUBMED:26812088 Seed priming to alleviate salinity stress in germinating seeds. Salinity is one of the major abiotic stresses that affect crop production in arid and semiarid areas. Seed germination and seedling growth are the stages most sensitive to salinity. Salt stress causes adverse physiological and biochemical changes in germinating seeds. It can affect seed germination and stand establishment through osmotic stress, ion-specific effects and oxidative stress. Salinity delays or prevents seed germination through various factors, such as reduced water availability, changes in the mobilization of stored reserves, and effects on the structural organization of proteins. Various techniques can improve emergence and stand establishment under salt conditions. One of the most frequently utilized is seed priming. The process of seed priming involves prior exposure to an abiotic stress, making a seed more resistant to future exposure. Seed priming stimulates the pre-germination metabolic processes and makes the seed ready for radicle protrusion. It increases antioxidant system activity and the repair of membranes. These changes promote seed vigor during germination and emergence under salinity stress. The aim of this paper is to review the recent literature on the response of plants to seed priming under salinity stress. The mechanism of the effect of salinity on seed germination is discussed and the seed priming process is summarized. Physiological, biochemical and molecular changes induced by priming that lead to seed enhancement are covered.
Plants' responses to some priming agents under salinity stress are reported based on the best available data. For a great number of crops, little information exists and further research is needed. abstract_id: PUBMED:33498531 Nanotechnology Potential in Seed Priming for Sustainable Agriculture. Our agriculture is threatened by climate change and the depletion of resources and biodiversity. A new agricultural revolution is needed in order to increase the production of crops and ensure the quality and safety of food, in a sustainable way. Nanotechnology can contribute to the sustainability of agriculture. Seed nano-priming is an efficient process that can change seed metabolism and signaling pathways, affecting not only germination and seedling establishment but also the entire plant lifecycle. Studies have shown various benefits of using seed nano-priming, such as improved plant growth and development, increased productivity, and a better nutritional quality of food. Nano-priming modulates biochemical pathways and the balance between reactive oxygen species and plant growth hormones, promoting stress and disease resistance and thereby reducing the need for pesticides and fertilizers. The present review provides an overview of advances in the field, showing the challenges and possibilities concerning the use of nanotechnology in seed nano-priming, as a contribution to sustainable agricultural practices. abstract_id: PUBMED:38409565 Seed priming with gas plasma-activated water in Ethiopia's "orphan" crop tef (Eragrostis tef). Main Conclusion: Seed priming with gas plasma-activated water results in an increased ageing resilience in Eragrostis tef grains compared to a conventional hydropriming protocol. Tef (Eragrostis tef) is a cereal grass and a major staple crop of Ethiopia and Eritrea. Despite its significant importance in terms of production, consumption, and cash crop value, tef has been understudied and its productivity is low. In this study, tef grains underwent different priming treatments to enhance seed vigour and seedling performance. A conventional hydropriming protocol and a novel additive priming technology with gas plasma-activated water (GPAW) were used, and tef grains were then subjected to germination performance assays and accelerated ageing. Tef priming increases the germination speed and vigour of the grains. Priming with GPAW retained the seed storage potential after ageing, presenting an innovative, environmentally friendly seed technology with the prospect of addressing variable weather conditions and ultimately food insecurity. Seed technology opens new possibilities to increase the productivity of tef crop farming to achieve a secure and resilient tef food system and economic growth in Ethiopia by sustainable intensification of agriculture beyond breeding. abstract_id: PUBMED:36780009 Molecular dynamics of seed priming at the crossroads between basic and applied research. Key Message: The potential of seed priming is still not fully exploited. Our limited knowledge of the molecular dynamics of seed pre-germinative metabolism is the main hindrance to more effective new-generation techniques. Climate change and other recent global crises are disrupting food security. To cope with the current demand for increased food, feed, and biofuel production, while preserving sustainability, continuous technological innovation should be provided to the agri-food sector.
Seed priming, a pre-sowing technique used to increase seed vigor, has become a valuable tool due to its potential to enhance germination and stress resilience under changing environments. Successful priming protocols result from the ability to properly act on the seed pre-germinative metabolism and stimulate events that are crucial for seed quality. However, the technique still requires constant optimization, and researchers are committed to addressing some key open questions to overcome such drawbacks. In this review, an update of the current scientific and technical knowledge related to seed priming is provided. The rehydration-dehydration cycle associated with priming treatments can be described in terms of metabolic pathways that are triggered, modulated, or turned off, depending on the seed physiological stage. Understanding the ways seed priming affects, either positively or negatively, such metabolic pathways and impacts gene expression and protein/metabolite accumulation/depletion represents an essential step toward the identification of novel seed quality hallmarks. The need to expand the basic knowledge on the molecular mechanisms ruling the seed response to priming is underlined along with the strong potential of applied research on primed seeds as a source of seed quality hallmarks. This route will hasten the implementation of seed priming techniques needed to support sustainable agriculture systems. Answer: Yes, retained seed priming does appear to drive the evolution of serotiny in drylands. The hypothesis that serotiny evolves because it provides a site protected from predators in which seeds can become primed is supported by research on the cactus Mammillaria hernandezii. In this study, it was found that hydration-dehydration cycles induced by the desert environment can prime seeds, enhancing their germination. The research showed that seeds retained for one year on the parent plant had a germination rate more than double that of seeds not primed or that remained buried for one year. The higher fitness of serotinous plants whose seeds underwent priming compared to those whose seeds were in the soil seed bank or that did not experience priming suggests that the protection of seeds during priming promotes the evolution of serotiny. Additionally, the rapid germination of primed seeds in response to brief rainfall events is typically disadvantageous, but serotinous species counteract this cost by cueing dispersal with heavy precipitation (PUBMED:23345416). This evolutionary advantage of serotiny is further supported by studies in other species, such as pines, where populations living under high crown-fire recurrence regimes had a higher level of serotiny than those where crown-fires were rare (PUBMED:24222682). Moreover, enhanced seed defenses in lodgepole pine potentially relax selection by seed predators against serotiny, suggesting that the evolution of serotiny is also influenced by the interaction between fire frequency and predation (PUBMED:32607207). In summary, the evidence suggests that retained seed priming, which enhances germination and seedling vigor, is a significant factor driving the evolution of serotiny in drylands, as it provides a protected site for seeds to become primed and improves the fitness of serotinous plants in these challenging environments.
Instruction: Does acellular dermal matrix really improve aesthetic outcome in tissue expander/implant-based breast reconstruction? Abstracts: abstract_id: PUBMED:33633977 Comparison of one-stage direct-to-implant with acellular dermal matrix and two-stage immediate implant-based breast reconstruction-a cohort study. Background: The use of acellular dermal matrix (ADM) in one-stage immediate implant-based breast reconstruction (BR) may offer advantages over the two-stage expander-to-implant technique, but the literature shows conflicting results. The aim of the present study was to compare these two techniques for immediate implant-based BR regarding postoperative complications, aesthetic correction procedures and aesthetic outcome. Methods: The study was designed as an observational cohort study with 44 participants admitted for immediate implant-based BR at the Department of Plastic Surgery, Aarhus University Hospital, Denmark. 21 patients underwent BR with a one-stage direct-to-implant technique using ADM and 23 patients underwent BR with a two-stage expander-to-implant technique. Follow-up time was 2 years. Results: The risk of implant loss was equal between groups (one-stage group 16%, two-stage group 17%), whereas the risk of implant exchange (but not loss of BR) was 13% in the one-stage group compared to 7% in the two-stage group. The risk of at least one major complication was equal between groups (28% and 24%), but the risk of at least one minor complication was significantly higher in the two-stage group (41%) compared to the one-stage group (3%). Aesthetic corrections were equally frequent in the two treatment groups (one-stage group 1.8, two-stage group 1.5). Patient- and investigator-assessed aesthetic outcome was very high in both groups, as was the degree of symmetry between breasts. No capsular contracture of Baker grade 3 or 4 was observed. Conclusions: The present study design sets limitations for drawing wide conclusions. This study did not reveal any significant differences between the two breast reconstructive techniques besides a higher risk of minor complications in the two-stage group, which did not, however, lead to a higher risk of implant loss. With equally high satisfaction with the aesthetic result and no significant difference in the number of aesthetic corrections between the two groups, we suggest that the one-stage approach using ADM may be feasible and allows the patient to achieve an implant-based BR with a minimum of surgeries and outpatient visits. The study was registered in ClinicalTrials.gov (NCT04209010). abstract_id: PUBMED:33172826 Development of an evidence-based approach to the use of acellular dermal matrix in immediate expander-implant-based breast reconstruction. Background: Although acellular dermal matrix (ADM) is widely used in expander-implant-based breast reconstructions, previous analyses have been unable to demonstrate improvements in patient-reported outcomes (PROs) with this approach over non-ADM procedures. This study aims to develop a more selective, evidence-based approach to the use of ADM in expander-implant-based breast reconstruction by identifying patient subgroups in which ADM improved clinical outcomes and PROs. Study Design: The Mastectomy Reconstruction Outcomes Consortium Study prospectively evaluated immediate expander-implant reconstructions at 11 centers from 2012 to 2015.
Complications (any/overall and major) and PROs (satisfaction, physical, psychosocial, and sexual well-being) were assessed two years postoperatively using medical records and the BREAST-Q, respectively. Using mixed models accounting for centers and with interaction terms, we analyzed for differential ADM effects across various clinical subgroups, including age, body mass index, radiation timing, and chemotherapy. Results: Expander-implant-based breast reconstruction was performed in 1451 patients, 738 with and 713 without ADM. Major complication risk was higher in ADM users vs. nonusers (22.9% vs. 16.4%, p = 0.04). Major complication risk with ADM increased with higher BMI (BMI = 30, OR = 1.70; BMI = 35, OR = 2.29; interaction p = 0.02). No significant ADM effects were observed for breast satisfaction or psychosocial, sexual, and physical well-being in any subgroup. Conclusion: In immediate expander-implant-based breast reconstruction, ADM was associated with a greater risk of major complications, particularly in high-BMI patients. We were unable to identify patient subgroups where ADM was associated with significant improvements in PROs. Given these findings and the financial costs of ADM, a more critical approach to the use of ADM in expander-implant reconstruction may be warranted. abstract_id: PUBMED:25894022 Does acellular dermal matrix really improve aesthetic outcome in tissue expander/implant-based breast reconstruction? Background: The expectation for improved results by women undergoing postmastectomy reconstruction has steadily risen. A majority of these operations are tissue expander/implant-based breast reconstructions. Acellular dermal matrix (ADM) offers numerous advantages in these procedures. Thus far, the evidence to justify improved aesthetic outcome has been based solely on surgeon opinion. The purpose of this study was to assess aesthetic outcome following ADM use in tissue expander/implant-based breast reconstruction by a panel of blinded plastic surgeons. Methods: Mean aesthetic results of patients who underwent tissue expander/implant-based breast reconstruction with (n = 18) or without ADM (n = 20) were assessed with objective grading of preoperative and postoperative photographs by five independent blinded plastic surgeons. Absolute observed agreement as well as weighted Fleiss kappa (κ) test statistics were calculated to assess inter-rater variability. Results: When ADM was incorporated, the overall aesthetic score was improved by an average of 12.1%. In addition, subscale analyses revealed improvements in breast contour (35.2%), implant placement (20.7%), lower pole projection (16.7%), and inframammary fold definition (13.8%). Contour (p = 0.039), implant placement (p = 0.021), and overall aesthetic score (p = 0.022) reached statistical significance. Inter-rater reliability showed mostly moderate agreement. Conclusions: Mean aesthetic scores were higher in the ADM-assisted breast reconstruction cohort, including the total aesthetic score, which was statistically significant. Aesthetic outcome alone may justify the added expense of incorporating biologic mesh. Moreover, ADM has other benefits which may render it cost-effective. Larger prospective studies are needed to provide plastic surgeons with more definitive guidelines for ADM use. Level of Evidence: IV.
abstract_id: PUBMED:25221385 DermACELL: a novel and biocompatible acellular dermal matrix in tissue expander and implant-based breast reconstruction. Background: Acellular dermal matrices present a new alternative for supporting expanders and implants in breast reconstruction in breast cancer patients following mastectomy. However, some studies have suggested that acellular dermal matrix may increase complication rates in certain clinical settings. DermACELL acellular dermal matrix uses advanced processing intended to decrease bio-intolerance and complications. Methods: Ten consecutive patients who presented for breast reconstruction and were candidates for tissue expanders underwent the procedure with the use of an acellular dermal matrix. The patients underwent postoperative expansion/adjuvant cancer therapy, then tissue expander exchange for permanent silicone breast prostheses. Patients were followed through the postoperative course to assess complication outcomes. Histologic evaluation of host integration into the dermal matrix was also performed. Results: Of the ten patients, eight completed reconstruction while two patients failed reconstruction. The failures were related to chronic seromas and infection. Histological analysis confirmed rapid integration of mesenchymal cells into the matrix compared to other acellular dermal matrices. Conclusions: Based on our observations, DermACELL is an appropriate adjunct to reconstruction with expanders. Histological analysis of vascularization and recellularization supports the ready incorporation of DermACELL into host tissue. Level of Evidence: Level IV, therapeutic study. abstract_id: PUBMED:24508194 The cost effectiveness of acellular dermal matrix in expander-implant immediate breast reconstruction. Background: Expander-implant breast reconstruction is often supplemented with acellular dermal matrix (ADM). The use of acellular dermal matrix has allowed for faster, less painful expansions and improved aesthetics, but with increased cost. Our goal was to provide the first cost-utility analysis of using acellular dermal matrix in two-stage, expander-implant immediate breast reconstruction following mastectomy. Methods: A comprehensive literature review was conducted to identify complication rates for two-stage, expander-implant immediate breast reconstruction with and without acellular dermal matrix. The probabilities of the most common complications were combined with Medicare Current Procedural Terminology reimbursement codes and expert utility estimates to fit into a decision model. The decision model evaluated the cost effectiveness of acellular dermal matrix relative to reconstructions without it. Retail costs for ADM were derived from the LifeCell 2012 company catalogue for AlloDerm. Results: The overall complication rates were 30% and 34.5% with and without ADM, respectively. The decision model revealed a baseline cost increase of $361.96 when acellular dermal matrix is used. The increase in Quality-Adjusted Life Years (QALYs) is 1.37 in the population with acellular dermal matrix. This yields a cost-effective incremental cost-utility ratio (ICUR) of $264.20/QALY. Univariate sensitivity analysis confirmed that using acellular dermal matrix is cost-effective even when using retail costs for unilateral and bilateral reconstructions.
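The ICUR just quoted follows directly from the two incremental quantities reported above; as a worked check of the arithmetic (standard cost-utility algebra, not an additional result from the study):

\[ \mathrm{ICUR} = \frac{\Delta \text{Cost}}{\Delta \text{QALY}} = \frac{\$361.96}{1.37\ \text{QALYs}} \approx \$264.20\ \text{per QALY gained}, \]

which sits far below commonly used willingness-to-pay thresholds (on the order of \$50{,}000 per QALY), explaining the cost-effectiveness conclusion that follows.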
Conclusions: Our study shows that, despite an increased cost, acellular dermal matrix is a cost-effective technology for patients undergoing two-stage, expander-implant immediate breast reconstruction due to its increased utility in successful procedures. abstract_id: PUBMED:38160591 Outcomes after tissue expander exchange to implant in two-stage prepectoral breast reconstruction with and without acellular dermal matrix: A retrospective cohort study. As prepectoral implant placement becomes widely adopted, recent studies investigating the use of acellular dermal matrix (ADM) during tissue expander placement have demonstrated no major benefit with regard to postoperative outcomes. We sought to evaluate second-stage outcomes 1 year after tissue expander exchange to implant with and without ADM. Consecutive patients who underwent prepectoral tissue expander-based breast reconstruction with and without ADM were identified. Patients were followed up for 1 year after tissue expander exchange to implant. Second-stage outcomes of interest, including implant rippling, capsular contracture, implant explantation, additional revision surgeries, and patient-reported outcomes, were collected and compared. Sixty-eight breasts in the ADM cohort and sixty-one breasts in the no ADM cohort underwent tissue expander exchange to implant. Second-stage outcomes were similar between the ADM and no ADM cohorts, with no statistically significant differences in the incidence of implant rippling (24.6% vs. 12.1%, p = 0.08), capsular contracture (4.5% vs. 3.3%, p = 1.00), or explantation (6.6% vs. 1.7%, p = 0.67). BREAST-Q scores were similar between the two cohorts with the exception of physical well-being and satisfaction with implant rippling, which improved in the no ADM cohort (p = 0.04). Our study reports no major benefit for the inclusion of ADM with respect to implant rippling, capsular contracture, explantation, need for additional revision surgeries, and patient-reported satisfaction in prepectoral second-stage implant-based breast reconstruction. abstract_id: PUBMED:24374398 Aesthetic outcomes of acellular dermal matrix in tissue expander/implant-based breast reconstruction. Introduction: Tissue expander and implant-based breast reconstruction after mastectomy is the most common method of breast reconstruction. Modifications of the traditional total submuscular reconstruction (TSR) have been made using acellular dermal matrix (ADM) to create an inferolateral sling and a more natural implant pocket for superior aesthetic results. The objective of this study was to assess aesthetic outcomes when using ADM in breast reconstruction. Methods: A retrospective chart review identified all patients who underwent implant-based breast reconstruction from 2005 to 2009 at our institution. Demographic information, complications, reoperations, and aesthetic outcome data were collected for all patients meeting inclusion criteria related to adequate follow-up and postoperative photographs. Five aesthetic outcomes were evaluated for all study patients by 18 blinded evaluators using postoperative photographs. Outcomes were scored on a scale of 1 to 5, with 5 representing the best possible aesthetic score. Results: A total of 122 patients underwent 183 tissue expander-based reconstructions (ADM, n = 58; TSR, n = 125). The infection rate in patients with ADM was 16.2% compared to 5.9% in TSR patients, but this was not statistically significant (P = 0.09).
Capsular contracture was more common in TSR patients (23.5%) compared to those with ADM (8.1%), P = 0.048. Aesthetic scores from the attending plastic surgeons were as follows: natural contour (ADM, 3.36; TSR, 3.02; P = 0.0001), symmetry of shape (ADM, 3.57; TSR, 3.27; P = 0.005), symmetry of size (ADM, 3.68; TSR, 3.42; P = 0.002), position on chest wall (ADM, 3.75; TSR, 3.45; P = 0.004), and overall aesthetic appearance (ADM, 3.56; TSR, 3.20; P = 0.0001). Conclusions: For all 5 aesthetic parameters evaluated, the ADM group scored significantly higher than the TSR group by 18 blinded evaluators. These consistent findings suggest that the use of ADM does confer a significant advantage in aesthetic outcomes for breast reconstruction. This is likely at the cost of a higher infection rate when using ADM; however, that may be offset by the advantage of a lower rate of capsular contracture in patients with ADM. abstract_id: PUBMED:36523919 A Nationwide Analysis Evaluating the Safety of Using Acellular Dermal Matrix with Tissue Expander-Based Breast Reconstruction. Background: In March 2021, a United States Food and Drug Administration (FDA) safety communication cautioned against the use of acellular dermal matrix (ADM) products in breast reconstruction and reiterated that the FDA does not approve ADM use in breast surgery. This study aims to assess the safety of ADM use in breast reconstruction. Methods: Women who underwent ADM- and non-ADM-assisted tissue expander (TE)-based breast reconstruction were identified using the National Surgical Quality Improvement Program database (2012-2019). Trends of ADM use over time and 30-day outcomes of surgical site infection (SSI), dehiscence, and unplanned reoperation were assessed. Results: Of the 49,049 TE-based breast reconstruction cases, 42.4% were ADM assisted and 57.6% non-ADM assisted. From 2012 to 2019, the use of ADM increased from 26.1 to 55.6% (relative risk [RR] = 1.10; p < 0.01). Higher rates of SSI (3.9 vs. 3.4%; p = 0.003) and reoperation (7.4 vs. 6.0%; p < 0.001) were seen in the ADM cohort. There was no significant difference in dehiscence rates (0.7 vs. 0.7%; p = 0.73). The most common reoperation within 30 days for the ADM group (17.6%) was removal of the TE without insertion of an implant (Current Procedural Terminology: 11971). ADM-assisted breast reconstruction was associated with an increased relative risk of SSI of 10% (RR = 1.10, confidence interval [CI]: 1.01-1.21; p = 0.03) and of reoperation of 15% (RR = 1.15, CI: 1.08-1.23; p < 0.001). Conclusions: ADM-assisted breast reconstruction more than doubled from 2012 to 2019. There are statistically higher rates of SSI (by 0.5%) and reoperation (by 1.4%) with ADM use in TE-based breast reconstruction, suggesting that reconstruction without ADM is safe when comparing immediate postoperative outcomes. abstract_id: PUBMED:29320921 Benefits and risks with acellular dermal matrix (ADM) and mesh support in immediate breast reconstruction: a systematic review and meta-analysis. In modern implant-based immediate breast reconstruction, it has become common to use biological acellular dermal and synthetic matrices in combination with a tissue expander or an implant. The aim of this systematic review was to examine differences in recurrence of cancer, impact on oncological treatment, health-related quality of life, complications and aesthetic outcome between matrix and no matrix in immediate breast reconstruction.
Systematic searches, data extraction and assessment of methodological quality were performed according to predetermined criteria. Fifty-one studies were eligible and included in the review. The certainty of evidence for overall complication rate and implant loss is low (GRADE ⊕⊕□□). The certainty of evidence for delay of adjuvant treatment, implant loss, infection, capsular contracture and aesthetic outcome is very low (GRADE ⊕□□□). No study reported data on recurrence of cancer or health-related quality of life. In conclusion, there is a lack of high-quality studies that compare the use of matrix with no matrix in immediate breast reconstruction. Specifically, there are no data on risk of recurrence of cancer, delay of adjuvant treatment and health-related quality of life (HRQoL). In addition, there is a risk of bias in many studies. It is often unclear what complications have been included and how they have been diagnosed, and how and when capsular contracture and aesthetic outcome have been evaluated. Controlled trials that further analyse the impact of radiotherapy, type of matrix and type of procedure (one or two stages) are necessary. abstract_id: PUBMED:31632205 Evidence for the Use of Acellular Dermal Matrix in Implant-Based Breast Reconstruction. Acellular dermal matrices (ADMs) are tissue grafts that have been specially processed to remove all cellular components. These machined biological scaffolds have become popular in a variety of surgical settings due to their rapid incorporation into living tissue. As ADMs are highly malleable and cause minimal inflammation, they have come to serve as a useful tool in implant-based breast reconstruction procedures. The major benefits of using an ADM in this setting include superior initial breast contouring, decreased risk of capsular contracture after implant insertion, and consistent sustained positioning of the reconstructed breast. Despite these advantages, these tissue grafts are foreign to the host, and postoperative complications following ADM insertion, including infection and seroma, have been well documented. When considering using ADMs in this setting, it is important to first consider patient-specific factors that could preclude their use, such as low body mass index, small breasts, or a history of radiation exposure to the breast tissue. ADM grafts are also expensive, which may present another barrier to their use. Review of the literature ultimately suggests a continued role for ADMs in implant-based breast reconstruction, and continued research in this field is warranted. Answer: Yes, the use of acellular dermal matrix (ADM) in tissue expander/implant-based breast reconstruction has been associated with improved aesthetic outcomes in some studies. For instance, one study found that when ADM was incorporated, the overall aesthetic score improved by an average of 12.1%, with statistically significant improvements in breast contour, implant placement, and overall aesthetic score (PUBMED:25894022). Another study reported that for all five aesthetic parameters evaluated, the ADM group scored significantly higher than the total submuscular reconstruction group, suggesting that the use of ADM does confer a significant advantage in aesthetic outcomes for breast reconstruction (PUBMED:24374398). However, the evidence is not entirely consistent across all studies. Some studies have been unable to demonstrate improvements in patient-reported outcomes (PROs) with ADM use over non-ADM procedures (PUBMED:33172826).
Furthermore, a systematic review and meta-analysis indicated that there is a lack of high-quality studies comparing the use of ADM with no ADM in immediate breast reconstruction, with no data on recurrence of cancer or health-related quality of life, and a risk of bias in many studies (PUBMED:29320921). In conclusion, while some studies suggest that ADM can improve aesthetic outcomes in tissue expander/implant-based breast reconstruction, the evidence is not conclusive, and more high-quality research is needed to provide definitive guidelines for ADM use (PUBMED:31632205). Additionally, the potential benefits must be weighed against the risks of complications and the financial costs associated with ADM (PUBMED:33172826; PUBMED:36523919).
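As a footnote to the MROC findings cited above (PUBMED:33172826), the two quoted odds ratios are mutually consistent with a standard log-linear interaction term; this back-calculation is illustrative and not a statistic reported by the study:

\[ \beta_{\text{int}} \approx \frac{\ln(2.29) - \ln(1.70)}{35 - 30} \approx 0.06 \text{ per BMI unit}, \]

so under that model each additional BMI unit multiplies the ADM-associated odds of a major complication by roughly \( e^{0.06} \approx 1.06 \).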
Instruction: Does amplitude-integrated electroencephalogram background pattern correlate with cerebral injury in neonates with hypoxic-ischaemic encephalopathy? Abstracts: abstract_id: PUBMED:25123158 Does amplitude-integrated electroencephalogram background pattern correlate with cerebral injury in neonates with hypoxic-ischaemic encephalopathy? Aims: To determine the correlation between amplitude-integrated electroencephalogram (aEEG) background pattern and cerebral magnetic resonance imaging (MRI) in infants with hypoxic-ischaemic encephalopathy (HIE) and to examine whether the correlation changes with therapeutic hypothermia. Methods: We included 38 term-born infants with HIE, of whom 17 were cooled. All were continuously monitored with aEEG. Background pattern was scored at the beginning and the end of the recording. Cerebral MRI was obtained on median day 5 (2-11 days). Abnormalities were classified using a predefined scoring system for basal ganglia, watershed and overall injury, and then grouped into mild-moderate and severe. Results: Abnormal aEEG background pattern correlated with more severe cerebral injury on MRI in the non-cooled infants (P < 0.01). In addition, cooled infants had less severe cerebral injury than non-cooled infants, in particular on T2-weighted images (watershed P = 0.04 and total injury score P = 0.07). Conclusions: Abnormal aEEG background pattern is predictive of abnormal MRI, but therapeutic hypothermia seems to reduce this association. Thus, when cooling is applied in a clinical setting, the predictive value of aEEG may be limited. abstract_id: PUBMED:32039117 Simultaneous Near-Infrared Spectroscopy (NIRS) and Amplitude-Integrated Electroencephalography (aEEG): Dual Use of Brain Monitoring Techniques Improves Our Understanding of Physiology. Continuous brain monitoring tools are increasingly being used in the neonatal intensive care unit (NICU) to assess brain function and cerebral oxygenation in neonates at high risk for brain injury. Near-infrared spectroscopy (NIRS) is useful in critically ill neonates as a trend monitor to evaluate the balance between tissue oxygen delivery and consumption, providing cerebral and somatic oximetry values and allowing earlier identification of abnormalities in hemodynamics and cerebral perfusion. Amplitude-integrated electroencephalography (aEEG) is a method for continuous monitoring of cerebral function at the bedside. Simultaneous use of both monitoring modalities may improve the understanding of alterations in hemodynamics and risk of cerebral injury. Several studies have described correlations between aEEG and NIRS monitoring, especially in infants with hypoxic-ischemic encephalopathy (HIE), but few describe the combined use of both monitoring techniques in a wider range of clinical scenarios. We review the use of NIRS and aEEG in neonates and describe four cases where abnormal NIRS values were immediately followed by changes in brain activity as seen on aEEG, allowing the impact of a hemodynamic disturbance on the brain to be correlated with the changes in the aEEG background pattern. These four clinical scenarios demonstrate how simultaneous neuromonitoring with aEEG and NIRS provides important clinical information. We speculate that routine use of these combined monitoring modalities may become the future standard for neonatal neuromonitoring. abstract_id: PUBMED:19963989 Hypoxic-ischemic brain injury in neonatal piglets with different histological outcomes: An amplitude-integrated EEG study.
To test the reliability of the amplitude-integrated electroencephalogram (aEEG) in cerebral hypoxic ischemia (HI), 12 neonatal piglets subjected to different levels of HI were divided into three groups based on the histological outcomes obtained 4 days after the experiment. Results show that, concomitant with the increased severity of brain injury, the upper and lower margins of the aEEG decrease significantly (p < 0.05) during the early recovery period after HI (about 2 hours post-resuscitation). We conclude that the aEEG method reliably reflects hypoxic-ischemic cerebral injury and constitutes a valuable monitoring tool in the neonatal intensive care unit (NICU). abstract_id: PUBMED:16818548 Use of 2-channel bedside electroencephalogram monitoring in term-born encephalopathic infants related to cerebral injury defined by magnetic resonance imaging. Objective: Single-channel amplitude-integrated electroencephalography has been shown to be predictive of neurodevelopmental outcome in term infants with hypoxic-ischemic encephalopathy. We describe the relationship of quantifiable electroencephalogram (EEG) measures, obtained using a 2-channel digital bedside EEG monitor from term newborn infants with encephalopathy and/or seizures, to cerebral injury defined qualitatively by MRI. Methods: Median values of minimum, mean, and maximum EEG amplitude were obtained from term-born encephalopathic infants during a 2-hour seizure-free period obtained within 72 hours of admission. Infants underwent MRI with images qualitatively scored for abnormalities of cortex, white matter, deep nuclear gray matter, and posterior limb of the internal capsule. Eighty-six infants had EEG measures related to qualitative MRI outcomes. Results: The most common diagnosis was hypoxic-ischemic encephalopathy (n = 40). For all infants there was a negative relationship between EEG amplitude measures and MRI abnormality scores assessed on a scale from 4 to 15, with a higher score indicating more abnormalities. This relationship was strongest for the minimum amplitude measures in both hemispheres; that is, for every unit increase in score there was a mean drop of 0.41 microV for the left cerebral hemisphere, with 35% of variance explained. This relationship persisted on subgroup analyses for infants with hypoxic-ischemic encephalopathy, infants with other diagnoses and infants monitored after the first 24 hours of life. Using an MRI abnormality score cutoff of 8 or worse for cerebral injury in infants with hypoxic-ischemic encephalopathy, a minimum amplitude of 4 microV showed a higher specificity (80%: left hemisphere), whereas a minimum amplitude of 6 microV showed a higher sensitivity (92%: left hemisphere). Conclusions: Bedside EEG measures in term-born encephalopathic infants are related to the severity of cerebral injury as defined by qualitative MRI. A minimum amplitude of <4 microV appears useful in predicting outcome. abstract_id: PUBMED:19163768 Cerebral hypoxic ischemia at different cerebral oxygen saturations in piglets: amplitude-integrated EEG study. Objective: The objective of the present paper was to study the amplitude-integrated EEG (aEEG) in hypoxic-ischemic (HI) animal models and to test the reliability of the aEEG method when used to monitor cerebral injuries. Method: HI animal models were constructed and classified into mild, moderate and severe cerebral oxygen saturation groups according to regional oxygen saturation (rSO2) in the brain. Then aEEG waveforms were obtained from raw EEG data using digital signal processing.
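The derivation of aEEG margins from raw EEG, referred to here only as "digital signal processing", is widely described as an asymmetric band-pass filter followed by rectification, smoothing, and a time-compressed display of the amplitude envelope. A minimal sketch of that pipeline is given below (Python with NumPy/SciPy); the function name, filter order, and window lengths are illustrative assumptions, not the published or any vendor's algorithm.

import numpy as np
from scipy.signal import butter, filtfilt

def aeeg_margins(raw_uv, fs, env_s=0.5, win_s=15.0):
    """Simplified aEEG-style lower/upper margins from one EEG channel.

    raw_uv: raw EEG in microvolts; fs: sampling rate in Hz;
    env_s: envelope smoothing window (s); win_s: display epoch (s).
    """
    # 1) Band-pass 2-15 Hz: suppresses slow drift, muscle and mains noise,
    #    approximating the cerebral-function-monitor passband.
    b, a = butter(4, [2.0, 15.0], btype="bandpass", fs=fs)
    x = filtfilt(b, a, np.asarray(raw_uv, dtype=float))
    # 2) Rectify and smooth (moving average over env_s seconds) to obtain
    #    an amplitude envelope of the filtered signal.
    k = max(1, int(env_s * fs))
    env = np.convolve(np.abs(x), np.ones(k) / k, mode="same")
    # 3) Time compression: within each display epoch of win_s seconds,
    #    report the envelope minimum and maximum, i.e. the lower and upper
    #    aEEG margins that the piglet studies track after hypoxia-ischemia.
    n = int(win_s * fs)
    epochs = env[: (len(env) // n) * n].reshape(-1, n)
    return epochs.min(axis=1), epochs.max(axis=1)

On a bedside monitor these margins would additionally be drawn on a semi-logarithmic scale (linear up to 10 microV, logarithmic above), which is a display convention rather than part of the computation.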
Result: aEEG reflected cerebral function consistently and accurately at different cerebral rSO2 levels. aEEG waveforms changed little in the mild HI group; they dropped but recovered in the moderate HI group; and they remained very low after HI in the severe cases. Conclusion: The aEEG method can monitor cerebral function directly, accurately and consistently. It is a reliable tool for continuously evaluating cerebral injuries. abstract_id: PUBMED:4003365 Focal motor seizures heralding stroke in full-term neonates. We describe the clinical syndrome, medical management, etiology, and neurologic outcome of stroke diagnosed by computed tomographic scan in 11 full-term neonates encountered during a two-year period. Neonatal stroke is relatively common and may appear in the setting of diverse cerebrovascular disorders such as hypoxic-ischemic encephalopathy, polycythemia, acute severe hypertension, and embolization. Repetitive, persistently unifocal motor seizures heralded localized cerebral injuries in eight infants. The majority of patients did not display any other lateralized clinical neurologic signs. An electroencephalogram revealed a focal or lateralized functional central nervous system abnormality in ten cases. All of the initial computed tomographic scans were focally abnormal. However, cranial ultrasound examinations were insensitive to stroke in nine patients. Medical management included careful cardiorespiratory support, correction of coexisting metabolic or system abnormalities, and aggressive administration of anticonvulsants to promptly eliminate seizures. Limited follow-up suggests that many affected infants may enjoy favorable outcomes. abstract_id: PUBMED:37948366 Unequal Cerebral Magnetic Resonance Imaging Changes in Perinatal Hypoxic Ischemic Injury of Term Neonates. Background: Perinatal hypoxic ischemic injury (HII) has a higher prevalence in the developing world. One of the primary grounds for suggesting that an imaging pattern reflects a global insult to the brain is that the injury is bilateral and symmetric in distribution. In the context of HII in term neonates, this is either a bilateral symmetric (a) peripheral/watershed (WS) injury or (b) bilateral symmetric basal-ganglia-thalamus (BGT) pattern, often with peri-Rolandic and hippocampal injury. Unilateral, asymmetric, or unequal distribution of injury may therefore be misdiagnosed as perinatal arterial ischemic stroke. Objectives: We aimed to determine the prevalence of unequal cerebral injury in HII, identify patterns, and determine their relationship with the existing classification of HII. Materials And Methods: Review of brain magnetic resonance imaging from a database of children with HII. Reports with any unequal pattern of injury were included and further classified as a unilateral, bilateral asymmetric, or symmetric-but-unequal-degree pattern of HII. Results: A total of 1213 MRI scans in patients with a diagnosis of HII revealed 156 (13%) with unequal involvement of the hemispheres: unilateral in 2 of 1213 (0.2%) (involvement only in the WS), asymmetric in 48 of 1213 (4%) (WS in 6 [0.5%], BGT in 4 [0.3%], and combined BGT and WS in 38 [3.1%]), and bilateral symmetric but of unequal degree in 106 of 1213 (8.7%) (WS in 20 [1.6%], BGT in 17 [1.4%], and combined BGT and WS in 69 [5.7%]). Conclusions: The majority of children with cerebral palsy due to HII demonstrate a characteristic bilateral symmetric pattern of injury. In our study, 13% demonstrated an unequal pattern.
Differentiation from perinatal arterial ischemic stroke, which is mostly unilateral and typically distributed in the middle cerebral artery territory, should be possible, and recognition of the typical BGT or WS magnetic resonance imaging patterns should add confidence to the diagnosis in such scenarios. abstract_id: PUBMED:19896902 Assessment of brain tissue injury after moderate hypothermia in neonates with hypoxic-ischaemic encephalopathy: a nested substudy of a randomised controlled trial. Background: Moderate hypothermia in neonates with hypoxic-ischaemic encephalopathy might improve survival and neurological outcomes at up to 18 months of age, although complete neurological assessment at this age is difficult. To ascertain more precisely the effect of therapeutic hypothermia on neonatal cerebral injury, we assessed cerebral lesions on MRI scans of infants who participated in the Total Body Hypothermia for Neonatal Encephalopathy (TOBY) trial. Methods: In the TOBY trial, hypoxic-ischaemic encephalopathy was graded clinically according to the changes seen on amplitude-integrated EEG, and infants were randomly assigned to intensive care with or without cooling by central telephone randomisation. The relation between allocation to hypothermia or normothermia and cerebral lesions was assessed by logistic regression with perinatal factors as covariates, and adjusted odds ratios (ORs) were calculated. The TOBY trial is registered, number ISRCTN 89547571. Findings: 325 infants were recruited in the TOBY trial between 2002 and 2006. Images were available for analysis from 131 infants. Therapeutic hypothermia was associated with a reduction in lesions in the basal ganglia or thalamus (OR 0.36, 95% CI 0.15-0.84; p=0.02), white matter (0.30, 0.12-0.77; p=0.01), and abnormal posterior limb of the internal capsule (0.38, 0.17-0.85; p=0.02). Compared with non-cooled infants, cooled infants had fewer scans that were predictive of later neuromotor abnormalities (0.41, 0.18-0.91; p=0.03) and were more likely to have normal scans (2.81, 1.13-6.93; p=0.03). The accuracy of prediction by MRI of death or disability to 18 months of age was 0.84 (0.74-0.94) in the cooled group and 0.81 (0.71-0.91) in the non-cooled group. Interpretation: Therapeutic hypothermia decreases brain tissue injury in infants with hypoxic-ischaemic encephalopathy. The predictive value of MRI for subsequent neurological impairment is not affected by therapeutic hypothermia. Funding: UK Medical Research Council; UK Department of Health. abstract_id: PUBMED:8192067 The role of early MR in the evaluation of the term infant with seizures. Purpose: To define the role of MR in evaluating term neonates with seizures, the most common clinical manifestation of cerebral injury in neonates. Methods: Fifteen term infants with seizures underwent MR imaging. The presence and pattern of MR findings were compared with clinical markers of perinatal distress, cause of cerebral injury, and short-term neurologic outcome. Results: Seizures were caused by hypoxic-ischemic encephalopathy in three patients, bacterial meningitis in three, and prenatal cocaine exposure in one. Nine patients had no identifiable risk factors. By MR, five patients had focal ischemic injury of the cerebral hemispheres and/or basal ganglia and brain stem. Six patients had diffuse cerebral edema: of these, five had basal ganglia edema; one had brain stem edema. One patient had superior sagittal sinus thrombosis with venous infarcts. Three patients had normal MR studies.
There was no correlation between markers of perinatal distress, risk factors for seizures, and the presence or pattern of MR findings. There was some correlation between MR findings of diffuse cerebral injury and neurologic outcome, and between MR findings of basal ganglia and brain stem abnormalities and neurologic outcome; these findings correlated with spasticity and hemiplegia at 6 to 24 months of follow-up. Conclusion: The presence or pattern of MR findings does not appear to correlate with clinical signs of perinatal distress or presumed causes of perinatal cerebral injury. Further investigation is needed to prospectively identify neonates with seizures who are at risk for significant neurologic morbidity. abstract_id: PUBMED:31893463 Glial Fibrillary Acid Protein and Cerebral Oxygenation in Neonates Undergoing Cardiac Surgery. Background: Neonates undergoing surgery for complex congenital heart disease are at risk of developmental impairment. Hypoxic-ischemic brain injury might be a contributing factor. We aimed to investigate the perioperative release of the astrocyte cytoskeleton component glial fibrillary acid protein and its relation to cerebral oxygenation. Methods: Serum glial fibrillary acid protein levels were measured before and 0, 12, 24, and 48 hours after surgery. Reference values were based on preoperative samples; concentrations above the 95th percentile were defined as elevated. Cerebral oxygenation was derived by near-infrared spectroscopy. Results: Thirty-six neonates undergoing 38 surgeries utilizing cardiopulmonary bypass were enrolled (complete data available for 35 procedures). Glial fibrillary acid protein was elevated after 18 surgeries (arterial switch: 7/12; Norwood: 5/15; others: 6/8; p = 0.144). Age at surgery was higher in cases with elevated serum levels (6 [4-7] vs. 4 [2-5] days, p = 0.009) and intraoperative cerebral oxygen saturation was lower (70 ± 10% vs. 77 ± 7%, p = 0.029). In cases with elevated postoperative glial fibrillary acid protein, preoperative cerebral oxygen saturation was lower for neonates undergoing the arterial switch operation (55 ± 9% vs. 64 ± 4%, p = 0.048) and age at surgery was higher for neonates with a Norwood procedure (7 [6-8] vs. 5 [4-6] days, p = 0.028). Conclusions: Glial fibrillary acid protein was elevated after ∼50% of neonatal cardiac surgeries and was related to cerebral oxygenation and older age at surgery. The potential value as a biomarker for cerebral injury after neonatal cardiac surgery warrants further investigation; in particular, the association with neurodevelopmental outcome needs to be determined. Answer: Yes, the amplitude-integrated electroencephalogram (aEEG) background pattern does correlate with cerebral injury in neonates with hypoxic-ischaemic encephalopathy (HIE). Studies have shown that abnormal aEEG background patterns are predictive of abnormal cerebral magnetic resonance imaging (MRI) findings in neonates with HIE. Specifically, in non-cooled infants, an abnormal aEEG background pattern correlated with more severe cerebral injury on MRI (PUBMED:25123158). Additionally, aEEG has been found to reliably reflect hypoxic-ischemic cerebral injury and is considered a valuable monitoring tool in the neonatal intensive care unit (NICU) (PUBMED:19963989). Furthermore, bedside EEG measures in term-born encephalopathic infants are related to the severity of cerebral injury as defined by qualitative MRI (PUBMED:16818548).
However, it is important to note that the predictive value of aEEG may be limited when therapeutic hypothermia is applied, as this treatment seems to reduce the association between abnormal aEEG patterns and MRI findings (PUBMED:25123158).
Instruction: Antibacterial properties of cyanoacrylate tissue adhesive: Does the polymerization reaction play a role? Abstracts: abstract_id: PUBMED:19700870 Antibacterial properties of cyanoacrylate tissue adhesive: Does the polymerization reaction play a role? Purpose: To ascertain whether the polymerization reaction contributes additionally to the antibacterial effects of two commonly used cyanoacrylate tissue adhesives. Materials And Methods: Fresh liquid ethyl-cyanoacrylate (EC) and N-butyl-cyanoacrylate (BC) adhesives were applied onto 6-mm sterile filter paper discs. In the first group, the adhesive-soaked discs were immediately placed onto confluent monolayer cultures of bacteria, allowing the polymerization reaction to proceed while in culture. In the second group, the adhesive-soaked disc was allowed to first polymerize prior to being placed onto the bacterial cultures. Four types of bacteria were studied: Staphylococcus aureus, Streptococcus pneumoniae, Escherichia coli, and Pseudomonas aeruginosa. Immediately after the discs were applied, the cultures were incubated at 35 degrees C for 24 h. Bacterial inhibitory halos were measured in the cultures at the end of the incubation period. Results: For EC, exposure of the bacteria to the cyanoacrylate polymerization reaction increased the bacterial inhibitory halos in Streptococcus pneumoniae, Staphylococcus aureus and Escherichia coli. For BC, it increased the bacterial inhibitory halos in Staphylococcus aureus and Streptococcus pneumoniae. No inhibitory halos were observed in Pseudomonas aeruginosa. The bactericidal effect was higher in actively polymerizing EC, compared to previously polymerized EC, in Staphylococcus aureus, Streptococcus pneumoniae, and Escherichia coli; however, no such differences were observed for BC. Conclusions: The polymerization reaction may also be an important factor in the antibacterial properties of EC and BC. abstract_id: PUBMED:37498537 Barnacle-Inspired Wet Tissue Adhesive Hydrogels with Inherent Antibacterial Properties for Infected Wound Treatment. Currently, antibiotics are the most common treatment for bacterial infections in clinical practice. However, with the abuse of antibiotics and the emergence of drug-resistant bacteria, the use of antibiotics faces an unprecedented challenge, and there is an urgent need to develop nonantibiotic antimicrobial agents. Based on the cation-π structure of barnacle cement protein, a polyphosphazene-based polymer poly[(N,N-dimethylethylenediamine)-g-(N,N,N,N-dimethylaminoethyl p-ammonium bromide (ammonium bromide)-g-(N,N,N,N-dimethylaminoethyl acetate ethylammonium bromide)] (PZBA) with potential adhesion and inherent antibacterial properties was synthesized, and a series of injectable antibacterial adhesive hydrogels (PZBA-PVA) were prepared by cross-linking with poly(vinyl alcohol) (PVA). PZBA-PVA hydrogels showed good biocompatibility, and the antibacterial rate of the best-performing hydrogel reached 99.81 ± 0.04% and 98.80 ± 2.16% against Staphylococcus aureus and Escherichia coli, respectively, within 0.5 h in vitro. In the infected wound model, the healing rate of the PZBA-PVA-treated group was significantly higher than that of the Tegaderm film group because the hydrogel suppressed inflammatory responses and modulated the infiltration of immune cells. Moreover, the wound healing mechanism of the PZBA-PVA hydrogel was further evaluated by real-time polymerase chain reaction and total RNA sequencing.
The results indicated that hemostasis and tissue development were promoted while inflammatory and immune responses were suppressed, accelerating wound healing. Overall, the PZBA-PVA hydrogel shows potential for infected wound healing applications. abstract_id: PUBMED:37839832 A multi-functional double cross-linked chitosan hydrogel with tunable mechanical and antibacterial properties for skin wound dressing. Chitosan hydrogels with essential antibacterial properties and biocompatibility have great potential in tissue engineering and regenerative medicine. However, pure chitosan hydrogel can be limited by insufficient mechanical properties. In this work, we designed a multi-functional chitosan hydrogel based on the combination of chitosan methacrylate (CTSMA) and sulfhydrated chitosan (CTSSH), cross-linked simultaneously by a free-radical polymerization reaction and a thiol-ene reaction. The CTSMA/CTSSH (CMS) hydrogels displayed superior tissue-adhesive and mechanical properties when compared to pure CTSMA hydrogel. Additionally, the resulting hydrogels exhibited potent antimicrobial effects against both E. coli and S. aureus. The CMS hydrogels also exhibited good biocompatibility, as demonstrated by cytotoxicity and cell proliferation experiments using fibroblast cells (L929) and adipose-derived stem cells (ADSCs). In an in vivo experiment, the reparative effect of the hydrogels on a full-thickness skin defect model in rats was studied. Histological and immunohistochemical staining results showed that CMS hydrogels promoted angiogenesis, dermal repair and epidermal regeneration. Overall, the study highlights the potential of the CMS hydrogels as a promising biomaterial in wound healing applications. abstract_id: PUBMED:33545852 Mussel-inspired double cross-linked hydrogels with desirable mechanical properties, strong tissue-adhesiveness, self-healing properties and antibacterial properties. Developing multifunctional hydrogels with good mechanical properties, tissue-adhesiveness, self-healing properties and antioxidant, blood-clotting and antibacterial properties is highly desirable for biomedical applications. In this study, a series of multifunctional chitosan-based double cross-linked hydrogels were prepared using a facile method based on quaternized chitosan (QCS) and polyacrylamide (PAM), using polydopamine (PDA) as a novel connecting bridge. Investigation of the dopamine (DA) and QCS content revealed that catechol-mediated interactions played an important role in the hydrogel properties. Results showed that the hydrogel exhibited the best mechanical properties when QCS = 12 wt% and DA = 0.4 wt%. Tensile and compressive strengths were 13.3 kPa and 67.8 kPa, respectively, and the hydrogel presented strong and repeatable tissue-adhesiveness (27.2 kPa) to porcine skin, as well as good stretchability (1154%). At room temperature, the hydrogel exhibited high self-healing efficiency (90% after 2 h of healing). Antibacterial test results showed that the hydrogel killed 99.99% of S. aureus and E. coli. Moreover, the vaccarin-loaded hydrogel exhibited a pH-responsive drug release profile with superior cytocompatibility compared to the pure hydrogel. In summary, this strategy combined double cross-linking and catechol-mediated chemistry to shed new light on the fabrication of novel multifunctional hydrogels with desirable mechanical properties, strong tissue adhesiveness and self-healing abilities.
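For context on kill percentages like the 99.81% and 99.99% quoted above, antibacterial rates are conventionally converted to log-reduction values via a standard formula; this conversion is background arithmetic, not a figure reported by the abstracts themselves:

\[ \mathrm{LR} = \log_{10}\frac{N_0}{N} = -\log_{10}(1 - f), \qquad f = 0.9981 \ \Rightarrow\ \mathrm{LR} = -\log_{10}(0.0019) \approx 2.7, \]

i.e., roughly a 500-fold reduction in viable colony-forming units, while 99.99% corresponds to a 4-log (10,000-fold) reduction.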
abstract_id: PUBMED:33001623 Antibacterial Zwitterionic Polyelectrolyte Hydrogel Adhesives with Adhesion Strength Mediated by Electrostatic Mismatch. Biotissue adhesives and antibacterial materials have great potential applications in wound dressing, implantable devices, and bioelectronics. In this study, stretchable tissue adhesive hydrogels with intrinsic antibacterial properties have been demonstrated by copolymerizing zwitterionic monomers with ionic monomers. The hydrogels are stretchable to about 900% strain and show a modulus of 4-9 kPa. The zwitterionic moieties provide strong dipole-dipole interaction, electrostatic interaction, and hydrogen bonding with the skin surface, and thus show adhesion strength values of 1-4 kPa to skin. Meanwhile, the copolymerized cationic or anionic monomers break the intrinsic electrostatic stoichiometry of the zwitterionic units and thus mediate the electrostatic interactions and the adhesion strength with the surface. The stretchable hydrogels form a robust and compliant (due to low modulus and stretchability) adhesive to skin, rubber, glass, and plastics, and can be repeatedly peeled off and readhered to the skin. Moreover, the abundant quaternary ammonium (QA) groups in the zwitterionic moieties and the added QA groups endow it with outstanding antibacterial properties (>99%). These stretchable tissue adhesive antibacterial hydrogels are promising for wound dressings and implantable devices. abstract_id: PUBMED:37520366 Embryonic perfect repair inspired electrospun nanofibers dressing with temperature-sensitive and antibacterial properties for wound healing. Introduction: The development of highly effective wound dressings is crucial for successful clinical applications. Achieving wound closure, preventing infection, and minimizing scarring are key objectives in wound healing. Drawing inspiration from the regenerative mechanisms observed in embryonic tissue repair, we designed a series of wound-contractible dressings with exceptional antibacterial properties. Methods: This was achieved by encapsulating quaternized silicone (QP12) and poly(N-isopropylacrylamide-co-N-hydroxymethylacrylamide-co-octadecyl acrylate) (PNNS) within electrospun nanofibers of poly(ε-caprolactone) (PCL). Results And Discussion: The resulting nanofibrous dressings demonstrated remarkable thermo-responsive self-contraction and tissue adhesion capabilities, enabling secure adherence to the skin and active wound closure. Notably, these nanofibers exhibited potent antibacterial activity against both Gram-positive and Gram-negative bacteria. Furthermore, they possessed desirable properties such as hydrophilicity, biocompatibility and mechanical properties resembling human skin. A full-thickness skin defect model evaluation revealed that these temperature-sensitive nanofibers expedited wound closure, enhanced wound healing, and suppressed scar formation, as evidenced by reduced infiltration of inflammatory cells, well-organized collagen arrangement, and improved vascularization. In summary, we propose that these wound-contractible nanofibers, with their antibacterial and anti-scarring properties, hold great promise as an advanced solution for skin wound repair. abstract_id: PUBMED:37566735 Dual-Cross-Linked Chitosan-Based Antibacterial Hydrogels with Tough and Adhesive Properties for Wound Dressing. Biocompatible chitosan-based hydrogels have attracted extensive attention in wound dressing due to their human skin-like tissue characteristics.
However, it remains a crucial challenge to fabricate chitosan-based hydrogels with versatile properties, including flexibility, stretchability, adhesivity, and antibacterial activity. In this work, a class of chitosan-based hydrogels with integrated functionalities is facilely prepared by solution polymerization of acrylamide (AAm) and sodium p-styrene sulfonate (SS) in the presence of quaternized carboxymethyl chitosan (QCMCS). Due to the dual cross-linking between QCMCS and P(AAm-co-SS), the optimized QCMCS/P(AAm-co-SS) hydrogel exhibits tough mechanical properties (0.767 MPa tensile stress and 1100% fracture strain) and moderate tissue adhesion (11.4 kPa). Moreover, biological evaluation in vitro illustrated that the as-prepared hydrogel possesses satisfactory biocompatibility and hemocompatibility, and excellent antibacterial ability (98.8% against S. aureus and 97.3% against E. coli). The hydrogels were then tested in vivo in a rat model of bacterially infected incision, and the results show that they can significantly accelerate epidermal regeneration and wound closure. This is due to their ability to reduce the inflammatory response and promote collagen deposition and granulation tissue formation. The proposed chitosan-based antibacterial hydrogels have the potential to be a highly effective wound dressing in clinical wound healing. abstract_id: PUBMED:37659303 All natural mussel-inspired bioadhesives from soy proteins and plant derived polyphenols with marked water-resistance and favourable antibacterial profile for wound treatment applications. Hypothesis: Implementation of tissue adhesives from natural sources endowed with good mechanical properties and underwater resistance still represents a challenging research goal. Inspired by the extraordinary wet adhesion properties of mussel byssus proteins, which result from the interaction of catechol and amino residues, hydrogels from soy protein isolate (SPI) and selected polyphenols, i.e. caffeic acid (CA), chlorogenic acid (CGA) and gallic acid (GA), were prepared under mild aerial oxidative conditions. Experiments: The hydrogels were subjected to chemical assays, ATR FT-IR and EPR spectroscopy, and rheological and morphological SEM analysis. Mechanical tests were carried out on hydrogels prepared by inclusion of agarose. Biological tests included evaluation of the antibacterial and wound healing activity, and hemocompatibility. Findings: The decrease of free NH2 and SH groups of SPI, the EPR features, the good cohesive strength and the excellent underwater resistance (15 days for SPI/GA) under conditions relevant to their use as surgical glues indicated an efficient interaction of the polyphenols with the protein in the hydrogels. The polyphenols also greatly improved the mechanical properties of the SPI/agarose/polyphenol hydrogels. The latter proved biocompatible, hemocompatible and not harmful to skin, and displayed durable adhesiveness and good water-vapour permeability. Excellent antibacterial properties and, in some cases (SPI/CGA), a favourable wound healing activity on dermal fibroblasts were obtained. abstract_id: PUBMED:33565857 Conductive Antibacterial Hemostatic Multifunctional Scaffolds Based on Ti3C2Tx MXene Nanosheets for Promoting Multidrug-Resistant Bacteria-Infected Wound Healing. Chronic bacterial-infected wound healing/skin regeneration remains a challenge due to drug resistance and the poor quality of wound repair. The ideal strategy is combating bacterial infection while facilitating satisfactory wound healing.
However, reported strategies hardly achieve these two goals simultaneously without the help of antibiotics or bioactive molecules. In this work, a two-dimensional (2D) Ti3C2Tx MXene with excellent conductivity, biocompatibility, and antibacterial ability was applied in developing multifunctional scaffolds (HPEM) for methicillin-resistant Staphylococcus aureus (MRSA)-infected wound healing. HPEM scaffolds were fabricated by the reaction between poly(glycerol-ethylenimine), Ti3C2Tx MXene@polydopamine (MXene@PDA) nanosheets, and oxidized hyaluronic acid (HCHO). HPEM scaffolds presented multifunctional properties including self-healing behavior, electrical conductivity, tissue adhesion, antibacterial activity (99.03% efficiency, notably against MRSA, which is resistant to many commonly used antibiotics), and rapid hemostatic capability. HPEM scaffolds enhanced the proliferation of normal skin cells with negligible toxicity. Additionally, HPEM scaffolds markedly accelerated MRSA-infected wound healing (wound closure ratio 96.31%) through efficient anti-inflammatory effects, promotion of cell proliferation and the angiogenic process, and stimulation of granulation tissue formation, collagen deposition, vascular endothelial differentiation, and angiogenesis. This study indicates the important role of multifunctional 2D MXene@PDA nanosheets in infected wound healing. HPEM scaffolds with multifunctional properties provide a potential strategy for MRSA-infected wound healing/skin regeneration. abstract_id: PUBMED:37127622 A wound-friendly antibacterial hyaluronic acid dressing with on-demand removability for infected wound healing. Background: Antibacterial activity and on-demand removability are key characteristics governing the effectiveness of clinical wound dressings. However, the excellent tissue adhesion of new dressings is often overemphasized without a detailed discussion of dressing replacement. In addition, the inherent antibacterial ability of dressings is beneficial for promoting the healing of infected wounds. Therefore, we rationally designed an injectable antibacterial wound dressing with on-demand removability to accelerate infected wound healing. Method: We designed this wound dressing with a simple and feasible method based on the electrostatic self-assembly of hyaluronic acid and ε-polylysine. We investigated the efficacy of this dressing through various tests of its microtopography, rheology, self-healing performance, adhesive ability, antimicrobial, hemostatic and on-demand removal properties, and promotion of wound healing. Results: The prepared dressing possesses injectability, self-healing ability and antibacterial activity, showing NaCl-triggered on-demand dissolution due to the disruption of electrostatic interactions. When used as a dressing for full-thickness wounds, it effectively accelerated wound healing by killing bacteria, downregulating inflammation, promoting collagen deposition, and enhancing keratinocyte migration and angiogenesis, owing to its excellent adhesion ability, favorable hemostatic property, and potent antibacterial performance. Conclusion: All results indicate that this is a simple and practical dressing for clinical application. This strategy provides a novel idea for developing on-demand removal dressings with antibacterial and injectable properties. Answer: Yes, the polymerization reaction does play a role in the antibacterial properties of cyanoacrylate tissue adhesives.
A study investigating the antibacterial effects of two commonly used cyanoacrylate adhesives, ethyl-cyanoacrylate (EC) and N-butyl-cyanoacrylate (BC), found that the polymerization reaction contributes to their antibacterial properties. The study applied fresh liquid EC and BC adhesives onto sterile filter paper discs, which were then placed onto bacterial cultures either immediately to allow polymerization to occur in culture or after allowing the adhesive to polymerize prior to exposure to the bacteria. The bacteria studied included Staphylococcus aureus, Streptococcus pneumoniae, Escherichia coli, and Pseudomonas aeruginosa. The results showed that exposure to the polymerizing EC increased the bacterial inhibitory halos for S. pneumoniae, S. aureus, and E. coli, indicating a stronger antibacterial effect during active polymerization. For BC, increased inhibitory halos were observed for S. aureus and S. pneumoniae. However, no inhibitory halos were observed for Pseudomonas aeruginosa with either adhesive. The bactericidal effect was higher for actively polymerizing EC compared to previously polymerized EC for the bacteria where an effect was observed, but no such differences were seen for BC. These findings suggest that the polymerization reaction is an important factor in the antibacterial properties of EC and BC cyanoacrylate adhesives (PUBMED:19700870).
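A note on how halo comparisons of this kind are typically analyzed: the sketch below runs Welch's two-sample t-test on inhibitory halo diameters for actively polymerizing versus previously polymerized adhesive. All diameter values are hypothetical placeholders for illustration only, not data from the cited study (PUBMED:19700870).

# Illustrative sketch: Welch's t-test on inhibition halo diameters.
# All diameter values below are hypothetical placeholders, not data
# from the cited study (PUBMED:19700870).
from scipy import stats

polymerizing_ec_mm = [14.2, 15.1, 13.8, 14.9, 15.4]  # halos, actively polymerizing EC
polymerized_ec_mm = [11.0, 10.4, 11.6, 10.9, 11.2]   # halos, previously polymerized EC

# Welch's variant does not assume equal variances between the two conditions.
t_stat, p_value = stats.ttest_ind(polymerizing_ec_mm, polymerized_ec_mm, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")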
Instruction: Is the Western Cape at risk of an outbreak of preventable childhood diseases? Abstracts: abstract_id: PUBMED:18270640 Is the Western Cape at risk of an outbreak of preventable childhood diseases? Lessons from an evaluation of routine immunisation coverage. Objective: To determine the routine immunisation coverage rates in children aged 12 - 23 months in the Western Cape. Design: Cross-sectional Household Survey using an adaptation of the '30 by 7' cluster survey technique. Setting: Households across the Western Cape. Subjects: A total of 3,705 caregivers of children aged 12 - 23 months who had been living in the Western Cape for at least 6 months. Outcome Measures: Vaccination status (1 = fully vaccinated; 0 = partially vaccinated) as recorded on a Road-to-Health card or by history. Reasons for not vaccinating were established from a questionnaire. Results: The immunisation coverage was 76.8% for vaccines due by 9 months and 53.2% for those due by 18 months. The reasons given for not being immunised were clinic-related factors (47%), lack of information (27%), caregiver being unable to attend the clinic (23%), and lack of motivation (14%). Of the clinic factors cited, the two commonest ones were missed opportunities (34%) and being told by clinic staff to return another time (20%). Conclusion: While the coverage indicates that a great deal of good work is being done, the coverage is insufficient to prevent outbreaks of measles and other common childhood conditions, including polio. The coverage is too low to consider not running periodic mass campaigns for measles and polio. It will need to be sustainably improved before introducing rubella vaccine as part of the Expanded Programme on Immunisations (EPI) schedule. The reasons given by caregivers for their children not being immunised are valuable pointers as to where interventions should be focused. abstract_id: PUBMED:32843135 Vaccine-Preventable Infections in Childcare Workers. Background: Adequate immunity to so-called childhood diseases can lower the occupational risk of vaccine-preventable infectious diseases in persons who work in day-care centers for children. Methods: A systematic literature survey was carried out in PubMed and Embase for the period January 2000 to February 2019. Studies on immune status and vaccination status were included. In addition, data from the first wave of the German Health Interview and Examination Survey for Adults (Studie zur Gesundheit Erwachsener in Deutschland, DEGS1) and surveillance data on notifiable infections in Germany were evaluated. Results: Six studies and the DEGS1 analysis of vaccination or immune status for varicella zoster, rubella, hepatitis A (HAV), pertussis, measles, and mumps in persons caring for children in day-care centers, most of whom are women, were included in this review. According to DEGS1, childcare workers are more commonly vaccinated against HAV and pertussis than the general female population (prevalence ratios [PR]: 1.46 [1.12; 1.90] and 1.57 [1.05; 2.36]), yet 57% had not been vaccinated against HAV and 77% had not been vaccinated against pertussis. Childcare workers were found to be less commonly vaccinated against rubella than the general female population, although the difference was not statistically significant (PR: 0.87 [0.71; 1.07]). In a Canadian study, positive HAV serology was found to be correlated with the duration of activity as a childcare worker. 
In the DEGS1 study, large proportions of the younger childcare workers in particular were seronegative against measles (16%), mumps (19%), and HAV (37%). Notifiable disease statistics show that those working in community facilities had a markedly higher risk of mumps, pertussis, and varicella (relative risk [RR]: 1.8-2.6) and a somewhat higher risk of rubella and HAV (RR: 1.47 and 1.21, respectively). Conclusion: Childcare workers have a higher occupational risk of infection but do not always receive the appropriate vaccinations. In particular, women of child-bearing age working in day-care centers should be made more aware of the need for vaccination. abstract_id: PUBMED:27407970 Outbreak of Rubella Among Cadets in an Academy. Background: Rubella is traditionally considered a childhood disease, but it has the potential to cause outbreaks in closed communities of young adults. The present paper describes one such outbreak in a military training establishment. Method: Cases in the rubella outbreak were identified by clinical features (fever, rash and lymphadenopathy) and confirmed by detection of high titres of anti-rubella IgM antibodies by enzyme-linked immunosorbent assay. Initial serum samples were also tested for antibodies against measles, scrub typhus and dengue. Salient epidemiological features of the outbreak were studied. Result: 163 cadets were admitted to the local military hospital within two months. The cases presented with fever, coryza, lymphadenopathy and maculopapular rash. 117 (71.78%) of the samples showed the presence of anti-rubella IgM antibodies. Conclusion: Rubella outbreaks in military communities have the potential to adversely affect military training/operations. The need for administration of vaccination against rubella in such populations may be reviewed. abstract_id: PUBMED:22370236 Effect of caffeic acid phenethyl ester (CAPE) on H₂O₂ induced oxidative and inflammatory responses in human middle ear epithelial cells. Objective: Acute otitis media (OM) is a common pediatric disease. Recent research into the pathogenesis of OM has focused on oxidative damage, induced by oxygen free radicals, to the middle ear mucosa along with inflammation. Caffeic acid phenethyl ester (CAPE) is a biologically active ingredient of propolis from honey bees, with antioxidative and anti-inflammatory activities. The effect of CAPE on hydrogen peroxide (H(2)O(2))-induced inflammatory and oxidative reactions in the middle ear is still not known. The aim of this study was to evaluate the anti-inflammatory and antioxidative effects of CAPE on cultured human middle ear epithelial cells (HMEECs). Methods: The inflammatory injury of H(2)O(2) and the anti-inflammatory effect of CAPE were determined by measuring levels of pro-inflammatory cytokines (tumor necrosis factor (TNF)-α and COX-2) with real-time reverse transcription polymerase chain reaction and Western blot analysis. Oxidative stress induced by H(2)O(2) and antioxidative effects of CAPE were evaluated directly by reactive oxygen species (ROS) production using flow cytometric analysis of 5-(and-6)-chloromethyl-2',7'-dichlorodihydrofluorescein diacetate, acetyl ester (CM-H(2)DCFDA), and indirectly by the expression of superoxide dismutase (SOD) using Western blot analysis. The effect of CAPE was compared with N-acetyl cysteine (NAC), which has well-known antioxidative and anti-inflammatory effects. Results: CAPE significantly inhibited H(2)O(2)-induced upregulation of TNF-α and COX-2 expression in a dose- and time-dependent manner.
ROS accumulation induced by H(2)O(2) stimulation was decreased by CAPE pretreatment. Induced SOD expression after H(2)O(2) stimulation was diminished by CAPE pretreatment. The anti-inflammatory and antioxidative effects of CAPE were similar to those of NAC. Conclusions: These findings suggest that inflammation induced by H(2)O(2) can be inhibited by CAPE via inhibition of the expression of pro-inflammatory cytokines such as TNF-α and COX-2. Furthermore, CAPE has antioxidative effects, which decreases the need for endogenous SOD expression. abstract_id: PUBMED:34358212 Incidence of Vaccine-Preventable Childhood Diseases in the European Union and in the European Free Trade Association Countries. Introduction: Despite the widespread availability of vaccines, the incidence of vaccine-preventable childhood diseases (VPCD) has started to grow in recent years. The aim of the study was to compare the annual incidence of selected VPCDs in the EU (European Union) and EFTA (European Free Trade Association) countries in the period of the last 5 years (2014-2019 or other intervals, depending on data availability), and the country-specific vaccine schedules. Methods: VPCD incidence rates in Europe were based on "The Surveillance Atlas of Infectious Diseases" by the ECDC (European Centre for Disease Prevention and Control); vaccination schedules were based on ECDC reports. Results: The obligation to vaccinate was not universal, and it generally only applied to two preparations: the MMR (measles, mumps, rubella) vaccine and the one against polio. During the study, the situation associated with mumps did not change or improve in individual countries; the median incidence amounted to 30 cases. The median incidence associated with rubella amounted to 1 case, but in a few countries, i.e., in Germany, Italy, and Romania, it grew very rapidly; in Poland, the incidence was clearly decreasing, from 5923 to 1532 cases. The most dynamic situation concerned measles. The total median was 2.4 cases per 100,000 population; the only country with falling incidence was Germany. The diseases associated with Streptococcus pneumoniae and Neisseria meningitidis remained at a stable level in all analyzed countries. Conclusion: Vaccine schedules differ among the countries, as does the epidemiological situation of selected diseases. Morbidity from measles was the most disturbing phenomenon: the incidence rate increased in almost 40% of all countries, regardless of the obligation to vaccinate. The increasing incidence of VPCD may be due to anti-vaccine movements, the activity of which is often caused by mistrust and spreading misinformation. In order to better prevent the increase in morbidity, standardization of vaccine schedules and documentation should be considered in the EU countries. abstract_id: PUBMED:25041293 Enterovirus 71 outbreak in Northern Sydney, 2013: case series and initial response. Aim: Enteroviruses are a common cause of childhood disease which may manifest in a variety of ways. Enterovirus 71 (EV71) is a subtype of enterovirus which can cause meningoencephalomyelitis resulting in neurological sequelae including lethargy, weakness, ataxia, sleep myoclonus, urinary retention and, in severe cases, cardiorespiratory collapse due to neurogenic pulmonary oedema. EV71 was responsible for outbreaks in South East Asia in 1997-1998, in Western Australia in 1999 and in Sydney in 2000-2001. In 2013, we are experiencing another EV71 outbreak in Sydney.
This study describes the discovery of a new outbreak in Sydney's Northern Beaches, the clinical findings as well as the public health response. Methods: Thirty-seven children in total presented with presumed EV71 to the Northern Beaches Health Service from December 2012 to April 2013. Most children presented with a prodrome lasting 2-7 days prior to seeking medical attention. Sleep myoclonus was a common presenting sign, occurring in 65% of children. Neurological signs were subtle in the majority of children and were at times missed by clinicians on a child's first presentation. Forty-six per cent of children who presented to Northern Beaches Health Service during this outbreak required a transfer to a tertiary paediatric centre for more intensive care. Results: The public health investigation was important in establishing that the disease was widespread throughout the community and not the result of exposure to a single child care setting. Identification of risk factors enabled more targeted communication to medical practitioners, child care centres and parents within the local community. Conclusions: EV71 is in Australia and all clinicians seeing children in primary, secondary and tertiary care centres need to be aware of the disease, the subtle nature of initial symptoms and the potentially devastating consequences. abstract_id: PUBMED:29378545 Under-vaccinated groups in Europe and their beliefs, attitudes and reasons for non-vaccination; two systematic reviews. Background: Despite effective national immunisation programmes in Europe, some groups remain incompletely or un-vaccinated ('under-vaccinated'), with underserved minorities and certain religious/ideological groups repeatedly being involved in outbreaks of vaccine preventable diseases (VPD). Gaining insight into factors regarding acceptance of vaccination in 'under-vaccinated groups' (UVGs) might give opportunities to communicate with them in a trusted and reliable manner that respects their belief system and that may increase vaccination uptake. We aimed to identify and describe UVGs in Europe and to describe beliefs, attitudes and reasons for non-vaccination in the identified UVGs. Methods: We defined a UVG as a group of persons who share the same beliefs and/or live in socially close-knit communities in Europe and who have/had historically low vaccination coverage and/or experienced outbreaks of VPDs since 1950. We searched MEDLINE, EMBASE and PsycINFO databases using specific search term combinations. For the first systematic review, studies that described a group in Europe with an outbreak or low vaccination coverage for a VPD were selected, and for the second systematic review, studies that described possible factors associated with non-vaccination in these groups were selected. Results: We selected 48 articles out of 606 and 13 articles out of 406 from the first and second search, respectively. Five UVGs were identified in the literature: Orthodox Protestant communities, Anthroposophists, Roma, Irish Travellers, and Orthodox Jewish communities. The main reported factors regarding vaccination were perceived non-severity of traditional "childhood" diseases, fear of vaccine side-effects, and a need for more information about, for example, the risks of vaccination. Conclusions: Within each UVG identified, there are a variety of health beliefs and objections to vaccination. In addition, similar factors are shared by several of these groups.
Communication strategies that address these shared factors, such as educating people about the risks associated with being vaccinated versus not being vaccinated, addressing their concerns, and countering vaccination myths present among members of a specific UVG through a trusted source, can establish a reliable relationship with these groups and increase their vaccination uptake. Furthermore, other interventions such as improving access to health care could certainly increase vaccination uptake in Roma and Irish travellers. abstract_id: PUBMED:32536339 Paediatric Active Enhanced Disease Surveillance (PAEDS) 2017 and 2018: Prospective hospital-based surveillance for serious paediatric conditions. Introduction: The Paediatric Active Enhanced Disease Surveillance (PAEDS) network is a hospital-based active surveillance system employing prospective case ascertainment for selected serious childhood conditions, particularly vaccine-preventable diseases and potential adverse events following immunisation (AEFI). This report presents surveillance data for 2017 and 2018. Methods: Specialist nurses screened hospital admissions, emergency department (ED) records, laboratory and other data on a daily basis in seven paediatric tertiary referral hospitals across Australia to identify children with the conditions under surveillance. In 2017 and 2018 these included acute flaccid paralysis (AFP; a syndrome associated with poliovirus infection), acute childhood encephalitis (ACE), influenza, intussusception (IS; a potential AEFI with rotavirus vaccines), pertussis, varicella-zoster virus infection (varicella and herpes zoster), invasive meningococcal, and invasive Group A streptococcus diseases. An additional social research component was added to evaluate parental attitudes to vaccination. Results: PAEDS captured 1,580 and 925 cases for 2017 and 2018, respectively, across all conditions under surveillance. Key outcomes of PAEDS included: contribution to national AFP surveillance to reach the World Health Organization reporting targets; identification of a third human parechovirus outbreak among other infectious diseases linked to ACE; demonstration of variable influenza activity between 2017 and 2018, with vaccine effectiveness (VE) analysis demonstrating that the protection offered through vaccination is season-dependent. All IS cases associated with vaccine receipt were reported to the relevant state health department. Varicella and herpes zoster case numbers remained unchanged, with vaccine uptake found to be suboptimal among eligible children under the NIP. Enhanced pertussis surveillance continues to capture controls for VE estimation. Surveillance for invasive meningococcal disease showed predominance of serotype B at 57% over 2 years among 77 cases where serotyping was available, and surveillance for invasive group A streptococcus captured severe disease in children. Conclusion: PAEDS continues to provide unique policy-relevant data on serious paediatric conditions using hospital-based sentinel surveillance.
This report presents surveillance data for 2019. Methods: Specialist nurses screened hospital admissions, emergency department records, laboratory and other data on a daily basis in seven paediatric tertiary referral hospitals across Australia, to identify children with the conditions under surveillance. Standardised protocols and case definitions were used across all sites. In 2019, the conditions under surveillance comprised: acute flaccid paralysis (AFP; a syndrome associated with poliovirus infection), acute childhood encephalitis (ACE), influenza, intussusception (IS; a potential AEFI with rotavirus vaccines), pertussis, varicella-zoster virus infection (varicella and herpes zoster), invasive meningococcal and invasive Group A streptococcus diseases and two new conditions, Kawasaki disease and gram-negative bloodstream infections. An additional social research component continued to evaluate parental attitudes to influenza vaccination. Results: PAEDS captured 2,701 cases for 2019 across all conditions under surveillance. Key outcomes of PAEDS included: contribution to national AFP surveillance to reach the World Health Organization reporting targets for detection of poliomyelitis cases; demonstration of high influenza activity in 2019 and influenza-associated deaths in ACE cases; identification of key barriers to influenza vaccination of children hospitalised for acute respiratory illness; reporting of all IS cases associated with vaccine receipt to the relevant state health departments; and showing a further reduction nationally in varicella cases. Enhanced pertussis surveillance continued to capture controls to support vaccine efficacy estimation. Invasive meningococcal disease surveillance showed predominance of serotype B and a reduction in cases nationally. Surveillance for invasive group A streptococcus captured severe cases in children. Monitoring of Kawasaki disease incidence and gram-negative bloodstream infections commenced. Conclusions: PAEDS continues to provide unique policy-relevant data on serious paediatric conditions using sentinel hospital-based enhanced surveillance. Keywords: paediatric, surveillance, child, hospital, vaccine preventable diseases, adverse event following immunisation, acute flaccid paralysis, encephalitis, influenza, intussusception, pertussis, varicella zoster virus, meningococcal, group A streptococcus, Kawasaki, bloodstream infections. abstract_id: PUBMED:20808459 Routine immunization of adults in Canada: Review of the epidemiology of vaccine-preventable diseases and current recommendations for primary prevention. Vaccination is one of the greatest achievements in public health of the 20th century. However, the success of vaccine uptake and adherence to immunization guidelines seen in pediatric populations has not been observed among adult Canadians. As a result of the disparity in susceptibility to vaccine-preventable disease, there has been an increasing shift of vaccine-preventable childhood diseases into adult populations. Accordingly, morbidity and mortality due to vaccine-preventable illnesses now occur disproportionately in adults. All Canadians, irrespective of age, should have immunity to measles, mumps, rubella, tetanus, diphtheria, pertussis and varicella. All adult Canadians with significant medical comorbidities or those older than 65 years of age should receive the pneumococcal polysaccharide vaccine and yearly trivalent inactivated influenza vaccines.
The present review summarizes the burden of illness of these vaccine-preventable diseases in the Canadian adult population and reviews the current immunization recommendations. Vaccination of all Canadians to these common agents remains a vital tool to decrease individual morbidity and mortality and reduce the overall burden of preventable disease in Canada. Answer: Yes, the Western Cape is at risk of an outbreak of preventable childhood diseases. An evaluation of routine immunisation coverage in children aged 12 - 23 months in the Western Cape revealed that the immunisation coverage was 76.8% for vaccines due by 9 months and only 53.2% for those due by 18 months. The coverage is insufficient to prevent outbreaks of measles and other common childhood conditions, including polio. The reasons given for not being immunised included clinic-related factors, lack of information, caregiver inability to attend the clinic, and lack of motivation. The coverage is too low to consider not running periodic mass campaigns for measles and polio, and it would need to be sustainably improved before introducing rubella vaccine as part of the Expanded Programme on Immunisations (EPI) schedule (PUBMED:18270640).
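For readers who want to see the survey arithmetic behind coverage estimates like the 76.8% figure above, the following minimal sketch computes a 95% confidence interval around a cluster-survey proportion. The design effect of 2.0 is an assumed illustrative value (cluster designs such as '30 by 7' are less precise than simple random samples); it is not a figure reported in the abstract.

import math

# Figures from the abstract: 76.8% coverage among n = 3,705 children
# for vaccines due by 9 months.
n = 3705
p = 0.768

# Assumed design effect for a '30 by 7'-style cluster sample; deff = 2.0
# is an illustrative value, not one reported in the abstract.
deff = 2.0

se = math.sqrt(deff * p * (1 - p) / n)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"coverage = {p:.1%}, approximate 95% CI ({lo:.1%}, {hi:.1%})")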
Instruction: Treating the human bladder with capsaicin: is it safe? Abstracts: abstract_id: PUBMED:35223304 Chronic Low Back Pain Forced Me to Search for and Find Pain Solutions: An Autobiographical Case Report. Lifelong, pregnancy-induced low back pain forced me to search for solutions to the problem of pain. Currently, low back pain is often diagnosed as "nonspecific" and, as a result, a multitude of tests and poorly effective, at times side effect-laden or habit-forming treatments are recommended. My quest for relief took me to first diagnose my pain as coming from the sacroiliac joints, then to prolotherapy, the first treatment which brought me prolonged relief. I then learned how to perform prolotherapy. In 2009, when I undertook a randomized controlled study of dextrose prolotherapy for rotator cuff tendinopathy, I restricted my practice to treating pain. As low back pain was a large part of my practice, I sought new ways to examine the sacroiliac joints. I conducted a consecutive patient data collection which suggested that over three-quarters of those with low back pain suffer from displaced sacroiliac joints. In a further randomized controlled study, I found that the two-minute corrective exercise I derived from this test provided immediate relief to 90% of those using it. With Dr. John Clark Lyftogt I discovered the safety and effectiveness of 5% dextrose perineural injections to provide immediate pain relief to any area supplied by a nerve I could reach with my needle. As I was treating many diabetics with peripheral neuropathy, I shifted my perineural injection material to 5% mannitol, which may be as effective, with less exposure to dextrose as a potential benefit for diabetics. As most people dislike injections, a pharmacist and I developed a mannitol-containing topical cream for pain relief. We compared a base cream to the same cream with mannitol on lips pretreated with capsaicin cream, which made them burn. By 10 minutes, the probability that the two creams were equally effective in relieving the burn was less than 0.001, in favor of mannitol. When given to 235 patients with a total of 289 different painful conditions, we found that it provided 53% relief in an average of 16 minutes, with a median duration of four hours. Now retired, after 55 years of medical practice, I love to relieve the pain of friends and fellow hikers using exercise and cream. Searching for and finding solutions to chronic pain has enriched my life and that of many others. abstract_id: PUBMED:9471038 Treating the human bladder with capsaicin: is it safe? Objectives: Although human contact with capsaicin has occurred over thousands of years, some uncertainty surrounds its status as a possible carcinogen. This is the first report of bladder biopsies from patients who have been treated with capsaicin over a 5-year period. Methods: Between 1991 and 1996, 20 patients (9 males, 11 females; mean age 52.5 years, range 40-70 years) with intractable detrusor hyperreflexia have had repeated instillations of intravesical capsaicin (1-2 mmol/l). The number of treatments per patient varied between 1 and 17 (total = 82; average 6/patient). The surveillance programme involved repeated flexible cystoscopy and bladder biopsies before and after capsaicin. Cryostat sections were stained with haematoxylin-eosin and examined by light microscopy. Results: None of the bladder biopsies have shown metaplasia, dysplasia, flat carcinoma in situ, papillary or solid invasive cancer.
Conclusions: No pre-malignant or malignant change has been found in biopsies of patients who had repeated capsaicin instillations for up to 5 years. However, as the morphological effects of chemical carcinogens may not be apparent for 10 years, further surveillance is being continued. abstract_id: PUBMED:31621576 Treating osteoarthritis pain: recent approaches using pharmacological therapies. Osteoarthritis (OA) is a debilitative, painful condition with significant global burden. Pharmacological options have limited analgesic efficacy and their side-effects often restrict their use. Novel pharmacological options are needed to relieve patient symptoms and their consequent disease impact. A variety of pharmacological options have been investigated in treating OA, including existing therapies previously used for treating other arthritides (such as colchicine and hydroxychloroquine) and new therapies targeting pain (including monoclonal antibodies to nerve growth factor and intra-articular trans-capsaicin). Extended-release triamcinolone may offer more persisting analgesic effects compared to immediate-release preparations. While most studies have been unsuccessful, pharmacological therapies targeting peripheral nociceptive pathways appear promising. abstract_id: PUBMED:34178580 An update on targets for treating osteoarthritis pain: NGF and TRPV1. Purpose Of Review: Osteoarthritis (OA) is the most common form of arthritis, and pain is the primary symptom of the disease, yet analgesic options for treating OA pain remain limited. In this review, we aimed to give an update on the current clinical and preclinical studies targeting two pathways that are being investigated for treating OA pain: the nerve growth factor (NGF) pathway and the transient receptor potential vanilloid-1 (TRPV1) pathway. Recent Findings: Antibodies against NGF, small molecule inhibitors of TrkA, TRPV1 agonists, and TRPV1 antagonists are all in different stages of clinical and pre-clinical testing for the treatment of OA pain. NGF antibodies have shown efficacy in the primary endpoints tested compared to placebo; however, rapidly progressive OA has been consistently observed in a subset of patients and the cause remains unclear. TRPV1 agonists have also demonstrated reduced pain with no serious adverse events - the most common adverse events include a burning or warming sensation upon administration. Summary: Targeting the NGF and TRPV1 pathways appears effective for reducing OA pain, but further work is needed to better understand which patients may benefit most from these treatments. The anti-NGF antibody tanezumab and the TRPV1 agonist CNTX-4975 have both received fast-track designation from the FDA for the treatment of OA pain. abstract_id: PUBMED:23707266 TRPA1 insensitivity of human sural nerve axons after exposure to lidocaine. TRPA1 is an ion channel of the TRP family that is expressed in some sensory neurons. TRPA1 activity provokes sensory symptoms of peripheral neuropathy, such as pain and paraesthesia. We have used a grease gap method to record axonal membrane potential and evoked compound action potentials (ECAPs) in vitro from human sural nerves and studied the effects of mustard oil (MO), a selective activator of TRPA1. Surprisingly, we failed to demonstrate any depolarizing response to MO (50, 250 μM) in any human sural nerves. There was no effect of MO on the A wave of the ECAP, but the C wave was reduced at 250 μM.
In rat saphenous nerve fibres MO (50, 250 μM) depolarized axons and reduced the C wave of the ECAP but had no effect on the A wave. By contrast, both human and rat nerves were depolarized by capsaicin (0.5 to 5 μM) or nicotine (50 to 200 μM). Capsaicin caused a profound reduction in C fibre conduction in both species but had no effect on the amplitude of the A component. Lidocaine (30 mM) depolarized rat saphenous nerves acutely, and when rat nerves were pretreated with 30 mM lidocaine to mimic the exposure of human nerves to local anaesthetic during surgery, the effects of MO were abolished whilst the effects of capsaicin were unchanged. This study demonstrates that the local anaesthetic lidocaine desensitizes TRPA1 ion channels and indicates that it may have additional mechanisms for treating neuropathic pain that endure beyond simple sodium channel blockade. abstract_id: PUBMED:24973718 Human sensory neurons: Membrane properties and sensitization by inflammatory mediators. Biological differences in sensory processing between human and model organisms may present significant obstacles to translational approaches in treating chronic pain. To better understand the physiology of human sensory neurons, we performed whole-cell patch-clamp recordings from 141 human dorsal root ganglion (hDRG) neurons from 5 young adult donors without chronic pain. Nearly all small-diameter hDRG neurons (<50 μm) displayed an inflection on the descending slope of the action potential, a defining feature of rodent nociceptive neurons. A high proportion of hDRG neurons were responsive to the algogens allyl isothiocyanate (AITC) and ATP, as well as the pruritogens histamine and chloroquine. We show that a subset of hDRG neurons responded to the inflammatory compounds bradykinin and prostaglandin E2 with action potential discharge and show evidence of sensitization including lower rheobase. Compared to electrically evoked action potentials, chemically induced action potentials were triggered from less depolarized thresholds and showed distinct afterhyperpolarization kinetics. These data indicate that most small/medium hDRG neurons can be classified as nociceptors, that they respond directly to compounds that produce pain and itch, and that they can be activated and sensitized by inflammatory mediators. The use of hDRG neurons as preclinical vehicles for target validation is discussed. abstract_id: PUBMED:21956871 Confocal microscopy with double immunofluorescence staining reveals the functional transient receptor potential vanilloid subtype 1 expressed in myoepithelial cells of human submandibular glands. Myoepithelial cells (MECs) mainly surround acini and intercalated ducts in the human salivary glands. The contraction of MECs provides the expulsive force to promote salivation. We previously found functional transient receptor potential vanilloid subtype 1 (TRPV1) was expressed in rabbit and human submandibular glands and increased saliva secretion. However, it was unknown whether TRPV1 was expressed in MECs of submandibular glands. In this study, we observed that TRPV1 immunofluorescence was located not only in serous acini and ducts but also surrounding the basal layer of the acinus and intercalated ducts of human submandibular glands. Double immunofluorescence staining revealed colocalization of TRPV1 with calponin, vimentin, and α-smooth muscle actin, which indicated the myoepithelial expression of TRPV1.
Treating submandibular gland tissues with capsaicin, an agonist of TRPV1, substantially increased the phosphorylation of the 20-kDa regulatory light-chain subunit of myosin (MLC(20)), a crucial molecule for contraction of smooth muscle cells, in MECs. Pretreatment with capsazepine, a specific TRPV1 inhibitor, blocked capsaicin-induced MLC(20) phosphorylation. These results suggest that TRPV1 is expressed in MECs of the human submandibular gland and mediates myoepithelial contraction via a mechanism involving MLC(20) phosphorylation. abstract_id: PUBMED:12490594 Activation of epidermal vanilloid receptor-1 induces release of proinflammatory mediators in human keratinocytes. During dermal injury and the associated trauma a number of compounds are released that can mediate the inflammatory response. Determining the cellular mechanisms that initiate the inflammatory responses to acute keratinocyte damage is important for understanding the regulation of epidermal inflammation. The recently cloned vanilloid receptor-1 (VR1) is a polymodal receptor, responding to thermal, pH, or vanilloid stimuli such as capsaicin. Although VR1 has been localized only on sensory neurons and within the central nervous system, recent evidence suggests a functional VR1 is expressed in human skin and epidermal cells. Using reverse transcription-polymerase chain reaction and immunoblotting we report that human keratinocytes and the human keratinocyte cell line HaCaT express VR1. Consistent with neuronal VR1, activation of epidermal VR1 by capsaicin induced a calcium influx. Treating HaCaT cells with capsaicin resulted in a dose-dependent expression of cyclooxygenase-2 (COX-2), whereas pretreatment with the VR1 receptor antagonist capsazepine abolished the capsaicin-stimulated increase in COX-2 expression. Furthermore, the capsaicin-induced expression of COX-2 was dependent on extracellular calcium. Activation of the epidermal VR1 by capsaicin also resulted in an increased release of interleukin-8 and prostaglandin E2, and the stimulated release was attenuated by capsazepine. The finding that VR1 is expressed by keratinocytes is of great importance because it expands the putative role of VR1 beyond that of pain perception. Our results suggest that VR1 expression in keratinocytes may have a role in the inflammation that occurs secondary to epidermal damage or insult, and thus may function as a sensor for noxious cutaneous stimulation. abstract_id: PUBMED:27479625 Treating Painful Diabetic Peripheral Neuropathy: An Update. Painful diabetic peripheral neuropathy occurs in approximately 25% of patients with diabetes mellitus who are treated in the office setting and significantly affects quality of life. It typically causes burning pain, paresthesias, and numbness in a stocking-glove pattern that progresses proximally from the feet and hands. Clinicians should carefully consider the patient's goals and functional status and potential adverse effects of medication when choosing a treatment for painful diabetic peripheral neuropathy. Pregabalin and duloxetine are the only medications approved by the U.S. Food and Drug Administration for treating this disorder. Based on current practice guidelines, these medications, along with gabapentin and amitriptyline, should be considered for the initial treatment. Second-line therapy includes opioid-like medications (tramadol and tapentadol), venlafaxine, desvenlafaxine, and topical agents (lidocaine patches and capsaicin cream).
Isosorbide dinitrate spray and transcutaneous electrical nerve stimulation may provide relief in some patients and can be considered at any point during therapy. Opioids and selective serotonin reuptake inhibitors are optional third-line medications. Acupuncture, traditional Chinese medicine, alpha lipoic acid, acetyl-l-carnitine, primrose oil, and electromagnetic field application lack high-quality evidence to support their use. abstract_id: PUBMED:18692962 Cannabinoid receptor CB2 localisation and agonist-mediated inhibition of capsaicin responses in human sensory neurons. Cannabinoid receptor 2 (CB2) agonists provide the potential for treating chronic pain states without CNS effects associated with CB1 receptor activation. Animal models suggest that they act mainly via non-neuronal cells, possibly inhibition of inflammatory cells in the periphery or CNS, or via release of beta-endorphin; however, the clinical relevance and mechanism of analgesic action is uncertain. Here, we demonstrate colocalisation of CB2 with CB1 and the capsaicin receptor TRPV1 in human dorsal root ganglion (DRG) sensory neurons and increased levels of CB2 receptors in human peripheral nerves after injury, particularly painful neuromas. In primary cultures of human DRG neurons, selective CB2 agonists blocked activation of inward cation currents and elevation of cytoplasmic Ca2+ in response to capsaicin. These inhibitory effects were reversed by GW818646X, a CB2 antagonist, and 8-bromo cAMP, but not by SR141716, a CB1 antagonist, or naloxone. Thus CB2 receptor agonists functionally inhibited nociceptive signalling in human primary sensory neurons via a mechanism shared with opioids, namely adenylyl cyclase inhibition, but not via mu-opioid receptors. We conclude that CB2 agonists deserve imminent clinical trials for nociceptive, inflammatory and neuropathic chronic pain, in which capsaicin or heat-activated responses via TRPV1 may provide a clinical marker. Answer: Based on the provided abstracts, treating the human bladder with capsaicin appears to be safe. 
In summary, the specific study on capsaicin treatment in the human bladder suggests that it is safe based on the absence of malignant changes in biopsies over a 5-year period, but ongoing surveillance is necessary to fully understand the long-term safety profile (PUBMED:9471038).
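A caveat worth quantifying: with zero malignant changes observed among 20 patients, the data can only bound the true risk, not rule it out. The sketch below computes the exact one-sided 95% upper confidence bound for a proportion with zero observed events, alongside the familiar 'rule of three' approximation; this is an illustrative calculation, not an analysis from the cited study.

# Illustrative calculation, not from the cited study: what does observing
# zero malignant changes in n = 20 patients allow us to conclude?
n = 20

# The exact one-sided 95% upper bound on the true risk solves (1 - p)**n = 0.05.
exact_upper = 1 - 0.05 ** (1 / n)
print(f"exact 95% upper bound on risk: {exact_upper:.1%}")  # about 13.9%

# The 'rule of three' approximation gives 3/n.
print(f"rule-of-three approximation: {3 / n:.1%}")          # 15.0%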
Instruction: Adult seat belt use: does the presence of children in the household make a difference? Abstracts: abstract_id: PUBMED:18836951 Adult seat belt use: does the presence of children in the household make a difference? Objective: To obtain prevalence estimates of seat belt use among adults with and without at least one child in the household and to examine whether having at least one child in the household is associated with adult seat belt use. Methods: The Second Injury Control and Risk Survey (ICARIS-2) was a nationwide cross-sectional, list-assisted random-digit-dialing telephone survey of individuals who were at least 18 years old and who spoke either English or Spanish. ICARIS-2 was carried out from 2001 to 2003; a similar study, ICARIS-1, had been conducted in 1994. National estimates were calculated for the prevalence of adult seat belt use and stratified according to the presence or absence of children in the household. Prevalence estimates for the two ICARIS surveys were compared using t-tests. Multivariable logistic regression was used to explore the association between having at least one child in the household and self-reported adult seat belt use. Results: Based on the 9,684 completed household interviews in ICARIS-2, an estimated 15.9% (13 million) of drivers with children in their households did not always wear their seat belt when driving, and 17.5% (15 million) of adult passengers with children in their households did not always wear their seat belt while riding. The prevalence of drivers and passengers who did not always wear their seat belt decreased between ICARIS-1 and ICARIS-2. Both driver and passenger seat belt use were associated with the respondent's age, sex, ethnicity, level of education, current marital status, and self-reported alcohol-impaired driving or riding with an alcohol-impaired driver. Drivers with children in the household living in the Northeast, North Central, and Southern census regions of the country were significantly more likely than those in the West to report wearing their seat belt less than always. Conclusions: While seat belt use rates are increasing, many more lives could be saved by more complete restraint use. Effective strategies for increasing seat belt use rates and decreasing the number of both fatal and nonfatal motor vehicle injuries include primary enforcement laws, enhanced enforcement of seat belt use laws, and child safety-seat distribution combined with education programs. abstract_id: PUBMED:35162545 Can Age or Height Define Appropriate Thresholds for Transition to Adult Seat Belts? An Analysis of Observed Seat Belt Fit in Children Aged 7-12 Years. This study aimed to investigate associations between demographic, anthropometric and vehicle factors and the fit of adult seat belts in children aged 7-12 years in passenger vehicles. Seat belt fit was assessed by inspection of 7-12-year-old children in their own cars. Logistic regressions examined associations of anthropometric and vehicle factors with achieving good seat belt fit. There were 40 participants included in the analysis, with 16 (40%) having good overall belt fit. The odds of achieving good overall seat belt fit increased by 15% (OR 1.15, 95% CI 1.04-1.27) with every centimeter increase in height and increased by 5% with every one-month increase in age (OR 1.045, 95% CI 1.001-1.10).
Controlling for vehicle factors, neither age nor height was significantly associated with overall good belt fit, and the discriminatory power of models including these metrics to predict good belt fit was 73% (AUC 0.73, 95% CI 0.55-0.91) and 74% (AUC 0.74, 95% CI 0.58-0.91). The results suggest that taller and older children have a better chance of achieving a good seat belt fit. However, with variations in seat geometry between vehicles, no single simple metric clearly defines an appropriate transition to the adult seat belt. abstract_id: PUBMED:30384085 Neighborhood-level factors affecting seat belt use. Despite the well-known safety benefits of seat belt use, some vehicle occupants still do not use them. This is a challenge in Tennessee, which has a lower seat belt use rate compared to the United States national average. Roadside observations and interviews are the two main sources for estimating seat belt use rate and have several limitations (e.g., small sample size, social desirability bias). To address these limitations, we attributed seat belt use of individuals who were involved in traffic crashes (N = 542,776) to their corresponding home-addresses. Home-addresses were retrieved from police crash reports and were geocoded, and assigned to their corresponding census tract revealing added information about the spatial distribution of seat belt use and socioeconomics of the areas surrounding the crash victim's home. The average seat belt use rate in the metropolitan area was 88% and for the non-metropolitan area was 87%. A Tobit model was used to evaluate the relationship between the seat belt use rate for both drivers and passengers over 16 years old, with neighborhood sociodemographic variables. Population, age cohorts, race, household vehicles' ownership, household size, and education were among the predictors of the seat belt use rate. Results of this analysis could be used in safety campaign design to reach specific geographic areas and groups with a lower seat belt use rate. abstract_id: PUBMED:9446249 The "seat belt" syndrome in pregnancy The submitted article deals with the seat belt syndrome and its possible effects on pregnancy. The term "seat belt" syndrome is used for the injuries caused by contact forces between the human body and the safety belt which result from their relative acceleration. Many studies have been published confirming the existence of this syndrome. The authors summarize the results of published data and present the possibilities of reducing the danger of this syndrome. abstract_id: PUBMED:24479100 Seat belt sign and its significance. Safety belts are the most important safety system in motor vehicles and, when worn, are intended to prevent serious injuries. However, in unusual circumstances (high velocity motor vehicle collisions) these safety measures (seat belts) can be the source and cause of serious injuries. The seat belt syndrome was first described by Garrett and Braunstein, but the term "seat belt sign" was discussed by Doersch and Dozier. Medical personnel involved in emergency care of trauma patients should be aware of the seat belt sign, and there should be a higher index of suspicion to rule out underlying organ injuries.
Kidd, Singer, Huey, and Kerfoot (2018) found that a gearshift interlock was more effective for increasing seat belt use than an intermittent audible reminder, but interlocks may not be more effective than persistent audible reminders lasting at least 90 seconds. Method: Forty-nine part-time belt users with a recent seat belt citation who self-reported not always using a seat belt drove two vehicles for 1 week each. Thirty-three drove a Chevrolet with an intermittent audible reminder followed by either a BMW with a persistent 90-second audible reminder (n = 17) or a Subaru with an incessant audible reminder (n = 16). The other 16 participants experienced the BMW persistent reminder followed by an interlock that limited speed to 15 mph during unbelted driving. These data were combined with data from 32 part-time belt users in Kidd et al. (2018) who experienced the intermittent reminder for 2 weeks or the intermittent reminder for 1 week and a gearshift interlock the next. Results: Relative to the intermittent reminder, seat belt use was significantly increased an estimated 30% by the BMW persistent reminder, 34% by the Subaru incessant reminder, and 33% by the speed-limiting interlock. Belt use was increased an estimated 16% by the gearshift interlock, but this change was not significant. More participants circumvented the speed-limiting interlock to drive unbelted than the audible reminders. Responses to a poststudy survey indicated that interlocks were less acceptable than reminders. Conclusions: Audible reminders lasting at least 90 seconds and a speed-limiting interlock were more effective for increasing seat belt use than an intermittent audible reminder, but reminders were found more acceptable. Practical applications: Strengthening existing U.S. safety standards to require audible reminders lasting at least 90 seconds for front-row occupants could save up to 1,489 lives annually. abstract_id: PUBMED:24433015 Use of seat belt and enforcement of seat belt laws in Pakistan. Objectives: Traffic crashes are a leading cause of deaths and injuries in Pakistan. The World Health Organization (WHO) estimates that approximately 30,131 people die in road crashes annually on Pakistani roads (WHO 2013). The use of seat belts can be effective in reducing road crash fatalities in Pakistan, but under existing seat belt laws, seat belt use is mandatory only for drivers and is limited to specific vehicles and roads. Primarily, this study aimed to measure seat belt use rates among Pakistani vehicle occupants. It was meant to investigate the factors influencing the compliance behavior of seat belt use among vehicle drivers. The second objective was to gauge the enforcement of seat belt laws and their effectiveness in Pakistan. Methods: A first-ever national survey was conducted for the systematic recording of seat belt use rates among Pakistani drivers and front passengers on 5 different kinds of roads. Factors influencing compliance with seat belt use were gauged through a questionnaire for 5 different kinds of roads. Commitment level of different traffic police organizations to enforce seat belt laws was also measured through their previous year's enforcement data. Results: Average seat belt use rates were 20%, with the highest on motorways (53%) and the lowest on rural roads (5%). Unawareness of the law/usefulness, seat belt not fitted, discomfort, forgetfulness, low speed, and careless attitude were major reasons for noncompliance with seat belt laws among Pakistani drivers.
Seat belt use rates were directly proportional to law enforcement strength and the National Highways and Motorways Police (NH&MP) showed maximum enforcement on M-2. There is a dire need to revise existing seat belt laws in order to raise seat belt use rates in Pakistan. abstract_id: PUBMED:19337593 The spectrum of seat belt syndrome among Canadian children: Results of a two-year population surveillance study. Background: Seat belts have been proven to save lives. However, if they are not properly fitted, 'seat belt syndrome' can occur. The aim of the present study was to describe injuries encountered in Canadian children with seat belt-associated injuries. Methods: Canadian paediatricians and paediatric subspecialists were surveyed monthly through the Canadian Paediatric Surveillance Program. Children younger than 18 years of age who were restrained in motor vehicles at the time of a collision, with abdominal or thoracolumbar spine injuries, were included. The children may have been restrained in child safety seats, booster seats, or two- or three-point seat belts. Results: Twenty-eight children, between two and 16 years of age, with injuries compatible with seat belt syndrome were reported in Canada between September 2003 and August 2005. Although 12 children were younger than eight years of age, only one was restrained in a booster seat and only four of the older children were properly restrained with a three-point seat belt. Twenty-four children had abdominal injuries. Of these, 18 had stomach and/or intestinal injuries and 11 had solid organ injuries. Twelve patients had a spinal fracture, including only five Chance-type fractures. Seven patients presented with paraplegia, and none of them recovered. Conclusion: In Canada, over a two-year period, 28 children were reported to have sustained injuries consistent with seat belt syndrome; seven of these children remained paraplegic. These results emphasize the necessity to review restraints in motor vehicles to adequately protect children. abstract_id: PUBMED:37267012 Increasing seat belt use in the United States by promoting and requiring more effective seat belt reminder systems. Objective: Federal Motor Vehicle Safety Standard (FMVSS) 208 requires every passenger vehicle to provide an auditory signal lasting 4 to 8 seconds and a visual display lasting 60 seconds when the driver is unbelted at ignition. This requirement does not increase seat belt use. This paper summarizes the latest research on using vehicle technology to increase seat belt use and existing safety standards worldwide to support the strengthening of FMVSS 208. Method: Studies of seat belt reminders and interlocks published in peer-reviewed journals, conference proceedings, or as technical reports were identified in online databases and reviewed along with current requirements worldwide. Results from past research were used to estimate the front- and rear-seat daytime belt use rate and the annual number of lives that could be saved by a persistent audible reminder at each seating position. Results: Most motor vehicle occupants routinely buckle up. Those that do not typically forget, are going a short distance, or find belts uncomfortable. Seat belt reminders can remind or motivate occupants to buckle up. Enhanced reminders that exceed FMVSS 208 increase belt use by 6 percentage points. Reminders also can increase rear belt use, and although required throughout the world, are not required by FMVSS 208.
More persistent reminders, like those required around the world, with a continuous, long-lasting audible signal increase belt use by 30% among drivers who do not routinely buckle up. If every vehicle in the U.S. had such a reminder at each seating position, then it was estimated that the daytime belt use rate in the U.S. would increase about 3 percentage points from 90.3% to 93.2% in the front row and by about 6 percentage points from 80% to 85.9% in the rear row. It was estimated that the increase in belt use from a continuous, long-lasting audible reminder could potentially save about 1,600 lives each year. Seat belt interlocks can increase belt use, but acceptance is a stumbling block. Public outcry ensued after interlocks were required in 1973, and public sentiment remains negative. Opinions toward front and rear reminders are more favorable. Furthermore, past research suggests interlocks may be no more effective for increasing seat belt use than persistent audible reminders. The effect of interlocks on rear belt use has not been explored. Conclusion: Persistent seat belt reminder systems that last at least 90 seconds can potentially save hundreds of U.S. motorists each year. Robust empirical evidence, successful exemplars from organizations throughout the world, and a clear public health benefit exist for strengthening FMVSS 208 to require more persistent audible reminders at every seating position. abstract_id: PUBMED:26436242 Children's and Adults' Comfort Experience of Extra Seat Belts When Riding in the Rear Seat of a Passenger Car. Objective: The objective of this study was to explore passengers' comfort experience of extra seat belts during on-road driving in the rear seat of a passenger car and to investigate how the use of extra belts affects children's and adults' attitudes to the product. Methods: Two different seat belt systems were tested, criss-cross (CC) and backpack (BP), consisting of the standard 3-point belt together with an additional 2-point belt. In total, 32 participants (15 children aged 6-10, 6 youths aged 11-15, and 11 adults aged 20-79, who differed considerably in size, shape, and proportions) traveled for one hour with each system, including city traffic and highway driving. Four video cameras monitored the test subject during the drive. Subjective data regarding emotions and perceived discomfort were collected in questionnaires every 20 min. A semistructured interview was held afterwards. Results: All participant groups accepted the new products and especially the increased feeling of safety (P < .01); 56% preferred CC and 44% preferred BP but the difference was not significant. In total, 81% wanted to have extra seat belts in their family car. CC was appreciated for its symmetry, comfort, and the perceived feeling of safety. Some participants found CC unpleasant because the belts tended to slip close to the neck, described as a strangling feeling. BP was simpler to use and did not cause annoyance to the neck in the way CC did. Instead, it felt asymmetric and to some extent less safe than CC. Body size and shape affected seat belt fit to a great extent, which in turn affected the experience of comfort, both initially and over time. Perceived safety benefit and experienced comfort were the most determinant factors for the attitude toward the extra seat belts. The extra seat belts were perceived as being better than the participants had expected before the test, and they became more used to them over time.
Conclusion: This exploratory study provided valuable knowledge from a user perspective for further development of new seat belt systems in cars. In addition to an increased feeling of safety, seat belt fit and comfort are supplementary influencing factors when it comes to gaining acceptance of new seat belt systems. Answer: Yes, the presence of children in the household does make a difference in adult seat belt use. According to the study conducted as part of the Second Injury Control and Risk Survey (ICARIS-2), an estimated 15.9% of drivers with children in their households did not always wear their seat belt when driving, and 17.5% of adult passengers with children in their households did not always wear their seat belt while riding (PUBMED:18836951). This suggests that having children in the household is associated with adult seat belt use, as the presence of children may influence adults to wear seat belts more consistently to set a good example and to protect the children in the event of a crash. The study also noted that seat belt use rates are increasing, and that effective strategies for further increasing use include primary enforcement laws, enhanced enforcement of seat belt use laws, and child safety-seat distribution combined with education programs (PUBMED:18836951).
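The belt-use projections in the reminder abstract above (PUBMED:37267012) follow from simple arithmetic: if a persistent reminder converts roughly 30% of current non-users into users, baseline use rates of 90.3% (front) and 80% (rear) rise to about 93.2% and 86%. The Python sketch below reproduces that calculation; the function and the assumption that the 30% figure applies uniformly to all non-users are ours, not the authors', and the small gap against the published rear-row figure (85.9%) suggests the study used slightly different inputs.

```python
# Back-of-the-envelope check of the belt-use projections in PUBMED:37267012.
# Assumption (ours): a persistent reminder converts ~30% of non-users to users.

def projected_use(baseline_pct: float, conversion_rate: float = 0.30) -> float:
    """Projected belt use if `conversion_rate` of current non-users buckle up."""
    non_users_pct = 100.0 - baseline_pct
    return baseline_pct + conversion_rate * non_users_pct

front = projected_use(90.3)  # ~93.2%, matching the abstract's front-row estimate
rear = projected_use(80.0)   # ~86.0%, close to the abstract's 85.9% rear-row figure
print(f"front row: {front:.1f}%, rear row: {rear:.1f}%")
```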
Instruction: Nonadministration of thromboprophylaxis in hospitalized patients with HIV: a missed opportunity for prevention? Abstracts: abstract_id: PUBMED:24464775 Nonadministration of thromboprophylaxis in hospitalized patients with HIV: a missed opportunity for prevention? Background: Hospitalized patients with human immunodeficiency virus (HIV) appear to be at increased risk of venous thromboembolism. Previous work at our institution has demonstrated that the proportion of doses administered varies between patients and locations. Objective: To compare the proportion of doses of thromboprophylaxis not administered between patients with and without HIV. Design: Using retrospective data, the proportion of nonadministered doses was determined in all hospitalized adults and stratified by HIV status. Setting: Large, urban, academic medical center in Baltimore, Maryland. Patients: Data were available for 4947 patient visits, 583 of which were by patients with HIV. Most visits by patients with HIV were to a designated HIV care unit. Measurements: Proportion of doses of thromboprophylaxis not administered, and documented reasons for dose nonadministration. Results: A total of 42,870 doses were prescribed. The proportion of doses not administered was greater for patients with HIV (23.5%) compared with patients without HIV (16.1%, odds ratio [OR]: 1.59, 95% confidence interval [CI]: 1.49-1.70, P < 0.001). Documented dose refusal accounted for a greater proportion of nonadministered doses in patients with HIV (15.9% vs 10.8%, OR: 1.56, 95% CI: 1.43-1.70, P < 0.0001). On the HIV care unit, the proportion of doses not administered was greater for patients with HIV (26.4% vs 13.1%, OR: 2.39, 95% CI: 1.93-2.96, P < 0.001). Within this unit, documented dose refusal was greater for patients with HIV (13.7% vs 10.7%, OR: 1.32, 95% CI: 1.16-1.51, P < 0.0001). Conclusions: Nonadministration and documented refusal of thromboprophylaxis appear to be more common in patients with HIV at our institution. abstract_id: PUBMED:33507453 Nonadministration of pharmacologic venous thromboembolism prophylaxis is less common in hospitalized patients with COVID-19. Introduction: The incidence of venous thromboembolism (VTE) in patients hospitalized with COVID-19 is higher than in most other hospitalized patients. Nonadministration of pharmacologic VTE prophylaxis is common and is associated with VTE events. Our objective was to determine whether nonadministration of pharmacologic VTE prophylaxis is more common in patients with COVID-19 versus other hospitalized patients. Materials And Methods: In this retrospective cohort analysis of all adult patients discharged from the Johns Hopkins Hospital between March 1 and May 12, 2020, we compared demographic, clinical characteristics, VTE outcomes, prescription and administration of VTE prophylaxis between COVID-19 positive, negative, and not tested groups. Results: Patients who tested positive for COVID-19 were significantly older, and more likely to be Hispanic, have a higher median body mass index, have longer hospital length of stay, require mechanical ventilation, develop pulmonary embolism and die (all p < 0.001). COVID-19 patients were more likely to be prescribed (aOR 1.51, 95% CI 1.38-1.66) and receive all doses of prescribed pharmacologic VTE prophylaxis (aOR 1.48, 95% CI 1.36-1.62). The number of patients who missed at least one dose of VTE prophylaxis and developed VTE was similar between the three groups (p = 0.31).
Conclusions: It is unlikely that high rates of VTE in COVID-19 are due to nonadministration of doses of pharmacologic prophylaxis. Hence, we should prioritize research into alternative approaches to optimizing VTE prevention in patients with COVID-19. abstract_id: PUBMED:33733028 Validation of the IMPROVE-DD risk assessment model for venous thromboembolism among hospitalized patients with COVID-19. Background: Antithrombotic guidance statements for hospitalized patients with coronavirus disease 2019 (COVID-19) suggest a universal thromboprophylactic strategy with potential to escalate doses in high-risk patients. To date, no clear approach exists to discriminate patients at high risk for venous thromboembolism (VTE). Objectives: The objective of this study is to externally validate the IMPROVE-DD risk assessment model (RAM) for VTE in a large cohort of hospitalized patients with COVID-19 within a multihospital health system. Methods: This retrospective cohort study evaluated the IMPROVE-DD RAM on adult inpatients with COVID-19 hospitalized between March 1, 2020, and April 27, 2020. Diagnosis of VTE was defined by new acute deep venous thrombosis or pulmonary embolism by Radiology Department imaging or point-of-care ultrasound. The receiver operating characteristic (ROC) curve was plotted and area under the curve (AUC) calculated. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated using standard methods. Results: A total of 9407 patients were included, with a VTE prevalence of 2.9%. The VTE rate was 0.4% for IMPROVE-DD score 0-1 (low risk), 1.3% for score 2-3 (moderate risk), and 5.3% for score ≥ 4 (high risk). Approximately 45% of the total population scored high VTE risk, while 21% scored low VTE risk. IMPROVE-DD discrimination of low versus medium/high risk showed sensitivity of 0.971, specificity of 0.218, PPV of 0.036, and NPV of 0.996. ROC AUC was 0.702. Conclusions: The IMPROVE-DD VTE RAM demonstrated very good discrimination to identify hospitalized patients with COVID-19 as low, moderate, and high VTE risk in this large external validation study with potential to individualize thromboprophylactic strategies. abstract_id: PUBMED:31042860 Effectiveness of Sequential Compression Devices in Prevention of Venous Thromboembolism in Medically Ill Hospitalized Patients: A Retrospective Cohort Study Objective: To evaluate the effectiveness of sequential compression devices (SCDs) for venous thromboembolism (VTE) prevention in medically ill hospitalized patients. Materials And Methods: Adult patients admitted to a teaching hospital from April 2015 to March 2016 were included. Patients on anticoagulants with or without SCDs were excluded. We analyzed VTE risk, length of hospital stay, and other comorbidities among propensity score-matched patients on SCDs and those without thromboprophylaxis (NONE). Results: Among 30,824 patients, 67 patients (0.22%) developed VTE during their hospital stays, with deep vein thrombosis (DVT) in 55 cases and pulmonary embolism (PE) in 12. VTE was seen in 47 out of 20,018 patients on SCDs (41 DVT, 6 PE) and 20 out of 10,819 patients without SCDs (14 DVT, 6 PE). Risk-adjusted analysis showed no significant difference in VTE incidence in the SCD group compared to NONE (odds ratio 0.99, 95% confidence interval 0.57-1.73, p=0.74). Conclusion: Compared to the NONE group, SCDs are not associated with decreased VTE incidence during hospital stay. 
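The IMPROVE-DD validation abstract above (PUBMED:33733028) reports sensitivity, specificity, PPV, and NPV for discriminating low from moderate/high VTE risk. As a minimal sketch, the 2x2 cell counts below are back-calculated from the stated N = 9407 and 2.9% prevalence, so they are approximations rather than the study's actual data; the metric definitions themselves are standard.

```python
# Recovering the IMPROVE-DD discrimination metrics in PUBMED:33733028 from an
# approximate 2x2 table. "Positive" = score >= 2 (moderate/high risk);
# "negative" = score 0-1 (low risk). Counts are back-calculated estimates.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard screening metrics from true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

tp, fn = 265, 8        # ~273 VTE cases (2.9% of 9407); 97.1% scored >= 2
tn, fp = 1991, 7143    # ~9134 patients without VTE; 21.8% scored 0-1

for name, value in diagnostic_metrics(tp, fp, fn, tn).items():
    print(f"{name}: {value:.3f}")
# Matches the reported 0.971 / 0.218 / 0.036 / 0.996 to three decimals.
```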
abstract_id: PUBMED:24384102 Lack of evidence to support thromboprophylaxis in hospitalized medical patients with cancer. Background: The administration of anticoagulant thromboprophylaxis for all patients with cancer who are hospitalized for acute medical illness is considered standard practice and strongly recommended in clinical guidelines. These recommendations are extrapolated from randomized controlled prophylaxis trials not specifically conducted in cancer cohorts. Because hospitalized patients with cancer constitute a unique population with increased risk of venous thromboembolic events and major hemorrhage, validation of the efficacy and safety of primary thromboprophylaxis in this population is critical. We sought to summarize the rates of venous thromboembolic events and major bleeding episodes among hospitalized patients with cancer who were receiving anticoagulant therapy compared with placebo. Methods: A systematic literature search strategy was conducted using MEDLINE, EMBASE, and the Cochrane Register of Controlled Trials. Two reviewers independently extracted data onto standardized forms. The primary end points were all venous thromboembolic events. Secondary end points included major bleeding episodes and symptomatic venous thromboembolic events. Pooled analysis with relative risk using a random effect model was used as the primary measurement. Results: A total of 242 citations were identified by the literature search. Of these, 3 placebo-controlled randomized trials included venous thromboembolic events as a primary outcome and were analyzed according to cancer subgroups. The pooled relative risk of venous thromboembolic events was 0.91 (95% confidence interval, 0.21-4.0; I²: 68%) among hospitalized patients with cancer who were receiving thromboprophylaxis compared with placebo. None of the trials reported the rates of symptomatic venous thromboembolic events or major bleeding episodes according to cancer status. Conclusions: The risks and benefits of primary thromboprophylaxis with anticoagulant therapy in hospitalized patients with cancer are not known. This is especially relevant because numerous Medicare-type pay-for-performance incentives mandate prophylaxis specifically in patients with cancer. abstract_id: PUBMED:35592209 Bedside Peripheral Ultrasound-Guided Vascular Access in 253 Patients Hospitalized With COVID-19 Pneumonia: A Retrospective Italian Study. Background Several studies have recommended the use of vascular access in the treatment of COVID-19 patients. However, little is known about the utility and safety of using a peripheral ultrasound-guided vascular access device (UGVAD) at the bedside of hospitalized COVID-19 patients. To examine this, a retrospective monocenter study was carried out at the oncology-hematology department of Azienda Sanitaria di Piacenza, Italy. Methods We retrospectively analyzed data from three general hospitals in a district in North Italy on the positioning of UGVADs used with hospitalized COVID-19 patients. The positioning of the VAD was performed by a dedicated team using ultrasound guidance. The primary endpoint was the duration of VAD until the patient's recovery or death. The secondary endpoints were complications of the use of VADs, which included vein thrombosis, infections, device malfunction, and viral contamination of the operators. Results Between February 21, 2020, and April 30, 2020, 253 consecutive hospitalized patients with COVID-19 pneumonia underwent UGVAD positioning.
A midline was inserted in 88.53% of the patients, while peripheral central venous catheters and femoral central catheters were inserted in 9.88% and 1.59% of the patients, respectively. The mean lifespan of the VADs was 10.36±9.96 days (range: 1-73). Primary endpoint: The use of the VAD allowed the planned treatment in 92.88% of the patients; in the remaining 7.12%, the VAD was repositioned. Secondary endpoints: Complications of VAD were registered in 15.02% of the patients (dislocation, 9.49%; infection, 1.98%; thrombosis, 1.58%; occlusion, 1.19%; and malfunction, 0.79%). No contamination of the operators was registered. Discussion and conclusion: With the limitation of being a retrospective study, our report suggests that ultrasound-guided positioning of VAD may allow the safe clinical management (drug infusion, hydration, parenteral nutrition, and phlebotomy) of hospitalized COVID-19 patients. The observance of recommended procedures protected all operators from infection. abstract_id: PUBMED:36927371 Low awareness of venous thromboembolism among neurosurgical hospitalized patients: a cross-sectional study. Background: Venous thromboembolism (VTE), including deep venous thrombosis (DVT) and pulmonary embolism (PE), is a serious cause of patient morbidity and mortality in hospitals. Neurosurgical hospitalized patients have higher rates of immobility and bed rest, thus increasing their risk of developing VTE. This highlights the need for their thromboprophylaxis regimens. Patients' awareness of VTE is essential for promoting strategies such as early ambulation and encouraging self-assessment and self-reporting of VTE signs and symptoms. This study evaluated neurosurgical hospitalized patients' awareness of VTE and explored the influencing factors to provide a theoretical basis for nursing intervention. Methods: We selected one tertiary level hospital in Hunan Province and randomly sampled eligible patients from each of five neurosurgical units. We conducted a cross-sectional survey of the hospitalized patients of neurosurgery using the self-designed and validated VTE knowledge questionnaire, and the influencing factors were analyzed using SPSS 26.0. Results: A total of 386 neurosurgical hospitalized patients completed the survey. The score of VTE knowledge in neurosurgical hospitalized patients was 13.22 (SD = 11.52). 36.0% and 21.2% of participants reported they had heard of DVT and PE, respectively. 38.9% of participants were unable to correctly identify any symptoms of VTE. The most frequently identified risk factor was 'immobility or bed rest for more than three days' (50.0% of participants), and 38.1% of patients agreed that PE could cause death. 29.5% of participants were unable to identify any prophylactic measures of VTE. The results of Negative Binomial Regression showed that the influencing factors of VTE knowledge in neurosurgical hospitalized patients were education level (P < 0.004) and sources of information related to VTE, including nurses (95% CI = 2.201-4.374, P < 0.001), family members/friends (95% CI = 2.038-4.331, P < 0.001), and Internet/TV (95% CI = 1.382-2.834, P < 0.001). Other sources included patient pamphlets/posters/professional books (95% CI = 1.492-3.350, P < 0.001). Conclusions: This study demonstrates the lack of awareness of VTE among neurosurgical hospitalized patients.
More attention must be paid to delivering VTE education tailored to the different characteristics of neurosurgical hospitalized patients, so as to ensure safe and high-quality patient care. abstract_id: PUBMED:29703486 Updated clinical models for VTE prediction in hospitalized medical patients. Venous thromboembolism (VTE) occurring in hospitalized medical patients is associated with increased length of hospitalization, high rate of acute care hospital transfer, longer inpatient rehabilitation and multiplication of health-care costs. Identification of acutely ill hospitalized medical patients eligible for thromboprophylaxis is a sophisticated process. Global VTE risk stems from the combination of predictors related to the acute medical illness, comorbidities, associated treatments and patients' intrinsic risk factors. Emerging clinical risk factors related to underlying pathologies should be considered when VTE risk is assessed. The Padua Prediction Score (PPS), the International Medical Prevention Registry on Venous Thromboembolism (IMPROVE-RAM) and the Geneva Risk Score are three robust risk assessment models (RAM) which underwent extensive external validation in cohorts of acutely ill hospitalized medical patients. The development of the IMPROVE bleeding risk assessment model and the identification of D-Dimer increase as a biomarker-predictor of VTE are some steps forward for personalized thromboprophylaxis. The beneficial impact of the RAMs in VTE prevention is already evident in the decrease of in-hospital VTE rates when RAMs are incorporated in electronic alert systems. abstract_id: PUBMED:29119147 Betrixaban: a direct oral inhibitor of activated factor X for the prophylaxis of venous thromboembolism in patients hospitalized for acute medical illness. Venous thromboembolism (VTE), comprising deep vein thrombosis and pulmonary embolism, is a serious clinical and public health concern. Hospitalization is a major risk factor for developing VTE. Hospital-associated events account for more than 50% of all cases of VTE. Heparins have been demonstrated to be efficacious in the prevention of VTE in medically ill patients. Despite the demonstrated efficacy and safety of the available direct oral anticoagulants in the prevention and treatment of different thromboembolic conditions, their net benefit in the prevention of VTE in hospitalized medically ill patients has not been fully confirmed. Betrixaban is an oral, specific and direct inhibitor of human coagulation factor Xa with demonstrated efficacy and safety for the prevention of VTE in patients undergoing total knee replacement and in patients with nonvalvular atrial fibrillation. Recent studies have successfully evaluated betrixaban 80 mg once daily in the prevention of VTE in acutely ill medical patients in a large phase III trial. This review will address preclinical pharmacology and main aspects of the clinical development of betrixaban as an antithrombotic agent, with specific attention to recent studies on the prophylaxis of VTE in a specific population of patients hospitalized for acute medical illnesses. abstract_id: PUBMED:36926396 Retroperitoneal and abdominal bleeding in anticoagulated COVID-19 hospitalized patients: Case series and brief literature review. Background: Hospitalized and severely ill coronavirus disease 2019 (COVID-19) patients necessitate prophylactic or therapeutic anticoagulation to minimize the risk of thrombosis at different sites.
Life-threatening bleeding complications include spontaneous iliopsoas hematoma, peritoneal bleeding, and extra-abdominal manifestations such as intracranial hemorrhage. Case Summary: Bleeding in the abdominal wall results in less severe complications than seen with iliopsoas hematoma or peritoneal bleeding. In our case series of 9 patients, we present retroperitoneal and abdominal bleeding complications following anticoagulation in hospitalized COVID-19 patients with severe acute respiratory syndrome coronavirus 2 pneumonia. Contrast-enhanced computed tomography (CE-CT) is the best imaging modality for assessing hematoma secondary to anticoagulation and determines the therapeutic approach, whether interventional, surgical, or conservative management. Conclusion: We present the role of CE-CT for rapid and precise localization of the bleeding site and prognostic counseling. Finally, we provide a brief review of the literature. Answer: The nonadministration of thromboprophylaxis in hospitalized patients with HIV indeed represents a missed opportunity for prevention. Hospitalized patients with HIV are at an increased risk of venous thromboembolism (VTE), and the study conducted at a large, urban, academic medical center in Baltimore, Maryland, found that the proportion of nonadministered doses of thromboprophylaxis was greater for patients with HIV (23.5%) compared to patients without HIV (16.1%). This suggests that patients with HIV were less likely to receive the thromboprophylaxis doses prescribed to them. Moreover, documented dose refusal accounted for a greater proportion of nonadministered doses in patients with HIV (15.9% vs 10.8%). The findings indicate that nonadministration and documented refusal of thromboprophylaxis are more common in patients with HIV at the institution studied (PUBMED:24464775). This missed opportunity for prevention is concerning given the increased risk of VTE in this population. It is essential to understand and address the reasons behind the nonadministration and refusal of thromboprophylaxis to improve patient outcomes. The study's results highlight the need for targeted interventions to ensure that hospitalized patients with HIV receive appropriate VTE prophylaxis to reduce their risk of thromboembolic events.
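The odds ratios quoted in the answer above come from 2x2 comparisons of administered versus non-administered doses. As a minimal sketch: the abstract gives the overall total (42,870 doses) but not the per-group split, so the dose counts below are hypothetical values chosen to be consistent with the reported proportions (23.5% vs 16.1%); the odds-ratio and Wald confidence-interval formulas are standard.

```python
# Sketch of the odds-ratio comparison in PUBMED:24464775. The split of the
# 42,870 prescribed doses between HIV and non-HIV groups is hypothetical,
# chosen only to reproduce the reported nonadministration proportions.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI for a 2x2 table [[a, b], [c, d]]."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

hiv_missed, hiv_given = 1175, 3825        # 23.5% of a hypothetical 5,000 doses
other_missed, other_given = 6097, 31773   # 16.1% of the remaining 37,870 doses

or_, lo, hi = odds_ratio_ci(hiv_missed, hiv_given, other_missed, other_given)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # ~1.60, near the reported 1.59
```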
Instruction: Brain CT infarction in patients with carotid atheroma. Does it predict a future event? Abstracts: abstract_id: PUBMED:11533517 Brain CT infarction in patients with carotid atheroma. Does it predict a future event? Background: The aim of the present study was to investigate the predictive value for subsequent stroke of different patterns of brain CT infarction in patients with carotid atheroma. Methods: Prospective study of 138 patients, with 138 carotid plaques, each producing, on presentation, a greater than 50 percent stenosis on duplex scanning and either asymptomatic or associated with ipsilateral (to the plaque) amaurosis fugax (AF) or a hemispheric transient ischaemic attack (HTIA). This carotid artery defined the side of interest. All patients had a brain CT scan on presentation and subsequently were followed for a period of 1-5 years (mean 3.14 years). The baseline CT neurovascular findings on the side of interest were classified as pattern A (discrete subcortical and cortical infarctions), pattern B (haemodynamic infarctions, widespread white matter lesions, basal ganglia infarctions and lacunae) and normal CT. Results: On follow-up, 5/27 (18.5 percent) of patients with pattern A, 4/38 (10.5 percent) with pattern B and 3/73 (4.1 percent) with normal CT appearance developed stroke in the hemisphere of interest (Cox regression: p=0.02). Conclusions: Pattern A confers an unfavourable prognosis in patients with carotid atheroma who are either asymptomatic or presented with amaurosis fugax or hemispheric transient ischaemic attacks. abstract_id: PUBMED:11174786 Patterns of brain computed tomography infarction and carotid plaque echogenicity. Purpose: It was suggested that in the absence of cardioembolism the discrete subcortical and cortical infarctions on brain computed tomography (CT) are most likely associated with carotid atheroma, whereas the hemodynamic infarctions, diffuse widespread white matter lesions, lacunae and basal ganglia infarctions are most likely associated with other pathologic conditions. The aim of this study was to determine the ultrasonic characteristics of carotid plaques and the degree of stenosis that were associated with the different brain CT infarction patterns and normal CT (pattern A, discrete subcortical and cortical infarctions; pattern B, hemodynamic infarctions, diffuse widespread white matter lesions, lacunae and basal ganglia infarctions). Methods: Four hundred nineteen carotid plaques (315 patients), producing 50% to 99% stenosis on duplex scanning, were studied. These plaques were imaged on duplex scanning, captured, digitized, and normalized (standardized) in a computer. Subsequently, their gray scale median (GSM) was evaluated to distinguish quantitatively the hypoechoic (low GSM) from the hyperechoic (high GSM) plaques. The brain CT infarction patterns of A, B, or normal CT on the ipsilateral hemisphere were noted. Results: The pattern A brain CT infarction was associated with carotid plaques having median GSM of 11 and median degree of stenosis of 80%, as contrasted with pattern B (median GSM, 28.5; median degree of stenosis, 75%) or normal CT (median GSM, 22; median degree of stenosis, 75%) (Kruskal-Wallis test, P < .001 for the GSM and P = .002 for the degree of stenosis). In the logistic regression analysis only the GSM and not the degree of stenosis distinguished the plaques associated with the three CT patterns.
Conclusions: The pattern A brain CT infarction was associated with hypoechoic plaques suggesting an involvement of extracranial carotid artery embolization, whereas the pattern B was associated with hyperechoic plaques suggesting an involvement of other mechanisms (hemodynamic, intracranial small and large vessel disease). abstract_id: PUBMED:22859283 Clinical risk factors and CT imaging features of carotid atherosclerotic plaques as predictors of new incident carotid ischemic stroke: a retrospective cohort study. Background And Purpose: Parameters other than luminal narrowing are needed to predict the risk of stroke more reliably, particularly in patients with <70% stenosis. The goal of our study was to identify clinical risk factors and CT features of carotid atherosclerotic plaques, in a retrospective cohort of patients free of stroke at baseline, that are independent predictors of incident stroke on follow-up. Materials And Methods: We identified a retrospective cohort of patients admitted to our emergency department with suspected stroke between 2001 and 2007 who underwent a stroke work-up including a CTA of the carotid arteries that was subsequently negative for acute stroke. All patients also had to receive a follow-up brain study at least 2 weeks later. From a random sample, we reviewed charts and imaging studies of patients with subsequent new stroke on follow-up as well as those who remained stroke-free. All patients were classified either as "new carotid infarct patients" or "no-new carotid infarct patients" based on the Causative Classification for Stroke. Independently, the baseline CTA studies were processed using a custom, CT-based automated computer classifier algorithm that quantitatively assesses a set of carotid CT features (wall thickness, plaque ulcerations, fibrous cap thickness, lipid-rich necrotic core, and calcifications). Univariate and multivariate statistical analyses were used to identify any significant differences in CT features between the patient groups in the sample. Subsequent ROC analysis allowed comparison to the classic NASCET stenosis rule in identifying patients with incident stroke on follow-up. Results: We identified a total of 315 patients without a new carotid stroke between baseline and follow-up, and 14 with a new carotid stroke between baseline and follow-up, creating the main comparison groups for the study. Statistical analysis showed age and use of antihypertensive drugs to be the most significant clinical variables, and maximal carotid wall thickness was the most relevant imaging variable. The use of age ≥ 75 years, antihypertensive medication use, and a maximal carotid wall thickness of at least 4 mm was able to successfully identify 10 of the 14 patients who developed a new incident infarct on follow-up. ROC analysis showed an area under the ROC curve of 0.706 for prediction of new stroke with this new model. Conclusions: Our new paradigm of using age ≥ 75 years, history of hypertension, and carotid maximal wall thickness of >4 mm identified most of the patients with subsequent new carotid stroke in our study. It is simple and may help clinicians choose the patients at greatest risk of developing a carotid infarct, warranting validation with a prospective observational study. abstract_id: PUBMED:18756475 Carotid plaque computed tomography imaging in stroke and nonstroke patients. Objective: To identify a set of computed tomographic (CT) features of carotid atherosclerotic plaques that is significantly associated with ischemic stroke.
Methods: In a cross-sectional study, we retrospectively identified 136 consecutive patients admitted to our emergency department with suspected stroke who underwent a CT-angiogram of the carotid arteries. CT-angiographic studies of the carotid arteries were processed automatically using an automated computer classifier algorithm that quantitatively assesses a battery of carotid CT features. Acute stroke patients were categorized into "acute carotid stroke patients" and "nonacute carotid stroke patients" independent of carotid wall CT features, using the Causative Classification System for Ischemic Stroke, which includes the neuroradiologist's review of the imaging studies of the brain parenchyma and of the degree of carotid stenosis, and charted test results (such as electrocardiogram). Univariate followed by multivariate analyses were used to build models to differentiate between these patient groups and to differentiate between the infarct and unaffected sides in the "acute carotid stroke patients." Results: Forty "acute carotid stroke" patients and 50 "nonacute carotid stroke" patients were identified. Multivariate modeling identified a small number of the carotid wall CT features that were significantly associated with acute carotid stroke, including wall volume, fibrous cap thickness, number and location of lipid clusters, and number of calcium clusters. Interpretation: Patients with acute carotid stroke demonstrate significant differences in the appearance of their carotid wall ipsilateral to the side of their infarct, when compared with either nonacute carotid stroke patients or the carotid wall contralateral to the infarct side. abstract_id: PUBMED:30439395 Association between carotid artery plaque inflammation and brain MRI. Purpose: To explore the association between the presence of inflammatory cells in surgically treated carotid plaques and brain MRI findings. Material And Methods: Forty consecutive patients were prospectively analyzed. Brain MRI was performed with a 1.5 Tesla scanner, and infarcts (lacunar and non-lacunar) in the territory of the anterior circulation were recorded. All patients underwent carotid endarterectomy "en bloc"; carotid plaque histological sections were prepared and immuno-cytochemical analysis was performed to characterize and quantify the presence of inflammatory cells. ROC curve analysis, Pearson Rho correlation and Mann-Whitney test were applied. Results: The immuno-cytochemical analysis demonstrated that plaques of symptomatic patients (stroke/TIA; n = 25) had more inflammatory cells, mainly macrophages (CD68), compared with plaques of patients without symptoms (Mann-Whitney test, P < 0.001; ROC curve area = 0.901). Correlation analysis showed a statistically significant association between the number of brain non-lacunar infarcts and the extent of macrophage infiltration (P < 0.001); whereas no association with lacunar infarcts (P = 0.1934) was found. Conclusion: Results of this preliminary study suggest that the presence and amount of inflammatory cells within carotid artery plaque is associated with cerebrovascular events and with the number of infarcts detectable on brain MRI. abstract_id: PUBMED:21313759 Brain infarction in the internal carotid artery bed in atherosclerosis. Gross and microscopic studies of the brain and its vascular system were conducted on 139 autopsy specimens of brain infarction in atherosclerosis. Of them, 102 specimens displayed 193 infarctions of various extent and duration in the bed of the internal carotid artery (ICA) and its branches.
The infarctions were solitary and multiple in 56 and 46 cases, respectively. The infarctions identified were extensive (n = 22), large (n = 26), medium-sized (n = 86), small deep (n = 38), and small superficial (n = 21). The occurrence of the larger (extensive, large, and medium-sized) infarctions was pathogenetically associated with obliterating atherothrombosis and with cardiogenic or, less frequently, arterio-arterial thromboembolism of the ICA and its main branches. Atherosclerotic small deep and small superficial infarctions, as well as a number of medium-sized infarctions, occurred mainly through the mechanism of cerebrovascular insufficiency in tandem carotid atherostenosis. The extent and location of brain infarction are greatly influenced by the site of hemodynamically significant carotid artery atherostenosis or thrombosis, the state of the collateral circulation, the structure of the arterial circle of Willis, the state of the heart, and other factors. abstract_id: PUBMED:28705824 Carotid Plaque Morphology and Ischemic Vascular Brain Disease on MRI. Background And Purpose: Vulnerable carotid plaque components are reported to increase the risk of cerebrovascular events. Yet, the relation between plaque composition and subclinical ischemic brain disease is not known. We studied, in the general population, the association between carotid atherosclerotic plaque characteristics and ischemic brain disease on MR imaging. Materials And Methods: From the population-based Rotterdam Study, 951 participants underwent both carotid MR imaging and brain MR imaging. The presence of intraplaque hemorrhage, lipid core, and calcification and measures of plaque size was assessed in both carotid arteries. The presence of plaque characteristics in relation to lacunar and cortical infarcts and white matter lesion volume was investigated and adjusted for cardiovascular risk factors. Stratified analyses were conducted to explore effect modification by sex. Additional analyses were conducted per carotid artery in relation to vascular brain disease in the ipsilateral hemisphere. Results: Carotid intraplaque hemorrhage was significantly associated with the presence of cortical infarcts (OR, 1.9; 95% confidence interval, 1.1-3.3). None of the plaque characteristics were related to the presence of lacunar infarcts. Calcification was the only characteristic that was associated with higher white matter lesion volume. There was no significant interaction by sex. Conclusions: The presence of carotid intraplaque hemorrhage on MR imaging is independently associated with MR imaging-defined cortical infarcts, but not with lacunar infarcts. Plaque calcification, but not vulnerable plaque components, is related to white matter lesion volume. abstract_id: PUBMED:35897114 The 2nd European Carotid Surgery Trial (ECST-2): rationale and protocol for a randomised clinical trial comparing immediate revascularisation versus optimised medical therapy alone in patients with symptomatic and asymptomatic carotid stenosis at low to intermediate risk of stroke. Background: Carotid endarterectomy is currently recommended for patients with recently symptomatic carotid stenosis ≥50%, based on randomised trials conducted 30 years ago. Several factors such as carotid plaque ulceration, age and associated comorbidities might influence the risk-benefit ratio of carotid revascularisation. A model developed in previous trials that calculates the future risk of stroke based on these features can be used to stratify patients into low, intermediate or high risk.
Since the original trials, medical treatment has improved significantly. Our hypothesis is that patients with carotid stenosis ≥50% associated with a low to intermediate risk of stroke will not benefit from additional carotid revascularisation when treated with optimised medical therapy (OMT). We also hypothesise that prediction of future risk of stroke in individual patients with carotid stenosis can be improved using the results of magnetic resonance imaging (MRI) of the carotid plaque. Methods: Patients are randomised between immediate revascularisation plus OMT versus OMT alone. Suitable patients are those with asymptomatic or symptomatic carotid stenosis ≥50% with an estimated 5-year risk of stroke of <20%, as calculated using the Carotid Artery Risk score. MRI of the brain at baseline and during follow-up will be used as a blinded measure to assess the incidence of silent infarction and haemorrhage, while carotid plaque MRI at baseline will be used to investigate the hypotheses that plaque characteristics determine future stroke risk and help identify a subgroup of patients that will benefit from revascularisation. An initial analysis will be conducted after recruitment of 320 patients with baseline MRI and a minimum of 2 years of follow-up, to provide data to inform the design and sample size for a continuation or re-launch of the study. The primary outcome measure of this initial analysis is the combined 2-year rate of any clinically manifest stroke, new cerebral infarct on MRI, myocardial infarction or periprocedural death. Discussion: ECST-2 will provide new data on the efficacy of modern optimal medical therapy alone versus added carotid revascularisation in patients with carotid stenosis at low to intermediate risk of future stroke selected by individualised risk assessment. We anticipate that the results of baseline brain and carotid plaque MRI will provide data to improve the prediction of the risk of stroke and the effect of treatment in patients with carotid stenosis. Trial Registration: ISRCTN registry ISRCTN97744893. Registered on 05 July 2012. abstract_id: PUBMED:35193028 Association of Carotid Intima-Media Thickness with Brain MRI Markers in the Atherosclerosis Risk in Communities Neurocognitive Study (ARIC-NCS). Objective: Elevated carotid intima-media thickness (cIMT) and carotid plaque are markers of arterial injury and may be linked to structural brain injury. We hypothesized that cIMT or presence of carotid plaque at midlife are associated with presence of infarcts and cerebral microbleeds, greater white matter hyperintensity (WMH) volume, and smaller regional brain volumes in late-life. Methods: We included 1,795 Atherosclerosis Risk in Communities (ARIC) Study participants (aged 57±6 years, 57% female, 23% Black) with carotid ultrasounds in 1990-1992 and brain MRI scans in 2011-2013. Weighted linear regression was used for brain volume outcomes, while logistic regression was used for infarcts and cerebral microbleeds. Results: After multivariable adjustments, the highest cIMT quintile was associated with smaller deep gray matter (β [95% CI]: -0.11 [-0.22, -0.01]) and cortical volume in a temporal-parietal meta region of interest (ROI) (β [95% CI]: -0.10 [-0.20, -0.01]) in late-life. Similarly, those with carotid plaque had smaller regional brain volumes than those without (βs [95% CIs]: -0.05 [-0.12, 0.03] and -0.06 [-0.13, 0.01] for deep gray matter and temporal-parietal meta ROI).
No significant relations were observed with WMH volume, infarcts, or cerebral microbleeds. Conclusion: Over a median follow-up of 21 years, greater midlife cIMT and presence of carotid plaque were associated with smaller deep gray matter volume and cortical volume in a meta ROI involving temporal and parietal lobe regions typically involved in neurodegeneration, including Alzheimer's disease, in later life. Contrary to our hypothesis, associations between measures of arterial injury and markers of vascular brain injury were null. abstract_id: PUBMED:29571754 Carotid Intraplaque Hemorrhage in Patients with Embolic Stroke of Undetermined Source. Background: Many embolic strokes are of undetermined source (ESUS). Carotid artery intraplaque hemorrhage (IPH), an unstable component of atherosclerosis, may be an under-recognized etiology in patients with ESUS. We investigated the prevalence of carotid IPH detected noninvasively by magnetic resonance imaging (MRI). Methods: This pilot study analyzed data from a prospective cohort of patients with a recent ESUS who underwent MRI for carotid IPH assessment. All patients had carotid artery stenosis of less than 50%. The primary outcome was the presence of carotid IPH ipsilateral to the cerebral ischemic event. Results: The cohort comprised 35 consecutive patients with a recent carotid-territory ESUS who underwent carotid MRI (mean age 74.3 ± 9.6 years). We found ipsilateral and contralateral IPH in 7 of 35 patients (20.0%) and in 3 of 35 patients (8.6%), respectively (P = .005). Conclusions: In this sample of patients with ESUS, 1 in 5 had carotid IPH ipsilateral to their acute infarct, as detected by MRI of the vessel wall. Further studies are warranted to investigate carotid IPH as an etiology of ESUS. Answer: Brain CT infarction patterns in patients with carotid atheroma can indeed predict future cerebrovascular events. A study found that different patterns of brain CT infarction have varying predictive values for subsequent stroke in patients with carotid atheroma. Specifically, patients with pattern A infarctions, which include discrete subcortical and cortical infarctions, had an 18.5% incidence of developing a stroke in the hemisphere of interest during follow-up. This was significantly higher compared to patients with pattern B infarctions (hemodynamic infarctions, widespread white matter lesions, basal ganglia infarctions, and lacunae) who had a 10.5% incidence, and those with normal CT appearance at 4.1% (PUBMED:11533517). Moreover, the pattern A brain CT infarction was associated with carotid plaques that had a lower gray scale median (GSM), suggesting hypoechoic plaques and a higher likelihood of extracranial carotid artery embolization. In contrast, pattern B was associated with hyperechoic plaques, indicating other mechanisms such as hemodynamic changes or intracranial small and large vessel disease (PUBMED:11174786). Additionally, other studies have identified clinical risk factors and CT imaging features of carotid atherosclerotic plaques that are independent predictors of incident stroke on follow-up. For instance, age, use of antihypertensive drugs, and maximal carotid wall thickness were significant predictors of new incident carotid stroke (PUBMED:22859283). Furthermore, carotid plaque inflammation, as indicated by the presence of inflammatory cells, has been associated with the number of brain non-lacunar infarcts detected on MRI (PUBMED:30439395). 
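The three-factor model just cited (PUBMED:22859283) combines age ≥ 75 years, antihypertensive use, and maximal carotid wall thickness around 4 mm (the abstract itself says "at least 4 mm" in the results but ">4 mm" in the conclusions). A minimal sketch of one way to encode it follows; treating the rule as the conjunction of all three factors, and the "≥ 4 mm" cutoff, are our assumptions, since the abstract does not spell out how the factors were combined into a single flag.

```python
# Hypothetical encoding of the three-factor rule in PUBMED:22859283.
# Requiring all three criteria, and the ">= 4 mm" cutoff, are assumptions.

def high_carotid_stroke_risk(age_years: int,
                             on_antihypertensives: bool,
                             max_wall_thickness_mm: float) -> bool:
    """Flag patients matching the age/hypertension/wall-thickness profile."""
    return (age_years >= 75
            and on_antihypertensives
            and max_wall_thickness_mm >= 4.0)

print(high_carotid_stroke_risk(78, True, 4.5))   # True
print(high_carotid_stroke_risk(68, True, 4.5))   # False: below the age cutoff
```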
In summary, brain CT infarction patterns, along with other clinical and imaging factors, can indeed be predictive of future cerebrovascular events in patients with carotid atheroma. These findings suggest that detailed analysis of brain CT infarctions and carotid plaque characteristics can help stratify patients' risk for future strokes and guide clinical management.
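The survival comparisons underpinning this item (for example, the Cox regression across CT patterns in PUBMED:11533517) rest on proportional hazards modelling. The sketch below fits such a model on synthetic data with the lifelines library; the data, the assumed hazard ratio of 2 for pattern A, and the 5-year censoring are illustrative inventions, not the study's.

```python
# Illustrative Cox proportional hazards fit on synthetic data, in the spirit
# of the analysis in PUBMED:11533517. Requires: pip install lifelines pandas numpy

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
pattern_a = rng.integers(0, 2, n)   # 1 = discrete cortical/subcortical infarcts
# Shorter simulated time-to-stroke when pattern A is present (assumed HR ~ 2).
times = rng.exponential(scale=np.where(pattern_a == 1, 3.0, 6.0))
observed = times < 5.0              # administrative censoring at 5 years
df = pd.DataFrame({
    "pattern_a": pattern_a,
    "years": np.minimum(times, 5.0),
    "stroke": observed.astype(int),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="stroke")
cph.print_summary()  # exp(coef) for pattern_a should be near the simulated HR of 2
```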
Instruction: Intraoperative radiotherapy for early breast cancer: do health professionals choose convenience or risk? Abstracts: abstract_id: PUBMED:24461031 Intraoperative radiotherapy for early breast cancer: do health professionals choose convenience or risk? Background: The randomized TARGIT trial comparing experimental intra-operative radiotherapy (IORT) to up to 7 weeks of daily conventional external beam radiotherapy (EBRT) recruited participants in Western Australia between 2003 and 2012. We aimed to understand preferences for this evolving radiotherapy treatment for early breast cancer (EBC) in health professionals, and how they changed over time and in response to emerging data. Preferences for single dose IORT or EBRT for EBC were elicited in 2004 and 2011, together with factors that may be associated with these preferences. Methods: Western Australian health professionals working with breast cancer patients were invited to complete a validated, self-administered questionnaire. The questionnaire used hypothetical scenarios and trade-off methodology to determine the maximum increase in risk of local recurrence health professionals were willing to accept in order to have a single dose of IORT in the place of EBRT if they were faced with this decision themselves. Results: Health professional characteristics were similar across the two time points although 2011 included a higher number of nurse (49% vs. 36%) and allied health (10% vs. 4%) participants and a lower number of radiation therapists (17% vs. 32%) compared to 2004. Health professional preferences varied, with 7.5% and 3% judging IORT unacceptable at any risk, 18% and 21% judging IORT acceptable only if offering an equivalent risk, 56% and 59% judging IORT acceptable with a low maximum increase in risk (1-3%) and 19% and 17% judging a high maximum increase in risk acceptable (4-5%), in 2004 and 2011 respectively. A significantly greater number of nurses accepted IORT as a treatment option in 2011. Conclusions: Most Western Australian health professionals working with breast cancer patients are willing to accept an increase in risk of local recurrence in order to replace EBRT with IORT in a hypothetical setting. This finding was consistent over two time points spanning 7 years despite the duration of clinical experience with IORT and the publication of the early clinical results of IORT in 2010. These results need to be compared with preferences elicited from patient groups, and further investigation into the impact of personal preferences on health professionals' advice to patients is warranted. abstract_id: PUBMED:33877937 Introducing a low-risk breast screening pathway into the NHS Breast Screening Programme: Views from healthcare professionals who are delivering risk-stratified screening. Objectives: Proposals to stratify breast screening by breast cancer risk aim to produce a better balance of benefits to harms. Notably, risk estimation calculated from common risk factors and a polygenic risk score would enable high-risk women to benefit from more frequent screening or preventive medication. This service would also identify low-risk women who experience fewer benefits from attending, as lower grade and in situ cancers may be treated unnecessarily. It may therefore be appropriate for low-risk women to attend screening less frequently. This study aimed to elicit views regarding implementing less frequent screening for low-risk women from healthcare professionals who implement risk-stratified screening.
Methods: Healthcare professionals involved in the delivery of risk-stratified breast screening were invited to participate in a focus group within the screening setting in which they work or to have a telephone interview. Primary care staff were also invited to provide their perspective. Three focus groups and two telephone interviews were conducted with 28 healthcare professionals. To identify patterns across the sample, data were analysed as a single dataset using reflexive thematic analysis. Results: Analysis yielded three themes: Reservations concerning the introduction of less frequent screening, highlighting healthcare professionals' unease and concerns towards implementing less frequent screening; Considerations for the management of public knowledge, providing views on media impact on public opinion and the potential for a low-risk pathway to cause confusion and raise suspicion regarding implementation motives; and Deliberating service implications and reconfiguration management, where the practicalities of implementation are discussed. Conclusions: Healthcare professionals broadly supported less frequent screening but had concerns about implementation. It will be essential to address concerns regarding risk estimate accuracy, healthcare professional confidence, service infrastructure and public communication prior to introducing less frequent screening for low-risk women. abstract_id: PUBMED:25560807 Risk-reducing salpingectomy with delayed oophorectomy in BRCA1/2 mutation carriers: patients' and professionals' perspectives. Objective: To identify influencing factors of BRCA1/2 mutation carriers and their professionals for risk-reducing salpingectomy (RRS) with delayed oophorectomy (RRO) as a substitute for risk-reducing salpingo-oophorectomy (RRSO) and for study participation on this concept. Methods: A qualitative study was performed using four focus group interviews with 39 BRCA1/2 mutation carriers and semi-structured in-depth interviews with 23 professionals in the field of hereditary cancer. We used a theoretical framework of determinants of innovation within healthcare organizations to classify influencing factors (barriers and facilitators). Results: Among BRCA1/2 mutation carriers, main barriers for RRS with delayed RRO were seriousness of ovarian cancer, family history, and previous breast cancer. Among professionals, the delayed risk-reducing effect of oophorectomy on breast cancer risk and the need for a second operation were recognized as main barriers. Both BRCA1/2 mutation carriers and professionals found uncertainty about the effect of RRS with delayed RRO and ease of the decision to undergo RRSO important barriers. The main facilitator mentioned by both was longer maintenance of ovarian function, thereby delaying the negative effects of early surgical menopause. For study participation, BRCA1/2 mutation carriers mentioned a randomized study design as the main barrier, whereas professionals identified two facilitators, namely willingness of BRCA1/2 mutation carriers for study participation and uniform counseling. Furthermore, most BRCA1/2 mutation carriers and professionals were willing to consider participation in a future non-randomized study. Conclusions: We identified several barriers and facilitators for RRS with delayed RRO, and for study participation, which can be addressed to optimize the design and implementation of a non-randomized study. abstract_id: PUBMED:29312887 Intraoperative Radiotherapy for Breast Cancer.
Intraoperative radiotherapy (IORT) for early stage breast cancer is a technique for partial breast irradiation. There are several technologies in clinical use to perform breast IORT. Regardless of technique, IORT generally refers to the delivery of a single dose of radiation to the periphery of the tumor bed in the immediate intraoperative time frame, although some protocols have performed IORT as a second procedure. Two large prospective randomized trials, with sufficient follow-up time on thousands of women, have established the safety and efficacy of breast IORT in early stage breast cancer patients. The advantages of IORT for partial breast irradiation include: direct visualization of the target tissue ensuring treatment of the high-risk tissue and eliminating the risk of marginal miss; the use of a single dose coordinated with the necessary surgical excision, thereby reducing omission of radiation and the selection of mastectomy for women without access to a radiotherapy facility or unable to undergo several weeks of daily radiation; favorable toxicity profiles; patient convenience and cost savings; and radiobiological and tumor microenvironment conditions which lead to enhanced tumor control. The main disadvantage of IORT is the lack of final pathologic information on the tumor size, histology, margins, and nodal status. When unexpected findings on final pathology such as positive margins or positive sentinel nodes predict a higher risk of local or regional recurrence, additional whole breast radiation may be indicated, thereby reducing some of the convenience and low-toxicity advantages of sole IORT. However, IORT as a tumor bed boost has also been studied and appears to be safe with acceptable toxicity. IORT has potential efficacy advantages for overall survival related to reduced cardiopulmonary radiation doses. It may also be very useful in specific situations, such as prior to oncoplastic reconstruction to improve accuracy of adjuvant radiation delivery, or when used as a boost in higher risk patients to improve tumor control. Ongoing international clinical trials are studying these uses and follow-up data are accumulating on completed studies. abstract_id: PUBMED:37511640 Canadian Healthcare Professionals' Views and Attitudes toward Risk-Stratified Breast Cancer Screening. Given the controversy over the effectiveness of age-based breast cancer (BC) screening, offering risk-stratified screening to women may be a way to improve patient outcomes with detection of earlier-stage disease. While this approach seems promising, its integration requires the buy-in of many stakeholders. In this cross-sectional study, we surveyed Canadian healthcare professionals about their views and attitudes toward a risk-stratified BC screening approach. An anonymous online questionnaire was disseminated through Canadian healthcare professional associations between November 2020 and May 2021. Information collected included attitudes toward BC screening recommendations based on individual risk, comfort and perceived readiness related to the possible implementation of this approach. Close to 90% of the 593 respondents agreed with increased frequency and earlier initiation of BC screening for women at high risk. However, only 9% agreed with the idea of not offering BC screening to women at very low risk. Respondents indicated that primary care physicians and nurse practitioners should play a leading role in the risk-stratified BC screening approach.
This survey identifies health services and policy enhancements that would be needed to support future implementation of a risk-stratified BC screening approach in healthcare systems in Canada and other countries. abstract_id: PUBMED:28894737 FMECA Application to Intraoperative Electron Beam Radiotherapy Procedure As a Quality Method to Prevent and Reduce Patient's Risk in Conservative Surgery for Breast Cancer. Introduction: Failure Mode Effects and Criticalities Analysis (FMECA) represents a prospective method for risk assessment in complex medical practices. Our objective was to describe the application of the FMECA approach to intraoperative electron beam radiotherapy (IOERT), delivered using a mobile linear accelerator, for the treatment of early breast cancer as an anticipated boost. Materials And Methods: A multidisciplinary Working Group, including several different professional profiles, was created before the beginning of clinical practice in 2012, with the purpose of writing the Flow Chart and applying the FMECA methodology to the IOERT procedure. Several criticalities were identified a priori in the different steps of the procedure and a list of all potential failure modes (FMs) was drafted and ranked using the risk priority number (RPN) scoring system, based on the product of three parameters: severity, occurrence, and detectability (each scored between 1 and 5). The actions aimed at reducing the risk were then defined by the Working Group and the risk analysis was repeated in 2014 and in 2016, in order to assess the improvement achieved. Results: Fifty-one FMs were identified, which represented the issues prospectively investigated according to the FMECA methodology. Considering a set threshold of 30, the evaluated RPNs show that 33 out of 51 FMs are critical; 6 are included in the moderate risk class (RPN: 31-40); 16 in the intermediate risk class (RPN: 41-50), and 11 in the high risk class (RPN: >50). Discussion: The most critical steps concerned the surgical procedure and IOERT set-up. The introduction of the corrective actions into the clinical practice achieved the reduction of the RPNs in the re-analysis of the FMECA worksheet after 2 and 4 years, respectively. Conclusion: FMECA proved to be a useful tool for prospective evaluation of potential failures in IOERT and contributed to optimize patient safety and to improve risk management culture among all the professionals of the Working Group. abstract_id: PUBMED:18431045 Intraoperative radiotherapy of breast cancer. In the systemic therapy of breast cancer, the tumor itself has become the specific target. If possible, the surgical excision of breast carcinoma is restricted to the tumor site and aims at an R0 resection of the invasive and preinvasive portions of the carcinoma. Only percutaneous whole-breast radiotherapy irradiates the whole breast. The additional boost irradiation is targeted and significantly improves local control in all age groups. Due to the increased detection of small breast carcinomas in postmenopausal patients by mammographic screening, it is necessary to consider a change of the existing therapeutic practice. Published results of partial irradiation of the breast (intra- as well as postoperatively) show a very high degree of local control with follow-ups of up to 11 years. At present, prospective randomized studies are investigating for which patients intraoperative radiotherapy is sufficient as the sole irradiation method after previous surgery.
Intraoperative radiotherapy as a boost preceding percutaneous whole-breast irradiation should already be possible according to a relevant statement of the DEGRO. abstract_id: PUBMED:22187603 Knowledge of risk factors, beliefs and practices of female healthcare professionals towards breast cancer, Morocco. Background: Breast cancer is the most common cancer affecting women in Morocco. Screening for early detection has led to reduction in mortality from the disease. It is known that female healthcare professionals have greater influence on women's positive perception of breast cancer and motivation to practice screening methods for early detection of the disease. This study aims to investigate knowledge of breast cancer risk factors, beliefs about treatment and practice of screening methods among a cohort of female healthcare professionals in Morocco. Methods: A cross-sectional study was conducted using a self-administered questionnaire to assess the knowledge of breast cancer risk factors, beliefs about treatment and practice of screening methods among 136 female doctors and nurses working in the university hospital of Rabat, Morocco. A stratified random sampling method was employed. Chi square test, analysis of variance and Mantel-Haenszel test were performed in data analysis using SPSS v19.0. Results: Female doctors were the only professional group that had satisfactory knowledge of risk factors while the nurses had an unsatisfactory knowledge with a mean score of 43%. Half of participants believed that herbal therapy can cure breast cancer. 75% practice breast self-examination (BSE) once a month and only 15% have ever had a mammogram. Age, profession and beliefs were not significantly associated with rate of BSE in this study; however, this rate was influenced by knowledge of breast cancer risk factors. Conclusion: Results from this study suggest the need for continuing medical education programs aimed at improving knowledge of breast cancer among the nurses. abstract_id: PUBMED:36224549 Healthcare professionals' views following implementation of risk stratification into a national breast cancer screening programme. Background: It is crucial to determine feasibility of risk-stratified screening to facilitate successful implementation. We introduced risk-stratification (BC-Predict) into the NHS Breast Screening Programme (NHSBSP) at three screening sites in north-west England from 2019 to 2021. The present study investigated the views of healthcare professionals (HCPs) on acceptability, barriers, and facilitators of the BC-Predict intervention and on the wider implementation of risk-based screening after BC-Predict was implemented in their screening site. Methods: Fourteen semi-structured interviews were conducted with HCPs working across the breast screening pathway at three NHSBSP sites that implemented BC-Predict. Thematic analysis interpreted the data. Results: Three pre-decided themes were produced. (1) Acceptability of risk-based screening: risk-stratification was perceived as a beneficial step for both services and women.
HCPs across the pathway reported that running the BC-Predict trial imposed a low burden on routine tasks, although some residual concerns remained; (2) Barriers to implementation: these comprised capacity constraints of services, including the inadequacy of current IT systems to manage women with different risk profiles; and (3) Facilitators to implementation: these included the continuation of stakeholder consultation across the pathway to inform implementation, the need for dedicated risk-screening admin staff, a push for mammography staff recruitment, and guidance for screening services. Telephone helplines, integrating primary care, and supporting access for all language needs were emphasised. Conclusion: Risk-stratified breast screening was viewed as a progressive step, provided it does not worsen inequalities for women. Implementation of risk-stratified breast screening requires staff to be reassured that there will be systems in place to support implementation and that it will not further burden their workload. Next steps require a comprehensive assessment of the resources needed for risk-stratification versus current resource availability, upgrades to screening IT, and building of screening infrastructure. The role of primary care needs to be determined. Simplification and clarification of risk-based screening pathways are needed to support HCPs' agency and facilitate implementation. Forthcoming evidence from ongoing randomised controlled trials assessing the effectiveness of breast cancer risk-stratification will also determine implementation. abstract_id: PUBMED:33708543 Medical professionals' knowledge of the use of external breast prostheses among breast cancer patients in China-a cross-sectional study. Background: Breast loss has a negative effect on women physically, psychologically and socially. External breast prostheses can improve patients' figure physically, increase their self-confidence, and thus improve quality of life. Little is known about the knowledge of medical professionals on the use of external breast prostheses in mainland China. This study sought to examine medical professionals' knowledge of the use of external breast prostheses among breast cancer patients in China. Methods: Self-designed questionnaires were administered to 635 medical professionals specializing in breast cancer in China to examine their knowledge. Results: Medical professionals who were older, those who had higher levels of education and those who had senior professional titles had a broader professional knowledge of external breast prostheses. Medical professionals who were younger and those who had junior professional titles thought that patients were willing to choose direct-adhesive breast prostheses. Medical professionals who were older, those who had higher levels of education, those who had senior professional titles and those who had been working for many (but fewer than 30) years were more likely to think that patients could wear external breast prostheses shortly after surgery, and at all times except when sleeping. More highly educated doctors, head nurses and medical staff from the East were more likely to think that patients would accept the use of high-priced external breast prostheses. Conclusions: Medical professionals' knowledge about the wearing of external breast prostheses is lacking. A number of factors, including age, education level, professional title, number of working years and geographical region, have different effects on medical professionals' knowledge.
Medical professionals with a good knowledge of external breast prostheses, especially specialist breast nurses, can provide patients with comprehensive information about breast prostheses, assist patients in selecting the appropriate prostheses, understand patients' wearing experiences, and help to reduce patients' physical and mental distress. Answer: The study conducted in Western Australia aimed to understand the preferences of health professionals for intraoperative radiotherapy (IORT) versus conventional external beam radiotherapy (EBRT) for early breast cancer (EBC) and how these preferences changed over time and in response to emerging data. The preferences were elicited in 2004 and 2011 using hypothetical scenarios and trade-off methodology. The results showed that most health professionals were willing to accept an increase in the risk of local recurrence to replace EBRT with IORT in a hypothetical setting. This finding was consistent over the two time points despite the duration of clinical experience with IORT and the publication of early clinical results of IORT in 2010. A significantly greater number of nurses accepted IORT as a treatment option in 2011. These results suggest that health professionals may choose the convenience of IORT despite a potential increase in risk, but further investigation into the impact of personal preferences on health professionals' advice to patients is warranted (PUBMED:24461031). In summary, the study indicates that health professionals may prioritize the convenience offered by IORT, which includes a single dose of radiation and potentially less disruption to patients' lives, over the potential increased risk of local recurrence compared to the longer course of EBRT. However, the study also highlights the need to compare these preferences with those of patient groups and to explore how personal preferences might influence the advice health professionals give to their patients.
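As flagged in the FMECA abstract above (PUBMED:28894737), the RPN ranking is a simple arithmetic scheme: each failure mode is scored 1-5 for severity, occurrence, and detectability; the three scores are multiplied; and the product is compared against a criticality threshold of 30, with classes at 31-40 (moderate), 41-50 (intermediate), and >50 (high). A minimal sketch of that computation follows; the failure-mode names and scores are invented for illustration and are not taken from the study.

```python
# Illustrative RPN ranking for FMECA, following the scheme described in
# PUBMED:28894737: RPN = severity x occurrence x detectability, each scored
# 1-5, with 30 as the criticality threshold. Failure-mode data are invented.

def rpn(severity: int, occurrence: int, detectability: int) -> int:
    """Risk priority number: product of the three 1-5 parameter scores."""
    for score in (severity, occurrence, detectability):
        if not 1 <= score <= 5:
            raise ValueError("each FMECA parameter is scored between 1 and 5")
    return severity * occurrence * detectability

def risk_class(value: int) -> str:
    """Map an RPN onto the risk classes used in the abstract."""
    if value > 50:
        return "high"
    if value > 40:
        return "intermediate"
    if value > 30:
        return "moderate"
    return "below threshold"

# Hypothetical failure modes: (name, severity, occurrence, detectability).
failure_modes = [
    ("applicator misalignment", 5, 3, 4),
    ("wrong electron energy selected", 5, 2, 3),
    ("incomplete shielding disk placement", 4, 3, 3),
]

# Rank by descending RPN, as in the prioritization step of the analysis.
for name, s, o, d in sorted(failure_modes, key=lambda fm: -rpn(*fm[1:])):
    value = rpn(s, o, d)
    print(f"{name}: RPN={value} ({risk_class(value)})")
```

Sorting by descending RPN reproduces the prioritization step: corrective actions would be directed first at the modes at the top of this list, and the scores re-evaluated at the follow-up analyses.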
Instruction: Radial head reconstruction in elbow fracture-dislocation: monopolar or bipolar prosthesis? Abstracts: abstract_id: PUBMED:24867446 Radial head reconstruction in elbow fracture-dislocation: monopolar or bipolar prosthesis? Background: Monopolar and bipolar radial head prosthetic arthroplasties have been used successfully to treat elbow fracture-dislocation with unsalvageable radial head fractures. The relative stability of these two designs in different clinical situations is a topic of ongoing investigation. Questions/purposes: We tested the effects of monopolar and bipolar fixed-neck prosthetic radial head implants on improvement in elbow coronal and axial plane laxity in a terrible triad biomechanical model that accounted for lateral collateral ligament integrity and the presence of a transverse coronoid fracture. Methods: Kinematic data were collected on six fresh-frozen cadaveric upper extremities tested with passive motion throughout the flexion arc. Varus and valgus gravity stress were applied with the wrist in neutral position. A lateral collateral ligament reconstruction was simulated. We assessed instability after radial head resection and reconstruction with either a monopolar or bipolar implant in the presence of a transversely fractured (Regan and Morrey Type 2) or fixed coronoid process. Results: With collateral ligament integrity, no difference was detected, with the numbers available, in valgus laxity between implants under valgus stress (p = 1.0). Laxity improvement with each prosthesis was higher when the coronoid was fractured (mean ± SD: monopolar: 7.4° ± 1.6°, p < 0.001; bipolar: 6.4° ± 1.6°, p = 0.003) than when it was fixed (monopolar: 4.0° ± 1.6°, p = 0.02; bipolar: 4.2° ± 1.6°, p = 0.01). With the numbers available, there was no difference in external rotation laxity between implants under valgus stress (p = 1.0). The greatest stabilizing effect of the prostheses occurred when the coronoid was fractured (monopolar: 3.3° ± 1.2°, p = 0.15; bipolar: 3.3° ± 1.2°, p = 0.17). Radial head arthroplasty offered no substantial stability under varus stress for varus or internal rotation laxity. Conclusions: In our terrible triad cadaveric model, coronoid fixation was effective in improving varus laxity with a monopolar or bipolar prosthesis in place. Also, both types of prostheses were effective in improving valgus and external rotation laxity to the elbow, regardless of coronoid status. With collateral ligaments reconstructed, no large kinematic differences were noted between implants regardless of the varus-valgus position or whether the coronoid was fractured or fixed. Clinical Relevance: The data from our cadaveric model support the use of either implant type in terrible triad injuries if the collateral ligaments are intact or reconstructed. abstract_id: PUBMED:32173303 Comparison of bipolar and monopolar radial head prostheses in elbow fracture-dislocation. Introduction: The role of bipolar radial head prostheses (RHP) in elbow fracture-dislocation is controversial, with some reports of poorer stabilization than with monopolar designs. The aim of the present study was to compare mono- versus bi-polar RHPs in elbow fracture-dislocation. The study hypothesis was that mono- and bi-polar RHPs do not differ in clinical and radiological results, complications or revision rates. Material And Methods: A single-center retrospective study included 58 patients, with a mean age of 55 years (range, 21-84 years).
All received an RHP for elbow dislocation with associated injuries: terrible triad, Monteggia fracture-dislocation, transolecranal dislocation or divergent dislocation. Two groups were compared: Mono-RHP, with a monopolar prosthesis (n=40), and Bi-RHP, with a bipolar prosthesis (n=18). All patients underwent clinical and radiological examination at last follow-up. Results: Mean follow-up was 42.7 months (range, 12-131 months). There were no significant (p>0.05) inter-group differences in range of motion or Mayo Elbow Performance Score. Mono- versus bi-polar design did not correlate with onset of complications (p=0.89), surgical revision (p=0.71), persistent or recurrent instability (p=0.59), or ulnohumeral (p=0.62) or capitulum (p=0.159) osteoarthritis at last follow-up. Discussion And Conclusion: No differences were found between mono- and bi-polar RHPs in the treatment of elbow fracture-dislocation. Clinical and radiographic results were similar, as were complications and revision rates. The literature is inconclusive, reporting contradictory conclusions. We see no contraindications to bipolar RHPs in elbow dislocation with associated injuries, notably terrible triad injury. Level Of Evidence: III, Retrospective case-control study. abstract_id: PUBMED:26975795 Factors associated with removal of a radial head prosthesis placed for acute trauma. Purpose: This study tests the hypothesis that there are no factors associated with removal or revision of a radial head prosthesis. A secondary analysis addressed the time to removal or revision. Methods: We reviewed the database of two large hospitals from 2000 to 2014 and identified 278 patients who had radial head replacement after an acute fracture or fracture-dislocation of the elbow: 19 had removal and 3 had revision of the radial head implant within the study period. Explanatory variables including demographics, the type of injury, prosthesis type, surgeon, medical centre, and associated injuries were evaluated. Survival analysis using Kaplan-Meier curves evaluated time to removal/revision. Results: After adjustment for potential confounders using Cox regression multivariable analysis, hospital was the only factor independently associated with removal or revision (hazard ratio=2.4, confidence interval: 1.03-5.8, P value=0.043). The highest proportion of removal/revision was during the first year after implantation and decreased by half each year over the second to fourth years. The most common reason for removal of the prosthesis was to facilitate removal of heterotopic ossification (the majority with proximal radioulnar synostosis) rather than technical error or problems with the prostheses. Conclusion: These findings suggest that the decision to remove a radial head prosthesis may depend more on surgeon or hospital preferences than on objective problems with the prosthesis. Until clarified by additional study, removal of a prosthesis should not be considered an objective outcome in research. In addition, patients offered removal of a radial head prosthesis might get the opinion of more than one surgeon at more than one hospital before deciding whether or not to proceed. Level Of Evidence: Level III Prognostic. abstract_id: PUBMED:33194100 Radial Head Arthroplasty, Excision and Osteosynthesis in Complex Elbow Fracture-Dislocations in Young Adults: What is Preferred? Objective: Complex elbow fractures are common injuries in young adults. Results in recent studies with various operative treatment protocols are equivocal.
We compared the results of radial head arthroplasty, excision, and osteosynthesis in such injuries at two follow-ups 1 year apart. Methods: Thirty-five patients with complex elbow fracture-dislocations, including posterior/anterior olecranon dislocation of the elbow, terrible triad injury, anteromedial coronoid facet fracture, Type IV Monteggia fracture-dislocation and unclassified elbow dislocation, were enrolled. They were managed operatively by a standardized protocol similar to that of McKee et al.: radial head reconstruction with miniplates or lag screws, or non-operative treatment for undisplaced fractures. Arthroplasty of the radial head with a cemented prosthesis, plus LCL repair with a suture anchor or transosseous tunnel, was performed. Coronoid and olecranon fixation was always performed. Patients were evaluated as follows: Q-DASH score, MEPI, pain according to VAS, range of motion, complications, radiographic findings, fracture union, and elbow instability. Results: The mean length of follow-up among the patients was 18 months. There were statistically significant differences in DASH score/MEPS between radial head excision and replacement/reconstruction. Instability was significantly reduced in those with LCL reconstruction. Excising the comminuted radial head without replacement had the worst outcome. The best PROMs were reported in patients with osteosynthesis. Conclusion: Provided a standard protocol is applied, radial head osteosynthesis has preferable patient-reported outcomes as compared to arthroplasty, although the difference was not statistically significant. Radial head excision has acceptable outcomes, but there is a restriction of movement, especially flexion-extension. The rate of major and minor complications, although acceptable, may warrant secondary surgical procedures or staged treatment. abstract_id: PUBMED:34441783 Long-Term Outcomes of Radial Head Arthroplasty in Complex Elbow Fracture Dislocation. The purpose of the current study was to investigate the long-term outcomes of radial head arthroplasty in complex elbow injuries through radiographic analysis and functional correlation. We evaluated 24 radial head arthroplasties in 24 consecutive patients with complex elbow fracture dislocation. All patients were treated with a single type of modular monopolar prosthesis containing a smooth stem in press-fit implantation. Clinical surveys using the Mayo Elbow Performance Score (MEPS), the self-reported shortened Disabilities of the Arm, Shoulder, and Hand scale (QuickDASH) and the visual analog scale (VAS) at more than 10-year follow-up were reported and compared to 2-year outcomes. Periprosthetic osteolysis was measured in the 10 zones of the prosthesis-cortical interface with a modified radiolucency score, which was calibrated by each prosthesis size. Pearson correlation analysis was performed to detect the association between periprosthetic radiolucency and clinical assessment. At the final follow-up, MEPS, QuickDASH score and VAS score averaged 82.5 ± 15, 14.1 ± 14.3 and 1.6 ± 1.2, respectively. A decline in functional status was noted, with decreased mean MEPS and increased mean QuickDASH and VAS scores compared with the 2-year results, although the difference was not significant. Periprosthetic osteolysis was more prevalent around the stem tip in zone 3 and zone 8. The final and 2-year radiolucency scores averaged 7.4 ± 4.2 and 2.6 ± 2.3, respectively, a significant difference.
Pearson correlation analysis indicated that the correlations between radiolucency scores and clinical outcomes (MEPS, QuickDASH, and VAS) were -0.836, 0.517 and 0.464, respectively. Progression of periprosthetic osteolysis beyond 10 years postoperatively is more prevalent around the stem tip, with moderate to high correlation with clinical outcomes. Sustained follow-up is warranted to justify subsequent surgery for revision or implant removal. abstract_id: PUBMED:16831652 Vitallium radial head prosthesis for acute and chronic elbow fractures and fracture-dislocations involving the radial head. This retrospective study aims to evaluate the radiographic, functional, and patient-derived outcomes of 16 patients who each received a Vitallium radial head prosthesis for unreconstructable acute fractures of the radial head, as well as previously treated fractures of the radial head associated with residual instability, pain, and stiffness. Follow-up averaged 33 months. A trend toward greater disability and poorer motion was noted in the delayed treatment group compared with the acute replacement group. Overall, the results were excellent in 5 patients, good in 10, and poor in 1, as determined by the Mayo Elbow Performance Score. All elbows were stable at follow-up, and no patient reported wrist pain. Four required further operative treatment of their elbow injuries. Metallic radial head arthroplasty yields satisfactory results in acute unreconstructable radial head fractures or as a salvage procedure for previously treated radial head fractures. abstract_id: PUBMED:32432481 Patient and Surgeon Factors Associated With Prosthetic Replacement Rather Than With Open Reduction and Internal Fixation of a Radial Head Fracture. Background: The decision between radial head arthroplasty and open reduction and internal fixation in the context of a terrible triad elbow fracture-dislocation is debated. This study investigated both surgeon and patient factors associated with surgeons' recommendations to use arthroplasty. Methods: One hundred fifty-two surgeon members of the Science of Variation Group participated. Surgeons were asked to complete an online survey that included surgeon demographics and 16 patient scenarios. The patient scenarios were randomized using 2 patient variables and 2 anatomical variables. Multilevel logistic mixed regression analysis was performed to identify surgeon and patient variables associated with recommendations for radial head arthroplasty. Results: We found that radial head replacement was recommended in 38% of the scenarios. Scenarios with older patients, with fractures of the whole head, and those involving 3 fracture fragments were independently associated with radial head replacement. Conclusion: We found that most surgeons recommended radial head fracture fixation rather than arthroplasty. Surgeons were more likely to recommend fixation for younger patients with partial articular fractures or with fractures with 3 or fewer fracture fragments. It seems that surgeons are uneasy about using a prosthesis in a young, active patient. abstract_id: PUBMED:33330182 Comminuted Radial Head Fracture in All-arthroscopic Repair of Elbow Fracture-dislocation: Is Partial Excision of the Radial Head an Acceptable Treatment Option? Background: In elbow fracture-dislocation, partial excision of a comminuted radial head fracture that is not amenable to fixation remains controversial considering the accompanying symptoms.
This study was undertaken to evaluate the results of radial head partial excision when the comminuted radial head fracture involved <50% of the articular surface in all-arthroscopic repair of elbow fracture-dislocation. Methods: Patients were divided into two groups based on the condition of the radial head fracture. In Group A, the patients had a comminuted radial head fracture involving <50% of the articular surface and underwent arthroscopic partial excision. Group B was the non-excision group, comprising patients with stable, non-displaced fractures. Follow-up consultations were conducted at 6 weeks and at 3, 6, 12, and 24 months after surgery. Results: In all, 19 patients (Group A: 11; Group B: 8) met the inclusion criteria and were enrolled in the study. At the final follow-up, all 19 patients showed complete resolution of elbow instability. No significant differences were observed in range of motion, visual analogue scale score, or Mayo elbow performance score between groups. Radiological findings did not show any complications of the radiocapitellar joint. However, nonunion of the coronoid fracture was observed in 3 patients (Group A: 1; Group B: 2), without any accompanying instability or clinical symptoms. Conclusions: Considering that elbow stability is ultimately restored by coronoid fracture fixation and lateral collateral ligament complex repair, arthroscopic partial excision for comminuted radial head fractures involving <50% of the articular surface is an effective and acceptable treatment for elbow fracture-dislocation. abstract_id: PUBMED:35846394 Survivorship of anatomic press-fit short-stem radial head replacement with a pyrocarbon bearing. Background: Restoring the radial head as a stabiliser of the elbow is considered important in cases where there is an associated bony or ligamentous injury. A variety of radial head prostheses are available. There are no reports of the survivorship of a short-stem press-fit radial head prosthesis with a pyrocarbon bearing. Patients And Methods: With institutional review board approval, a retrospective case-note analysis was performed of a consecutive single-surgeon case series of the Integra LifeScience pyrocarbon radial head prosthesis (Carbon Modular Radial Head) from October 2010 to October 2019 in a tertiary referral centre. The series was divided into acute trauma and salvage cohorts. Kaplan-Meier survivorship analysis was conducted. Results: 36 patients were included, 23 (64%) for acute injuries and 13 (36%) for failed initial treatment. Of the acute injuries, 20 (87%) had a Wrightington type-C elbow fracture-dislocation. Reoperation was performed in 4 (11%) patients; 10 (28%) had possible loosening on radiographs. The implant survival rate was 94% at a minimum follow-up of 17 months (mean 70 months). Discussion: Treatment of complex radial head fractures using a pyrocarbon-bearing, anatomic press-fit design provides satisfactory short-term survivorship in this case series. The implant should be used with caution in salvage cases, due to the higher rate of loosening seen in this cohort.
The goal of this study was to employ the Delphi method to achieve consensus on the management of patients with radial head fractures, utilizing the experience of the ASES Elbow Fracture-Dislocation Multicenter Study Group and Mayo Elbow Club surgeons. Methods: The initial survey, which included consent to participate in the study and questions about participants' experience, knowledge, and interest in participating in the Delphi method, was sent to participants. We used both open-ended and category-based questions. The second questionnaire generated 76 variables, and individual questions with mean Likert ratings of <2.0 or >4.0 were deemed significant and merged to form multifactorial clinical scenarios relating to nonoperative and operative management, respectively. Results: Of the surgeons who responded to the questionnaire, 64% were from the United States, while the remainder were from overseas practices. Years in practice averaged 12.4 (range, 1-40). Seven of the 76 factors met the criteria of a mean Likert score of <2.0 or >4.0. These factors were as follows: age, block to the range of motion (ROM) after aspiration/injection, crepitation with ROM, tenderness over the distal radioulnar joint and/or interosseous membrane (dorsal forearm), gap and/or displacement >2 mm on imaging, complete loss of contact of the head with the rest of the radius on imaging, and fracture head involvement ≥30% on imaging. Twenty-two (46%) of the 96 clinical scenarios gained >90% consensus in favor of surgical treatment, whereas 8 (17%) reached >90% consensus in favor of nonoperative treatment. Conclusion: Obtaining expert consensus on the treatment of radial head fractures remains challenging. Certain factors, such as gap/displacement ≥2 mm without complete loss of contact, ≥30% head involvement with a block to ROM regardless of tenderness over the distal radioulnar joint or interosseous membrane (dorsal forearm), or crepitation when the patient was <80 years of age, led to a recommendation of operative treatment by 100% of the surveyed surgeons. Patients older than 80 years with no block to ROM after aspiration/injection, no crepitation with ROM, and no tenderness over the distal radioulnar joint/interosseous membrane (dorsal forearm) were recommended for nonoperative treatment regardless of the size of the radial head involvement on imaging. Answer: The choice between monopolar and bipolar radial head prostheses in the treatment of elbow fracture-dislocation has been a subject of investigation and debate. Both types of prostheses have been used successfully in such cases, and the decision often depends on the specific clinical situation. A biomechanical study using a terrible triad cadaveric model found that both monopolar and bipolar prostheses were effective in improving valgus and external rotation laxity of the elbow, regardless of the coronoid status. With collateral ligaments reconstructed, no large kinematic differences were noted between the two types of implants, suggesting that either implant type could be used in terrible triad injuries if the collateral ligaments are intact or reconstructed (PUBMED:24867446). A retrospective study comparing monopolar and bipolar radial head prostheses in elbow fracture-dislocation found no significant differences in clinical and radiological results, complications, or revision rates between the two groups.
The study concluded that both types of prostheses are viable options for treating elbow fracture-dislocation, including terrible triad injuries (PUBMED:32173303). Another study focusing on factors associated with the removal or revision of radial head prostheses placed for acute trauma found that the decision to remove a prosthesis may depend more on surgeon or hospital preferences than on objective problems with the prosthesis itself (PUBMED:26975795). In terms of long-term outcomes, a study evaluating radial head arthroplasty in complex elbow injuries found that periprosthetic osteolysis was more prevalent around the stem tip and moderately to highly correlated with clinical outcomes, suggesting that sustained follow-up is necessary to justify subsequent surgery for revision or implant removal (PUBMED:34441783). Overall, the literature suggests that both monopolar and bipolar radial head prostheses can be effective in treating elbow fracture-dislocation. The choice between the two may depend on various factors, including surgeon preference, patient characteristics, and the specifics of the injury. It is important for surgeons to consider all relevant factors when deciding on the most appropriate prosthesis for each individual patient.
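Several of the studies summarized above quantify implant survivorship with Kaplan-Meier analysis (PUBMED:26975795, PUBMED:35846394). As a minimal sketch of the underlying product-limit estimator, the following computes S(t) from follow-up durations and removal/revision events; all durations and event flags are invented for illustration and are not data from the cited studies.

```python
# Minimal Kaplan-Meier product-limit estimator, the method used for implant
# survivorship in PUBMED:26975795 and PUBMED:35846394. Hypothetical data:
# follow-up in months, and whether the implant was removed/revised (1) or
# censored at last follow-up (0).
from collections import Counter

durations = [6, 14, 17, 24, 36, 48, 60, 70, 84, 96]
events =    [1,  1,  0,  1,  0,  0,  1,  0,  0,  0]

def kaplan_meier(durations, events):
    """Return [(time, S(t))], where S(t) is the product over event times
    t_i <= t of (1 - d_i / n_i), with d_i events and n_i still at risk."""
    deaths = Counter(t for t, e in zip(durations, events) if e)
    leaving = Counter(durations)  # events and censorings leave the risk set
    survival, at_risk, curve = 1.0, len(durations), []
    for t in sorted(set(durations)):
        if t in deaths:
            survival *= 1.0 - deaths[t] / at_risk
            curve.append((t, survival))
        at_risk -= leaving[t]
    return curve

for t, s in kaplan_meier(durations, events):
    print(f"month {t:3d}: estimated implant survival {s:.2f}")
```

Each removal or revision at time t multiplies the running survival estimate by (1 - d/n), while censored patients simply leave the risk set without penalizing the curve; a Cox proportional hazards model, as used in PUBMED:26975795, would extend this by relating the hazard to covariates such as hospital or prosthesis type.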