Instruction: Collaboration between health services managers and researchers: making a difference? Abstracts: abstract_id: PUBMED:30787619 Leadership in interprofessional collaboration in health care. Purpose: There is a need to develop more knowledge on how frontline managers in health care services facilitate the development of new roles and ways of working in interprofessional collaborative efforts and the challenges they face in daily practice. The article is based on a study that examines the modes of governance adopted by frontline managers in Norway, with a special focus on leadership in collaborations between the Norwegian profession of social educator and other professions. Materials And Methods: A qualitative research design was chosen with interviews of eleven frontline managers from district psychiatric centers, municipal health care services and nursing homes. Results: The results show that frontline managers largely exercise leadership in terms of self-governance and co-governance and, to a lesser degree, hierarchical governance. Self-governance and co-governance can facilitate substantial maneuverability in terms of professional practice and strengthen both discipline-related and user-oriented approaches in the collaboration. However, one consequence of self-governance and co-governance may be that some occupational groups and professional interests subjugate others, as illustrated by social educators in this study. This may be in conflict with frontline managers' abilities to quality assure the services as well as their responsibility for role development in their staff. Conclusion: The results show that frontline managers experience challenges when they try to integrate different professions in order to establish new professional roles and competence. Frontline managers need to support individual and collective efforts in order to reach the overall goals for the services. They must be able to facilitate change and support creativity in a working community that consists of different professions. Moreover, the social educator's role and competence need clarifications in services that traditionally have been dominated by other clinical and health care professions. abstract_id: PUBMED:27629796 The Effects of Collaborative Care Training on Case Managers' Perceived Depression-Related Services Delivery. Objective: This study examined the effects of a depression care quality improvement (QI) intervention implemented by using Community Engagement and Planning (CEP), which supports collaboration across health and community-based agencies, or Resources for Services (RS), which provides technical assistance, on training participation and service delivery by primarily unlicensed, racially and ethnically diverse case managers in two low-income communities in Los Angeles. Methods: The study was a cluster-randomized trial with program-level assignment to CEP or RS for implementation of a QI initiative for providing training for depression care. Staff with patient contact in 84 health and community-based programs that were eligible for the provider outcomes substudy were invited to participate in training and to complete baseline and one-year follow-up surveys; 117 case managers (N=59, RS; N=58, CEP) from 52 programs completed follow-up. Primary outcomes were time spent providing services in community settings and use of depression case management and problem-solving practices. Secondary outcomes were depression knowledge and attitudes and perceived system barriers. 
Results: CEP case managers had greater participation in depression training, spent more time providing services in community settings, and used more problem-solving therapeutic approaches compared with RS case managers (p<.05). Conclusions: Training participation, time spent providing services in community settings, and use of problem-solving skills among primarily unlicensed, racially and ethnically diverse case managers were greater in programs that used CEP rather than RS to implement depression care QI, suggesting that CEP offers a model for including case managers in communitywide depression care improvement efforts. abstract_id: PUBMED:32762694 Managers in the publicly funded health services in China - characteristics and responsibilities. Background: Health service managers are integral to supporting the effective and efficient delivery of services. Understanding their competencies is essential to support reform and improvement of healthcare provision in China. This paper examines the characteristics and educational background of senior managers working in the community health and hospital sectors in China. We also examine their levels of commitment to continued professional development and continuous education. Methods: A self-administered paper-based questionnaire was administered to 477 level I, II and III managers in community health services and public hospitals in China. The response rate was over 80%. Results: Findings demonstrate significant differences in terms of educational background and commitment to ongoing professional development between the managers in China across levels of management, and between the community and hospital sectors. Hospital managers tend to be older; hospital managers at higher management levels are predominantly male but predominantly female in the community health services. A greater proportion of hospital managers have postgraduate qualifications. In addition, the participants identified specific management tasks that they considered important. Conclusions: This is the first large scale study examining the educational background and commitment to professional development of senior health service managers in China. This study determined that there are differences between the demographics of managers in China across levels of management, but more importantly between the CHC and the hospital sectors. The identification of important managerial tasks will facilitate the development of appropriate education and training for Chinese healthcare managers. All sectors and levels reported the need for informal education focussed on the core roles of developing organisation image and public relations, improving quality and safety of service provision and provision of leadership. Further research to explore the underlying reasons for the above differences is needed to design appropriate professional development for China's health services managers. In addition, the importance of managerial tasks across sectors and management levels requires further investigation. abstract_id: PUBMED:31564222 Care managers in rural Japan: Challenges to interprofessional collaboration. Effective interprofessional collaboration for care managers is vital for the care of older people. This study's aim was to inquire into the obstacles to interprofessional collaboration faced by care managers in rural areas of Japan. Forty-six care managers participated in group discussions and semi-structured interviews, and the qualitative data were analyzed using thematic analysis. 
Five themes related to obstacles emerged from the analysis regarding relationships with physicians, professional competency, relationships among other professionals, environmental constraints, and relationships with nonprofessionals. Other professionals' unfamiliarity with the care manager's role and a lack of mutual understanding, boundaries, and information sharing among medical professionals were also cited as issues. abstract_id: PUBMED:22572710 Collaboration between health services managers and researchers: making a difference? Objective: Our aim was to evaluate whether the involvement of health care managers in research projects improves the quality and relevance of research, and whether collaboration builds capacity in the managerial community. Methods: The NIHR Service Delivery and Organization Management Fellowship programme supports the direct involvement of health care managers in research projects. Data were collected from face-to-face interviews with management fellows and chief investigators of research projects at 10 case study sites. Data were analysed thematically using an adapted Kirkpatrick framework for programme evaluation. Results: Management fellows improved the relevance and quality of research through enhancing its validity, efficiency and credibility. This was achieved by: using their contextual understanding to enable and support access and recruitment participants, data collection tools, processes and analysis; supporting dissemination activities; and undertaking additional work which was complementary to the main project. Capacity was developed through formal courses and exposure to new knowledge, ideas and practices. Factors found to enable or impede improvements in research included management fellows' knowledge and experience of the NHS, their background and personal characteristics, mutual respect, timing and flexibility. Consequences were not always predictable. Costs for management fellows included foregone opportunities, specifically for promoted posts. Researchers reported time-costs associated with administering the fellowship. Conclusions: Collaborations between managers and researchers can improve research relevance and quality and research capacity development. Factors critical to success relate to the fit between the project and the management fellow and how clearly the purpose is understood. abstract_id: PUBMED:33832409 Examining Interprofessional Collaboration across case managers, peer educators, and counselors in New York City. Many individuals who are vulnerable to HIV infection and People Living with HIV (PLWH) experience fragmented prevention and care. Prevention and care service integration, pivotal for the HIV care continuum, depends on relationships among service providers and agencies offering HIV services. Case managers, counselors, and peer educators often work together to provide integrated services through interprofessional collaboration (IPC) in HIV prevention and care. Although these providers have distinct job titles, they typically offer complementary services on the HIV care continuum. To better train and allocate professional development resources for these providers, research is needed to assess the overall differences between provider-type and their demographics, intrapersonal factors, and job characteristics most likely to predict IPC engagement. We administered a cross-sectional survey to 75 counselors, 80 peer educators, and 112 case managers in 36 agencies in New York City. 
We performed a series of linear mixed effects models. Most of the HIV-service providers identified as Black and female and had been working for their agencies for less than a year. Knowledge and skills, self-efficacy, understanding of the community, and greater work hours (> 35 hours) were significant predictors of endorsement of IPC. Peer educators compared to case managers were more likely to reflect on the process as they provide myriad services. Eliciting perspectives from providers allows us to explore interventions, both intra-agency (trainings, greater exposure to collaborative initiatives, and supervision) and interagency (retention programs and websites promoting provider collaboration), that could facilitate IPC engagement and integrated services across the HIV care continuum. abstract_id: PUBMED:22125502 Barriers to collaboration between health care, social services and schools. Background: It is essential for professionals from different organizations to collaborate when handling matters concerning children, adolescents, and their families in order to enable society to provide health care and social services from a comprehensive approach. Objective: This paper reports perceptions of obstacles to collaboration among professionals in health care (county council), social services (municipality), and schools in an administrative district of the city of Stockholm, Sweden. Methods: Data were collected in focus group interviews with unit managers and personnel. Results And Discussion: Our results show that allocation of responsibilities, confidence and the professional encounter were areas where barriers to collaboration occurred, mainly depending on a lack of clarity. The responsibility for collaboration fell largely on the professionals and we found that shared responsibility of managers from different organizations is a crucial factor affecting successful collaboration. We conclude that a holding environment, as a social context that facilitates sense making, and a committed management would support these professionals in their efforts to collaborate. abstract_id: PUBMED:23833601 Challenges of interprofessional collaboration in Iranian mental health services: A qualitative investigation. Background: Nurses and other members of health care team provide mental patients with health services through interprofessional collaboration which is a main strategy to improve health services. Nevertheless, many difficulties are evidently influencing interprofessional collaboration in Iranian context. This paper presented the results of a study aimed to explore the context. Materials And Methods: A qualitative study was conducted using in-depth interviews to collect data from 20 health professionals and 4 clients or their family members who were selected purposefully from the health centers affiliated with Isfahan University of Medical Sciences. Themes were identified using latent qualitative content analysis. Trustworthiness of the study was supported considering auditability, neutrality, consistency and transferability. The study lasted from 2010 to 2011. Findings: Some important challenges were identified as protecting professional territory, medical oriented approach and teamwork deficits. They were all under a main theme emphasizing professionals' divergent views. It could shed insight into underlying causes of collaboration gaps among nurses and other health professionals. Conclusions: The three introduced themes implied difficulties mainly related to divergences among health professionals. 
Moreover, the difficulties revealed the need for training chiefly to improve their convergent shared views and approaches. Therefore, it is worthwhile to suggest interprofessional education for nurses and other professionals with special attention to improving interpersonal skills as well as mental health need-based services. abstract_id: PUBMED:37194101 The role of local context for managers' strategies when adapting to the COVID-19 pandemic in Norwegian homecare services: a multiple case study. Background: The COVID-19 pandemic had a major impact on healthcare systems around the world, and lack of resources, lack of adequate preparedness and infection control equipment have been highlighted as common challenges. Healthcare managers' capacity to adapt to the challenges brought by the COVID-19 pandemic is crucial to ensure safe and high-quality care during a crisis. There is a lack of research on how these adaptations are made at different levels of the homecare services system and how the local context influences the managerial strategies applied in response to a healthcare crisis. This study explores the role of local context for managers' experiences and strategies in homecare services during the COVID-19 pandemic. Methods: A qualitative multiple case study in four municipalities with different geographic locations (centralized and decentralized) across Norway. A review of contingency plans was performed, and 21 managers were interviewed individually during the period March to September 2021. All interviews were conducted digitally using a semi-structured interview guide, and data was subjected to inductive thematic analysis. Results: The analysis revealed variations in managers' strategies related to the size and geographical location of the homecare services. The opportunities to apply different strategies varied among the municipalities. To ensure adequate staffing, managers collaborated, reorganized, and reallocated resources within their local health system. New guidelines, routines and infection control measures were developed and implemented in the absence of adequate preparedness plans and modified according to the local context. Supportive and present leadership in addition to collaboration and coordination across national, regional, and local levels were highlighted as key factors in all municipalities. Conclusion: Managers who designed new and adaptive strategies to respond to the COVID-19 pandemic were central in ensuring high-quality Norwegian homecare services. To ensure transferability, national guidelines and measures must be context-dependent or -sensitive and must accommodate flexibility at all levels in a local healthcare service system. abstract_id: PUBMED:28579793 Future challenges for occupational health services can be prevented by proactive collaboration with the companies using the services: a participatory and reflection project. Background: There is clearly a need for research in the field of occupational health service (OHS) for applying new perspectives. Proactive collaboration is needed between the OHSs and the companies. The customers of the companies using the services should be able to safeguard themselves from the health problems caused by the work environment through proactive collaboration with the OHSs. Objective: The main purpose of this interdisciplinary study was to explore how the stakeholders reflected to create and agree on core values for future challenges in OHS, as seen from the perspectives of OHS professionals and customer companies. 
Methodology: An action research process was conducted. This study was divided into three phases. In phase I, the data were collected from interviews and diaries of interdisciplinary occupational health professionals (n=12). A focus group that sampled the eight managers of the customer companies was also included. In phase II, a questionnaire was developed with 24 questions focusing on examining the future challenges for OHS. The questionnaire was sent to customer companies (n=116). In phase III, a scoping review was undertaken. Results: Three categories emerged from the analysis: "Balancing complex situations" clarified the complexity regarding senior employees; "Working with a proactive approach" indicated the need for working with a new proactive approach supporting sustainable health; and "Collaborate internally and externally" showed good relationships between the customer and the OHS, which is a mutual responsibility to both the partners. Conclusion: The results outlined that it is necessary to take action to apply new proactive health promotions, with a focus on workplace health promotion. The results also indicated that interventions for senior employees are of importance. This study was done in collaboration with the stakeholders from the occupational health care service center and the managers from the customer companies. The use of a participatory research design, including close collaboration with the participants, allows the researchers to see the challenges. Answer: The collaboration between health services managers and researchers can indeed make a difference in the quality and relevance of research, as well as in building capacity within the managerial community. According to a study evaluating the involvement of healthcare managers in research projects through the NIHR Service Delivery and Organization Management Fellowship programme, management fellows were found to improve the relevance and quality of research by enhancing its validity, efficiency, and credibility. This was achieved through their contextual understanding, which supported participant recruitment, data collection, and analysis, as well as dissemination activities. The study also found that capacity was developed through formal courses and exposure to new knowledge, ideas, and practices. Factors critical to success related to the fit between the project and the management fellow and the clarity of the project's purpose (PUBMED:22572710). Moreover, a depression care quality improvement trial showed that case managers in programs assigned to Community Engagement and Planning (CEP) participated more in depression training, spent more time providing services in community settings, and used more problem-solving therapeutic approaches than case managers in programs assigned to Resources for Services (RS), which provided technical assistance alone. This suggests that collaborative models like CEP can be effective in involving case managers in community-wide depression care improvement efforts (PUBMED:27629796). In the context of interprofessional collaboration, challenges such as protecting professional territory, medical-oriented approaches, and teamwork deficits have been identified, particularly in Iranian mental health services. These challenges stem from divergent views among health professionals, indicating a need for training to improve shared views and approaches (PUBMED:23833601).
Overall, collaboration between health services managers and researchers can lead to improvements in research and practice, provided there is a good match between the collaborators and a clear understanding of the collaborative goals. Additionally, such collaborations can help overcome barriers to effective interprofessional collaboration and lead to better health outcomes.
Instruction: High knee abduction moments are common risk factors for patellofemoral pain (PFP) and anterior cruciate ligament (ACL) injury in girls: is PFP itself a predictor for subsequent ACL injury? Abstracts: abstract_id: PUBMED:24687011 High knee abduction moments are common risk factors for patellofemoral pain (PFP) and anterior cruciate ligament (ACL) injury in girls: is PFP itself a predictor for subsequent ACL injury? Background: Identifying risk factors for knee pain and anterior cruciate ligament (ACL) injury can be an important step in the injury prevention cycle. Objective: We evaluated two unique prospective cohorts with similar populations and methodologies to compare the incidence rates and risk factors associated with patellofemoral pain (PFP) and ACL injury. Methods: The 'PFP cohort' consisted of 240 middle and high school female athletes. They were evaluated by a physician and underwent anthropometric assessment, strength testing and three-dimensional landing biomechanical analyses prior to their basketball season. 145 of these athletes met inclusion for surveillance of incident (new) PFP by certified athletic trainers during their competitive season. The 'ACL cohort' included 205 high school female volleyball, soccer and basketball athletes who underwent the same anthropometric, strength and biomechanical assessment prior to their competitive season and were subsequently followed up for incidence of ACL injury. A one-way analysis of variance was used to evaluate potential group (incident PFP vs ACL injured) differences in anthropometrics, strength and landing biomechanics. Knee abduction moment (KAM) cut-scores that provided the maximal sensitivity and specificity for prediction of PFP or ACL injury risk were also compared between the cohorts. Results: KAM during landing above 15.4 Nm was associated with a 6.8% risk to develop PFP compared to a 2.9% risk if below the PFP risk threshold in our sample. Likewise, a KAM above 25.3 Nm was associated with a 6.8% risk for subsequent ACL injury compared to a 0.4% risk if below the established ACL risk threshold. The ACL-injured athletes initiated landing with a greater knee abduction angle and a reduced hamstrings-to-quadriceps strength ratio relative to the incident PFP group. Also, when comparing across cohorts, the athletes who suffered ACL injury also had lower hamstring/quadriceps ratio than the players in the PFP sample (p<0.05). Conclusions: In adolescent girls aged 13.3 years, >15 Nm of knee abduction load during landing is associated with greater likelihood of developing PFP. Also, in girls aged 16.1 years who land with >25 Nm of knee abduction load during landing are at increased risk for both PFP and ACL injury. abstract_id: PUBMED:27479217 Gender differences in knee abduction during weight-bearing activities: A systematic review and meta-analysis. Background: Increased knee abduction during weight-bearing activities is suggested to be a contributing factor for the high knee injury risk reported in women. However, studies investigating gender difference in knee abduction are inconclusive. Objective: To systematically review gender-differences in knee abduction during weight-bearing activities in individuals with or without knee injury. Methods: A systematic review and meta-analysis were conducted according to the PRISMA guidelines. A search in the databases Medline, CINAHL and EMBASE was performed until September 2015. 
Inclusion criteria were studies that reported (1) gender differences, (2) healthy individuals and/or those with anterior cruciate ligament (ACL) deficiency or reconstruction or patellofemoral pain PFP, and (3) knee abduction assessed with either motion analysis or visual observation during weight-bearing activity. Results: Fifty-eight articles met the inclusion criteria. Women with PFP had greater peak knee abduction compared to men (Std diff in mean; -1.34, 95%CI; -1.83 to -0.84). In healthy individuals, women performed weight-bearing tasks with greater knee abduction throughout the movement (initial contact, peak abduction, excursion) (Std diff in mean; -0.68 to -0.79, 95%CI; -1.04 to -0.37). In subgroup analyses by task, differences in knee abduction between genders were present for most tasks, including running, jump landings and cutting movements. There were too few studies in individuals with ACL injury to perform meta-analysis. Conclusion: The gender difference in knee abduction during weight-bearing activities should be considered in training programs aimed at preventing or treating knee injury. abstract_id: PUBMED:27032529 Patellofemoral Osteoarthritis: Are We Missing an Important Source of Symptoms After Anterior Cruciate Ligament Reconstruction? Anterior cruciate ligament (ACL) rupture is a well-established risk factor for knee osteoarthritis (OA). Fifty to ninety percent of individuals will develop radiographic tibiofemoral OA within a decade after ACL injury and anterior cruciate ligament reconstruction (ACLR). Although less well recognized, radiographic patellofemoral OA is present in approximately 50% of individuals at more than 10 years after ACLR. This early-onset OA and its associated pain and functional limitations pose a particular challenge to younger adults with OA compared to an older OA population. Targeted interventions need to be developed to reduce the burden of early-onset OA following ACLR. Emerging evidence suggests that such interventions should target both the patellofemoral and tibiofemoral joints. abstract_id: PUBMED:24697014 Morphometric parameters as risk factors for anterior cruciate ligament injuries - a MRI case-control study. Background/aim: The anterior cruciate ligament (ACL) is the most frequently injured ligament of the knee, representing 50% of all knee injuries. The aim of this study was to determine the differences in the morphometry of knee injury patients with an intact and a ruptured anterior cruciate ligament. Methods: The study included 33 matched pairs of patients divided into two groups: the study group with the diagnosis of anterior cruciate ligament rupture, and the control group with the diagnosis of patellofemoral pain but no anterior cruciate ligament lesion. The patients were matched on the basis of 4 attributes: age, sex, type of lesion (whether it was profession-related), and whether the lesion was left- or right-sided. Measurements were carried out using magnetic resonance imaging (MRI). Results: The anterior and posterior edges of the anterior cruciate ligament in the control group were highly significantly smaller (p < 0.01; in both cases). The control group showed a statistically significantly larger width of the anterior cruciate ligament (p < 0.05). A significant correlation between the width of the anterior cruciate ligament and the width (p < 0.01) and height (p < 0.05) of the intercondylar notch was found to exist in the control group, but not in the study group (p > 0.05).
The patients in the control group showed a shorter but wider anterior cruciate ligament in comparison to their matched pairs. The control group of patients was also characterized by the correlation between the width of the intercondylar notch and the width of the anterior cruciate ligament, which was not the case in the study group. Conclusions: According to the results of our study we can say that a narrow intercondylar notch contains a proportionally thin anterior cruciate ligament, but we cannot say that this factor necessarily leads to rupture of the anterior cruciate ligament. abstract_id: PUBMED:12218469 The science of anterior cruciate ligament rehabilitation. This review of the literature assessed what is known about the biomechanics of the normal anterior cruciate ligament during rehabilitation exercises, the biomechanical behavior of the anterior cruciate ligament graft during healing, and clinical studies of rehabilitation after anterior cruciate ligament replacement. After anterior cruciate replacement, immobilization of the knee, or restricted motion without muscle contraction, leads to undesired outcomes for the ligamentous, articular, and muscular structures that surround the joint. It is clear that rehabilitation that incorporates early joint motion is beneficial for reducing pain, minimizing capsular contractions, decreasing scar formation that can limit joint motion, and is beneficial for articular cartilage. There is evidence derived from randomized controlled trials that immediately after anterior cruciate ligament reconstruction, weightbearing is possible without producing an increase of anterior knee laxity and is beneficial because it lowers the incidence of patellofemoral pain. Rehabilitation with a closed kinetic chain program results in anteroposterior knee laxity values that are closer to normal, and earlier return to normal daily activities, compared with rehabilitation with an open kinetic chain program. This review revealed that more randomized, controlled trials of rehabilitation are needed. These should include the clinicians' and patients' perspective of the outcome, and biomarkers of articular cartilage metabolism. abstract_id: PUBMED:37349732 Assessments of early patellofemoral joint osteoarthritis features after anterior cruciate ligament reconstruction: a cross-sectional study. Background: Persistent anterior knee pain and subsequent patellofemoral joint (PFJ) osteoarthritis (OA) are common symptoms after anterior cruciate ligament reconstruction (ACLR). Quadriceps weakness and atrophy is also common after ACLR. This can be contributed by arthrogenic muscle inhibition and disuse, caused by joint swelling, pain, and inflammation after surgery. With quadriceps atrophy and weakness are associated with PFJ pain, this can cause further disuse exacerbating muscle atrophy. Herein, this study aims to identify early changes in musculoskeletal, functional and quality of health parameters for knee OA after 5 years of ACLR. Methods: Patients treated with arthroscopically assisted single-bundle ACLR using hamstrings graft for more than 5 years were identified and recruited from our clinic registry. Those with persistent anterior knee pain were invited back for our follow-up study. For all participants, basic clinical demography and standard knee X-ray were taken. Likewise, clinical history, symptomatology, and physical examination were performed to confirm isolated PFJ pain. 
Outcome measures including leg quadriceps quality using ultrasound, functional performance using pressure mat and pain using self-reported questionnaires (KOOS, Kujala and IKDC) were assessed. Interobserver reproducibility was assessed by two reviewers. Results: A total of 19 patients with unilateral injury who had undergone ACLR 5-years ago with persistent anterior knee pain participated in this present study. Toward the muscle quality, thinner vastus medialis and more stiffness in vastus lateralis were found in post-ACLR knees (p < 0.05). Functionally, patients with more anterior knee pain tended to shift more of their body weight towards the non-injured limb with increasing knee flexion. In accordance, rectus femoris muscle stiffness in the ACLR knee was significantly correlated with pain (p < 0.05). Conclusion: In this study, it was found that patients having higher degree of anterior knee pain were associated with higher vastus medialis muscle stiffness and thinner vastus lateralis muscle thickness. Similarly, patients with more anterior knee pain tended to shift more of their body weight towards the non-injured limb leading to an abnormal PFJ loading. Taken together, this current study helped to indicate that persistent quadriceps muscle weakness is potential contributing factor to the early development of PFJ pain. abstract_id: PUBMED:26198056 Risk assessment for anterior cruciate ligament injury. Introduction: Anterior cruciate ligament tears are one of the most frequent soft tissue injuries of the knee. A torn anterior cruciate ligament leaves the knee joint unstable and at risk for further damage to other soft tissues manifested as pain, dislocation, and osteoarthritis. A better understanding of the anatomical details of knee joints suffering anterior cruciate ligament tears is needed to understand and develop prediction models for anterior cruciate ligament injury and/or tear. Materials And Methods: Magnetic resonance images of 32 patients with anterior cruciate ligament tears and 40 patients with non-tears were evaluated from a physician group practice. Digital measurements of femoral condyle length, femoral notch width, anterior cruciate ligament width in the frontal and sagittal plane, and the anterior cruciate ligament length in the sagittal plane were taken in both groups to identify trends. Monte Carlo simulations were performed (n = 2000) to evaluate the relationship between notch width index and sagittal width and to establish functional relationships among the anatomical parameters for potential injury risk. Sensitivity analysis performed shows the risk of anterior cruciate ligament injury a function of force and notch width index. Results: Females have a significantly shorter anterior cruciate ligament when compared to that of males. The notch width index was also significantly different between torn and non-torn individuals. The NWI was not significantly different between genders (p value = 0.40). Conclusions: Anterior cruciate ligament injury has been shown to be caused by the forces which act on the ligament. These forces can result from hyperextension of the tibia or the internal rotation of tibia. The anatomical parameters of the knee joint (i.e., notch width index, anterior cruciate ligament width and length) have no role in the cause of an injury. abstract_id: PUBMED:9006690 Preventing anterior knee pain after anterior cruciate ligament reconstruction. We studied a group of 602 patients who had anterior cruciate ligament reconstructions between 1987 and 1992. 
An autogenous patellar tendon graft was used, regardless of preexisting patellofemoral pain or chondromalacia. The surgeon and rehabilitation protocol were the same for all patients, with emphasis on obtaining full knee hyperextension postoperatively. All patients were evaluated by a questionnaire designed to determine the incidence and severity of anterior knee pain as it relates to sporting or daily living activities, prolonged sitting, stair climbing, and kneeling. Range of motion for the study group was recorded during physical examination. We compared the findings with those from a control group of 122 patients who had no previous knee injury. The study group reported a mean score of 89.5 +/- 12.5, compared with 90.2 +/- 12.3 in the control group. Both the operative and control groups reported little or no symptoms during sporting activities (94% and 92%, respectively). No differences were noted with respect to the other activities surveyed. These results demonstrate that anterior knee pain after anterior cruciate ligament reconstruction is not an inherent complication associated with patellar tendon harvesting. We suggest that the increased incidence of anterior knee pain with an autogenous patellar tendon graft can be prevented by obtaining full knee hyperextension postoperatively. This goal can be achieved through preoperative rehabilitation and a postoperative protocol emphasizing early restoration of full knee hyperextension. abstract_id: PUBMED:35478322 Risk factors of young males with physically demanding occupations having accumulated damage of anterior cruciate ligament. Objective: To present the clinical characteristics of accumulated anterior cruciate ligament (ACL) damage among young male patients undergoing routine exercise, and to evaluate the related risk factors. Methods: A retrospective study involving ACL-accumulated damage from June 2015 to December 2019 was conducted. Baseline characteristics, such as age, body mass index (BMI), training parameters, and clinical signs, were recorded. The results of the radiologic examinations and related standardized tests were obtained to evaluate the research outcomes. These results were compared using Student's t-test or Chi-square test, and the impact of risk factors on the patient's injury were analyzed. Results: A total of 86 men with accumulated ACL damage were included in this study. Exercise pain (86 [100%]), synovitis (80 [93.0%]), and intra-articular effusion (79 [91.9%]) were the most common clinical symptoms. Loosening of ligaments, decreased tension, mild hyperplasia, and intercondylar fossa effusion were observed using radiography, magnetic resonance imaging, and arthroscopy. Age, BMI, training intensity, length of training, and knee hyperextension were identified as risk factors for accumulated ACL damage. Conclusion: This study suggests that accumulated ACL damage has differentiated clinical symptoms, imaging features, and risk factors compared to common ACL injuries. abstract_id: PUBMED:1499417 Diagnosis of anterior cruciate ligament injury of the knee joint From January, 1979 to May, 1989, 107 patients with problems related to anterior cruciate ligament (ACL) were treated in our hospital. 100 of the patients had anterior cruciate ligament injury confirmed by arthrotomy or arthroscopy. The remaining 7 patients were found to be normal either by arthroscopy or arthrotomy. 29 patients had fresh ACL injury and 71 old. All the patients had history of trauma of the knee joint. 
Swelling and pain in the affected knee joint took place in fresh cases and few of them complained of instability or deformity of the knee. On examination, floating patella test was positive in the majority of the fresh cases. It was shown that accurate diagnosis could be made by Lachman test rather than by conventional anterior drawer test in dealing with fresh injury, but with old ones, Lachman test didn't show the advantages. Examination under anesthesia or arthroscopy helped a lot in diagnosing fresh ACL injury. Anterior drawer test (ADT) was significant in determining the existence of ACL injury. When ADT was positive, ACL injury was found in the majority of the cases, however, injured ACL couldn't be ruled out by negative ADT only. Positive valgus stress test on 0 degrees position suggests possibility of ACL injury, even ADT was negative. Despite the negative anterior drawer test positive posterior drawer test on three directions indicated the injury of the posterior cruciate ligament and the anterior cruciate ligament. The positive rate of ADT was higher than that of pivot shift test in dealing with anterior cruciate ligament injury. Positive pivot shift test suggests ACL injury. Answer: High knee abduction moments have been identified as common risk factors for both patellofemoral pain (PFP) and anterior cruciate ligament (ACL) injury in girls (PUBMED:24687011). The study described in this abstract compared two cohorts of female athletes, one with PFP and the other with ACL injuries, and found that a knee abduction moment (KAM) during landing above 15.4 Nm was associated with an increased risk of developing PFP, while a KAM above 25.3 Nm was associated with an increased risk for subsequent ACL injury. The athletes who suffered ACL injuries also had a greater knee abduction angle and a reduced hamstrings-to-quadriceps strength ratio relative to the incident PFP group, suggesting that while PFP and ACL injuries share common risk factors, they may not necessarily be directly predictive of each other. However, the abstract does not explicitly state that PFP is a predictor for subsequent ACL injury. It does indicate that similar biomechanical factors, such as high KAM, are associated with both conditions in adolescent girls. Therefore, while PFP and ACL injuries may occur in similar populations and under similar biomechanical conditions, the abstract does not provide direct evidence that PFP is a predictor of ACL injury. Instead, it suggests that both conditions are linked to high knee abduction loads during landing, which could imply that individuals with PFP may also be at risk for ACL injuries due to similar biomechanical vulnerabilities. Further research would be needed to establish a direct predictive relationship between PFP and subsequent ACL injury.
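Illustration (not part of the cited abstracts): the risk comparison quoted above from PUBMED:24687011 reports injury incidence above and below the knee abduction moment (KAM) cut-scores (6.8% vs 2.9% for PFP; 6.8% vs 0.4% for ACL injury). The short Python sketch below simply turns those published percentages into relative risks; the percentages are taken from the abstract, while the function name and structure are illustrative assumptions, not a reproduction of the study's analysis.

# Illustrative only: relative risk implied by the incidence figures quoted in
# PUBMED:24687011 for landings above vs below the KAM cut-scores.

def relative_risk(risk_above_pct: float, risk_below_pct: float) -> float:
    """Ratio of injury risk above vs below a knee abduction moment cut-score."""
    return risk_above_pct / risk_below_pct

# Percent risk above / below the threshold, as reported in the abstract.
thresholds = {
    "PFP (KAM > 15.4 Nm)": (6.8, 2.9),
    "ACL injury (KAM > 25.3 Nm)": (6.8, 0.4),
}

for label, (above, below) in thresholds.items():
    print(f"{label}: {above}% vs {below}% -> relative risk ~{relative_risk(above, below):.1f}x")

# Approximate output:
# PFP (KAM > 15.4 Nm): 6.8% vs 2.9% -> relative risk ~2.3x
# ACL injury (KAM > 25.3 Nm): 6.8% vs 0.4% -> relative risk ~17.0x

This back-of-the-envelope ratio is only a reading of the published percentages; the original study derived its cut-scores from sensitivity/specificity analysis, which is not reproduced here.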
Instruction: Is it possible to diagnose caries under fixed partial dentures with cone beam computed tomography? Abstracts: abstract_id: PUBMED:25864819 Is it possible to diagnose caries under fixed partial dentures with cone beam computed tomography? Objective: The goal of this study was to determine the possibility of detecting/diagnosing caries under fixed partial dentures using cone beam computed tomography. Study Design: A range of teeth with grade 3 to 6 caries, according to International Caries Detection and Assessment System criteria, were selected. All teeth were prepared, and three different specimens - zirconia full-ceramic, lithium disilicate full-ceramic, and metal-supported ceramic crowns - were built for each tooth. Each specimen underwent scanning by cone beam computed tomography. Opacity values were recorded and evaluated using two-way analysis of variance. The Tukey test was performed for material and slice differences, and the t test for mean gray value differences, between caries and noncaries regions of each material. Results: Significant differences were detected with respect to horizontal location (anterior/posterior), restoration type (crown/bridge), material (zirconia/lithium disilicate/metal), and situation (caries/noncaries) (P < .001). Mean gray values of caries and noncaries regions were found to differ for each material. There were no significant differences with respect to vertical position. Conclusions: Cone beam computed tomography can be used as a posttreatment diagnostic technique for detecting caries under high-atomic-number fixed plate dentures. By combining high resolution and accuracy, cone beam computed tomography could provide the means for diagnosing caries without removing fixed plate dentures. abstract_id: PUBMED:33893259 Accuracy of Cone-beam Computed Tomography and Extraoral Bitewings Compared to Intraoral Bitewings in Detection of Interproximal Caries. Aim And Objective: The purpose of this study is to evaluate the diagnostic accuracy of cone-beam computed tomography (CBCT) and extraoral bitewings in the detection of interproximal caries compared to intraoral bitewings. Materials And Methods: Seven preserved cadaver heads with 106 teeth (molars, premolars, and canines) including 183 proximal surfaces were used. Five radiographic modalities were studied: intraoral bitewings, extraoral bitewings, iCAT 3D, ProMax 3D high resolution, and ProMax 3D low resolution. Seven pediatric dental residents were recruited and calibrated as observers and asked to evaluate each proximal surface. Teeth were extracted, mounted, drilled, caries detection dye was applied, and the surfaces were examined under the light microscope. Interexaminer reliability, sensitivity, specificity, and area under the curve values were compared. Results: No significant differences were found in sensitivity, specificity, and area under the curve values between the five radiographic modalities. Restorations may influence the accuracy of caries diagnosis. Conclusion: Cone-beam computed tomography radiographs and extraoral bitewings showed similar accuracies in detecting interproximal caries compared to intraoral bitewings. This suggests that with proper training and experience, CBCT and extraoral bitewings could be comparable to intraoral bitewings in detecting interproximal caries.
Clinical Significance: Cone-beam computed tomography and extraoral bitewings could potentially serve as alternatives to intraoral bitewings to diagnose proximal caries, especially when the CBCT study is needed for a specific diagnostic purpose. abstract_id: PUBMED:21977474 Current status of dental caries diagnosis using cone beam computed tomography. Purpose: The purpose of this article is to review the current status of dental caries diagnosis using cone beam computed tomography (CBCT). Materials And Methods: An online PubMed search was performed to identify studies on caries research using CBCT. Results: Despite its usefulness, there were inherent limitations in the detection of caries lesions through conventional radiograph mainly due to the two-dimensional (2D) representation of caries lesions. Several efforts were made to investigate the three-dimensional (3D) image of lesion, only to gain little popularity. Recently, CBCT was introduced and has been used for diagnosis of caries in several reports. Some of them maintained the superiority of CBCT systems, however it is still under controversies. Conclusion: The CBCT systems are promising, however they should not be considered as a primary choice of caries diagnosis in everyday practice yet. Further studies under more standardized condition should be performed in the near future. abstract_id: PUBMED:26726139 Role of cone-beam computed tomography in diagnostic otorhinolaryngological imaging Accurate diagnosis and preoperative planning in modern otorhinolaryngology is strongly supported by imaging with enhanced visualization. Computed tomography is often used to examine structures within bone frameworks. Given the hazards of ionizing radiation, repetitive imaging studies exponentially increase the risk of damages to radiosensitive tissues. The authors compare multislice and cone-beam computed tomography and determine the role, advantages and disadvantages of cone-beam computed tomography in otorhinolaryngological imaging. They summarize the knowledge from the international literature and their individual imaging studies. They conclude that cone-beam computed tomography enables high-resolution imaging and reconstruction in any optional plane and in space with considerably lower effective radiation dose. Cone-beam computed tomography with appropriate indications proved to be an excellent diagnostic tool in otorhinolaryngological imaging. It makes an alternative to multislice computed tomography and it is an effective tool in perioperative and postoperative follow-up, especially in those cases which necessitate repetitive imaging with computed tomography. abstract_id: PUBMED:20934291 An in vitro comparison of diagnostic abilities of conventional radiography, storage phosphor, and cone beam computed tomography to determine occlusal and approximal caries. Aim: The aim of this study was to compare conventional radiography, storage phosphor plate, and cone beam computed tomography for in vitro determination of occlusal and approximal caries. Methods: A total of 72 extracted human premolar and molar teeth were selected. Teeth were radiographed with conventional intraoral radiography, a storage phosphor plate system, and cone beam computed tomography and evaluated by two observers. The teeth were then separated and examined with a stereomicroscope and a scanner at approximately 8×magnification. Results: CBCT was statistically superior to conventional radiography and phosphor plate for determining occlusal caries. 
No significant difference from CBCT, conventional radiography and the phosphor plate system for determining approximal caries was found. Conclusion: The CBCT system may be used as an auxiliary method for the detection of caries. abstract_id: PUBMED:20657850 Dental caries in an impacted mandibular second molar: using cone beam computed tomography to explain inconsistent clinical and radiographic findings. Benefits of cone beam computed tomography (CBCT) over traditional panoramic radiography for diagnosis and treatment planning have been reported. This article presents a case where CBCT was used to identify the potential route of caries infection from the oral cavity to an impacted mandibular second molar. abstract_id: PUBMED:32345901 Cone-beam computed tomography for trauma. Radiographic imaging is critical in helping guide treatment of critically injured patients. Cone-beam computed tomography is an axial imaging technique available from fixed imaging systems found in hybrid operating rooms. It can be used to provide focused studies of specific anatomical regions, where patients cannot undergo conventional multidetector computed tomography. This includes non-contrast-enhanced evaluation of the intracranial contents and vascular imaging throughout the body. There are a number of advantages and disadvantages to cone-beam computed tomography, but these are not widely discussed within the trauma literature. This narrative review article presents the initial practical experience of this novel imaging modality. LEVEL OF EVIDENCE: Review article, level III. abstract_id: PUBMED:25650901 Histological validation of cone-beam computed tomography versus laser fluorescence and conventional diagnostic methods for occlusal caries detection. Objective: The purpose of this study was to compare the validity of visual (VE), radiological (RE), cone beam computed tomography (CBCT), and laser fluorescence (LFE) examination methods for the detection of the occlusal noncavitated caries in permanent posterior teeth. Methods: Two examiners assessed 121 selected sites on the occlusal surfaces of 44 molar teeth by visual (International Caries Assessment and Detection System II [ICDAS]), radiographic (bite-wing projection) cone-beam computed tomography, and laser fluorescence (DIAGNOdent Pen) examination methods. After a 1-week interval, each measurement was repeated by two examiners. Then, the teeth were sectioned, and histological evaluation was performed, which serves as the gold standard. The lesion depths were classified and correlated with the methods evaluated for validation. The intra- and inter-examiner reliability (sensitivity, specificity) and reproducibility of all examination methods were calculated using a weighted Cohen's κ statistic. The correlation between the examination methods was determined using receiver operating characteristic (ROC) analysis indicating the area under the curve (AUC). Results: CBCT exhibited excellent intra-examiner (0.76 for examiner 1, 0.78 for examiner 2) and fair to good inter-examiner (0.63 for the first, 0.64 for the second measurements) reproducibility. The intra-examiner reproducibility was excellent for the LFE method according to the weighted κ values of examiners 1 (0.90) and 2 (0.79). Among the combined methods, the highest AUC values (0.81-0.95) were obtained for the CBCT examination method performed by the two examiners at both the first and second measurements. Conclusions: Cone beam computed tomography showed better performance than other diagnostic methods. 
abstract_id: PUBMED:30833974 Effect of Filtration and Slice Thickness of Cone-Beam Computed Tomography Images on Occlusal Caries Detection: An Ex Vivo Study. Objectives: The aim of this study was to evaluate the diagnostic accuracy of different filtrations and slice thicknesses of cone-beam computed tomography (CBCT) in the detection of occlusal caries. Materials And Methods: One-hundred teeth were selected for this ex-vivo experimental study. The CBCT images of the teeth were evaluated and scored by two observers in panoramic and cross-sectional views using different slice thicknesses and filtrations. Paired t-test, repeated-measures analysis of variance (ANOVA), and the least significant difference (LSD) test were used to compare the data with the histological gold standard. Receiver operating characteristic (ROC) analysis was used to determine the diagnostic accuracy of each slice thickness and filtration (P<0.05). Results: The mean score of true caries detection in cross-sectional views was lower than that in panoramic views (P<0.05). Repeated-measures ANOVA showed a significant difference in the mean of true detections in different thicknesses of cross-sectional views, but this difference was significant only between 5 mm thickness and other thicknesses in panoramic views. On all the views, increasing the thickness decreased the accuracy of caries detection. Repeated-measures ANOVA showed a significant difference between different filtrations; on all the views, increasing the filtration increased the accuracy of caries detection. Conclusions: An increase of filtration of CBCT images increases the accuracy of occlusal caries detection; however, an increase in slice thickness results in a lower diagnostic accuracy. abstract_id: PUBMED:21928738 Cone beam computed tomography in the diagnosis of dental disease. Conventional radiographs provide important information for dental disease diagnosis. However, they represent 2-D images of 3-D objects with significant structure superimposition and unpredictable magnification. Cone beam computed tomography, however, allows true 3-D visualization of the dentoalveolar structures, avoiding major limitations of conventional radiographs. Cone beam computed tomography images offer great advantages in disease detection for selected patients. The authors discuss cone beam computed tomography applications in dental disease diagnosis, reviewing the pertinent literature when available. Answer: Yes, it is possible to diagnose caries under fixed partial dentures using cone beam computed tomography (CBCT). A study aimed at determining the possibility of detecting/diagnosing caries under fixed partial dentures with CBCT found that significant differences were detected with respect to horizontal location, restoration type, material, and situation (caries/noncaries). The mean gray values of caries and noncaries regions differed for each material, suggesting that CBCT can be used as a posttreatment diagnostic technique for detecting caries under high-atomic-number fixed plate dentures (PUBMED:25864819). Moreover, CBCT has been shown to have similar accuracies in detecting interproximal caries compared to intraoral bitewings, indicating that with proper training and experience, CBCT could be comparable to intraoral bitewings in detecting caries (PUBMED:33893259). 
Although CBCT systems are promising for caries diagnosis, they should not yet be considered the primary choice for caries diagnosis in everyday practice, and further studies under more standardized conditions are needed (PUBMED:21977474). Additionally, CBCT has been found to be statistically superior to conventional radiography and phosphor plate for determining occlusal caries, suggesting that it may be used as an auxiliary method for caries detection (PUBMED:20934291). CBCT has also been used to identify the potential route of caries infection from the oral cavity to an impacted mandibular second molar, demonstrating its benefits over traditional panoramic radiography for diagnosis and treatment planning (PUBMED:20657850). In conclusion, CBCT can be a valuable tool for diagnosing caries under fixed partial dentures, offering high-resolution imaging and the ability to detect caries without removing the dentures. However, its use should be considered alongside other diagnostic methods and within the context of appropriate clinical indications.
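Illustration (not drawn from the cited studies): several of the abstracts above report diagnostic accuracy as sensitivity, specificity, and area under the curve against a histological gold standard. The sketch below shows how sensitivity and specificity would be computed from per-surface caries calls; the observer calls and histology results are invented example data, and the function is an assumption for illustration only.

# Illustrative only: scoring per-surface caries calls against a histological
# gold standard, as in the CBCT accuracy studies cited above.
# The lists below are invented example data, not data from those studies.

def sensitivity_specificity(calls, truth):
    """calls / truth: parallel lists of booleans (True = caries present)."""
    tp = sum(c and t for c, t in zip(calls, truth))
    tn = sum((not c) and (not t) for c, t in zip(calls, truth))
    fp = sum(c and (not t) for c, t in zip(calls, truth))
    fn = sum((not c) and t for c, t in zip(calls, truth))
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Hypothetical observer calls for 10 proximal surfaces versus histology.
observer_calls = [True, True, False, True, False, False, True, False, False, True]
histology = [True, True, False, False, False, False, True, True, False, True]

sens, spec = sensitivity_specificity(observer_calls, histology)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
# With these invented calls: sensitivity = 0.80, specificity = 0.80

AUC values such as those reported in PUBMED:25650901 extend this idea by sweeping a decision threshold over graded detection scores rather than using a single yes/no call.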
Instruction: Sleepiness combined with low alcohol intake in women drivers: greater impairment but better perception than men? Abstracts: abstract_id: PUBMED:15532198 Sleepiness combined with low alcohol intake in women drivers: greater impairment but better perception than men? Objectives: We have previously shown that low blood alcohol concentrations (BAC) (at approximately half the legal driving limit in both the United Kingdom and in most states in the United States) exacerbate moderate sleepiness (sleep during the night restricted to 5 hours) and markedly impair driving ability in young men. There are distinct physiologic sex differences in the absorption, metabolism, and central nervous system effects of alcohol; therefore, we replicated this earlier study, this time using women and using similar BAC to provide a comparison. Design: 2 x 2 repeated-measures counterbalanced. Setting: 2-hour drive from 2:00 pm in an instrumented car on a simulated highway. Interventions: Alcohol versus control and normal sleep versus sleep restricted to 5 hours. Measurements And Results: Driving impairment (lane drifting), subjective sleepiness, and electroencephalographic measures of sleepiness. Sleep restriction significantly worsened driving performance and subjective sleepiness as it had in men. Surprisingly, unlike men, women showed no apparent adverse effects of alcohol alone on these indexes; they seemingly compensated for the effects of alcohol. However, alcohol's effects were profound when alcohol was combined with sleep restriction; nevertheless, women, unlike men, were aware of this enhanced sleepiness. After alcohol ingestion, the electroencephalogram showed increased beta activity, an effect not seen in men, indicating a differential pharmacokinetic effect of alcohol on the central nervous system, compensatory effort, or both. Debriefing questionnaires indicated that women were aware of the varying risks of driving under these different conditions. Conclusions: Legally "safe" BAC markedly worsen sleepiness-impaired driving in women. However, they seem to be aware of their impaired driving and are able to judge the degree of risk entailed. Such an attitude may contribute to the lower incidence of sleep- or alcohol-related crashes in women compared with men. abstract_id: PUBMED:12937193 Driving impairment due to sleepiness is exacerbated by low alcohol intake. Aims: To assess whether low blood alcohol concentrations (BACs), at around half the UK legal driving limit, and undetectable by police roadside breathalysers, further impair driving already affected by sleepiness, particularly in young men, who are the most "at risk" group of drivers for having sleep related crashes. Methods: Twelve healthy young men drove for two hours in the afternoon, in an instrumented car on a simulated motorway. In a repeated measures, counterbalanced design, they were given alcohol or placebo under conditions of normal sleep or prior sleep restriction. Measurements were: driving impairment (lane drifting), subjective sleepiness, and EEG measures of sleepiness. Results: Whereas sleep restriction and alcohol each caused a significant deterioration in all indices, the combined alcohol and sleep restriction further and significantly worsened lane drifting (which typifies sleep related crashes). This combined effect was also reflected to a significant extent in the EEG, but not with subjective sleepiness. 
That is, alcohol did not significantly increase subjective sleepiness in combination with sleep loss when compared with sleep loss alone. Conclusions: Modest, and apparently "safe" levels of alcohol intake exacerbate driving impairment due to sleepiness. The sleepy drivers seemed not to have realised that alcohol had increased their sleepiness to an extent that was clearly reflected by a greater driving impairment and in the EEG. abstract_id: PUBMED:15532199 Low levels of alcohol impair driving simulator performance and reduce perception of crash risk in partially sleep deprived subjects. Study Objectives: Partial sleep deprivation and alcohol consumption are a common combination, particularly among young drivers. We hypothesized that while low blood alcohol concentration (<0.05 g/dL) may not significantly increase crash risk, the combination of partial sleep deprivation and low blood alcohol concentration would cause significant performance impairment. Design: Experimental Setting: Sleep Disorders Unit Laboratory Patients Or Participants: 20 healthy volunteers (mean age 22.8 years; 9 men). Interventions: Subjects underwent driving simulator testing at 1 am on 2 nights a week apart. On the night preceding simulator testing, subjects were partially sleep deprived (5 hours in bed). Alcohol consumption (2-3 standard alcohol drinks over 2 hours) was randomized to 1 of the 2 test nights, and blood alcohol concentrations were estimated using a calibrated Breathalyzer. During the driving task subjects were monitored continuously with electroencephalography for sleep episodes and were prompted every 4.5 minutes for answers to 2 perception scales-performance and crash risk. Measurements And Results: Mean blood alcohol concentration on the alcohol night was 0.035 +/- 0.015 g/dL. Compared with conditions during partial sleep deprivation alone, subjects had more microsleeps, impaired driving simulator performance, and poorer ability to predict crash risk in the combined partial sleep deprivation and alcohol condition. Women predicted crash risk more accurately than did men in the partial sleep deprivation condition, but neither men nor women predicted the risk accurately in the sleep deprivation plus alcohol condition. Conclusions: Alcohol at legal blood alcohol concentrations appears to increase sleepiness and impair performance and the detection of crash risk following partial sleep deprivation. When partially sleep deprived, women appear to be either more perceptive of increased crash risk or more willing to admit to their driving limitations than are men. Alcohol eliminated this behavioral difference. abstract_id: PUBMED:15912483 Early evening low alcohol intake also worsens sleepiness-related driving impairment. Following night-time sleep restriction, afternoon driving performance during the bi-circadian surge in afternoon sleepiness is markedly worsened by blood alcohol concentrations (BACs) well under most national driving limits. This study assessed how driving with this same sleep restriction and BACs (av 40 mg and 28 mg alcohol/100 ml blood at the beginning and end of drive, respectively) respond during the evening circadian rise in alertness. In a 2 x 2 (alcohol versus control drink [double blind] x normal night sleep versus sleep restricted), repeated-measures design, eight healthy young men drove for 2 h from 18:00 h, in a real-car simulator, on a monotonous, simulated highway. Driving impairment (lane drifting), subjective sleepiness and EEG measures of sleepiness were recorded.
While sleep restriction alone produced significant impairments to evening driving and subjective sleepiness, alcohol alone did not. However, alcohol combined with sleep restriction significantly worsened all indices, although, this was less than that found for afternoon driving with identical interventions. Whereas low BACs may not affect driving in normally alert drivers in the early evening, the addition of moderate sleep restriction still produces a dangerous combination. Probably, there is no 'safe' level of alcohol intake for otherwise sleepy drivers, at any time of the day. abstract_id: PUBMED:17969466 Effects of moderate sleep deprivation and low-dose alcohol on driving simulator performance and perception in young men. Study Objective: To determine the combined effects of sleep restriction and low-dose alcohol on driving simulator performance, EEG, and subjective levels of sleepiness and performance in the mid-afternoon. Design: Repeated measures with 4 experimental conditions. Normal sleep without alcohol, sleep restriction alone (4 hours) and sleep restriction in combination with 2 different low blood alcohol concentrations (0.025 g/dL and 0.035 g/dL). Setting: Sleep Laboratory, Adelaide Institute for Sleep Health. Participants: Twenty-one healthy young men, aged 18-30 years, mean (+/-SD) = 22.5(+/-3.7) years, BMI = 25(+/-6.7) kg/m2; all had normal sleep patterns and were free of sleep disorders. Measurements: Participants completed a 70-minute simulated driving session, commencing at 14:00. Driving parameters included steering deviation, braking reaction time, and number of collisions. Alpha and theta EEG activity and subjective driving performance and sleepiness were also measured throughout the driving task. Results: All measures were significantly affected by time. Steering deviation increased significantly when sleep restriction was combined with the higher dose alcohol. This combination also resulted in a significant increase in alpha/theta EEG activity throughout the drive, as well as greater subjective sleepiness and negative driving performance ratings compared to control or sleep restriction alone. Discussion: These data indicate that combining low-dose alcohol with moderate sleep restriction results in significant decrements to subjective alertness and performance as well as to some driving performance and EEG parameters. This highlights the potential risks of driving after consumption of low and legal doses of alcohol when also sleep restricted. abstract_id: PUBMED:15303246 Alcohol continues to affect sleepiness related driving impairment, when breath alcohol levels have fallen to near-zero. Epidemiological findings point to very low blood alcohol levels heightening the risk of sleep-related fatal road crashes. This was further assessed using a full sized interactive car simulator. Twenty, sleep restricted, healthy young men underwent a 2 h simulated afternoon monotonous drive, having previously consumed nil alcohol or 3 units &gt;90 min previously, and having near-zero breath alcohol (BrACs) at the start of the drive. In a repeated measures, double-blind, balanced design, driving performance, subjective sleepiness and EEG were monitored throughout. Compared with nil alcohol, the alcohol condition initially increased sleepiness-related driving impairment. However, this was not mirrored by subjective sleepiness or EEG. An unexpected reversal (i.e. improvement) in driving impairment occurred with the alcohol group, in the second hour of the drive. 
This was supported by a trend for improved subjective alertness. Alcohol continued to interact with sleepiness-related driving impairment after BrACs had reached zero. However, a lack of subjective perception of increased sleepiness, at this time, further points to the dangerous combination of even modest alcohol intake and sleepiness, and confirms the road crash findings. BrACs are a poor guide to driver impairment. abstract_id: PUBMED:28722214 Impairment due to combined sleep restriction and alcohol is not mitigated by decaying breath alcohol concentration or rest breaks. Objective: Epidemiological and laboratory-based driving simulator studies have shown the detrimental impact of moderate, legal levels of alcohol consumption on driving performance in sleepy drivers. As less is known about the time course of decaying alcohol alongside performance impairment, our study examined impairment and recovery of performance alongside decaying levels of alcohol, with and without sleep restriction. Methods: Sixteen healthy young males (18-27 years) underwent 4 counterbalanced conditions: Baseline, Alcohol (breath alcohol concentration [BrAC] < 0.05%), Sleep Restriction (5 hr time in bed), and Combined. Participants consumed alcohol (or control drink) ~4.5 hr post wake (12:30 p.m.). To test on the descending limb of alcohol, attention and vigilance test batteries commenced 1 hr after consumption and were completed every 30 min for 2 hr (1:30 p.m.-3:30 p.m.). Results: The Combined condition impaired subjective and objective sleepiness. Here, performance deficits peaked 90 min after alcohol consumption or 30 min after the BrAC peak. Performance did not return to baseline levels until 2.5 hr following consumption, despite receiving rest breaks in between testing. Conclusions: These findings suggest that (a) falling BrACs are an inadequate guide for performance/safety and (b) rest breaks without sleep are not a safety measure for mitigating performance impairment when consuming alcohol following restricted sleep. abstract_id: PUBMED:1800105 Time-of-day effects of alcohol intake on simulated driving performance in women. There is a circadian propensity for sleepiness in the early afternoon (contrasting with an alertness peak early evening) which potentiates the sedating effects of alcohol. However, little research has been done to examine these effects on driving performance. Four units of alcohol (95 ml of 40% spirit) or placebo were given double blind, with a snack to two groups ('early afternoon' and 'early evening') of 12 young women, at either 1310 h or 1810 h. Blood alcohol levels (BACs) were estimated by breathalyser. BACs were within the UK legal driving maximum. Subjects underwent 40 min of monotonous motorway driving in a car simulator. Self-ratings on sleepiness and alcohol effects showed a significantly greater impact of alcohol in the early afternoon relative to the early evening. Whilst neither time of day nor alcohol affected lateral corrective steering movements, alcohol significantly increased the average following-distance and the variability in this distance, especially during the early afternoon. In some subjects from the early afternoon group, this impairment seemed to be to dangerous levels following alcohol. abstract_id: PUBMED:11012861 Simulated driving performance following prolonged wakefulness and alcohol consumption: separate and combined contributions to impairment.
The separate and combined effects of prolonged wakefulness and alcohol were compared on measures of subjective sleepiness, simulated driving performance and drivers' ability to judge impairment. Twenty-two males aged between 19 and 35 years were tested on four occasions. Subjects drove for 30 min on a simulated driving task under conditions determined by the factorial combination of 16 and 20 h of wakefulness and blood alcohol concentrations of 0.00 and 0.08%. The simulated driving session took place 30 min postingestion; subjects in the two alcohol conditions participated in a second 30-min driving session 90-min postingestion. Subjects made simultaneous ratings of their impairment while driving and retrospective ratings at the end of each test session. Subjective sleepiness measures were completed before and after each driving session. The combination of 20 h of prolonged wakefulness and alcohol produced significantly lower ratings of subjective sleepiness and driving performance that was worse, but not significantly so, than would be expected from the additive effects of each condition alone. Driving performance was always worse in the second driving session, during the elimination phase of alcohol metabolism, despite blood alcohol concentrations being lower than during the first driving session. There was a modest association between perceived and actual impairments in driving performance following prolonged wakefulness and alcohol. The findings suggest that the combination of prolonged wakefulness and alcohol consumption produced greater decrements in simulated driving performance than each condition alone and that drivers have only a modest ability to appreciate the magnitude of their impairment. abstract_id: PUBMED:10656195 Ability of baclofen in reducing alcohol craving and intake: II--Preliminary clinical evidence. Background: Accumulating evidence shows the efficacy of the gamma-aminobutyric acid (GABA(B)) receptor agonist baclofen in reducing alcohol intake in rats, but no studies have been performed in alcoholics. In the present preliminary study we investigated the effect of short-term baclofen administration on craving for alcohol, ethanol intake, and abstinence from alcohol in alcoholic individuals. Methods: Ten male current alcoholic individuals were admitted to the study. Baclofen was orally administered for 4 weeks, at a dose of 15 mg/day refracted in three times per day for the first 3 days, with the dose increased to 30 mg/day for the remaining 27 days. Each subject was checked as an outpatient every week for the 4 weeks; at each visit (T0-T4) craving level was evaluated by the Alcohol Craving Scale (ACS), and abstinence from alcohol was assessed based on the individual's self-evaluation, family member interview, and the main biological markers of alcohol abuse. A self-reported alcohol intake was recorded as the mean number of standard drinks consumed per day. Results: Nine subjects completed the study; of these, two subjects continued to drink alcohol although they substantially reduced their daily drinks in the first week of treatment, whereas seven maintained abstinence throughout the experimental period. Craving was significantly reduced from the first week of the drug administration (p < 0.01) and remained so throughout the entire treatment period. Participants also reported that obsessional thinking about alcohol disappeared.
Values of gamma-glutamyltranspeptidase, alanine aminotransferase, and mean cellular volume significantly decreased by the end of the study. Tolerability was fair in all participants; headache, vertigo, nausea, constipation, diarrhea, abdominal pain, hypotension, increased sleepiness, and tiredness were present as side effects in the first stage of the treatment. No participants showed craving for the drug. Conclusions: With the limitations of the low number of individuals evaluated and the open design, this preliminary clinical study supports the preclinical evidence on the effect of baclofen in reducing alcohol intake. The anticraving properties of the drug suggest a possible role of baclofen in the treatment of individuals with alcohol problems. Answer: The research suggests that sleepiness combined with low alcohol intake does indeed result in greater impairment in women drivers, but women also have a better perception of their impairment compared to men. Specifically, one study found that while sleep restriction significantly worsened driving performance and subjective sleepiness in women, as it did in men, women showed no apparent adverse effects of alcohol alone on these indices. However, when alcohol was combined with sleep restriction, the effects were profound, yet women were aware of this enhanced sleepiness, unlike men (PUBMED:15532198). This awareness may contribute to the lower incidence of sleep- or alcohol-related crashes in women compared to men. Another study indicated that low blood alcohol concentration (<0.05 g/dL) may not significantly increase crash risk alone, but the combination of partial sleep deprivation and low blood alcohol concentration caused significant performance impairment. Women predicted crash risk more accurately than men in the partial sleep deprivation condition, but neither men nor women predicted the risk accurately in the sleep deprivation plus alcohol condition (PUBMED:15532199). Furthermore, a study on the effects of moderate sleep deprivation and low-dose alcohol on driving simulator performance and perception in young men found that combining low-dose alcohol with moderate sleep restriction results in significant decrements to subjective alertness and performance as well as to some driving performance and EEG parameters (PUBMED:17969466). In summary, while low alcohol intake exacerbates driving impairment due to sleepiness in both men and women, women seem to have a better perception of their impairment, which could potentially lead to safer driving behaviors under these conditions. However, the combination of sleep deprivation and alcohol still poses a significant risk for both sexes, and the perception of risk is not always accurate, especially when both factors are present (PUBMED:15532199).
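The driving-simulator studies summarized above repeatedly rely on 2 x 2 repeated-measures designs (normal versus restricted sleep crossed with placebo versus a low alcohol dose) to show that the combination impairs driving more than either factor alone. As a rough illustration of how such an interaction test is typically run, the following Python sketch builds an invented, balanced data set and fits a repeated-measures ANOVA with statsmodels; the package choice, the variable names, and every number are assumptions made for illustration, not material from the abstracts.

```python
# Hedged sketch only: a 2 x 2 repeated-measures analysis of the kind used in
# the driving-simulator studies above (sleep: normal vs restricted; drink:
# placebo vs alcohol). All participants, effect sizes and lane-drift values
# are invented for illustration; none of this is the studies' actual data.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(42)
rows = []
for subject in range(1, 13):                      # 12 hypothetical drivers
    for sleep in ("normal", "restricted"):
        for drink in ("placebo", "alcohol"):
            drift = (
                2.0                                        # baseline lane drifting
                + (3.0 if sleep == "restricted" else 0.0)  # invented sleep-loss effect
                + (1.0 if drink == "alcohol" else 0.0)     # invented alcohol effect
                + (2.5 if (sleep, drink) == ("restricted", "alcohol") else 0.0)
                + rng.normal(0.0, 0.8)                     # within-subject noise
            )
            rows.append(
                {"subject": subject, "sleep": sleep, "drink": drink, "drift": drift}
            )

df = pd.DataFrame(rows)
anova = AnovaRM(df, depvar="drift", subject="subject", within=["sleep", "drink"]).fit()
print(anova)  # F-tests for sleep, drink, and the sleep x drink interaction
```

With data shaped like this, a significant sleep x drink interaction is what corresponds to the reported finding that low alcohol doses disproportionately worsen driving in sleep-restricted drivers.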
Instruction: Is there a change in the quality of life comparing the micro-invasive glaucoma surgery (MIGS) and the filtration technique trabeculectomy in glaucoma patients? Abstracts: abstract_id: PUBMED:27848022 Is there a change in the quality of life comparing the micro-invasive glaucoma surgery (MIGS) and the filtration technique trabeculectomy in glaucoma patients? Purpose: This study was conducted to assess the impact on the Quality of Life (QOL) of micro-invasive glaucoma surgery (MIGS: iStent, Trabectome) and a penetrating technique such as Trabeculectomy (TE). Methods: This study evaluated 88 eyes of 88 open angle glaucoma patients undergoing glaucoma surgery: 43 (mean age 72.8 ± 8.8y, female 59.5 %, male 40.5 %) Trabectome (NeoMedix, Inc., Tustin, CA, USA), 20 (mean age 68.6 ± 16.4y, female 60 %, male 40 %) iStent (Glaucos Corporation, Laguna Hills, CA, USA), and 25 TE patients (mean age 74.2 ± 9.1y female 58.3 %, male 41.7 %). The National Eye Institute-Visual Functioning Questionnaire (VFQ-25) survey was used to assess the QOL at 6 months post surgery. The following 12 QOL parameters were evaluated: general health, ocular pain, general vision, near and distance activities, mental health, social functioning, role difficulties, dependency, driving, color vision, and peripheral vision. Intraocular pressure (IOP), number of topical medications, and visual acuity (VA) were examined preoperatively, 1 day, 6 weeks, 3 months, and 6 months post surgery. Statistical data were calculated using SPSS (v20.0, SPSS, Inc.). Results: There was no significant difference between TE and MIGS in the quality of life 6 months postoperatively. IOP was significantly lower in TE compared to MIGS at 6 weeks and 3 months postoperatively (p = 0.046 and p = 0.046). Number of medications was significantly decreased in TE compared to MIGS (p < 0.001). A significant difference in VA between TE and MIGS could be assessed at day 1 post-op (p = 0.011). Conclusion: In this study cohort, the QOL can be maintained by all three surgical techniques. Patients, however, need lower numbers of topical medication in TE, which would impact QOL even though it is not included in the NEI-VFQ-25. The decision of the most appropriate surgical technique should be made by including single QOL categories, IOP and glaucoma medication outcome. abstract_id: PUBMED:29934937 "Minimally Invasive Glaucoma Surgery (MIGS) Is a Poor Substitute for Trabeculectomy"-The Great Debate. Surgical treatment for glaucoma has undergone a dramatic change over the last decade. Trabeculectomy has been the main surgical procedure worldwide for almost 50 years. However, there is a growth in development of new novel devices and surgical techniques designed to lower intraocular pressure in a less invasive fashion. The term minimally invasive glaucoma surgery (MIGS) has been coined and is the subject of investment, debate and, increasingly, research. The position of MIGS in the glaucoma treatment paradigm is yet to be clearly defined and its ability to replace conventional filtration surgery remains debatable. In this paper two glaucoma specialists were invited to debate the motion that "MIGS is a poor substitute for trabeculectomy". abstract_id: PUBMED:29725860 Systematic Literature Review of Clinical and Economic Outcomes of Micro-Invasive Glaucoma Surgery (MIGS) in Primary Open-Angle Glaucoma. Introduction: Primary open-angle glaucoma is estimated to affect 3% of the population aged 40-80 years.
Trabeculectomy is considered the gold standard in surgical management of glaucoma; however, it is a technically complex procedure that may result in a range of adverse outcomes. Device-augmented, minimally invasive procedures (micro-invasive glaucoma surgeries, MIGS) have been developed aiming for safer and less invasive intraocular pressure (IOP) reduction compared with traditional surgery. Methods: This paper presents results from a systematic literature review conducted in accordance with National Institute for Health and Care Excellence requirements for the Medical Technology Evaluation Programme via multiple databases from 2005 to 2016. For clinical outcomes, randomized clinical trials (RCTs) comparing MIGS with trabeculectomy or other therapies, observational studies, and other non-RCTs were included. Clinical outcomes reviewed were the change from baseline in mean IOP levels and change in topical glaucoma medication. Safety was assessed by reported harm and adverse events. For economic evidence, trials on cost-effectiveness, cost-utility, cost-benefit, cost-consequences, cost-minimization, cost of illness, and specific procedure costs were included. Risk of bias was assessed for clinical studies using the Cochrane Risk of Bias tool. Results: A total of nine RCTs (seven iStents®, one Hydrus®, and one CyPass®), seven non-RCTs (three iStent®, three CyPass®, and one Hydrus®), and 23 economic studies were analyzed. While various forms of trabeculectomy can achieve postoperative IOP of between 11.0 and 13.0 mmHg, MIGS devices described in this review were typically associated with higher postoperative IOP levels. In addition, MIGS devices may result in increased hypotony rates or bleb needling in subconjunctival placed devices, requiring additional medical resources to manage. There is limited available evidence on the cost-effectiveness of MIGS and therefore it remains unclear whether the cost of using MIGS is outweighed by cost savings through decreased medication and need for further interventions. Conclusion: Larger randomized trials and real-world observational studies are needed for MIGS devices to better assess clinical and economic effectiveness. Given the shortage of published data and increasing use of such procedures, living systematic reviews may help to provide ongoing and timely evidence-based direction for clinicians and decision makers. This review highlights the current unmet need for treatments that are easy to implement and reduce long-term IOP levels without increasing postoperative aftercare and cost. Funding: Santen GmbH, Germany. abstract_id: PUBMED:28919702 Micro-invasive glaucoma surgery (MIGS): a review of surgical procedures using stents. Over the last decade several novel surgical treatment options and devices for glaucoma have been developed. All these developments aim to cause as little trauma as possible to the eye, to safely, effectively, and sustainably reduce intraocular pressure (IOP), to produce reproducible results, and to be easy to adopt. The term "micro-invasive glaucoma surgery (MIGS)" was used for summarizing all these procedures. Currently MIGS is gaining more and more interest and popularity. The possible reduction of the number of glaucoma medications, the ab interno approach without damaging the conjunctival tissue, and the probably safer procedures compared to incisional surgical methods may explain the increased interest in MIGS. 
The use of glaucoma drainage implants for lowering IOP in difficult-to-treat patients has been established for a long time, however, a variety of new glaucoma micro-stents are being manufactured by using various materials and are available to increase aqueous outflow via different pathways. This review summarizes published results of randomized clinical studies and extensive case report series on these devices, including Schlemm's canal stents (iStent®, iStent® inject, Hydrus), suprachoroidal stents (CyPass®, iStent® Supra), and subconjunctival stents (XEN). The article summarizes the findings of published material on efficacy and safety for each of these approaches. abstract_id: PUBMED:34865399 Strictly following the indications during the promotion of micro-invasive glaucoma surgeries Lots of new micro-invasive glaucoma surgeries (MIGS) are clinically available in recent two decades. The common characters of these surgeries are micro-invasive and non-filter bleb dependent. There are some problems during the promotion of the MIGS in China, like performing the MIGS with inappropriate indications. The MIGS procedures have more strict indications than traditional trabeculectomy and need more technical skills. To promote the popularization of MIGS and improve the clinical treatment of glaucoma, strictly following the indications and standardizing the surgical technique training are needed. (Chin J Ophthalmol, 2021, 57: 641-643). abstract_id: PUBMED:33150512 Effectiveness and limitations of minimally invasive glaucoma surgery targeting Schlemm's canal. Glaucoma surgery is performed to lower intraocular pressure (IOP); ideally, the IOP reduction is safely maintained for an extended period of time. Although trabeculectomy was considered the gold standard for glaucoma surgery for many years because of its effective IOP reduction, yet now it is considered unsafe because of serious complications. In recent years, minimally invasive glaucoma surgery (MIGS), which emphasizes safety and can be performed rapidly, has become widespread. Because MIGS does not involve conjunctival incisions, patients can undergo future trabeculectomy. If IOP reduction can be maintained safely, the number of anti-glaucoma drops can be reduced and visual function maintained, good outcomes for patients with glaucoma. Currently, many types of MIGS approved in Japan are reported to yield relatively good results, with targets of approximately 15-19 mmHg. However, the IOP-lowering effects of MIGS are limited. In procedures targeting Schlemm's canal, it is difficult to lower IOP beyond episcleral venous pressure. In some instances, a beneficial effect cannot be achieved if function is reduced beyond the collector channel. There are many unclear aspects regarding long-term outcomes following MIGS. Notably, investigation is ongoing to determine which patients are likely to benefit most from surgery. Based on previous reports, this review describes the characteristics and results of MIGS, approved in Japan, as well as underlying factors that affect the preoperative predictions and outcomes of the surgical procedure. abstract_id: PUBMED:27815624 Contralateral eye comparison study in MICS &amp; MIGS: Trabectome® vs. iStent inject®. 
Purpose: To compare the safety and efficacy profile after combined micro-incision cataract surgery (MICS) and micro-invasive glaucoma surgery (MIGS) with the ab interno trabeculectomy (Trabectome®) in one eye versus two iStent® inject devices in the contralateral eye in patients with open-angle glaucoma (OAG) and cataract. Methods: This retrospective, intraindividual eye comparison study included 27 patients (54 eyes) who were treated with combined MICS and ab interno trabeculectomy (group I, Trabectome®) in one eye and two iStent® inject devices (group II, GTS 400) in the fellow eye. Primary outcome measures included intraocular pressure (IOP) and glaucoma medication after 6 weeks, 3, 6, and 12 months follow-up. Secondary outcome measures were number of postoperative interventions, complications, and best-corrected visual acuity (BCVA). Results: Mean preoperative IOP decreased from 22.3 ± 3.7 mmHg in group I and 21.3 ± 4.1 mmHg in group II to 15.6 ± 3.6 mmHg for Trabectome (p < 0.001) and 14.0 ± 2.3 mmHg for iStent inject (p < 0.001) at 12 months after surgery without a significant difference between the two groups (p > 0.05). No vision-threatening complications such as choroidal effusion, choroidal hemorrhage, or infection occurred. In each group trabeculectomy had to be performed in two eyes due to insufficient IOP lowering effect. Conclusions: Ab interno trabeculectomy and iStent® inject were both effective in lowering IOP with a favourable and comparable safety profile in an intraindividual comparative study over a 12-months follow-up in OAG. However, longer follow-up of these patients will be necessary to determine long-term outcomes and to evaluate significant differences. abstract_id: PUBMED:33116356 Quality of Life After Combined Cataract and Minimally Invasive Glaucoma Surgery in Glaucoma Patients. Purpose: To determine the quality of life (QOL) in glaucoma patients undergoing combined cataract and minimally invasive glaucoma surgery from various perspectives ranging from personal, social, occupational life, and economic status. Settings And Design: A cross-sectional study design at King Fahd Hospital of the University, Khobar, Saudi Arabia. Methods: Patients undergoing phacoemulsification in conjunction with various forms of minimally invasive glaucoma surgery (MIGS) for each patient, including either Kahook Dual Blade (KDB) goniotomy, iStent, iStent inject and gonioscopy-assisted transluminal trabeculotomy (GATT), were included in the study between 2018 and 2019. Data were collected through a self-administered questionnaire based on the Visual Function Questionnaire (VFQ-25) for the 25-item National Eye Institute. Results: The study included 93 eyes of 78 patients (40 males and 38 females) who had MIGS: 50 KDB, 13 iStent, 23 iStent inject, and 7 GATT. An overall reduction in the number of anti-glaucoma medications (p<0.001) was statistically significant. In the study, 36.6% of patients had a better social life, but 85.2% had no change in occupational life. Eventually, 86% were satisfied with the operation's outcome, and 79% confirmed that the overall quality of life improved after the procedure. Conclusion: Evaluating QOL is a crucial component of glaucoma treatment. More research is needed on MIGS and their relationship to QOL. In the future, MIGS may provide the desired outcomes in controlling glaucoma and improving the QOL.
abstract_id: PUBMED:34825352 Standalone Implantation of 2-3 Trabecular Micro-Bypass Stents (iStent inject ± iStent) as an Alternative to Trabeculectomy for Moderate-to-Severe Glaucoma. Introduction: This retrospective consecutive study compared standalone implantation of multiple (2-3) trabecular micro-bypass stents (iStent inject ± iStent) (Multi-Stent group) vs trabeculectomy + mitomycin C (Trab group) in moderate to severe open-angle glaucoma (OAG). Methods: Eligible patients underwent Multi-Stent or Trab surgery from 2018 to 2020 and had at least 3-month follow-up; visual field mean deviation (VF MD) - 6 dB or worse; inadequate prior response to maximum medications ± laser procedures; and had trabeculectomy as their next planned intervention. Primary effectiveness, safety-adjusted treatment success, was defined as ≥ 20% intraocular pressure (IOP) reduction on the same or fewer medications, without clinically significant safety events (severe complications, secondary surgeries, reinterventions). Secondary effectiveness included mean IOP and medications; qualified and complete attainment of target IOP (≤ 21/18/15/12 mmHg and > 6 mmHg); health-economic and quality-of-life (QoL) measures; and 2-vs-3-stent subgroup analysis. Results: The baseline groups (n = 70 Multi-Stent/40 Trab) were similar: mean IOP (21.1 mmHg/22.3 mmHg); medications (2.87/3.10 medications); disease stage (30%/35% severe); VF MD (- 10.1 dB/- 10.4 dB); and mean last follow-up (LFU, 13.1 months/15.7 months) (all differences non-significant). Primary effectiveness: treatment success at LFU was 62.9% vs 30.0% in Multi-Stent vs Trab eyes, respectively (p = 0.001). Secondary effectiveness: At LFU in Multi-Stent vs Trab groups, respectively: mean IOP decreased by 31% to 14.2 mmHg (p < 0.001) vs by 43% to 12.5 mmHg (p < 0.001); mean medications decreased by 51% to 1.31 medications (p < 0.001) vs by 84% to 0.43 medications (p < 0.001). Multi-Stent eyes, compared to Trab eyes, had fewer visits ± reinterventions within 3 months (3.6 vs 6.1, p < 0.001); longer time to first reintervention (12.2 months vs 4.5 months, p = 0.01); fewer total reinterventions (0.26 vs 0.75, p = 0.006); and earlier lifting of postoperative restrictions (12.6 vs 32.1 days, p < 0.001). In 2-vs-3-stent analysis, there was a trend toward more 3-stent eyes achieving target IOP than 2-stent eyes. Visual fields remained stable in both Multi-Stent and Trab eyes. Conclusion: Implanting 2-3 trabecular micro-bypass stents was a viable alternative to trabeculectomy for moderate-to-severe OAG, with clinically appropriate IOP/medication reductions and higher safety-adjusted treatment success vs trabeculectomy. abstract_id: PUBMED:32936738 Minimally Invasive Glaucoma Surgery: A Critical Appraisal of the Literature. Micro- or minimally invasive glaucoma surgeries (MIGS) have been the latest addition to the glaucoma surgical treatment paradigm. This term refers not to a single surgery, but rather to a group of distinct procedures and devices that aim to decrease intraocular pressure. Broadly, MIGS can be categorized into surgeries that increase the trabecular outflow [Trabectome, iStent (first and second generations), Hydrus microstent, Kahook Dual Blade and gonioscopy-assisted transluminal trabeculotomy], surgeries that increase suprachoroidal outflow (Cypass microstent and iStent Supra), and conjunctival bleb-forming procedures (Xen gel stent and InnFocus microshunt).
Compared to traditional glaucoma surgeries, such as trabeculectomy and glaucoma drainage device implantation (Ahmed, Baerveldt, and Molteno valves), MIGS are touted to have less severe complications and shorter surgical time. MIGS represent an evolving field, and the efficacy and complications of each procedure should be considered independently, giving more importance to high-quality and longer-term studies. Answer: The quality of life (QOL) for glaucoma patients undergoing micro-invasive glaucoma surgery (MIGS) compared to those undergoing trabeculectomy does not show a significant difference 6 months postoperatively according to one study (PUBMED:27848022). This study used the National Eye Institute-Visual Functioning Questionnaire (VFQ-25) to assess QOL and found that all three surgical techniques (Trabectome, iStent, and trabeculectomy) maintained QOL. However, patients who underwent trabeculectomy required fewer topical medications, which could impact QOL even though it was not included in the NEI-VFQ-25. Another study focusing on combined cataract and MIGS procedures reported that patients experienced an overall improvement in QOL after the procedure, with a significant reduction in the number of anti-glaucoma medications (PUBMED:33116356). This suggests that MIGS, when combined with cataract surgery, can have a positive impact on patients' lives. However, the debate continues regarding the position of MIGS in the glaucoma treatment paradigm and its ability to replace conventional filtration surgery like trabeculectomy (PUBMED:29934937). While MIGS is associated with higher postoperative intraocular pressure (IOP) levels compared to various forms of trabeculectomy, it aims for safer and less invasive IOP reduction (PUBMED:29725860). The long-term outcomes and cost-effectiveness of MIGS are still areas that require further research to better assess their clinical and economic effectiveness. In summary, the studies suggest that MIGS can maintain QOL for glaucoma patients similarly to trabeculectomy in the short term, with the added benefit of potentially reducing the need for glaucoma medications. However, the long-term effects on QOL and the overall position of MIGS as a substitute for trabeculectomy remain subjects for further investigation.
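The quality-of-life comparison underpinning the answer above rests on comparing composite NEI VFQ-25 scores between the trabeculectomy and MIGS groups. As a purely illustrative sketch of how such a between-group comparison can be run, the Python snippet below applies Welch's t-test to invented composite scores; the original study analyzed its data in SPSS, so this is one reasonable stand-in rather than a reproduction of its method, and every value shown is hypothetical.

```python
# Hedged sketch only: comparing composite VFQ-25 quality-of-life scores between
# a trabeculectomy (TE) group and a MIGS group. Scores are invented placeholders;
# Welch's t-test is an assumed, not the study's documented, analysis choice.
from scipy import stats

vfq25_te = [78.1, 82.4, 75.0, 88.3, 79.5, 84.2, 77.8, 81.0]     # hypothetical TE patients
vfq25_migs = [80.2, 76.9, 83.1, 79.4, 85.0, 78.6, 82.3, 80.8]   # hypothetical MIGS patients

t_stat, p_value = stats.ttest_ind(vfq25_te, vfq25_migs, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")  # p > 0.05 would mirror "no QOL difference"
```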
Instruction: Do women in rural areas of Serbia rarely apply preventive measures against cervical cancer? Abstracts: abstract_id: PUBMED:24697015 Do women in rural areas of Serbia rarely apply preventive measures against cervical cancer? Background/aim: The incidence of cervical cancer in Central Serbia has the higher rate as compared with that in other European countries. Considering mortality rate for cervical cancer, the standardized rate in Serbia is 10.1 per 10,000 females, which is the second highest one after that in Romania with 13.0. The aim of this study was to examine application of preventive measures for cervical cancer in women both from rural and urban areas in Serbia and if they are associated with sociodemographic characteristics and sexual behaviour. Methods: We analyzed secondary data of the 2006 National Health Survey of the population of Serbia focused on characteristics of adult females aged 25 to 65 years (5.314 in total) taking into consideration that programme of the organized screening will include female population aged over 25 years. Results: Respondents from rural areas have gynecological examination less than once a year in comparison with those from urban areas (OR = 0.60, 95% CI 0.54-0.68). Less women from rural areas did Pap test during the last 12 months in comparison with respondents from urban areas (OR = 0.55, 95% CI 0.48-0.64). Respondents from urban areas less often do the Pap test on doctor's advice in comparison with those from rural one (OR = 0.55, 95% CI 0.42-0.62). Conclusion: This study shows that women in rural areas rarely implement preventive gynecological measures against cervical cancer in comparison with those in urban areas. Implementation of preventive measures among rural women is conditioned by lower levels of education and lower socioeconomic status. abstract_id: PUBMED:28409674 Comparing Rural and Urban Cervical and Breast Cancer Screening Rates in a Privately Insured Population. Low preventive screening varies by region and contributes to poor outcomes for breast and cervical cancer. Previous comparative urban and rural research on preventive screening has focused on government programs. This study quantified and compared rural and urban preventive cancer screening rates for women who were privately insured. National Quality Forum measures were used to calculate rates for women within rural and urban parts of the same Hospital Referral Region (HRR) using claims data. Mammography screening rates for women age 24 to 69 years were 77.1% in 2011 and 76.1% in 2008. Compared to urban women, mammography screening rates for women visiting rural physicians were lower in 42%, higher in 2% and identical in 56% of HRRs. Cervical cancer screening rates for women age 21 to 64 years were 82.9% in 2011 and 83.5% in 2008. Cervical cancer screening rates among women who saw rural physicians were lower in 55%, higher in 4%, and identical in 42% of HRRs. HRRs where rural areas underperformed urban areas increased between 2008 and 2011 for both screenings. Moderate but notable differences in women's preventive screening rates between rural and urban physicians highlight the need for practical solutions that increase use of screening services and reduce barriers to services in rural areas. abstract_id: PUBMED:38282637 Evaluating an Enterprise-Wide Initiative to enhance healthcare coordination for rural women Veterans using the RE-AIM framework.
Introduction: The Veterans Health Administration (VA) Office of Rural Health (ORH) and Office of Women's Health Services (OWH) in FY21 launched a three-year Enterprise-Wide Initiative (EWI) to expand access to preventive care for rural, women Veterans. Through this program, women's health care coordinators (WHCC) were funded to coordinate mammography, cervical cancer screening and maternity care for women Veterans at selected VA facilities. We conducted a mixed-methods evaluation using the RE-AIM framework to assess the program implementation. Materials And Methods: We collected quantitative data from the 14 program facilities on reach (i.e., Veterans served by the program), effectiveness (e.g., cancer screening compliance, communication), adoption, and maintenance of women's health care coordinators (WHCC) in FY2022. Implementation of the program was examined through semi-structured interviews with the facility WHCC funding initiator (e.g., the point of contact at facility who initiated the request for WHCC funding), WHCCs, and providers. Results: Reach. The number of women Veterans and rural women Veterans served by the WHCC program grew (by 50% and 117% respectively). The program demonstrated effectiveness as screening rates increased for cervical and breast cancer screening (+0.9% and +.01%, respectively). Also, maternity care coordination phone encounters with Veterans grew 36%. Adoption: All facilities implemented care coordinators by quarter two of FY22. Implementation. Qualitative findings revealed facilitators and barriers to successful program implementation and care coordination. Maintenance: The EWI facilitated the recruitment and retention of WHCCs at respective VA facilities over time. Implications: In rural areas, WHCCs can play a critical role in increasing Reach and effectiveness. The EWI demonstrated to be a successful care coordination model that can be feasibly Adopted, Implemented, and Maintained at rural VA facilities. abstract_id: PUBMED:1738872 Preventive health behavior among black and white women in urban and rural areas. The relationship of race to preventive health behavior among women is examined using data from the 1985 National Health Interview Survey. We find that black women are less likely to engage in primary prevention behaviors such as exercising, non-smoking and maintaining a favorable weight. However, black women are more likely to engage in secondary prevention behaviors such as receiving a Pap test or a breast exam. These findings are surprising as they indicate a change in secondary prevention behavior among black women. The racial differences in exercising, maintaining a favorable weight and receiving a Pap test or a breast exam cannot fully be explained by the differing levels of socio-economic status, measured by education and income. However, the higher percentage of smoking among black women is due to their lower levels of education. Urban/rural residence modifies the effect of race on smoking and receiving a Pap test. Black women in urban areas are most likely to be smokers. Almost no difference exists between white women in urban and rural areas concerning their likelihood of receiving a Pap test, we find that black women in urban areas are much more likely to be screened for cervical cancer than black women in rural areas. abstract_id: PUBMED:32112367 A Study on Knowledge and Awareness of Cervical Cancer Among Females of Rural and Urban Areas of Haryana, North India. 
Lack of awareness of screening methods, risk factors, and symptoms may lead to late diagnosis and poor prognosis of cervical cancer. The plan of this study was to assess the level of awareness about cervical cancer and HPV vaccine among females of rural and urban areas of Haryana, India. This cross-sectional study was performed using a comprehensive self-designed questionnaire on 1500 women of urban (700) and rural (800) background aged 18-65 years, evaluating their knowledge for cervical cancer and screening, HPV infection and its preventive measure, and symptoms and risk factors. Data obtained was analyzed and interpreted by using simple percentages and bar charts. Most of the participants were aged between 21 and 30 years and had college level education. Majority of the women from rural areas had poor knowledge about cervical cancer (55%) and its screening (75%), HPV infection (87.5%), and HPV vaccine (95%) compared with urban areas. Knowledge about symptoms and risk factors was very low in both rural and urban areas. Whatever little knowledge the women had about cervical cancer was from college education, friends, neighbors, relatives, and medical practitioner or doctors. The survey pointed to the critical need to educate women about cervical cancer and its early diagnosis, related risk factors, symptoms, and preventive measures which can be achieved by launching extensive awareness programs for educating females about cervical cancer in India. abstract_id: PUBMED:34081371 Higher prevalence of hysterectomy among rural women than urban women: Implications for measures of disparities in uterine and cervical cancers. Purpose: Differences in hysterectomy prevalence by rural or urban residence could distort comparisons of rural-urban cervical and uterine cancer incidence. Using data from a large population-based survey, we sought to understand whether hysterectomy prevalence varies by rural or urban residence and whether the relationship between hysterectomy prevalence and rurality varies by race or ethnicity. Methods: Our analysis included 197,759 female respondents to the 2018 Behavioral Risk Factor Surveillance System, aged 20-79 years. We calculated population weighted proportions and 95% confidence intervals for hysterectomy prevalence, stratified by rural-urban residence and 5-year age groups. We also report estimates of hysterectomy prevalence by rural-urban residence for specific race and ethnic groups. Findings: Hysterectomy prevalence increased with age and was more common among rural women than urban women. The largest absolute difference occurred among women aged 45-49 years; 28.6% of rural women (95% CI: 25.1-32.2) and 16.6% of urban women (95% CI: 15.3-17.8) reported a hysterectomy. For hysterectomy prevalence by race and ethnicity, rural estimates were higher than urban estimates for the following groups of women: non-Hispanic Asian, non-Hispanic other race, non-Hispanic Black, and non-Hispanic White. Among Hispanic women and non-Hispanic American Indian/Alaska Native women, rural-urban differences in hysterectomy prevalence were not statistically different at the 95% confidence level. Conclusions: Our results suggest that variation in hysterectomy prevalence, if not adjusted in the analysis, could produce distorted comparisons in measures of the relationship between rurality and uterine and cervical cancer rates. The magnitude of this confounding bias may vary by race and ethnicity. 
abstract_id: PUBMED:37170251 Rural-urban disparities in preventive breast and cervical cancer screening among women with early-onset dementia. Background: The early onset of Alzheimer's disease and related dementias (ADRD) before age 65 can introduce life and health care complications. Preserving an early-onset ADRD patient's daily functioning longer and delaying declines in health from non-ADRD conditions become important preventive goals. This study examined the differences in utilization of preventive cancer screenings between patients with and without early-onset ADRD, and compared utilization of the screenings in rural versus urban areas among women with early-onset ADRD in the United States. Methods: We conducted a cross-sectional study of women aged 40 to 64 years eligible for mammogram and cervical cancer screenings using commercial insurance claims from 2012 to 2018. We measured the use of biennial mammogram among women 50 to 64 years old, and the use of triennial Pap smear test among women 40 to 64 years old. We used inverse probability weighted logistic regressions to estimate the odds of receiving preventive cancer screenings by the presence of early-onset ADRD or cognitive impairments (CI). We used multivariable logistic regressions to estimate the odds of receiving preventive cancer screenings by rural or urban residence among women with early-onset ADRD/CI. Results: Among 6,349,308 women in the breast cancer screening sample (mean [SD] age, 56.52 [4.03] years), 36,131 had early-onset ADRD/CI (mean [SD] age, 57.99 [3.98] years). Among 6,583,088 women in the cervical cancer screening sample (mean [SD] age, 52.37 [6.81] years), 30,919 had early-onset ADRD/CI (mean [SD] age, 55.79 [6.22] years). Having early-onset ADRD/CI was associated with lower utilization of mammogram (OR: 0.92, 95% CI: 0.90-0.95). No significant difference was observed in Pap smear screening (OR: 0.99, 95% CI: 0.96-1.02) between patients with and without early-onset ADRD/CI. Among patients with early-onset ADRD/CI, those in rural areas were less likely than those in urban areas to have mammograms (OR: 0.91, 95% CI: 0.85-0.97) and Pap smears (OR: 0.65, 95% CI: 0.61-0.71). Conclusions: The observed pattern of rural-urban differences in cancer screening in our study emphasizes the need for efforts to promote evidence-based, individualized decision-making processes in the early-onset ADRD population. abstract_id: PUBMED:18820832 Communications about cervical cancer between women and gynecologists in Serbia. Objective: The age-standardized incidence rate of cervical cancer in Serbia is 27.2 per 100,000 women, i. e., twice as high as in western European countries. This paper explores the communication which occurs between women and gynecologists in Serbia in relation to cervical cancer screening. Methods: Our study was conducted in two phases: a qualitative phase (focus group discussions and in-depth interviews with women) and a quantitative phase (community-based survey). This paper reports the findings from both phases, and in particular, the in-depth interviews with 22 women with different socio-economic backgrounds residing in the capital city and a regional town. To illustrate women's experiences and attitudes, we used interview excerpts. Results: Our findings indicate that there is poor communication between women and gynecologists and an absence of proper counseling. 
Women's lack of knowledge about reproductive health issues, poor attitudes of gynecologists, and personal barriers that women experience in accessing health care render preventive practices a low priority both for women and gynecologists. Conclusion: We recommend different educational and organizational strategies that may improve the counseling skills of gynecologists and ultimately reduce the prevalence of cervical cancer in Serbia. abstract_id: PUBMED:12115366 Breast and cervical carcinoma screening practices among women in rural and nonrural areas of the United States, 1998-1999. Background: Prior studies have suggested that women living in rural areas may be less likely than women living in urban areas to have had a recent mammogram and Papanicolaou (Pap) test and that rural women may face substantial barriers to receiving preventive health care services. Methods: The authors examined both breast and cervical carcinoma screening practices of women living in rural and nonrural areas of the United States from 1998 through 1999 using data from the Behavioral Risk Factor Surveillance System. The authors limited their analyses of screening mammography and clinical breast examination to women aged 40 years or older (n = 108,326). In addition, they limited their analyses of Pap testing to women aged 18 years or older who did not have a history of hysterectomy (n = 131,813). They divided the geographic areas of residence into rural areas and small towns, suburban areas and smaller metropolitan areas, and larger metropolitan areas. Results: Approximately 66.7% (95% confidence interval [CI] = 65.8% to 67.6%) of women aged 40 years or older who resided in rural areas had received a mammogram in the past 2 years, compared with 75.4% of women living in larger metropolitan areas (95% CI = 74.9% to 75.9%). About 73.0% (95% CI = 72.2% to 73.9%) of women aged 40 years or older who resided in rural areas had received a clinical breast examination in the past 2 years, compared with 78.2% of women living in larger metropolitan areas (95% CI = 77.8% to 78.7%). About 81.3% (95% CI = 80.6% to 82.0%) of 131,813 rural women aged 18 years or older who had not undergone a hysterectomy had received a Pap test in the past 3 years, compared with 84.5% of women living in larger metropolitan areas (95% CI = 84.1% to 84.9%). The differences in screening across rural and nonrural areas persisted in multivariate analysis (P < 0.001). Conclusions: These results underscore the need for continued efforts to provide breast and cervical carcinoma screening to women living in rural areas of the United States. abstract_id: PUBMED:38249409 Demographic and socioeconomic factors associated with cervical cancer screening among women in Serbia. Objectives: Effective reduction of cervical cancer incidence and mortality requires strategic measures encompassing the implementation of a cost-effective screening technology. Serbia has made significant strides, introducing organized cervical cancer screening in 2012. However, various impediments to screening implementation persist. The aim of the study was to estimate the socioeconomic factors associated with cervical cancer screening among women in Serbia. Methods: Data from the 2019 National Health Survey of the population of Serbia were used in this study. The study is a cross-sectional survey on a representative sample of the population of Serbia. The total number of participants analyzed in the survey was 6,747.
Results: In Serbia, 67.2% of women have done a Pap test at any time during their lives, of which 46.1% of women have undergone cervical cancer screening in the past 3 years. About a quarter of women have never undergone a Pap test in their life (24.3%). The probability of never having a Pap test was as follows: the youngest age group (15-24 years) is 1.3 times more likely than the oldest age group (OR = 1.31), unmarried women 0.3 times more often than married women (OR = 0.37), respondents with basic education 0.9 times more often than married women (OR = 0.98), the women of lower socioeconomic status 0.5 times more often than respondents of high socioeconomic status (OR = 0.56). Conclusion: Enhancement of the existing CCS would be the appropriate public health approach to decrease the incidence and mortality of cervical cancer in the Republic of Serbia. Answer: Yes, women in rural areas of Serbia rarely apply preventive measures against cervical cancer. The study reported in abstract PUBMED:24697015 found that women in rural areas have gynecological examinations less frequently than those in urban areas and are less likely to have had a Pap test in the last 12 months. The implementation of preventive measures among rural women is influenced by lower levels of education and lower socioeconomic status. Additionally, abstract PUBMED:1738872 indicates that black women in urban areas are much more likely to be screened for cervical cancer than black women in rural areas, suggesting that urban/rural residence modifies the effect of race on receiving a Pap test. Furthermore, abstract PUBMED:32112367 highlights that the majority of women from rural areas had poor knowledge about cervical cancer and its screening, HPV infection, and the HPV vaccine compared to those from urban areas. This lack of awareness may contribute to the lower application of preventive measures in rural areas. Abstract PUBMED:38249409 also supports these findings by showing that a significant proportion of women in Serbia have never undergone a Pap test in their life, with the probability being higher among the youngest age group, unmarried women, those with basic education, and women of lower socioeconomic status, which are factors often more prevalent in rural populations.
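Several of the abstracts above report rural-urban screening differences as odds ratios with 95% confidence intervals (for example, OR = 0.55, 95% CI 0.48-0.64 for recent Pap testing among rural versus urban women). For readers unfamiliar with how such figures are derived, the short sketch below computes an odds ratio and a Wald confidence interval from a 2 x 2 table; the counts are invented for illustration and do not come from any of the surveys cited.

```python
# Hedged sketch only: odds ratio with a 95% Wald confidence interval from a
# 2x2 table, the basic quantity behind the screening comparisons reported above.
# The counts below are invented and do not reproduce any cited survey.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b = rural women with/without a recent Pap test, c/d = urban women with/without."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# invented counts for illustration
or_, lo, hi = odds_ratio_ci(a=300, b=700, c=450, d=550)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")   # ~0.52 (0.44-0.63) with these counts
```

In the national surveys cited, the same quantity is usually estimated through logistic regression so that age, education, and socioeconomic status can be adjusted for, but the interpretation of the resulting OR and CI is the same.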
Instruction: Can analyses of electronic patient records be independently and externally validated? Abstracts: abstract_id: PUBMED:31190405 Effects of electronic medical records on patient safety culture: The perspective of nurses. Objectives: This study investigates the effects of nurses' views regarding electronic medical records on patient safety culture. Methods: The implementation part of the study was conducted with nurses working in seven state hospitals in the Burdur province of Turkey. The data were collected between 15 March and 20 April 2018. Correlation and multiple regression analyses were performed to evaluate the relationships among the variables in the study. In addition, descriptive analyses (mean, standard deviation) and Cronbach α coefficients of reliability of the scales were also used. Results: The results of the analyses revealed that control variables (gender, educational level, age, etc) and all dimensions of electronic medical records affected all three dimensions of patient safety culture. The control variables and all dimensions of electronic medical records explained 41% of the total variance in perceptions of process, 42.5% of the total variance in management support for patient safety, and 27.9% of the total variance in perceptions of safety. Discussion: This study provides insight concerning the effects of nurses' views of electronic medical records on patient safety culture. The results of the study reveal that nurses' views of electronic medical records affect the perception of patient safety culture positively. Conclusions: It is recommended that further studies be conducted on topics such as the use of medical records and the development of patient safety. Health care managers should encourage nurses to undergo training and educational efforts on electronic medical records and patient safety. abstract_id: PUBMED:33684209 Information sharing via electronic health records in team-based care: the patient perspective. Purpose: Team-based care offers potential improvements in communication, care coordination, efficiency, value and satisfaction levels of both patients and providers. However, the question of how to balance the need for information in team-based care without disregarding patient preferences remains unanswered. This study aims to determine patients' perceptions of information sharing via electronic health records (EHRs) in team-based care. Methods: This qualitative study used a focus group approach. Participants were primary care patients and representative members from minority groups (ethnic, racial or social). Audio recordings of the sessions were transcribed and coded consistent with thematic analyses. Results: The analysis revealed that the participants in the focus groups had diverging levels of understanding and personal beliefs around five major themes including (i) patient's understanding of the care team, (ii) perceptions of electronic health records, (iii) defining basic health care information, (iv) sharing information with the health care team and (v) patient's trust in doctors and the health care system. Conclusions: The participants of our focus groups value team-based care and view patients as a critical part of those teams. With respect to electronic health records, our participants recognized their importance but had concerns about inaccuracies and limited options to correct errors in their records. 
In general, participants were willing to share basic information but disagreed about what information should be considered to be basic. Moreover, based on their trust and comfort level, many participants want to control what information is recorded and shared in the electronic health record. abstract_id: PUBMED:32431170 Improved efficiency of patient admission with electronic health records in neurosurgery. Background: Electronic health records (EHRs) may be controversial but they have the potential to improve patient care. We investigated whether the introduction of an electronic template-based admission form for the collection of information about the patient's medical history and neurological and clinical state at admission in the neurosurgical unit might have an impact on the quality of documentation in a discharge record and the amount of time taken to produce this documentation. Method: A new digital template-based admission form (EHR) was developed and assessed with QNOTE, an assessment tool of medical notes with standardised criteria and the possibility to benchmark the quality of documentations. This was compared to 30 prior paper-based handwritten documentations (HWD) regarding the utilisation of these medical notes for dictation of medical discharge records. Results: Implementation of the EHR significantly improved the quality of patient admission documentation with a QNOTE mean grand score of 87 ± 22 (p < 0.0001) compared to prior HWD with 44 ± 30. The mean documentation time for HWD was 8.1 min ± 4.1 min and the dictation time for discharge records was 10.6 min ± 3.5 min. After implementation of EHR, the documentation time increased slightly to 9.6 min ± 2.3 min (n.s.), while the time for dictation of discharge records was reduced to 5.1 min ± 1.2 min (p < 0.0001). There was a clear correlation between a higher quality of documentation and a higher needed documentation time as well as higher quality of documentation and lower dictation times of discharge records. Conclusion: Implementation of the EHR improved the quality of patient admission documentation and reduced the dictation time of discharge records. Implications: It is crucial to involve stakeholders and users of EHRs in a timely manner during the stage of development and implementation phase to ensure optimal results and better usability. abstract_id: PUBMED:25411220 The impact of using electronic patient records on practices of reading and writing. The aim of this study was to investigate the use of electronic patient records in daily practice. In four wards of a large hospital district in Finland, N = 43 patients' care and activities were observed and analysed in terms of the Grounded Theory method. The findings revealed that using electronic patient records created a particular process of writing and reading. Wireless technology enabled simultaneous patient involvement and point-of-care documentation, additionally supporting real-time reading. Remote and retrospective documentation was distant in terms of both space and time. The remoteness caused double documentation, reduced accuracy and less-efficient use of time. 'Non-reading' practices were witnessed in retrospective reading, causing delays in patient care and increase in workload. Similarly, if documentation was insufficient or non-existent, the consequences were found to be detrimental to the patients. The use of an electronic patient record system has a significant impact on patient care.
Therefore, it is crucial to develop wireless technology and interdisciplinary collaboration in order to improve and support high-quality patient care. abstract_id: PUBMED:24726853 Predicting patient acuity from electronic patient records. Background: The ability to predict acuity (patients' care needs) would provide a powerful tool for health care managers to allocate resources. Such estimations and predictions for the care process can be produced from the vast amounts of healthcare data using information technology and computational intelligence techniques. Tactical decision-making and resource allocation may also be supported with different mathematical optimization models. Methods: This study was conducted with a data set comprising electronic nursing narratives and the associated Oulu Patient Classification (OPCq) acuity. A mathematical model for the automated assignment of patient acuity scores was utilized and evaluated with the pre-processed data from 23,528 electronic patient records. The methods to predict patient's acuity were based on linguistic pre-processing, vector-space text modeling, and regularized least-squares regression. Results: The experimental results show that it is possible to obtain accurate predictions about patient acuity scores for the coming day based on the assigned scores and nursing notes from the previous day. Making same-day predictions leads to even better results, as access to the nursing notes for the same day boosts the predictive performance. Furthermore, textual nursing notes allow for more accurate predictions than previous acuity scores. The best results are achieved by combining both of these information sources. The developed model achieves a concordance index of 0.821 when predicting the patient acuity scores for the following day, given the scores and text recorded on the previous day. Conclusions: By applying language technology to electronic patient documents it is possible to accurately predict the value of the acuity scores of the coming day based on the previous day's assigned scores and nursing notes. abstract_id: PUBMED:32757888 The Relationship between the Nurses' Perception of Electronic Health Records and Patient Privacy. The aim of this study is to examine the relationship between the nurses' perception of electronic health records and patient privacy regarding patient information and socio-economic variables. The implementation part of the study was conducted on nurses working in a public hospital located in the Kilis province in Turkey. As a result of the analysis, the increase in the perceptions of the participants regarding the electronic health records statistically increases their perceptions about patient privacy. It was also determined that the scores of nurses regarding patient privacy showed statistically significant differences based only on the total working time in the current unit. abstract_id: PUBMED:29194056 Neonatal Nurses Experience Unintended Consequences and Risks to Patient Safety With Electronic Health Records. In this article, we examine the unintended consequences of nurses' use of electronic health records. We define these as unforeseen events, change in workflow, or an unanticipated result of implementation and use of electronic health records. Unintended consequences experienced by nurses while using electronic health records have been well researched. However, few studies have focused on neonatal nurses, and it is unclear to what extent unintended consequences threaten patient safety.
A new instrument called the Carrington-Gephart Unintended Consequences of Electronic Health Record Questionnaire has been validated, and secondary analysis using the tool explored the phenomena among neonatal nurses (N = 40). The purposes of this study were to describe unintended consequences of use of electronic health records for neonatal nurses and to explore relationships between the phenomena and characteristics of the nurse and the electronic health record. The most frequent unintended consequences of electronic health record use were due to interruptions, followed by a heavier workload due to the electronic health record, changes to the workflow, and altered communication patterns. Neonatal nurses used workarounds most often with motivation to better assist patients. Teamwork was moderately related to higher unintended consequences including patient safety risks (r = 0.427, P = .007), system design (r = 0.419, P = .009), and technology barriers (r = 0.431, P = .007). Communication about patients was reduced when patient safety risks were high (r = -0.437, P = .003). By determining the frequency with which neonatal nurses experience unintended consequences of electronic health record use, future research can be targeted to improve electronic health record design through customization, integration, and refinement to support patient safety and better outcomes. abstract_id: PUBMED:21416889 Electronic records in psychiatry Electronic patient records are very slowly being introduced into French hospitals. It is therefore necessary to consider the factors which contribute to or, on the contrary, hinder the implementation of this tool in a public psychiatric institution. abstract_id: PUBMED:27678461 IT-CARES: an interactive tool for case-crossover analyses of electronic medical records for patient safety. Background: The significant risk of adverse events following medical procedures supports a clinical epidemiological approach based on the analyses of collections of electronic medical records. Data analytical tools might help clinical epidemiologists develop more appropriate case-crossover designs for monitoring patient safety. Objective: To develop and assess the methodological quality of an interactive tool for use by clinical epidemiologists to systematically design case-crossover analyses of large electronic medical records databases. Material And Methods: We developed IT-CARES, an analytical tool implementing case-crossover design, to explore the association between exposures and outcomes. The exposures and outcomes are defined by clinical epidemiologists via lists of codes entered via a user interface screen. We tested IT-CARES on data from the French national inpatient stay database, which documents diagnoses and medical procedures for 170 million inpatient stays between 2007 and 2013. We compared the results of our analysis with reference data from the literature on thromboembolic risk after delivery and bleeding risk after total hip replacement. Results: IT-CARES provides a user interface with 3 columns: (i) the outcome criteria in the left-hand column, (ii) the exposure criteria in the right-hand column, and (iii) the estimated risk (odds ratios, presented in both graphical and tabular formats) in the middle column. The estimated odds ratios were consistent with the reference literature data. Discussion: IT-CARES may enhance patient safety by facilitating clinical epidemiological studies of adverse events following medical procedures. 
The tool's usability must be evaluated and improved in further research. abstract_id: PUBMED:32593879 Lost in translation - Silent reporting and electronic patient records in nursing handovers: An ethnographic study. Background: Electronic patient records are increasingly being implemented in hospitals around the world to promote a process of sharing information that is reliable, more efficient and will promote patient safety. Evidence suggests that adaptations are being made to how such technologies are used in practice. Few studies have explicitly aimed to explore how electronic patient records influence nurses' communication of patient information in clinical practice. Objective: To enhance understanding of the impact of electronic patient records on nurses' cognitive work, by exploring how nurses engage with the electronic patient record during handover and the representation of patient information. Methods: Ethnographic fieldwork was conducted in a Norwegian hospital cancer ward where computer-mediated handover referred to as 'silent reporting' had been implemented. The fieldwork included five months of participant observation and nine semi-structured interviews with registered nurses. Participating nurses were selected to ensure representation by clinical experience. The analysis of field notes and transcripts was partly performed in NVivo 11, following thematic analysis (Braun and Clarke 2006). Findings: Four themes emerged: 1) nurses' complex and dynamic workflow necessitated talk in handovers, 2) oral communication allowed nurses to share sensitive information on psychosocial issues, and 3) to solve uncertainties considered unsuited for the record, and 4) talk facilitated professional and moral support in clinical decision-making, as collective achievements. Talk was thereby found to be essential to nurses' cognitive work and professional knowledge, allowing for the translation and interplay between the embodied, informal knowledge of the individual nurse, and formal knowledge inscribed in record notes. Conclusions: Silent reporting has implications for nurses' cognitive work and professional knowledge. With the sole reliance on the electronic patient record as a handover tool, it is not only information essential to nurses' evolving, dynamic, and contextualised understanding of the patient's situation that is lost in translation, but also the visibility and legitimacy of nursing knowledge. Nurses' continued practices of talk in handovers can be seen as efforts to counteract these effects in ways that also increased the relevance and usefulness of the electronic patient record as a mediator of knowledge. Answer: The provided abstracts do not directly address the question of whether analyses of electronic patient records (EPRs) can be independently and externally validated. However, some of the abstracts do touch on related aspects of EPRs, such as their impact on patient safety, efficiency, and the quality of documentation, which could be relevant to the broader context of validation. For instance, the study in abstract PUBMED:31190405 suggests that nurses' views of electronic medical records positively affect the perception of patient safety culture. This implies that EPRs could potentially be used to enhance patient safety, which is an aspect that could be subject to validation.
Abstract PUBMED:32431170 reports that the implementation of an electronic template-based admission form improved the quality of patient admission documentation and reduced the dictation time of discharge records. This suggests that EPRs can improve efficiency and documentation quality, which are measurable outcomes that could be validated. Abstract PUBMED:24726853 discusses the use of computational intelligence techniques to predict patient acuity from electronic patient records, achieving a concordance index of 0.821 when predicting patient acuity scores. This indicates that EPR data can be used for predictive analyses, which could be validated by comparing predicted outcomes with actual patient outcomes. Abstract PUBMED:27678461 describes IT-CARES, an interactive tool for case-crossover analyses of electronic medical records for patient safety. The tool was tested on a large database and provided results consistent with reference literature data. This suggests that analyses of EPRs can be conducted in a way that allows for comparison with established data, which is a form of validation. While these abstracts provide insights into the potential for EPR analyses to be validated, they do not specifically discuss independent or external validation processes. Independent and external validation would typically involve third-party verification of the analysis methods and results, ensuring that the findings are reliable, accurate, and generalizable beyond the specific context in which they were generated.
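As a rough illustration of the kind of pipeline described in PUBMED:24726853 (vector-space text modeling with regularized least-squares regression, evaluated by a concordance index), the sketch below trains a ridge regression on TF-IDF features of nursing notes to predict next-day acuity scores. It is a minimal sketch on placeholder data, assuming a generic scikit-learn setup, not the authors' actual implementation.

```python
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Placeholder data: each note is paired with the acuity score assigned the next day.
notes = [
    "patient restless overnight, needed assistance with mobility and meals",
    "slept well, independent with hygiene, vital signs stable",
    "confused, frequent monitoring required, intravenous medication given",
    "ambulating with minimal help, pain controlled with oral analgesia",
]
next_day_acuity = [18.0, 9.0, 22.0, 11.0]

# Vector-space representation of the free-text nursing narratives.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(notes)

# Regularized least-squares regression (ridge) from text features to acuity.
model = Ridge(alpha=1.0)
model.fit(X, next_day_acuity)
predicted = model.predict(X)

def concordance_index(truth, pred):
    """Fraction of comparable pairs ordered the same way by truth and prediction."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(truth)), 2):
        if truth[i] == truth[j]:
            continue
        comparable += 1
        if (truth[i] - truth[j]) * (pred[i] - pred[j]) > 0:
            concordant += 1
        elif pred[i] == pred[j]:
            concordant += 0.5
    return concordant / comparable

print("in-sample concordance index:", concordance_index(next_day_acuity, predicted))
```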
Instruction: Is the Nociceptive Blink Reflex Associated with Psychological Factors in Healthy Participants? Abstracts: abstract_id: PUBMED:27128475 Is the Nociceptive Blink Reflex Associated with Psychological Factors in Healthy Participants? Aims: To evaluate the possible association between the nociceptive blink reflex (nBR) and various pain-related psychological measures: the Anxiety Sensitivity Index-3 (ASI-3), the Fear of Pain Questionnaire III (FPQ-III), the Pain Vigilance and Awareness Questionnaire (PVAQ), the Somatosensory Amplification Scale (SSAS), the Pain Catastrophizing Scale (PCS), and the Situational Pain Catastrophizing Scale (S-PCS). Methods: The nBR was evaluated in 21 healthy participants. It was elicited by a nociceptive-specific electrode placed over the entry zone of the right supraorbital nerve, infraorbital nerve, and mental nerve, as well as the left infraorbital nerve. The outcomes were (1) nBR measurements: (a) individual electrical sensory threshold (I0) and pain threshold (IP); (b) root mean square (RMS), area under the curve (AUC), and onset latencies of R2 responses; (c) stimulus-evoked pain on a 0 to 10 numeric rating scale (NRS); and (2) the ASI-3, the FPQ-III, the PVAQ, the SSAS, the PCS, and the S-PCS. Pearson correlation coefficient was used to evaluate the association between the means of nBR measurements from all sites and the questionnaires. The significance level was set after a Bonferroni correction (adjusted α = .8%). Results: There was no correlation for any pair of variables at the adjusted significance level (P > .008). There was only a single significant correlation at the standard significance level (P < .05), where the pain intensity (NRS) at 50% of IP presented a positive and small to moderate correlation with the PCS (r = 0.43, P = .04). Conclusion: It appears that the nBR and its associated psychophysical measures are not associated with psychological factors in healthy participants. abstract_id: PUBMED:28074292 Reliability of the nociceptive blink reflex evoked by electrical stimulation of the trigeminal nerve in humans. Objective: The nociceptive blink reflex (nBR) can be useful to investigate trigeminal nociceptive function. The aim of this study was to estimate the reliability of the nBR evoked by electrical stimulation of the three branches of the trigeminal nerve under the following conditions: over time (test-retest and intrarater reliability) and by two examiners (interrater reliability). Materials And Methods: Twenty-one healthy participants were evaluated in two sessions (24 h apart). The nBR was elicited by a so-called "nociceptive-specific" electrode placed over the entry zone of the right supraorbital (V1R), infraorbital (V2R), mental (V3R), and left infraorbital (V2L) nerve. The outcomes were individual electrical sensory (I0) and pain thresholds (IP); root mean square (RMS), area-under-the-curve (AUC), and onset latencies of R2 responses (determined twice after a recalibration session); and stimulus-evoked pain on a 0-10 numerical rating scale. Intraclass correlation coefficients (ICCs) and Kappa statistics were computed (α = 5%). Results: ICCs were fair to excellent in 82% of the psychophysical measures (fair 21%, good 31%, excellent 30%) and in 86% of V1R, V2R, and V2L nBR parameters, whereas 52% of V3R showed poor reliability. ICCs for intrarater reliability were fair to good in 70% of measurements (fair 20%, good 50%) and in 75% of interrater measurements after the recalibration (fair 55%, good 20%).
All kappa values showed at least fair agreement and the majority of the nBR measures (93%) presented moderate to excellent reliability. Conclusion: The nBR and its associated psychophysical measures can be considered a sufficiently reliable test. Clinical Significance: The nBR can be recommended as an electrophysiological technique to assess trigeminal nociceptive function. abstract_id: PUBMED:31236734 Feasibility and reliability of intraorally evoked "nociceptive-specific" blink reflexes. Objectives: The "nociceptive-specific" blink reflex (nBR) evoked by extraoral stimulation has been used to assess trigeminal nociceptive processing in patients with trigeminal nerve damage regardless of the site of damage. This study aimed to test the feasibility of nBR elicited by intraoral stimulation, compare intraoral and extraoral nBR and assess the intrarater and interrater reliability of the intraoral nBR for the maxillary (V2) and mandibular (V3) branches of the trigeminal nerve. Materials And Methods: In 17 healthy participants, nBR was elicited by stimulation of two extraoral and two intraoral sites by two operators and repeated intraorally by one operator. Main outcome variables were intraoral stimulus-evoked pain scores and nBR R2 responses at different stimulus intensities. Intraclass correlation coefficients (ICC) were used to assess reliability. Results: Dependent on the stimulus intensity, intraoral stimulation evoked R2 responses in up to 12/17 (70.6%) participants for V2 and up to 8/17 (47.1%) participants for V3. Pain scores (p < 0.003) and R2 responses (p < 0.004) increased with increasing intensities for V2, but not V3. The R2 responses were significantly smaller with intraoral stimulation compared to extraoral stimulation (p < 0.014). Overall, ICCs were fair to excellent for V2 but poor for V3. Conclusion: Intraorally evoked nBR was feasible in a subset of healthy participants and was less responsive than nBR with extraoral stimulation. The V2 nBR showed better reliability than V3. Clinical Relevance: The nBR can be used to assess nerve damage to the maxillary intraoral regions, though other measures may need to be considered for the mandibular intraoral regions. abstract_id: PUBMED:24666216 Cervical referral of head pain in migraineurs: effects on the nociceptive blink reflex. Objective: To investigate cervical, interictal reproduction of usual head pain and its effect on the nociceptive blink reflex in migraineurs. Background: Anatomical and neurophysiological studies in animals and humans have confirmed functional convergence of trigeminal and cervical afferent pathways. Migraineurs often present with occipital and neck symptoms, and cervical pain is referred to the head in most cases, suggesting that cervical afferent information may contribute to headache. Furthermore, the effectiveness of greater occipital nerve blockade in migraine and demonstrable modulation of trigeminal transmission following greater occipital nerve blockade suggest an important role for cervical afferents in migraine. However, to what extent cervical afferents contribute actively to migraine is still unknown. Methods: The passive accessory intervertebral movements of the atlanto-occipital and C2-3 spinal segments of 15 participants (14 females, 1 male; age 24-44 years, mean age 33.3 years) with migraine were examined interictally.
During 1 session, either the atlanto-occipital or C2-3 segment was examined, resulting in referred usual head pain, while in another session, pressure was applied over the common extensor origin (lateral epicondyle of the humerus) of the ipsilateral arm. Each intervention was repeated 4 times. The nociceptive blink reflex to a supraorbital electrical stimulus was elicited ipsilaterally during both sessions before and during each intervention. The main outcome variables were the number of recorded blinks, area under the curve and latencies of the R2 components of the nociceptive blink reflex. Participants also rated the intensity of referred head pain and the supraorbital stimulus on a scale of 0-10, where 0 = "no pain" and 10 = "intolerable pain," and rated the intensity of applied pressure where 0 = "pressure but no pain" and 10 = "intolerable pain." Results: Participants reported a significant reduction in local tenderness ratings across the 4 trials for the cervical intervention but not for the arm (P = .005). The cervical intervention evoked head pain in all participants. As the cervical intervention was sustained, head pain decreased significantly from the beginning to the end of each trial (P = .000) and from the beginning of the first trial to the end of the last (P = .000). Pain evoked by the supraorbital stimulus was consistent from baseline to across the 4 trials (P = .635) and was similar for the cervical and arm interventions (P = .072). The number of blinks decreased significantly across the experiment (P = .000) and was comparable in the cervical and arm interventions (P = .624). While the R2 area under the curve decreased irrespective of intervention (P = .000), this reduction was significantly greater for the cervical intervention than when pressure was applied to the arm (P = .037). Analysis of the R2 latencies revealed a notable increase across the experiment (P = .037). However, this increase was significantly greater following the cervical than arm intervention (P = .012). Conclusions: Our findings corroborate previous results related to anatomical and functional convergence of trigeminal and cervical afferent pathways in animals and humans, and suggest that manual cervical modulation of this pathway is of potential benefit in migraine. abstract_id: PUBMED:16458070 Dissociation between pain and the nociceptive blink reflex during psychological arousal. Objective: To investigate the effect of psychological arousal on pain ratings and the R2 component of the electrically evoked blink reflex to a 'pure' noiciceptive stimulus. Methods: Pain ratings and R2 to a noiciceptive stimulus (pulse width 0.3ms, 2mA, delivered from a concentric electrode attached to the supraorbital region of the forehead) were investigated in 16 healthy participants before and during a serial subtraction task, and in 16 control participants who sat quietly during nociceptive stimulation. Results: Pain ratings decreased whereas R2 amplitude increased during the serial subtraction task. Conclusions: Supra-spinal rather than spinal mechanisms inhibited pain perception during psychological arousal. Moreover, psychological arousal facilitated the R2 component of the blink reflex to a nociception-specific stimulus. Significance: Supra-spinal influences need to be considered during clinical evaluation of the trigeminal nociceptive blink reflex. abstract_id: PUBMED:24936654 Effects of visual cortex activation on the nociceptive blink reflex in healthy subjects. 
Bright light can cause excessive visual discomfort, referred to as photophobia. The precise mechanisms linking luminance to the trigeminal nociceptive system supposed to mediate this discomfort are not known. To address this issue in healthy human subjects we modulated differentially visual cortex activity by repetitive transcranial magnetic stimulation (rTMS) or flash light stimulation, and studied the effect on supraorbital pain thresholds and the nociceptive-specific blink reflex (nBR). Low frequency rTMS, which inhibits the underlying cortex, significantly decreased pain thresholds, increased the 1st nBR block ipsi- and contralaterally and potentiated habituation contralaterally. After high frequency or sham rTMS over the visual cortex, and rMS over the right greater occipital nerve we found no significant change. By contrast, excitatory flash light stimulation increased pain thresholds, decreased the 1st nBR block ipsi- and contralaterally and increased habituation contralaterally. Our data demonstrate in healthy subjects a functional relation between the visual cortex and the trigeminal nociceptive system, as assessed by the nociceptive blink reflex. The results argue in favour of a top-down inhibitory pathway from the visual areas to trigemino-cervical nociceptors. We postulate that in normal conditions this visuo-trigeminal inhibitory pathway may avoid disturbance of vision by too frequent blinking and that hypoactivity of the visual cortex for pathological reasons may promote headache and photophobia. abstract_id: PUBMED:34147564 Conditioned Pain Modulation: Comparison of the Effects on Nociceptive and Non-nociceptive Blink Reflex. Although conditioned pain modulation (CPM) is considered to represent descending pain inhibitory mechanisms triggered by noxious stimuli applied to a remote area, there have been no previous studies comparing CPM between pain and tactile systems. In this study, we compared CPM between the two systems objectively using blink reflexes. Intra-epidermal electrical stimulation (IES) and transcutaneous electrical stimulation (TS) were applied to the right skin area over the supraorbital foramen to evoke a nociceptive or a non-nociceptive blink reflex, respectively, in 15 healthy males. In the test session, IES or TS were applied six times and subjects reported the intensity of each stimulus on a numerical rating scale (NRS). Blink reflexes were measured using electromyography (R2). The first and second sessions were control sessions, while in the third session, the left hand was immersed in cold water at 10 °C as a conditioning stimulus. The magnitude of the R2 blink and NRS scores were compared among the sessions by 2-way ANOVA. Both the NRS score and nociceptive R2 were significantly decreased in the third session for IES, with a significant correlation between the two variables; whereas TS-induced non-nociceptive R2 did not change among the sessions. Although the conditioning stimulus decreased the NRS score for TS, the CPM effect was significantly smaller than that for IES (p = 0.002). The present findings suggest the presence of a pain-specific CPM effect to a heterotopic noxious stimulus. abstract_id: PUBMED:35819340 COVID-19 can cause blink reflex abnormalities. Background And Purpose: Neurological symptoms and complications associated with coronavirus disease 2019 (COVID-19) are well known. It was aimed to evaluate the brainstem and trigeminal/facial nerves and the pathways between these structures in COVID-19 using the blink reflex test.
Methods: Thirty patients with post COVID-19 (16 males, 14 females) and 30 healthy individuals (17 males, 13 females) were included in this prospective study. Individuals who previously had a positive nose swab polymerase chain reaction test for severe acute respiratory syndrome coronavirus 2 and whose previous clinical features were compatible with COVID-19 were included in the post COVID-19 patient group. Neurological examination of the participants had to be normal. Blink reflex test was performed on all participants. R1, ipsilateral R2 (IR2), and contralateral R2 (CR2) waves obtained from the test were analyzed. Results: The mean ages of healthy individuals and post COVID-19 patients were 34.0±6.4 and 38.4±10.6 years, respectively. Both age and gender were matched between the groups. R1, IR2, and CR2 latencies/amplitudes were not different between the two groups. The side-to-side R1 latency difference was 0.5±0.3 and 1.0±0.8 ms in healthy individuals and post COVID-19 patients, respectively (p=0.011). One healthy individual and 12 patients with post COVID-19 had at least one abnormal blink reflex parameter (p=0.001). Conclusion: This study showed that COVID-19 may cause subclinical abnormalities in the blink reflex, which includes the trigeminal nerve, the seventh nerve, the brainstem, and pathways between these structures. abstract_id: PUBMED:26449227 Nociception-specific blink reflex: pharmacology in healthy volunteers. Background: The physiology and pharmacology of activation or perception of activation of pain-coding trigeminovascular afferents in humans is fundamental to understanding the biology of headache and developing new treatments. Methods: The blink reflex was elicited using a concentric electrode and recorded in four separate sessions, at baseline and two minutes after administration of ramped doses of diazepam (final dose 0.07 mg/kg), fentanyl (final dose 1.11 μg/kg), ketamine (final dose 0.084 mg/kg) and 0.9 % saline solution. The AUC (area under the curve, μV*ms) and the latency (ms) of the ipsi- and contralateral R2 component of the blink reflex were calculated by PC-based offline analysis. Immediately after each block of blink reflex recordings certain psychometric parameters were assessed. Results: There was an effect due to DRUG on the ipsilateral (F(3,60) = 7.3, P < 0.001) AUC as well as on the contralateral (F(3,60) = 6.02, P < 0.001) AUC across the study. A significant decrement in comparison to placebo was observed only for diazepam, affecting the ipsilateral AUC. The scores of alertness, calmness, contentedness, reaction time and precision were not affected by the DRUG across the sessions. Conclusion: Previous studies suggest central, rather than peripheral changes in nociceptive trigeminal transmission in migraine. This study demonstrates a robust effect of benzodiazepine receptor modulation of the nociception specific blink reflex (nBR) without any μ-opiate or glutamate NMDA receptor component. The nociception specific blink reflex offers a reproducible, quantifiable method of assessment of trigeminal nociceptive system in humans that can be used to dissect pharmacology relevant to primary headache disorders. abstract_id: PUBMED:11877513 Nociceptive quality of the laser-evoked blink reflex in humans.
Laser radiant-heat pulses selectively excite the free nerve endings in the superficial layers of the skin and activate mechano-thermal nociceptive afferents; when directed to the perioral or supraorbital skin, high-intensity laser pulses evoke a blink-like response in the orbicularis oculi muscle (the laser blink reflex, LBR). We investigated the functional properties (startle or nociceptive origin) of the LBR and sought to characterize its central pathways. Using high-intensity CO(2)-laser stimulation of the perioral or supraorbital regions and electromyographic (EMG) recordings from the orbicularis oculi muscles, we did five experiments in 20 healthy volunteers. First, to investigate whether the LBR is a startle response, we studied its habituation to expected rhythmic stimuli and to unexpected arrhythmic stimuli. To assess its possible nociceptive quality, we studied changes in the LBR and the R2 component of the electrical blink reflex after a lidocaine-induced supraorbital nerve block and after intramuscular injection of the opiate fentanyl and the opiate-antagonist naloxone. To characterize the central pathways for the LBR, we investigated the interaction between the LBR and the three components of the blink reflex (R1, R2, and R3) by delivering laser pulses to the perioral or supraorbital regions before or after electrical stimulation of the supraorbital nerve at various interstimulus intervals. Finally, to gain further information on the central LBR pathways, using two identical CO(2)-laser stimulators, we studied the LBR recovery curves with paired laser pulses delivered to adjacent forehead points at interstimulus intervals from 250 ms to 1.5 s. The LBR withstood relatively high-frequency rhythmic stimulations, and unexpected laser pulses failed to evoke larger responses. When lidocaine began to induce hypoalgesia (about 5 min after the injection), the LBR was abolished, whereas R2 was only partly suppressed 10 min after the injection. Fentanyl injection induced strong, naloxone-reversible, LBR suppression (the response decreased to 25.3% of predrug values at 10 min and to 4% at 20 min), whereas R2 remained appreciably unchanged. Whether directed to the perioral or supraorbital regions, preceding laser pulses strongly suppressed R2 and R3 though not R1. Conversely, preceding electrical stimuli to the supraorbital nerve suppressed the LBR. In response to paired stimuli, the LBR recovered significantly faster than R2. These findings indicate that the LBR is a nociceptive reflex, which shares part of the interneuron chain mediating the nonnociceptive R2 blink reflex, probably in the medullary reticular formation. The LBR may prove useful for studying the pathophysiology of orofacial pain syndromes. Answer: Based on the findings from the study with PUBMED:27128475, the nociceptive blink reflex (nBR) and its associated psychophysical measures do not appear to be associated with psychological factors in healthy participants. The study evaluated the possible association between the nBR and various pain-related psychological measures, including the Anxiety Sensitivity Index-3 (ASI-3), the Fear of Pain Questionnaire III (FPQ-III), the Pain Vigilance and Awareness Questionnaire (PVAQ), the Somatosensory Amplification Scale (SSAS), the Pain Catastrophizing Scale (PCS), and the Situational Pain Catastrophizing Scale (S-PCS). 
The results showed no correlation for any pair of variables at the adjusted significance level, with only a single significant correlation at the standard significance level where the pain intensity at 50% of the pain threshold presented a positive and small to moderate correlation with the PCS. Therefore, the conclusion was that the nBR and its associated psychophysical measures are not associated with psychological factors in healthy participants.
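The negative finding above rests on Pearson correlations evaluated against a Bonferroni-adjusted significance level (0.05 divided by the six questionnaires, roughly .008). A minimal sketch of that kind of analysis is shown below; the nBR measure and questionnaire scores are randomly generated stand-ins, not data from the cited study.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_participants = 21

# Made-up stand-ins for the mean nBR measure and six questionnaire scores.
nbr_measure = rng.normal(size=n_participants)
questionnaires = {
    name: rng.normal(size=n_participants)
    for name in ["ASI-3", "FPQ-III", "PVAQ", "SSAS", "PCS", "S-PCS"]
}

# Bonferroni correction: divide the nominal alpha by the number of comparisons.
alpha = 0.05
adjusted_alpha = alpha / len(questionnaires)  # 0.05 / 6 ~= 0.0083, i.e. about 0.8%

for name, scores in questionnaires.items():
    r, p = pearsonr(nbr_measure, scores)
    flag = "significant" if p < adjusted_alpha else "not significant"
    print(f"{name}: r = {r:+.2f}, p = {p:.3f} ({flag} at adjusted alpha {adjusted_alpha:.4f})")
```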
Instruction: Is hypertension changing? Abstracts: abstract_id: PUBMED:35253285 Broadening risk factor or disease definition as a driver for overdiagnosis: A narrative review. Medical overuse-defined as the provision of health services for which potential harms exceed potential benefits-constitutes a paradigm of low-value care and is seen as a threat to the quality of care. Value in healthcare implies a precise definition of disease. However, defining a disease may not be straightforward since clinical data do not show discrete boundaries, calling for some clinical judgment. And, if in time a redefinition of disease is needed, it is important to recognize that it can induce overdiagnosis, the identification of medical conditions that would, otherwise, never cause any significant symptoms or lead to clinical harm. A classic example is the impact of recommendations from professional societies in the late 1990s, lowering the threshold for abnormal total cholesterol from 240 mg/dl to 200 mg/dl. Due to these changes in risk factor definition, literally overnight there were 42 million new cases eligible for treatment in the United States. The same happened with hypertension-using either the 2019 NICE guidelines or the 2018 ESC/ECC guidelines criteria for arterial hypertension, the proportion of people overdiagnosed with hypertension was calculated to be between 14% and 33%. In this review, we will start by discussing resource overuse. We then present the basis for disease definition and its conceptual problems. Finally, we will discuss the impact of changing risk factor/disease definitions in the prevalence of disease and its consequences in overdiagnosis and overtreatment (a problem particularly relevant when definitions are widened to include earlier or milder disease). abstract_id: PUBMED:12420190 Hypertension and the eye: changing perspectives. Systemic hypertension is a common condition associated with significant morbidity and mortality. Hypertension confers cardiovascular risk by causing target-organ damage that includes retinopathy in addition to heart disease, stroke, renal insufficiency and peripheral vascular disease. The recognition of hypertensive retinopathy is important in cardiovascular risk stratification of hypertensive individuals. This review reevaluates the changing perspectives in the pathophysiology, classification and prognostic significance of fundal lesions in hypertensives. abstract_id: PUBMED:10981109 Behavior-changing methods for improving adherence to medication. Long-term adherence to antihypertensive drug therapy is poor, and new strategies to predict and improve adherence to prescribed drug regimens are needed. The literature on behavior change is reviewed, and a new perspective on medication adherence is presented. Successfully adopting and continuing with a long-term medication regimen requires behavior change, and behavior change principles can be used to accelerate the adoption of adherence to medication-taking behavior. The efficacy of behavior-changing interventions, which are tailored to each patient's stage of change, has been demonstrated in several health behavior areas. Rewards, monitoring devices, and reminder techniques are most useful for individuals in later stages of behavior change, but individuals in earlier stages need consciousness-raising interventions that focus upon awareness of the benefits of therapy.
Recent research has yielded reliable ways to measure the stage of change for medication adherence, providing the foundation for the application of behavior-changing principles to the pharmacologic management of hypertension. abstract_id: PUBMED:28748919 Changing concepts in hypertension management. Hypertension is the most common modifiable risk factor for cardiovascular disease and death, and lowering blood pressure with anti-hypertensive drugs reduces target organ damage and prevents cardiovascular disease outcomes. The recent trials SPRINT and HOPE-3 will lead to changes in the way we manage hypertension and impact on clinical practice guidelines. These studies also demonstrate the shift toward automated blood pressure measurements. We have reviewed these studies and others to put them in context with the guidelines that have come before and to describe how they will impact on hypertension treatment thresholds and targets, the treatment of hypertension in the elderly, and changing approaches to the management of hypertension including resistant hypertension. abstract_id: PUBMED:7919557 Cough induced by quinapril with resolution after changing to fosinopril. Objective: To report a case of chronic, nonproductive cough secondary to the angiotensin-converting enzyme (ACE) inhibitor quinapril, with complete resolution after switching to another ACE inhibitor, fosinopril. Data Sources: All relevant articles from January 1985 through February 1993 were identified, primarily through MEDLINE search and review of pertinent articles' bibliographies. Case Summary: A 68-year-old woman developed a dry, irritating cough within one month of starting quinapril therapy for the treatment of essential hypertension. The patient was a nonsmoker with no respiratory illnesses. The cough continued for the duration of therapy with quinapril. One month after changing to fosinopril therapy, the patient reported complete resolution of the cough. She remains cough-free to date. Discussion: Cough induced by ACE inhibitors is a frequently documented adverse effect. It is severe enough to require discontinuation of therapy in 1-10 percent of patients. The cough is considered to be a class-related adverse effect with cross-reactions between ACE inhibitors routinely reported. At this time, changing to another ACE inhibitor or additive therapy with nonsteroidal antiinflammatory drugs is not recommended. Discontinuation of the ACE inhibitor results in rapid alleviation of the cough, although this is not always necessary, as most patients may experience a cessation or decrease in cough. We report a case of cough following the administration of quinapril, with complete resolution after changing to the alternative ACE inhibitor fosinopril in a patient with essential hypertension. Conclusions: Cough has been encountered commonly after the administration of ACE inhibitors. Frequency of cough is variable and although this complication has been described as a class effect, patients with a persistent, severe ACE inhibitor-induced cough may benefit from a trial of fosinopril therapy. This may be particularly useful in patients unable to tolerate an alternative class of antihypertensive agents. abstract_id: PUBMED:21334012 Changing trends in emergency coronary bypass surgery. Background: Patients undergoing emergency coronary artery bypass grafting represent a unique and high-risk population that remains challenging for cardiac surgeons.
We examined the changing trends in patients undergoing emergency bypass grafting over the past 20 years. Methods: We conducted a retrospective review of our database between 1990 and 2009 and patients were divided into 2 groups based on year of operation: 1990-1999, n = 393; 2000-2009, n = 184. The primary outcomes of interest for this study are operative mortality and incidence of low cardiac output syndrome. Results: The percentage of patients undergoing emergency coronary bypass grafting has decreased from 2.7% to 1.7% over time. The percentage of patients with dyslipidemia, hypertension, triple vessel disease, peripheral vascular disease, and left main disease increased over time (P < .05). Operative mortality remained at 8.1% in both year groups. Preoperative hypertension, congestive heart failure, left ventricular ejection fraction less than 20%, and previous cardiac surgery independently predicted operative mortality by logistic regression analysis. Low cardiac output syndrome developed in 25% of the patient population undergoing emergency bypass grafting. The independent predictors of low cardiac output syndrome were small body surface area, congestive heart failure, shock, myocardial infarction, earlier decade (1990-1999) and increased age. Conclusions: Despite a changing preoperative risk profile, the operative mortality of emergency coronary artery bypass grafting has remained stable over the years. However, mortality remains significantly above the observed mortality in elective bypass grafting. Continued improvements in the management of heart failure and the care of the elderly will likely result in reduced risks of emergency coronary artery bypass grafting. abstract_id: PUBMED:3341838 Changing concepts in surgical management of renovascular hypertension. As newer surgical techniques and concepts have emerged, including revascularization of the totally occluded renal artery and alternatives to aortorenal bypass (hepatic, splenic, or iliac artery to renal artery grafts), our patient population has changed. Patients with diffuse atherosclerotic disease, bilateral renal artery stenosis, totally occluded renal arteries, and azotemia are being referred for renal revascularization, thereby changing the indications for operation and the results that can be anticipated. Although our results in patients operated on solely for uncontrollable hypertension or renal failure have been successful, much work needs to be done to improve the results obtained when patients have a combination of uncontrollable hypertension and renal failure. abstract_id: PUBMED:35773841 The Effects of Health Behavior Changing Program for Systolic Blood Pressure Reduction Among Thai Buddhist Monks, Thailand. Thai Buddhist monks' lifestyle has made them likely to get non-communicable diseases. Therefore, the Ministry of Public Health of Thailand has conducted a health behavior-changing program for non-communicable diseases prevention among Thai Buddhist monks. This study aims to examine the effectiveness of the health behavior-changing program for non-communicable diseases prevention among 4,786 Thai Buddhist monks who were in the risk group. They were on the program for 6 months, from January 1st, 2021 to June 30th, 2021. Descriptive statistics were used to describe the characteristics of the subjects and Paired t-test was used to compare the mean difference. The results showed that the health behavior-changing program can reduce Fasting Blood Sugar, Body Mass Index, Risk score, Hypertension, and Smoking scores.
Therefore, this program should be used for reducing risk factors of non-communicable diseases among Thai monks in the Upper Northeast region of Thailand. abstract_id: PUBMED:17285035 Research on changing practitioner and patient behavior. The Institute of Medicine's report "Crossing the Quality Chasm" (2001) indicates that it takes 17 years, on average, for knowledge generated by randomized trials to be adopted in practice if there is no effective intervention in place to promote the evidence. Influencing provider behavior is a formidable task, and many strategies traditionally used, such as unsolicited distribution of consensus recommendations or guidelines, and traditional didactic lectures and seminars, have generally been proven to be ineffective in changing clinical practices. The summaries that follow provide an overview of recently published, research-based works on strategies to change clinician and/or patient behavior. Two articles focus on improving hypertension management and two focus on improving heart failure management, conditions that are seen at very high levels in home care. We hope that the following summaries will stimulate some thinking about strategies that can be tried in home care organizations to promote the adoption of evidence-based care. For additional details about the intervention and research findings, we suggest that our readers refer to the original articles. abstract_id: PUBMED:22574293 The changing faces of diabetes, hypertension and arthritis in a Caribbean population. Background: Globally, chronic illnesses are the leading cause of mortality, and this is no different in developing countries, particularly in the Caribbean. Little information emerged in the literature on the changing faces of particular self-reported chronic diseases. Aims: This study examines the transitions in the demographic characteristics of those with diabetes, hypertension and arthritis, as we hypothesized that there are changing faces of those with these illnesses. Materials And Methods: A sample of 592 respondents was drawn from the 2002 and 2007 Jamaica Survey of Living Conditions. Only respondents who indicated that they were diagnosed with these particular chronic conditions were used for the analysis. Results: The prevalence of particular chronic diseases increased from 8 per 1,000 in 2002 to 56 per 1,000 in 2007. The average annual increase in particular chronic diseases was 17.2%. Diabetes mellitus showed an exponential average annual increase of 185% compared to hypertension (+ 12.7%) and arthritis (- 3.8%). Almost 5 percent of diabetics were less than 30 years of age (2.4% less than 15 years), and 41% less than 59 years. Three percent of hypertensive respondents were 30 years and under as well as 2% of arthritics. Conclusion: The demographic transition in particular chronic conditions now demands that data collection on those illnesses be lowered to < 15 years. This research highlights the urgent need for a diabetes campaign that extends beyond parents to include vendors, confectionery manufacturers and government, in order to address the tsunami of chronic diseases facing the nation. Answer: Yes, hypertension is changing in various aspects, including its management, prevalence, and the understanding of its impact on health. Firstly, the management of hypertension has evolved over time with new clinical trials and guidelines influencing treatment thresholds and targets.
Studies like SPRINT and HOPE-3 have led to changes in clinical practice, including the shift toward automated blood pressure measurements and different approaches to treatment, such as in the elderly and those with resistant hypertension (PUBMED:28748919). Secondly, the prevalence of hypertension and the criteria for its diagnosis have also changed. For instance, the redefinition of disease and risk factor definitions, such as the lowering of the threshold for abnormal total cholesterol and the criteria for arterial hypertension, has led to an increase in the number of individuals diagnosed with hypertension (PUBMED:35253285). This redefinition has implications for overdiagnosis and overtreatment, particularly when definitions are widened to include earlier or milder forms of the disease. Additionally, the demographic characteristics of individuals with hypertension are changing. There has been a noted transition in the age and other demographic factors of those diagnosed with hypertension, with younger individuals now being affected (PUBMED:22574293). Moreover, the understanding of hypertension's impact on health has expanded. It is recognized as a common condition with significant morbidity and mortality, causing target-organ damage such as hypertensive retinopathy, which is important for cardiovascular risk stratification (PUBMED:12420190). Furthermore, the surgical management of renovascular hypertension has evolved with new techniques and concepts, leading to changes in the patient population being referred for renal revascularization (PUBMED:3341838). Lastly, efforts to improve medication adherence through behavior-changing interventions have been explored, recognizing that long-term adherence to antihypertensive drug therapy is often poor and that tailored interventions can be effective in improving adherence (PUBMED:10981109). In summary, hypertension is indeed changing in terms of its management, prevalence, diagnosis criteria, demographic characteristics of affected individuals, and the understanding of its broader health implications.
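One quantitative point in the overdiagnosis abstract (PUBMED:35253285) is that simply lowering a diagnostic threshold reclassifies a large share of the population as diseased. The sketch below illustrates the arithmetic with an assumed normal distribution of systolic blood pressure; the distribution parameters and thresholds are illustrative assumptions, not figures from the cited studies.

```python
from scipy.stats import norm

# Assumed population distribution of systolic blood pressure (mmHg) - illustrative only.
mean_sbp, sd_sbp = 125.0, 15.0

old_threshold = 140.0  # conventional hypertension cut-off
new_threshold = 130.0  # lowered cut-off, as in some recent guidelines

# Fraction of the population above each threshold (survival function = 1 - CDF).
prev_old = norm.sf(old_threshold, loc=mean_sbp, scale=sd_sbp)
prev_new = norm.sf(new_threshold, loc=mean_sbp, scale=sd_sbp)

print(f"prevalence with {old_threshold:.0f} mmHg cut-off: {prev_old:.1%}")
print(f"prevalence with {new_threshold:.0f} mmHg cut-off: {prev_new:.1%}")
print(f"newly labelled hypertensive: {prev_new - prev_old:.1%} of the population")
```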
Instruction: Is obesity predictive of cardiovascular dysfunction independent of cardiovascular risk factors? Abstracts: abstract_id: PUBMED:24957486 Is obesity predictive of cardiovascular dysfunction independent of cardiovascular risk factors? Introduction: Obesity is thought to exert detrimental effects on the cardiovascular (CV) system. However, this relationship is impacted by the co-occurrence of CV risk factors, type 2 diabetes (T2DM) and overt disease. We examined the relationships between obesity, assessed by body mass index (BMI) and waist circumference (WC), and CV function in 102 subjects without overt CV disease. We hypothesized that obesity would be independently predictive of CV remodeling and functional differences, especially at peak exercise. Methods: Brachial (bSBP) and central (cSBP) systolic pressure, carotid-to-femoral pulse wave velocity (PWVcf), augmentation index (AGI; by SphygmoCor), and carotid remodeling (B-mode ultrasound) were examined at rest. Further, peak exercise cardiac imaging (Doppler ultrasound) was performed to measure the coupling between the heart and arterial system. Results: In backward elimination regression models, accounting for CV risk factors, neither BMI nor WC were predictors of carotid thickness or PWVcf; rather age, triglycerides and hypertension were the main determinants. However, BMI and WC predicted carotid cross-sectional area and lumen diameter. When examining the relationship between body size and SBP, BMI (β=0.32) and WC (β=0.25) were predictors of bSBP (P<0.05), whereas BMI was the only predictor of cSBP (β=0.22, P<0.05), indicating a differential relationship between cSBP, bSBP and body size. Further, BMI (β=-0.26) and WC (β=-0.27) were independent predictors of AGI (P<0.05). As for resting cardiac diastolic function, WC seemed to be a better predictor than BMI. However, both BMI and WC were inversely and independently related to arterial-elastance (net arterial load) and end-systolic elastance (cardiac contractility) at rest and peak exercise. Conclusion: These findings illustrate that obesity, without T2DM and overt CV disease, and after accounting for CV risk factors, is susceptible to pathophysiological adaptations that may predispose individuals to an increased risk of CV events. abstract_id: PUBMED:19721985 Cardiovascular risk factors in young people Cardiovascular disease (CVD) involves several disorders related to the formation and development of atherosclerotic processes. Several risk factors are involved in CVD aetiology; some of them (i.e. age, hypertension, obesity, dyslipidemia and diabetes) have been clearly associated, whereas others have a variable level of association. An increase in cardiovascular risk factors has been recently reported in the young population; studies of cardiovascular risk factors in this population have shown that its cardiovascular risk profile could be different from that presented by older populations. This review presents a summary of reported cardiovascular risk factors in the young population and their causes which have been released and indexed in different databases. Most factors discussed are life-habit risk factors and represent direct targets for clinical intervention. We propose that primary CVD prevention should include a more detailed knowledge of the nature of the risk factors concerning the young population and could have a positive impact on CVD prevalence during the next few years.
abstract_id: PUBMED:26482327 Erectile dysfunction and cardiovascular risk factors in a Mediterranean diet cohort. Background: Erectile dysfunction affects more than 100 million men worldwide, with a wide variability in prevalence. An overall association of cardiovascular risk factors, lifestyle and diet in the context of ED in a Mediterranean population is lacking. Aim: To assess ED prevalence and associated factors in a Mediterranean cohort of non-diabetic patients with cardiovascular risk factors. Methods: Observational, cross-sectional study of patients aged over 40 treated at cardiovascular risk units in Catalonia. Anthropometric data, risk factors, lifestyle and diet habits were recorded. ED was assessed using the International Index of Erectile Function. Results: Four hundred and forty patients were included, 186 (42.3%) with ED (24.8% mild, 6.8% moderate and 10.7% severe). ED presence and severity were associated with age, obesity, waist circumference, hypertension, antihypertensive treatment and ischaemic disease. Patients with ED were more frequently smokers, sedentary and consumed more alcohol. In multivariate analysis, consumption of nuts (> twice a week) (OR 0.41 (95% CI 0.25 to 0.67)) and vegetables (≥ once a day) (OR 0.47 (95% CI 0.28-0.77)) were inversely related to ED. Obesity (as BMI ≥ 30 kg/m(2)) (OR 2.49 (95% CI 1.48-4.17)), ischaemic disease (OR 2.30 (95% CI 1.22 to 4.33)), alcohol consumption (alcohol-units a day) (OR 1.14 (95% CI 1.04 to 1.26)), and age (year) (OR = 1.07 (95% CI 1.04-1.10)) were directly related to ED. Conclusion: Erectile dysfunction is a common disorder in patients treated in lipid units in Catalonia for cardiovascular risk factors. This condition is associated with age, obesity, ischaemic disease and unhealthy lifestyle habits. abstract_id: PUBMED:12089988 Cardiovascular risk factors in the elderly The paper presents a review of recent findings about cardiovascular risk factors in the elderly. It becomes important to know that the cardiovascular risk factors we look for in adulthood change in elderly patients. However, we consider that the cardiovascular risk factors in the elderly remain hypertension, smoking status, hypercholesterolemia, diabetes mellitus and obesity. Recent studies have shown that high levels of cholesterol, as well as smoking, are found much less often in the elderly than in adults. The elderly have specific risk factors: high levels of iron and basic tachycardia. Other possible risk factors are: high levels of homocysteine, low plasmatic levels of HDL-cholesterol, high levels of lipoprotein-A and some coagulation factors. abstract_id: PUBMED:24192101 Psoriasis and cardiovascular disease Psoriasis is a common, chronic and systemic inflammatory disease associated with several comorbidities, such as obesity, hypertension, diabetes, dyslipidaemia and metabolic syndrome, but also with an increased risk of cardiovascular disease, like myocardial infarction or stroke. The chronic inflammatory nature of psoriasis has been suggested to be a contributing and potentially independent risk factor for the development of cardiovascular comorbidities and precocious atherosclerosis. Aiming at alerting clinicians to the need of screening and monitoring cardiovascular diseases and its risk factors in psoriatic patients, this review will focus on the range of cardiometabolic comorbidities and increased risk of cardiovascular disease associated with psoriasis.
abstract_id: PUBMED:20077876 Cardiovascular risk factors in climacteric Background: Ischemic heart disease is the second leading killer of women in Mexico, regardless of age group. The incidence of cardiovascular events increases after menopause, and depends on the prevalence and accumulation of risk factors. Objective: To determine the prevalence of cardiovascular risk factors in a population of Mexican women who receive care in a menopause clinic. Methods: Cross-sectional study included 308 women. Sociodemographic characteristics, somatometric data, family history of cardiovascular risk factors, and biochemical variables (blood glucose and lipid profile) were collected. Women were classified into two groups: pre- and postmenopause, the latter being subdivided according to time since menopause: less than three years and more than three years. Results: Two hundred six (66.7%) women had a positive family history. We identified 123 (39.9%) premenopausal women, mean age 46.4 +/- 3.2 years, and 185 (60.1%) postmenopausal women, mean age 50.5 +/- 3.2 years. We found no differences in blood pressure, blood glucose, or somatometric features. The levels of total cholesterol (TC) and cholesterol of low density lipoprotein (LDL-C) were significantly higher in the group of postmenopausal women. It was noted that total cholesterol and triglycerides increased with age regardless of hormonal status. Hypercholesterolemia was detected in 41.5% of premenopausal patients and in 51.4% of postmenopausal women. More than half of the population studied had three or more cardiovascular risk factors. Conclusions: There is a high prevalence of cardiovascular risk factors in Mexican women, present from pre-menopause. The major modifiable factors are sedentary lifestyle, dyslipidaemia and overweight. abstract_id: PUBMED:11905401 Association between clustering of cardiovascular risk factors and the risk of cardiovascular disease Background: Cardiovascular diseases are the main cause of mortality in Spain. The aim of this work was to study the association between clustering of cardiovascular risk factors and the risk of suffering major cardiovascular events: ischemic cardiopathy, cerebrovascular disease and peripheral arteriopathy of the lower limbs. Method: A descriptive transversal study was carried out in a city health centre, with a total of 2248 patients selected by simple random sampling of the clinical records with a mean age of 15 years. The data were obtained by examining the clinical records and estimating Odds Ratios (OR) for any cardiovascular event (n = 224), ischemic cardiopathy (n = 123), cerebrovascular disease (n = 84) and peripheral arteriopathy (n = 55) in relation to the number of cardiovascular risk factors. The cardiovascular risk factors included in the study were smoking, arterial hypertension, hypercholesterolemia, hypertriglyceridemia, diabetes and obesity. The OR was adjusted for age and sex. Results: The percentage of patients with 0, 1, 2, 3 and 4-6 cardiovascular risk factors was 39.1, 32.8, 17.5, 6.9 and 3.7 respectively. The ORs for experiencing a cardiovascular event associated with 1, 2, 3 and 4-6 cardiovascular risk factors were 1.6 (CI95%: 0.9-2.7), 2.8 (CI95%: 1.7-4.7), 3.6 (CI95%: 1.9-6.5) and 5.6 (CI95%: 2.9-10.8), respectively. The ORs for ischemic cardiopathy associated with the same risk levels were 2.3 (CI95%: 1.1-4.6), 2.5 (CI95%: 1.2-5.2), 5.3 (CI95%: 2.4-11.5) and 6.2 (CI95%: 2.7-14.3), respectively.
For cerebrovascular disease, the ORs were 1.1 (CI95%: 0.5-2.5), 2.3 (CI95%: 1.2-5.3), 2.4 (CI95%: 1.0-5.9) and 5.6 (CI95%: 2.2-14.1), respectively. The ORs for peripheral arteriopathy were 2.1 (CI95%: 0.8-5.9), 3.7 (CI95%: 1.3-10.5), 3.3 (CI95%: 1.0-11.1) and 6.1 (CI95%: 1.8-20.3), respectively. Conclusions: The addition of cardiovascular risk factors is associated with an increased risk of cardiovascular events. This finding emphasises the need for prevention of cardiovascular risk factors in primary care. abstract_id: PUBMED:16387573 Lifestyle management of erectile dysfunction: the role of cardiovascular and concomitant risk factors. The influence and significance of lifestyle factors in erectile dysfunction (ED) have been demonstrated in cross-sectional and prospective, randomized, controlled trials. Recent epidemiologic studies in several countries have shown that modifiable lifestyle or risk factors, including physical activity in particular, are directly related to the occurrence of ED. In this article, we review several recent observational studies, 2 of which include a longitudinal follow-up component in the study design. The levels of physical activity in both of these studies predicted ED prevalence and incidence. Furthermore, the role of lifestyle changes (weight loss, physical activity) was recently demonstrated to be effective in modifying ED in a prospective, randomized Italian trial in moderately obese, sedentary men. Men without overt diabetes mellitus or cardiovascular disease participated in this landmark study. Other studies have shown that aggressive management of cardiovascular risk factors can increase the effectiveness or outcomes associated with pharmacologic management of ED. Taken together, these studies support the value of risk factor modification and lifestyle change in the clinical management of men with ED and concomitant cardiovascular illness. abstract_id: PUBMED:28595561 Sexual Dysfunction, Cardiovascular Risk and Effects of Pharmacotherapy. Background: Sexual dysfunction affects millions of people with an increasing prevalence, worldwide. The pathophysiology of the disease shares several similarities with cardiovascular disease (CVD), including atherosclerosis, endothelial dysfunction, structural vascular damage and subclinical inflammation. Erectile dysfunction (ED) and female sexual dysfunction are common among patients with CVD and risk factors such as hypertension, diabetes, obesity and metabolic syndrome. Given the common pathogenesis of the diseases, ED is an independent prognostic factor of future CV events. Patients with overt ED or risk factors are usually treated with several drugs for the management of these conditions. Several of these drugs have been evaluated for their effect on sexual activity. Results And Conclusion: Among the antihypertensive drugs, diuretics and beta-blockers seem to exert a detrimental impact on sexual function, with nebivolol being the only beta-blocker with favorable properties through an increase in nitric oxide bioavailability. In contrast, renin-angiotensin system inhibitors and calcium-channel blockers have a neutral effect on sexual activity. Hypoglycemic drugs have been less evaluated in the ED setting, with metformin, pioglitazone and liraglutide presenting favorable results. Statins on the other hand have not provided consistent results, with observational studies suggesting a detrimental role in sexual activity and a few randomized studies indicating a neutral or even beneficial effect on erectile function.
abstract_id: PUBMED:1751730 Cardiovascular risk factors in the working population of Alicante The aim of the present study was to evaluate the prevalence of certain cardiovascular risk factors in the population of Asepeyo-Elche. The study sample was a working population of 697 persons (71.8% males and 28.2% females) with ages ranging from 15 to 65 years (mean age 34.7 +/- 13.2 years), seen during the first 6 months of 1990 in the health care center of Asepeyo-Elche. The evaluated cardiovascular risk factors and their prevalence rates were: hypertension (14.2%), hypercholesterolemia (18.5%), smoking (52.5%), hyperglycemia (3.5%), obesity (28.2%) and sedentariness (80.3%). We conclude that in our working population the prevalence rates of some cardiovascular risk factors are high and that intervention programs to modify the diet and lifestyle are mandatory to prevent the pathogenetic consequences of these factors. Answer: The relationship between obesity and cardiovascular (CV) dysfunction is complex, as obesity is often accompanied by other CV risk factors such as type 2 diabetes (T2DM), hypertension, and dyslipidemia. However, a study examining the relationships between obesity, assessed by body mass index (BMI) and waist circumference (WC), and CV function in subjects without overt CV disease found that obesity can be independently predictive of CV remodeling and functional differences, particularly at peak exercise. In this study, after accounting for CV risk factors, neither BMI nor WC were predictors of carotid thickness or pulse wave velocity (PWVcf), but they did predict carotid cross-sectional area and lumen diameter. Additionally, BMI and WC were predictors of brachial systolic blood pressure (bSBP), whereas only BMI predicted central systolic blood pressure (cSBP), and both were independent predictors of augmentation index (AGI). Both BMI and WC were inversely and independently related to arterial-elastance (net arterial load) and end-systolic elastance (cardiac contractility) at rest and peak exercise, suggesting pathophysiological adaptations that may predispose individuals to an increased risk of CV events (PUBMED:24957486). This finding is consistent with the understanding that obesity is a risk factor for CV disease, even in the absence of other traditional risk factors. Other studies have also highlighted the importance of considering obesity as a risk factor for CV disease. For example, erectile dysfunction, which is associated with CV risk, has been found to be more common in obese individuals (PUBMED:26482327). Additionally, lifestyle management studies have shown that addressing obesity through weight loss and physical activity can improve erectile dysfunction, which is a marker of endothelial function and CV health (PUBMED:16387573). In conclusion, obesity can be predictive of cardiovascular dysfunction independent of other cardiovascular risk factors, and it is important to consider obesity as a modifiable risk factor in the prevention and management of CV disease.
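Editor's note: the odds ratios with 95% confidence intervals quoted in the risk-factor clustering study above (PUBMED:11905401) follow the standard 2x2-table arithmetic. The sketch below is only an editorial illustration of that calculation: the counts are hypothetical, and the published ORs were additionally adjusted for age and sex, which this unadjusted version does not reproduce.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # Unadjusted odds ratio and 95% CI from a 2x2 table:
    # a = exposed cases, b = exposed non-cases,
    # c = unexposed cases, d = unexposed non-cases.
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts, not taken from the abstract: 40 of 200 patients with
# clustered risk factors had an event, versus 60 of 800 patients without.
print(odds_ratio_ci(a=40, b=160, c=60, d=740))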
Instruction: Are there any functional differences of nitric oxide between the transitional segment in Hirschsprung's disease and the diseased colon in hypoganglionosis? Abstracts: abstract_id: PUBMED:18795669 Are there any functional differences of nitric oxide between the transitional segment in Hirschsprung's disease and the diseased colon in hypoganglionosis? Background/aims: In histological studies, there are no significant differences between the transitional segment (TS) of Hirschsprung's disease (HD) and the diseased segment in patients with hypoganglionosis (Hypo). In contrast, there are no reports on whether or not TS shows impaired motility like Hypo. Nitric oxide (NO) has recently been shown to be a neurotransmitter in the non-adrenergic noncholinergic (NANC) inhibitory nerves in the human gut. To clarify the significance of NO in TS and Hypo, enteric nervous responses in colonic tissue obtained from TS and Hypo were investigated. Methodology: This study investigated responses of the enteric nervous system including NANC inhibitory nerves in colonic tissue obtained from TS in 10 patients with HD (8 boys and 2 girls, aged from 6 months to 2 years) and diseased colon in 6 patients with Hypo (6 boys, aged from 6 months to 2 years). Normal colons obtained from patients with HD and Hypo (n = 16) were used as controls. Mechanography was used to evaluate in vitro colonic responses to electrical field stimulation (EFS) of adrenergic and cholinergic nerves before and after treatments with various autonomic nerve blockers, N(G)-monomethyl-L-arginine (L-NMMA), and L-arginine. Results: Non-adrenergic non-cholinergic (NANC) inhibitory nerves were found to act on the normal colon and to a lesser extent both in the TS and Hypo. In addition, there were no significant differences between the TS and Hypo. Nitric oxide (NO) mediates the relaxation reaction of the NANC inhibitory nerve in the normal colon and to a lesser extent both in the TS and Hypo. In addition, there was no significant difference between the TS and Hypo. Conclusions: Diminution of NO mediation of NANC inhibitory nerves may be largely related to the impaired motility observed in patients with TS and diseased colon of Hypo. abstract_id: PUBMED:7666315 Nitric oxide-containing nerves in bowel segments of patients with Hirschsprung's disease. To assess the role of nerves that synthesize nitric oxide (NO) in Hirschsprung's Disease (HD), the authors studied the distribution of the enzyme NADPH diaphorase (NADPHd) in normal and diseased bowel segments. In the proximal (ganglionic) segment of the colon, NADPHd-positive neurons were present in both myenteric and submucosal plexuses. In the distal involved colonic segments from HD patients, the typical pattern of the neuronal network was completely missing in the regions of the two plexuses; instead, only disorganized NADPHd-positive nerve fibers were present and NADPHd-reactive neurons were absent. Mucosal NO synthase activity was 2.76 +/- 0.38 nmol/g/min in the proximal segment and only 0.83 +/- 0.49 nmol/g/min in the distal segment (P < .05, N = 3). abstract_id: PUBMED:10342112 A correlative morphometric and clinical investigation of hypoganglionosis of the colon in children. Hypoganglionosis of the myenteric plexus of the colon is not clearly defined and seldom investigated. Colon segments from 15 children with an extended oligoneuronal hypoganglionosis up to the proximal resection end were morphometrically studied and compared to normally innervated colon segments.
The study was performed with resected specimens from 7 children with isolated hypoganglionoses, 8 children with a Hirschsprung-associated hypoganglionosis, and 12 colon segments with normal innervation. The resected colon specimens were coiled caudo-cranially. The native tissue was frozen at -80 degrees C on a cryostat carrier and cut at -20 degrees C into 15-micron-thick sections (equivalent to 4-5-micron-thick paraffin sections). The air-dried sections underwent an enzyme-histochemical procedure for an acetylcholinesterase reaction to stain the parasympathetically innervated myenteric plexus. For histological identification and morphometric measurements, ganglia and nerve cells were selectively stained using a lactic dehydrogenase reaction. The morphometric measurements were performed with an optic-electronic image analysis system that determined ganglion size, ganglion distances, nerve cell number per ganglion, and ganglion number per mm colon. The results showed that hypoganglionosis of the myenteric plexus is characterised by a 42% decrease in plexus area and a 55% decrease of the nerve cell number per mm length of colon. The number and area of myenteric ganglia showed a decrease of 59% and a doubling of the ganglion distances. The histopathological diagnosis of a hypoganglionosis of the colon was not necessarily an indication of chronic constipation, but rather an indication of a disposition for constipation. Chronic constipation is often caused by a long hypoganglionic segment proximal to a resected short Hirschsprung segment. abstract_id: PUBMED:7510876 Nitric oxide synthase is deficient in the aganglionic colon of patients with Hirschsprung's disease. Objectives: The cause of Hirschsprung's disease is unknown but defects in nonadrenergic, non-cholinergic innervation could prevent relaxation of aganglionic colon in patients with this disease. Nonadrenergic, noncholinergic nerves induce relaxation by using nitric oxide synthase to produce the smooth muscle relaxant nitric oxide (NO). In this study we asked whether aganglionic colon in patients with Hirschsprung's disease is deficient in NO synthase-containing nerves. Methodology: Using the tetrazolium blue dye method of demonstrating nicotinamide adenine dinucleotide phosphate-diaphorase enzymes, we examined eight colon specimens (four aganglionic and four ganglionic) from patients with Hirschsprung's disease for the presence of NO synthase. We further quantified NO synthase enzyme activity in these eight specimens by using the [3H]arginine-to-[3H]citrulline conversion assay. Results: The nicotinamide adenine dinucleotide phosphate-diaphorase staining showed that aganglionic colon contained less NO synthase than ganglionic colon. This NO synthase deficiency was located primarily in the nerves of the circular muscle layer of the colon. In addition, there was a striking difference in the NO synthase enzyme activity between aganglionic and ganglionic colon as measured by the [3H]arginine-to-[3H]citrulline conversion assay. Total NO synthase activity, as measured by this assay, was found to be less in aganglionic than in ganglionic colon. When the total activity was divided into its four known isoforms, aganglionic colon was noted to be strikingly deficient in the isoform derived primarily from nerves. Conclusion: We conclude that aganglionic colon is deficient in NO synthase-containing nerves. This deficiency could prevent smooth muscle relaxation in the aganglionic colon of patients with Hirschsprung's disease.
abstract_id: PUBMED:10211651 Morphological investigation of the enteric nervous system in Hirschsprung's disease and hypoganglionosis using whole-mount colon preparation. Background/purpose: A suction rectal mucosal biopsy with positive staining for acetylcholinesterase is a useful test for diagnosis of Hirschsprung's disease (HD). However, hypoganglionosis has not been diagnosed by a rectal mucosal biopsy. The authors morphologically examined the enteric nervous systems in HD and hypoganglionosis patients using whole-mount preparations. Methods: Six HD patients, two hypoganglionosis patients, and 10 with normally innervated colons were examined. Colonic specimens were incubated with the primary antibodies against protein gene product 9.5 (PGP 9.5) mixed with S-100b protein, tyrosine hydroxylase (TH), calcitonin gene-related peptide (CGRP), substance P (SP), and neurofilament protein 200 kDa (NFH). They were observed by histochemical technique using light microscopy in whole-mount preparations. Results: The aganglionic distal colon had thick nerve strands stained with PGP 9.5 mixed with S100 or NFH located in the layer between the longitudinal muscle and the circular one, and the submucosal layer. The nerve strands in the myenteric layer contained few CGRP- and SP-positive fibers and ran along the long axis of the intestine. Ganglion cells appeared along with those thick nerve strands in the transitional zone of HD. In hypoganglionosis, we found small myenteric ganglia with no thick nerve strands. Conclusions: The enteric nervous system in oligoganglionic segments of HD morphologically differed from the one in hypoganglionosis. A suction rectal mucosal biopsy would be of no use in the diagnosis of hypoganglionosis. abstract_id: PUBMED:37427059 Diagnostic challenges of hypoganglionosis based on immunohistochemical method. Background: Hypoganglionosis resembles Hirschsprung's disease, as in both diseases patients may present with severe constipation or pseudo-obstruction. To date, diagnosis of hypoganglionosis is still difficult to establish due to the lack of international consensus regarding diagnostic criteria. This study aims to evaluate the use of immunohistochemistry to provide objective support for our initial subjective impression of hypoganglionosis as well as to describe the morphological features of this study. Methods: This is a cross-sectional study. Three resected intestinal samples from patients with hypoganglionosis at Kyushu University Hospital, Fukuoka, Japan were included in this study. One healthy intestinal sample was used as control. All specimens were immunohistochemically stained with anti-S-100 protein, anti-α-smooth muscle actin (α-SMA), and anti-c-kit protein antibodies. Results: (I) S-100 immunostaining: hypoplasia of the myenteric ganglia and marked reduction of intramuscular nerve fibers were observed in several segments of the intestine. (II) α-SMA immunostaining: the pattern of the muscular layers was almost normal in all segments; however, some areas showed hypotrophy of the circular muscle (CM) layers and hypertrophy of the longitudinal muscle (LM) layers. (III) C-kit immunostaining: a decrease in the number of interstitial cells of Cajal (ICCs) was observed in almost all segments of the resected intestine, even around the myenteric plexus.
Conclusions: Each segment of intestine in hypoganglionosis had different numbers of ICCs, sizes, and distributions of ganglions, as well as patterns of musculature, which may range from severely abnormal to nearly normal. Further investigations regarding the definition, etiology, diagnosis, and treatment of this disease should be performed to improve the prognosis of this disease. abstract_id: PUBMED:578540 Zonal colonic hypoganglionosis. We report the case of a patient with chronic constipation since birth, a strictured area in the distal sigmoid colon, and histologic findings of localized hypoganglionosis. We have compared this entity (zonal colonic hypoganglionosis) and reviewed the literature of all known cases of zonal colonic aganglionosis and hypoganglionosis. abstract_id: PUBMED:30416639 Congenital intestinal hypoganglionosis: A radiologic mimic of Hirschsprung's disease. Intestinal hypoganglionosis or isolated hypoganglionosis is a rare entity with a clinical and radiologic presentation that can mimic Hirschsprung's disease in the neonatal period. The diagnosis of this entity can be challenging with suction rectal biopsies that are standard for diagnosing Hirschsprung's disease. We present this case of congenital intestinal hypoganglionosis detailing the neonatal course, due to its rarity and the conundrums faced before an eventual diagnosis could be rendered. This case also illustrates the role of full thickness rectal biopsy in selected cases such as ours where the radiologic features are typical of Hirschsprung's, despite negative suction biopsies. abstract_id: PUBMED:7760238 A role of nitric oxide in Hirschsprung's disease. Nitric oxide (NO) has recently been shown to be a neurotransmitter in the nonadrenergic noncholinergic (NANC) inhibitory nerves in the gastrointestinal tract. To clarify the significance of NO in Hirschsprung's disease (HD), enteric nerve responses in colonic tissue obtained from HD patients were investigated. Colonic tissue specimens were obtained from four patients with HD and from 11 patients without constipation who were used as controls. A mechanograph was used to evaluate in vitro colonic responses to electrical field stimulation (EFS) of the adrenergic and cholinergic nerves before and after treatment with various autonomic nerve blockers, and NG-nitro-L-arginine (L-NNA) and L-arginine with the following results: (1) NANC inhibitory nerves were found to act on normal human colon, but had no effect on aganglionic colon; (2) L-NNA concentration dependently inhibited the relaxation in response to EFS in the normal colon, but had no effect on aganglionic colon; and (3) this inhibitory effect was reversed by L-arginine in the normal colon, but had no effect on the aganglionic colon. Nitric oxide mediates the relaxation reaction of NANC inhibitory nerves in the human colon, but the effect of NO was absent in aganglionic colon. The loss of action by NO may be implicated in the impaired motility observed in aganglionic colon. abstract_id: PUBMED:7965500 Effect of nitric oxide on the colonic smooth muscle of patients with Hirschsprung's disease. Hirschsprung's disease results in bowel obstruction because of a failure of smooth muscle relaxation in both the aganglionic segment of bowel and the internal anal sphincter (IAS). Nonadrenergic noncholinergic (NANC) nerves, which use nitric oxide (NO) as their chemical messenger, are responsible for relaxing smooth muscle in normal bowel and the IAS. 
Previous work indicates that the cause of the aganglionic colon's inability to relax may be a lack of NANC nerves. To test this hypothesis, the authors compared the effect of an exogenous source of NO, S-nitroso-N-acetylpenicillamine (SNAP), on the isometric tension of smooth muscle strips taken from the ganglionic colon, aganglionic colon, and IAS of patients with Hirschsprung's disease. Exposure of ganglionic and aganglionic colon specimens to SNAP (10(-3) to 10(-5) mol/L) resulted in up to 70% reduction of resting tension. This relaxation occurred in a dose-dependent fashion and could be promptly reversed by the addition of the NO antagonist methylene blue. However, SNAP had no demonstrable effect on the smooth muscle strips taken from the IAS of patients with Hirschsprung's disease. This finding suggests that, in the aganglionic colon, a deficiency of NANC nerves contributes to the development of bowel obstruction. However, the failure of the IAS to relax in Hirschsprung's disease appears to be unrelated to NO and the NANC nervous system. Answer: Based on the provided abstracts, there do not appear to be significant functional differences in the role of nitric oxide (NO) between the transitional segment in Hirschsprung's disease (HD) and the diseased colon in hypoganglionosis (Hypo). Both conditions show a diminution of NO mediation of non-adrenergic non-cholinergic (NANC) inhibitory nerves, which is largely related to the impaired motility observed in patients with the transitional segment of HD and diseased colon of Hypo (PUBMED:18795669). In HD, the aganglionic colon is deficient in NO synthase-containing nerves, which could prevent smooth muscle relaxation (PUBMED:7510876). Similarly, in Hypo, there is a decrease in the number of nerve cells per mm length of colon and a decrease in the area of myenteric ganglia, which could also affect the normal relaxation of the colon (PUBMED:10342112). Furthermore, the distribution of the enzyme NADPH diaphorase (NADPHd), which is associated with NO synthesis, is disrupted in the distal involved colonic segments from HD patients, indicating a deficiency in NO-containing nerves (PUBMED:7666315). This deficiency in NO synthase-containing nerves is also evident in the aganglionic colon of HD patients (PUBMED:7510876). In summary, both the transitional segment in HD and the diseased colon in Hypo exhibit impaired motility due to a reduction in NO-mediated relaxation of the NANC inhibitory nerves, with no significant functional differences reported between the two conditions in the context of NO's role (PUBMED:18795669).
Instruction: Kazakhstan: a folate-deficient area? Abstracts: abstract_id: PUBMED:18651017 Effects of exercise on the development of atherosclerosis in apolipoprotein E-deficient mice. It has already been shown that mild to moderate exercise training may protect against the development of atherosclerosis. However, the precise mechanisms behind this protection are still unknown. The hypothesis that exercise training reduces the severity of experimental atherosclerosis in apolipoprotein (apo) E-deficient mice was assessed. Swimming training was conducted three times per week for 20 min on apo E-deficient mice fed a high-fat diet for eight or 16 weeks. Atherosclerotic lesions were evaluated. Fatty streak formation and fibrofatty plaques developed in apo E-deficient mice fed the high-fat diet, and were markedly suppressed in mice that received exercise for eight or 16 weeks compared with nonexercise mice. Differences in lesion area did not correlate with any significant alterations in serum lipid levels. Thus, exercise therapy markedly suppressed experimental atherosclerosis. abstract_id: PUBMED:26221341 Serum galactose-deficient IgA1 levels in children with IgA nephropathy. Immunoglobulin A nephropathy (IgAN) is an immunopathologic diagnosis based on a renal biopsy; it is characterized by deposits of IgA-containing immune complexes in the mesangium. Adults with IgAN have a galactose-deficient IgA1 in the circulation and glomerular deposition. There are few studies on the glycosylation of serum IgA1 in children with IgAN. To measure the serum levels of galactose-deficient IgA1 in pediatric patients with IgAN, 72 biopsy-proven IgAN children were divided into 3 groups based on the clinical features: isolated hematuria group (24 patients), hematuria and proteinuria group (22 patients), and nephritic syndrome group (26 patients). They were also divided into 3 groups according to pathologic grading: grade I + II group (25 patients), grade III group (33 patients) and grade IV + V group (14 patients). Thirty healthy children were recruited as a control group. We used vicia villosa lectin binding enzyme-linked immunosorbent assay to measure the serum levels of galactose-deficient IgA1 in all groups and controls. Serum levels of galactose-deficient IgA1 in children with IgAN were higher than controls (P < 0.01). There were no significant differences in serum levels of galactose-deficient IgA1 among the different clinical and pathologic grading groups. The values of the area under the curve for galactose-deficient IgA1 levels were 0.976 (95% CI, 0.953-1.000). The cutoff point for galactose-deficient IgA1 levels was 0.125, with a sensitivity of 87.5% and a specificity of 83.3%, with a positive predictive value of 92.6% and a negative predictive value of 73.5% (P < 0.01). Children with IgAN presented serum galactose-deficient IgA1, which has shown no relationship with the clinical manifestations and pathologic grading of the disease. Detection of serum galactose-deficient IgA1 levels by vicia villosa lectin binding enzyme-linked immunosorbent assay has a certain clinical value in diagnosis of children with IgAN. abstract_id: PUBMED:31249914 Induction of Diabetes Abolishes the Antithrombotic Effect of Clopidogrel in Apolipoprotein E-Deficient Mice. Patients with acute coronary syndrome with diabetes mellitus (DM) exhibit an impaired platelet inhibitory response to clopidogrel, which is only partially understood. DM was induced by the administration of streptozotocin (STZ) to 9-week-old mice.
The antithrombotic effects of clopidogrel (10 mg/kg/d, orally × 5 days) were determined using a FeCl3-induced thrombosis model employing wild-type (WT), apolipoprotein E (apoE)-deficient, and diabetic apoE-deficient mice at 21 weeks. Antiplatelet effects were determined using flow cytometry. The antithrombotic effects of clopidogrel were similar in WT and apoE-deficient mice but were attenuated in diabetic apoE-deficient mice with the percent inhibition of thrombus area (µm2) by clopidogrel being 85.5% (WT mice), 75.0% (apoE-deficient mice), and 1.9% (diabetic apoE-deficient mice). The time to first occlusion and lumen stenosis also reflected a significant loss of the antithrombotic effects of clopidogrel in diabetic apoE-deficient mice. Ex vivo platelet activation, which was assessed using ADP-induced expression of activated glycoprotein IIb/IIIa, was completely inhibited by clopidogrel in these three groups of mice. In contrast, the effect of clopidogrel on the ex vivo expression of platelet P-selectin induced by protease-activated receptor 4-activating peptide was diminished in diabetic apoE-deficient mice compared with that in WT and apoE-deficient mice. These data suggest that diabetic apoE-deficient mice may serve as a useful model to better understand the impaired responses to clopidogrel in patients with DM, which may partially reflect a reduction of the effect of clopidogrel on thrombin-induced platelet activation. abstract_id: PUBMED:31719758 Renal lesions in leptin receptor-deficient medaka (Oryzias latipes). The aim of this study was to elucidate the renal lesions of leptin receptor-deficient medaka showing hyperglycemia and hypoinsulinemia and to evaluate the usefulness of the medaka as a model of diabetic nephropathy. Leptin receptor-deficient medaka at 20 and 30 weeks of age showed hyperglycemia and hypoinsulinemia; they also showed a higher level of plasma creatinine than the control medaka. Histopathologically, dilation of glomerular capillary lumina and of afferent/efferent arterioles was observed in leptin receptor-deficient medaka at 20 weeks of age, and then glomerular enlargement with cell proliferation and matrix expansion, formation of fibrin cap-like lesions, glomerular atrophy with Bowman's capsule dilation, and renal tubule dilation were observed at 30 weeks of age. These histopathological characteristics of leptin receptor-deficient medaka were similar to the characteristics of kidney lesions of human and rodent models of type II diabetes mellitus, making leptin receptor-deficient medaka a useful model of diabetic nephropathy. abstract_id: PUBMED:8605672 Nephelometric determination of carbohydrate deficient transferrin. We describe a technique for measuring carbohydrate-deficient transferrin (CDT) in serum. Serum transferrin fractions are separated by anion-exchange chromatography on microcolumns. Sialic acid-deficient transferrin fractions are collected in the eluate, and transferrin is then quantified by a rate-nephelometric technique. Imprecision (CV) was 4-5% within-run and 7-9% between runs (n = 15). Comparison with an isoelectric focusing-immunofixation method for transferrin index (x) yielded y = 761x + 7, Sy/x = 39 mg/L. Assay of sera from 90 abstainers or moderate consumers of alcohol showed that 81 (90%) had CDT concentrations between 30 and 70 mg/L.
Among 74 alcoholics admitted to an alcohol treatment center, 54 (73%) had CDT > 70 mg/L, i.e., the diagnostic sensitivity was 73% at a specificity of 90% (area under receiver-operator characteristic curve = 0.891). abstract_id: PUBMED:36282627 Organophotocatalytic [2+2] Cycloaddition of Electron-Deficient Styrenes. A visible-light organophotocatalytic [2+2] cycloaddition of electron-deficient styrenes is described. Photocatalytic [2+2] cycloadditions are typically performed with electron-rich styrene derivatives or α,β-unsaturated carbonyl compounds, and with transition-metal-based catalysts. We have discovered that an organic cyanoarene photocatalyst is able to deliver high-value cyclobutane products bearing electron-deficient aryl substituents in good yields. A range of electron-deficient substituents are tolerated, and both homodimerisations and intramolecular [2+2] cycloadditions to fused bicyclic systems are available by using this methodology. abstract_id: PUBMED:29479143 Ocular lesions in leptin receptor-deficient medaka (Oryzias latipes). Ocular lesions in leptin receptor-deficient medaka were examined histopathologically at 10, 28, and 37 weeks post hatching. Leptin receptor-deficient medaka at 28 and 37 weeks old showed hyperglycemia and hypoinsulinemia. Histopathologically, vacuolation, swelling, fragmentation, and liquefaction of the lens fibers and dilatation of the retinal central veins, retinal capillaries, iridal veins and capillaries, and choroidal veins were observed in leptin receptor-deficient medaka at 28 and 37 weeks old. Thinning of the total retina, pigment epithelial layer, layer of rods and cones, outer granular layer, outer plexiform layer, inner granular layer, and inner plexiform layer was observed in leptin receptor-deficient medaka at 28 and 37 weeks compared with control medaka. These histopathological characteristics in leptin receptor-deficient medaka are similar to characteristics in ocular lesions of rodent models for type II diabetes mellitus, making leptin receptor-deficient medaka a useful model of diabetic cataract and retinopathy. abstract_id: PUBMED:38311902 Succinate Dehydrogenase Deficient Renal Cell Carcinoma With Sarcomatoid and Rhabdoid Features-A Diagnostic Dilemma. Succinate dehydrogenase (SDH)-deficient renal cell carcinoma (RCC) is a rare epithelial tumor with a biallelic mutation involving any subunit of the SDH complex. Mostly, it has low-grade morphology and a favorable prognosis. We present a case of a 36-year-old woman with weight loss, night sweats, and symptomatic anemia. Her imaging showed a hypo-enhancing heterogeneous right renal mass with invasion of the renal vein and inferior vena cava. Microscopically, the tumor had focal low-grade areas (5%) and extensive areas with high-grade features, including rhabdoid (85%) and sarcomatoid (10%) dedifferentiation. Cytoplasmic inclusions, foci of extracellular mucin, coagulative necrosis, and inflammatory infiltrate were present. The tumor cells, including the rhabdoid-differentiated cells, were focally positive for AE1/AE3. Tumor cells showed loss of SDHB immunostaining, consistent with the diagnosis. Genetic testing was recommended, but the patient expired due to metastatic carcinoma. Prior studies suggest that sarcomatoid transformation and coagulative necrosis increase the risk of metastasis by up to 70% in SDH-deficient RCC. Follow-up with surveillance for other SDH-deficient neoplasms is recommended in cases of germline mutation.
Here, we report the first case of SDH-deficient RCC with concomitant rhabdoid and sarcomatoid features and a detailed review of diagnostic difficulties associated with high-grade tumors. abstract_id: PUBMED:34262616 Targeted therapy in SDH-deficient GIST. The medical management of advanced gastrointestinal stromal tumors (GIST) has improved with the development of tyrosine kinase inhibitors (TKIs) targeting KIT and PDGFRA mutations. However, approximately 5-10% of GIST lack KIT and PDGFRA mutations, and about a half are deficient in succinate dehydrogenase (SDH) that promotes carcinogenesis by the cytoplasmic accumulation of succinate. This rare group of GIST primarily occurs in the younger patients than other subtypes, and is frequently associated with hereditary syndromes. The role of TKIs in patients with SDH-deficient GIST is controversial, with conflicting results; thus, there is an urgent need to uncover the disease mechanisms, treatment patterns, and responses to systemic therapy among these patients. Here, based on an extensive literature search, we have provided a rigorous overview of the current evidence on the medical treatment of SDH-deficient GIST. abstract_id: PUBMED:26855498 Carbohydrate Deficient Transferrin and Interleukin-6 as Predictors of Fibrosis in Alcohol Cirrhosis. The severity of alcoholic cirrhosis depends on the presence of liver inflammation and fibrogenesis. Previous studies have hypothesized that carbohydrate deficient transferrin can be used as marker of liver impairment in alcoholic liver disease patients. The present study was designed to assess whether carbohydrate deficient transferrin is associated with procollagen III peptide and predict fibrosis in alcohol cirrhosis patients. We enrolled 48 patients with alcoholic cirrhosis and 38 healthy controls. Serum carbohydrate deficient transferrin, procollagen III peptide and interleukin-6 levels were estimated in both groups. Serum carbohydrate deficient transferrin, procollagen III peptide and interleukin-6 were significantly increased in alcoholic cirrhosis patients compared to controls. Stepwise regression analysis showed that carbohydrate deficient transferrin (adjusted R(2) = 0.313, β = 0.362, p = 0.003) and interleukin-6 (adjusted R(2) = 0.194, β = 0.459, p = 0.001) were positively associated with procollagen III peptide when age, duration and amount of alcohol consumption were considered as covariates. We conclude that elevated carbohydrate deficient transferrin and interleukin-6 act as predictors of fibrosis in alcoholic cirrhosis. Answer: The provided abstracts do not contain any information regarding Kazakhstan being a folate-deficient area. 
The abstracts discuss various topics related to different types of deficiencies and diseases, such as the effects of exercise on atherosclerosis in apolipoprotein E-deficient mice (PUBMED:18651017), serum galactose-deficient IgA1 levels in children with IgA nephropathy (PUBMED:26221341), the impact of diabetes on the antithrombotic effect of clopidogrel in apolipoprotein E-deficient mice (PUBMED:31249914), renal and ocular lesions in leptin receptor-deficient medaka as models for diabetic nephropathy and diabetic retinopathy (PUBMED:31719758, PUBMED:29479143), a nephelometric determination of carbohydrate-deficient transferrin (PUBMED:8605672), organophotocatalytic [2+2] cycloaddition of electron-deficient styrenes (PUBMED:36282627), a case of succinate dehydrogenase-deficient renal cell carcinoma with sarcomatoid and rhabdoid features (PUBMED:38311902), targeted therapy in SDH-deficient gastrointestinal stromal tumors (PUBMED:34262616), and carbohydrate-deficient transferrin and interleukin-6 as predictors of fibrosis in alcohol cirrhosis (PUBMED:26855498). To determine if Kazakhstan is a folate-deficient area, one would need to look for studies or reports specifically addressing the prevalence of folate deficiency or related health issues in the Kazakhstani population. This information is not provided in the abstracts given.
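Editor's note: the diagnostic accuracy figures reported for serum galactose-deficient IgA1 in the record above (PUBMED:26221341) are linked by simple confusion-table arithmetic. The sketch below shows how sensitivity, specificity, PPV and NPV relate; the cell counts are back-calculated from the reported 87.5% sensitivity and 83.3% specificity in 72 IgAN children and 30 controls, so they are an inference rather than figures stated in the abstract.

def diagnostic_metrics(tp, fp, fn, tn):
    # Sensitivity, specificity, PPV and NPV from a 2x2 confusion table.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# Inferred counts at the 0.125 cutoff: 63 true positives, 9 false negatives,
# 25 true negatives, 5 false positives.
sens, spec, ppv, npv = diagnostic_metrics(tp=63, fp=5, fn=9, tn=25)
print(f"sens={sens:.1%} spec={spec:.1%} ppv={ppv:.1%} npv={npv:.1%}")
# Prints values matching the abstract: 87.5%, 83.3%, 92.6%, 73.5%.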
Instruction: Synchronous work: myth or reality? Abstracts: abstract_id: PUBMED:34935997 Virtual and augmented reality in urology Although continuous technological developments have optimized and evolved medical care throughout time, these technologies were mostly still comprehensible for users. Driven by immense financial efforts, modern innovative products and technical solutions are transforming medicine today and will do so even more in the future: virtual and augmented reality. This review critically summarizes the current literature and future uses of virtual and augmented reality in the field of urology. abstract_id: PUBMED:38143714 Enhancing pain modulation: the efficacy of synchronous combination of virtual reality and transcutaneous electrical nerve stimulation. Introduction: Virtual reality (VR) and transcutaneous electrical nerve stimulation (TENS) have emerged as effective interventions for pain reduction. However, their standalone applications often yield limited analgesic effects, particularly in certain painful conditions. Aims: Our hypothesis was that the combination of VR with TENS in a synchronous manner could produce the best analgesic effect among the four experimental conditions. Methods: To address this challenge, we proposed a novel pain modulation strategy that synchronously combines VR and TENS, aiming to capitalise on both techniques' complementary pain modulation mechanisms. Thirty-two healthy subjects participated in the study and underwent three types of interventions: VR alone, a combination of VR with conventional TENS, and a combination of VR with synchronous TENS. Additionally, a control condition with no intervention was included. Perceived pain intensity, pain unpleasantness, positive and negative affect scores, and electroencephalographic (EEG) data were collected before and after the interventions. To delve into the potential moderating role of pain intensity on the analgesic efficacy of VR combined with synchronous TENS, we incorporated two distinct levels of painful stimuli: one representing mild to moderate pain (ie, low pain) and the other representing moderate to severe pain (ie, high pain). Results: Our findings revealed that both combination interventions exhibited superior analgesic effects compared with the VR-alone intervention when exposed to low and high pain stimuli. Notably, the combination of VR with synchronous TENS demonstrated greater analgesic efficacy than the combination of VR with conventional TENS. EEG data further supported these results, indicating that both combination interventions elicited a greater reduction in event-related potential magnitude compared with the VR-alone intervention during exposure to low and high pain stimuli. Moreover, the synchronous combination intervention induced a more significant reduction in N2 amplitude than the VR-alone intervention during exposure to low pain stimuli. No significant differences in EEG response changes were detected between the two combination interventions. Both combination interventions resulted in a greater reduction in negative affect compared with the VR-alone intervention. Conclusions: Altogether, our study highlights the effectiveness of the synchronous combination of VR and TENS in enhancing pain modulation. These findings offer valuable insights for developing innovative pain treatments, emphasising the importance of tailored and multifaceted therapeutic approaches for various painful conditions. 
abstract_id: PUBMED:30522160 Augmented reality and virtual reality in the operating theatre status quo und quo vadis Introduction: Virtual reality (VR) is an artificially simulated environment permitting interaction. On the other hand, augmented reality (AR) is an enhanced version of reality created by the use of technology to overlay digital information on an image of something being viewed through a device. Both technologies have partially been implemented in clinical daily routine. Surgical applications of VR and AR are currently evaluated. Yet it is still unclear which possibilities these new and versatile applications offer for physicians. Intention: The goal of this article was to assess current and future use of AR und VR, with a special focus on surgery. We also summarised obstacles for AR and VR use as well as potential clinical improvements through these new technologies. Methods: Systematic literature research in PubMed with inclusion of reviews referring to AR and VR, especially focused on articles on surgery. Keywords were Augmented Reality, Virtual Reality, Telementoring and Telesurgery. Furthermore we briefly analysed the investment volume and investment strategy in medicine from the ten largest private technology companies of the USA. Results: The keyword "Augmented Reality" leads to 1222 articles and 119 reviews, while "Virtual Reality" offered 7766 articles and 878 reviews. In recent years, the amount of published articles has increased. 45 articles were included. Multiple AR- and VR-applications are already integrated in surgical daily routine. The next promising applications will concern the possibility of intraoperative overlap with radiological imaging via AR-tools, as well as telementoring and the use of AR and VR in surgical and anatomical education. The expected - but unproven - advantages include cost savings, reduction in complications, comprehensive knowledge acquisition and improvement in surgical results. In addition, we notice enormous financial investment by technology companies in this sector. Conclusion: Due to their tremendous potential, AR and VR technologies will be increasingly integrated into surgery. However, the benefit of these new technologies for relevant endpoints are currently unclear. This should be examined in rigorous clinical trials. Physicians should play a key role in this technological revolution to exploit the potential of AR and VR for their patients. abstract_id: PUBMED:33588501 Surgical Education in the Digital Age - Virtual Reality, Augmented Reality and Robotics in the Medical School Background: The digital transformation of healthcare is changing the medical profession. Augmented/Virtual Reality (AR/VR) and robotics are being increasingly used in different clinical contexts and require supporting education and training, which must begin within the medical school. There is currently a large discrepancy between the high demand and the number of scientifically proven concepts. The aim of this thesis was the conceptual design and structured evaluation of a newly developed learning/teaching concept for the digital transformation of medicine, with a special focus on the influence of surgical teaching. Methods: Thirty-five students participated in three courses of the blended learning curriculum "Medicine in the digital age". The 4th module of this course deals with virtual reality, augmented reality and robotics in surgery. 
It is divided into the following course parts: (1) immersive surgery simulation of a laparoscopic cholecystectomy, (2) liver surgery planning using AR/VR, (3) basic skills on the VR simulator for robotic surgery, (4) collaborative surgery planning in virtual space and (5) expert discussion. After completing the overall curriculum, a qualitative and quantitative evaluation of the course concept was carried out by means of semi-structured interviews and standardised pre-/post-evaluation questionnaires. Results: In the qualitative evaluation procedure of the interviews, 79 text statements were assigned to four main categories. The largest share (35%) was taken up by statements on the "expert discussion", which the students consider to be an elementary part of the course concept. In addition, the students perceived the course as a horizon-widening "learning experience" (29% of the statements) with high "practical relevance" (27%). The quantitative student evaluation shows a positive development in the three sub-competences: knowledge, skills and attitude. Conclusion: Surgical teaching can be profitably used to develop digital skills. The speed of the change process of digital transformation in the surgical specialty must be considered. Curricular adaptation should be anchored in the course concept. abstract_id: PUBMED:31915586 Survival between synchronous and non-synchronous multiple primary cutaneous melanomas-a SEER database analysis. Background: There is no criterion to distinguish synchronous and non-synchronous multiple primary cutaneous melanomas (MPMs). This study aimed to distinguish synchronous and non-synchronous MPMs and compare their survivals using the Surveillance, Epidemiology, and End Results database. Methods: Synchronous and non-synchronous MPMs were distinguished by fitting the double log transformed distribution of the time interval between the first and second primary cutaneous melanomas (TIFtS) through a piecewise linear regression. The overall and melanoma-specific survivals were compared by the Kaplan-Meier method and Cox proportional hazard model through modeling the occurrence of synchronous MPMs as a time-dependent variable. Results: The distribution of TIFtS was composed of three power-law distributions. According to its first inflection point, synchronous MPMs were defined as tumors that occurred within 2 months. The Kaplan-Meier plot revealed a significantly inferior survival for synchronous MPMs compared with non-synchronous MPMs (P < 0.0001), and the occurrence of synchronous MPM was a risk factor for overall survival of cutaneous melanoma (CM) (hazard ratio: 2.213; 95% CI [2.087-2.346]; P < 0.0001). Conclusions: This study provided data-analysis evidence for using 2 months to distinguish synchronous MPMs and non-synchronous MPMs. Furthermore, the occurrence of synchronous MPM was a risk factor for prognosis of patients with CM. abstract_id: PUBMED:28453428 Reality Check: Basics of Augmented, Virtual, and Mixed Reality. Augmented, virtual, and mixed reality applications all aim to enhance a user's current experience or reality. While variations of this technology are not new, within the last few years there has been a significant increase in the number of artificial reality devices or applications available to the general public. This column will explain the difference between augmented, virtual, and mixed reality and how each application might be useful in libraries.
It will also provide an overview of the concerns surrounding these different reality applications and describe how and where they are currently being used. abstract_id: PUBMED:28293571 Virtual reality, augmented reality…I call it i-Reality. The new term improved reality (i-Reality) is suggested to include virtual reality (VR) and augmented reality (AR). It refers to a real world that includes improved, enhanced and digitally created features that would offer an advantage on a particular occasion (i.e., a medical act). I-Reality may help us bridge the gap between the high demand for medical providers and the low supply of them by improving the interaction between providers and patients. abstract_id: PUBMED:38194123 Exploring the potential role for extended reality in Mohs micrographic surgery. Mohs micrographic surgery (MMS) is a cornerstone of dermatological practice. Virtual reality (VR) and augmented reality (AR) technology, initially used for entertainment, have entered healthcare, offering real-time data overlaying a surgeon's view. This paper explores potential applications of VR and AR in MMS, emphasizing their advantages and limitations. We aim to identify research gaps to facilitate innovation in dermatological surgery. We conducted a PubMed search using the following: "augmented reality" OR "virtual reality" AND "Mohs" or "augmented reality" OR "virtual reality" AND "surgery." Inclusion criteria were peer-reviewed articles in English discussing these technologies in medical settings. We excluded non-peer-reviewed sources, non-English articles, and those not addressing these technologies in a medical context. VR alleviates patient anxiety and enhances patient satisfaction while serving as an educational tool. It also aids physicians by providing realistic surgical simulations. On the other hand, AR assists in real-time lesion analysis, optimizing incision planning, and refining margin control during surgery. Both of these technologies offer remote guidance for trainee residents, enabling real-time learning and oversight and facilitating synchronous teleconsultations. These technologies may transform dermatologic surgery, making it more accessible and efficient. However, further research is needed to validate their effectiveness, address potential challenges, and optimize seamless integration. All in all, AR and VR enhance real-world environments with digital data, offering real-time surgical guidance and medical insights. By exploring the potential integration of these technologies in MMS, our study identifies avenues for further research to thoroughly understand the role of these technologies to redefine dermatologic surgery, elevating precision, surgical outcomes, and patient experiences. abstract_id: PUBMED:36197806 Comparison of in-person and synchronous remote musculoskeletal exam using augmented reality and haptics: A pilot study. Introduction: Utilization of telemedicine for health care delivery increased rapidly during the coronavirus disease 2019 (COVID-19) pandemic. However, physical examination during telehealth visits remains limited. A novel telerehabilitation system-The Augmented Reality-based Telerehabilitation System with Haptics (ARTESH)-shows promise for performing synchronous, remote musculoskeletal examination. Objective: To assess the potential of ARTESH in remotely examining upper extremity passive range of motion (PROM) and maximum isometric strength (MIS). 
Design: In this cross-sectional pilot study, we compared the in-person (reference standard) and remote evaluations (ARTESH) of participants' upper extremity PROM and MIS in 10 shoulder and arm movements. The evaluators were blinded to each other's results. Setting: Participants underwent in-person evaluations at a Veterans Affairs hospital's outpatient Physical Medicine and Rehabilitation (PM&R) clinic, and underwent remote examination using ARTESH with the evaluator located at a research lab 30 miles away, connected via a high-speed network. Patients: Fifteen participants with upper extremity pain and/or weakness. Interventions: Not applicable. Main Outcome Measures: Inter-rater agreement between in-person and remote evaluations on 10 PROM and MIS movements and presence/absence of pain with movement was calculated. Results: The highest inter-rater agreements were noted in shoulder abduction and protraction PROM (kappa (κ) = 0.44, confidence interval (CI): -0.1 to 1.0), and in elbow flexion, shoulder abduction, and shoulder protraction MIS (κ = 0.63, CI: 0 to 1.0). Conclusions: This pilot study suggests that synchronous tele-physical examination using the ARTESH system with augmented reality and haptics has the potential to provide enhanced value to existing telemedicine platforms. With the additional technological and procedural improvements and with an adequately powered study, the accuracy of ARTESH-enabled remote tele-physical examinations can be better evaluated. abstract_id: PUBMED:35309391 Synchronised neural signature of creative mental imagery in reality and augmented reality. Creativity, transforming imaginative thinking into reality, is a mental imagery simulation in essence. It can be incorporeal, concerns sophisticated and/or substantial thinking, and involves objects. In the present study, a mental imagery task consisting of creating a scene using familiar (FA) or abstract (AB) physical or virtual objects in real (RMI) and augmented reality (VMI) environments, and an execution task involving effectively creating a scene in augmented reality (VE), were utilised. The beta and gamma neural oscillations of healthy participants were recorded via a 32-channel wireless 10/20 international EEG system. In real and augmented environments and for both the mental imagery and execution tasks, the participants displayed a similar cortico-cortical neural signature essentially based on synchronous vs asynchronous beta and gamma oscillatory activities between anterior (i.e. frontal) and posterior (i.e. parietal, occipito-parietal and occipito-temporal) areas bilaterally. The findings revealed a transient synchronised neural architecture that appears to be consistent with the hypothesis according to which creativity, because of its inherent complexity, cannot be confined to a single brain area but engages various interconnected networks. Answer: Synchronous work refers to activities or processes that occur at the same time or in real-time collaboration. In the context of the provided abstracts, synchronous work is a reality, not a myth, as evidenced by various applications and studies. For instance, the efficacy of combining virtual reality (VR) with transcutaneous electrical nerve stimulation (TENS) in a synchronous manner has been demonstrated to produce superior analgesic effects compared to standalone interventions, indicating that synchronous work in pain management is effective (PUBMED:38143714).
Similarly, the Augmented Reality-based Telerehabilitation System with Haptics (ARTESH) enables synchronous, remote musculoskeletal examinations, showing the potential for real-time tele-physical examinations (PUBMED:36197806). In the field of surgery, synchronous work is also a reality with the use of VR and augmented reality (AR) technologies. These technologies are being integrated into surgical education and practice, allowing for real-time data overlaying a surgeon's view, which can assist in real-time lesion analysis, incision planning, and margin control during procedures such as Mohs micrographic surgery (PUBMED:30522160, PUBMED:38194123, PUBMED:33588501). Moreover, the concept of synchronous neural signatures has been explored in the context of creative mental imagery, where participants displayed similar cortico-cortical neural signatures based on synchronous beta and gamma oscillatory activities in both real and augmented environments (PUBMED:35309391). In summary, synchronous work is a reality in various fields, including pain management, surgery, telemedicine, and neuroscience. The abstracts provided offer evidence of the practical applications and benefits of synchronous activities, supported by technological advancements in VR and AR, as well as innovative medical devices and systems.
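The ARTESH pilot above (PUBMED:36197806) reports inter-rater agreement as kappa (κ). A minimal sketch of how such a coefficient is computed — assuming the common two-rater Cohen's kappa and using hypothetical ratings rather than data from the cited study — looks like this in Python:

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # Cohen's kappa for two raters labelling the same items with categorical codes.
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Hypothetical judgements for 15 participants: 1 = impairment present, 0 = absent
in_person = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1]
remote = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1]
print(round(cohens_kappa(in_person, remote), 2))

Kappa corrects raw percentage agreement for the agreement expected by chance from each rater's marginal frequencies, which is one reason a 15-participant pilot can produce point estimates with very wide confidence intervals, such as those reported in the abstract above.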
Instruction: Does the presence of additional thyroid nodules on ultrasound alter the risk of malignancy in patients with a follicular neoplasm of the thyroid? Abstracts: abstract_id: PUBMED:18063067 Does the presence of additional thyroid nodules on ultrasound alter the risk of malignancy in patients with a follicular neoplasm of the thyroid? Background: Follicular neoplasms of the thyroid are associated with an approximately 20% risk of malignancy. We sought to determine whether the presence of additional thyroid nodules on preoperative ultrasound decreased the risk of malignancy in a patient with a follicular neoplasm. Methods: Between January 2000 and November 2006, 325 patients underwent thyroidectomy with a fine needle aspiration diagnosis of either follicular neoplasm, Hürthle cell neoplasm, or indeterminate (not including suspicious for papillary thyroid cancer). Records were reviewed retrospectively and statistical analysis was performed using SPSS (SPSS Corporation, Chicago, Ill). Results: The rate of malignancy in our patient population was 20% (23% in follicular neoplasm, 19% in Hürthle cell neoplasm, 9% in indeterminate). Overall, 57% of patients had multiple thyroid nodules on preoperative ultrasound. The risk of malignancy was lower in patients with greater than or equal to 1 additional nodule in comparison with those with a solitary nodule (16.6% vs 28.0%, P = .02). The risk of malignancy was lowest in those with 1-3 additional nodules in comparison with those with greater than or equal to 4 nodules (14.5% vs 21.7%, P = .04). Conclusions: The presence of additional thyroid nodules on preoperative ultrasound is associated with a lower risk of malignancy in a patient with a follicular neoplasm. abstract_id: PUBMED:37530386 Ultrasound features and risk stratification system in NIFT-P and other follicular-patterned thyroid tumors. Objective: Noninvasive follicular thyroid neoplasm with papillary-like nuclear features (NIFT-P) is an encapsulated follicular variant of papillary thyroid carcinoma (PTC) with nonaggressive clinical behavior. However, since its diagnosis is exclusively possible after surgery, it represents a clinical challenge. Neck ultrasound (US) shows good sensitivity and specificity in suggesting malignancy in thyroid nodules. However, little information is available about its ability in identifying NIFT-P. Design: The aim of this study was to evaluate the US features of NIFT-P, comparing them with other follicular-patterned thyroid tumors, and to test the ability of the main US risk stratification system (RSS) in identifying NIFT-P. Methods: We retrospectively evaluated 403 consecutive patients submitted to thyroid surgery, with positive histology for at least 1 nodule being NIFT-P, follicular variant of PTC (FV-PTC), follicular thyroid carcinoma (FTC), or follicular adenoma (FA). Results: The US features of NIFT-P (n = 116), FV-PTC (n = 170), FTC (n = 76), and FA (n = 90) were reported. Follicular variant of PTC and FTC more frequently showed irregular margins, presence of calcifications, "taller than wide" shape, and the absence of halo compared with NIFT-P. Furthermore, FTC and also FA were larger and more frequently hypoechoic than NIFT-P. Most cases (77%) showed an indeterminate cytology. Regardless of the US RSS considered, NIFT-P and FA were less frequently classified in the high-suspicious category compared with FV-PTC and FTC. Conclusions: Ultrasound features of NIFT-P are frequently superimposable to those of nodules with low suspicion of malignancy. 
The NIFT-P is almost never classified in the high-suspicious category according to the main US RSS. Therefore, although the preoperative identification of NIFT-P remains a challenge, neck US can be integrated in the algorithm of management of nodules with indeterminate cytology, suggesting a possible conservative approach in those with low-suspicious features. abstract_id: PUBMED:34973102 Performance of current ultrasound-based malignancy risk stratification systems for thyroid nodules in patients with follicular neoplasms. Objectives: To investigate the ability of the currently used ultrasound-based malignancy risk stratification systems for thyroid neoplasms (ATA, AACE/ACE/AME, K-TIRADS, EU-TIRADS, ACR-TIRADS and C-TIRADS) in distinguishing follicular thyroid carcinoma (FTC) from follicular thyroid adenoma (FTA). Additionally, we evaluated the ability of these systems in correctly determining the indication for biopsy. Methods: Three hundred twenty-nine follicular neoplasms with definitive postoperative histopathology were included. The nodules were categorized according to each of six stratification systems, based on ultrasound findings. We dichotomized nodules into the positive predictive group of FTC (high and intermediate risk) and negative group of FTC based on the classification results. Missed biopsy was defined as neoplasms that were diagnosed as FTCs but for which biopsy was not indicated based on lesion classification. Unnecessary biopsy was defined as neoplasms that were diagnosed as FTAs but for which biopsy was considered indicated based on classification. The diagnostic performance and missed and unnecessary biopsy rates were evaluated for each stratification system. Results: The area under the curve of each system for distinguishing follicular neoplasms was < 0.700 (range, 0.511-0.611). The missed biopsy rates were 9.0-22.4%. The missed biopsy rates for lesions ≤ 4 cm and lesions sized 2-4 cm were 16.2-35.1% and 0-20.0%, respectively. Unnecessary biopsy rates were 65.3-93.1%. In the ≤4 cm group, the unnecessary biopsy rates were 62.2-89.7%. Conclusion: The malignancy risk stratification systems can select appropriate nodules for biopsy in follicular neoplasms, while they have limitations in distinguishing follicular neoplasms and reducing unnecessary biopsy. Specific stratification systems and recommendations should be established for follicular neoplasms. Key Points: • Current ultrasound-based malignancy risk stratification systems of thyroid nodules had low efficiency in the characterization of follicular neoplasms. • The adopted stratification systems showed acceptable performance for selecting FTC for biopsy but unsatisfactory performance for reducing unnecessary biopsy. abstract_id: PUBMED:32759413 Risk factors of malignancy in patients with fine needle aspiration biopsy results interpreted as "suspicious for follicular neoplasm" Objective: Approximately 10% of fine needle aspiration biopsy (FNAB) of thyroid nodules may be verified as "suspicious for follicular neoplasm"; this category involves follicular adenoma, follicular carcinoma, follicular variants of papillary carcinoma and subclass "suspicious for Hürthle cell neoplasm". At present, there is no diagnostic tool to discriminate between follicular adenoma and cancer. Most patients require surgery to exclude a malignant process. The aim: To define factors correlating with risk of malignancy in patients with FNAB of thyroid focal lesions and nodules verified as Bethesda tier IV.
Patients And Methods: In this study 110 consecutive patients were included. All patients were operated because of FNAB result "suspicious for follicular neoplasm" of thyroid gland at a single institution from January 2016 until March 2020. From this set, six specific categories were defined and the clinical records for patients were collected: sex, age, presence of oxyphilic cells, diameter of the tumour, presence of Hashimoto disease, aggregate amount of clinical and ultrasonographic features of malignancy according to ATA. Results: In 18 patients (16.3%) thyroid cancer occurred. The most frequent subtype turned out to be papillary cancer (66.6%). In the group of benign lesions (92 patients), follicular adenoma predominated (49%). Age, gender, tumour diameter, aggregate amount of clinical and ultrasonographic factors, presence of Hashimoto disease and fine needle aspiration biopsy result suspicious for Hürthle cell neoplasm did not correspond to increased risk of malignancy. Conclusion: In patients with FNAB results classified as Bethesda tier IV there are no reliable clinical features associated with low risk of malignancy, and surgery should be considered in every case as the most appropriate manner to exclude thyroid cancer. abstract_id: PUBMED:33309397 An attempt to reduce unnecessary surgical procedures... Can ultrasound characteristics help in differentiating adenoma vs carcinoma in follicular thyroid neoplasms? Introduction And Objectives: Thyroid nodules frequently require ultrasound and Fine Needle Aspiration Cytology (FNAC) evaluation. However, FNA cytology does not allow differentiation between follicular adenoma and carcinoma on Bethesda type IV lesions. This situation leads to many unnecessary surgical procedures because it is not possible to assure the benignity of the lesions, even when most of the specimens correspond to adenomas or even other benign lesions. The objective of this study is to establish if there are any US characteristics that would help us to predict the risk of malignancy of nodules with a pathological diagnosis of follicular neoplasm in order to achieve a more conservative management for non-suspicious nodules. Material And Methods: We studied 61 nodules in 61 patients (51 women and 10 men) who underwent thyroid surgery and had histopathological results of either follicular adenoma or carcinoma. Different US characteristics of the nodules were analysed (composition, echogenicity, margin, calcification status, the presence of halo and overall observer suspicion of malignancy) and were correlated with the histopathological analysis. Results: We have found a statistically significant association between the presence of calcifications, ill-defined borders and overall observer suspicion or impression (defined by well-known suspicious for malignancy ultrasonographic features, such as calcification, poorly defined margin, and a markedly hypoechoic solid nodule; and benign ultrasonographic features, such as predominantly cystic echogenic composition and the presence of a perinodular hypoechogenic halo) with follicular carcinoma. However, all those features have shown low sensitivities in the present study (30%, 30% and 50%, respectively). On the other hand, the absence of halo sign has shown a sensitivity of 100% and a negative predictive value (NPV) of 100% in our study.
Conclusions: The presence of calcifications, ill-defined borders and the overall impression or suspicion of malignancy are associated with a higher risk for follicular carcinoma in Bethesda type IV thyroid nodules but their absence does not allow prediction of benignity in these nodules. Inversely, when a halo sign lesion is observed, benign follicular neoplasm should be considered. abstract_id: PUBMED:36842782 An attempt to reduce unnecessary surgical procedures... Can ultrasound characteristics help in differentiating adenoma vs carcinoma in follicular thyroid neoplasms? Introduction And Objectives: Thyroid nodules frequently require ultrasound and Fine Needle Aspiration Cytology (FNAC) evaluation. However, FNA cytology does not allow differentiation between follicular adenoma and carcinoma on Bethesda type IV lesions. This situation leads to many unnecessary surgical procedures because it is not possible to assure the benignity of the lesions, even when most of the specimens correspond to adenomas or even other benign lesions. The objective of this study is to establish if there are any US characteristics that would help us to predict the risk of malignancy of nodules with a pathological diagnosis of follicular neoplasm in order to achieve a more conservative management for non-suspicious nodules. Material And Methods: We studied 61 nodules in 61 patients (51 women and 10 men) who underwent thyroid surgery and had histopathological results of either follicular adenoma or carcinoma. Different US characteristics of the nodules were analysed (composition, echogenicity, margin, calcification status, the presence of halo and overall observer suspicion of malignancy) and were correlated with the histopathological analysis. Results: We have found a statistically significant association between the presence of calcifications, ill-defined borders and overall observer suspicion or impression (defined by well-known suspicious for malignancy ultrasonographic features, such as calcification, poorly defined margin, and a markedly hypoechoic solid nodule; and benign ultrasonographic features, such as predominantly cystic echogenic composition and the presence of a perinodular hypoechogenic halo) with follicular carcinoma. However, all those features have shown low sensitivities in the present study (30%, 30% and 50%, respectively). On the other hand, the absence of halo sign has shown a sensitivity of 100% and a negative predictive value (NPV) of 100% in our study. Conclusions: The presence of calcifications, ill-defined borders and the overall impression or suspicion of malignancy are associated with a higher risk for follicular carcinoma in Bethesda type IV thyroid nodules but their absence does not allow prediction of benignity in these nodules. Inversely, when a halo sign lesion is observed, benign follicular neoplasm should be considered. abstract_id: PUBMED:37694360 Comparison of the diagnostic performance of three ultrasound thyroid nodule risk stratification systems for follicular thyroid neoplasm: K-TIRADS, ACR-TIRADS and C-TIRADS. Objective: To explore the diagnostic performance of the currently used ultrasound-based thyroid nodule risk stratification systems (K-TIRADS, ACR-TIRADS, and C-TIRADS) in differentiating follicular thyroid adenoma (FTA) from follicular thyroid carcinoma (FTC). Methods: Clinical data and preoperative ultrasonographic images of 269 follicular thyroid neoplasms were retrospectively analyzed.
All of them were detected by Color Doppler ultrasound instruments equipped with high-frequency linear array probes (e.g. Toshiba Apoli500 with L5-14MHz; Philips IU22 with L5-12MHz; GE LOGIQ E9 with L9-12MHz and MyLab Class C with L9-14MHz). The diagnostic performance of three TIRADS classifications for differentiating FTA from FTC was evaluated by drawing the receiver operating characteristic (ROC) curves and calculating the cut-off values. Results: Of the 269 follicular neoplasms (mean size, 3.67±1.53 cm), 209 were FTAs (mean size, 3.56±1.38 cm) and 60 were FTCs (mean size, 4.07±1.93 cm). There were significant differences in ultrasound features such as margins, calcifications, and vascularity of thyroid nodules between the FTA and FTC groups (P < 0.05). According to the ROC curve comparison analysis, the diagnostic cut-off values of K-TIRADS, ACR-TIRADS, and C-TIRADS for identifying FTA and FTC were K-TR4, ACR-TR4, and C-TR4B, respectively, and the areas under the curves were 0.676, 0.728, and 0.719, respectively. The difference between ACR-TIRADS and K-TIRADS classification was statistically significant (P = 0.0241), whereas the differences between ACR-TIRADS and C-TIRADS classification and between K-TIRADS and C-TIRADS classification were not statistically significant (P > 0.05). Conclusion: The three TIRADS classifications were not conducive to distinguishing FTA from FTC. It is necessary to develop a novel malignant risk stratification system specifically for the identification of follicular thyroid neoplasms. abstract_id: PUBMED:25750321 The impact of thyroid nodule size on the risk of malignancy in follicular neoplasms. Background/aim: Studies have shown that the risk of malignancy in follicular neoplasms is as high as 30%. Often, surgery is recommended for such lesions, not for therapeutic purposes but as a diagnostic method, leading to increased hospital costs and related morbidities. Recent studies have suggested that tumor size predicts malignant potential of these follicular neoplasms. Our aim was to identify the impact of nodule size on the risk of malignancy for such lesions. Patients And Methods: A retrospective medical chart review was undertaken for patients who underwent thyroid surgery at a single academic North American Institution. A total of 120 follicular lesions, follicular neoplasms (Bethesda category IV) or follicular lesions of undetermined significance (Bethesda category III) in 110 patients undergoing thyroid surgery were evaluated. Nodule size as measured by ultrasound, fine-needle aspiration cytological results, and final histopathology reports were reviewed. Analysis was performed by classification according to nodule size: <3 cm, ≥3 cm, <4 cm and ≥4 cm. Results: Out of the 120 nodules, 48 (40%) were reported to be malignant on final pathological examination. The malignancy rate in nodules <3 cm and ≥3 cm was 41% and 37.8%, respectively (p=0.84). When 4 cm was used as the cut-off, the rate in nodules <4 cm and ≥4 cm was 40.6% and 37.5%, respectively (p=0.82). Conclusion: Increased thyroid nodule size does not increase the malignancy rate for follicular neoplasms. Hence, we recommend against routine total thyroidectomy for patients with follicular neoplasms based on the size criteria. abstract_id: PUBMED:36338713 Diagnostic performance of six ultrasound-based risk stratification systems in thyroid follicular neoplasm: A retrospective multi-center study.
This study aimed to compare the diagnostic performances of six commonly used ultrasound-based risk stratification systems for distinguishing follicular thyroid adenoma (FTA) from follicular thyroid carcinoma (FTC), including the American Thyroid Association Sonographic Pattern System (ATASPS), ultrasound classification systems proposed by American Association of Clinical Endocrinologists, American College of Endocrinology, and Associazione Medici Endocrinology (AACE/ACE/AME), Korean thyroid imaging reporting and data system (K-TIRADS), European Thyroid Association for the imaging reporting and data system (EU-TIRADS), American College of Radiology for the imaging reporting and data system (ACR-TIRADS), and 2020 Chinese Guidelines for Ultrasound Malignancy Risk Stratification of Thyroid Nodules (C-TIRADS). A total of 225 FTA or FTC patients were retrospectively analyzed, involving 251 thyroid nodules diagnosed by postoperative pathological examinations in three centers from January 2013 to October 2021. The diagnostic performances of six ultrasound-based risk stratification systems for distinguishing FTA from FTC were assessed by plotting the receiver operating characteristic (ROC) curves and compared at different cut-off values. A total of 205 (81.67%) cases of FTA and 46 (18.33%) cases of FTC were involved in the present study. Compared with those of FTA, FTC presented more typical ultrasound features of solid component, hypoechoic, irregular margin and sonographic halo (all P<0.001). There were no significant differences in ultrasound features of calcification, shape and comet-tail artifacts between cases of FTA and FTC. There was a significant difference in the category of thyroid nodules assessed by the six ultrasound-based risk stratification systems (P<0.001). The areas under the curve (AUCs) of ATASPS, AACE/ACE/AME, K-TIRADS, EU-TIRADS, ACR-TIRADS and C-TIRADS in distinguishing FTA from FTC were 0.645, 0.729, 0.766, 0.635, 0.783 and 0.798, respectively. Our study demonstrated that all the six ultrasound-based risk stratification systems present potential in the differential diagnosis of FTA and FTC. Specifically, C-TIRADS exerts the best diagnostic performance among the Chinese patients. ATASPS possesses a high sensitivity, while K-TIRADS possesses a high specificity in distinguishing FTA from FTC. abstract_id: PUBMED:17201796 Combined clinical, thyroid ultrasound and cytological features help to predict thyroid malignancy in follicular and Hürthle cell thyroid lesions: results from a series of 505 consecutive patients. Objective: The cytological patterns of follicular and Hürthle cell nodules are included among the indeterminate results of fine-needle aspiration cytology, because distinction between benign and malignant lesion can only be made on histological criteria. The diagnostic value of atypia at cytology, clinical parameters and echographic patterns were examined to establish the risk of malignancy in 505 patients with follicular and Hürthle cell thyroid nodules at cytology. Design And Patients: The study included 505 consecutive patients who had undergone thyroidectomy from the period 2002-2005. Results: Histological diagnosis of malignancy was carried out in 125 of 505 (25%) patients, the follicular variant of papillary carcinoma being the most frequent histotype. Only atypia at cytology (P < 0.0001) and spot microcalcifications at ultrasound (P = 0.009) were predictive of malignancy.
Male gender, normal thyroid volume, single nodularity, nodule hypoechogenicity, size and blurred margins were associated with malignancy, although not significantly. An arbitrary clinical score allowed the identification of patients with high (41%, 110 patients) and low (16%, 242 patients) risk of malignancy. Combining the clinical score with the presence of atypia at cytology, we could identify 30 patients (6%) in whom the risk of malignancy was as high as 63%. Conclusions: Twenty-five per cent of patients with a cytological result of follicular and Hürthle cell thyroid lesion had a final diagnosis of malignancy. Only atypia at cytology and spot microcalcifications at thyroid ultrasound were significantly associated with malignancy. Other clinical parameters and thyroid ultrasound patterns can be used to set up a clinical score useful for predicting the individual risk of malignancy before surgery. Answer: Yes, the presence of additional thyroid nodules on ultrasound is associated with a lower risk of malignancy in patients with a follicular neoplasm of the thyroid. A study found that the risk of malignancy was lower in patients with greater than or equal to 1 additional nodule compared to those with a solitary nodule (16.6% vs 28.0%, P = .02).
Instruction: Laughing at the looking glass: does humor style serve as an interpersonal signal? Abstracts: abstract_id: PUBMED:23531806 Laughing at the looking glass: does humor style serve as an interpersonal signal? Objective: The provision of information appears to be an important feature of humor. The present studies examined whether humor serves as an interpersonal signal such that an individual's style of humor is associated with how the individual is perceived by others. Method: We examined this issue across two studies. In Study 1, undergraduate participants (257 targets) were rated more positively by their friends and family members (1194 perceivers) when they possessed more benign humor styles. In Study 2, 1190 community participants rated the romantic desirability of targets ostensibly possessing different humor styles. Results: Across both studies, our results were consistent with the possibility that humor serves as a signal. More specifically, individuals with benign humor styles (affiliative and self-enhancing humor styles) were evaluated more positively than those targets with injurious humor styles (aggressive and self-defeating humor styles). Conclusion: These findings are discussed in terms of the role that humor may play in interpersonal perception and relationships. abstract_id: PUBMED:35667481 Finding humor in hormones: Oxytocin promotes laughing and smiling. Oxytocin (OT) is known for facilitating interpersonal interactions in favorable conditions. Humor, on the other hand, is an interpersonal phenomenon that reduces social distance. In this study, we investigated whether OT would increase laughing and smiling in a favorable environment. Therefore, participants were asked to view a brief video clip of a comedy television program in the presence of an ingroup member after administration of either OT or a placebo. Results demonstrated that participants in the OT condition exhibited more laughing and smiling behavior over the course of viewing the video compared to participants in the placebo condition. OT, however, neither affected self-reported affect nor perceived funniness of the clip. These findings indicate that OT, a hormone that can bolster attachment, also has the power to facilitate implicit social interactions by promoting visible and audible social signals, but does not affect the socially irrelevant inner cheerfulness. abstract_id: PUBMED:38192393 Development and validation of the Chinese coaches' interpersonal style scale. Purpose: Coaches' behaviors and coaching styles play a critical role in influencing athletes' psychological experiences and performance. According to the self-determination theory (SDT), coaches' interpersonal behaviors are commonly categorized as autonomy-supportive and controlling. Due to less focus on the unique behaviors of Chinese coaches, this study incorporated coaches' parental care for athletes, referred to as paternalistic benevolence, in their interpersonal styles in the context of the Chinese culture. Methods: Exploratory factor analyses were used in studies 1 and 2 to find items associated with benevolent coaching behaviors and items to create the Chinese Coaches' Interpersonal Style Scale. Study 3 used the constructed scale, as well as the Subjective Vitality Scale and Athlete Burnout Questionnaire, with a sample of athletes to examine scale reliability. The 15-item Chinese Coaches' Interpersonal Style Scale contained three dimensions: benevolent, autonomy-supportive, and controlling coaching styles. 
Results: The findings showed that: (1) benevolent coaching behaviors held significant explanatory weight in the Chinese cultural context; (2) controlling and autonomy-supportive coaching styles were culturally congruent among both Eastern and Western athletes; and (3) benevolent and autonomy-supportive coaching behaviors positively impacted athletes, whereas controlling coaching behaviors had a negative impact. Conclusion: The measure showed strong validity and reliability, making it useful for future practice and research on the interpersonal style of Chinese coaches. abstract_id: PUBMED:37583608 The influence of parenting style in childhood on adult depressed patients' interpersonal relationships in the period of youth. Objective: The objective of this study was to explore the mediating effect of adolescent self and courage on the relationship between parenting style in childhood and adult depressed patients' interpersonal relationships in the period of youth. Methods: The study analyzed data from 651 depressed individuals using the Wang Weidong memory-tracing personality developmental inventory (WMPI) from the psychology department of Guang'anmen Hospital. Results: The results of the study show a significant positive correlation between parenting style in childhood, adolescent self, courage, and adult depressed patients' interpersonal relationships in the period of youth. Parenting style in childhood has a direct positive predictive effect on adult depressed patients' interpersonal relationships in the period of youth. It also has an indirect effect on interpersonal relationships in the period of youth through three indirect pathways: the independent mediating effect of adolescent self, the independent mediating effect of adolescent courage, and the chain mediating effect of adolescent self and courage. Conclusion: The findings of this study suggest that parenting style in childhood plays an important role in shaping adult depressed patients' interpersonal relationships in the period of youth. The relationship between parenting style in childhood and interpersonal relationships in the period of youth is influenced by the independent mediating effect of adolescent self and courage, as well as the chain mediating effect of adolescent self and courage. These findings have implications for the development of interventions and programs aimed at improving the mental health and well-being of depressed patients. abstract_id: PUBMED:28690563 Modeling Infant i's Look on Trial t: Race-Face Preference Depends on i's Looking Style. When employing between-infant designs young infants' looking style is related to their development: Short looking (SL) infants are cognitively accelerated over their long looking (LL) peers. In fact, looking style is a within-infant variable, and depends on infant i's look distribution over trials. For the paired array setting, a model is provided which specifies the probability, π i ∈ [0, 1], that i is SL. The model is employed in a face preference study; 74 Caucasian infants were longitudinally assessed at 3, 6, and 9 months. Each i viewed same race (Caucasian) vs. other race (African) faces. Infants become SL with development, but there are huge individual differences in rate of change over age. Three month LL infants, [Formula: see text], preferred other race faces. SL infants, [Formula: see text], preferring same race faces at 3, and other race faces at 6 and 9 months. Looking style changes precede and may control changes in face preference. 
Ignoring looking style can be misleading: Without considering looking style, 3 month infants show no face preference. abstract_id: PUBMED:38293521 Relationship between parenting style and internet addiction: Interpersonal relationship problem as a mediator and gender as a moderator. Purpose: This study assessed the moderating effect of gender on the indirect effects of positive and negative parenting styles on Internet addiction through interpersonal relationship problem. Methods: A cross-sectional survey of randomly sampled 1194 college students recruited voluntarily from three universities in China was conducted to assess the variables of positive and negative parenting styles, interpersonal relationship problem, and Internet addiction. Results: Positive parenting style, such as emotional warmth, was a protective factor for the development of Internet addiction, whereas negative parenting style, such as rejection and overprotection, was a potential risk factor for Internet addiction. Furthermore, interpersonal relationship problem completely mediated the association between positive parenting style and Internet addiction but partially mediated the relationship between negative parenting style and Internet addiction. Finally, gender moderated the indirect effect of parenting style on Internet addiction through interpersonal relationship problem. Conclusion: The correlation between positive parenting style and interpersonal relationship problem was considerably weaker among females, whereas the association between interpersonal relationship problem and Internet addiction was much stronger among females. For the prevention and intervention of Internet addiction, it is important to increase positive parenting style for males while enhancing interpersonal skills training for females. Further longitudinal studies should discuss the effects of paternal and maternal parenting styles on Internet addiction. abstract_id: PUBMED:33446073 Does coaches' satisfaction with the team determine their interpersonal style? The mediating role of basic psychological needs. The purpose of the present study was to examine how coaches' satisfaction with the team could be related to their reported interpersonal style towards young athletes, and to analyze the mediating role of basic psychological needs (i.e. need satisfaction and need frustration) in this relationship. Participants were 352 coaches (16-67 years old; Mage = 32.88, SD = 11.14) from 48 clubs, who had between 1 and 52 years of training experience (M = 23.23, SD = 15.02). Structural equation modelling (SEM) was employed to test the relationships between variables. Results showed that satisfaction with the team is positively related to coaches' need satisfaction, and negatively to their need frustration. Need satisfaction positively predicted coaches' need-supportive style, and need frustration predicted their need-thwarting style. Regarding indirect effects, need satisfaction positively mediated the relationship between coaches' satisfaction with the team and their need-supportive style, and need frustration negatively mediated the relationship between coaches' satisfaction with the team and their need-thwarting style. 
These findings are a first step to highlight satisfaction with the team as an antecedent of coaches' self-reported need-supportive and need-thwarting behaviours towards athletes, and the mediating role of coaches' psychological needs (need satisfaction and need frustration) in this relationship. Highlights: We examined the satisfaction of the team as antecedent of coaches' interpersonal style. We tested the mediating role of coaches' psychological needs in this relationship. Satisfaction with the team was positively related to need-supportive style. Satisfaction with the team was negatively associated with need-thwarting behaviors. Coaches' psychological needs mediated the relationship between team satisfaction and their interpersonal style. abstract_id: PUBMED:26283661 Teachers' interpersonal style and its relationship to emotions, causal attributions, and type of challenging behaviors displayed by students with intellectual disabilities. Teachers' interpersonal style is a new field of research in the study of students with intellectual disabilities and challenging behaviors in school context. In the present study, we investigate emotions and causal attributions of three basic types of challenging behaviors: aggression, stereotypy, and self-injury, in relation to teachers' interpersonal style. One hundred and seventy seven Greek general and special educator teachers participated in the study by completing a three-scaled questionnaire. Statistical analysis revealed that the type of challenging behaviors affected causal attributions. According to regression analysis, emotions, teaching experience, expertise in special education, and gender explained a significant amount of variance in interpersonal style. Emotions were found to have a mediating role in the relationship between causal attributions and interpersonal style of "willingness to support," when challenging behaviors were attributed to stable causes or causes under the control of the individual with intellectual disabilities. abstract_id: PUBMED:2213644 Interpersonal style and anxiety. Agoraphobic anxiety has been linked to traditional feminine sex role attitudes and related dyadic interactions. This research correlated measures of panic experience and agoraphobic anxiety with interpersonal style in a college student population. Significant positive correlations were observed between anxiety scores and the following interpersonal styles: submissive/deferent, self-effacing/obedient, mistrusting/cold, submissive, inhibited, and unassured. Results lend support to clinical reports that some individuals may develop agoraphobic symptoms as a result of the interaction of interpersonal style with relationship patterns. abstract_id: PUBMED:38007524 Effect of coaches' interpersonal style on young athletes' individual resilience and team adherence intention: a season-long investigation. Background: In the sports context, coaches must be able to improve their players physically, psychologically, and socially. Hence, a fundamental part of this process is the athlete's individual resilience (IR). Methods: Three hundred and fifteen youth team-sport players (boys: n = 283; Mage = 16.02, SD = 0.56; and girls: n = 32; Mage = 15.92, SD = 0.62) completed the measures of coach's interpersonal style, individual resilience, perceived performance, and team adherence intention (intention to remain on the same team the following year) twice (Time 1: mid-season; Time 2: end-season). Structural equation modeling was used to test the relationships between variables.
Results: The results showed that coach support was positively related to IR (p < 0.001) and, in turn, IR to individual (p < 0.01) and team performance (p < 0.05) at Time 1, and to individual performance (p < 0.001) and team adherence intention at Time 2 (p < 0.01). In addition, team performance at Time 2 was positively related to team adherence intention (p < 0.001). Finally, a mediating effect of IR was observed between interpersonal coaching style, individual and team performance, and team adherence intention. Conclusions: These results show the importance of a supportive interpersonal coaching style to foster athletes' levels of resilience, which could have positive consequences in performance (individual and team) and team adherence intention. Answer: Yes, humor style does serve as an interpersonal signal. According to the findings from the studies, an individual's style of humor is associated with how they are perceived by others (PUBMED:23531806). Specifically, individuals with benign humor styles, such as affiliative and self-enhancing humor, are evaluated more positively than those with injurious humor styles, like aggressive and self-defeating humor. This suggests that humor can play a significant role in interpersonal perception and relationships, indicating that the way a person uses humor can signal certain characteristics to others and potentially influence the dynamics of their social interactions.
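Several of the abstracts above report mediating or indirect effects estimated with regression or structural equation models (e.g., PUBMED:37583608, PUBMED:38293521, PUBMED:33446073, PUBMED:38007524). As a minimal sketch of the product-of-coefficients logic behind such claims — using simulated data and hypothetical variable names, not data from any cited study — the indirect effect can be read off two ordinary least-squares fits in Python:

import numpy as np

# Simulated, hypothetical data: coach support (x), athlete resilience (m, the mediator), performance (y)
rng = np.random.default_rng(0)
x = rng.normal(size=200)
m = 0.5 * x + rng.normal(scale=0.8, size=200)
y = 0.4 * m + 0.1 * x + rng.normal(scale=0.8, size=200)

def ols(columns, outcome):
    # Ordinary least squares with an intercept; returns the fitted coefficient vector.
    design = np.column_stack([np.ones(len(outcome))] + list(columns))
    return np.linalg.lstsq(design, outcome, rcond=None)[0]

a = ols([x], m)[1]           # path from x to the mediator
b = ols([x, m], y)[2]        # path from the mediator to y, controlling for x
c_prime = ols([x, m], y)[1]  # direct effect of x on y
print(f"indirect effect (a*b) = {a * b:.2f}, direct effect = {c_prime:.2f}")

The cited studies use structural equation modelling with bootstrapped or model-based confidence intervals rather than this bare two-step regression, so the sketch only illustrates how a total effect decomposes into direct and indirect components.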
Instruction: Using a new HPV detection system in epidemiological research: change of views on cervical dyskaryosis? Abstracts: abstract_id: PUBMED:11574131 Using a new HPV detection system in epidemiological research: change of views on cervical dyskaryosis? Unlabelled: The prevalence of human papillomavirus (HPV) rises with increasing histological severity of neoplasia, more cigarettes smoked per day and higher lifetime number of sexual partners in women with cervical dyskaryosis. Recently, the highly sensitive SPF10 primers and Inno-LiPA (line probe assay) HPV prototype research assay became available for the detection and typing of HPV. Background: using this system, we challenged the previously reported findings. Study Design: the study group comprised 304 women referred because of abnormal pap smears in whom a histological diagnosis was made. Data on the lifetime number of sexual partners and smoking behaviour were obtained by questionnaire. HPV analysis was performed on cervical scrapes obtained at the enrollment visit. Results: oncogenic HPV was found in 288 (95%) women. A total of 86 (30%) out of these 288 women disclosed multiple types. HPV 16 occurred significantly less often in multiple infections than was expected on the basis of chance alone. The grade of neoplasia was significantly associated with the presence of oncogenic HPV, and this association depended on the presence of HPV type 16. No association was found between grade of neoplasia and the presence of multiple HPV types. Neither the lifetime number of sexual partners nor smoking were associated with oncogenic HPV, the five most frequent HPV types separately or the presence of multiple types. Conclusion: we conclude that the association between the detection of HPV and the epidemiological risk factors, as found with the GP5/6 PCR in the past, could not be confirmed when using SPF10 PCR primers and LiPA HPV genotyping. We suggest that the number of sexual partners and smoking may be determinants of high HPV viral load rather than determinants of the presence of HPV per se. abstract_id: PUBMED:37451412 Detection of HPV in urine for cervical cancer screening: Feasibility of an assay system. The detection of urine HPV is considered a promising alternative to increase the screening coverage of cervical cancer. However, validated assays for urine HPV are still scarce. We described a novel assay system for the urine-based detection of HPV in the framework of HPV screening. This system consisted of Automate Nimbus extraction of DNA and Anyplex™ II HPV HR Detection PCR of HPV DNA. We validated this system by spiking HPV-infected cervical cancer cell line HeLa cells into normal urine and compared the preliminary results of cervical samples and urine samples. We found that this system could detect as few as 5 HeLa cells in a normal urine model. Some discordances of HPV results between cervical samples and urine samples were observed. We concluded that this assay system could be applied for the detection of HPV in urine. A large-scale study is necessary to evaluate the clinical significance of this assay system. abstract_id: PUBMED:32318115 Cervical, anal and oral HPV detection and HPV type concordance among women referred for colposcopy. Background: Infection with human papillomaviruses (HPVs) can cause benign and malignant tumours in the anogenital tract and the oropharynx both in men and women.
The aim of the presented study was to investigate cervical, anal, and oral HPV-detection rates among women referred to colposcopy for abnormal Cervical Cancer (CaCx) screening results and assess the concordance of HPV-types among these anatomical sites. Methods: Women referred to colposcopy at a single centre due to abnormal cytology, conducted for CaCx screening, were subjected to cervical Liquid-based Cytology (LBC) smear testing, anal and oral sampling. Routine colposcopy consisted of multiple biopsies and/or Endocervical Curettage (ECC). HPV-detection was performed by PCR genotyping in all three anatomical sites. In high-risk (hr) HPV-DNA positive samples either from anal canal or oral cavity, anal LBC cytology and anoscopy were performed, or oral cavity examination respectively. Descriptive statistics was used for the analysis of HPV-detection rates and phi-coefficient for the determination of HPV-positivity concordance between the anatomical sites. Results: Out of 118 referred women, hrHPV-DNA was detected in 65 (55.1%), 64 (54.2%) and 3 (2.5%) at cervix, anal canal and oral cavity respectively while low-risk HPV-DNA was detected in 14 (11.9%) and 11 (9.3%) at cervix and anal canal respectively. The phi-coefficient for cervix/anal canal was 0.392 for HPV16, 0.658 for HPV31, 0.758 for HPV33, -0.12 for HPV45, 0.415 for HPV52 and 0.473 for HPV58. All values were statistically significant (p < 0.001). Conclusions: The results suggest that most HPV-types, high-risk and low-risk, detected in the cervix of women with prevalent cervical dysplasia, correlate with the ones detected in their anal canal. This particularly applies for the HPV-types included in the nonavalent HPV-vaccine (HPVs 6/11/16/18/31/33/45/52/58). abstract_id: PUBMED:15754998 Human papillomavirus infection in relation to mild dyskaryosis in conventional cervical cytology. Purpose Of Investigation: To establish the prevalence and distribution of high-risk human papillomavirus (HPV) genotypes in Slovene women with repeat mild dyskaryosis, and to evaluate three molecular methods for the detection of HPV that could be used as a complementary method to cervical cytology. Methods: In this prospective study 148 women with three subsequent cervical cytologic tests within two years showing mild dyskaryosis were enrolled. HPV infection was determined using three molecular tests: Hybrid Capture II and two variants of polymerase chain reaction (PCR-PGMY11/PGMY09 and PCR-CPI/CPIIG). Results: HPV was detected in 17 of the 45 women aged ≤30 years and in 21 of the 103 women aged >30 years (37.8% vs 20.4%, p = 0.04). The most common genotype was HPV 16 detected in eight (21.1%) women, the next were HPV 53 and HPV 51, each detected in five (13.2%) women. The three molecular methods matched in 92.9%. Conclusion: Low prevalence of HPV infections indicates that cervical screening programmes in Slovenia are overburdened with mild dyskaryosis. Repeat cytology is not reliable; HPV testing might be useful as a complementary method. abstract_id: PUBMED:24235325 HPV testing in prevention of cervical cancer: practices and current trends. Robust evidence supports new strategies for prevention of cervical cancer based on the detection of persistent Human papillomavirus (HPV) infection, the causative agent of the disease. New HPV infection is usually benign and transient, while persistent infection with one of the high-risk HPV types explains almost all of these cancers.
In fact, the detection of one of the 12 oncogenic HPV types increases the sensitivity of screening and predicts, sooner than cytology, the risk of precancerous lesions, i.e. high-grade cervical intraepithelial neoplasia (HG CIN). A negative HPV test gives immediate reassurance, close to 100%, of the absence of disease at risk (versus less than 60% for cytology) and almost guarantees the absence of HG CIN over a prolonged period, allowing the screening interval to be safely lengthened to 5 years. Pooled HPV-based screening tests decrease the specificity of screening and increase the number of colposcopies. New strategies can significantly improve the specificity of HPV screening without a significant impact on the sensitivity, including the exclusion of women under 30 years and the use of HPV DNA genotyping tests that identify the highest-risk types, HPV 16 and 18. HPV testing alone can be used as a screening tool in women aged 30 years and over, with cytology triage or p16 immunocytochemical staining of cytology slides from HPV-positive women. Co-testing (cytology + HPV test) was adopted in the USA as a standard screening option. abstract_id: PUBMED:27060653 Clinical sensitivity of HPV assays for the detection of high grade cervical disease in cervical samples treated with glacial acetic acid. Background: Lysis of bloody liquid based cytology (LBC) specimens with glacial acetic acid (GAA) is performed to aid cytological interpretation. However, the influence of GAA treatment on HPV detection is not fully understood and in studies designed to assess this, few cases of high-grade disease have been included. Objectives: To assess the sensitivity of HPV molecular tests for the detection of high grade cervical disease in GAA treated samples. Study Design: A total of 207 specimens associated with high grade dyskaryosis and treated with GAA were collated prospectively. Overall, 140 specimens had underlying CIN2+, including 88 CIN3. All specimens were tested with the Abbott RealTime High Risk HPV test (rtHPV) and the Qiagen Hybrid Capture 2 High Risk HPV DNA test (HC2). Specimens associated with a CIN2+ that were negative by either assay were genotyped. Results: The sensitivity of rtHPV for CIN2+ and CIN3+ was 92.8% (87.2, 96.5) and 94.3% (87.2, 98.1) respectively. Sensitivity of the HC2 for CIN2+ and CIN3+ was 97.2% (92.8, 99.2) and 96.6% (90.3, 99.2) respectively. The sensitivity of both assays in GAA treated specimens was thus consistent with the level required for clinical application. HPV negative, CIN2+ specimens were generally attributable to HPV types outside the explicit analytical range of the assays. Conclusions: The data indicate that GAA treatment has little impact on the detection of CIN2+ by HPV testing in LBC specimens. abstract_id: PUBMED:30224201 Potential for HPV vaccination and primary HPV screening to reduce cervical cancer disparities: Example from New Zealand. Background: Cervical cancer rates are over twice as high, and screening coverage is lower, in Māori women compared to other women in New Zealand, whereas uptake of HPV vaccine is higher in Maori females.
We aimed to assess the impact of HPV vaccination and the proposed transition to 5-yearly primary HPV screening in Māori and other women in New Zealand, at current participation levels; and additionally to investigate which improvements to participation in Māori females (in vaccination, screening, or surveillance for screening-defined higher-risk women) would have the greatest impact on cervical cancer incidence/mortality. Methods: An established model of HPV vaccination and cervical screening in New Zealand was adapted to fit observed ethnicity-specific data. Ethnicity-specific models were used to estimate the long-term impact of vaccination and screening (vaccination coverage 63% vs 47%; five-year screening coverage 68% vs 81% in Maori vs European/Other women, respectively). Results: Shifting from cytology to HPV-based screening is predicted to reduce cervical cancer incidence by 17% (14%) in Maori (European/Other) women, respectively. The corresponding reductions due to vaccination and HPV-based screening combined were 58% (44%), but at current participation levels long-term incidence would remain almost twice as high in Māori women (6.1/100,000 compared to 3.1/100,000 in European/Other women). Among strategies we examined, the greatest impact came from high vaccine coverage and achieving higher attendance by Māori women under surveillance for screen-detected abnormalities. Conclusion: Relative reductions in cervical cancer due to vaccination and HPV-based screening are predicted to be greater in Maori than in European/Other women. While these interventions have the potential to substantially reduce between-group differences, cervical cancer incidence would remain higher in Maori women. These findings highlight the importance of multiple approaches and the potential influence of factors beyond HPV prevention. abstract_id: PUBMED:30196012 High-risk HPV detection and associated cervical lesions in a population of French menopausal women. Background: With population ageing, post-menopausal women represent a new group to be considered in cervical cancer screening strategies, including the significance of High Risk (HR)-HPV detection. Objectives: A retrospective analysis was conducted in a cohort of 406 menopausal women attending routine gynaecological consultation at the Hospital of Montpellier (France). Study Design: All women benefited from a cervical smear and HR-HPV detection using Hybrid Capture 2 (HC2) test. The prevalence of cytological abnormalities, HR-HPV detection and risk factors associated with HR-HPV detection were analyzed. Evolution of both tests was evaluated in a sub-group of women with adequate follow-up. Results: Five women (1.2%) had an abnormal cervical smear at baseline. HR-HPV was detected in 40 women (9.9%), including 36 women with normal cytology (9%). Risk factors associated with HR-HPV detection at enrolment were a previous history of Cervical Intraepithelial Neoplasia and a high socio-economic level, but not hormone replacement therapy. When cytology and HR-HPV detection were negative at enrolment, both remained negative for 95% (230/241) of women during follow-up (median duration of follow-up: 60 months). HR-HPV persistence was observed for 55% (18/33) of women with normal cytology and positive HR-HPV test. Finally, all women with a final diagnosis of high-grade (CIN2+) cervical lesion (N = 7) had a positive HR-HPV test with or without abnormal cytology. Conclusions: HR-HPV was detected in 9.9% of menopausal women.
HR-HPV detection was a better predictor of CIN2+ lesions than cytology in this population. Women with previous CIN history should benefit from HR-HPV testing and need long term follow-up. abstract_id: PUBMED:36054732 Clinical performance of primary HPV screening cut-off for colposcopy referrals in HPV-vaccinated cohort: Observational study. Objective: To understand the effect of changing from cytology-based to primary HPV screening on the positive predictive value (PPV) of colposcopy referrals for cervical intraepithelial neoplasia (CIN) in a cohort offered HPV vaccination. Design: Retrospective pre/post observational cohort study. Setting: Scotland. Population Or Sample: 2193 women referred to colposcopy between September 2019 and February 2020 from cytology-based screening and between September 2020 and February 2021 from primary high-risk HPV (hrHPV) screening. Methods: Calculating positive predictive values (PPVs) for two cohorts of women; one having liquid-based cytology screening and the other, the subsequent hrHPV cervical screening as a pre/post observational study. Main Outcome Measures: Positive predictive values of LBC and hrHPV cut-offs for colposcopy referral for CIN at colposcopy. Results: Three papers fitted our criteria; these reported results only for cytology-based screening. The PPV was lower for women in HPV-vaccinated cohorts indicating a lower prevalence of disease. Vaccination under the age of 17 had the lowest PPV reported. Scottish colposcopy data concerning hrHPV and cytology showed a non-significant difference between PPV (17.5%, 95% CI 14.3-20.7, and 20.6, 95% CI 16.7-24.5, respectively) for referrals with a cut-off of low grade dyskaryosis (LGD); both met the standard set of 8-25%. The hrHPV PPV (66.7, 95% CI 56.8-76.6) was comparable to cytology (64.1, 95% CI 55.8-72.4) for referrals with a cut-off of high grade dyskaryosis (HGD) but neither met the standard set of 77-92%. Conclusions: Current literature only provides PPVs for LBC and, overall, the vaccinated cohort had lower PPVs. Only LG dyskaryosis met PHE criteria. The PPV for HPV-vaccinated women undergoing either LBC or HR-HPV screening were not statistically different. However, similar to papers in the current literature, HG dyskaryosis (HGD) PPVs of both techniques did not meet the PHE threshold of 76.6-91.6% outlined in the cervical standards data report. abstract_id: PUBMED:34036681 The application of PAX1 methylation detection and HPV E6/E7 mRNA detection in cervical cancer screening. Aim: We aimed to explore the application of PAX1 methylation and human papillomavirus (HPV) E6/E7 mRNA detection in cervical cancer screening and to compare the efficacy with high-risk (HR)-HPV detection. Patients And Methods: The cervical exfoliative cytology samples of 337 patients were collected, including 70 cases of cervical inflammation, 72 cases of low-grade squamous intraepithelial lesions, 97 cases of high-grade squamous intraepithelial lesions, and 98 cases of cervical carcinoma. The PAX1 gene methylation (PAX1) status was detected by multiple quantitative PCR, HPV E6/E7 mRNA (E6/E7) was detected by QuantiVirus detection, and HR-HPV (HPV) was detected by the Cobas 4800 detection system. The sensitivities, specificities, and accuracies were validated in the testing set. Results: The sensitivities of the HPV, HPV E6/E7, and PAX1 testing were 89.23%, 84.10%, and 86.67%, respectively, which all maintained a high level. 
In contrast, the specificities of the HPV, E6/E7, and PAX1 testing were only 19.10%, 37.32%, and 97.18% (in pairwise comparisons, p = 0.000). The AUC of PAX1 (0.919) was significantly larger than that of HPV (0.541) and E6/E7 detection (0.607) (p &lt; 0.0001). In addition, the AUC areas of all combined parallel testing were lower than that of single PAX1 test (p &lt; 0.05). Conclusion: The diagnostic efficacy of E6/E7 detection and PAX1 detection was better than that of HPV detection, especially for PAX1 detection. Answer: The new HPV detection system mentioned in the abstract from PUBMED:11574131 has led to a change in views on cervical dyskaryosis. The study utilized highly sensitive SPF10 primers and Inno-LiPA HPV prototype research assay for the detection and typing of HPV in a group of 304 women with abnormal pap smears. The results showed that oncogenic HPV was found in 95% of the women, and the presence of oncogenic HPV was significantly associated with the grade of neoplasia, particularly when HPV type 16 was present. However, no association was found between the grade of neoplasia and the presence of multiple HPV types. Interestingly, the study did not find an association between oncogenic HPV and the lifetime number of sexual partners or smoking, which contrasts with previous findings using the GP5/6 PCR system. The authors suggest that the number of sexual partners and smoking may be determinants of high HPV viral load rather than determinants of the presence of HPV per se. This new perspective challenges the previously held belief that the number of sexual partners and smoking are directly associated with the presence of HPV and cervical dyskaryosis. Instead, it implies that these factors might influence the viral load, which could have implications for the progression and severity of cervical neoplasia. The findings from this study using the new detection system could lead to a reevaluation of risk factors and their role in HPV-related cervical disease, potentially impacting future epidemiological research and screening strategies.
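Several abstracts in this record summarise assay performance as sensitivity, specificity, PPV and NPV (e.g., PUBMED:27060653, PUBMED:36054732, PUBMED:34036681). As a minimal reference sketch with made-up counts — none of the figures below come from the cited studies — these measures all derive from one 2x2 table of test result against disease status, computed here in Python:

def screening_metrics(tp, fp, fn, tn):
    # Basic diagnostic accuracy measures from a 2x2 table of test result vs disease status.
    return {
        "sensitivity": tp / (tp + fn),  # diseased cases the test catches
        "specificity": tn / (tn + fp),  # disease-free cases the test clears
        "ppv": tp / (tp + fp),          # test positives who truly have disease
        "npv": tn / (tn + fn),          # test negatives who are truly disease-free
    }

# Hypothetical counts for illustration only
print(screening_metrics(tp=90, fp=40, fn=10, tn=60))

Unlike sensitivity and specificity, PPV and NPV depend on how common the disease is among those tested, which is why the same assay can show a lower PPV in an HPV-vaccinated cohort with less underlying high-grade disease, as noted in PUBMED:36054732.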
Instruction: Could the Sasang constitution itself be a risk factor of abdominal obesity? Abstracts: abstract_id: PUBMED:23548105 Could the Sasang constitution itself be a risk factor of abdominal obesity? Background: Abdominal obesity (AO) is a medical condition in which excess body fat accumulates in the abdomen. It may cause adverse effects on health and result in reduced life expectancy or increased health problems. While various genetic approaches have explained the risks of AO in Western society, the Sasang constitution (SC) has been identified as a risk factor in Korean medicine. Different SC types are associated with different fat distribution, body shapes and susceptibility to diseases. We evaluated whether the SC type could be a risk for AO in a cross-sectional study among Koreans. Methods: In total, 2,528 subjects aged over 30 years were recruited from 23 medical clinics. We collected waist circumference (WC), weight, height, and some clinical information for AO from the subjects. A Chi-square test and a one-way ANOVA were performed according to SC type (p &lt; .05), while multiple logistic regression was used to produce odds ratios (ORs). Results: The rates of AO in Tae-eumin (TE), Soeumin (SE), and Soyangin (SY) types were 63.7%, 14.7%, and 32.8% in males and 84.8%, 41.7%, and 52.8% in females, respectively. The TE type was associated with increased AO prevalence compared with the SE and SY types in males (OR 1.79; 95% CI 1.02-3.15, p = 0.044 and OR 1.74; 95% CI 1.18-2.58, p = 0.006, respectively) and females (OR 1.51; 95% CI 1.03-2.23, p = 0.037 and OR 1.88; 95% CI 1.32-2.68, p &lt; 0.001, respectively) after adjusting for age, BMI, hypertension, diabetes mellitus, hypertriglyceridemia, and low HDL cholesterol. Conclusions: This study suggested that SC, particularly the TE type, might be significantly and independently associated with AO and could be considered a risk factor in predicting AO. abstract_id: PUBMED:25123680 The prevalence of general and abdominal obesity according to sasang constitution in Korea. Background: Obesity is an important risk factor for cardiovascular and metabolic diseases and could affect mortality rates. Body mass index (BMI) and waist circumference (WC) have been used to classify obesity, and waist-to-hip ratio (WHR) has recently emerged as a discriminator of cardiovascular disease. Sasang constitution (SC) is a kind of well-known traditional Korean medicine: Tae-eumin (TE), Soeumin (SE), Taeyangin (TY) and Soyangin (SY) carrying a different level of susceptibility to chronic diseases. We aimed to examine the prevalence in general and abdominal obesity (AO) using BMI, WC and WHR according to SC in the Korean population. Methods: A total of 3,348 subjects were recruited from 24 Korean medicine clinics. Obesity was divided into three categories: general obesity by BMI, abdominal obesity by waist circumference (WC AO) and abdominal obesity by waist-to-hip ratio (WHR AO). A Chi-square test was performed to compare prevalence, and logistic regression was conducted to generate odds ratios (ORs) according to SC (p &lt; .05). Results: The prevalence of general obesity was significantly higher in males than in females. 
The highest prevalence of general obesity, WC AO and WHR AO was observed in the TE type, followed by the SY and SE types in that order, for both males and females respectively. The TE type was highly associated with increased risk of general obesity (OR = 20.2, 95% CI: 12.4-32.9 in males and OR = 14.3, 95% CI: 10.1-20.2 in females), of WC AO (OR = 10.7, 95% CI: 7.2-15.9 in males and OR = 7.5, 95% CI: 5.8-9.6 in females), and of WHR AO (OR = 4.6, 95% CI: 3.3-6.4 in males and OR = 3.8, 95% CI: 2.9-4.9 in females) compared with the SE type. In addition, after controlling for age, social status and eating habits, the ORs were similar to the crude model according to gender and SC. Conclusions: This study shows that the prevalence of obesity varies according to SC in the Korean population. In particular, the TE type was highly associated with increased ORs for general obesity, WC AO and WHR AO in both genders. abstract_id: PUBMED:22454673 Prevalence of Metabolic Syndrome according to Sasang Constitutional Medicine in Korean Subjects. Metabolic syndrome (MS) is a complex disorder defined by a cluster of abdominal obesity, atherogenic dyslipidemia, hyperglycemia, and hypertension; the condition is recognized as a risk factor for diabetes and cardiovascular disease. This study assessed the effects of the Sasang constitution group (SCG) on the risk of MS in Korean subjects. We have analyzed 1,617 outpatients of Korean oriental medicine hospitals who were classified into three SCGs, So-Yang, So-Eum, and Tae-Eum. Significant differences were noted in the prevalence of MS and the frequencies of all MS risk factors among the three SCGs. The odds ratios for MS as determined via multiple logistic regression analysis were 2.004 for So-Yang and 4.521 for Tae-Eum compared with So-Eum. These results indicate that SCG may function as a significant risk factor of MS; comprehensive knowledge of Sasang constitutional medicine may prove helpful in predicting susceptibility and developing preventive care techniques for MS. abstract_id: PUBMED:26914220 Sasang Constitution May Play a Key Role in Increasing the Number of Sub-Elements of Metabolic Syndrome. Background: Metabolic syndrome (MS), a representative cluster of chronic diseases, is defined by the presence of three or more of the following five elements: high blood glucose, high blood pressure, low high-density lipoprotein (HDL) cholesterol, high serum triglyceride levels, and abdominal obesity. Recently, innate factors have been continuously demonstrated as important risk factors for increasing the number of MS sub-elements. Sasang constitutional medicine (SCM) is a traditional Korean medicine in which each Sasang constitution (SC) type has a different susceptibility to pathology and diseases. The aim of this study is to determine whether the SC could be an independent risk factor for single and multiple MS sub-elements. Methods: Twenty-four Korean medical clinics joined the study, and 3334 participants aged 20-80 years were recruited. Clinical data related to MS and general characteristics were obtained. The chi-square test and a one-way analysis of variance were conducted, and the odds ratios (ORs) with 95% confidence intervals (CIs) were generated through multinomial logistic regression according to the SC. Results: The prevalence of single and multiple MS sub-elements was significantly different according to SC.
The ORs of the Tae-Eumin (TE) type were significantly high for abdominal obesity, diabetes mellitus, hypertriglyceridemia, low HDL cholesterol, and hypertension. The ORs for the So-Yangin type were also high in hypertriglyceridemia and low HDL cholesterol compared with the So-Eumin type, even after being adjusted for sex, age, body mass index, and eating habits. As the numbers of MS sub-elements increased, the ORs of the TE type also increased. Conclusions: This study showed that the SC types may be risk factors for not only single MS sub-elements but also multiple MS sub-elements and that the TE type's risk degree is associated with an increase in the number of MS sub-elements. abstract_id: PUBMED:7712364 Body compartment and subcutaneous adipose tissue distribution--risk factor patterns in obese subjects. The purpose of this study was to investigate whether upper body obesity and/or visceral obesity are related to cardiovascular risk factors among severely obese subjects, phenomena that have previously been reported in more heterogeneous body weight distributions. 2450 severely obese men and women aged 37 to 59 years, with a body mass index of 39 +/- 4.5 kg/m2 (mean +/- SD) were examined cross-sectionally. Eight cardiovascular risk factors were studied in relation to the following body composition indicators: four trunk and three limb circumferences, along with weight, height and sagittal trunk diameter. From the latter three measurements lean body mass (LBM, i.e., the non-adipose tissue mass) and the masses of subcutaneous and visceral adipose tissue were estimated by using sex-specific prediction equations previously calibrated by computed tomography. Two risk factor patterns could be distinguished: 1. One body compartment-risk factor pattern in which the subcutaneous adipose tissue (AT) mass and, in particular, the visceral AT mass were positively related to most risk factors while the lean body mass was negatively related to some risk factors. 2. One subcutaneous adipose tissue distribution-risk factor pattern in which the neck circumference was positively and the thigh circumference negatively related to several risk factors. It is concluded that lean body mass (LBM), visceral and subcutaneous adipose tissue masses as well as neck and thigh circumferences, used as indices of subcutaneous adipose tissue distribution, are independently related to cardiovascular risk factors in severely obese men and women. abstract_id: PUBMED:8017338 Body-fat distribution and changes in the atherogenic risk-factor profile in obese adolescent girls during weight reduction. We examined the effect of the pattern of body-fat distribution on the modification of atherogenic risk factors in obese adolescent girls during weight reduction. During the 6-wk program, which included a mixed diet of 4321 kJ/d and intensive physical exercise, the girls lost 8.5 +/- 2.4 kg and their waist-to-hip ratio (WHR) decreased from 0.86 +/- 0.05 to 0.81 +/- 0.05 (P < 0.01). Significant reductions were observed for total cholesterol, LDL cholesterol, uric acid, fasting insulin, and systolic and diastolic blood pressure. Girls with abdominal obesity (WHR > 0.88) had greater reductions in serum cholesterol, LDL cholesterol, and uric acid than did girls with gluteal-femoral obesity (WHR < 0.81). In a multivariate-regression analysis these differences could be partly explained by the greater weight loss of the girls with abdominal obesity.
These results suggest that during weight reduction girls with abdominal obesity exhibit more beneficial changes in the atherogenic-risk-factor profile than do girls with gluteal-femoral obesity, partly because of a greater weight loss. abstract_id: PUBMED:32509704 Effects of risk factor numbers on the development of the metabolic syndrome. This study was performed to identify the factors affecting the development of metabolic syndrome by comparing the numbers of risk factors of the syndrome and by identifying the factors influencing the development of metabolic syndrome. Two hundred forty-eight health screening examinees were included in the study (101 males, 147 females). The diagnostic distribution of metabolic syndrome risk factors showed that 35.1% of the subjects had abdominal obesity, 32.7% high blood pressure, 66.1% high insulin blood sugar, 43.1% high triglyceride lipidemia, and 7.3% low high-density lipoprotein lipidemia. No significant difference in the incidence of metabolic syndrome was found between genders. The most frequent number of risk factors was 1 in males (31.7%) and 2 in females (30.6%). Significant differences were found in age and body mass index (BMI) between the normal group with no risk factors and the metabolic syndrome group. There was a significant difference in BMI between the groups with 1 risk factor and 3 risk factors. BMI was significantly higher (5.282 times) compared to their counterpart (P<0.001). A significant difference was found in BMI between the group with 2 risk factors and the syndrome group with more than 3 risk factors, and the incidence was higher (4.094 times) in the overweight group than in their counterpart (P<0.001). abstract_id: PUBMED:14744751 Association between insulin-like growth factor-I: insulin-like growth factor-binding protein-1 ratio and metabolic and anthropometric factors in men and women. Several prospective observational studies have suggested that elevated circulating IGF-I levels are associated with an increased risk of cancer. These observations may provide a potential mechanism through which previously identified metabolic and anthropometric factors, such as obesity and elevated insulin and glucose levels, may operate. We therefore examined metabolic and anthropometric influences on circulating levels of insulin-like growth factor-I (IGF-I), insulin-like growth factor-binding protein-1 (IGFBP-1), and the IGF-I:IGFBP-1 ratio in a middle-aged population of 349 men and 492 women. IGF-I showed only modest inverse associations with indices of adiposity. However, we found that low IGFBP-1 levels and an increased IGF-I:IGFBP-1 ratio were strongly associated with increased levels of insulin and glucose in men and women. Body mass index was also positively related to the IGF-I:IGFBP-1 ratio in men (P < 0.001) and women (P < 0.001), independent of metabolic correlates of IGFBP-1 and IGF-I. Similarly, waist:hip ratio and waist circumference were also associated with an increased IGF-I:IGFBP-1 ratio and low circulating IGFBP-1 levels. These findings suggest that individuals with greater fat mass and upper body obesity may have elevated levels of bioavailable or free IGF-I, which could, in part, mediate the reported associations among metabolic and anthropometric factors and cancer risk.
Beyond the amount of adipose tissue a woman has, its distribution, particularly abdominally, may be a risk factor in breast cancer etiology. Body fat distribution is commonly measured by a waist-to-hip circumference ratio (WHR). We performed a meta-analysis to summarize the published literature on WHR and breast cancer risk. After assembling all published studies, we extracted mean WHRs for study participants and adjusted risk estimates comparing highest with lowest partition of WHR and calculated weighted mean differences in WHR between cases and noncases and summary risk estimates based on study design and menopausal status. The weighted mean difference was 0.016 [95% confidence interval (CI) = 0.005-0.028] for all studies combined. The summary risk estimates were 1.80 (95% CI = 1.29-2.50) for case-control studies and 1.27 (95% CI = 1.07-1.51) for cohort studies. By menopausal status, the summary risks were 1.79 (95% CI = 1.22-2.62) for premenopausal women and 1.50 (95% CI = 1.10-2.04) for postmenopausal women. For all studies combined, the summary risk was 1.62 (95% CI = 1.28-2.04). This meta-analysis indicates that a greater WHR is associated with increased risk of breast cancer and suggests that the avoidance of abdominal obesity may reduce risk of the disease. abstract_id: PUBMED:25374865 Cardiovascular Disease Risk Factor Profiling of Group C Employees in JIPMER, Puducherry. Background: A settings-based approach for health promotion includes conducting risk factor surveillance as one of its components. It was aimed to estimate the prevalence of CVD risk factors among group C employees of a tertiary care hospital in south India. Materials And Methods: A cross-sectional survey was conducted among 400 group C employees aged ≥20 years using the WHO "STEPwise approach to surveillance of non-communicable diseases" (STEPS) methodology. Standardized international protocols were used to measure behavioral risk factors (smoking, alcohol consumption, fruit and vegetable consumption, physical activity) and physical characteristics (weight, waist and hip circumferences, height, and blood pressure). Multivariate analysis was done to identify the factors that carry an independent risk of hypertension. Risk factor profiling of the staff was done using the WHO/ISH risk prediction chart to calculate the 10-year risk of fatal or non-fatal major cardiovascular events (myocardial infarction or stroke), according to age, gender, blood pressure, smoking status, and presence or absence of diabetes mellitus. Results: Mean age in years was 40.9 (±10.4), and men constituted 81.3% of the study population. Prevalence of major cardiovascular risk factors was as follows: current smokers 12.3% among men, regular alcohol intake 33.2% among men, overweight (≥23 kg/m2) 74.5%, central obesity 78.7%, hypertension 38.8%, and history of diabetes mellitus 13.2%. Age, gender, physical inactivity, obesity, and family history of hypertension were found to be independently associated with hypertension. Four percent of participants had a >10% risk of developing CVD in the next 10 years. Conclusion: The prevalence of CVD risk factors is high in the sample population. An employee wellness program should be started in the institute to combat the burden of cardiovascular diseases. Answer: Yes, the Sasang constitution (SC) itself could be a risk factor for abdominal obesity (AO). Studies have shown that different SC types are associated with varying prevalence rates of AO and other metabolic syndromes.
For instance, the Tae-eumin (TE) type has been consistently associated with increased prevalence and odds ratios for AO compared to the Soeumin (SE) and Soyangin (SY) types in both males and females (PUBMED:23548105; PUBMED:25123680). The TE type was found to have a significantly higher risk of general obesity, WC AO (waist circumference abdominal obesity), and WHR AO (waist-to-hip ratio abdominal obesity) (PUBMED:25123680). Additionally, the TE type showed higher odds ratios for multiple metabolic syndrome sub-elements, including abdominal obesity, diabetes mellitus, hypertriglyceridemia, low HDL cholesterol, and hypertension (PUBMED:26914220). These findings suggest that the SC, particularly the TE type, may be considered an independent risk factor for predicting AO and related metabolic disorders.
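The abstracts above report odds ratios from logistic regression after adjustment for covariates; the unadjusted version of the same quantity comes straight from a 2x2 exposure-outcome table. A minimal sketch follows, using invented counts rather than the study data, to show how an OR and its 95% confidence interval are obtained.

```python
import math

# Hypothetical 2x2 table (illustrative counts only)
a, b = 210, 120   # TE type:     abdominal obesity yes / no
c, d = 90, 330    # other types: abdominal obesity yes / no

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```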
Instruction: Children with minor head injury in the emergency department: Is skull radiography necessary for children under 2 years? Abstracts: abstract_id: PUBMED:24930859 Children with minor head injury in the emergency department: Is skull radiography necessary for children under 2 years? Background: Current guidelines on the management of mild head trauma (traumatic brain injury/TBI) do not include the presence of a skull fracture in determining the risk of intracranial injury. However, in our setting cranial radiography is still performed frequently to rule out the presence of skull fracture. Objective: To estimate the prevalence of clinically-important traumatic brain injuries (ciTBI) in children younger than two years of age with mild TBI. Patients And Methods: Descriptive observational study. All children attending the emergency department with mild TBI (Glasgow ≥14 points) over a one-year period were included. We defined ciTBI as intracranial injuries that caused death or required neurosurgery, intubation for more than 24 hours, inotropic drugs or mechanical ventilation. Results: The study included 854 children, of which 457 (53.5%) were male. The median patient age was 11.0 months (P25-75: 7.5-17.0 months). In 741 cases (86.8%) the mechanism of TBI was a fall. In 438 cases (51.3%) skull radiography was performed. Eleven children (1.3%) had intracranial injury, but none met the criteria for ciTBI (estimated prevalence of ciTBI was 0%; 95% CI: 0%-0.4%). Conclusion: Children younger than two years of age with mild TBI have a low prevalence of ciTBI. Consequently, it is possible to monitor children younger than two years with a TBI without performing skull radiography. abstract_id: PUBMED:9023615 Radiography for head trauma in children: what guidelines should we use? Objective: To audit the appropriateness of skull radiography in children attending an accident and emergency (A&E) department with head injuries. Methods: 569 children presenting to a large teaching hospital A&E unit were retrospectively audited. The indications for radiography according to British published guidelines and American published guidelines were compared with the actual requests for radiography. The criteria for admission from the two guidelines were also compared with the actual admissions. Results: 50% of children presenting with head injury actually had skull radiography. If British guidelines for the use of skull radiography had been complied with, 63% of children should have had radiography, but if American guidelines had been used, 18% would have required radiography. All the actual fractures identified were in this 18%. Conclusions: The British guidelines overinvestigate children with head injury. This seems to have been recognised clinically, and the doctors did not adhere to the guidelines. Neither did they adhere to the American guidelines, which would have resulted in a further reduction in radiography. All the fractures identified were covered by the American guidelines. The American guidelines for skull radiography can be safely used in a British A&E unit. abstract_id: PUBMED:22563059 Nondepressed linear skull fractures in children younger than 2 years: is computed tomography always necessary? Background: Current recommendations are that young children with a skull fracture following head injury undergo computed tomography (CT) examination of their head to exclude significant intracranial injury.
Recent reports, however, have raised concern that radiation exposure from CT scanning may cause malignancies. Objective: To estimate the proportion of children with nondisplaced linear skull fractures who have clinically significant intracranial injury. Methods: Retrospective review of patients younger than 2 years who presented to an emergency department and received a diagnosis of skull fracture. Results: Ninety-two patients met the criteria for inclusion in the study; all had a head CT scan performed. None suffered a clinically significant intracranial injury. Conclusion: Observation, rather than CT, may be a reasonable management option for head-injured children younger than 2 years who have a nondisplaced linear skull fracture on plain radiography but no clinical signs of intracranial injury. abstract_id: PUBMED:28755763 Reliability of Triage Nurses and Emergency Physicians for the Interpretation of the C-3PO Rule for Head Trauma in Children. Introduction: The C-3PO rule has been validated for use by emergency physicians to identify young children at risk of skull fracture following head trauma. The use of the rule by triage nurses could improve patient flow in the emergency department. Objectives: To evaluate the interobserver agreement of triage nurses and emergency physicians in the interpretation of the C-3PO rule in a pediatric emergency department. Methods: This was a prospective observational study performed in a consecutive sample of children visiting a single emergency department. Participants were all children younger than 24 months of age who presented at the emergency department for head trauma that had occurred in the previous 24 hours. The primary outcome was the interobserver agreement between nurses and emergency physicians as to whether the child was at high risk of skull fracture according to the interpretation of the C-3PO rule. All study participants were evaluated sequentially by a triage nurse and an emergency physician. Outcome of evaluation was kept blinded between nurses and physicians. The primary analysis was the interrater reliability using the kappa score. The sample size was set to provide lower boundary of 0.70 for a 95% confidence interval (95% CI) for kappa coefficient of at least 0.80. Results: A total of 226 children were evaluated by a physician and a nurse. Among them, 10 had skull fractures. A total of 34 nurses and 42 physicians evaluated between 1 and 21 children. The interrater reliability was excellent, as demonstrated by a kappa score of 0.85 (95% CI: 0.77-0.92). Moreover, all children with skull fractures were categorized at "high risk" by the nurse and the physician. Conclusion: This study demonstrates an almost perfect interrater reliability between triage nurses and emergency physicians in interpreting the C-3PO rule when evaluating children who presented at an emergency department for head trauma. Contribution to Emergency Nursing Practice. abstract_id: PUBMED:15159702 Skull radiograph interpretation of children younger than two years: how good are pediatric emergency physicians? Study Objective: We determine pediatric emergency physicians' accuracy in interpreting skull radiographs of children younger than 2 years and determine the characteristics of misidentified skull radiographs. Methods: A set of 31 skull radiographs (16 with fractures, 15 normal) was compiled from children younger than 2 years who were evaluated for head trauma in a pediatric emergency department from March 3, 1997, to March 3, 1998. 
A pediatric radiologist reinterpreted the films and agreed with all of the original readings in the final set. Participants (attending level physicians) were asked to identify the presence, location, and pattern of any fracture. Skull radiograph interpretation was considered radiographically correct if the presence, location, and pattern of fracture were correctly identified and was considered diagnostically correct if the presence of a fracture was recognized. Results: Twenty-five of 26 eligible pediatric emergency physicians completed the study. The mean of each participant's radiographically correct interpretation was 65%+/-10% (mean+/-SD), and diagnostically correct interpretation was 80%+/-9%. The group's mean sensitivity for diagnostically correct interpretation was 76%+/-15%, and specificity was 84%+/-14%. Shorter fractures were identified correctly less often (63% ≤5 cm versus 93% >5 cm; mean difference 30%; 95% confidence interval 21% to 39%). Diagnostically correct rates did not differ according to age of patient, physician practice location, years in practice, or practice in ordering skull radiographs. Conclusion: Pediatric emergency physicians have limited accuracy in interpreting skull radiographs of children younger than 2 years. Shorter fractures are more commonly misinterpreted. abstract_id: PUBMED:32815628 Epidemiology of soccer-related head injury in children 5-14 years in Victoria, Australia. Aim: Our aim was to use epidemiological data to determine the incidence of soccer-related head injuries in children aged 5-14 years who presented at emergency departments (EDs) or were admitted in hospitals in Victoria, Australia. Methods: ED presentation and hospital admission de-identified aggregate data were from the Victorian Injury Surveillance Unit. Soccer participation data were compared with the soccer-related head injury data to determine the incidence of this injury among these children. Results: The incidence of ED presentations was 0.17% of children participating in soccer during the study period (financial years 2011-2012 to 2015-2016). The 10-14-years age group presented with more head injuries than the 5-9-years age group. For the admissions data, soccer had a significantly lower (P = 0.0379) incidence of head injury when compared with 'sport as a whole'. Conclusions: The low incidence of soccer-related head injuries presenting to an ED or admission to hospital is consistent with international findings. abstract_id: PUBMED:9121256 Predictive value of skull radiography for intracranial injury in children with blunt head injury. Background: The value of routine skull radiography as a method of predicting intracranial injury is controversial. We aimed to assess the effectiveness of skull radiography by prospectively studying head-injured children admitted to a children's hospital that serves an urban population. Methods: Over a 2-year period, 9269 children attended our accident and emergency department with head injury, and 6011 were referred for skull radiography. All children who were admitted to hospital or had a skull fracture (n = 883) were included in the study. Computed tomography (CT) was done in children with skull fractures on radiography and in those without fractures if there were neurological indications. Findings: Radiographs showed 162 fractures (2.7% of all radiographs and 18% of study group radiographs). Staff in the accident and emergency department missed 37 (23%) fractures. CT scan was done on 156 children, of whom 107 had a skull fracture.
23 children were found to have intracranial injuries on CT. The presence of neurological abnormalities had a sensitivity for identification of intracranial injury of 91% (21 of 23) and a negative predictive value of 97%. The corresponding values for skull fracture on radiography were 65% (15 of 23) and 83%. Four children died, of whom only one had a skull fracture. Interpretation: In children, severe intracranial injury can occur in the absence of skull fracture. Skull radiography is not a reliable predictor of intracranial injury and is indicated only to confirm or exclude a suspected depressed fracture or penetrating injury, and when non-accidental injury is suspected, including in all infants younger than 2 years. Clinical neurological abnormalities are a reliable predictor of intracranial injury. If imaging is required, it should be with CT and not skull radiography. abstract_id: PUBMED:33461571 Utility of a pediatric observation unit for the management of children admitted to the emergency department. Background: Observation Units (OUs), as part of the emergency department (ED), are areas reserved for short-term treatment or observation of patients with selected diagnoses to determine the need for hospitalization or home referral. Methods: In this retrospective cohort study, we analyzed similarities and differences of children admitted to the pediatric ED of the Fondazione Policlinico Universitario A. Gemelli IRCCS hospital in the first 2 years of OU activity, analyzing general patient characteristics, access modalities, diagnosis, triage, laboratory and instrumental examinations, specialist visits, outcome of OU admission and average time spent in OU. Furthermore, we compared total numbers and types of hospitalization in the first 2 years of OU activity with those of the previous 2 years. Results: The most frequent diagnoses were abdominal pain, minor head injury without loss of consciousness, vomiting, epilepsy and acute bronchiolitis. The most frequently performed laboratory examination was the blood count. The most commonly performed instrumental examination was abdominal ultrasound. Neurological counseling was the most commonly requested. Average time spent in OU was 13 h in 2016 and 14.1 h in 2017. Most OU admissions did not last longer than 24 h (90.5% in 2016 and 89.5% in 2017). In the years 2014-2015, 13.4% of pediatric patients accessing the ED were hospitalized, versus 9.9% in the years 2016-2017, reducing pediatric hospital admissions by 3.6% (p < 0.001). Conclusions: This study demonstrates that the OU is a valid alternative to ordinary wards for specific pathologies. In accordance with the literature, our study showed that, in the first 2 years of OU activity, admissions to hospital wards decreased compared with the previous 2 years, with an increase in complex patients. abstract_id: PUBMED:30833284 Traumatic brain injury in young children with isolated scalp haematoma. Objective: Despite high-quality paediatric head trauma clinical prediction rules, the management of otherwise asymptomatic young children with scalp haematomas (SH) can be difficult. We determined the risk of intracranial injury when SH is the only predictor variable using definitions from the Pediatric Emergency Care Applied Research Network (PECARN) and Children's Head Injury Algorithm for the Prediction of Important Clinical Events (CHALICE) head trauma rules. Design: Planned secondary analysis of a multicentre prospective observational study. Setting: Ten emergency departments in Australia and New Zealand.
Patients: Children <2 years with head trauma (n=5237). Interventions: We used the PECARN (any non-frontal haematoma) and CHALICE (>5 cm haematoma in any region of the head) rule-based definitions of isolated SH in both children <1 year and <2 years. Main Outcome Measures: Clinically important traumatic brain injury (ciTBI; ie, death, neurosurgery, intubation >24 hours or positive CT scan in association with hospitalisation ≥2 nights for traumatic brain injury). Results: In children <1 year with isolated SH as per the PECARN rule, the risk of ciTBI was 0.0% (0/109; 95% CI 0.0% to 3.3%); in those with isolated SH as defined by the CHALICE, it was 20.0% (7/35; 95% CI 8.4% to 36.9%), with one patient requiring neurosurgery. Results for children <2 years and when using rule-specific outcomes were similar. Conclusions: In young children with SH as an isolated finding after head trauma, use of the definitions of both rules will aid clinicians in determining the level of risk of ciTBI and therefore in deciding whether to do a CT scan. Trial Registration Number: ACTRN12614000463673. abstract_id: PUBMED:28551693 Can infrascanner be useful in hospital emergency departments for diagnosing minor head injury in children? Aim: The aim of the study was to determine whether Infrascanner screening is a test which would facilitate excluding acute intracranial bleeding in children after minor head injury and thus make it possible to limit indications for computed tomography in those children. Material And Methods: The study enrolled 155 children aged 2-18 years after a minor or moderate head injury. The children were assessed using the Glasgow Coma Scale and examined by Infrascanner screening. Those who had relevant indications also had head computed tomography. Results: A negative Infrascanner screening result (no intracranial bleeding) was noted in 151 children. The Infrascanner result was positive in 4 children. Head computed tomography was performed in 28 of the 155 children. The conformity of the Infrascanner result with the computed tomography image was found in 26 children: no evidence of intracranial bleeding in 24 children and confirmation of intracranial haematoma in 2 children. The sensitivity of the screening was 66.67% and its specificity 98.68%. The positive and negative predictive values of the screening were 50% and 99.34%, respectively. The reliability of the test results was 98.06%. Conclusion: The Infrascanner seems to be a useful device in diagnosing children after minor head injury in the emergency department, and its portability makes it possible to use it in practically all settings. Introducing the device into management standards for children after minor head injury might facilitate selecting those who are not at risk of intracranial bleeding, contribute to a reduction in the number of imaging investigations being performed, and decrease the number of hospitalisations. Answer: Based on the provided abstracts, skull radiography may not be necessary for all children under 2 years with minor head injury in the emergency department. A study included in abstract PUBMED:24930859 found that children younger than two years of age with mild traumatic brain injury (TBI) had a low prevalence of clinically-important traumatic brain injuries (ciTBI), suggesting that it is possible to monitor these children without performing skull radiography.
Similarly, abstract PUBMED:22563059 reported that none of the children under 2 years with nondisplaced linear skull fractures had clinically significant intracranial injury, indicating that observation rather than computed tomography (CT) may be a reasonable management option for these patients. Furthermore, abstract PUBMED:9023615 suggests that British guidelines for the use of skull radiography in children with head injury may lead to overinvestigation, and that adherence to American guidelines, which would result in fewer radiographies, still identified all actual fractures. This implies that a more conservative approach to radiography, as per American guidelines, can be safely used. The abstract PUBMED:28755763 demonstrates excellent interrater reliability between triage nurses and emergency physicians in interpreting the C-3PO rule for identifying children at risk of skull fracture, which could potentially reduce unnecessary imaging if used effectively in triage. However, abstract PUBMED:15159702 highlights that pediatric emergency physicians have limited accuracy in interpreting skull radiographs of children younger than 2 years, with shorter fractures being more commonly misinterpreted, which could be a concern if radiography is used as a diagnostic tool. In summary, the evidence suggests that for children under 2 years with minor head injury, skull radiography may not be routinely necessary, and a more selective approach based on clinical guidelines and careful observation could be adopted to minimize exposure to radiation and reduce unnecessary imaging (PUBMED:24930859, PUBMED:22563059, PUBMED:9023615, PUBMED:28755763, PUBMED:15159702).
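One of the abstracts above (PUBMED:28755763) summarises nurse-physician agreement with a kappa score. As a reminder of what that statistic measures, the sketch below computes Cohen's kappa for a small set of invented paired high-risk/low-risk ratings; it is illustrative only and does not reproduce the study data.

```python
from collections import Counter

# Invented paired risk ratings from two raters (illustrative only)
nurse     = ["high", "low", "low", "high", "low", "low", "high", "low"]
physician = ["high", "low", "low", "low",  "low", "low", "high", "low"]

n = len(nurse)
observed = sum(a == b for a, b in zip(nurse, physician)) / n

# Chance agreement expected from each rater's marginal frequencies
nurse_freq, phys_freq = Counter(nurse), Counter(physician)
expected = sum((nurse_freq[c] / n) * (phys_freq[c] / n)
               for c in set(nurse) | set(physician))

kappa = (observed - expected) / (1 - expected)
print(f"observed {observed:.2f}, expected {expected:.2f}, kappa {kappa:.2f}")
```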
Instruction: The use of synthetic bedding in children. Do strategies of change influence associations with asthma? Abstracts: abstract_id: PUBMED:15962878 The use of synthetic bedding in children. Do strategies of change influence associations with asthma? Background: Epidemiological data suggest in contrast to clinical recommendations a negative effect of synthetic bedding on asthma and respiratory symptoms. Objective: To assess the effects of bedding filled with synthetic material on the risk of asthma and respiratory symptoms in 6- to 7-year-old children, taking into account allergy-related change of bedding material. Methods: We analyzed data from the ISAAC Phase III cross-sectional survey (1999/2000) in Münster, Germany. Data were collected by parental report from representative school-based samples of 6- to 7-year old children (n = 3,529). We calculated prevalence ratios with 95% confidence intervals for the association between respiratory symptoms suggestive of asthma and synthetic pillows and blankets and adjusting for potential confounders. Results: In the preliminary analyses, synthetic pillows and synthetic blankets were positively associated with the studied respiratory outcomes. For example, a high number of wheezing attacks was positively associated with synthetic pillows (PR = 4.44; 95% CI 2.84-6.94) and synthetic blankets (PR = 3.80; 95% CI 2.48-5.82). However, in the restricted analysis, excluding participants reporting allergy-related change of bedding (pillows n = 440; blankets n = 437), the positive associations disappeared for all studied outcomes. Conclusions: Our findings suggest that allergy-related choice of bedding is an important factor in the assessment of the relation between synthetic bedding and asthma symptoms. Ignoring those changes can lead to false-positive risk estimates. Prospective studies that allow to disentangle the temporal sequence of disease, exposure, and change of bedding should help to further clarify this issue. abstract_id: PUBMED:12500044 Synthetic bedding and wheeze in childhood. Background: The reasons for the increase in childhood asthma over time are unclear. The indoor environment is of particular concern. An adverse role for synthetic bedding on asthma development in childhood has been suggested by cross-sectional studies that have found an association between synthetic pillow use and childhood wheeze. Prospective data on infant bedding have not been available. Methods: Bedding data at 1 month of age were available from an infant survey for children who were participating in a 1995 follow-up study (N = 863; 78% traced). The 1995 follow-up was embedded in a larger cross-sectional survey involving 6,378 seven year olds in Tasmania (N = 92% of eligible). Outcome measures included respiratory symptoms as defined in the International Study of Asthma and Allergies in Childhood protocol. Frequent wheeze was defined as more than 12 wheeze episodes over the past year compared with no wheeze. Results: Synthetic pillow use at 1 month of age was associated with frequent wheeze at age 7 (adjusted relative risk [aRR] = 2.5; 95% confidence interval [CI] = 1.2-5.5) independent of childhood exposure. Current synthetic pillow and quilt use was strongly associated with frequent wheeze (aRR = 5.2; CI = 1.3-20.6). Substantial trends were evident for an association of increasing number of synthetic bedding items with frequent wheeze and with increasing wheeze frequency. 
Among children with asthma, the age of onset of asthma occurred earlier if synthetic bedding was used in infancy. Conclusions: In this cohort, synthetic bedding was strongly and consistently associated with frequent childhood wheeze. The association did not appear to be attributable to bedding choice as part of an asthma management strategy. abstract_id: PUBMED:11906340 The association between synthetic bedding and adverse respiratory outcomes among skin-prick test positive and skin-prick test negative children. Background: Synthetic bedding has been associated with increased child wheeze and also higher allergen levels in several studies. We aimed to examine whether the association between synthetic bedding and adverse respiratory outcomes was more evident among skin-prick test (SPT) positive children. Methods: A cross-sectional survey involving a population sample of 758 (81% of eligible) school children aged 8-10 years from randomly selected schools in the Australian Capital Territory in 1999. Parental questionnaires for ISAAC respiratory symptoms and child bedding were obtained. SPT results of 10 common allergens were available on 722 of the subjects (77% of those eligible). Synthetic pillow or quilt use was termed synthetic upper bedding. Results: Synthetic quilt use was associated with asthma (Adjusted Odds Ratio 1.67 (1.05, 2.65)), recent wheeze (AOR 1.63 (1.03, 2.59)) and allergic rhinoconjunctivitis (AOR 2.11 (1.33, 3.34)) among SPT-positive children. However, these associations were not apparent for SPT-negative children. Similarly, increasing synthetic upper bedding use was associated with more than 12 episodes of wheeze among SPT-positive children (AOR 1.69 (1.08, 2.64), P=0.02, per category) but not SPT-negative children (AOR 0.77 (0.26, 2.21), P=0.6, per category). Conclusion: The apparent association between synthetic upper bedding and adverse respiratory outcomes was evident among SPT-positive but not SPT-negative children. Prospective intervention studies that aim to examine the effect of upper bedding composition on child asthma among SPT-positive children are required. abstract_id: PUBMED:15121932 The bedding environment, sleep position, and frequent wheeze in childhood. Objective: Synthetic quilt use has been associated with increased childhood wheeze in previous studies. Our aim was to examine whether the adverse effect of synthetic quilt use on frequent wheeze differed by usual sleep position. Design, Setting, And Participants: A population-based cross-sectional study of 6378 (92% of those eligible) 7-year-olds in Tasmania, Australia, was conducted in 1995. Exercise-challenge lung function was obtained on a subset of 414 children from randomly selected schools. Exposure Measures: Child bedding including pillow and overbedding composition and usual sleep position by parental questionnaire. Outcome Measures: Frequent wheeze (&gt;12 wheeze episodes over the past year), using the International Study of Asthma and Allergies in Childhood parental questionnaire, and baseline and postexercise forced expiratory volume in 1 second lung-function measures. Results: Frequent wheeze (n = 117) was positively associated with synthetic quilts, synthetic pillows, electric blankets, and sleeping in a bottom bunk bed but did not vary by sleep position. In a nested case-control analysis, the association between synthetic quilt use and frequent wheeze differed by sleep position. 
Among children who slept supine, synthetic (versus feather) quilt use was associated with frequent wheeze (adjusted odds ratio: 2.37 [1.08, 5.23]). However, among nonsupine sleepers, overlying synthetic quilt use was not associated with frequent wheeze (adjusted odds ratio: 1.06 [0.60, 1.88]). This difference in quilt effect by sleep position was highly significant. Similarly, synthetic quilt use was associated with lower postexercise forced expiratory volume in 1 second measures among supine but not nonsupine sleeping children. Conclusion: An increasing focus on the bedding environment immediately adjacent to the nose and mouth is required for respiratory disorders provoked by bedding, such as child asthma characterized by frequent wheeze. abstract_id: PUBMED:7001668 Effect of a change to mite-free bedding on children with mite-sensitive asthma: a controlled trial. Twenty-one children with mite-sensitive asthma took part in a crossover randomised controlled trial of mite-free bedding. Each child was issued with a new sleeping bag and pillow for a month, and twice-daily peak flow readings were compared with those obtained during a month in the child's ordinary bedding. Seventeen of the children had higher mean peak flow readings during the period in the mite-free bedding (p &lt; 0.01). The overall improvement was only modest, however, and some mites had appeared in most of the bedding by the end of the trial. New bedding may be helpful to patients with mite-sensitive asthma, but methods are needed to prevent colonisation by mites. abstract_id: PUBMED:23022822 Does bedding affect the airway and allergy? Various cross-sectional and longitudinal studies have suggested that synthetic bedding is associated with asthma, allergic rhinitis and eczema while feather bedding seems to be protective. Synthetic bedding items have higher house dust mite allergen levels than feather bedding items. This is possibly the mechanism involved although fungal and bacterial proinflammatory compounds and volatile organic compounds may play a role. In this review we present and discuss the epidemiological evidence and suggest possible mechanisms. Primary intervention studies are required to show whether feather bedding is protective for the development of childhood asthma and allergic diseases while secondary intervention studies are required to potentially reduce symptoms and medication use in subjects with established disease. abstract_id: PUBMED:12033480 House dust mite allergen levels in individual bedding components in New Zealand. Aims: House dust mite allergen (Der p 1) levels are high in New Zealand and bedding Der p 1 levels have been shown to be associated with the clinical severity of asthma. The aim of this study was to measure Der p 1 levels in synthetic and feather duvets and other individual bedding items, and to examine factors affecting these levels. Methods: Reservoir dust samples were collected and analysed for Der p 1 content by ELISA from 65 duvets, 81 pillows, and 65 mattresses of 34 children and 31 adults in 34 households. Results: Der p 1 geometric mean levels (95% confidence interval) were: 13.4 microg/g (9.5-18.9) in pillows; 29.4 microg/g (19.8-43.5) in duvets; and 53.8 microg/g (39.4-73.4) in mattresses. Synthetic pillows and duvets yielded significantly more Der p 1 than feather pillows and duvets (about 7-fold and 15-fold respectively). The presence of under-bedding resulted in significantly higher pillow and duvet Der p 1 levels. 
Mattresses >10 years old had significantly higher Der p 1 levels. Conclusions: Synthetic pillows and duvets contain higher levels of Der p 1 than feather pillows and duvets. Advising house dust mite-sensitized individuals to use synthetic bedding does not prevent house dust mite allergen exposure. abstract_id: PUBMED:7717920 Sheepskins and bedding in childhood, and the risk of development of bronchial asthma. Background: Sheepskin bedding might increase house dust mite exposure and so explain some of the increasing prevalence or severity of childhood asthma. Methods: Relationships between use of different types of bedding, and diagnoses of asthma, symptoms of wheezing, skin prick test evidence of house dust mite sensitivity, and airway responsiveness to methacholine, were examined retrospectively in a birth cohort of children followed longitudinally to age 15 years. Results: In the whole cohort, no associations were identified to suggest a causal relationship between use of any type of bedding and development of features of asthma. Although not an a priori hypothesis, we noted that among children with a family history of atopic disease, those who were house dust mite sensitive were more likely to have used an innerspring mattress (29.6% vs 10.2% who had not used an innerspring mattress, p = 0.005). Conclusion: In this subgroup, increased airway responsiveness and mite sensitivity were significantly associated with use of innerspring mattresses, although whether this is a causal or secondary association is not certain. Use of a sheepskin in the bed in early childhood was not an additional risk factor for the development of asthma. abstract_id: PUBMED:21451166 Feather bedding and childhood asthma associated with house dust mite sensitisation: a randomised controlled trial. Introduction: Observational studies report inverse associations between the use of feather upper bedding (pillow and/or quilt) and asthma symptoms but there is no randomised controlled trial (RCT) evidence assessing the role of feather upper bedding as a secondary prevention measure. Objective: To determine whether, among children not using feather upper bedding, a new feather pillow and feather quilt reduces asthma severity among house dust mite (HDM) sensitised children with asthma over a 1-year period compared with standard dust mite avoidance advice, and giving children a new mite-occlusive mattress cover. Design: RCT. Setting: The Calvary Hospital in the Australian Capital Territory and the Children's Hospital at Westmead, Sydney, New South Wales. Patients: 197 children with HDM sensitisation and moderate to severe asthma. Intervention: New upper bedding duck feather pillow and quilt and a mite-occlusive mattress cover (feather) versus standard care and a mite-occlusive mattress cover (standard). Main Outcome Measures: The proportion of children reporting four or more episodes of wheeze in the past year; an episode of speech-limiting wheeze; or one or more episodes of sleep disturbance caused by wheezing; and spirometry with challenge testing. Statistical analysis included multiple logistic and linear regression. Results: No differences between groups were found for primary end points--frequent wheeze (OR 1.51, 95% CI 0.83 to 2.76, p=0.17), speech-limiting wheeze (OR 0.70, 95% CI 0.32 to 1.48, p=0.35), sleep disturbed because of wheezing (OR 1.17, 95% CI 0.64 to 2.13, p=0.61) or for any secondary end points.
Secondary analyses indicated the intervention reduced the risk of sleep being disturbed because of wheezing and severe wheeze to a greater extent for children who slept supine. Conclusion: No differences in respiratory symptoms or lung function were observed 1 year after children with moderate-severe asthma and HDM sensitisation were given a mite-occlusive mattress cover and then received either feather upper bedding (pillow and quilt) or standard bedding care. abstract_id: PUBMED:28741144 Tobacco Use and Smoke Exposure in Children: New Trends, Harm, and Strategies to Improve Health Outcomes. Purpose Of Review: Every day in the USA, approximately 4000 adolescents begin smoking and the adolescent brain is particularly susceptible to nicotine addiction. We present current pediatric trends on tobacco use and exposures, various new products used by adolescents, the adverse biological and behavioral effects of tobacco use and exposures, and tobacco control strategies to eliminate tobacco-related illnesses and deaths in the pediatric population. Recent Findings: Twelve-20% of women continue to smoke during pregnancy. New research reveals cognitive differences and behavior-control disorders are seen in elementary school children from prenatal and postnatal exposures. Traditional cigarette smoking has decreased in adolescents; novel and appealing tobacco products have captured their attention, particularly electronic cigarettes, and rates double and often triple from middle to high school. Children with asthma and those living in multi-housing units have higher rates of secondhand smoke exposure than non-asthmatics and children living in single-home dwellings. There is no "safe or risk-free" level of tobacco use or exposure. Tobacco use and exposure in childhood and adolescence must be decreased using evidenced-based strategies to improve child health. Answer: The use of synthetic bedding in children has been associated with asthma and respiratory symptoms. However, the relationship between synthetic bedding and asthma may be influenced by strategies of change, such as allergy-related changes in bedding material. In a study analyzing data from the ISAAC Phase III cross-sectional survey, preliminary analyses showed positive associations between synthetic pillows and blankets with respiratory outcomes suggestive of asthma. Yet, when excluding participants who reported allergy-related change of bedding, these positive associations disappeared for all studied outcomes, suggesting that allergy-related choice of bedding is a significant factor in assessing the relationship between synthetic bedding and asthma symptoms. Ignoring these changes can lead to false-positive risk estimates (PUBMED:15962878). Further evidence from a cohort study indicated that synthetic bedding was strongly and consistently associated with frequent childhood wheeze, and this association did not appear to be attributable to bedding choice as part of an asthma management strategy (PUBMED:12500044). Additionally, the association between synthetic bedding and adverse respiratory outcomes was more evident among skin-prick test (SPT) positive children, suggesting that the impact of synthetic bedding may be more pronounced in children with existing allergic sensitivities (PUBMED:11906340). 
Moreover, bedding in immediate proximity to the nose and mouth, such as an overlying synthetic quilt, has been shown to have an adverse effect on frequent wheeze, particularly among children who slept in the supine position (PUBMED:15121932). This indicates that sleep position may interact with bedding material to influence respiratory outcomes. In conclusion, strategies of change, such as the selection of bedding material based on allergy-related concerns and the consideration of sleep position, do influence the associations between synthetic bedding and asthma in children. Prospective studies and intervention trials are needed to further clarify these relationships and to determine whether changes in bedding can serve as effective measures for asthma management or prevention (PUBMED:15962878, PUBMED:12500044, PUBMED:11906340, PUBMED:15121932).
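The cross-sectional and cohort findings summarised above are expressed as prevalence ratios and (adjusted) relative risks. The unadjusted calculation behind such figures is shown below on an invented cohort table; the counts are illustrative and are not taken from the cited studies.

```python
import math

# Hypothetical cohort counts (illustrative only)
exposed_cases, exposed_total = 48, 400       # synthetic bedding
unexposed_cases, unexposed_total = 20, 420   # feather/other bedding

risk_exposed = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total
rr = risk_exposed / risk_unexposed

# 95% CI on the log scale (standard large-sample formula)
se_log_rr = math.sqrt(1 / exposed_cases - 1 / exposed_total
                      + 1 / unexposed_cases - 1 / unexposed_total)
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```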
Instruction: Screening for Pain in the Ambulatory Cancer Setting: Is 0-10 Enough? Abstracts: abstract_id: PUBMED:26306620 Screening for Pain in the Ambulatory Cancer Setting: Is 0-10 Enough? Purpose: The purpose of this study was to explore concordance between patient self-reports of pain on validated questionnaires and discussions of pain in the ambulatory oncology setting. Methods: Adult, ambulatory patients (N = 452) with all stages of cancer were included. Three pain measures were evaluated: two items from the Symptom Distress Scale (frequency [SDSF] and intensity [SDSI]) and the Pain Intensity Numeric Scale (PINS). Relevant pain was defined as: (1) scores of at least 3 (of 5) on the SDSF or SDSI, or at least 5 (of 10) on the PINS; or (2) discussion of existing pain in an audio-recorded clinic visit. For each scale, McNemar's test assessed concordance of patient self-reports of relevant pain with discussions of relevant pain in the audio-recorded clinic visit. Sensitivity, specificity, and accuracy were calculated and a receiver operating characteristic analysis evaluated thresholds on self-report pain questionnaires to best identify relevant pain discussed in clinic. Results: Identification of relevant pain by self-report was discordant (P < .001) with discussed pain coded in audio-recorded visits for all three measures. Specificity was higher for intensity (SDSI, 0.94; PINS, 0.97) than frequency (SDSF, 0.87); sensitivity was higher for frequency (SDSF, 0.35) than intensity (SDSI, 0.24; PINS, 0.12). Accuracy was higher for the SDS pain items (SDSF, 0.57; SDSI, 0.54) than for PINS (0.48). Receiver operating characteristic analysis curves suggest that lower threshold scores may improve the identification of relevant pain. Conclusion: Self-report pain screening measures favored specificity over sensitivity. Asking about pain frequency (in addition to intensity) and reconsidering threshold scores on pain intensity scales may be practical strategies to more accurately identify patients with cancer who have relevant pain. abstract_id: PUBMED:31961849 Opioid Misuse Risk: Implementing Screening Protocols in an Ambulatory Oncology Clinic. With morbidity and mortality related to opioid use continuing to increase, clinicians need to better understand the risk for opioid misuse in patient populations. Screening for opioid misuse risk has not been routinely adopted as a standard practice in clinical settings. A pilot study was performed to determine the feasibility of screening for future opioid misuse risk using the Opioid Risk Tool (ORT) in an ambulatory oncology clinic. Twelve patients in this sample scored in the moderate- to high-risk range for aberrant behavior, and 8 patients reported a personal history of substance abuse, indicating a need for opioid misuse risk screening in populations of patients with cancer. Because it is easy and quick to use, the ORT may be a feasible tool to incorporate into standard practice. abstract_id: PUBMED:17959345 Use of a single-item screening tool to detect clinically significant fatigue, pain, distress, and anorexia in ambulatory cancer practice. Fatigue, pain, distress, and anorexia are four commonly encountered symptoms in cancer. To evaluate the usefulness of a single-item screening for these symptoms, 597 ambulatory outpatients with solid tumors were administered a self-report screening instrument within the first 12 weeks of chemotherapy.
Patients rated the severity of each symptom on a 0-10 scale, at its worst over the past three days, with higher ratings associated with higher symptom levels. From this sample, 148 patients also completed a more comprehensive assessment of these symptoms. Two criteria were used to determine optimal cut-off scores on the screening items: 1) the sensitivity and specificity of each screening item to predict clinical cases using receiver-operating characteristics analysis and 2) the proportion of patients at each screening score who reported that some relief of the target symptom would significantly improve their life. Optimal cut-off scores ranged from 4 to 6 depending on the target symptom (area under the curve range=0.68-0.88). Use of single-item screening instruments for fatigue, pain, distress, and anorexia may assist routine clinical assessment in ambulatory oncology practice. In turn, such assessments may improve identification of those at risk of morbidity and decreased quality of life due to excess symptom burden. abstract_id: PUBMED:25873062 Incorporating patient-reported outcomes to improve emotional distress screening and assessment in an ambulatory oncology clinic. Purpose: Assessment of distress and well-being of patients with cancer is not always documented or addressed in a clinical visit, reflecting a need for improved psychosocial screening. Methods: A multidisciplinary team completed process mapping for emotional distress assessment in two clinics. Barriers were identified through cause-and-effect analysis, and an intervention was chosen. Patient-reported outcomes were collected over 6 months using the validated National Comprehensive Cancer Network Emotional Distress Thermometer (EDT) paper tool. The American Society of Clinical Oncology Quality Oncology Practice Initiative (QOPI) measures were compared before and after intervention. Results: During 6 months, a total of 864 tools were collected from 1,344 patients in two ambulatory clinics (64%). Electronic medical record documentation of distress increased from 19.2% to 34% during the 6 months before and after intervention. QOPI measures showed an increase in emotional well-being documentation. Of 29 new and 835 return patients, 62% indicated mild distress (EDT, 0 to 3), 18% moderate (EDT, 4 to 6), and 11% severe (EDT, 7 to 10). The average distress score of new patients was significantly higher than that of return patients (5.39 [n = 26] v 2.52 [n = 754]; P < .001). The top problems for patients with moderate and severe distress were worry, fatigue, pain, and nervousness; depression and sadness were particularly noted in patients reporting severe distress. Eleven percent of patients were referred to the social worker on site. Conclusion: A pilot intervention collecting patient-reported outcomes in two ambulatory clinics led to an increase in psychosocial distress screening, followed by sustained improvement indicated by both process and QOPI measures. abstract_id: PUBMED:35241067 Impact of ambulatory palliative care on symptoms and service outcomes in cancer patients: a retrospective cohort study. Background: The integration of palliative care into routine cancer care has allowed for improved symptom control, relationship building and goal setting for patients and families. This study aimed to assess the efficacy of an ambulatory palliative care clinic on improving symptom burden and service outcomes for patients with cancer.
Methods: A retrospective review of data of cancer patients who attended an ambulatory care clinic and completed the Symptom Assessment Scale between January 2015 and December 2019. We classified moderate to severe symptoms as clinically significant. Clinically meaningful improvement in symptoms (excluding pain) was defined by a ≥ 1-point reduction from baseline and pain treatment response was defined as a ≥ 2-point or ≥ 30% reduction from baseline. Results: A total of 249 patients met the inclusion criteria. The most common cancer diagnosis was gastrointestinal (32%) and the median time between the initial and follow-up clinic visits was 4 weeks. The prevalence of clinically significant symptoms at baseline varied from 28% for nausea to 88% for fatigue, with 23% of the cohort requiring acute admission due to unstable physical/psychosocial symptoms. There was significant improvement noted in sleep (p < 0.001), pain (p = 0.002), wellbeing (p < 0.001), and overall symptom composite scores (p = 0.028). Despite 18-28% of patients achieving clinically meaningful symptom improvement, 18-66.3% of those with moderate to severe symptoms at baseline continued to have clinically significant symptoms on follow-up. A third of patients had opioid and/or adjuvant analgesic initiated/titrated, with 39% educated on pain management. Goals of care (31%), insight (28%) and psychosocial/existential issues (27%) were commonly explored. Conclusions: This study highlights the burden of symptoms in a cohort of ambulatory palliative care patients and the opportunity such services can provide for education, psychosocial care and future planning. Additionally, routine screening of cohorts of oncology patients using validated scales may identify patients who would benefit from early ambulatory palliative care. abstract_id: PUBMED:8302688 Ambulatory pharmacy services affiliated with acute care hospitals. The extent to which hospital-based pharmacists provide ambulatory clinical pharmacy services in the United States is unknown. We evaluated pharmacists' activities in hospital-affiliated ambulatory clinics and home health services. A questionnaire was mailed to directors of pharmacy in one-half of the United States acute care general medical-surgical hospitals with 50 or more licensed beds. The survey response rate was 56% (n = 1174). In 19% of hospitals, pharmacists provided patient care (nondispensing activities) in ambulatory clinics. The most common clinics with pharmacist involvement were diabetes (10% of hospitals), oncology (9%), cardiology (6%), and geriatrics, infectious disease, and pain (4% each). Nondispensing roles varied by clinic type; prescribing by protocol was performed in 57% of anticoagulation clinics and 7% of diabetes clinics. Home health care services, with pharmacists' activity extending beyond providing drugs, were offered by 28% of the hospitals. Thirty-six percent of the hospitals operated one or more outpatient pharmacies. A statistically significant association was observed between hospitals' inpatient clinical pharmacy services (as assessed by the pharmaceutical care index) and the involvement of pharmacists in both ambulatory clinics and home health care services. abstract_id: PUBMED:10299707 Bibliography: ambulatory infusion therapy. Although clinical research involving ambulatory drug infusion therapy can be documented back to 1963, its widespread application outside major teaching centers has only come about in recent years.
This is largely due to advances in pump technology, which include the advent of totally implanted pumps and injection ports and the increased reliability and patient acceptance of external infusers. The therapeutic implications of using various drugs via this route of administration have also received greater attention. Many insights have been gained into the kinetics, stability, dosage, and usefulness of various agents administered continuously over extended periods. In an effort to keep abreast of available information regarding this therapeutic modality, an online bibliography is maintained in our drug information center. Contained within are 161 articles cross-referenced under 27 headings yielding over 360 citations. The citations are organized into three basic reference groups: (I) Administration method, (II) Target lesion, and (III) Cancer chemotherapeutic agent. "Target lesion" listings for nononcologic applications of ambulatory infusions include references to the following active agents: Dobutamine (cardiac function), Insulin (diabetes), Morphine, Bupivacaine (pain management), and Heparin (thromboembolic disease). Given that the majority of references describing ambulatory infusion therapy of metastatic colorectal carcinoma include floxuridine (FUDR) dosing guidelines, no attempt was made to include these citations within the floxuridine listing. All citations included under the "Colorectal cancer/hepatic lesions" listing, which contain FUDR dosing guidelines, are so indicated with an asterisk. All other FUDR citations are listed under the "Floxuridine" agent listing. abstract_id: PUBMED:32407900 Quality of life, anxiety, and postoperative complications of patients undergoing breast cancer surgery as ambulatory surgery compared to non-ambulatory surgery: A prospective non-randomized study. Purpose: According to the latest recommendations a minimally invasive approach should be used to manage breast cancer and a global policy for minimizing costs encourages shorter periods of hospitalization. The aim of this study was to investigate the impact of length of hospitalization on quality of life, anxiety and depression and postoperative complications. Methods: This is a prospective observational study of 412 female patients with breast cancer requiring a first mastectomy or lumpectomy to assess the impact of the length of hospitalization on quality of life (using the European Organization for Research and Treatment of Cancer Quality of Life QLQ30 and BR23 questionnaires) at postoperative day 14 (D+14), levels of anxiety at d-1 and D+1 (according to the Hospital Anxiety and Depression scale) and postoperative state at D+21. Results: Our study included 244 patients that had ambulatory surgery and 124 that had non-ambulatory surgery. Global health status was significantly better for ambulatory surgery patients (adjusted p-value=0.014). There were no significant differences between the two groups for levels of anxiety, pain, lymphoceles and postoperative complications. No cases of nausea and vomiting requiring medical treatment were reported for either group. Conclusions: Breast cancer surgery can be performed using ambulatory surgery with no significant differences compared to non-ambulatory surgery in terms of quality of life, perioperative anxiety, and postoperative complications. Indeed, our study suggests that ambulatory surgery improves patient outcome. 
It should be determined whether the mode of hospitalization has any long-term impact on the patient, as a shorter hospitalization period would allow decreasing waiting times. abstract_id: PUBMED:22651950 Assessing the quality of pain care in ambulatory patients with advanced stage cancer. Context: Pain is common among patients with advanced cancer despite the dissemination of clinical pain care guidelines. Objectives: We sought to assess the quality of pain care among patients with advanced disease. Methods: We reviewed the records of 85 adult ambulatory patients with advanced breast, lung, and gastrointestinal cancer treated in 2004-2006. Patients' screening pain intensity scores were at least 7 of 10. Nurse reviewers completed medical record reviews of care rendered at the index visit and over the subsequent 30 days based on the 2004 National Comprehensive Cancer Network pain guideline. An expert panel then rated the quality of the evaluation, treatment, and overall pain care. We used a multivariable model to analyze guideline compliance and resolution of severe pain. Results: Among advanced cancer patients with severe pain, clinicians adjusted pain medications only half the time and made few timely referrals for pain-related consultations. By 30 days after the index visit, 34% of patients continued to report severe pain. The expert panel judged the overall quality of pain care as "fair" or "poor" in about two-thirds of cases because more timely and effective intervention could have reduced the severity and duration of pain. Resolution of severe pain was associated with adjustment of pain medications at the index visit (adjusted odds ratio 3.8, 95% CI 1.3-10.6). Conclusion: There is room for improvement in the pain care of patients with advanced cancer. Additional research is needed to understand the reasons for poor performance. abstract_id: PUBMED:12003689 The role of a pharmacist in ambulatory cancer pain management. Cancer pain is progressive and complex. The multidimensional character of cancer pain requires comprehensive management by a multidisciplinary team of health care professionals. Pharmacotherapy is a cornerstone of cancer pain management. Pharmacists who are engaged in ambulatory cancer pain management can play a pivotal role in the pharmacotherapy of cancer pain by optimizing medication therapy, monitoring outcomes, enhancing adherence through patient education regarding drug use, pain and symptom control, educating other health professionals and students, and conducting research. To fully meet the therapeutic challenges of cancer pain, pharmacists need to improve their knowledge and attitudes about cancer pain and pain medications. Answer: The study "Screening for Pain in the Ambulatory Cancer Setting: Is 0-10 Enough?" aimed to explore the concordance between patient self-reports of pain on validated questionnaires and discussions of pain in the ambulatory oncology setting (PUBMED:26306620). The study included adult ambulatory patients with all stages of cancer and evaluated three pain measures: the Symptom Distress Scale frequency (SDSF) and intensity (SDSI), and the Pain Intensity Numeric Scale (PINS). Relevant pain was defined by scores on these scales or by discussion of existing pain in an audio-recorded clinic visit. The results showed that identification of relevant pain by self-report was discordant with discussed pain in audio-recorded visits for all three measures. 
The study concluded that self-report pain screening measures favored specificity over sensitivity: they rarely flagged patients without relevant pain, but they missed many patients whose pain was clinically relevant. It therefore indicates that a simple 0-10 intensity scale alone may not be sufficient for identifying relevant pain in ambulatory cancer patients, and that asking about pain frequency in addition to intensity, and reconsidering (lowering) the threshold scores used on intensity scales, are practical strategies to more accurately identify patients with cancer who have relevant pain. Accurate screening matters because unrecognized pain in the ambulatory oncology setting has a significant impact on patient quality of life and treatment outcomes.
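To make the threshold point concrete, the sketch below shows one standard way a cut-off on a 0-10 screening item can be re-examined from paired data (each patient's self-reported score together with whether clinically relevant pain was actually identified in the visit), by tabulating sensitivity and specificity at every candidate cut-off and summarizing the trade-off with Youden's J. The data are synthetic and the snippet only illustrates the general technique; it does not reproduce the analyses or datasets of the studies above (PUBMED:26306620, PUBMED:17959345).

import numpy as np

# Hypothetical paired data for 452 patients: a 0-10 self-reported pain score and a
# binary flag for whether clinically relevant pain was actually identified in clinic.
rng = np.random.default_rng(0)
score = rng.integers(0, 11, size=452)
relevant = (score + rng.normal(0, 2, size=452)) >= 5   # noisy stand-in for the reference standard

# Evaluate every candidate cut-off: "screen positive" means score >= cutoff.
for cutoff in range(1, 11):
    positive = score >= cutoff
    sens = np.mean(positive[relevant])       # true positives / all patients with relevant pain
    spec = np.mean(~positive[~relevant])     # true negatives / all patients without relevant pain
    youden_j = sens + spec - 1               # Youden's J favors a balanced cut-off
    print(f"cutoff >= {cutoff}: sensitivity {sens:.2f}, specificity {spec:.2f}, J {youden_j:.2f}")

In practice, the cut-off would be derived from real paired screening and interview data, and the clinical cost of false negatives (missed pain) would be weighed against that of false positives (unnecessary follow-up) before a lower threshold is adopted.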
Instruction: Should mean arterial pressure be included in the definition of ambulatory hypertension in children? Abstracts: abstract_id: PUBMED:23340855 Should mean arterial pressure be included in the definition of ambulatory hypertension in children? Background: The diagnosis of hypertension (HTN)/normotension (NT) on ambulatory blood pressure monitoring (ABPM) is usually based on systolic (SBP) or diastolic blood pressure (DBP). The goal of this study was to analyze whether inclusion of mean arterial pressure (MAP) improves the detection of HTN on ABPM. Methods: We retrospectively studied ABPM records in 229 children (116 boys, median age = 15.3 years) who were referred for evaluation of HTN. A diagnosis of HTN was made if: (A) MAP or SBP or DBP was ≥ 1.65 SDS (95th percentile); (B) SBP or DBP was ≥ 1.65 SDS (95th percentile), during 24-h or daytime or night-time in both definitions. Results: Using definition A, 46/229 patients had HTN compared to definition B by which only 37/229 patients had HTN (p = 0.001). The level of agreement between the two definitions was very good (kappa = 0.86 ± 0.04); however, nine patients (19.5%) were missed by not using MAP in the definition of HTN. These nine patients had only mild HTN with a median Z score of 1.69. Conclusions: The inclusion of MAP in the definition of ambulatory HTN significantly increased the number of hypertensive patients. MAP may be very helpful in detecting mild HTN in patients with normal/borderline SBP and DBP. abstract_id: PUBMED:32761266 Challenges of diagnosing pediatric hypertension using ambulatory blood pressure monitoring. Background: Ambulatory blood pressure monitoring (ABPM) measures mean arterial pressure (MAP) then extrapolates systolic and diastolic blood pressure (BP) values. Pediatric guidelines recommend using calculated systolic and diastolic BP rather than measured MAP for diagnosis of ambulatory hypertension (HTN). The 95th percentile BP that defines ambulatory HTN is higher in some children than thresholds used to define ambulatory HTN in adults. Methods: This is a retrospective study of patients who underwent 24-h ABPM. The level of agreement in ambulatory HTN diagnosis using MAP vs. systolic/diastolic BP was evaluated using Cohen's kappa coefficient. Similar analysis was done to assess agreement in HTN diagnosis using adult vs. pediatric criteria for males taller than 165 cm. Results: A total of 263 ABPM studies were included. There was good agreement for diagnosis of HTN using MAP or systolic/diastolic BP (k = 0.75; 95% CI: 0.67-0.83). However, there was disagreement between the methods in 12% (n = 31) of subjects. Similarly, there was good agreement (k = 0.70; 95% CI: 0.56-0.85) between pediatric and adult criteria for HTN diagnosis. Nineteen patients were found to be hypertensive (9 using MAP criteria, 10 using adult criteria) who would not have met ambulatory HTN criteria using current pediatric guidelines. Conclusions: Inclusion of MAP along with systolic and diastolic BP in ABPM analysis alongside using adult criteria for diagnosing HTN in male children ≥ 165 cm may improve accuracy of pediatric HTN diagnosis and reduce the false-negative rate. Larger studies are needed to assess the clinical validity of these results. abstract_id: PUBMED:32419418 Ambulatory arterial stiffness index is increased in obese children.
Background And Objectives: One way to measure arterial stiffness is the ambulatory arterial stiffness index (AASI), which is the relationship between diastolic and systolic ambulatory blood pressure (BP) over 24 hours. Methods: We studied the difference in AASI between obese and lean children. AASI was calculated from 24-hour ambulatory blood pressure monitoring in 53 obese children (33 girls) and compared with 42 age-matched healthy subjects (20 girls). Hypertension was defined according to the criteria of the American Heart Association. To evaluate inflammation, the blood level of high-sensitive C-reactive protein was measured. Results: The mean age was 10.6 ± 2.83 years in obese children and 11.3 ± 3.17 years in healthy subjects. Hypertension was determined in three (5.6%) obese children. The median heart rate-SDS, pulse pressure and blood pressure values did not differ between the two groups. The mean AASI was significantly higher in obese children compared to healthy subjects (0.42 ± 0.15 vs. 0.29 ± 0.18, p < 0.001). AASI significantly correlated with nighttime SBP-SDS, nighttime SBP-load, systolic and diastolic nocturnal dipping, with no independent predictor. Conclusion: This study confirms that AASI is increased in obese children. AASI calculation is a useful, cost-effective, and easy method to evaluate arterial stiffness. Early detection of increased arterial stiffness can help clinicians come up with preventive measures in the management of patients. abstract_id: PUBMED:32556954 Ambulatory blood pressure abnormalities in children with migraine. Background: Although there are data showing that the frequency of hypertension increases in adults with migraine, there has been no study on this subject in children. In this study, we aimed to evaluate the presence of hypertension in children with migraine by performing ambulatory blood pressure monitoring (ABPM). Methods: Thirty-seven children diagnosed with migraine and 30 healthy controls were evaluated between January 2015 and March 2016. Demographic data, clinical and laboratory features, and physical examination findings were recorded for both groups. Office blood pressure was measured for all children, and each also underwent ABPM. The two groups were compared in terms of ambulatory blood pressure parameters. Results: The mean age was 13.3 and 13.1 years and the proportion of females was 73% and 60% in the migraine and control groups, respectively. Although the frequency of hypertension was not higher, abnormal ABPM patterns were found to be significantly more frequent in the migraine group (migraine, 45.9%; control, 16.7%; p, 0.018). Nighttime mean arterial blood pressure, nighttime diastolic blood pressure, and non-dipping pattern were higher in children with migraine than those in the control group (p < 0.05). Conclusions: These results suggest that ambulatory blood pressure abnormalities may be present in almost half of patients with migraine. Therefore, we suggest that ABPM should be performed even if the office blood pressure measurements of children diagnosed with migraine are normal. abstract_id: PUBMED:35222581 Increased ambulatory arterial stiffness index and blood pressure load in normotensive obese patients. Objectives: It has been shown that blood pressure (BP) values measured in obese subjects are higher than in individuals with normal weight, even within normotensive limits. However, data concerning the Ambulatory Arterial Stiffness Index (AASI) and blood pressure load in normotensive obese subjects is lacking.
This study aimed to compare the ambulatory arterial stiffness index and blood pressure load in normotensive obese subjects and healthy controls. Methods: One hundred normotensive obese and one hundred normal weight subjects were included in this study. All subjects underwent 24-hour ambulatory blood pressure monitoring. Ambulatory arterial stiffness index was calculated from 24-hour ambulatory blood pressure monitoring records. Ambulatory arterial stiffness index was defined as one minus the regression slope of unedited 24-h diastolic on systolic blood pressures. Systolic blood pressure (SBP) and diastolic blood pressure (DBP) load values were calculated from 24-hour ambulatory blood pressure monitoring analysis. Results: Ambulatory arterial stiffness index of the obese subjects was significantly higher than that of the healthy controls (0.48±0.2 vs. 0.33±0.11, p<0.001). 24-hour systolic and diastolic blood pressure loads were significantly higher in obese subjects. Logistic regression analysis revealed that body mass index (BMI) was an independent predictor of an abnormal ambulatory arterial stiffness index (≥0.50) (OR: 1.137, 95% CI: 0.915-1.001, p=0.004). Conclusion: Blood pressure load and ambulatory arterial stiffness index are increased in normotensive obese patients. Moreover, body mass index is an independent predictor of an abnormal ambulatory arterial stiffness index. Our results indicate that obese subjects are at higher risk for future cardiovascular events despite normal office BP levels. abstract_id: PUBMED:26794338 Current clinical aspects of ambulatory blood pressure monitoring. Systemic arterial hypertension is the most prevalent disease worldwide that significantly increases cardiovascular risk. Early diagnosis, together with achieving treatment goals, significantly decreases the risk of complications. The diagnostic criteria for hypertension have recently been updated, along with the introduction of ambulatory blood pressure monitoring. Ambulatory blood pressure monitoring was introduced into clinical practice to assist the diagnosis of «white coat hypertension» and «masked hypertension». It has also been shown that ambulatory blood pressure monitoring is better than the traditional method of recording blood pressure in the office, both for diagnosis and for adequate control and adjustment of drug treatment. Important new concepts have also been introduced, such as isolated nocturnal hypertension, altered morning blood pressure elevation, and altered patterns of nocturnal dip in blood pressure, which have been associated with increased cardiovascular risk. Several studies have shown significant prognostic value for some of these parameters. Other concepts, such as hypertensive load variability, pulse pressure and arterial stiffness, still require further study before they can properly be introduced into clinical practice, as do reference values established through further clinical studies in populations such as the elderly and children. abstract_id: PUBMED:32672698 VALUE OF AMBULATORY BLOOD PRESSURE MONITORING IN THE VERIFICATION OF ARTERIAL HYPERTENSION IN SCHOOL AGE CHILDREN. Arterial hypertension is a common pathology in children of different ages. The introduction of daily monitoring of blood pressure into the practice of pediatric cardiologists makes it possible to more accurately establish a diagnosis, determine the prognosis of the course of the disease and monitor treatment of hypertension.
Objective - to assess the daily fluctuations of blood pressure in schoolchildren with arterial hypertension. Seventy children of school age were examined. The main group (38 children) included children with high blood pressure, and the control group included 32 clinically healthy children. All children underwent tonometry. The results for each child were evaluated using percentile nomograms for age, gender and height. Verification of the diagnosis of arterial hypertension was performed according to the recommendations of the American Academy of Pediatrics (AAP). In addition, children underwent ambulatory blood pressure monitoring. In 79% of children in the main group, the blood pressure level was assessed as stage 1 arterial hypertension, and in 21% of children as stage 2 arterial hypertension. On 24-hour monitoring, two peaks of systolic blood pressure were observed in 35 children (92.1%) of the main group: the first peak between 23:00 and 01:00 at night (from 5.5 to 18.8 mm Hg), and the second peak, in 28 children (73.7%), between 6:30 and 8:00 (from 6.8 to 10.1 mm Hg). At the same time, peaks in the level of diastolic blood pressure appeared in fewer children and were not as pronounced. In schoolchildren with stage 1 hypertension, a night peak was observed in 60% of children and a morning peak in 22% of children. Among children with stage 2 arterial hypertension, a night peak was observed in 100% and a morning peak in 72% of children. This suggests that the nocturnal peak of blood pressure may be a marker of the severity of arterial hypertension. In healthy children, there were no peaks in the rise in blood pressure. The presence of a non-dipper circadian profile in a school-age child, in combination with a morning and/or night peak of systolic blood pressure, can serve as a marker for the development of arterial hypertension. Therefore, such children should be considered at risk for the development of this pathology. abstract_id: PUBMED:18266183 Usefulness of ambulatory blood pressure monitoring in diagnosis of arterial hypertension in children and adolescents. Background: Arterial hypertension in children and adolescents is an important medical problem with a prevalence rising over the last ten years from 1 to 4.5%. Aim: To assess the usefulness of ambulatory blood pressure monitoring (ABPM) in diagnosis of arterial hypertension in children and adolescents. Methods: Two hundred and twelve children with elevated blood pressure (BP) and 81 healthy controls participated in this study. In all children from the study and control groups, standard BP measurement and ABPM were performed. Results: With the use of standard BP measurement, 168 (79.2%) children were diagnosed as hypertensive and the remaining 44 (20.8%) as prehypertensive. When ABPM was used, arterial hypertension was diagnosed in 143 (67.4%) cases and white coat hypertension in the remaining 69 (32.6%) subjects. In 7 (8.7%) control children, elevated BP on ABPM was detected and masked hypertension was diagnosed. Conclusions: 1. Ambulatory blood pressure monitoring is a useful tool in diagnosis of arterial hypertension in children and adolescents. 2. Systolic hypertension is a major form of hypertension in childhood. 3. Ambulatory blood pressure monitoring is helpful to identify patients with white coat hypertension. 4. Further studies are necessary to establish uniform indications, standards and rules for interpretation of ABPM in children and adolescents.
abstract_id: PUBMED:32644244 Automated office blood pressure is in agreement with awake and mean 24-hour ambulatory blood pressure at the lower blood pressure range. Automated office blood pressure measurement eliminates the white coat effect and is associated with awake ambulatory blood pressure. This study examined whether automated office blood pressure values at lower limits were comparable to those of awake and mean 24-hour ambulatory blood pressure. A total of 552 patients were included in the study, involving 293 (53.1%) men and 259 (46.9%) women, with a mean age of 55.0 ± 12.5 years, of whom 36% were treated for hypertension. Both systolic and diastolic automated office blood pressures exhibited lower values compared to awake ambulatory blood pressure among 254 individuals with systolic automated office blood pressure <130 mm Hg (119 ± 8 mm Hg vs 125 ± 11 mm Hg, P < .0001 and 75 ± 9 mm Hg vs 79 ± 9 mm Hg, P < .0001 for systolic and diastolic BPs, respectively). Furthermore, the comparison of systolic automated office blood pressure to the mean 24-hour ambulatory blood pressure levels also showed lower values (119 ± 8 vs 121 ± 10, P = .007), whereas the diastolic automated office blood pressure measurements were similar to 24-hour ambulatory blood pressure values. Our findings show that when automated office blood pressure readings express values <130/80 mm Hg in repeated office visits, further investigation should be performed only when masked hypertension is suspected; otherwise, higher automated office blood pressure values could be used for the diagnosis of uncontrolled hypertension, especially in individuals with organ damage. abstract_id: PUBMED:37975691 Arterial Hypertension in Pediatrics. Role of Ambulatory Blood Pressure Monitoring (ABPM). Hypertension is an important risk factor for cardiovascular disease. The prevalence of hypertension in children has risen in recent years, secondary to the increase in overweight and obesity in the pediatric population, among other factors. An adequate diagnosis and timely treatment can prevent the development of organ damage. The American Academy of Pediatrics recommends the performance of ambulatory blood pressure monitoring in children with risk factors or suspected hypertension, in order to confirm the diagnosis and guide treatment. Recently published recommendations simplify the interpretation; blood pressure load is no longer included in the diagnosis, so only 4 categories remain: Normotension, White Coat Hypertension, Masked Hypertension and Ambulatory Hypertension. They also propose single cut-off values for patients ≥ 13 years, similar to those recommended for adults in the 2017 American Heart Association guidelines. Both changes increase sensitivity for the diagnosis of organ damage. This article presents an update of the Chilean pediatric ambulatory blood pressure monitoring guidelines in order to unify criteria and achieve nationwide dissemination. It also reviews the ambulatory categories of blood pressure and describes target organ damage in children, including recommendations for the correct use of the exam, and proposes a report form. Answer: The inclusion of mean arterial pressure (MAP) in the definition of ambulatory hypertension in children has been a subject of research and debate. According to a study by Stabouli et al. (PUBMED:23340855), the inclusion of MAP in the definition of ambulatory hypertension significantly increased the number of hypertensive patients diagnosed.
This suggests that MAP can be helpful in detecting mild hypertension in patients with normal or borderline systolic blood pressure (SBP) and diastolic blood pressure (DBP). Another study by Seeman et al. (PUBMED:32761266) found good agreement for the diagnosis of hypertension using MAP or systolic/diastolic BP. However, there was disagreement between the methods in 12% of subjects, indicating that including MAP in ABPM analysis may improve the accuracy of pediatric hypertension diagnosis and reduce the false-negative rate. Furthermore, the study by Flynn et al. (PUBMED:37975691) suggests that the American Academy of Pediatrics recommends the performance of Ambulatory Blood Pressure Monitoring (ABPM) in children with risk factors or suspected hypertension to confirm the diagnosis and guide treatment. This indicates that ABPM, which measures MAP and extrapolates systolic and diastolic BP values, is considered a valuable tool in the diagnosis and management of pediatric hypertension. In conclusion, the evidence suggests that including MAP in the definition of ambulatory hypertension in children may improve the detection of hypertension and provide a more comprehensive assessment of a child's blood pressure profile. However, larger studies may be needed to assess the clinical validity of these results and to establish standardized guidelines for the inclusion of MAP in pediatric hypertension diagnosis (PUBMED:32761266).
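As a side note on methodology, the agreement reported in these studies is quantified with Cohen's kappa, which compares two binary classifications (here, hypertensive versus normotensive under a MAP-inclusive definition and under an SBP/DBP-only definition) while correcting for the agreement expected by chance. The snippet below is a minimal, hypothetical illustration of that calculation; the labels are invented and it does not use the patient data from PUBMED:23340855 or PUBMED:32761266.

# Hypothetical hypertension classifications for 12 children (1 = ambulatory hypertension, 0 = normotension).
by_sbp_dbp = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0]   # definition using SBP/DBP only
with_map   = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0]   # definition that also considers MAP

n = len(by_sbp_dbp)
p_observed = sum(a == b for a, b in zip(by_sbp_dbp, with_map)) / n
p_chance = (sum(by_sbp_dbp) / n) * (sum(with_map) / n) + \
           ((n - sum(by_sbp_dbp)) / n) * ((n - sum(with_map)) / n)
kappa = (p_observed - p_chance) / (1 - p_chance)   # Cohen's kappa: chance-corrected agreement

extra_cases = sum(m == 1 and s == 0 for s, m in zip(by_sbp_dbp, with_map))
print(f"kappa = {kappa:.2f}; {extra_cases} children classified as hypertensive only when MAP is included")

A kappa near 1 indicates that the two definitions classify almost all children identically, while the count of discordant cases corresponds to the patients (such as the nine children with mild hypertension in PUBMED:23340855) who are detected only when MAP is part of the definition.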
Instruction: Increases in food intake or food-seeking behavior induced by GABAergic, opioid, or dopaminergic stimulation of the nucleus accumbens: is it hunger? Abstracts: abstract_id: PUBMED:14598017 Increases in food intake or food-seeking behavior induced by GABAergic, opioid, or dopaminergic stimulation of the nucleus accumbens: is it hunger? Rationale: Previous work has shown that stimulation of GABAergic, opioid, or dopaminergic systems within the nucleus accumbens modulates food intake and food-seeking behavior. However, it is not known whether such stimulation mimics a motivational state of food deprivation that commonly enables animals to learn a new operant response to obtain food. Objectives: In order to address this question, acquisition of lever pressing for food in hungry animals was compared with acquisition in non-food-deprived rats subjected to various nucleus accumbens drug treatments. Methods: All animals were given the opportunity to learn an instrumental response (a lever press) to obtain a food pellet. Prior to training, ad lib-fed rats were infused with the gamma-aminobutyric acid (GABA)A agonist muscimol (100 ng/0.5 microl per side) or the mu-opioid receptor agonist D-Ala2, N-me-Phe4, Gly-ol5-enkephalin (DAMGO, 0.25 microg/0.5 microl per side), or saline into the nucleus accumbens shell (AcbSh). The indirect dopamine agonist amphetamine (10 microg/0.5 microl per side) was infused into the AcbSh or nucleus accumbens core (AcbC) of ad lib-fed rats. An additional group was food deprived and infused with saline in the AcbSh. Chow and sugar pellet intake responses after drug treatments were also evaluated in free-feeding tests. Results: Muscimol, DAMGO, or amphetamine did not facilitate acquisition of lever pressing for food, despite clearly increasing food intake in free-feeding tests. In contrast, food-deprived animals rapidly learned the task. Conclusions: These findings suggest that pharmacological stimulation of any of these neurochemical systems in isolation is insufficient to enable acquisition of a food-reinforced operant task. Thus, these selective processes, while likely involved in control of food intake and food-seeking behavior, appear unable to recapitulate the conditions necessary to mimic the state of negative energy balance. abstract_id: PUBMED:12708516 Nucleus accumbens opioid, GABaergic, and dopaminergic modulation of palatable food motivation: contrasting effects revealed by a progressive ratio study in the rat. The current studies were designed to evaluate whether incentive motivation for palatable food is altered after manipulations of opioid, GABAergic, and dopaminergic transmission within the nucleus accumbens. A progressive ratio schedule was used to measure lever-pressing for sugar pellets after microinfusion of drugs into the nucleus accumbens in non-food-deprived rats. The mu opioid agonist D-Ala2, NMe-Phe4, Glyo15-enkephalin and the indirect dopamine agonist amphetamine induced a marked increase in break point and correct lever-presses; the GABA(A) agonist muscimol did not affect breakpoint or lever-presses. The data suggest that opioid, dopaminergic, and GABAergic systems within the accumbens differentially modulate food-seeking behavior through mechanisms related to hedonic evaluation of food, incentive salience, and control of motor feeding circuits, respectively. abstract_id: PUBMED:21262268 Nucleus accumbens dopamine and mu-opioid receptors modulate the reinstatement of food-seeking behavior by food-associated cues. 
The high attrition rates for dietary interventions aimed at promoting a healthier body mass may be caused, at least in part, by constant exposure to environmental stimuli that are associated with palatable foods. In both humans and animals, conditioned stimuli (CSs) that signal reward availability reliably reinstate food- and drug-seeking behaviors. The nucleus accumbens (NAcc) is critically involved in the cue-evoked reinstatement of food-seeking, but the role of individual neurotransmitter systems within the NAcc remains to be determined. These experiments tested the effects of intra-accumbal pharmacological manipulations of dopamine (DA) D(1) and D(2) receptors, mu-opioid receptors, or serotonin (5-HT) receptors on cue-evoked relapse to food-seeking. Rats were trained to lever press for sucrose pellets and the concurrent presentation of a light-tone CS. Once training was complete, lever-pressing was extinguished in the absence of either sucrose or CS presentation. Once each rat had reached extinction criterion, they received two reinstatement sessions in which lever pressing was renewed by response-contingent presentation of the CS. Prior to each reinstatement test, rats received NAcc microinfusions of saline or the selective D(1) receptor antagonist SCH 23390, the D(2) receptor antagonist raclopride, the mu-opioid receptor agonist [D-Ala2, N-MePhe4, Gly-ol]-enkephalin (DAMGO), or 5-HT hydrogen maleate. Compared to saline test days, intra-accumbens infusions of SCH 23390 (1 μg/0.5 μL), raclopride (1 μg/0.5 μL), or DAMGO (0.25 μg/0.5 μL) effectively blocked the cue-evoked reinstatement of food-seeking. In contrast, stimulation of serotonin (5-HT) receptors by 5-HT hydrogen maleate (5 μg/0.5 μL) had no effect on cue-induced reinstatement. These novel data support roles for NAcc DA D(1), D(2), and mu-opioid receptors in the cue-evoked reinstatement of food seeking. abstract_id: PUBMED:36813210 Melanocortin-4 receptors on neurons in the parabrachial nucleus mediate inflammation-induced suppression of food-seeking behavior. Anorexia is a common symptom during infectious and inflammatory disease. Here we examined the role of melanocortin-4 receptors (MC4Rs) in inflammation-induced anorexia. Mice with transcriptional blockage of the MC4Rs displayed the same reduction of food intake following peripheral injection of lipopolysaccharide as wild type mice but were protected against the anorexic effect of the immune challenge in a test in which fasted animals were to use olfactory cues to find a hidden cookie. By using selective virus-mediated receptor re-expression we demonstrate that the suppression of the food-seeking behavior is subserved by MC4Rs in the brain stem parabrachial nucleus, a central hub for interoceptive information involved in the regulation of food intake. Furthermore, the selective expression of MC4R in the parabrachial nucleus also attenuated the body weight increase that characterizes MC4R KO mice. These data extend on the functions of the MC4Rs and show that MC4Rs in the parabrachial nucleus are critically involved in the anorexic response to peripheral inflammation but also contribute to body weight homeostasis during normal conditions. abstract_id: PUBMED:35083015 GABAergic neurons in the nucleus accumbens regulate hedonic food intake via orexin-A expression in the lateral hypothalamus. 
Objectives: To investigate the regulatory effects of the nucleus accumbens (NAcSh)-lateral hypothalamus (LHA) GABAergic neural pathway on palatable food (PF) intake via orexin-A expression in diet-induced obesity (DIO) rats. Materials And Methods: NAcSh-LHA GABAergic pathways were observed by fluorogold retrograde tracing combined with fluorescence immunohistochemistry, and the regulatory effects of this neural pathway on PF intake were detected after 1) microinjection of the GABA-A receptor agonist muscimol (MUS) or the antagonist bicuculline (BIC) into the LHA, 2) electrical stimulation of the NAcSh, and 3) blocking the orexin-A receptor by ICV SB334867. Results: Compared with rats on a normal diet (ND), NAcSh-LHA GABAergic neurons in the DIO rats were significantly decreased, and orexin-A expression in LHA significantly increased (P<0.05). Microinjection of MUS into LHA significantly decreased the PF intake in both ND and DIO rats (P<0.05), and BIC could markedly increase the PF intake in the ND rats (P<0.05), but not the DIO rats (P>0.05). After NAcSh electrical stimulation or SB334867 ICV injection, the PF intake was significantly decreased in the DIO rats (P<0.05), and there was no significant difference after preadministration of BIC into LHA (P>0.05). Conclusion: This GABAergic pathway could regulate the expression of orexin-A in LHA and PF intake. Orexin-A neurons in LHA of DIO rats might be less sensitive to GABAergic signals and may consequently lead to more hedonic food intake. abstract_id: PUBMED:29547121 Drosophila mushroom bodies integrate hunger and satiety signals to control innate food-seeking behavior. The fruit fly can evaluate its energy state and decide whether to pursue food-related cues. Here, we reveal that the mushroom body (MB) integrates hunger and satiety signals to control food-seeking behavior. We have discovered five pathways in the MB essential for hungry flies to locate and approach food. Blocking the MB-intrinsic Kenyon cells (KCs) and the MB output neurons (MBONs) in these pathways impairs food-seeking behavior. Starvation bi-directionally modulates MBON responses to a food odor, suggesting that hunger and satiety controls occur at the KC-to-MBON synapses. These controls are mediated by six types of dopaminergic neurons (DANs). By manipulating these DANs, we could inhibit food-seeking behavior in hungry flies or promote food seeking in fed flies. Finally, we show that the DANs potentially receive multiple inputs of hunger and satiety signals. This work demonstrates an information-rich central circuit in the fly brain that controls hunger-driven food-seeking behavior. abstract_id: PUBMED:21161187 New operant model of reinstatement of food-seeking behavior in mice. Rationale: A major problem in treating obesity is the high rate of relapse to abnormal food-taking behavior when maintaining diet. Objectives: The present study evaluates the reinstatement of extinguished palatable food-seeking behavior induced by cues previously associated with the palatable food, re-exposure to this food, or stress. The participation of the opioid and dopamine mechanisms in the acquisition, extinction, and cue-induced reinstatement was also investigated. Materials And Methods: C57BL/6 mice were first trained on a fixed-ratio-1 schedule of reinforcement to obtain chocolate-flavored pellets during 20 days, which was associated with a stimulus light. Operant behavior was then extinguished during 20 daily sessions.
mRNA levels of opioid peptide precursors and dopamine receptors were evaluated in the brain by in situ hybridization and RT-PCR techniques. Results: A reinstatement of food-seeking behavior was only obtained after exposure to the food-associated cue. A down-regulation of prodynorphin mRNA was found in the dorsal striatum and nucleus accumbens after the acquisition, extinction, and reinstatement of the operant behavior. Extinction and reinstatement of this operant response enhanced proenkephalin mRNA in the dorsal striatum and/or the nucleus accumbens core. Down-regulation of D2 receptor expression was observed in the dorsal striatum and nucleus accumbens after reinstatement. An up-regulation of PDYN mRNA expression was found in the hypothalamus after extinction and reinstatement. Conclusions: This study provides a new operant model in mice for the evaluation of food-taking behavior and reveals specific changes in the dopamine and opioid system associated to the behavioral responses directed to obtain a natural reward. abstract_id: PUBMED:23816798 PYY(3-36) into the arcuate nucleus inhibits food deprivation-induced increases in food hoarding and intake. Central administration of neuropeptide Y (NPY) increases food intake in laboratory rats and mice, as well as food foraging and hoarding in Siberian hamsters. The NPY-Y1 and Y5 receptors (Rs) within the hypothalamus appear sufficient to account for these increases in ingestive behaviors. Stimulation of NPY-Y2Rs in the Arcuate nucleus (Arc) has an anorexigenic effect as shown by central or peripheral administration of its natural ligand peptide YY (3-36) and pharmacological NPY-Y2R antagonism by BIIE0246 increases food intake. Both effects on food intake by NPY-Y2R agonism and antagonism are relatively short-lived lasting ∼4h. The role of NPY-Y2Rs in appetitive ingestive behaviors (food foraging/hoarding) is untested, however. Therefore, Siberians hamsters, a natural food hoarder, were housed in a semi-natural burrow/foraging system that had (a) foraging requirement (10 revolutions/pellet), no free food (true foraging group), (b) no running wheel access, free food (general malaise control) or (c) running wheel access, free food (exercise control). We microinjected BIIE0246 (antagonist) and PYY(3-36) (agonist) into the Arc to test the role of NPY-Y2Rs there on ingestive behaviors. Food foraging, hoarding, and intake were not affected by Arc BIIE0246 microinjection in fed hamsters 1, 2, 4, and 24h post injection. Stimulation of NPY-Y2Rs by PYY(3-36) inhibited food intake at 0-1 and 1-2h and food hoarding at 1-2h without causing general malaise or affecting foraging. Collectively, these results implicate a sufficiency, but not necessity, of the Arc NPY-Y2R in the inhibition of food intake and food hoarding by Siberian hamsters. abstract_id: PUBMED:9580643 Intake of high-fat food is selectively enhanced by mu opioid receptor stimulation within the nucleus accumbens. The present study was designed to further investigate the nature of feeding induced by opioid stimulation of the nucleus accumbens through an examination of the effects of intra-accumbens (ACB) opioids on macronutrient selection. In 3-hr tests of free-feeding (satiated) rats, intra-ACB administration of the mu receptor agonist D-Ala2,N,Me-Phe4, Gly-ol5-enkephalin (DAMGO; 0, 0.025, 0.25 and 2.5 micrograms bilaterally) markedly enhanced the intake of fat or carbohydrate when the diets were presented individually (although the effect on fat intake was much greater in magnitude). 
Intra-ACB injections of DAMGO, however, produced potent preferential stimulatory effects on fat ingestion with no effect on carbohydrate ingestion when both fat and carbohydrate diets were present simultaneously. Moreover, this selective stimulation of fat intake was independent of base-line diet preference and could be blocked by systemic injection of naltrexone (5 mg/kg). We also examined the effect of 24-hr food deprivation on the pattern of macronutrient intake in rats with access to both carbohydrate and fat. In contrast to the DAMGO-induced selective enhancement of fat intake, food deprivation significantly increased the intake of both diets to the same extent; however, in this case, only the stimulated fat intake was blocked by systemic naltrexone. Intra-ACB administration of DAMGO in hungry rats produced an effect similar to that observed in free-feeding rats; preference was strongly shifted to fat intake. Similarly, the opioid antagonist naltrexone (20 micrograms) infused directly into ACB preferentially decreased fat intake in hungry rats. These findings suggest that endogenous opioids within the ventral striatum may participate in the mechanisms governing preferences for highly palatable foods, especially those rich in fat. abstract_id: PUBMED:33333134 The neurobiology of abstinence-induced reward-seeking in males and females. Drugs of abuse and highly palatable foods (e.g. high fat or sweet foods) have powerful reinforcing effects, which can lead to compulsive and addictive drives to ingest these substances to the point of psychopathology and self-harm--specifically the development of Substance Use Disorder (SUD) and obesity. Both SUD and binge-like overeating can be defined as disorders in which the salience of the reward (food or drug) becomes exaggerated relative to, and at the expense of, other rewards that promote well-being. A major roadblock in the treatment of these disorders is high rates of relapse after periods of abstinence. It is common, although not universal, for cue-induced craving to increase over time with abstinence, often triggered by cues previously paired with the reinforcing substance. Accumulating evidence suggests that similar neural circuits and cellular mechanisms contribute to abstinence-induced and cue-triggered seeking of drugs and palatable food. Although much research has focused on the important role of corticolimbic circuitry in drug-seeking, our goal is to expand focus to the more recently explored hypothalamic-thalamic-striatal circuitry. Specifically, we review how connections, and neurotransmitters therein, among the lateral hypothalamus, paraventricular nucleus of the thalamus, and the nucleus accumbens contribute to abstinence-induced opioid- and (high fat or sweet) food-seeking. Given that biological sex and gonadal hormones have been implicated in addictive behavior across species, another layer to this review is to compare behaviors and neural circuit-based mechanisms of abstinence-induced opioid- or food-seeking between males and females when such data is available. Answer: The question of whether increases in food intake or food-seeking behavior induced by GABAergic, opioid, or dopaminergic stimulation of the nucleus accumbens is equivalent to hunger is addressed in the provided abstracts. The studies explore the effects of pharmacological stimulation of these neurochemical systems within the nucleus accumbens on food-related behaviors. 
One study found that while muscimol (a GABA_A agonist), DAMGO (a mu-opioid receptor agonist), or amphetamine (an indirect dopamine agonist) increased food intake in free-feeding tests, they did not facilitate the acquisition of lever pressing for food in non-food-deprived rats. In contrast, food-deprived animals rapidly learned the task, suggesting that pharmacological stimulation of these systems in isolation does not mimic the motivational state of food deprivation that enables animals to learn a new operant response to obtain food (PUBMED:14598017). Another study using a progressive ratio schedule to measure lever-pressing for sugar pellets in non-food-deprived rats found that the mu opioid agonist and the indirect dopamine agonist induced a marked increase in breakpoint and correct lever-presses, while the GABA_A agonist muscimol did not affect these measures. This suggests that opioid and dopaminergic systems within the nucleus accumbens differentially modulate food-seeking behavior, potentially through mechanisms related to the hedonic evaluation of food and incentive salience (PUBMED:12708516). Further research indicates that nucleus accumbens dopamine and mu-opioid receptors modulate the reinstatement of food-seeking behavior by food-associated cues, supporting their roles in cue-evoked reinstatement of food seeking (PUBMED:21262268). In summary, the findings suggest that while GABAergic, opioid, or dopaminergic stimulation of the nucleus accumbens can increase food intake or food-seeking behavior, these effects do not necessarily equate to the complex motivational state of hunger that is typically associated with negative energy balance and the need to acquire food. Instead, these neurochemical systems may be more closely related to the hedonic and incentive aspects of feeding behavior rather than hunger per se.
Instruction: Are pediatric emergency medicine training programs adequately preparing graduates for involvement in EMS? Abstracts: abstract_id: PUBMED:11138878 Are pediatric emergency medicine training programs adequately preparing graduates for involvement in EMS? Objective: To examine the level of involvement in pre-hospital care for children by faculty and fellows of teaching hospitals with a Pediatric Emergency Medicine (PEM) fellowship. In addition, we hypothesized that a divisional faculty member's involvement as principal investigator (PI) on an EMSC grant would not impact divisional involvement in on or off-line medical direction. Methods: Design: Cross-sectional national survey. Participants: PEM fellowship directors. Interventions: Self-administered questionnaire. Statistics: Descriptive and Chi-square analysis to study null hypothesis. Results: The response rate to the survey was 62% (53/85). Of the programs responding, 53 % provided on-line pediatric medical direction for pre-hospital providers, 77% were involved with paramedic education other than PALS, and 58% of systems had pediatric specific protocols. In 87 % of the programs, a designated faculty member functioned as an EMSC liaison. A division faculty member was or had been the PI on an EMSC grant in 18 programs (34%). There was no significant difference in the provision of on or off-line medical direction comparing programs with or without involvement in an EMSC grant. Only 34% of the responding program directors felt that the current level of exposure to EMS was adequate for PEM fellow training. Conclusions: The current level of involvement in EMS of PEM faculty and fellows has significant room for improvement. It does not appear that grant support translates into increased local involvement in EMS. Current PEM fellowship curriculum guidelines for training in EMS are not being met by the majority of responding training programs. abstract_id: PUBMED:37814741 Resident Physicians' Knowledge of Emergency Medical Services: A Comparison Between Emergency Medicine and Non-Emergency Medicine Resident Physicians. Background and objective Emergency medical services (EMS) are often assumed to only involve bringing patients to physicians for treatment in the emergency department. However, EMS staff are also responsible for responding to physicians in the primary care setting when medical emergencies arise. While emergency medicine (EM) residents are exposed to EMS as part of their curriculum, little is known about the knowledge of other resident physicians who may interact with EMS. In light of this, we conducted this study to address the scarcity of data related to this topic. Methods A quantitative cross-sectional knowledge assessment was conducted among resident physicians in emergency medicine, internal medicine, family medicine, pediatric, and combined medicine and pediatric residencies at the Penn State Milton S. Hershey Medical Center. Results Eighteen EM residents and 26 non-EM residents completed the assessment. The EM residents had a higher average score when compared to non-emergency medicine residents (69.2% vs. 53.8%, p=0.0012). Conclusion Variations in scores between EM and other specialties that interact with EMS highlight the need for further training and familiarization related to EMS for residents in non-EM specialties. abstract_id: PUBMED:17307622 A survey on the graduates from the combined emergency medicine/pediatric residency programs. 
The guidelines for dual training in Emergency Medicine (EM) and Pediatrics over a 5-year program have long existed. Many have questioned the benefit of such training in relation to either specialty and in relation to Pediatric Emergency Medicine (PEM) sub-specialty training. We report on the professional outcome, career focus, and job satisfaction of these graduates. Surveys were returned from 91% (n = 29) of graduates, all of whom reported completing either of the two combined training programs. All respondents reported practicing in an emergency medicine setting either with or without an additional pediatric emphasis. Fifty-nine percent reported an academic EM affiliation. Almost all (96.5%) would choose to repeat combined training and all reported they would recommend the combined program to medical students interested in Pediatrics and EM. Combined graduates report a high level of satisfaction with their training and overwhelmingly would recommend such training to medical students. Combined graduates seem to universally work in an ED setting, although a number maintain their pediatric involvement. Over half of the graduates participate in academic EM. abstract_id: PUBMED:7978554 Proposed core content for pediatric emergency medicine fellowship training of emergency medicine graduates. The establishment of pediatric emergency medicine as a subspecialty of emergency medicine has engendered the need for closer examination and development of guidelines for fellowship training. Core content and curriculum documents pertaining to fellowship training in pediatric emergency medicine for pediatric graduates have been published previously. However, the educational needs of emergency medicine graduates for such training are significantly different from those of pediatric graduates in several important respects. We believe that emergency physicians should take an active role in the creation and refinement of educational guidelines for fellowship training in pediatric emergency medicine for emergency medicine graduates. For this reason, we present a proposed core content outline in the hope that it will serve to foster this process. abstract_id: PUBMED:27503190 A Survey of Graduates of Combined Emergency Medicine-Pediatrics Residency Programs: An Update. Background: In 1998, emergency medicine-pediatrics (EM-PEDS) graduates were no longer eligible for the pediatric emergency medicine (PEM) sub-board certification examination. There is a paucity of guidance regarding the various training options for medical students who are interested in PEM. Objectives: We sought to determine attitudes and personal satisfaction of graduates from EM-PEDS combined training programs. Methods: We surveyed 71 graduates from three EM-PEDS residencies in the United States. Results: All respondents consider their combined training to be an asset when seeking a job, 92% find it to be an asset to their career, and 88% think it provided added flexibility to job searches. The most commonly reported shortcoming was their ineligibility for the PEM sub-board certification. The lack of this designation was perceived to be a detriment to securing academic positions in dedicated children's hospitals. When surveyed regarding which training offers the better skill set for the practice of PEM, 90% (44/49) stated combined EM-PEDS training.
When asked which training track gives them the better professional advancement in PEM, 52% (23/44) chose combined EM-PEDS residency, 27% (12/44) chose a pediatrics residency followed by a PEM fellowship, and 25% (11/44) chose an EM residency then a PEM fellowship. No EM-PEDS respondents considered PEM fellowship training after the completion of the dual training program. Conclusion: EM-PEDS graduates found combined training to be an asset in their career. They felt that it provided flexibility in job searches, and that it was ideal training for the skill set required for the practice of PEM. EM-PEDS graduates' practices varied, including mixed settings, free-standing children's hospitals, and community emergency departments. abstract_id: PUBMED:2757277 Evaluation of EMS management training offered during emergency medicine residency training. Physician involvement in the provision of both direct and indirect medical control to emergency medical providers is critical to the effective operation of an emergency medical services (EMS) system. We conducted a survey of all accredited emergency medicine residency programs in the United States to determine the content of EMS instruction provided to these physicians-in-training. The majority of programs provide an introduction to direct medical control, to EMS organizational structure, and the opportunity to participate in EMS-related research. Less than 65%, however, provide formal instruction in EMS risk management or quality assurance or the opportunity to observe policy-making bodies related to EMS. The importance placed on EMS during residency training is variable. EMS is the domain of emergency medicine, and adequate training of residents for these responsibilities is imperative. abstract_id: PUBMED:30924437 Resident Involvement in Tactical Medicine: 12 Years Later. Introduction: Interest in tactical medicine, the provision of medical support to law enforcement and military special operations teams, continues to grow. The majority of tactical physicians are emergency physicians with additional training and experience in tactical operations. A 2005 survey found that 18% of responding Emergency Medicine (EM) residencies offered their resident physicians structured exposure to tactical medicine at that time. Methods: This study sought to assess interval changes in tactical medicine exposure during EM residency and Emergency Medical Services (EMS) fellowship training. A secure online survey was distributed electronically to all 212 EM residency programs and 44 EMS fellowship programs in the United States. Results: Responses were received from 99 (46%) EM residency and 40 (91%) EMS fellowship programs. Results showed that 52 (53%) of the responding residencies offered physician trainees formal exposure to tactical medicine as part of their training (P < .0001 compared to 18% in 2005). In addition, 32 (72%) of the 40 responding EMS fellowships (newly established since the initial survey) offered this opportunity. Experiences ranged from observation to active participation during tactical training and call-outs. The EM residents and EMS fellows provide support to local, state, and federal law enforcement agencies. A small number of programs (six residencies and four fellowships) allowed a subset of qualified trainees to be armed during tactical operations. Conclusion: Overall, training opportunities in tactical medicine have grown significantly over the last decade from 18% to 53% of responding EM residencies.
In addition, 72% of responding EMS fellowships incorporate tactical medicine in their training program. Petit NP, Stopyra JP, Padilla RA, Bozeman WP. Resident involvement in tactical medicine: 12 years later. Prehosp Disaster Med. 2019;34(2):217-219. abstract_id: PUBMED:37417034 The paucity of pediatric emergency medicine fellowship training programs in Africa. Sub-Saharan Africa has the highest burden of childhood and adolescent mortality in the world. The leading causes of mortality in pediatric populations in Africa include preterm birth complications, pneumonia, malaria, diarrheal diseases, HIV/AIDS, and road injuries. These causes of childhood and adolescent mortality often lead to emergency room utilization due to critical presentation, placing emphasis on the importance of pediatric emergency services in Africa. Despite the criticality of pediatric emergency medicine (PEM) in the region, there is a paucity of PEM training programs in Africa. Ongoing interventions focused on addressing the poor access to PEM training and services include isolated efforts to provide PEM-specific training to nonemergency medicine (EM)-trained practitioners and expand current EM training to include PEM piloted in a single center in Kenya. Sustainable efforts require organized efforts with government and graduate medical education bodies. We discuss the existing infrastructure that can be utilized in promoting the establishment of PEM training programs and urge local governments' investment as well as other stakeholders, including graduate medical education, to address the issue of childhood mortality in Africa through the improved provision and access to PEM training. abstract_id: PUBMED:18353602 Pediatric emergency medicine fellowships: faculty and resident training profiles. The objective of this study was to evaluate the faculty and graduate training profiles of Pediatric Emergency Medicine (PEM) fellowship training programs. An electronic 10-point questionnaire was sent to 57 PEM fellowship directors, with a 70% response rate. Analysis of the individual certification of faculty members in PEM training programs demonstrated that the largest represented training types were general pediatricians and pediatricians with PEM sub-certification (29% and 62% representation, respectively). The remaining faculty types consistently showed < 5% overall involvement. Reported estimates on faculty delivery of clinical training, didactic training, and procedural skills demonstrated that pediatricians sub-board certified in PEM consistently administered the highest percentage of these skill sets (74%, 68%, and 68%, respectively). Emergency Medicine-trained physicians showed a relative increase of involvement in fellowship programs administered by Emergency Medicine departments and in those programs located within adult hospitals. Yet, this involvement still remained substantially lower than that of the pediatric-type faculty. Program directors of fellowships within pediatric hospitals and those administered by Pediatric programs demonstrated a preference for general pediatricians with sub-board certification in PEM to improve their faculty pools. Program directors of fellowship programs located in adult hospitals and those administered by departments of EM demonstrated no preference in training type. Lastly, program directors report that 95% of past graduates received their primary board certification through Pediatrics and only 5% received their primary board certification through Emergency Medicine.
There are currently many more pediatric-trained physicians among PEM fellowship faculty and graduates. This survey has demonstrated that there has been a decline in EM-trained physicians involved in PEM fellowships since 2000. abstract_id: PUBMED:7596880 Fellowship training in pediatric emergency medicine for graduates of emergency medicine residencies. Candidates for fellowship training in Pediatric Emergency Medicine may be residency-trained in pediatrics or emergency medicine. The fellows who are emergency medicine-trained have different needs than those with pediatric training. Training programs for these individuals should focus on normal and abnormal development, behavioral pediatrics, pediatric subspecialties, and resuscitation of newborns and children. In addition, training in anesthesia, pediatric intensive care, and transport medicine are also important components of the curriculum. All fellowships should include research methodology and protected time for research activities. Because of the variability in training, fellows with residency training in emergency medicine should receive a core curriculum with the remainder of the training time tailored to meet their specific training needs in pediatrics. This report offers a template for this training. Answer: The abstracts provided offer a mixed perspective on whether pediatric emergency medicine (PEM) training programs are adequately preparing graduates for involvement in emergency medical services (EMS). According to the survey conducted in abstract PUBMED:11138878, the current level of involvement in EMS by PEM faculty and fellows has significant room for improvement. The study found that only 53% of programs provided on-line pediatric medical direction for pre-hospital providers, and only 34% of responding program directors felt that the current level of exposure to EMS was adequate for PEM fellow training. This suggests that many PEM training programs may not be meeting the curriculum guidelines for training in EMS. In contrast, abstract PUBMED:7978554 discusses the proposed core content for PEM fellowship training of emergency medicine graduates, indicating an awareness of the need for specific guidelines and training in PEM for emergency medicine graduates. This suggests that there is an effort to tailor the training to the needs of emergency medicine graduates, which could potentially improve their preparedness for EMS involvement. Abstract PUBMED:27503190 provides an update on graduates from combined emergency medicine-pediatrics residency programs, indicating that these graduates found their training to be an asset in their career and felt it provided flexibility in job searches. However, the lack of eligibility for PEM sub-board certification was perceived as a detriment to securing academic positions in dedicated children's hospitals. Abstract PUBMED:18353602 highlights that the majority of PEM fellowship faculty and graduates are primarily trained in pediatrics rather than emergency medicine, which may influence the focus and delivery of training in these programs. In summary, while there are efforts to improve and tailor PEM training for EMS involvement, the evidence suggests that there is still a need for further enhancement of these training programs to ensure that graduates are adequately prepared for their roles in EMS (PUBMED:11138878, PUBMED:7978554, PUBMED:27503190, PUBMED:18353602).
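The survey comparisons summarized above (for example, provision of on-line medical direction in programs with versus without an EMSC-grant principal investigator, PUBMED:11138878) rest on chi-square tests of 2x2 tables. The sketch below shows how such a test is typically run; the margins roughly match the reported totals (18 grant-involved programs, about 53% providing on-line direction), but the cell-level split is not given in the abstract, so the counts are illustrative only.

```python
# Illustrative chi-square test for the kind of 2x2 comparison described in
# PUBMED:11138878 (EMSC grant involvement vs. provision of on-line medical
# direction). Cell counts are hypothetical: the margins approximate the
# reported totals, but the actual cross-tabulation is not in the abstract.
from scipy.stats import chi2_contingency

observed = [
    [10, 8],   # programs with a divisional PI on an EMSC grant: direction yes / no
    [18, 17],  # programs without such a PI: direction yes / no
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")
# A p-value well above 0.05 is consistent with the study's conclusion that
# grant involvement did not translate into more local EMS medical direction.
```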
Instruction: Changes in hospital competitive strategy: a new medical arms race? Abstracts: abstract_id: PUBMED:12650375 Changes in hospital competitive strategy: a new medical arms race? Objective: To describe changes in hospitals' competitive strategies, specifically the relative emphasis placed on strategies for competing along price and nonprice (i.e., service, amenities, perceived quality) dimensions, and the reasons for any observed shifts. Methods: This study uses data gathered through the Community Tracking Study site visits, a longitudinal study of a nationally representative sample of 12 U.S. communities. Research teams visited each of these communities every two years since 1996 and conducted between 50 to 90 semistructured interviews. Additional information on hospital competition and strategy was gathered from secondary data. Principal Findings: We found that hospitals' strategic emphasis changed significantly between 1996-1997 and 2000-2001. In the mid-1990s, hospitals primarily competed on price through "wholesale" strategies (i.e., providing services attractive to managed care plans). By 2000-2001, nonprice competition was becoming increasingly important and hospitals were reviving "retail" strategies (i.e., providing services attractive to individual physicians and the patients they serve). Three major factors explain this shift in hospital strategy: less than anticipated selective contracting and capitated payment; the freeing up of hospital resources previously devoted to horizontal and vertical integration strategies; and, the emergence and growth of new competitors. Conclusion: Renewed emphasis on nonprice competition and retail strategies, and the service mimicking and one-upmanship that result, suggest that a new medical arms race is emerging. However, there are important differences between the medical arms race today and the one that occurred in the 1970s and early 1980s: the hospital market is more concentrated and price competition remains relatively important. The development of a new medical arms race has significant research and policy implications. abstract_id: PUBMED:33411086 Does hospital competition lead to medical equipment expansion? Evidence on the medical arms race. With the implementation of a series of pro-competition policies in China, the hospital market competition has been intensified dramatically over the past decade. Based on previous literature, such competition is very much likely to bring about an upgoing trend in the promotion and expansion of medical facilities among hospitals as an essential strategy for attracting patients, which is known as Medical Arms Race (MAR). Comprehensive evaluations have been conducted by previous studies on the consequences of the MAR, which, however, merely provided inadequate empirical evidence on the relationship between hospital competition and MAR. Utilizing the variations in hospital competition across various regions and through different time periods in Sichuan Province as a prototype representative of the nationwide situation, a dynamic panel data model was established and adopted in this study for investigating whether intensified hospital competition had resulted in the expansion of medical facilities in China during the corresponding time period. The geopolitical boundaries and Herfindahl-Hirschman Index (HHI) were respectively employed to define the hospital market and measure the competition degree. 
We found that a 10% reduction in HHI is associated with an 8.79% increase in regional total costs of advanced medical equipment per capita, suggesting that hospital competition would lead to medical equipment expansion. Our results provide novel evidence on MAR which is particularly applicable for the healthcare system in China, providing suggestions for nationwide healthcare reform in order to mitigate potential negative outcomes induced by the implementation of pro-competition policies. abstract_id: PUBMED:36567370 Impacts of the medical arms race on medical expenses: a public hospital-based study in Shenzhen, China, during 2009-2013. Background: Has the medical arms race (MAR) increased healthcare expenditures? Existing literature has yet to draw a consistent conclusion. Hence, this study aims to reexamine the relationship between the MAR and medical expenses by the data from public hospitals in Shenzhen, China, during the period of 2009 to 2013. Methods: This study's data were collected through panel datasets spanning 2009 to 2013 from the Shenzhen Statistical Yearbook, Shenzhen Health Statistical Yearbook, and annual reports from the Shenzhen Municipal Health Commission. The Herfindahl-Hirschman index and hierarchical linear modeling were combined for empirical analysis. Results: The MAR's impact on medical examination fees differed during the inpatient and outpatient stages. Further analysis verified that the MAR had the most significant impact on outpatient examination fees. Due to the characteristics of China's medical system, government regulations in the healthcare market may consequently accelerate the MAR among public hospitals. Strict government regulations on the medical system have also promoted increased medical examination costs to some extent. Once medical service prices are under strict administrative control, only drug and medical examination fees are the primary forms of extra income for hospitals. After the proportion of drug fees is further regulated, medical examinations will then become another staple method to generate extra revenue. These have distorted Chinese public hospitals' medical fees, which completely differ from those in other countries. Conclusion: The government should confirm that they have allocated sufficient financial investments for public hospitals; otherwise, the competition among hospitals will transfer the burden to patients, and especially to those who can afford to pay for care. A core task for public hospitals involves providing safer, less expensive, and more reliable medical services. abstract_id: PUBMED:10115537 Service patterns in local hospital markets: complementarity or medical arms race? In this paper we investigate how the availability of selected services in individual hospitals is influenced by the number of neighbors who are potential or actual competitors. Hospitals offer a wide range of services, and some services may display a complementary pattern while others may contribute to a competitive medical arms race. We will examine the types of services that show these opposite patterns to analyze the dynamics that have characterized local hospital service decisions. abstract_id: PUBMED:30715314 The medical arms race and its impact in Chinese hospitals: implications for health regulation and planning. The rapid diffusion of medical technologies is widely recognized as a key driver of healthcare cost escalation. The excessive duplication of technologies gives rise to the so-called medical arms race. 
Conventional wisdom tends to explain this phenomenon by external reimbursement mechanisms and hospitals' competitive strategies, but has largely neglected the role played by health regulations that may also affect hospitals' technology adoption decisions. This study sheds new light on the medical arms race with evidence from China, which has witnessed an unprecedented expansion of big tertiary hospitals and a keen pursuit of expensive medical technologies. Chinese hospitals aggressively pursue high-tech medical equipment as an opportunistic reaction to the peculiar health regulatory environment. By analysing a panel dataset collected from Shenzhen City, this study reveals a series of important impacts of the medical arms race in Chinese public hospitals. High-tech medical equipment is found to lead to an increase in hospital revenues and patient volumes, but no significant impact is noted on unit costs. While high-tech medical equipment is associated with a discernible improvement in clinical outcomes, no contribution to hospitals' operational efficiency is noted. These findings are interpreted in the context of the broader health regulatory framework and China's public hospital reforms. abstract_id: PUBMED:24822021 An arms race between producers and scroungers can drive the evolution of social cognition. The "social intelligence hypothesis" states that the need to cope with complexities of social life has driven the evolution of advanced cognitive abilities. It is usually invoked in the context of challenges arising from complex intragroup structures, hierarchies, and alliances. However, a fundamental aspect of group living remains largely unexplored as a driving force in cognitive evolution: the competition between individuals searching for resources (producers) and conspecifics that parasitize their findings (scroungers). In populations of social foragers, abilities that enable scroungers to steal by outsmarting producers, and those allowing producers to prevent theft by outsmarting scroungers, are likely to be beneficial and may fuel a cognitive arms race. Using analytical theory and agent-based simulations, we present a general model for such a race that is driven by the producer-scrounger game and show that the race's plausibility is dramatically affected by the nature of the evolving abilities. If scrounging and scrounging avoidance rely on separate, strategy-specific cognitive abilities, arms races are short-lived and have a limited effect on cognition. However, general cognitive abilities that facilitate both scrounging and scrounging avoidance undergo stable, long-lasting arms races. Thus, ubiquitous foraging interactions may lead to the evolution of general cognitive abilities in social animals, without the requirement of complex intragroup structures. abstract_id: PUBMED:29928294 Short- and long-term evolution in our arms race with cancer: Why the war on cancer is winnable. Human society is engaged in an arms race against cancer, which pits one evolutionary process-human cultural evolution as we develop novel cancer therapies-against another evolutionary process-the ability of oncogenic selection operating among cancer cells to select for lineages that are resistant to our therapies. Cancer cells have a powerful ability to evolve resistance over the short term, leading to patient relapse following an initial period of apparent treatment efficacy. 
However, we are the beneficiaries of a fundamental asymmetry in our arms race against cancer: Whereas our cultural evolution is a long-term and continuous process, resistance evolution in cancer cells operates only over the short term and is discontinuous - all resistance adaptations are lost each time a cancer patient dies. Thus, our cultural adaptations are permanent, whereas cancer's genetic adaptations are ephemeral. Consequently, over the long term, there is good reason to expect that we will emerge as the winners in our war against cancer. abstract_id: PUBMED:32774881 The geographic mosaic of arms race coevolution is closely matched to prey population structure. Reciprocal adaptation is the hallmark of arms race coevolution. Local coadaptation between natural enemies should generate a geographic mosaic pattern where both species have roughly matched abilities across their shared range. However, mosaic variation in ecologically relevant traits can also arise from processes unrelated to reciprocal selection, such as population structure or local environmental conditions. We tested whether these alternative processes can account for trait variation in the geographic mosaic of arms race coevolution between resistant garter snakes (Thamnophis sirtalis) and toxic newts (Taricha granulosa). We found that predator resistance and prey toxin levels are functionally matched in co-occurring populations, suggesting that mosaic variation in the armaments of both species results from the local pressures of reciprocal selection. By the same token, phenotypic and genetic variation in snake resistance deviates from neutral expectations of population genetic differentiation, showing a clear signature of adaptation to local toxin levels in newts. Contrastingly, newt toxin levels are best predicted by genetic differentiation among newt populations, and to a lesser extent, by the local environment and snake resistance. Exaggerated armaments suggest that coevolution occurs in certain hotspots, but prey population structure seems to be of particular influence on local phenotypic variation in both species throughout the geographic mosaic. Our results imply that processes other than reciprocal selection, like historical biogeography and environmental pressures, represent an important source of variation in the geographic mosaic of coevolution. Such a pattern supports the role of "trait remixing" in the geographic mosaic theory, the process by which non-adaptive forces dictate spatial variation in the interactions among species. abstract_id: PUBMED:10118404 Boston building boom--is it an arms race? A massive healthcare construction boom is under way in downtown Boston, prompting some observers to declare it a "medical arms race." Others, however, say the new projects are critical to some hospitals' ability to remain competitive. Whatever the case may be, there's big money involved--some $1.3 billion in projects that already have been started or soon will be. abstract_id: PUBMED:35167315 Toxic Relationships and Arms-Race Coevolution Revisited. Toxin evolution in animals is one of the most fascinating and complex subjects of scientific inquiry today. Gaining an understanding of toxins poses a multifaceted challenge given the diverse modes of acquisition, evolutionary adaptations, and abiotic components that affect toxin phenotypes. 
Here, we highlight some of the main genetic and ecological factors that influence toxin evolution and discuss the role of antagonistic interactions and coevolutionary dynamics in shaping the direction and extent of toxicity and resistance in animals. We focus on toxic Pacific newts (family Salamandridae, genus Taricha) as a system to investigate and better evaluate the widely distributed toxin they possess, tetrodotoxin (TTX), and the hypothesized model of arms-race coevolution with snake predators that is used to explain phenotypic patterns of newt toxicity. Finally, we propose an alternative coevolutionary model that incorporates TTX-producing bacteria and draws from an elicitor-receptor concept to explain TTX evolution and ecology. Answer: The concept of a "medical arms race" refers to the competition among hospitals to acquire the latest medical technologies and services to attract patients and physicians, often leading to escalating costs without necessarily improving the quality of care. The studies provided offer insights into this phenomenon from different perspectives. In the mid-1990s, hospitals primarily competed on price through strategies aimed at managed care plans. By 2000-2001, the emphasis shifted towards nonprice competition, with hospitals adopting "retail" strategies to appeal to individual physicians and patients. This shift was attributed to factors such as less selective contracting and capitated payment than expected, the release of resources previously tied up in integration strategies, and the emergence of new competitors. This new emphasis on nonprice competition and service one-upmanship suggests the emergence of a new medical arms race, albeit with important differences from the past, such as a more concentrated hospital market and the continued relevance of price competition (PUBMED:12650375). In China, hospital competition has intensified due to pro-competition policies, leading to an expansion of medical facilities as hospitals strive to attract patients. A study using data from Sichuan Province found that increased hospital competition was associated with a significant increase in the cost of advanced medical equipment per capita, providing empirical evidence of a medical arms race in the Chinese healthcare system (PUBMED:33411086). Another study from Shenzhen, China, examined the impact of the medical arms race on medical expenses, finding that it significantly affected outpatient examination fees. The study suggested that government regulations in the healthcare market might accelerate the medical arms race among public hospitals, leading to increased medical examination costs. The study concluded that without sufficient government investment in public hospitals, the competition could transfer the financial burden to patients (PUBMED:36567370). The rapid diffusion of medical technologies in China has been driven by hospitals' aggressive pursuit of high-tech equipment as a response to the health regulatory environment. This has led to increased hospital revenues and patient volumes, but not necessarily to improvements in operational efficiency. The study suggests that health regulations and public hospital reforms play a crucial role in the medical arms race (PUBMED:30715314). In summary, the changes in hospital competitive strategy indicate a new medical arms race characterized by increased nonprice competition and investment in medical technologies and services. 
This trend has implications for healthcare costs and the need for regulatory and policy interventions to ensure that competition leads to improved healthcare outcomes rather than just increased expenses.
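The competition measure used in the Chinese studies above, the Herfindahl-Hirschman Index, is simply the sum of squared market shares across the hospitals in a defined market: values near 1 (or 10,000 when shares are expressed as percentages) indicate concentration, and lower values indicate stronger competition. The sketch below is a minimal illustration of that calculation; the hospital shares are invented for the example, and the closing comment only restates the association reported in PUBMED:33411086 rather than reproducing its panel model.

```python
# Minimal Herfindahl-Hirschman Index (HHI) calculation for hypothetical
# hospital markets. Shares are each hospital's fraction of the market
# (admissions, beds, or revenue); the numbers below are illustrative only.
def hhi(shares):
    total = sum(shares)
    normalized = [s / total for s in shares]   # tolerate shares that do not sum to 1
    return sum(s ** 2 for s in normalized)     # ranges from 1/n (even split) to 1 (monopoly)

concentrated = hhi([0.60, 0.25, 0.15])             # one dominant hospital
competitive = hhi([0.25, 0.25, 0.20, 0.15, 0.15])  # more evenly split market

print(f"concentrated market HHI = {concentrated:.3f}")  # ~0.445
print(f"competitive market HHI  = {competitive:.3f}")   # ~0.210

# PUBMED:33411086 reports an association estimated from a dynamic panel model:
# a 10% fall in regional HHI (i.e., more competition) went together with an
# 8.79% rise in per-capita spending on advanced medical equipment.
```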
Instruction: Cross-cultural medical education: can patient-centered cultural competency training be effective in non-Western countries? Abstracts: abstract_id: PUBMED:18777429 Cross-cultural medical education: can patient-centered cultural competency training be effective in non-Western countries? Background: No evidence addresses the effectiveness of patient-centered cultural competence training in non-Western settings. Aims: To examine whether a patient-centered cultural competency curriculum improves medical students' skills in eliciting the patients' perspective and exploring illness-related social factors. Method: Fifty-seven medical students in Taiwan were randomly assigned to either the control (n = 27) or one of two intervention groups: basic (n = 15) and extensive (n = 15). Both intervention groups received two 2-hour patient-centered cultural competency workshops. In addition, the extensive intervention group received a 2-hour practice session. The control group received no training. Results: At the end of the clerkship, all students were evaluated with an objective structured clinical examination (OSCE). Students in the extensive intervention group scored significantly higher than the basic intervention and control groups in eliciting the patient's perspective (F = 18.38, p < 0.001, η² = 0.40). Scores of both intervention groups were significantly higher than the control group in exploring social factors (F = 6.66, p = 0.003, η² = 0.20). Conclusion: Patient-centered cultural competency training can produce improvement in medical students' cross-cultural communication skills in non-Western settings, especially when adequate practice is provided. abstract_id: PUBMED:32148073 Scoping Review of Economical, Efficient, and Effective Cultural Competency Measures. Identifying practical and effective tools to evaluate the efficacy of cultural competency (cc) training in medicine continues to be a challenge. Multiple measures of various lengths and stages of psychometric testing exist, but none have emerged as a "gold standard." This review attempts to identify cc measures with potential to economically, efficiently, and effectively provide insight regarding the value of cc training efforts to make it easier for wider audience utilization. A scoping review of 11 online reference databases/search engines initially yielded 9,626 items mentioning cc measures. After the initial review, focus was placed on measures that assessed cultural competence of medical students, residents, and/or attending physicians. Six measures were identified and reviewed: (1) Cross-Cultural Care Survey, (2) Cultural Competence Health Practitioner Assessment, (3) Cultural Humility Scale, (4) Health Beliefs Attitudes Survey, (5) Tool for Assessing Cultural Competency Training, and (6) the Tucker-Culturally Sensitive Health-Care Provider Inventory. Relevant literature documenting use and current psychometric assessments for each measure were noted. Each measure was found to be of value for its particular purpose but needs more rigorous reliability and validity testing. A commitment to include psychometric assessments should be an expected part of studies utilizing these measures. abstract_id: PUBMED:28676414 Cultural Competency Training in Emergency Medicine. Background: The Emergency Department is widely regarded as the epicenter of medical care for diverse and largely disparate types of patients.
Physicians must be aware of the cultural diversity of their patient population to appropriately address their medical needs. A better understanding of residency preparedness in cultural competency can lead to better training opportunities and patient care. Objective: The objective of this study was to assess residency and faculty exposure to formal cultural competency programs and assess future needs for diversity education. Methods: A short survey was sent to all 168 Accreditation Council for Graduate Medical Education program directors through the Council of Emergency Medicine Residency Directors listserv. The survey included drop-down options in addition to open-ended input. Descriptive and bivariate analyses were used to analyze data. Results: The response rate was 43.5% (73/168). Of the 68.5% (50/73) of residency programs that include cultural competency education, 90% (45/50) utilized structured didactics. Of these programs, 86.0% (43/50) included race and ethnicity education, whereas only 40.0% (20/50) included education on patients with limited English proficiency. Resident comfort with cultural competency was unmeasured by most programs (83.6%: 61/73). Of all respondents, 93.2% (68/73) were interested in a universal open-source cultural competency curriculum. Conclusions: The majority of the programs in our sample have formal resident didactics on cultural competency. Some faculty members also receive cultural competency training. There are gaps, however, in types of cultural competency training, and many programs have expressed interest in a universal open-source tool to improve cultural competency for Emergency Medicine residents. abstract_id: PUBMED:29793905 The Need for Cultural Competency in Health Care. Purpose: To highlight the importance of cultural competency education in health care and in the medical imaging industry. Methods: A comprehensive search of the Education Resource Information Center and MEDLINE databases was conducted to acquire full-text and peer-reviewed articles relating to cultural competency training in health care. Results: A total of 1008 academic journal articles and 3 books were identified for this literature review. Search criteria were narrowed to peer-reviewed articles published between 2000 and 2016, resulting in 24 articles. A majority of the research studies addressed cultural competency education in allied health professions, as well as psychology and athletic training. Recent research studies pertaining to the cultural competence of imaging professionals were not found. Discussion: Research shows that the behaviors of health care providers can contribute to health disparities. National standards have been established to promote patient-centered care that reduces or eliminates health disparities in the U.S. population. Lectures and training sessions help professionals maintain these standards, but they might not be adequate. Health care workers need to interact and work with diverse patient populations to increase their empathy and become culturally competent. Conclusion: A patient-centered care approach that responds to patients' unique needs and reduces health disparities among diverse patient populations can be achieved by training culturally competent health care professionals. More research is needed to determine the nature of cultural competency education taught in radiography programs. abstract_id: PUBMED:34330392 The association between students' emotional intelligence, cultural competency, and cultural awareness.
Introduction: Emphasis has been placed on health professionals' employment of social and behavioral skills to negotiate complex patient-clinician relationships. One example is a professional's ability to provide culturally appropriate care. This study evaluated the relationship between pharmacy students' cultural awareness, emotional intelligence, and their ability to engage in appropriate cross-cultural interactions as measured by a cultural competency scale. Methods: A cross-sectional study was conducted in first-year pharmacy students using three distinct survey instruments to measure cultural awareness, emotional intelligence, and cultural competence. Demographic characteristics assessed included gender, race, ethnicity, and previous cultural competency training. Descriptive statistics were used to characterize performance on each survey instrument. Pearson's correlation was used to evaluate the statistical significance of associations observed between the variables measured within the study. Results: Forty-four students responded, of which 34% had previous cultural competency training. No statistically significant associations were observed between overall cultural competence, emotional intelligence, or cultural awareness. The self-cultural scale (part of the cultural awareness scale) was significantly related to higher overall emotional intelligence scores (P = .02). Previous cultural competency training was associated with significantly higher scores on the cultural competence scale (P = .004). Previous cultural competency training was also associated with enhanced ability to perceive one's own emotions as measured by the emotional intelligence scale (P = .02). Conclusions: Previous exposure to cultural competency training impacts cultural competence scores most significantly. abstract_id: PUBMED:30283831 Measuring Medical Students' Preparedness and Skills to Provide Cross-Cultural Care. Purpose: Cross-cultural education is an integral and required part of undergraduate medical curricula. However, the teaching of cross-cultural care varies widely and methods of evaluation are lacking. We sought to better understand medical students' perspectives on their own cultural competency across the 4-year curriculum using a validated survey instrument. Methods: We conducted an annual Internet-based survey at Harvard Medical School with students in all 4 years of training, for four consecutive years. We used a tool previously validated with residents and slightly modified it for medical students, assessing their (1) preparedness, (2) skillfulness, and (3) perspectives on the educational curriculum and learning climate. Results: Of 2592 possible survey responses, we received 1561 (60% response rate). Fourth-year students had significantly higher scores than first-year students (p < 0.001) for all but one preparedness item (caring for transgender patients) and all but one skillfulness item (identifying ability to read/write English). Less than 50% of students felt adequately prepared/skilled by their fourth year on 8 of 11 preparedness items and 5 of 10 skillfulness items. Lack of practical experience caring for diverse patients was the most frequently cited challenge. Conclusions: While students reported that preparedness and skillfulness to care for culturally diverse patients seem to increase with training, fourth-year students still felt inadequately prepared and skilled in many important aspects of cross-cultural care.
Medical schools can use this tool with students to self-assess cultural competency and to help guide enhancements to their curricula focusing on cross-cultural care. abstract_id: PUBMED:19717367 A framework for enhancing and assessing cultural competency training. The globalization of medical practice using accepted evidence-based approaches is matched by a growing trend for shared curricula in medicine and other health professions across international boundaries. Interest in the common challenges of curricular design, delivery and assessment is expressed in conferences and dialogues focused on topics such as teaching of professionalism, humanism, integrative medicine, bioethics and cultural competence. The spirit of collaboration, sharing, acknowledgment and mutual respect is a guiding principle in cross-cultural teaching. This paper uses the Tool for Assessing Cultural Competency Training to explore methods for designing and implementing cultural competency curricula. The intent is to identify elements shared across institutional, national and cross-cultural borders and derive common principles for the assessment of learners and the curricula. Two examples of integrating new content into existing clerkships are provided to guide educators interested in an integrated and learner-centered approach to assimilate cultural competency teaching into existing required courses, clerkships and elective experiences. The paper follows an overarching principle that "every patient-doctor encounter is a cross-cultural encounter", whether based on ethnicity, age, socioeconomic status, sex, religious values, disability, sexual orientation or other differences; and whether the differences are explicit or implicit. abstract_id: PUBMED:37614855 Cultural competency education in the medical curriculum to overcome health care disparities. Background: Our increasingly diverse population demands the adoption of transcultural approaches to health care delivery. Training courses in medical education have been developed across the country for cultural competency, but have not been standardized or incorporated consistently. This study sought to formulate an educational intervention in medical training using the concepts of cultural competency and humility to improve understanding of cultural disparities in health care. Methods: This study used three domains of Tools for Assessing Cultural Competence Training (TACCT) by the Association of American Medical Colleges. Participants included 106 fourth-year medical students and 19 internal medicine residents at Louisiana State University in Shreveport in 2022. The training session included a lecture introducing cultural and structural competency for 30 minutes followed by three workshops based on the TACCT domains of key aspects of cultural competence, understanding the impact of stereotyping on medical decision-making, and cross-cultural clinical skills. The participants were given a pre- and postsession questionnaire. Results: After the session, 68% of students rated their understanding of cultural competency as excellent. For methods of teaching-lecture versus workshop versus both-66% rated the combination as excellent. Conclusion: The rudimentary understanding of cultural competency and cultural humility improved after the session. abstract_id: PUBMED:27818848 An Examination of Cultural Competence Training in US Medical Education Guided by the Tool for Assessing Cultural Competence Training. 
In the United States, medical students must demonstrate a standard level of "cultural competence," upon graduation. Cultural competence is most often defined as a set of congruent behaviors, attitudes, and policies that come together in a system, organization, or among professionals that enables effective work in cross-cultural situations. The Association of American Medical Colleges developed the Tool for Assessing Cultural Competence Training (TACCT) to assist schools in developing and evaluating cultural competence curricula to meet these requirements. This review uses the TACCT as a guideline to describe and assess pedagogical approaches to cultural competence training in US medical education and identify content gaps and opportunities for curriculum improvement. A total of 18 programs are assessed. Findings support previous research that cultural competence training can improve the knowledge, attitudes, and skills of medical trainees. However, wide variation in the conceptualization, implementation, and evaluation of cultural competence training programs exists, leading to differences in training quality and outcomes. More research is needed to establish optimal approaches to implementing and evaluating cultural competence training that incorporate cultural humility, the social determinants of health, and broader structural competency within the medical system. abstract_id: PUBMED:28725822 Could Values and Social Structures in Singapore Facilitate Attainment of Patient-Focused, Cultural, and Linguistic Competency Standards in a Patient-Centered Medical Home Pilot? Primary care practices in the United States are transforming into patient-centered medical homes (PCMHs) at a rapid pace. Newer PCMH standards have emphasized culturally and linguistically appropriate services (CLAS), but at this time, only some states in the United States have proposed or passed cultural competency training for health care professionals. Other countries are moving to PCMH models. Singapore, a small, ethnically diverse island nation, has national values and social structures that emphasize cultural and linguistic cohesion. In this piece, we examine Singapore's first PCMH pilot with a national academic center and primary care practice group. Features such as common shared values, self-reliance, racial and religious harmony, patient experience surveillance, and incorporation of CLAS standards in routine health care transactions may predict success for the PCMH in Singapore, with some implications for the United States. Answer: Yes, patient-centered cultural competency training can be effective in non-Western countries. A study conducted in Taiwan with medical students demonstrated that such training can improve students' skills in eliciting the patient's perspective and exploring illness-related social factors. In this study, medical students who received patient-centered cultural competency workshops, especially those with an additional practice session, scored significantly higher in these areas compared to those who did not receive any training (PUBMED:18777429). This suggests that with adequate practice, patient-centered cultural competency training can lead to improvements in cross-cultural communication skills in non-Western settings.
Instruction: Ropivacaine versus bupivacaine 0.125% with fentanyl 1 microg/ml for epidural labour analgesia: is daily practice more important than pharmaceutical choice? Abstracts: abstract_id: PUBMED:16617757 Ropivacaine versus bupivacaine 0.125% with fentanyl 1 microg/ml for epidural labour analgesia: is daily practice more important than pharmaceutical choice? Unlabelled: Ropivacaine might be superior to bupivcaine for epidural labour analgesia because it appears to induce less lower extremity motor blockade. The clinical relevance of this difference is not yet clear. Methods: In a double-blind randomised trial bupivacaine and ropivacaine each at 0.125% with 1 microg/ml fentanyl were compared for epidural labour analgesia. This study was performed in two university hospitals. Results: Sixty-three nulliparous women with singleton pregnancies at term were included. There were no differences between bupivacaine and ropivacaine as far as motor blockade, analgesic outcome, mode of delivery and neonatal outcome are concerned. However, the clinical management of epidural analgesia differed significantly between the two institutions involved. Parturients of one institution had their epidural catheter placed earlier, needed less top-up medication, and had more successful mobilisations, when compared to the other institution. Conclusions: Institutional clinical practice can be significantly different. Pharmacological differences between bupivacaine and ropivacaine at 0.125% with 1 microg/ml fentanyl seem to be less important than differences between institutions in terms of clinical practice. abstract_id: PUBMED:11309010 Ropivacaine 1 mg/ml, plus fentanyl 2 microg/ml for epidural analgesia during labour. Is mode of administration important? Background: Patient-controlled epidural analgesia (PCEA) with a moderate to high concentration of bupivacaine in obstetrics has been shown to give comparable analgesia and even higher level of satisfaction compared to continuous epidural infusion. We hypothesised that the use of a very low concentration technique (ropivacaine/fentanyl) might result in excessive dosing in the PCEA group, more motor blockade and a negative impact on spontaneous delivery rate. Methods: We conducted a randomised, double-blind study of 60 nulliparous women at term comparing low concentration ropivacaine/fentanyl administered in either patient-controlled or fixed continuous infusion mode. Parturients with known predictors of painful deliveries, i.e. breech presentation, primary induction of labour, were not included. Deliveries within 90 min from the start of epidural analgesia were omitted from the evaluation. Results: We found that both groups required a mean of 12 ml/h low concentration mixture (loading and midwife rescue boluses included). There was no difference between groups with respect to spontaneous delivery rate (71%). This low concentration technique resulted in haemodynamic stability without crystalloid preloading, infusion or vasopressor use. Motor blockade of clinical importance was not detected in any patient. Conclusion: We conclude that epidural use of ropivacaine 1 mg/ml+fentanyl 2 microg/ml provides effective analgesia with equal volume requirements irrespective of administration mode, with a high spontaneous delivery rate. Choice of PCEA or CEI (continuous epidural infusion) should be directed by other considerations, most importantly compliance of midwife and possible reduction in workload for anaesthesiology staff. 
abstract_id: PUBMED:10958089 Comparison of ropivacaine 0.1%-fentanyl and bupivacaine 0.125%-fentanyl infusions for epidural labour analgesia. Purpose: To compare analgesic efficacies of ropivacaine-fentanyl and bupivacaine-fentanyl infusions for labour epidural analgesia. Methods: In this double-blind, randomized study, 100 term nulliparous women were enrolled. Lumbar epidural analgesia (LEA) was started at cervical dilatation < 5 cm using either bupivacaine 0.25% followed by bupivacaine 0.125% + 2 microg x ml(-1) fentanyl infusion (n=50) or ropivacaine 0.2% followed by ropivacaine 0.1% + 2 microg x ml(-1) fentanyl infusion (n=50). Every hour maternal vital signs, visual analog scale (VAS) pain score, sensory levels, and motor block (Bromage score) were assessed. Data were expressed as mean +/- 1 SD and analyzed using Chi-squared and Mann-Whitney U tests, with significance set at < 0.05. Results: The onset times were 10.62 +/- 4.9 and 11.3 +/- 4.7 min for the bupivacaine and ropivacaine groups respectively (P = NS). The median VAS scores were not different between the groups at any of the evaluation periods. However, at least 80% of patients in the ropivacaine group had no demonstrable motor block after the first hour compared with only 55% of patients given bupivacaine (P = 0.01). Conclusions: Both bupivacaine and ropivacaine produce satisfactory labour analgesia. However, ropivacaine infusion is associated with less motor block throughout the first stage of labour and at 10 cm dilatation. abstract_id: PUBMED:29643619 Comparison of continuous epidural infusion of 0.125% ropivacaine with 1 μg/ml fentanyl versus 0.125% bupivacaine with 1 μg/ml fentanyl for postoperative analgesia in major abdominal surgery. Background And Aim: The present study was carried out to compare the efficacy of continuous epidural infusion of two amide local anesthetics, ropivacaine and bupivacaine with fentanyl for postoperative analgesia in major abdominal surgeries. Material And Methods: A total of 60 patients scheduled for major abdominal surgery were randomized into two study Groups B and R with thirty patients in each group. All patients were administered general anesthesia after placement of the epidural catheter. Patients received continuous epidural infusion of either 0.25% bupivacaine with 1 μg/ml fentanyl (Group B) or of 0.25% ropivacaine with 1 μg/ml fentanyl (Group R) at a rate of 6 ml/h intraoperatively. Postoperatively, they received 0.125% bupivacaine with 1 μg/ml fentanyl (Group B) or 0.125% ropivacaine with 1 μg/ml fentanyl (Group R) at a rate of 6 ml/h. Hemodynamic parameters, visual analog scale (VAS), level of sensory block, and degree of motor block (based on Bromage scale) were monitored for 24 h postoperatively. Results: Hemodynamic parameters and VAS scores were comparable in the two groups. The level of sensory block was higher in the bupivacaine group. There were more patients with a higher Bromage score in the bupivacaine group (23.3%) than in the ropivacaine group (6.7%), though the difference was not statistically significant. Conclusion: Both ropivacaine and bupivacaine at a concentration of 0.125% with fentanyl 1 μg/ml are equally safe, produce minimal motor block, and are effective in providing postoperative analgesia.
The aim was to elucidate the efficacy of ropivacaine 0.05% and bupivacaine 0.05%, which were both combined with fentanyl 0.00015% to provide analgesia in labour. Methods: Forty nulliparous females were enrolled in the study. After insertion of an epidural catheter, patients were randomly assigned into two groups. Once the os uteri had dilated to 4-5 cm, a bolus of bupivacaine 0.125% 10 mL + fentanyl 50 microg (1 mL) in Group 1 patients, and ropivacaine 0.125% 10 mL + fentanyl 50 microg (1 mL) in Group 2 patients was administered via the epidural catheter. Then, patient-controlled epidural analgesia was started with a basal infusion of bupivacaine 0.05% 10 mL h(-1) + fentanyl 0.00015% 1.5 microg mL(-1) in Group 1, and ropivacaine 0.05% + fentanyl 1.5 microg mL(-1) in Group 2. When needed, a 10 mL bolus infusion could be given and the lockout time was 20 min. Maternal and fetal haemodynamic variables were monitored before induction and subsequently at 5 min intervals. The degree of pain was assessed using a visual analogue scale. Results: Maternal haemodynamic variables and Apgar scores were not different between the two groups. The second stage of labour was shorter in Group 2 (P < 0.01). There were no significant differences in patients' assessment of motor block or mode of delivery between groups. Conclusions: An epidural infusion (10 mL h(-1)) of bupivacaine 0.05% or ropivacaine 0.05% together with fentanyl 1.5 microg mL(-1) provided good and safe analgesia during labour. abstract_id: PUBMED:10702449 A comparison of epidural analgesia with 0.125% ropivacaine with fentanyl versus 0.125% bupivacaine with fentanyl during labor. Unlabelled: We previously found that the extent of an epidural motor block produced by 0.125% ropivacaine was clinically indistinguishable from 0.125% bupivacaine in laboring patients. By adding fentanyl to the 0.125% ropivacaine and bupivacaine solutions in an attempt to reduce hourly local anesthetic requirements, we hypothesized that differences in motor block produced by the two drugs may become apparent. Fifty laboring women were randomized to receive either 0.125% ropivacaine with fentanyl 2 microg/mL or an equivalent concentration of bupivacaine/fentanyl using patient-controlled epidural analgesia (PCEA) with settings of: 6-mL/hr basal rate, 5-mL bolus, 10-min lockout, 30-mL/h dose limit. Analgesia, local anesthetic use, motor block, patient satisfaction, and side effects were assessed until the time of delivery. No differences in verbal pain scores, local anesthetic use, patient satisfaction, or side effects between groups were observed; however, patients administered ropivacaine/fentanyl developed significantly less motor block than patients administered bupivacaine/fentanyl. Ropivacaine 0.125% with fentanyl 2 microg/mL produces similar labor analgesia with significantly less motor block than an equivalent concentration of bupivacaine/fentanyl. Whether this statistical reduction in motor block improves clinical outcome or is applicable to anesthesia practices which do not use the PCEA technique remains to be determined. Implications: By using a patient-controlled epidural analgesia technique, ropivacaine 0.125% with fentanyl 2 microg/mL produces similar analgesia with significantly less motor block than a similar concentration of bupivacaine with fentanyl during labor.
Whether this statistical reduction in motor block improves clinical outcome or is applicable to anesthesia practices which do not use the patient-controlled epidural analgesia technique remains to be determined. abstract_id: PUBMED:12074415 Comparison of bupivacaine 0.2% and ropivacaine 0.2% combined with fentanyl for epidural analgesia during labour. Background And Objective: Recent clinical studies comparing ropivacaine 0.25% with bupivacaine 0.25% reported not only comparable analgesia, but also comparable motor block for epidural analgesia during labour. An opioid can be combined with local anaesthetic to reduce the incidence of side-effects and to improve analgesia for the relief of labour pain. The purpose of the study was to evaluate the effects of epidural bupivacaine 0.2% compared with ropivacaine 0.2% combined with fentanyl for the initiation and maintenance of analgesia during labour and delivery. Methods: Sixty labouring nulliparous women were randomly allocated to receive either bupivacaine 0.2% with fentanyl 2 microg mL(-1) (B/F), or ropivacaine 0.2% with fentanyl 2 microg mL(-1) (R/F). For the initiation of epidural analgesia, 8 mL of the study solution was administered. Supplemental analgesia was obtained with 4 mL of the study solution according to parturients' needs when their pain was ≥ 4 on a visual analogue scale. Analgesia, hourly local anaesthetic use, motor block, patient satisfaction and side-effects between groups were evaluated during labour and at delivery. Results: Sixty patients were enrolled and 53 completed the study. No differences in verbal pain scores, hourly local anaesthetic use or patient satisfaction between groups were observed. However, motor block was observed in 10 patients in the B/F group whereas only two patients had motor block in the R/F group (P < 0.05). The incidence of instrumental delivery was also higher in the B/F group than in the R/F group (P < 0.05). Conclusions: The results suggest that epidural bupivacaine 0.2% and ropivacaine 0.2% combined with fentanyl produced equivalent analgesia for pain relief during labour and delivery. It is concluded that ropivacaine 0.2% combined with fentanyl 2 microg mL(-1) provided effective analgesia with significantly less motor block and need for an instrumental delivery than a bupivacaine/fentanyl combination at the same concentrations during labour and delivery. abstract_id: PUBMED:11772824 Ropivacaine 0.075% and bupivacaine 0.075% with fentanyl 2 microg/mL are equivalent for labor epidural analgesia. Unlabelled: Fifty percent effective dose estimates for ropivacaine and bupivacaine suggest that ropivacaine is 40% less potent than bupivacaine to initiate labor analgesia. At clinically used concentrations, however, the drugs seem indistinguishable for initiating and maintaining labor analgesia. We designed this study to evaluate a concentration near the reported 50% effective dose values for ropivacaine and bupivacaine in an attempt to detect differences between the drugs during routine clinical use. Fifty-nine nulliparous women in labor were randomized to receive 0.075% ropivacaine or bupivacaine, each with fentanyl 2 microg/mL. After epidural placement and the administration of a lidocaine/epinephrine test dose, 20 mL of study solution was administered and a patient-controlled epidural infusion was initiated with the following settings: 6 mL/h basal rate, 5 mL bolus, 10 min lockout, and 30 mL/h limit. Breakthrough pain was treated with 10-mL boluses of study solution.
By using a study design to detect a 40% difference in hourly drug use between groups, we found no statistically significant differences in the amount of local anesthetic used, verbal pain scores, sensory levels, motor blockade, labor duration, mode of delivery, side effects, or patient satisfaction. We conclude that 0.075% ropivacaine and bupivacaine, with fentanyl, are equally effective for labor analgesia using the patient-controlled epidural analgesia technique. Implications: At small concentrations, ropivacaine and bupivacaine when combined with fentanyl are equally effective for labor analgesia. Patients self-administered similar volumes of 0.075% ropivacaine or bupivacaine solutions containing fentanyl (2 microg/mL), suggesting that at this concentration, and with the addition of fentanyl, ropivacaine and bupivacaine can be used interchangeably. abstract_id: PUBMED:11070964 Comparison of 0.125% bupivacaine and 0.2% ropivacaine in obstetric analgesia This comparative study of low doses of ropivacaine was conducted in order to identify the most effective form of analgesia during labour with the aid of supplementary low doses of fentanyl and clonidine. 60 ASA I and II primiparous parturients who had asked for epidural analgesia were randomly assigned to two groups. Group R was given 5-7 ml 0.2% ropivacaine and Group B 0.125% bupivacaine, with both groups receiving 75 microg clonidine and 50 microg fentanyl with their first bolus of local anaesthetic. The parameters measured included the speed and spread of the sensory blockade and the scale of any motor blockade. The maternal haemodynamics and VAS pain relief scores were also measured at 30-minute intervals during labour, and all side-effects (nausea, vomiting, localised or generalised itching, headache, etc.) were also monitored. Apgar scores were recorded, and the administration of further local anaesthetics and other drugs was decided on the basis of the VAS score (a further dose was given to women with a VAS of > 3-4). The study was completed by a telephone interview 6 months after delivery and the data were analysed using the Student's t-test and the chi-squared test. The analgesic effect was satisfactory in both groups and no statistically significant differences were found between the two groups under most of the headings analysed, apart from the top-up doses needed to maintain adequate analgesia. The average time from the first VAS assessment to parturition was 292 min in Group B and 267 min in Group R. Top-up doses of local anaesthetic (2.35 vs 5.05) came on average to 15.8 ml in Group B compared to 24.1 ml in Group R. There were 20% Caesarean sections in Group R and 13.8% in Group B. Optimum analgesia was achieved in Group R; the level of analgesia was insufficient or barely sufficient in 3.3% of cases. There was no Apgar score < 7 in either group. It was therefore concluded that both bupivacaine and ropivacaine offer excellent analgesia during labour and have no significant side effects on mothers or babies. abstract_id: PUBMED:10626714 Patient supplemented epidural analgesia after major abdominal surgery with bupivacaine/fentanyl or ropivacaine/fentanyl. Purpose: To compare analgesic efficacy and occurrence of motor block and other side effects during patient supplemented epidural analgesia (PSEA) with either ropivacaine/fentanyl or bupivacaine/fentanyl mixtures. Methods: In a prospective, randomized, double-blind study, 32 ASA I-III patients undergoing major abdominal surgery received an epidural catheter at the T8-T10 level, followed by integrated general-epidural anesthesia.
Postoperative epidural analgesia was provided using a patient-controlled pump with either ropivacaine 0.2%/2 microg x ml(-1) fentanyl (group Ropivacaine, n = 16) or bupivacaine 0.125%/2 microg x ml(-1) fentanyl (group Bupivacaine, n = 16) [background infusion 4-6 ml x hr(-1), 1.5 ml incremental doses and 20 min lockout]. Verbal pain rating score, number of incremental doses, consumption of epidural analgesic solution and rescue analgesics, sedation (four-point scale), and pulse oximetry were recorded by a blinded observer for 48 hr after surgery. Results: No differences in pain relief, motor block, degree of sedation, pulse oximetry and other side effects were observed between the two groups. The number of incremental doses and the volume of analgesic solution infused epidurally were higher in patients receiving the bupivacaine/fentanyl mixture (10 [0-52] I.D. and 236 [204-340] ml) than in patients receiving the ropivacaine/fentanyl solution (5 [0-50] I.D. and 208 [148-260] ml) (P = 0.03 and P = 0.05, respectively). Conclusion: Using a ropivacaine 0.2%/2 microg x ml(-1) fentanyl mixture for patient supplemented epidural analgesia after major abdominal surgery provided similarly successful pain relief to bupivacaine 0.125%/2 microg x ml(-1) fentanyl, but patients receiving bupivacaine/fentanyl requested more supplemental doses.
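A note on the PCEA regimens described in the abstracts above: the basal rate, demand bolus, lockout interval, and hourly limit jointly cap how much solution a patient can self-administer. The sketch below works through that arithmetic for the settings reported in PUBMED:10702449 and PUBMED:11772824 (6 mL/h basal, 5 mL bolus, 10 min lockout, 30 mL/h limit); the helper function is a hypothetical illustration, not part of any study protocol.

```python
def max_hourly_volume_ml(basal_ml_per_h, bolus_ml, lockout_min, hourly_limit_ml=None):
    """Upper bound on the volume a PCEA pump can deliver in one hour.

    The basal infusion runs continuously, at most one demand bolus is honoured
    per lockout interval, and an optional hourly limit caps the total.
    """
    boluses_per_hour = 60 // lockout_min            # demands that fit into 60 min
    uncapped = basal_ml_per_h + boluses_per_hour * bolus_ml
    return min(uncapped, hourly_limit_ml) if hourly_limit_ml is not None else uncapped

# Settings reported for the 0.125% and 0.075% solutions:
# 6 mL/h basal, 5 mL bolus, 10 min lockout, 30 mL/h dose limit.
ceiling = max_hourly_volume_ml(6, 5, 10, 30)
print(ceiling)        # 30 -> the hourly dose limit, not the lockout, is the binding cap
print(ceiling * 2)    # 60 -> maximum fentanyl delivery in microg/h at 2 microg/mL
```

In other words, with these settings the hourly limit, rather than the lockout, bounds both the local anaesthetic volume and the fentanyl dose (30 mL x 2 microg/mL = 60 microg/h).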
Instruction: Does common channel length affect surgical choice in female congenital adrenal hyperplasia patients? Abstracts: abstract_id: PUBMED:24703836 Does common channel length affect surgical choice in female congenital adrenal hyperplasia patients? Objective: Partial/total urogenital sinus mobilization (UGSM) is one of the recommended techniques for treatment of female congenital adrenal hyperplasia (CAH). In this study we compared the length of common channel (CC) and type of operation performed in CAH patients. Patients And Methods: We retrospectively analyzed data of patients receiving surgery for female CAH. Patients were separated into three groups: group 1 had partial UGSM, group 2 had total UGSM, and group 3 had total UGSM with the anterior vaginal wall constructed from the CC. Age at surgery, length of CC, surgical time, follow-up time, and complications were compared. Results: There were a total of 29 patients. For groups 1, 2, and 3, the average age at surgery was 47.2 months, 14.4 months, and 21.3 months, respectively, and the average CC length was 1.25 cm, 3.1 cm, and 4.3 cm, respectively. The average time of surgery was 165 min, 193.1 min, and 282.5 min, respectively. The average follow-up time was 34.7 months, 36.3 months, and 28.3 months, respectively. There were two complications (UGS flap necrosis and opening of sutures) in the third group. Conclusion: We advise the use of partial UGSM for a CC of 0.5-2 cm, total UGSM for a CC of 2.5-3.5 cm, and total UGSM with use of the CC as the anterior vaginal wall for a CC ≥ 4 cm in length. Good cosmetic and functional results are obtained with this approach. abstract_id: PUBMED:28535944 Restoring normal anatomy in female patients with atypical genitalia. Female patients with congenital adrenal hyperplasia (CAH) have varying degrees of atypical genitalia secondary to prenatal and postnatal androgen exposure. Surgical treatment is focused on restoring normal genital anatomy by bringing the vagina to the normal position on the perineum, separating the distal vagina from the urethra, forming a normal introitus and preserving sexual function of the clitoris by accepting moderate degrees of hypertrophy as normal and strategically reducing clitoral size only in the most severely virilized patients. There remains a need for continued monitoring of patients as they go through puberty, with the possibility of additional surgery for vaginal stenosis. Anatomically based surgery and refinement in surgical techniques, with acceptance of moderate degrees of clitoral hypertrophy as normal, should improve long-term outcomes. abstract_id: PUBMED:12602002 New approach in the surgical treatment of the urogenital sinus The urogenital sinus is an embryological anomaly that consists of a common channel for the urethra and vagina. It occurs most frequently in the context of congenital adrenal hyperplasia. On occasion it can be associated with an imperforate anus, in which case the malformation is called a cloacal defect. There are multiple surgical techniques to correct this malformation and different therapeutic approaches (no surgery, surgery in one or several stages, early or delayed surgery), the newest of which is total urogenital mobilization. The purpose of this work is to report our experience with this technique. We present seven girls with urogenital sinus (3 with congenital adrenal hyperplasia, 2 with a cloacal defect, and the other 2 associated with ambiguous genitalia). Five patients were operated on in the first year of life.
The outcome has been favorable, and the cosmetic and functional results have been very good. The surgical technique consists of a posterior sagittal incision, which can be extended transanorectally if necessary; the urethrovaginal junction is reached and both structures are mobilized together and brought to the perineum as a single unit. We believe that total urogenital mobilization is currently the surgical technique of choice in every case of urogenital sinus, because it is easier, allows early repair (girls under 1 year old), simultaneously corrects other anomalies, and reduces complications (urethrovaginal fistula, vaginal stricture, or acquired vaginal atresia); and the results are excellent. abstract_id: PUBMED:29505858 Characteristics of Female Genital Restoration Surgery for Congenital Adrenal Hyperplasia Using a Large-scale Administrative Database. Objective: To analyze nationwide information on the timing of surgical procedures, cost of surgery, hospital length of stay following surgery, and surgical complications of female genital restoration surgery (FGRS) in females with congenital adrenal hyperplasia (CAH). Materials And Methods: We used the Pediatric Health Information System database to identify patients with CAH who underwent their initial FGRS in 2004-2014. These patients were identified by an International Classification of Diseases, Ninth Revision (ICD-9) diagnosis code for adrenogenital disorders (255.2) in addition to a vaginal ICD-9 procedure code (70.x, excluding vaginoscopy only) or perineal ICD-9 procedure code (71.x), which includes clitoral operations (71.4). Results: A total of 544 (11.8%) females underwent FGRS between 2004 and 2014. Median age at initial surgery was 9.9 months (interquartile range 6.8-19.1 months). Ninety-two percent underwent a vaginal procedure, 48% underwent a clitoral procedure, and 85% underwent a perineal procedure (non-clitoral). The mean length of stay was 2.5 days (standard deviation 2.5 days). The mean cost of care was $12,258 (median $9,558). Thirty-day readmission rate was 13.8%. Two percent underwent reoperation before discharge, and 1 (0.2%) was readmitted for a reoperation within 30 days. Four percent had a perioperative surgical complication. Conclusion: Overall, 12% of girls with CAH underwent FGRS at one of a national collaborative of freestanding children's hospitals. The majority underwent a vaginoplasty as a part of their initial FGRS for CAH. Clitoroplasty was performed on less than half the patients. Overall, FGRS for CAH is performed at a median age of 10 months and has low 30-day complication and immediate reoperation rates. abstract_id: PUBMED:34419416 Surgical Therapy After Failed Feminizing Genitoplasty in Young Adults With Disorders of Sex Development: Retrospective Analysis and Review of the Literature. Background: Secondary vaginal stenosis may occur after reconstruction of genital malformations in childhood or after failed vaginal aplasia repair in adults. Aim: This study focusses on the results of the surgical treatment of these patients in our multidisciplinary transitional disorders/differences of sex development team of pediatric surgeons and gynecologists. Methods: A retrospective analysis was carried out on adult and female identified disorders/differences of sex development patients with vaginal stenoses treated between 2015 and 2018 in a single center with revision vaginoplasty.
The underlying type of malformation, the number and surgical techniques of vaginoplasties in infancy, the techniques used to revise the stenotic vagina, vaginal length and caliber, the possibility of sexual intercourse, and temporary vaginal dilatation were recorded. A review of the literature on recommended surgical techniques for revision vaginoplasty was performed. Outcomes: To describe the surgical technique, the main outcome measures of this study are vaginal caliber after revision vaginoplasty as well as the ability to have sexual intercourse. Results: Thirteen patients presented with vaginal stenosis with a median age of 19 years (range 16-31). All patients had one or more different types of vaginoplasties in their medical history, with a median age at first vaginoplasty of 15 months (0-233). Underlying anatomical conditions were urogenital sinus (n = 8), vaginal agenesis (n = 2), persistent cloacae (n = 2), and cloacal exstrophy (n = 1). The main symptom was inability to have sexual intercourse, present in all 13 patients and due to stenotic vaginal tissue. The most frequently performed surgical technique was partial urogenital mobilization with perineal or lateral flaps (n = 10), followed by bowel vaginoplasty (n = 2); in 1 patient a revision vaginoplasty failed due to special anatomical conditions. In a median follow-up of 11 months, all but one patient presented with physiological vaginal length and width, and normal sexual intercourse in those with a partnership. Clinical Implications: Perineal flap with partial urogenital mobilization should be considered the treatment of choice in severe cases of distal vaginal stenosis and after multiple failed former vaginoplasties, while bowel vaginoplasty should be reserved only for cases of complete cicatrization or a high stenosis of the vagina. Strengths & Limitations: The strength of this study is the detailed description of several cases, while the retrospective character is a limitation. Conclusion: In patients after feminizing genital repair, perineal flap with partial urogenital mobilization provides a normal anatomical outcome and allows unproblematic sexual intercourse. Ellerkamp V, Rall KK, Schaefer J, et al. Surgical Therapy After Failed Feminizing Genitoplasty in Young Adults With Disorders of Sex Development: Retrospective Analysis and Review of the Literature. J Sex Med 2021;18:1797-1806. abstract_id: PUBMED:18790419 Perineal mobilization of the common urogenital sinus for surgical correction of high urethrovaginal confluence in patients with intersex disorders. Objective: We report anatomical and cosmetic results of feminizing genital reconstruction in patients with a high vagina due to disorders of sexual differentiation. Patients And Methods: Twelve patients with urogenital sinus anomalies graded as Prader IV underwent one-stage perineal clitoral vaginoplasty at a mean age of 1.6 years. Seven patients had congenital adrenal hyperplasia, four partial androgen insensitivity and one mixed gonadal dysgenesis. Mobilized common sinus, opened dorsally without pubourethral ligament dissection, was used in combination with a perineal skin flap to construct the distal vagina. Clitoroplasty and labioplasty were done simultaneously. Mean follow up was 7.3 years. Results: In all cases the vaginal introitus was positioned in the vestibule region below the urethral meatus. One patient developed postoperative glans atrophy. Agreement between parental and physician satisfaction with postoperative cosmetic genital appearance was recorded in 11 girls.
Vaginal stricture occurred in one patient, treated successfully with repeat vaginoplasty. One girl experienced urinary stress incontinence and became dry after bladder neck injection of a bulking agent. Conclusion: This procedure is successful in creating a feminine genital appearance in children with disorders of sexual differentiation and a high vagina. Long-term follow-up is needed to reassess the initial good anatomical and cosmetic results and evaluate sexual function after puberty. abstract_id: PUBMED:20728173 Does preoperative genitography in congenital adrenal hyperplasia cases affect surgical approach to feminizing genitoplasty? Purpose: Genitography has traditionally been an imperative part of radiographic evaluation in females born with congenital adrenal hyperplasia before surgical reconstruction. We evaluated the role of the preoperative genitogram in surgical reconstruction planning and how it correlates with intraoperative findings. Materials And Methods: We retrospectively reviewed the records of 40 patients with congenital adrenal hyperplasia who underwent feminizing genitoplasty at our institution between 2003 and 2009. Preoperative genitogram findings were recorded and correlated with operative findings. Results: A total of 42 preoperative genitograms were available for review in 40 patients with congenital adrenal hyperplasia who underwent feminizing genitoplasty. Genitography revealed complete anatomy of the urogenital sinus in 30 cases (72%) while bladder filling alone was present in 9 (21%) and vaginal filling was noted in 2 (5%). The urogenital sinus could not be catheterized in 1 patient (2%). Vesicoureteral reflux was identified in 6 patients (15%) with a mean grade of 2. Vaginoplasty was done with a flap technique in 37 patients (more than 90%) while the remaining 3 underwent pull-through vaginoplasty. In no case did the genitogram reveal anatomy that was not visible via endoscopy or at reconstruction. The vaginoplasty technique was based on endoscopic and intraoperative findings, and not on the genitogram. Conclusions: Genitography during preoperative evaluation in females with congenital adrenal hyperplasia undergoing feminizing genitoplasty did not reveal urogenital sinus anatomy completely in 25% of the patients in our series. The preoperative genitogram did not influence the surgical approach. Its value as preoperative imaging in patients with congenital adrenal hyperplasia may be limited. abstract_id: PUBMED:9396289 Female pseudohermaphroditism Patients with female pseudohermaphroditism have female internal genitalia and karyotype (XX) and varying degrees of external genital virilization. The external genitalia are masculinized congenitally when the female fetus is exposed to an excess androgenic environment. Congenital adrenal hyperplasia (CAH), mostly 21-hydroxylase deficiency, is the most common cause. Maternal androgen excess due to maternal ovarian tumor or drug intake also causes female pseudohermaphroditism. A combination of hormonal therapy and surgical correction is required for CAH. When appropriate treatment is given, normal puberty, fertility and childbearing are possible. HLA typing in the patient's family is useful for identifying heterozygotes and homozygotes because of the close linkage of the 21-hydroxylase gene and the HLA genes. Prenatal diagnosis and genetic diagnosis for female pseudohermaphroditism due to 21-hydroxylase deficiency can be performed; however, prenatal treatment is not completely established.
abstract_id: PUBMED:26995108 Surgical outcomes and complications of reconstructive surgery in the female congenital adrenal hyperplasia patient: What every endocrinologist should know. Surgical management of classical congenital adrenal hyperplasia (CAH) in 46, XX females has evolved significantly. Virilization of the genitalia of 46, XX females with CAH begins prenatally as a result of excess fetal androgen production. Improved understanding of anatomy and surgical outcomes has driven changes in surgical techniques as well as the timing of surgery. For endocrinologists treating these patients, it is important to understand the outcome of genitoplasty, identify patients who need further treatment and direct these patients to experienced surgeons. We performed a literature search on PubMed of publications addressing CAH and genital reconstruction published in the English language from 1990 to the present. In accordance with our institutional review board, we performed a retrospective analysis of clitoroplasty and/or vaginoplasty procedures performed by a single surgeon at our institution from 1996 to 2015. We found that genital reconstruction in 46, XX CAH patients is associated with few immediate post-operative, infectious, and urinary complications. Vaginal stenosis is a common complication of vaginal reconstruction and requires evaluation by an experienced surgeon. Clitoral pain or decreased sensation can be associated with clitoral recession and clitorectomy. Outcomes in sexual satisfaction and gender identity can also be impacted by surgical technique and success. Long term follow up and patient reported feedback are crucial to our understanding and management of this special group of patients. Improved awareness and understanding of the complications of genital surgery will allow endocrinologists to know what to ask patients and be ready to provide them with a resource with the understanding and experience to help them improve their quality of life. abstract_id: PUBMED:7404878 Reconstruction of the female external genitalia. The experience of the authors with the surgical reconstruction of female external genitalia is described. In all cases in which these procedures were used, reduction clitoroplasty resulted in satisfactory cosmetic appearance and the preservation of tactile sensation, and posterior flap vaginoplasty provided adequate introital capacity. Answer: Yes, the common channel (CC) length does affect the surgical choice in female congenital adrenal hyperplasia (CAH) patients. According to the study by PUBMED:24703836, the length of the CC is a determining factor in the type of operation performed. The study advises the use of partial urogenital sinus mobilization (UGSM) for CC lengths of 0.5-2 cm, total UGSM for CC lengths of 2.5-3.5 cm, and total UGSM with the use of CC as the anterior vaginal wall in cases where the CC is ≥ 4 cm in length. This approach is recommended based on the observation that it yields good cosmetic and functional results. The study involved a retrospective analysis of 29 patients who underwent surgery for female CAH and were separated into three groups based on the surgical technique used. The average CC length was found to be 1.25 cm for group 1 (partial UGSM), 3.1 cm for group 2 (total UGSM), and 4.3 cm for group 3 (total UGSM plus the vaginal anterior wall made from CC). The study concluded that the length of the CC should guide the choice of surgical technique to achieve optimal outcomes.
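For quick reference, the cut-offs cited in that answer can be written as a small lookup. The sketch below merely restates the thresholds reported in PUBMED:24703836; the function name, the fallback for channels shorter than 0.5 cm, and the mapping of lengths that fall between the published bands are illustrative assumptions, not clinical guidance.

```python
def suggested_technique(cc_length_cm):
    """Restate the common channel (CC) length cut-offs advised in PUBMED:24703836."""
    if cc_length_cm < 0.5:
        return "below the reported range - individual assessment"   # assumption
    if cc_length_cm <= 2.0:
        return "partial urogenital sinus mobilization (UGSM)"
    if cc_length_cm < 4.0:
        # the paper quotes 2.5-3.5 cm for total UGSM; in-between lengths are
        # mapped to total UGSM here as an assumption
        return "total UGSM"
    return "total UGSM with the CC used as the anterior vaginal wall"

for length in (1.25, 3.1, 4.3):   # mean CC lengths of groups 1-3 in the study
    print(f"{length} cm -> {suggested_technique(length)}")
```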
Instruction: Double or triple interlocking when nailing proximal tibial fractures? Abstracts: abstract_id: PUBMED:19685060 Double or triple interlocking when nailing proximal tibial fractures? A biomechanical investigation. Objectives: To determine whether there are differences in stability between double and triple interlocked intramedullary nails used for the fixation of extraarticular proximal tibial fractures. Design: Randomized in vitro biomechanical-experimental laboratory investigation. Setting: Biomechanics laboratory of the Clinic for Trauma Surgery at the Johannes Gutenberg-University Mainz. Intervention: A 10-mm defect osteotomy was performed on six paired human tibiae, and the proximal and distal ends were potted in polymethylmethacrylate cement (PMMA). Each pair of bones was randomly stabilized with an intramedullary nail (IM-nail) with two interlocking options (PTN 2s) in one tibia, and with an IM-nail with three interlocking options (PTN 3s) in the corresponding contralateral bone. A biomechanical test of the bone implant construct was then performed with an axial force of 900 N. Displacement of bone fragments was measured and depicted as a force-displacement diagram. Main Outcome Measurements: Biomechanical construction stiffness. Results: The stiffness values for PTN 3s were significantly higher than for PTN 2s. In the group of PTN 2s, two out of six implants failed biomechanically with breakage of one proximal interlocking screw. Conclusions: Given the parameters of this investigation, triple proximal interlocking provides more stability in nailed proximal tibia fractures than double proximal interlocking. Larger series with clinical follow-up after triple proximal interlocking in tibial nailing should be undertaken to further clarify these questions. abstract_id: PUBMED:34491387 Focus on interlocking intramedullary nailing without fluoroscopy in resource-limited settings: strategies, outcomes, and outlook. Introduction: Closed static interlocking nailing with c-arm guidance is the standard procedure for the treatment of closed diaphyseal fractures. In low-income settings, it is still very difficult to carry out such procedures because of few or absent image intensifiers (c-arm) despite the necessity. Authors provide a review of the literature on interlocking intramedullary nailing without fluoroscopy in resource-limited settings, followed by strategies, outcomes, and outlook. Materials And Method: A comprehensive search of the PubMed, Web of Science, Embase, and Cochrane Library databases was performed with the help of a biomedical information specialist. The Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines were followed. Results: We identified 15 series of interlocking intramedullary nailing without fluoroscopy in resource-limited settings. All papers focused on the care for long bones (humerus, femur, tibia). All studies discussed the quality of the nailing operative procedure. The entry point was described in five series; the nail insertion in the proximal and distal medullary canal was good in all studies. The distal locking was missed between 0 and 27%. Discussion: Intraoperative strategies depend on the type of bone affected, the opening of the fracture site, the fracture line, and the availability of a functional orthopaedic table. Three techniques to insert the nail in the proximal and distal fracture fragment with reduction of the fracture site are described. Insertion of distal screws is possible by using ancillary devices. 
Outcomes are comparable to those of the series using c-arm guidance. In low-income countries, it can be proposed as an alternative to the gold standard in resource-constrained settings. In high-income settings this technique can help to reduce X-ray exposure. Conclusion: There is a need to improve equipment in hospitals in low-income countries to make trauma surgery with c-arm guidance the gold standard, with minimal exposure to radiation. abstract_id: PUBMED:34859267 Retrograde tibial nailing of far distal tibia fractures: a biomechanical evaluation of double- versus triple-distal interlocking. Objectives: Retrograde tibial nailing using the Distal Tibia Nail (DTN) is a novel surgical option in the treatment of distal tibial fractures. Its unique retrograde insertion increases the range of surgical options in far distal fractures of the tibia beyond the use of plating. The aim of this study was to assess the feasibility of the DTN for far distal tibia fractures where only double rather than triple-distal locking is possible due to fracture localisation and morphology. Methods: Six Sawbones® were instrumented with a DTN and an AO/OTA 43-A3 fracture simulated. Samples were tested in two configurations: first with distal triple locking, second with double locking by removing one distal screw. Samples were subjected to compressive (350 N, 600 N) and torsional (± 8 Nm) loads. Construct stiffness and interfragmentary movement were quantified and compared between double and triple-locking configurations. Results: The removal of one distal screw resulted in a 60-70% preservation of compressive stiffness, and 90% preservation of torsional stiffness, for double locking compared to triple locking. Interfragmentary movement remained minimal for both compressive and torsional loading. Conclusions: The DTN with distal double locking can, therefore, be considered for far distal tibia fractures where nailing would be preferred over plating. abstract_id: PUBMED:2270497 Interlocking nailing of fractures of the proximal tibial shaft The anatomic construction of the proximal part of the tibial shaft induces high ventral traction forces. Consequently, the mechanical requirements for stability of any internal fixation system are also high. Therefore, interlocking nailing of fractures of the proximal tibial shaft demands a meticulous operative technique as far as the incision, implantation of the nail and the final interlocking are concerned. Typical complications in this area are axis deviation, loosening of the interlocking screws and delayed union. Particular attention should be paid to the risk of hitherto unnoticed fissures ascending into the tibial head. abstract_id: PUBMED:30276432 Reduction techniques in intramedullary nailing osteosynthesis Intramedullary nailing was originally conceived for the stabilization of shaft fractures of long bones. Due to new nail designs and multiple interlocking possibilities, the spectrum of nailing has significantly increased. Nailing of fractures beyond the isthmus is technically challenging because fractures need to be reduced before the nailing procedure starts. Indirect techniques of reduction include the use of an extension table, a large distractor or an external fixator. Direct reduction with pointed reduction forceps, lag screws, a cerclage wire or a short plate can optimize indirect reduction. The choice of the correct entry portal is of utmost importance for an optimal operative result.
The location of the entry portal is dependent on the local anatomy and the bend of the nail. The optimal entry portal at the proximal tibia is directly behind the patellar tendon and accessible with the knee in more than 90° of flexion, alternatively through a suprapatellar approach with a slightly flexed knee joint. Insertion of the nail through the suprapatellar approach is possible without stress on the reduced fracture fragments. Blocking screws create an artificial isthmus in the metaphyseal area and force the guide wire in the desired direction. Blocking screws help to avoid axial malalignment during nail insertion. Interlocking of the nail with screws coming from different directions prevents secondary dislocation. abstract_id: PUBMED:36937755 Biomechanical comparison of intramedullary nail and plate osteosynthesis for extra-articular proximal tibial fractures with segmental bone defect. Purpose: Proximal tibial fractures are common, but the current available internal fixation strategies remain debatable, especially for comminuted fractures. This study aimed to compare the biomechanical stability of three internal fixation strategies for extra-articular comminuted proximal tibial fractures. Methods: A total of 90 synthetic tibiae models of simulated proximal tibial fractures with segmental bone defects were randomly divided into three groups: Single lateral plating (LP), double plating (DP) and intramedullary nailing (IN). Based on the different number of fixed screws, the above three groups were further divided into nine subgroups and subjected to axial compression, cyclic loading and static torsional testing. Results: The subgroup of intramedullary nailing with five proximal interlocking screws showed the highest axial stiffness of 384.36 ± 35.00 N/mm. The LP group obtained the lowest axial stiffness performance with a value of 96.59 ± 16.14 N/mm. As expected, the DP group offered significantly greater biomechanical stability than the LP group, with mean static axial stiffness and mean torque increasing by approximately 200% and 50%, respectively. According to static torsional experiments, the maximum torque of the DP subgroup was 3,308.32 ± 286.21 N mm, which outperformed all other groups in terms of torsional characteristics. Conclusion: Utilizing more than four distal screws did not provide improved biomechanical stability in the LP or DP groups, while a substantial increase in the biomechanical stability of DP was obtained when an additional medial plate was used. For the intramedullary nailing group, increasing the number of proximal interlocking screws could significantly improve biomechanical stability, and the intramedullary nailing with three proximal interlocking screws had similar static and cyclic stiffness as the DP group. The intramedullary nailing with five proximal screws had better axial stability, whereas DP had better torsional stability. abstract_id: PUBMED:32367222 Interlocking screw configuration influences distal tibial fracture stability in torsional loading after intramedullary nailing. Purpose: This study evaluated the influence of fracture obliquity and locking screw configuration on interfragmentary motion during torsional loading of distal metaphyseal tibial fractures fixed by intramedullary (IM) nailing. Methods: The stability of six IM nail locking screw configurations used to fix distal metaphyseal tibial fractures of various obliquities was evaluated. 
A coronal osteotomy from proximal lateral to distal medial was made in sawbone tibiae at different obliquities from 0° to 60°. After fixation, motion at the fracture was assessed during internal and external rotation tests to 7 Nm under two compressive loading conditions: 20 N and 500 N. Results: With results organized by interlocking configuration, significant differences in interfragmentary rotation between fracture obliquities are observed when the number of interlocking screws is decreased to one distal static and one proximal dynamic during internal rotation. During external rotation testing, significant rotational differences between fracture obliquities are encountered with two distal static screws and one proximal dynamic. No significant differences were seen between different distal interlocking screw orientations (two parallel versus perpendicular distal screws) for all fracture obliquity patterns tested. Conclusion: Fracture obliquity influences rotational stability, which can be mitigated by interlocking screw configurations when nailing distal tibia fractures. At least two distal and one proximal interlocking screw in a static mode is recommended to resist torsional loading of distal tibia fractures undergoing intramedullary nailing. The addition of more interlocking screws than this did not significantly alter control of torsional displacement with load. abstract_id: PUBMED:27687142 Pseudoaneurysm of the anterior tibial artery after interlocking tibial nailing: an unexpected complication. Anterior tibial pseudoaneurysm is a rare complication after interlocking screw insertion in tibial nailing. We present the case of a 28-year-old male patient in whom this complication presented 6 weeks after tibial nailing of a right tibial fracture, type 42-A1 of the Association for the Study of Internal Fixation (AO/ASIF) classification. On presentation to our emergency department, the patient's complaints were solely intermittent pain and occasional swelling of his proximal lower leg. Deep vein thrombosis, compartment syndrome, and implant dislocation were ruled out, and the patient was discharged after his symptoms improved without further intervention. Four weeks later, the patient was readmitted for similar symptoms. Computed tomography (CT) angiography then revealed a pseudoaneurysm of the anterior tibial artery at the level of the proximal interlocking screw insertion. Aneurysmal sac excision with vessel repair was performed while reconstructing the additional dislocated proximal fibular fracture using standard AO/ASIF plating. Postoperatively, sufficient flow through the repaired vessel was documented using Doppler ultrasound and CT angiography. However, the patient sustained temporary damage to the peroneal nerve after surgery. This case highlights the risk of a pseudoaneurysm of the anterior tibial artery after interlocking screw insertion as a rare but major complication of a routine surgical procedure. Early ultrasound diagnostics, CT angiography, or magnetic resonance (MR) angiography should be performed to prevent a delay in the diagnosis and treatment of such complications. abstract_id: PUBMED:19398788 Unreamed intramedullary nailing with oblique proximal and biplanar distal interlocking screws for proximal third tibial fractures. Purpose: To assess the outcome of unreamed intramedullary nailing through the lateralised entry point using oblique proximal and biplanar distal interlocking screws.
Methods: 15 men and 3 women aged 25 to 58 (mean, 37) years underwent unreamed intramedullary nailing with oblique proximal and biplanar distal interlocking screws for proximal third metaphyseal tibial fractures. The entry point was kept proximal to the tibial tuberosity and slightly lateral to midline. Proximal locking was at 45 degrees to the coronal and sagittal planes. Biplanar distal locking was in the coronal and sagittal planes. Results: 16 patients had bone union within 20 (mean, 17; range, 14-27) weeks; 2 underwent dynamisation for delayed union. Three patients had valgus angulation of less than 5 degrees; 2 had a loss of terminal knee flexion; 3 had a loss of ankle dorsiflexion; and 3 had shortening of more than 0.5 cm. Functional outcomes were excellent in 13, good in 4, and fair in one patient. No patient sustained neurovascular injury, compartment syndrome or implant failure. Conclusion: Unreamed intramedullary nailing with oblique proximal and biplanar distal interlocking screws for proximal third tibial fractures was effective in preventing malalignment. abstract_id: PUBMED:28566781 A comparative study of intramedullary interlocking nailing and minimally invasive plate osteosynthesis in extra articular distal tibial fractures. Background: Extraarticular distal tibial fractures are among the most challenging fractures for an orthopaedic surgeon to treat because of their subcutaneous location, poor blood supply and decreased anterior muscular cover; complications such as delayed union, nonunion, wound infection, and wound dehiscence often pose a great challenge to the surgeon. Minimally invasive plate osteosynthesis (MIPO) and intramedullary interlocking nail (IMLN) are two well-accepted and effective methods, but each has historically been associated with complications. This study compares the clinical and radiological outcome of extraarticular distal tibia fractures treated by intramedullary interlocking nail (IMLN) and minimally invasive plate osteosynthesis (MIPO). Materials And Methods: 42 patients who met the inclusion criteria and were operated on between June 2014 and May 2015 were included in this study; 21 underwent IMLN and 21 were treated with MIPO. Patients were followed up for clinical and radiological evaluation. Results: In the IMLN group, the average union time was 18.26 weeks compared with 21.70 weeks in the plating group, a significant difference (P < 0.0001). The average times required for partial and full weight bearing in the nailing group were 4.95 weeks and 10.09 weeks, respectively, significantly less (P < 0.0001) than the 6.90 weeks and 13.38 weeks in the plating group. Fewer complications in terms of implant irritation, ankle stiffness, and infection were seen in the interlocking group than in the plating group. The average functional outcome according to the American Orthopedic Foot and Ankle Society score was 96.67. Conclusion: The IMLN group was associated with a shorter duration of surgery, earlier weight bearing and union, and a lower incidence of infection and implant irritation, which makes it a preferable choice for fixation of extra-articular distal tibial fractures. However, larger randomized controlled trials are required to confirm these results. Answer: The question of whether to use double or triple interlocking when nailing proximal tibial fractures has been investigated in biomechanical studies.
According to a study by PUBMED:19685060, triple interlocking provides more stability than double interlocking for nailed proximal tibia fractures. This study performed a biomechanical test on human tibiae with a 10-mm defect osteotomy, comparing the stability of intramedullary nails with two interlocking options (PTN 2s) versus three interlocking options (PTN 3s). The results showed that the stiffness values for PTN 3s were significantly higher than for PTN 2s, and two out of six implants in the PTN 2s group failed biomechanically with the breakage of one proximal interlocking screw. The conclusion drawn from this study was that triple proximal interlocking is more stable than double proximal interlocking for such fractures. Another study, PUBMED:34859267, evaluated the use of retrograde tibial nailing for far distal tibia fractures and compared double versus triple-distal interlocking. The study found that double locking preserved 60-70% of compressive stiffness and 90% of torsional stiffness compared to triple locking. Although there was a reduction in stiffness with double locking, interfragmentary movement remained minimal for both compressive and torsional loading. This suggests that double locking can be considered for far distal tibia fractures where nailing is preferred over plating. In summary, the evidence suggests that triple interlocking may offer increased stability for proximal tibial fractures compared to double interlocking. However, in certain cases, such as far distal tibia fractures, double locking may still provide sufficient stability. It is important to consider the specific fracture type and location when deciding on the interlocking strategy for intramedullary nailing.
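The stiffness figures quoted in this answer (and in PUBMED:36937755 above) are slopes of force-displacement curves recorded during axial loading. A minimal sketch of that calculation, together with a paired comparison in the spirit of the left/right tibia design of PUBMED:19685060, is shown below; the loading curve, the 30-70%-of-peak-load window used to define the linear region, and the stiffness values are all synthetic illustrations, not data from the cited studies.

```python
import numpy as np
from scipy import stats

def axial_stiffness(force_n, displacement_mm, window=(0.3, 0.7)):
    """Construct stiffness in N/mm: slope of a line fitted to the quasi-linear
    part of the force-displacement curve (here 30-70% of peak load, an assumption)."""
    force_n = np.asarray(force_n, float)
    displacement_mm = np.asarray(displacement_mm, float)
    lo, hi = window[0] * force_n.max(), window[1] * force_n.max()
    mask = (force_n >= lo) & (force_n <= hi)
    slope, _intercept = np.polyfit(displacement_mm[mask], force_n[mask], 1)
    return slope

# One synthetic loading ramp to 900 N: a compliant "toe" region followed by a
# stiffer, nearly linear region.
force = np.linspace(0, 900, 200)
displacement = 0.4 * (1.0 - np.exp(-force / 150.0)) + force / 400.0
print(f"synthetic construct stiffness: {axial_stiffness(force, displacement):.0f} N/mm")

# Invented paired stiffness values (N/mm) for six donor pairs, triple vs double locking.
triple = np.array([455.0, 430.0, 472.0, 410.0, 440.0, 465.0])
double = np.array([350.0, 322.0, 368.0, 305.0, 338.0, 360.0])
t_stat, p_value = stats.ttest_rel(triple, double)   # paired design: left/right tibiae
print(f"triple vs double locking: t = {t_stat:.2f}, p = {p_value:.4f}")
```

A paired test is used here because each donor contributes one tibia to each locking configuration, which is what makes the small sample sizes in these cadaver and sawbone studies workable.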
Instruction: Efficacy of entecavir treatment for lamivudine-resistant hepatitis B over 3 years: histological improvement or entecavir resistance? Abstracts: abstract_id: PUBMED:19226381 Efficacy of entecavir treatment for lamivudine-resistant hepatitis B over 3 years: histological improvement or entecavir resistance? Background And Aims: Long-term lamivudine therapy is required for patients with chronic hepatitis B, because hepatitis reappears frequently after it has been withdrawn. However, hepatitis B virus (HBV) mutants resistant to lamivudine emerge frequently, accompanied by breakthrough hepatitis. Methods: Effects of entecavir were evaluated in 19 patients who had developed breakthrough hepatitis during lamivudine therapy for longer than 5 years. This study is a subgroup analysis of a previously reported study. Entecavir, in either 0.5 or 1.0 mg/day doses, was given to 10 and nine patients for 52 weeks, respectively, and then all received 1.0 mg/day entecavir for an additional 68-92 weeks. Results: There were no differences in biochemical and virological responses in the two groups of patients with respect to the two different initial doses of entecavir. Serum levels of alanine aminotransferase were normalized in 17 (90%) patients, and hepatitis B e antigen (HBeAg) disappeared from the serum in two (14%) of the 14 patients who were HBeAg-positive before. Furthermore, a decrease in histological activity index score greater than 2 points was achieved in nine of the 11 (82%) patients in whom annual liver biopsies were performed during 3 years while they received entecavir. HBV mutants resistant to entecavir emerged in five of the 19 (26%) patients, and hepatitis flare occurred in two of them (40%). Conclusion: Entecavir in the long term would be useful for histological improvement of breakthrough hepatitis induced by lamivudine-resistant HBV mutants in patients with chronic hepatitis B. However, the relatively high rate of entecavir resistance is a concern, and other strategies need to be considered when available. abstract_id: PUBMED:20305760 Efficacy and resistance of entecavir following 3 years of treatment of Japanese patients with lamivudine-refractory chronic hepatitis B. Purpose: Lamivudine treatment of chronic hepatitis B (CHB) is associated with frequent resistance and loss of clinical benefit. We present outcomes of lamivudine-refractory Japanese patients treated with entecavir for 3 years. Methods: Eighty-two patients refractory to lamivudine therapy received entecavir 0.5 or 1 mg daily for 52 weeks in phase II study ETV-052, directly entered rollover study ETV-060, and received entecavir 1 mg daily. Responses were evaluated among patients with available samples. Results: After 96 weeks in ETV-060 (148 weeks total entecavir treatment time), 55% (36/65) of patients had hepatitis B virus (HBV) DNA of <400 copies/mL, 85% (52/61) had alanine aminotransferase (ALT) of ≤1 × upper limit of normal (ULN), and 14.6% (7/48) achieved HBe seroconversion. A subset of 42 patients received entecavir 1 mg from phase II baseline through 148 weeks: 54% (19/35) had HBV DNA of <400 copies/mL, 84% (27/32) had ALT of ≤1 × ULN, and 15% (4/27) achieved HBe seroconversion. Sixteen patients in the 1-mg subset had baseline and week 148 evaluable biopsy pairs: 81% (13/16) showed histologic improvement and 38% (6/16) showed improvement in fibrosis. Genotypic resistance to entecavir emerged in 31 patients for a 3-year cumulative resistance probability of 35.9%.
Entecavir was generally well tolerated during ETV-060, with no on-treatment ALT flares. Conclusions: Long-term entecavir treatment of lamivudine-refractory CHB resulted in virologic suppression, ALT normalization, and improvements in liver histology. Resistance was consistent with that observed in worldwide studies. abstract_id: PUBMED:22246827 Antiviral efficacy of combination therapy with entecavir and adefovir for entecavir/lamivudine-resistant hepatitis B virus with or without adefovir resistance. There is little clinical information on the management of hepatitis B virus (HBV) that is resistant to multiple drugs including entecavir (ETV). The present retrospective cohort study assessed the antiviral efficacy of ETV/adefovir dipivoxil (ADV) combination therapy for ETV-resistant HBV with prior lamivudine (LAM) resistance, either with or without previous ADV resistance. The cumulative probability of achieving a virological response (undetectable serum HBV DNA) was compared by Kaplan-Meier analysis and the Breslow method. Seventeen patients with ETV-resistant HBV who were treated with ETV/ADV combination therapy for at least 6 months at a tertiary care center were included; seven had dual resistance to ETV and LAM [ADV-r(-) group] and 10 had triple resistance to ETV, LAM, and ADV [ADV-r(+) group]. The median follow-up period was 9 months (range, 6-23). A virological response was noted in seven patients after a median of 3 months (range, 3-12) of treatment; five in the ADV-r(-) group and two in the ADV-r(+) group. The cumulative probability of a virological response was significantly higher in the ADV-r(-) group than in the ADV-r(+) group (6 months cumulative probability, 57.1% vs. 11.1%). In conclusion, ETV/ADV combination therapy led to virological responses in five of seven patients with resistance to ETV and LAM, but a significantly poorer response in patients with prior ADV resistance than in those without prior ADV resistance. Therefore, ETV/ADV combination therapy could be a useful therapeutic option for ETV- and LAM-resistant HBV without prior ADV resistance. abstract_id: PUBMED:24238791 Comparison of the efficacy of Lamivudine plus adefovir versus entecavir in the treatment of Lamivudine-resistant chronic hepatitis B: a systematic review and meta-analysis. Background: Hepatitis B virus infection remains one of the major health threats worldwide. Currently, lamivudine plus adefovir combination therapy or entecavir monotherapy is usually used for the treatment of patients with lamivudine-resistant chronic hepatitis B (CHB). However, there are few systematic comparisons between the efficacy of lamivudine plus adefovir and the efficacy of entecavir in the treatment of these patients. Objective: The goal of this systematic review and meta-analysis was to assess the efficacy of lamivudine plus adefovir compared with entecavir for the treatment of patients with lamivudine-resistant CHB. Methods: The PUBMED, Web of Science, WANFANG, Cochrane Central Register of Controlled Trials, and Cochrane Database of Systematic Reviews databases were comprehensively searched to obtain citations from January 1990 to January 2012 for this study. Data analysis was performed using Review Manager software 5.1. Results: Eight studies were suitable for analysis. A total of 696 patients with lamivudine-resistant CHB were studied and grouped according to treatment: 341 patients in the entecavir group and 355 patients in the lamivudine plus adefovir group.
The results found that the rates of undetectable hepatitis B virus DNA levels, alanine aminotransferase normalization, hepatitis B e antigen loss, and hepatitis B e antigen seroconversion were not significantly different between the lamivudine plus adefovir group and the entecavir group. Moreover, the rate of adverse reactions was also not significantly different between the 2 groups. However, virologic breakthrough for the patients with lamivudine resistance was higher in the entecavir group than in the lamivudine plus adefovir group. Conclusions: For these CHB patients with lamivudine resistance, lamivudine plus adefovir was a better treatment option than entecavir alone. abstract_id: PUBMED:22878399 Clonal analysis of the quasispecies of antiviral-resistant HBV genomes in patients with entecavir resistance during rescue treatment and successful treatment of entecavir resistance with tenofovir. Background: Clonal analysis of quasispecies of resistant HBV genomes in patients with entecavir (ETV) resistance receiving lamivudine (3TC) plus adefovir (ADV) rescue therapy has never been performed. Methods: A sample of 10 patients with ETV resistance who were switched to 3TC+ADV treatment were analysed for changes in viral quasispecies. Serum samples at baseline, and at months 3 and 6 of 3TC+ADV treatment could be clonally analysed in 7 of 10 patients; 3-82 clones per sample (total 1,068 clones, mean 63) were sequenced. Results: 3TC+ADV therapy led to a modest decline in HBV DNA. Almost all clones had L180M and M204V 3TC resistance mutations before and during combination therapy. All clones had ≥1 of the S202G, T184F, T184A, T184L, T184I and M250V ETV resistance mutations. The percentages of detected clones bearing 3TC (rtL180M and rtM204V) and ETV mutations did not change with rescue 3TC+ADV therapy. In 7 of 8 patients with detectable HBV DNA (median 5.17 log(10) copies/ml) after a median 24 months of ADV therapy, HBV DNA became undetectable with 3TC plus tenofovir after 6 months of treatment. Conclusions: In patients with ETV resistance tenofovir is effective. Clonal analysis data indicate no selection of specific HBV mutants during rescue 3TC+ADV. abstract_id: PUBMED:24929235 No resistance to tenofovir disoproxil fumarate through 96 weeks of treatment in patients with lamivudine-resistant chronic hepatitis B. Background & Aims: A recent study compared the efficacy of tenofovir disoproxil fumarate (TDF) vs the combination of emtricitabine and TDF (FTC/TDF) in patients with lamivudine-resistant chronic hepatitis B who were treated for as long as 96 weeks. We report findings from resistance analyses conducted for this study. Methods: Two hundred eighty patients with chronic hepatitis B virus (HBV) infection and lamivudine resistance (confirmed by INNO-LiPA Multi-DR) were randomly assigned (1:1) to groups treated with TDF or FTC/TDF. The HBV reverse transcriptase domain from the polymerase gene from all patients was sequenced at baseline and from 18 viremic patients at week 96 or early discontinuation. Results: At screening for the efficacy study, 99% of patients were found to have lamivudine resistance. Prior exposure to entecavir or entecavir resistance was observed in 12% of patients, and 22% of patients had been previously exposed to adefovir; 1.8% were resistant to adefovir. Only 18 patients (6.4%) qualified for sequence analysis, including 1 patient who experienced virologic breakthrough and 17 with persistent viremia. 
Six of these patients did not have any sequence changes from baseline in HBV reverse transcriptase (33%), and sequence analysis could not be performed for 5 patients (28%). In 2 patients who qualified for phenotypic analysis (1 given TDF and 1 given FTC/TDF), no resistance to TDF was observed. Neither previous treatment exposure nor resistance to entecavir or adefovir affected viral kinetics. However, the mean baseline level of HBV DNA was significantly higher in viremic patients than in patients with viral suppression by week 96 (7.28 log10 IU/mL vs 5.62 log10 IU/mL; P = .0003). Conclusions: No resistance to TDF was detected through 96 weeks of treatment in patients with lamivudine-resistant chronic hepatitis B. Prior treatment or resistance to entecavir or adefovir did not affect viral kinetics through 96 weeks. No additional benefit was observed with the addition of emtricitabine vs TDF monotherapy. ClinicalTrial.gov number: NCT00737568. abstract_id: PUBMED:28669175 Efficacy of tenofovir-based rescue therapy for chronic hepatitis B patients with resistance to lamivudine and entecavir. Background/aims: Tenofovir disoproxil fumarate (TDF) monotherapy for 48 weeks provided a virological response comparable to that of TDF and entecavir (ETV) combination therapy in patients infected with ETV-resistant hepatitis B virus (HBV). Little long-term data in routine clinical practice are available regarding the optimal treatment of patients with ETV-resistant HBV. Methods: We investigated the long-term antiviral efficacy of combination therapy of TDF+lamivudine (LAM) or TDF+ETV compared to that of TDF monotherapy in 73 patients with resistance to both LAM and ETV. Results: Patients were treated with TDF monotherapy (n=12), TDF+LAM (n=19), or TDF+ETV (n=42) for more than 6 months. The median duration of TDF-based rescue therapy was 37 months. Virologic response (VR) was found in 63 patients (86.3%). The rates of VR among the three groups (TDF monotherapy, TDF+LAM, and TDF+ETV) were not statistically different (log-rank P=0.200) at 12 months (59.3%, 78.9%, and 51.8%, respectively) or at 24 months (88.4%, 94.7%, and 84.2%). In addition, treatment efficacy of TDF-based combination or TDF monotherapy was not statistically different with ETV-resistant strains or exposure to other antiviral agents. In multivariate analysis, only lower baseline HBV DNA level was an independent predictor for VR (hazard ratio, 0.723; 95% confidence interval, 0.627-0.834; P<0.001). Conclusions: TDF monotherapy was as effective as combination therapy of TDF+LAM or TDF+ETV in maintaining long-term viral suppression in chronic hepatitis B patients with resistance to both LAM and ETV. HBV DNA level at the start of TDF rescue therapy was the only independent predictor of subsequent VR. abstract_id: PUBMED:24914332 Molecular diagnosis and treatment of drug-resistant hepatitis B virus. Oral antiviral agents have been developed in the last two decades for the treatment of chronic hepatitis B (CHB). However, antiviral resistance remains an important challenge for long-term CHB therapy. All of the clinically available oral antiviral agents are nucleoside or nucleotide analogues that target the activity of viral reverse transcriptase (RT), and all are reported to have resistant mutations. Since the hepatitis B virus (HBV) RT, like other viral polymerases, lacks proofreading activity, the emergence of drug-resistance occurs readily under selective pressure from the administration of antiviral agents.
The molecular diagnosis of drug-resistant HBV is based on sequence variations, and current diagnostic methods include sequencing, restriction fragment polymorphism analysis, and hybridization. Here, we will discuss the currently available molecular diagnosis tools, in vitro phenotypic assays for validation of drug-resistant HBV, and treatment options for drug-resistant HBV. abstract_id: PUBMED:17237626 Resistance to adefovir in patients with chronic hepatitis B Adefovir dipivoxil (ADV) is effective in the treatment of chronic hepatitis patients with wild type and lamivudine-resistant hepatitis B virus. The occurrence of viral resistance to long-term adefovir therapy is rare, the cumulative rates of resistance were 0%, 3%, 11%, 18%, and 28% at 1, 2, 3, 4, and 5 years of therapy, respectively. The emergence of adefovir resistant mutant in patients with lamivudine resistance is more common than in treatment-naive patients. Two major mutations of adefovir resistance are rtN236T and rtA181V/T. Other mutations in the HBV polymerase (rtP237H, rtN238T/D, rtV84M, rtS85A, rtV214A, rtQ215S) reduce sensitivity to adefovir, but the significance of these mutations is unclear. The adefovir mutations slightly decrease adefovir susceptibility in vitro, suggesting mild clinical course after the occurrence of adefovir resistance. However, some patients show viral rebound and hepatic decompensation. Lamivudine, entecavir, and tenofovir are used currently for salvage therapy in patients with adefovir resistance. To reduce adefovir resistance, combination therapy with adefovir and lamivudine in patients with lamivudine resistance may be a treatment option. abstract_id: PUBMED:25155601 Efficacy of entecavir-tenofovir combination therapy for chronic hepatitis B patients with multidrug-resistant strains. The emergence of multidrug-resistant (MDR) strains of hepatitis B virus (HBV) is a major concern. This study aimed to investigate the efficacy and safety of combination therapy with entecavir (ETV) plus tenofovir disoproxil fumarate (TDF) against MDR HBV. To adjust for differences in baseline characteristics, inverse probability weighting (IPW) using propensity scores for the entire cohort and weighted Cox proportional hazards models were applied. Ninety-three consecutive patients who were treated with ETV-TDF combination therapy for &gt;6 months were included; at baseline, 45 were infected with HBV strains with genotypic resistance to lamivudine (LAM) and ETV (the LAM/ETV-R group), 28 with strains resistant to LAM and adefovir (ADV) (the LAM/ADV-R group), and 20 with strains resistant to LAM, ETV, and ADV (the LAM/ETV/ADV-R group). The median duration of rescue therapy was 13.0 (range, 6.7 to 31.7) months. Seventy-four of 93 patients (79.6%) achieved complete virologic suppression, after a median of 4.5 (95% confidence interval, 3.0 to 6.0) months. The cumulative probability of complete virologic suppression at month 6 was 63.6% (55.7%, 75.0%, and 65.0% in the LAM/ETV-R, LAM/ADV-R, and LAM/ETV/ADV-R groups, respectively). During the treatment period, these probabilities were not significantly different across the resistance profiles before and after IPW (P = 0.072 and P = 0.510, respectively). In multivariate analysis, a lower baseline HBV DNA level, but not resistance profiles, was an independent predictor of complete virologic suppression. Renal dysfunction was not observed during the treatment period. 
In conclusion, rescue therapy with ETV-TDF combination is efficient and safe in patients infected with MDR HBV strains regardless of the antiviral drug resistance profiles. Answer: The efficacy of entecavir treatment for lamivudine-resistant hepatitis B over 3 years shows both histological improvement and the emergence of entecavir resistance. Studies have demonstrated that long-term entecavir treatment can lead to histological improvement in patients with chronic hepatitis B who have developed resistance to lamivudine. For instance, one study showed that after 3 years of entecavir treatment, 82% of patients achieved a decrease in histological activity index score greater than 2 points, indicating histological improvement (PUBMED:19226381). Another study reported that 81% of patients showed histologic improvement and 38% showed improvement in fibrosis after 148 weeks of entecavir treatment (PUBMED:20305760). However, the same studies also reported the emergence of entecavir resistance. In the first study, HBV mutants resistant to entecavir emerged in 26% of patients, with hepatitis flare occurring in 40% of those with resistance (PUBMED:19226381). The second study found a 3-year cumulative resistance probability of 35.9% to entecavir (PUBMED:20305760). These findings suggest that while entecavir can be effective for histological improvement in the long term, the development of resistance remains a significant concern. To address entecavir resistance, combination therapies have been explored. For example, combination therapy with entecavir and adefovir has shown to be effective in patients with resistance to both entecavir and lamivudine, especially in those without prior adefovir resistance (PUBMED:22246827). Additionally, tenofovir-based therapies have been effective in managing patients with resistance to both lamivudine and entecavir, with no resistance to tenofovir disoproxil fumarate observed through 96 weeks of treatment (PUBMED:24929235, PUBMED:28669175). In conclusion, entecavir treatment for lamivudine-resistant hepatitis B can lead to significant histological improvement over 3 years, but the risk of developing entecavir resistance is a concern that needs to be addressed, potentially through combination therapies or alternative antiviral agents.
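As an illustrative aside, several abstracts in this block summarize outcomes with Kaplan-Meier estimates (for example, the cumulative probability of complete virologic suppression at month 6 in PUBMED:25155601, or cumulative resistance rates elsewhere). The following is a minimal sketch, in plain Python, of how such a product-limit curve is computed; the follow-up times and event indicators are hypothetical and are not taken from any of the cited studies.

    def kaplan_meier(times, events):
        """Product-limit (Kaplan-Meier) estimate of the event-free curve.

        times  -- follow-up time (e.g., months) for each patient
        events -- 1 if the event (e.g., complete virologic suppression) occurred
                  at that time, 0 if the patient was censored
        Returns a list of (time, probability of remaining event-free).
        """
        data = sorted(zip(times, events))
        surv = 1.0
        curve = []
        for t in sorted(set(times)):
            d = sum(e for tt, e in data if tt == t)   # events observed at time t
            n = sum(1 for tt, _ in data if tt >= t)   # patients still at risk at t
            if d:
                surv *= 1 - d / n
                curve.append((t, surv))
        return curve

    # Hypothetical cohort of 10 patients: months to suppression (or censoring)
    times = [2, 3, 3, 4, 5, 6, 6, 8, 10, 12]
    events = [1, 1, 0, 1, 1, 1, 0, 1, 0, 1]
    for t, s in kaplan_meier(times, events):
        print(f"month {t}: cumulative suppression = {1 - s:.1%}")

One minus the event-free probability is the cumulative event probability of the kind reported in the abstracts (ignoring competing risks, which is adequate for illustration).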
Instruction: Is uterine blood flow controlled locally or systemically in the pregnant rabbit? Abstracts: abstract_id: PUBMED:8267054 Is uterine blood flow controlled locally or systemically in the pregnant rabbit? Objective: We tested the hypothesis that uterine blood flow is regulated by systemic circulating factors. The alternative hypothesis is that uterine blood flow is regulated by local factors. Study Design: Adult female New Zealand White rabbits were subjected to a unilateral tubal ligation and thereafter allowed to become pregnant (n = 9). A group of nonpregnant one-tube-ligated animals served as controls (n = 8). On day 21 of gestation uterine blood flow in the pregnant and nonpregnant uterine horns were measured with 15 microns microspheres. The concentration of prostaglandin E2 metabolites were measured in blood from the uterine veins and from the arterial circulation. Results: Absolute uterine blood flow in the pregnant uterine horn was 12.9 +/- 4.7 versus 5.2 +/- 1.4 ml in the nonpregnant horn (p &lt; 0.05). However, when expressed by blood flow per gram of tissue they were not different (p &gt; 0.1). The uterine blood flow for the nonpregnant uterine horn in the pregnant animals was the same as that of the horns from nonpregnant animals. The level of prostaglandin E metabolites was greater in the uterine vein draining the pregnant horn compared to the nonpregnant horn (p &lt; 0.05). Conclusion: These data support the conclusion that the increase in uterine blood flow observed during pregnancy is controlled largely by local factors induced by pregnancy. abstract_id: PUBMED:123276 Uterine blood flow in the pregnant rabbit. Two methods are described for the measurement of uterine blood flow in the pregnant rabbit. The first involves the use of a Parks ultrasonic Doppler probe placed over the exposed uterine artery. The second method uses a drop counter system connected between the uterine and jugular veins. The Doppler flowmeter was used to measure uterine arterial blood flow in twenty rabbits on Day 28 or 29 of pregnancy. No significant difference was observed between blood flow on these 2 days and the absolute blood flow to one horn (plus or minus S. E.) was found to be 16.8 plus or minus 1.4 ml/min, equivalent to 27.1 plus or minus 1.8 ml/100 g tissue/min. Using the drop recorder technique, the flow to one uterine horn in eleven rabbits on Day 27 or 28 of pregnancy was 12.5 plus or minus 1.9 ml/min, equivalent to 23.6 plus or minus 3.2 ml/100 g tissue/min. The pressure-flow relationship in the uterine vascular bed was studied using the Doppler flowmeter and graded mechanical occlusion of the arterial supply. Within the range of pressures studied, the flow was found to be linearly related to the arterio-venous pressure difference. This suggests that the uterine vascular bed was fully dilated under the conditions of study. abstract_id: PUBMED:1208591 Local regulation of the uterine blood flow by the umbilical circulation. Observations were made of the responses of the uterine blood flow in the near-term pregnancy to occlusion of the umbilical circulation to a few cotyledons of the near-term sheep placenta and in one placenta of the multiparous rabbit pregnancy. It was found that the uterine blood flow declined to 67% of its predicted value 1 day after umbilical ligation in the sheep placenta and to 61% of its predicted value 1 day after the death of one of the fetuses of the rabbit pregnancy. 
The change in the uterine blood flow in response to the occlusion of the umbilical blood supply to the adjacent area is a local response and is similar in its time course and magnitude to the response of the whole placenta which has been previously observed by Raye et al. (9). This local response of the uterine blood flow is considered to be evidence that the uterine blood flow is in part determined and controlled by the structural or chemical nature of the adjacent fetal compartment. abstract_id: PUBMED:7246659 Uterine blood flow distribution after indomethacin infusion in the pregnant rabbit. The distribution of uterine blood flow (UBF) in the chronically instrumented pregnant term rabbit was examined before and after indomethacin infusion. A significant fall in placental but not myometrial blood flow was observed. The fall in placental blood flow was significantly higher in placentas which were implanted in the midsection of the uterine horn than in those implanted in both ends of the horn. The results suggest that different physiologic mechanisms may regulate the blood flows in the placenta and the myometrium. abstract_id: PUBMED:24888932 Importance of optimal local uterine blood flow for implantation. Aims: The aim of this study was to determine whether uterine blood flow is an effective parameter to anticipate uterine receptivity. Material And Methods: The local uterine blood flow was measured in the endometrium and on the outside of the uterus in mice during the early stage of pregnancy and in an implantation failure mouse model using transient and local suppression of signal transducer and activator of transcription-3 activity during implantation. Results: The local uterine blood flow was dramatically increased after mating and was decreased towards the time of implantation. The local uterine blood flow at 2.5 days post-coitus in signal transducer and activator of transcription-3 decoy transferred mice was significantly higher than in control mice. However, the range of individual values was too wide to find a cut-off point. Conclusions: It is necessary to decrease local uterine blood flow after ovulation to prepare the opening of the implantation window. The optimal local uterine blood flow is regulated by timed events during pregnancy. The range of individual values of uterine blood flow is wide using a laser Doppler blood flow meter. This parameter itself may not be an appropriate parameter to evaluate the prospect of uterine receptivity. abstract_id: PUBMED:1109179 Uterine prostaglandin E secretion and uterine blood flow in the pregnant rabbit. Studies were performed in pregnant rabbits to assess the effect of inhibition of prostaglandin synthesis on uterine blood flow. Cardiac output and uteroplacental blood flow (UPBF) were measured using radiolabeled microspheres. Prostaglandin E (PGE) concentration was measured by radioimmunoassay in the uterine vein and peripheral artery of the pregnant nephrectomized rabbit. Either meclofenamate or indomethacin 2 mg/kg was utilized to inhibit prostaglandin synthesis. Systemic arterial pressure increased from 86 mm Hg to 98 mm Hg (P less than 0.0001) after prostaglandin inhibition. Cardiac output was unchanged after the inhibition of prostaglandin synthesis, 326 ml/min to 7.8 ml/min. Uterine vein PGE concentration was extremely high, 172.4 ng/ml, with concomitant peripheral arterial PGE of 2.1 ng/ml.
Intravenous administration of either meclofenamate or indomethacin reduced uterine vein PGE to 23 ng/ml (P less than 0.01) and arterial PGE to 1.0 ng/ml (P less than 0.05). Male and nonpregnant female rabbits had lower arterial PGE, 0.37 ng/ml (P less than 0.05). Studies in non-nephrectomized pregnant animals demonstrated that uteroplacental secretion of PGE was greater than five times renal secretion. These studies demonstrate that the rabbit uteroplacental unit is a rich source of PGE and suggest that production of the vasoactive lipid may have a key role in regulating UPBF during pregnancy. abstract_id: PUBMED:20061112 Determining uterine blood flow in pregnancy with magnetic resonance imaging. Objective: The purpose of this study is to determine the feasibility of measuring total uterine blood flow in pregnancy using a magnetic resonance imaging (MRI) technique. Methods: Uterine blood flow was determined in pregnant women in whom MRI was being carried out to assess a fetal anomaly. A two-dimensional time-of-flight magnetic resonance (MR) angiogram sequence was performed. Scout images and a peripherally gated phase contrast MR sequence were planned to study simultaneous blood flow in the uterine and ovarian arteries. Results: The MR pelvic angiogram sequence was completed in 13 women. The uterine arteries were visualized and their cross-sectional area determined. The complexity of the pelvic blood supply prevented the calculation of blood flow velocity and, thus, total uterine blood flow. Conclusion: The measurement of total uterine blood flow during pregnancy was not possible using our MR technique. The ovarian vessels were not consistently visualized. Doppler ultrasonography remains the best modality by which to estimate total uterine blood flow in pregnancy. abstract_id: PUBMED:14689529 Uterine artery blood flow velocity waveforms during uterine contractions. Objective: No quantitative or qualitative Doppler velocimetry classification of vascular flow resistance covering all stages of forward and reversed flows exists. The objective of this study was to characterize uterine artery (UtA) flow velocity waveforms (FVWs) obtained during an oxytocin challenge test (OCT) and compare them to FVWs in spontaneous normal labor. Methods: Uterine artery Doppler velocimetry was performed during and between uterine contractions in 61 high-risk pregnancies subjected to an OCT and in 20 normal pregnancies undergoing spontaneous labor. FVWs were classified relative to the presence of forward/absent/reversed flow during systole and diastole, and the time-averaged flow velocity over the heart cycle. Results: Eleven different FVW classes were identified. No relationship between FVWs recorded during uterine inertia and contractions was found (P ≥ 0.2). In both groups, only forward FVWs were recorded between contractions, whereas during contractions flow reversal was more common in the OCT group (P ≤ 0.002). In cases of predominantly reversed flow, a reciprocal relationship to FVW classes recorded in the contralateral artery was found. Conclusions: UtA FVW patterns recorded during uterine contractions were not predicted by flow patterns recorded during uterine inertia. Reversal of flow direction was more common during oxytocin-induced uterine contractions than during spontaneous contractions. In cases of predominantly reversed flow, domains of the uterus may be supplied by blood from the contralateral UtA.
These observations give new insights into the circulatory dynamics of the uterus during labor, and also point to a possible vasoconstrictory effect in the UtAs of oxytocin at high concentrations. abstract_id: PUBMED:7137236 Effect of vasoactive intestinal polypeptide on uterine blood flow in pregnant ewes. Vasoactive intestinal polypeptide has been localized in the uterine vasculature, uterine smooth muscle and the placenta of several species. Vasoactive intestinal polypeptide is a potent uterine vasodilator in nonpregnant sheep and also abolishes spontaneous uterine contractile activity, but the effects of this polypeptide on the uterine vasculature of the pregnant animal is currently unknown. The present experiments were performed in seven late-term pregnant sheep which were chronically catheterized to evaluate the uterine vascular effects of vasoactive intestinal polypeptide. An intra-arterial catheter was placed in a branch of the main uterine artery to allow administration of vasoactive intestinal polypeptide directly into the uterine vasculature. Uterine blood flow was continuously monitored via an electromagnetic flow transducer on both main uterine arteries. Vasoactive intestinal polypeptide infused at the rate of 1 to 30 micrograms/min produced dose-related reductions in uterine blood flow (33% +/- 9% at 30 micrograms/min). This decrease was due to a reduction in systemic arterial blood pressure, since calculated resistance in the uterine vasculature that received the vasoactive intestinal polypeptide did not change significantly. In addition, the contralateral uterine vasculature that did not receive direct intra-arterial infusions of vasoactive intestinal polypeptide showed identical changes. These data suggest that vasoactive intestinal polypeptide produces peripheral vasodilation at doses which have very little uterine effect locally. These data can be interpreted to mean either that high local endogenous production of vasoactive intestinal polypeptide prevents exogenously administered vasoactive intestinal polypeptide from exerting its vascular effects, or that vasoactive intestinal polypeptide is a very weak uterine vasodilator in pregnant ewes. The clarification of these possibilities will require further experimentation. abstract_id: PUBMED:3062954 Involution-induced changes in arterial uterine blood flow The change of resistance in the uterine arteries was studied in 20 patients showing a normal involution on five consecutive days. A uterine artery was examined by Doppler sonography, the Doppler profile evaluated, and the quotient, Fmean, resistance index, and pulsatility index were calculated from the systole and diastole. These parameters, which provided information on vascular resistance independently of the angle between the Doppler beam and the vessel, indicated a continuous increase during the puerperium; the differences in values for different days were significant. The changes in the Doppler profiles in the puerperial period pointed to an increasing vascular resistance. In one-half of the patients this was shown only by an increasing slope to the systolic peak, a sharp drop to early diastole, and a clear distinction between systole and diastole. In the other half of the patients the Doppler curves were like those found in pregnant patients with EPH gestosis. This indicated that the pathologic mechanisms leading to increased resistance in the uterine vessels were the same in both groups. 
One cause of this may be contraction and compression of the blood vessels, the other, and probably principal, cause was reduction of the vascular system due to histolysis, as in the puerperium, or insufficient development of the vascular system, as in EPH gestosis. That is to say, functional and morphological changes occur in both cases. No decrease in resistance in the uterine vascular bed was detected in cases of puerperial subinvolution of the uterus. Answer: Uterine blood flow in the pregnant rabbit is controlled largely by local factors induced by pregnancy. This conclusion is supported by a study where adult female New Zealand White rabbits underwent unilateral tubal ligation and were allowed to become pregnant. The study found that absolute uterine blood flow in the pregnant uterine horn was significantly higher than in the nonpregnant horn, but when expressed per gram of tissue, they were not different. Additionally, the uterine blood flow for the nonpregnant uterine horn in the pregnant animals was the same as that of the horns from nonpregnant animals. The level of prostaglandin E metabolites was greater in the uterine vein draining the pregnant horn compared to the nonpregnant horn, suggesting a local regulation of blood flow (PUBMED:8267054). Other studies have also indicated local regulation of uterine blood flow. For instance, the local response of the uterine blood flow to occlusion of the umbilical blood supply in near-term sheep and rabbit pregnancies suggests that the uterine blood flow is in part determined and controlled by the structural or chemical nature of the adjacent fetal compartment (PUBMED:1208591). Additionally, the distribution of uterine blood flow after indomethacin infusion in the pregnant rabbit showed a significant fall in placental but not myometrial blood flow, indicating that different physiological mechanisms may regulate blood flows in the placenta and the myometrium, which is consistent with local control (PUBMED:7246659). Furthermore, studies on the secretion of prostaglandin E (PGE) and its effect on uterine blood flow in pregnant rabbits have demonstrated that the rabbit uteroplacental unit is a rich source of PGE, which may play a key role in regulating uteroplacental blood flow during pregnancy, again supporting local regulation (PUBMED:1109179). In summary, the evidence from these studies suggests that uterine blood flow in the pregnant rabbit is predominantly controlled by local factors rather than systemic circulating factors.
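As an illustrative aside, the rabbit studies above express uterine blood flow both in absolute terms (ml/min) and normalized per 100 g of tissue, and relate flow to the arteriovenous pressure difference when discussing vascular resistance. A minimal sketch of those two conversions is shown below; the tissue weight and venous pressure are assumed, illustrative values rather than data from the cited papers.

    def flow_per_100g(flow_ml_min, tissue_g):
        # Normalize an absolute flow (ml/min) to ml/100 g tissue/min
        return flow_ml_min / tissue_g * 100.0

    def uterine_vascular_resistance(arterial_mmHg, venous_mmHg, flow_ml_min):
        # Resistance = arteriovenous pressure difference / flow (mmHg*min/ml)
        return (arterial_mmHg - venous_mmHg) / flow_ml_min

    # Illustrative: a ~62 g uterine horn carrying 16.8 ml/min gives ~27.1 ml/100 g/min
    print(round(flow_per_100g(16.8, 62.0), 1))
    # Assumed pressures (arterial 86 mmHg, venous 5 mmHg) for a resistance estimate
    print(round(uterine_vascular_resistance(86.0, 5.0, 16.8), 2))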
Instruction: Does cataract surgery reduce the long-term risk of glaucoma in eyes with pseudoexfoliation syndrome? Abstracts: abstract_id: PUBMED:26749122 Does cataract surgery reduce the long-term risk of glaucoma in eyes with pseudoexfoliation syndrome? Purpose: To compare glaucoma development and intraocular pressure (IOP) in the longer term following phacoemulsification cataract surgery in eyes with and without pseudoexfoliation syndrome (PEX). Methods: Fifty-one patients with PEX were compared with 102 age- and gender-matched controls without PEX. Patients were re-examined a mean of 76 (SD 5.4) months after cataract surgery, recording IOP, glaucoma diagnosis, glaucoma treatment and LogMAR. Data from the preoperative visit (baseline) and IOP on the first postoperative day were obtained from medical records. A glaucoma parameter was predefined as patients developing glaucoma or needing increased glaucoma treatment during the postoperative time period. Results: One new glaucoma case in each group was diagnosed postoperatively, yielding glaucoma incidences of 0.47 cases per 100 person-years [95% confidence interval (CI) 0.006-2.61] and 0.17 cases per 100 person-years (CI 0.002-0.95) in the PEX and control groups respectively (p = 0.53). IOP declined by 2.6 (SD 4.0) mmHg in the PEX group (p &lt; 0.001) and 1.9 (SD 3.5) mmHg in the control group (p &lt; 0.001) from baseline to the re-examination, with a non-significant group difference (p = 0.310). IOP spike (≥6 mmHg increase) was significantly associated with the glaucoma parameter, both within the PEX (p = 0.034) and the control group (p = 0.044). Conclusion: The number of newly diagnosed glaucoma cases was lower than expected 6-7 years following cataract extraction, especially in the PEX group, which indicates that PEX eyes benefit particularly from cataract surgery in terms of IOP and glaucoma development. abstract_id: PUBMED:25950660 The Prevalence of Pseudoexfoliation and the Long-term Changes in Eyes With Pseudoexfoliation in a South Indian Population. Purpose: To report the prevalence, long-term changes and associated factors for pseudoexfoliation (PEX) in a population aged 40 years and above from rural and urban south India. Materials And Methods: At baseline (the Chennai Glaucoma Study), 7774 subjects were examined. After 6 years, as a part of the incidence study, 133 of the 290 subjects diagnosed with PEX at baseline were reexamined for long-term changes. Participants had detailed examination at base hospital. Results: At baseline PEX was noted in 290 [3.73%, 95% confidence interval (CI), 3.3-4.2] subjects. It was associated with glaucoma in 24 (8.3%), ocular hypertension (OHT) in 21 (7.2%), and occludable angles in 24 (8.3%) subjects. The age-adjusted and sex-adjusted prevalence was 3.41% (95% CI, 3.39-3.43). Increasing age was a significant associated factor. Using the 40- to 49-year age group as a reference, the odds ratio increased from 8.4 (95% CI, 4.1-17.1) for the 50- to 59-year age group to 51.2 (95% CI, 25.8-101.6) for the 70 years and above age group. Other associated factors were rural residence (P&lt;0.001), higher intraocular pressure (P&lt;0.001), cataract (P&lt;0.001), being underweight (P=0.01), manual labor (P=0.03), and aphakia (P&lt;0.001). Of the 133 subjects reexamined, 8 (6.0%) subjects developed glaucoma and all had OHT at baseline. Rates of cataract surgery were (P&lt;0.001) higher in subjects with PEX. 
Conclusion: Prevalence of PEX was higher in the rural population, and baseline OHT was a significant factor for conversion to glaucoma. abstract_id: PUBMED:1427130 Exfoliation syndrome as a risk factor for optic disc changes in nonglaucomatous eyes. Exfoliation syndrome as a possible risk factor for morphologic changes of the optic nerve was examined in 66 patients with unilateral exfoliation and no glaucoma. K-readings (7.74 +/- 0.3 mm and 7.75 +/- 0.3 mm), axial lengths (23.1 +/- 1.1 mm and 23.1 +/- 1.1 mm), and refraction (+0.9 +/- 2.3 D and +1.1 +/- 2.3 D) did not differ in exfoliative and contralateral nonexfoliative eyes. The difference in mean intraocular pressure (IOP), 17.2 +/- 3.3 mmHg versus 15.6 +/- 3.2 mmHg, respectively, was statistically highly significant (P < 0.001). The difference in mean visual acuity, 0.8 +/- 0.3 versus 0.9 +/- 0.2, respectively, was significant (P < 0.05). The difference in visual acuity between the pairs of eyes was explained by the more frequent subcapsular cataract in exfoliative eyes. Lens opacity values (opacity lens meter), 27.9 +/- 8.3 and 28.0 +/- 8.4 opacity units, respectively, were similar. Disc area, neuroretinal rim area, rim/disc ratio, cup area, and cup volume values analyzed with the Imagenet (Topcon) nerve head analyzer did not differ significantly between the eyes. It was concluded that exfoliation as such does not induce optic nerve head changes but indicates a risk factor for elevated IOP and lens opacification. abstract_id: PUBMED:23280247 Positioning of the posterior intraocular lens in the longer term following cataract surgery in eyes with and without pseudoexfoliation syndrome. Purpose: To assess long-term positioning of posterior chamber intraocular lenses within the capsular bag in eyes with pseudoexfoliation syndrome. Methods: The study includes 44 patients with pseudoexfoliation syndrome and 85 age-matched controls, who underwent cataract surgery in 2001 and 2002 at the Eye Department, Oslo University Hospital. In 2008, all patients were re-examined. A comparison of the extent of possible decentration in eyes with and without pseudoexfoliation syndrome was made by evaluating Scheimpflug images (Pentacam) of the anterior segment. Results: It was found that, 6-7 years following cataract surgery, posterior chamber intraocular lenses were positioned lower in eyes with pseudoexfoliation syndrome than in control eyes. The difference was statistically significant (p=0.01). Downward shift was associated with the presence of glaucoma only in eyes with pseudoexfoliation syndrome (p=0.01). No patients had visual disturbances related to displacement of the intraocular lens. Three of the patients with pseudoexfoliation syndrome (6.8%) had observable pseudophacodonesis by slit-lamp examination, compared to one in the control group (1.2%). The study demonstrated that the Pentacam is an appropriate instrument to measure decentration of intraocular lenses. Conclusion: The study suggests that, 6-7 years after cataract surgery, the intraocular lenses within the capsular bag are more prone to decentration in pseudoexfoliation syndrome eyes, compared to controls. abstract_id: PUBMED:19525725 Long-term results of deep sclerectomy with collagen implant in exfoliative glaucoma. Purpose: To evaluate the long-term results and complications of deep sclerectomy with collagen implant in exfoliative glaucoma (EXG).
Patients And Methods: A total of 22 eyes of 22 patients with medically uncontrolled EXG were consecutively included in this study and were followed-up prospectively. Intraocular pressure (IOP), number of antiglaucoma medications, visual acuity, and slit-lamp examination were performed before and after surgery, at day 1, week 1, and at months 1, 3, 6, 9, 12, 18, 24, 30, 36, 48, and 54. Intraoperative and postoperative complications were recorded and managed accordingly. Complete success was defined as IOP &lt; or =18 mm Hg without antiglaucoma medications and qualified success as IOP &lt; or =18 mm Hg with or without antiglaucoma medications. Results: After a mean follow-up time of 48.5+/-12.2 months (range, 12 to 54), mean IOP was significantly reduced from 29.9+/-8.1 mm Hg preoperatively to 13.2+/-3.2 mm Hg (P&lt;0.0001). Complete and qualified success rates were 54.5% and 90.9%, respectively. The mean number of antiglaucoma medications per patient was significantly reduced from 2.4+/-0.67 to 0.59+/-0.85 (P&lt;0.0001). Goniopuncture with Nd:YAG laser was performed on 14 eyes (63.6%). Mean IOP was reduced from 21.8+/-8.8 mm Hg to 9+/-3.2 mm Hg after goniopuncture (P=0.00058). Four eyes (18.2%) required 5-fluorouracil subconjunctival injections and 7 eyes (31.8%) showed cataract progression. Conclusions: Deep sclerectomy with collagen implant seems to provide reasonable long-term IOP control in EXG with few postoperative complications. abstract_id: PUBMED:16187993 Phacoemulsification in trabeculectomized eyes. Purpose: To evaluate retrospectively risk indicators for cataract surgery and the effect of phacoemulsification on intraocular pressure (IOP) control in eyes that have undergone trabeculectomy. Methods: We undertook a retrospective analysis of 138 eyes with primary open-angle glaucoma (POAG) or exfoliation glaucoma (EG) in 138 consecutive patients over the age of 40 years undergoing trabeculectomy with no antimetabolites performed by one surgeon. Of the 48 eyes (35%) undergoing a cataract operation during the follow-up period of 2-5 years, 46 were included in this analysis. Their IOP, glaucoma medication and best corrected visual acuity (BCVA) before cataract surgery and at the last follow-up were compared. Risk indicators for cataract surgery were analysed. Results: Cataract operations were performed 5.1-58.1 months (median 14.4 months) after trabeculectomy. The mean length of follow-up after cataract surgery was 25.3 months (SD 12.9, median 24.8 months). Before cataract surgery, the mean IOP was 16.2 mmHg (SD 4.9) and the mean number of topical antiglaucoma medicines 0.8 (SD 1.0). At the most recent visit, mean IOP was 17.3 mmHg (SD 6.4) (p = 0.35), and the mean number of medicines was 1.3 (SD 1.1) (p = 0.0007). Of the 22 eyes in which treatment had been categorized as completely successful (IOP &lt; or = 21 mmHg without other therapy) before cataract surgery, 13 (59%) had remained so. The number of failures (IOP &gt; 21 mmHg, or more than one medication needed or further surgery performed) increased from 14 (30%) before surgery to 28 (61%) afterwards. The proportion of failures in the cataract surgery group was twice that in the no cataract surgery group (61% versus 31%). In a proportional hazards regression, only age (73.9 years [SD 9.4] and 68.1 years [SD 9.8] in patients with and without cataract surgery, respectively) proved to be a significant (p = 0.001) indicator for surgery. 
Conclusion: The results of this retrospective study on consecutive clinical cases of trabeculectomy indicate that cataract progression after trabeculectomy is mainly an age-related process. In more than half the eyes with good preoperative IOP control, this good control was maintained after cataract surgery. On the other hand, in some eyes cataract surgery may compromise IOP control even when surgery avoids the area of the bleb. abstract_id: PUBMED:21907537 Combined cataract and trabeculectomy surgery in eyes with pseudoexfoliation glaucoma. Purpose: To assess the short- and long-term effect of uneventful phacoemulsification, posterior chamber intraocular lens (IOL) implantation, and trabeculectomy on intraocular pressure (IOP) and glaucoma medication requirements in eyes with pseudoexfoliation glaucoma (PXG) and compare the results with those in eyes that had uneventful phacoemulsification only (reported in a previous study of the same cohort of pseudoexfoliation eyes). Setting: Private practice, Boston, Massachusetts, USA. Design: Comparative case series. Methods: A retrospective analysis was performed of consecutive PXG eyes that had uneventful combined phacoemulsification and trabeculectomy by the same surgeon. The change in IOP, glaucoma medication requirements, and logMAR corrected distance visual acuity was compared between the combined surgery group and the phaco-alone group. Results: The combined-surgery group (n = 138) had statistically significant reduced mean IOP and glaucoma medication requirements through 10 years postoperatively (P&lt;.018). The change in IOP and glaucoma medication requirements was greater in the combined-surgery group than in the phaco-alone group (n = 240); this was statistically significant up to 7 years postoperatively (P&lt;.022). The reduction in mean postoperative IOP was greater in eyes with a higher mean preoperative IOP. In the combined-surgery group, 13.8% of eyes required subsequent laser trabeculoplasty, glaucoma surgery, or both. Conclusions: Uneventful phacoemulsification, IOL implantation, and trabeculectomy resulted in significant long-term reduction in IOP and glaucoma medication requirements in eyes with PXG. Combined procedures resulted in greater and more longstanding reductions in IOP and glaucoma medication requirements and fewer 1-day postoperative IOP spikes than phacoemulsification alone. Financial Disclosure: No author has a financial or proprietary interest in any material or method mentioned. abstract_id: PUBMED:35477245 INCIDENCE OF PSEUDOEXFOLIATION SYNDROME AND GLAUCOMA IN A SET OF MORE THAN 14,000 EYES OF PATIENTS OPERATED FOR A CATARACT. Purpose: Evaluation of the incidence of pseudoexfoliation (PEX) syndrome and glaucoma in cataract patients operated at our Clinic, with an analysis of possible complications. Methodology: Retrospective evaluation of medical records of PEX syndrome patients who have undergone cataract surgery at the Gemini Eye Clinic Ostrava-Hrusov was undertaken. The study period was from November 2016 to April 2021. The evaluated parameters were the incidence of PEX syndrome, age and gender of patients, intraocular pressure (IOP) before the surgery, pre-existing therapy of previously diagnosed secondary glaucoma and the occurrence of perioperative complications. Results: In the study period of 4.5 years, out of the total number of 14 167 operated eyes with cataracts there were 852 eyes of 689 patients with PEX syndrome diagnosed at our Clinic, i.e. 6.0 %. 
The mean age was 76.9 years, the median 77 years, range 54-100 years. The observed pathology was more common in women, at a ratio of 1.84:1 (552:300). Elevation of IOP above 21 mmHg was recorded in 118 eyes; in 14 of them IOP reached values over 30 mmHg. Previously diagnosed and long-term treated secondary glaucoma was confirmed in 153 patients (204 eyes); 22 of these eyes had a history of antiglaucoma laser treatment (19 eyes) and/or surgery (5 eyes). Perioperatively, pathological findings accompanying PEX syndrome were recorded in 231 eyes: most often poor artificial mydriasis (189 eyes), followed by subluxation of the lens (31 eyes) or zonular fragility (17 eyes). To reduce the risk of perioperative and postoperative complications, implantation of a capsular tension ring was indicated in 20 eyes. Complications during the procedure occurred in 11 eyes, of which 8 eyes were diagnosed with advanced cataract. Conclusion: PEX syndrome and glaucoma are relatively common diseases that can complicate the lives of patients and eye surgeons. The incidence of PEX syndrome in our cataract patients was 6%. Proper diagnosis of this disease is important not only for the possible occurrence of numerous complications during and after cataract surgery, but also for the possible presence of secondary glaucoma. It also serves to detect possible involvement of the contralateral eye. In addition, due to the involvement of practically all tissues in the body, the patient is endangered by numerous, especially vascular, comorbidities. For these reasons, we find it appropriate that these patients are observed by other healthcare specialists. In our experience, early indication of cataract surgery is important to achieve a lower degree of zonular fragility and a softer lens core. In addition, lower levels of proinflammatory pseudoexfoliation material occur in the anterior segment of the eye in the early stages, which may have a beneficial effect on postoperative healing. abstract_id: PUBMED:23036566 Pseudoexfoliation syndrome and the long-term incidence of cataract and cataract surgery: the Blue Mountains Eye Study. Purpose: To assess whether the pseudoexfoliation syndrome (PXS) is associated with the long-term incidence of cataract or cataract surgery. Design: Population-based cohort study. Methods: The Blue Mountains Eye Study examined 3654 persons 49 years of age and older at baseline; 2564 were re-examined after 5 or 10 years, or both. PXS was recorded at the baseline eye examination by an ophthalmologist. Masked graders assessed lens photographs using the Wisconsin Cataract Grading System. Generalized estimating equation regression models were used to examine the association between PXS and cataract by eye. Results: Eyes with PXS had a significantly greater prevalence of cortical cataract (P=.02) and nuclear cataract (P < .0001) than eyes without PXS. The association between PXS and cortical cataract, however, did not persist after further adjustment for age, gender, smoking, diabetes, steroid use, myopia, socioeconomic status, and open-angle glaucoma (odds ratio [OR], 0.89; 95% confidence interval [CI], 0.53 to 1.46), whereas the association between PXS and nuclear cataract persisted after adjustment for the above confounders (OR, 1.90; 95% CI, 1.04 to 3.48). In addition, significant associations were found between the presence of PXS at baseline and the 10-year incidence of either nuclear cataract (P < .0001) or cataract surgery (P < .0001).
These associations persisted after adjustment for the potential confounders listed above (OR, 3.25; 95% CI, 1.38 to 7.65; and OR, 4.09; 95% CI, 2.25 to 7.44, respectively). No significant cross-sectional or longitudinal associations were found between PXS and posterior subcapsular cataract. Conclusions: Long-term follow-up data from this population-based older cohort suggest that the presence of PXS is associated with an increased risk of nuclear cataract and cataract surgery. abstract_id: PUBMED:22973738 Long-term surgical results of initial trabeculotomy combined with sinusotomy performed inferiorly. Purpose: To evaluate retrospectively the long-term effects of initial trabeculotomy combined with sinusotomy performed inferiorly. Patients And Method: A total of 128 eyes of 100 patients who received initial glaucoma surgery were enrolled. In 36 eyes, removal of Schlemm's canal endothelium was also performed (removed group). The results were compared with those of the intact group. Results: In primary open angle glaucoma (POAG), mean intraocular pressure (IOP) at 3 years after surgery was 14.6 mmHg (intact) and 15.4 mmHg (removed). Kaplan-Meier life-table analysis showed that qualified success rates, defined as an IOP of 20 mmHg or lower, were 62.2% for the intact group at 8 years and 45.2% for the removed group at 5 years. The results in developmental glaucoma (DG) were similar to those in POAG. No statistical differences in postoperative IOP between the intact and removed groups were seen in either POAG or DG. In exfoliation glaucoma (XFG), mean IOP for the intact group at 3 years was 17.3 mmHg and for the removed group at 2 years 15.4 mmHg. The success rates were 25.2% for the intact group at 3.5 years and 64.3% for the removed group at 4.5 years. The results in the intact group were worse than in the POAG patients. Although visual disturbance was seen in 13% of the patients, the major cause was the progression of cataracts. Conclusions: The long-term results were the same as those of previous reports on surgery performed superiorly, including the frequency of visual disturbance. However, the removal of Schlemm's canal endothelium is necessary in XFG for better IOP control. Answer: Cataract surgery appears to have a beneficial effect on intraocular pressure (IOP) and may reduce the long-term risk of glaucoma development in eyes with pseudoexfoliation syndrome (PEX). A study comparing glaucoma development following phacoemulsification cataract surgery in eyes with and without PEX found that the number of newly diagnosed glaucoma cases was lower than expected 6-7 years after cataract extraction, particularly in the PEX group. This suggests that eyes with PEX may particularly benefit from cataract surgery in terms of IOP and glaucoma development (PUBMED:26749122). Additionally, in a South Indian population, the prevalence of PEX was associated with glaucoma, ocular hypertension (OHT), and occludable angles. Baseline OHT was a significant factor for conversion to glaucoma, and rates of cataract surgery were higher in subjects with PEX, indicating a potential link between PEX, cataract development, and glaucoma (PUBMED:25950660). In another study, exfoliation syndrome was identified as a risk factor for elevated IOP but not directly for optic nerve head changes in non-glaucomatous eyes. This suggests that while PEX itself may not induce optic nerve changes, it is a risk factor for conditions that could lead to glaucoma (PUBMED:1427130).
Furthermore, a study on the positioning of posterior chamber intraocular lenses post-cataract surgery found that lenses were more prone to decentration in PEX eyes compared to controls, which could have implications for IOP and glaucoma risk (PUBMED:23280247). In summary, while direct causality cannot be definitively established from these studies, the evidence suggests that cataract surgery may have a protective effect against the development of glaucoma in eyes with pseudoexfoliation syndrome, likely due to the reduction in IOP that follows the procedure.
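As an illustrative aside, the first abstract in this block reports glaucoma risk as cases per 100 person-years with an exact confidence interval. The sketch below shows one common way to compute such a rate together with a Garwood (chi-square based) exact Poisson interval; the event count and person-time are hypothetical, scipy is assumed to be available, and the cited paper may have used a different interval method.

    from scipy.stats import chi2

    def rate_per_100py(events, person_years, alpha=0.05):
        """Incidence rate per 100 person-years with an exact Poisson CI."""
        rate = events / person_years * 100
        lower_count = 0.0 if events == 0 else chi2.ppf(alpha / 2, 2 * events) / 2
        upper_count = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
        return rate, lower_count / person_years * 100, upper_count / person_years * 100

    # Hypothetical: 1 incident glaucoma case observed over 215 person-years
    rate, lo, hi = rate_per_100py(1, 215.0)
    print(f"{rate:.2f} per 100 person-years (95% CI {lo:.3f}-{hi:.2f})")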
Instruction: Does the size of ureteral stent impact urinary symptoms and quality of life? Abstracts: abstract_id: PUBMED:33459631 Quality of life for ureteral stone patients (diversion). The high prevalence and incidence of urinary stone disease, the severity of its symptoms, its high recurrence rate and resulting healthcare costs make urolithiasis a chronic disease with significant impact on healthcare services and patient quality of life. There are several general tools available to assess health-related quality of life in patients with chronic illnesses, as well as some specific ones directed to urinary stone disease, such as the ureteral stent symptom questionnaire. Patients with an obstructive ureteral stone, or those with an indwelling ureteral stent, often present symptoms that may affect their quality of life considerably. Patient education and counselling regarding stent-related symptoms, as well as medical treatment, may help improve their perception of quality of life. abstract_id: PUBMED:36905357 Antireflux Ureteral Stent Improves Stent-Related Symptoms and Quality of Life: A Prospective Randomized Controlled Trial. Objectives: To compare the effectiveness of antireflux ureteral stents on improving symptoms and quality of life of patients with ureteral stents. Materials and Methods: We randomized 120 patients with ureteral stones who required ureteral stent placement after ureteroscopic lithotripsy, of which 107 (56 in the standard ureteral stent group and 51 in the antireflux ureteral stent group) entered the final analysis. Severity of flank pain and suprapubic pain, visual analog scale (VAS), analgesic use after hospitalization, back soreness during micturition, gross hematuria, creatinine abnormality, hydronephrosis grade, symptomatic urinary tract infection (UTI), and quality of life were compared between the two groups. Results: There were no serious complications after operation in all 107 cases. The antireflux ureteral stent group had less flank pain and suprapubic pain (p < 0.05), less analgesic use after hospitalization (p < 0.05), less back soreness during micturition (p < 0.05), and a lower VAS (p < 0.05). The health status index scores (p < 0.05), dimensions of usual activities, and pain/discomfort (p < 0.05) in the antireflux ureteral stent group were statistically better than those in the standard ureteral stent group. There were no significant differences between the groups in creatinine abnormality, hydronephrosis grade, gross hematuria, and symptomatic UTI. Conclusions: The antireflux ureteral stent has the same safety and efficacy as the standard ureteral stent, and is significantly better than the standard ureteral stent in flank pain and suprapubic pain, VAS, analgesic use after hospitalization, back soreness during micturition, and quality of life. abstract_id: PUBMED:31441009 Intravesical stent position as a predictor of quality of life in patients with indwelling ureteral stent. Purpose: The internal drainage provided by a ureteral stent helps with the relief and prevention of ureteral obstruction. By definition, correct stent placement is one with a complete loop in both the renal pelvis and bladder. This prevents stent migration proximally or distally despite urinary flow, patient movement, and ureteral peristalsis. Methods: We performed a comparative prospective cross-sectional study assessing the impact of intravesical stent position on the quality of life in 46 patients with a ureteral stent.
This is done using the Ureteral Stent Symptom Questionnaire (USSQ). Results: 52.5% of patients had an ipsilaterally positioned intravesical stent, while the remainder had their stent positioned contralaterally. Intravesical stent position significantly influenced the quality of life. The USSQ score was worse for the contralateral group. Subscore analysis found that urinary symptoms and the body pain index contributed significantly to the morbidity. The majority of patients in the ipsilateral group reported no discomfort, as compared to the contralateral group. Conclusions: To the best of our knowledge, this is the first study assessing the impact of intravesical stent position on the quality of life in an Asian population. Intravesical stent position has a significant influence on patients' morbidity and quality of life, in particular their irritative urinary symptoms and body pain. It is imperative to ensure correct distal placement of the ureteric stent so that it does not cross the midline to the contralateral side. We believe that the USSQ should be used in daily clinical practice in assessing the symptoms related to indwelling ureteric stents. abstract_id: PUBMED:16039775 Does the size of ureteral stent impact urinary symptoms and quality of life? A prospective randomized study. Objective: To evaluate the effect of stent diameter on patients' symptoms and quality of life (QoL) by using dedicated questionnaires. Methods: We prospectively enrolled 34 patients with unilateral ureteral obstruction due to urinary stone undergoing ureteral stenting (17 patients with 4.8 F and 17 patients with 6 F stents) before treatment of stone disease. Twenty-one patients with lower urinary symptoms from other causes were used as a control group. Two questionnaires, one on QoL and another on stent-specific symptoms, were administered to patients one week after stent positioning and 4 weeks after removal. Results: There was a significant association between stent state and answers on pain and discomfort on the QoL questionnaire. A high percentage of patients reported anxiety and depression associated with the stent. A similar significant association was found between stent state and urinary symptoms and pain. No differences in QoL, urinary symptoms or pain were detected using stents of different sizes. Conclusions: Ureteral stents are invariably associated with urinary symptoms and impaired QoL. We did not find any difference between stents of different sizes, whereas there was a tendency for stents with a smaller diameter to dislodge more often. abstract_id: PUBMED:35387623 Quality of life impact and recovery after ureteroscopy and stent insertion: insights from daily surveys in STENTS. Background: Our objective was to describe day-to-day evolution and variations in patient-reported stent-associated symptoms (SAS) in the STudy to Enhance uNderstanding of sTent-associated Symptoms (STENTS), a prospective multicenter observational cohort study, using multiple instruments with conceptual overlap in various domains. Methods: In a nested cohort of the STENTS study, the initial 40 participants having unilateral ureteroscopy (URS) and stent placement underwent daily assessment of self-reported measures using the Brief Pain Inventory short form, Patient-Reported Outcome Measurement Information System measures for pain severity and pain interference, the Urinary Score of the Ureteral Stent Symptom Questionnaire, and the Symptoms of Lower Urinary Tract Dysfunction Research Network Symptom Index.
Pain intensity, pain interference, urinary symptoms, and bother were obtained preoperatively, daily until stent removal, and at postoperative day (POD) 30. Results: The median age was 44 years (IQR 29-58), and 53% were female. The size of the dominant stone was 7.5 mm (IQR 5-11), and 50% were located in the kidney. There was consistency among instruments assessing similar concepts. Pain intensity and urinary symptoms increased from baseline to POD 1, with apparent peaks in the first 2 days, remained elevated with the stent in situ, and varied widely among individuals. Interference due to pain, and bother due to urinary symptoms, likewise demonstrated high individual variability. Conclusions: This first study investigating daily SAS allows for a more in-depth look at the lived experience after URS and the impact on quality of life. Different instruments measuring pain intensity, pain interference, and urinary symptoms produced consistent assessments of patients' experiences. The overall daily stability of pain and urinary symptoms after URS was also marked by high patient-level variation, suggesting an opportunity to identify characteristics associated with severe SAS after URS. abstract_id: PUBMED:30957387 Effect of ureteral stent diameter on ureteral stent-related symptoms. Objective: This study investigated the correlation between ureteral stent diameter and stent-related symptoms. Methods: This study evaluated 71 patients (53 [74.6%] men, 18 [25.4%] women; mean [±SD] age 59.3 ± 14.2 years) who underwent ureteral stent placement before ureteroscopic lithotripsy (URSL) and in whom the ureteral stent tail was positioned inside the bladder without crossing the midline. All stents were Inlay Optima stents. Stent diameter (6 or 4.7 Fr) and length (24 or 26 cm) were chosen at the surgeon's discretion. Patients were classified into two groups (Group 1, 6-Fr stent; Group 2, 4.7-Fr stent). Urinary symptoms before insertion of the ureteral stents and on the day before URSL were assessed using the International Prostate Symptom Score (IPSS) and Overactive Bladder Symptom Score (OABSS). In addition, patient background and changes in the IPSS and OABSS were compared. Factors affecting ureteral stent-related symptoms were evaluated using multivariate analysis. Results: Compared with Group 2, Group 1 had a worse total IPSS (P = 0.02), as well as worse intermittency (P = 0.009), urgency (P = 0.008), voiding symptom (Q1 + Q3 + Q5 + Q6; P = 0.046), and storage symptom (Q2 + Q4 + Q7; P = 0.017) subscores on the IPSS, total OABSS (P = 0.045), and OABSS urgency subscore (P = 0.002). Multivariate analysis revealed that stent diameter was significantly associated with total IPSS (P = 0.007) and OABSS (P = 0.036). Conclusion: This is the first study to show that larger-diameter ureteral stents induce significantly worse urinary symptoms. Ureteral stents with smaller diameters are recommended to improve ureteral stent-related symptoms. abstract_id: PUBMED:26019964 Validation of the Arabic linguistic version of the Ureteral Stent Symptoms Questionnaire. Objective: To validate the Arabic version of the Ureteral Stent Symptoms Questionnaire (USSQ). Patients And Methods: The English version of the USSQ was translated into Arabic using a multi-step process by three urologists and two independent translators. The Arabic version was validated by asking 37 patients with temporary unilateral ureteric stents to complete the questionnaire at 2 weeks after stent insertion.
The second group included 53 healthy individuals who agreed to complete the Arabic version of the questionnaire. The reliability of the Arabic version was evaluated for internal consistency using Cronbach's α test. Domain structures were examined by interdomain (section) associations using Spearman's correlation coefficient (r). The discrimination validity was evaluated by comparing the scores of patients with those of healthy individuals, using the Mann-Whitney test. Results: Internal consistency was high for the sexual index and intermediate for urinary, pain and general health indices. There were good correlations of urinary symptoms with body pain (r = 0.596) and general health (r = 0.690). There was also a good correlation between body pain and general health (r = 0.681). For discrimination validity, there were significant changes in all domain scores when comparing patients with ureteric stents and healthy individuals (P &lt; 0.001). Conclusion: The Arabic version of the USSQ is a reliable and valid instrument that can be used to evaluate symptoms and health-related quality of life in Arabic patients with ureteric stents. abstract_id: PUBMED:34688588 Mirabegron for the Treatment of Ureteral Stent-related Symptoms: A Systematic Review and Meta-analysis. Context: Ureteral stent-related symptoms (SRSs) are very common and may potentially influence the quality of life and functional capacity of patients. It remains unclear whether mirabegron has a place in the treatment of SRSs. Objective: To summarize the evidence of mirabegron for the treatment for SRSs in adult patients. Evidence Acquisition: A systematic review of literature was performed using the PubMed, Embase, and Google Scholar databases. Studies published up to June 2021 that met the search terms ("mirabegron" OR "B3-agonist") AND ("stent-related symptoms" OR "stent-related discomfort" OR "stent") were considered. References from relevant sources were examined to identify additional sources for this review. Relevant studies were selected according to Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines. The revised Cochrane tool "RoB 2" was used to assess the quality of included randomized clinical trials. Evidence Synthesis: Eight studies were selected for final quantitative and qualitative synthesis. The Ureteral Stent Symptom Questionnaire (USSQ) body pain score was significantly improved by mirabegron in five studies, although a pooled data analysis did not reveal any significant changes. The USSQ urinary symptom score was significantly improved by mirabegron in four independent studies as well as in the corresponding meta-analysis, while no changes were found in two studies. International Prostate Symptom Score was improved significantly by mirabegron in three independent studies as well as in the corresponding meta-analysis. The USSQ general health and the quality of life score were improved significantly by mirabegron on meta-analysis. Conclusions: Mirabegron may have a beneficial effect on pain and urinary symptoms due to ureteral stents, although the evidence is based on low-quality studies. Patient Summary: In this report, we looked at the evidence of mirabegron for the treatment of ureteral stent-related symptoms. We found that mirabegron can potentially alleviate pain and bothersome urinary symptoms due to ureteral stents. abstract_id: PUBMED:35969727 Considerations in ureteral stent selection in order to minimize symptoms. 
Introduction: Ureteral stent-related symptoms are common after stent placement. Various characteristics of stent design have been previously investigated to mitigate this issue. Our review summarizes available literature on stent design parameters (diameter, material, position, length, distal loop modifications) and their effect on stent-related symptoms, including pain. Materials And Methods: We identified articles from PubMed, Medline, EMBASE, Web of Science, and Grey Literature using a search strategy employing MESH search headings (i.e., ureteral stent diameter, length, composition, material, durometer, and stent-related pain). Results: Out of 2,970 identified studies, 26 met eligibility criteria. Most diameter studies found patients with > 6 Fr stents reported significantly increased stent-related symptoms. A few did report more migration with thinner stents. Almost half of durometer studies found composition made no difference in symptoms. Distal loop modification studies found minimizing intravesical material decreased stent-related pain. All studies on positioning found patients reported more severe urinary, pain and quality of life symptoms when stents crossed the bladder midline. No difference in stent-related symptoms was seen between multi-length and standard stent patients. Conclusion: Adverse symptoms occur commonly after ureteral stent placement. No definitive recommendations on the model stent can be provided due to the heterogeneity of studies. Though the number of robust studies is limited, data suggest stents crossing midline, larger diameters, and those without distal material-reduction modifications may worsen stent-related symptoms. Future studies are needed to better understand the ideal stent design. abstract_id: PUBMED:33391907 Effect of Ureteral Stent Length and Position of Stent Coil in Bladder on Stent-Related Symptoms and Quality of Life of Patients. Introduction: Various standardized questionnaires can evaluate ureteral stent-related symptoms. The present study utilized a validated instrument, Ureteral Stent Symptoms Questionnaire (USSQ), to know the impact of the length of double J stent (DJS) in comparison to ureter length on patients and their quality of life. Materials And Method: This study is a prospective clinical study conducted in the Department of Urology at a tertiary care center in South India. Patients who underwent DJS after endoscopic ureteral lithotripsy were included in the study. On a computerized tomography scan, the ureteral to stent length ratio (USR) was calculated. USSQ scores at the time of DJS removal and two weeks thereafter were recorded. The distal coil of the stent in the bladder was recorded as grade 1 - not crossing the midline and grade 2 - crossing the midline. Different symptom scores were compared between both grades of bladder coil and for USR of all the patients. Result: A total of 157 patients were included in the study. Overall, 46 (29.3%) patients had grade-1 and 111 (70.7%) had grade-2 bladder coil. In total, 93 (59.23%) patients reported pain, while 64 (40.77%) patients had no pain. Grade-2 coil patients had more pain than grade 1 (P=0.01). There was a weak inverse relationship between the USR and urinary symptom (P=0.004), pain symptom (P=0.04), and quality of work (P=0.005). Conclusion: Stent length or position of the intravesical stent coil does not appear to affect the quality of life except for the pain.
Hence, choosing stent length according to ureteral length seems to have a minimal role in decreasing stent-related morbidity. Answer: Yes, the size of a ureteral stent does impact urinary symptoms and quality of life. Studies have shown that larger-diameter ureteral stents are associated with worse urinary symptoms. Specifically, a study found that patients with 6-Fr stents had worse total International Prostate Symptom Scores (IPSS), as well as subscores for intermittency, urgency, voiding symptoms, and storage symptoms, compared to those with 4.7-Fr stents (PUBMED:30957387). Additionally, the same study reported that larger stent diameters were significantly associated with worse total IPSS and Overactive Bladder Symptom Score (OABSS). Another study that compared stent diameters found no differences in quality of life and urinary symptoms when using stents of different sizes. However, there was a tendency for stents with smaller diameters to dislodge more often (PUBMED:16039775). This suggests that while smaller stents may be associated with fewer symptoms, they may also have a higher risk of migration. Furthermore, a systematic review and meta-analysis on the use of mirabegron, a medication for overactive bladder, for the treatment of ureteral stent-related symptoms found that mirabegron may have a beneficial effect on pain and urinary symptoms due to ureteral stents (PUBMED:34688588). This indicates that stent-related symptoms can significantly affect quality of life and that interventions to alleviate these symptoms are being explored. In summary, the size of a ureteral stent can influence urinary symptoms and quality of life, with larger-diameter stents potentially causing more severe symptoms. However, the choice of stent size must also consider the risk of stent migration and the specific clinical scenario of each patient.
Instruction: Are there risk factors for hepatitis B infection in inner-city adolescents that justify prevaccination screening? Abstracts: abstract_id: PUBMED:9589340 Are there risk factors for hepatitis B infection in inner-city adolescents that justify prevaccination screening? Purpose: This study was undertaken to determine if homelessness could serve as a marker for previous hepatitis B infection (HBI), and thus justify prevaccination screening. Methods: One hundred sexually active 13-21-year-olds (mean = 17 years), 74% female, attending an inner-city hospital-based adolescent clinic (HOSP), and 48 sexually active 13-21-year-olds (mean = 19 years), 40% female, attending a clinic based at an urban drop-in center (UDC) for street youth were consecutively enrolled, screened for HBI serum markers and administered a structured interview about sexual practices, sexual abuse, prior sexually transmitted diseases (STDs), and injection drug use. Results: For the HOSP group, 7% were homeless and 4% were HBI positive. In the UDC group, 96% were homeless and 23% were HBI positive. Homelessness was significantly associated with HBI (p < 0.001), and this was corroborated by logistic regression analysis (p < 0.01). Other factors significantly associated with HBI in adolescents included a history of anal sex (p ≤ 0.002), anal-receptive sex (p ≤ 0.01), genital Chlamydia (p ≤ 0.03), prostitution (p ≤ 0.03), and sexual abuse (p ≤ 0.002). For both populations, gender, sexual orientation, intravenous drug use, and genital sex were not related to HBI. Conclusion: These data indicate that homelessness and associated high-risk sexual practices may be indications for prevaccination screening for HBI in adolescents. abstract_id: PUBMED:10821603 Risk factors for pelvic inflammatory disease in inner-city adolescents. Objective: To determine risk factors associated with pelvic inflammatory disease (PID) among inner-city adolescents. Study Design: A case-control study was performed from 1994 to 1997 in an inner-city hospital. Methods: Seventy-one adolescent girls diagnosed with PID and 52 sexually active adolescent girls without PID participated in a confidential face-to-face interview using a questionnaire about risk behaviors. Established criteria were used for the diagnosis of PID. Data were analyzed using t tests, chi-square tests, and stepwise logistic regression. Results: Persons with PID were significantly more likely to show younger age at first intercourse, older sex partners, involvement with a child protection agency, prior suicide attempt(s), consumption of alcohol before last sex, and a current Chlamydia trachomatis infection. There were no significant differences between the two groups regarding number of lifetime sex partners, condom use, rape, syphilis, prior PID, hepatitis B, hepatitis C, or HIV infection. Conclusions: Not previously noted in the literature are the association of PID with older sex partners, prior involvement in a child protection agency, and a prior suicide attempt. Confirming prior studies are the association of PID with earlier age at first sex, alcohol use, and C trachomatis infection. abstract_id: PUBMED:10907909 Heterosexual transmission of hepatitis C, hepatitis B, and HIV-1 in a sample of inner city women. Background: To clarify the role of heterosexual transmission of hepatitis C virus (HCV) and to identify associated risk factors.
Goal: To compare risk factors with infection among women with HCV, HIV-1, and hepatitis B virus (HBV). Study Design: A cross-sectional study of the prevalence of HCV, HIV-1, and HBV in a sample of 599 sexually active, nontransfused, inner-city women with no evidence of intravenous drug use. Results: The prevalence of HCV was 1.6%, compared with 2.0% for HIV-1 and 18.8% for HBV; 75% of women infected with HCV were also infected with HIV-1 or HBV (P < 0.001). Women engaging in very high-risk sexual behavior were 14.2 times more likely to have HCV than other women (95% CI, 1.8-642.5). Conclusions: The epidemic of HCV may be facilitated by high-risk sexual behavior. The relatively high prevalence of HCV suggests the need for more widespread screening among inner-city females. abstract_id: PUBMED:3100594 Cost effectiveness of prevaccination screening for hepatitis B antibody. The cost effectiveness of screening for hepatitis B antibodies prior to instituting a vaccination program was investigated. Prevaccination screening tests for anti-HBs were performed on sera from 295 faculty and staff. Anti-HBs positive titers were found for 58 persons, and 43 (14.6 percent) of these had titers high enough to confer immunity. The cost of screening 295 persons was $1,666.64, or $5.65 per participant. By not vaccinating 43 persons, $4,498.23 was saved. A prevaccination screening would be most cost effective if only faculty with patient contact were screened. abstract_id: PUBMED:32335864 Identifying Factors Associated with Cancer Screening in Immigrant Populations Living in New York City. New York City rates for cancer screening with colonoscopy, Papanicolaou smear and mammography are higher than the rest of the nation yet immigrant populations still have barriers accessing healthcare. With 38% of the city identifying as foreign born, there is a growing need to understand immigrant health and cancer screening behaviors to better assist them in accessing care. Through the Hepatitis Outreach Network (HONE), almost 1300 consenting participants completed a questionnaire on their demographics, hepatitis risk factors, and cancer screening behaviors as well as accessed Hepatitis B Virus screening from 2013 to 2015. Using the information gathered from the completed surveys and the data analysis in 2016, age and English language proficiency had significant association to accessing cancer screening using the three noted methods. Overall, cancer screening rates were lower for the African born (54%), Asian born (23.9%) and US born (22%) participants than those of the rest of New York. English language proficiency appeared to be a barrier for some screening methods such as colorectal cancer screening with colonoscopy, and cervical cancer with Papanicolaou smear but not mammography. Immigrant health is a fundamental part of the public health field and so further investigation into disparities associated with other cancer screening methods is a necessity. An increase in culturally sensitive, language and age-specific health education programs may also improve cancer screening rates for immigrant populations in the city. abstract_id: PUBMED:24933144 Factors related to hepatitis B screening among Africans in New York City. Objective: To understand factors that US Africans identify as barriers and facilitators for accessing hepatitis B (HBV) screening. Methods: In-depth interviews were conducted and guided by the PEN-3 model to elicit culturally driven information in minority communities.
Results: Interviews were conducted with 22 US Africans. Salient themes that emerged were HBV knowledge, complexity of the US medical system, unaccustomed to preventive care, language and health literacy, availability and accessibility of screening, fear of disclosure, reliance on faith community, stigma of HBV, primacy towards a higher power on illnesses, and social systems influences. Conclusions: Findings were consistent with other at-risk populations; however, emphasis on privacy and fear of disclosure are distinct to Africans. This reinforces the need for a culturally targeted intervention for this at-risk population. abstract_id: PUBMED:11847302 Cost-effectiveness of preimmunization hepatitis B screening in high-risk adolescents. Objective: The goals of this study were to estimate seroprevalence of prior hepatitis B infection among high-risk adolescents and to determine the cost-effectiveness of prevaccination immunity screening. Methods: The authors computed a "break-even" seroprevalence level calculated from current vaccine and administration costs. They then conducted a seroprevalence study of hepatitis B core antibody using sera previously submitted for syphilis serology from four-hundred adolescent and adult clients of sexually transmitted disease clinics. Finally, the authors compared age group-specific seroprevalence rates to the computed break-even seroprevalence. Results: Levels of prior hepatitis B infection for all age groups were lower than the break-even seroprevalence standard from which cost-effectiveness was calculated. Conclusions: From the findings of this study, the authors concluded that routine preimmunization screening for prior hepatitis B infection would not be cost-effective for this population. abstract_id: PUBMED:15918143 Hepatocellular carcinoma: epidemiology, risk factors, and screening. In this article, the epidemiology of hepatocellular carcinoma (HCC), risk factors for the development of HCC, and how these factors affect the decision about whether an individual should or should not be entered into a screening program are considered. The factors determining the risk for HCC include age, male gender, and the nature of the underlying liver disease. In particular, cirrhosis is associated with a significant risk for HCC. However, in hepatitis B HCC also occurs in noncirrhotic liver. Decision analysis can be used to identify patients at greatest risk for HCC and who might be candidates for screening. Screening itself should be developed in a programmatic manner to ensure that appropriate target populations are identified, that appropriate screening tests are chosen, and that appropriate recall and enhanced follow-up are instituted for patients who have positive screening test results. Screening should be by ultrasonography at 4- to 12-month intervals. Patients with abnormal screening tests require additional investigation using computed tomography scanning, magnetic resonance imaging, or liver biopsy. Negative results do not exclude the possibility of cancer and further follow-up is necessary. abstract_id: PUBMED:3341502 Hepatitis B screening in a New York City obstetrics service. A cross-sectional chart review study was performed of hepatitis B virus (HBV) surface antigen screening of 532 women admitted to a New York City hospital obstetrics service from 1984 to 1985.
Comparison of serologic results to risk factors for hepatitis B revealed that women born outside the United States and those with a positive test for or history of a sexually transmitted disease were more likely to be infected with HBV. Obstetric clinics with large immigrant populations should consider screening all patients for HBV. abstract_id: PUBMED:8075733 A cost-effectiveness analysis of prevaccination testing for hepatitis B in adolescents and preadolescents. Objective: To study the clinical effectiveness and cost-effectiveness of prevaccination testing for hepatitis B virus in adolescents and preadolescents. Design: Decision analysis model comparing (1) prevaccination testing for hepatitis B surface antibodies, (2) no testing, and (3) testing at the same time as the first vaccine dose. Sensitivity analyses of the first two strategies were performed varying the seroprevalence of hepatitis B surface antibodies, compliance with follow-up, and costs of testing. Setting: Charges for testing and vaccination were obtained from Children's Hospital and Medical Center in Seattle, Wash. Vaccination compliance, hepatitis B surface antibodies seroprevalence, and vaccine response rates are from published literature. Patients: A hypothetical cohort of 100,000 11-year-old children presenting for well-child care. Main Outcome Measures: Rate of complete vaccination, cost of testing and vaccination for each cohort, and cost per patient protected from hepatitis B virus. Results: No testing was the most cost-effective strategy. Prevaccination testing elevated costs by $2.9 million for every 100,000 patients and lowered the rate of complete vaccination by 22% compared with vaccination without testing. For prevaccination testing to be cost-effective, the seroprevalence of hepatitis B surface antibodies needed to be greater than 40%. As the seroprevalence of hepatitis B surface antibodies decreased, the ratio of testing costs to vaccination costs had to decrease for prevaccination testing to remain cost-effective. Even with perfect compliance, prevaccination testing required high seroprevalence rates to be cost-effective. Conclusions: For most populations of preadolescents and adolescents, prevaccination testing for hepatitis B virus is not effective in terms of cost or rate of complete vaccination. Answer: Yes, there are risk factors for hepatitis B infection (HBI) in inner-city adolescents that justify prevaccination screening. A study found that homelessness was significantly associated with HBI, with 23% of homeless adolescents in an urban drop-in center testing positive for HBI compared to only 4% in a hospital-based adolescent clinic group. Other factors significantly associated with HBI included a history of anal sex, anal-receptive sex, genital Chlamydia, prostitution, and sexual abuse. These findings suggest that homelessness and associated high-risk sexual practices may be indications for prevaccination screening for HBI in adolescents (PUBMED:9589340). Additionally, the cost-effectiveness of prevaccination screening for hepatitis B has been studied. One study found that prevaccination screening tests for anti-HBs were cost-effective, saving money by not vaccinating those who already had immunity (PUBMED:3100594). However, another study concluded that routine preimmunization screening for prior hepatitis B infection would not be cost-effective for high-risk adolescents, as the levels of prior infection were lower than the break-even seroprevalence standard (PUBMED:11847302). 
Furthermore, a cost-effectiveness analysis indicated that for most populations of preadolescents and adolescents, prevaccination testing for hepatitis B virus is not effective in terms of cost or rate of complete vaccination, unless the seroprevalence of hepatitis B surface antibodies is greater than 40% (PUBMED:8075733). In summary, while certain high-risk behaviors and conditions such as homelessness and sexual practices are associated with increased risk of HBI and could justify prevaccination screening, the overall cost-effectiveness of such screening may vary depending on the specific population and the prevalence of hepatitis B antibodies.
Instruction: Chronic Disease Management Programmes: an adequate response to patients' needs? Abstracts: abstract_id: PUBMED:22712877 Chronic Disease Management Programmes: an adequate response to patients' needs? Background: Inspired by American examples, several European countries are now developing disease management programmes (DMPs) to improve the quality of care for patients with chronic diseases. Recently, questions have been raised whether the disease management approach is appropriate to respond to patient-defined needs. Objective: In this article we consider the responsiveness of current European DMPs to patients' needs defined in terms of multimorbidity, functional and participation problems, and self-management. Method: Information about existing DMPs was derived from a survey among country-experts. In addition, we made use of international scientific literature. Results: Most European DMPs do not have a solid answer yet to the problem of multimorbidity. Methods of linking DMPs, building extra modules to deal with the most prevalent comorbidities and integration of case management principles are introduced. Rehabilitation, psychosocial and reintegration support are not included in all DMPs, and the involvement of the social environment of the patient is uncommon. Interventions tailored to the needs of specific social or cultural patient groups are mostly not available. Few DMPs provide access to individualized patient information to strengthen self-management, including active engagement in decision making. Conclusion: To further improve the responsiveness of DMPs to patients' needs, we suggest to monitor 'patient relevant outcomes' that might be based on the ICF-model. To address the needs of patients with multimorbidity, we propose a generic comprehensive model, embedded in primary care. A goal-oriented approach provides the opportunity to prioritize goals that really matter to patients. abstract_id: PUBMED:24029582 Self-management support needs of patients with chronic illness: do needs for support differ according to the course of illness? Objective: To determine whether chronically ill patients' needs for self-management support depend on their course of illness. Methods: Cross-sectional and longitudinal linear regression analyses were conducted using data from 1300 patients with chronic disease(s) who participated in a nationwide Dutch panel-study. Self-management support needs were assessed by the Patient Assessment of Self-management Tasks questionnaire (PAST). Course of illness was operationalized as: illness duration, patients' perception of the course of illness and changes in self-rated general health (RAND-36). Results: Self-management support needs are not related to illness duration. Patients who perceive their illness as episodic and/or progressively deteriorating have greater self-management support needs than patients who perceive their illness as stable. Deterioration of self-rated health is related to increased support needs. The effect of the course of illness on support needs depends on the type of self-management activities. Conclusion: How chronically ill patients perceive the course of illness and actual changes in self-rated health are predictive for their need for support for self-management activities. Illness duration is not. Practice Implications: Helping patients to self-manage should not be confined to the first years after diagnosis. Healthcare providers should be alert to patients' own perceptions of their course of illness and health status. 
abstract_id: PUBMED:27391471 Expectations and needs of patients with a chronic disease toward self-management and eHealth for self-management purposes. Background: Self-management is considered as an essential component of chronic care by primary care professionals. eHealth is expected to play an important role in supporting patients in their self-management. For effective implementation of eHealth it is important to investigate patients' expectations and needs regarding self-management and eHealth. The objectives of this study are to investigate expectations and needs of people with a chronic condition regarding self-management and eHealth for self-management purposes, their willingness to use eHealth, and possible differences between patient groups regarding these topics. Methods: Five focus groups with people with diabetes (n = 14), COPD (n = 9), and a cardiovascular condition (n = 7) were conducted in this qualitative research. Separate focus groups were organized based on patients' chronic condition. The following themes were discussed: 1) the impact of the chronic disease on patients' daily life; 2) their opinions and needs regarding self-management; and 3) their expectations and needs regarding, and willingness to use, eHealth for self-management purposes. A conventional content analysis approach was used for coding. Results: Patient groups seem to differ in expectations and needs regarding self-management and eHealth for self-management purposes. People with diabetes reported most needs and benefits regarding self-management and were most willing to use eHealth, followed by the COPD group. People with a cardiovascular condition mentioned having fewer needs for self-management support, because their disease had little impact on their life. In all patient groups it was reported that the patient, not the care professional, should choose whether or not to use eHealth. Moreover, participants reported that eHealth should not replace, but complement personal care. Many participants reported expecting feelings of anxiety by doing measurement themselves and uncertainty about follow-up of deviant data of measurements. In addition, many participants worried about the implementation of eHealth being a consequence of budget cuts in care. Conclusion: This study suggests that aspects of eHealth, and the way in which it should be implemented, should be tailored to the patient. Patients' expected benefits of using eHealth to support self-management and their perceived controllability over their disease seem to play an important role in patients' willingness to use eHealth for self-management purposes. abstract_id: PUBMED:34091994 A longitudinal study of educational needs among patients with inflammatory arthritis. Introduction: Patient education is important in the follow-up and disease management for patients with chronic inflammatory arthritis. Patients' needs for education and information varies, and it is important that the education is tailored to the individual patient. Hence, the aim of this study is to investigate whether patients' educational needs change over time, and which demographic, disease-related or self-management characteristics that are associated with patients' educational needs. 
Methods: The Mann-Whitney U-test was used to study patients' longitudinal educational needs and whether their needs change over time, while multivariable linear regression analyses were used to investigate associations between patients' educational needs and demographic variables, disease-related and self-management characteristics. Results: There were no changes in patients' educational needs in the domains of managing pain, movement, feelings, arthritis process and treatment from health professionals during the study period of seven years. A small decrease in educational needs in the domains self-help measures (p-value 0.047) and support from others (p-value 0.010) was detected. The regression analyses showed that higher educational needs were associated with being female, lower educational level, shorter disease duration, and a lower level of patient activation. Conclusions: Patients with chronic inflammatory arthritis have continual needs for patient education throughout their disease trajectory. Nurses and health care professionals must therefore ask their patients what kind of education they need at every follow-up throughout the disease course. abstract_id: PUBMED:30121581 Self-management, self-management support needs and interventions in advanced cancer: a scoping review. Patients with advanced cancer can experience illness trajectories similar to other progressive chronic disease conditions where undertaking self-management (SM) and provision of self-management support (SMS) becomes important. The main objectives of this study were to map the literature of SM strategies and SMS needs of patients with advanced cancer and to describe SMS interventions tested in this patient population. A scoping review of all literature published between 2002 and 2016 was conducted. A total of 11 094 articles were generated for screening from MEDLINE, Embase, PsychINFO, CINAHL and Cochrane Library databases. A final 55 articles were extracted for inclusion in the review. Included studies identified a wide variety of SM behaviours used by patients with advanced cancer including controlling and coping with the physical components of the disease and facilitating emotional and psychosocial adjustments to a life-limiting illness. Studies also described a wide range of SMS needs, SMS interventions and their effectiveness in this patient population. Findings suggest that SMS interventions addressing SMS needs should be based on a sound understanding of the core skills required for effective SM and theoretical and conceptual frameworks. Future research should examine how a patient-oriented SMS approach can be incorporated into existing models of care delivery and the effects of SMS on quality of life and health system utilisation in this population. abstract_id: PUBMED:28156143 Self-Management Support Needs of Patients with Chronic Diseases in a South African Township: A Qualitative Study. Despite the need for chronic disease self-management strategies in developing countries, few studies have aimed to contextually adapt programs; yet culture has a direct impact on the way people view themselves and their environment. This study aimed to explore the knowledge, attitudes, and self-management needs and practices of patients with chronic diseases. Four patient focus groups (n = 32), 2 patient interviews, group observations, and key informant interviews (n = 12) were conducted. 
Five themes emerged: health-system and service-provision challenges, healthcare provider attitudes and behavior, adherence challenges related to medication and lifestyle changes, patients' personal and clinic experiences and self-management tool preferences. The findings provide a window of opportunity for the development of contextually adapted self-management programs for community health nursing in developing countries. abstract_id: PUBMED:26261390 Fears and Health Needs of Patients with Diabetes: A Qualitative Research in Rural Population. Introduction: Insulin-dependent patients are individuals with chronic disease who are well adapted to living and dealing with any health needs and fears arising. An important aspect in the process of adaptation to chronic illness is the provision of nursing care in the early stages of the disease, because this contributes to its acceptance and the early identification and management of potential complications. Purpose: To investigate the health needs and self-management problems faced by patients with diabetes daily, especially those who use insulin. Furthermore purpose of this study was to investigate the fears experienced by patients in the early stage of the disease, but also in its subsequent development and to study possible differences between sexes. Methodology: This is a qualitative study, using interpretative phenomenological approach. Fifteen (nine women and six men) insulin-dependent patients, recounted their personal fears and their needs, through semi-structured interviews, which took place in Central Greece. The method used for processing the results is the Mayering one. Results: The analysis of the narratives showed that patients have a variety of fears and needs associated with the diagnosis, treatment, expected consequences, prognosis and everyday life in the management of the disease. Most patients express the concept of need as desire. Care needs, psychological support and education to recognize and prevent hypoglycemia. Conclusions: Insulin-dependent patients express fears and needs in their daily lives. Nurses providing care aimed at enhancing the level of health, while putting self-care information and training them. Patients want the nurse next to them, so that information is continuous and permanent. abstract_id: PUBMED:32851148 Patients' Experiences of Nurse Case-Managed Osteoporosis Care: A Qualitative Study. Background: Osteoporosis is a chronic condition that is often left untreated. Nurse case-managers can double rates of appropriate treatment in those with new fractures. However, little is known about patients' experiences of a nurse case-managed approach to osteoporosis care. Objective: Our aim was to describe patients' experiences of nurse case-managed osteoporosis care. Methods: A qualitative, descriptive design was used. We recruited patients enrolled in a randomized controlled trial of a nurse case-management approach. Individual semi-structured interviews were conducted which were transcribed and analyzed using content analysis. Data were managed with ATLAS.ti version 7. Results: We interviewed 15 female case-managed patients. Most (60%) were 60-years or older, 27% had previous fracture, 80% had low bone mineral density tests, and 87% had good osteoporosis knowledge. Three major themes emerged from our analysis: acceptable information to inform decision-making; reasonable and accessible care provided; and appropriate information to meet patient needs. 
Conclusions: This study provides important insights about older female patients' experiences with nurse case-managed care for osteoporosis. Our findings suggest that this model of osteoporosis clinical care should be sustained and expanded in this setting, if proven effective. In addition, our findings point to the importance of applying patient-centered care across all dimensions of quality to better enhance the patients' experience of their health care. abstract_id: PUBMED:34988431 Unmet needs in inflammatory bowel disease. Despite the recent developments in the diagnosis and management of inflammatory bowel diseases (IBD), patients still suffer from disabling bowel symptoms and significant disease complications and many questions remain to improve their care. IBD is a chronic disease, whose management could be divided into the five different stages of chronic diseases, ranging from the pre-treatment evaluation phase to the induction therapy, maintenance therapy, monitor and re-establishment of control and the cessation of the disease. Reconciling these phases with the current unmet needs in IBD could help tailor priorities for research. In this review, some of the unanswered questions in the management of both Crohn's Disease and Ulcerative Colitis will be addressed, by following this paradigm of chronic diseases' management. abstract_id: PUBMED:37581816 Investigating the Needs of Patients Suffering from Chronic Diseases: A Cross-Sectional Study. Introduction: Chronic diseases represent a huge challenge for the health systems globally due to the rapidly increasing number of patients and their long-term need for healthcare. The purpose of this study was to investigate the needs of patients suffering from chronic diseases. Methodology: This is a cross-sectional study. The study population consisted of 840 adults with chronic diseases. The data collection was done with an improvised needs survey questionnaire, which included 56 questions. Statistical analyses were performed using IBM SPSS Statistics for Windows, v.25.0, statistical significance being considered at p < 0.05. Results: The main diseases of the patients were chronic renal failure (22.6%), multiple sclerosis (19%), cancer (19%), diabetes mellitus (7.1%), dementia (6%), and chronic obstructive pulmonary disease (6%). The majority of patients (82.1%) were sick for more than 24 months. Patients seek information from health professionals (4.07 ± 1.4), feel tired (4.05 ± 1.4), have to share their feelings with other family members (4.01 ± 1.4), feel anxious about the future (3.94 ± 1.3), and feel out of control (3.80 ± 1.5). Conclusions: Patients with chronic diseases suffer from numerous physical, mental, emotional, and cognitive problems. Paying attention to the unmet needs of patients could have beneficial effects on both patients and their caregivers. Answer: Chronic Disease Management Programmes (DMPs) have been developed in several European countries to improve the quality of care for patients with chronic diseases. However, the adequacy of these programmes in responding to patients' needs has been questioned, particularly in terms of multimorbidity, functional and participation problems, and self-management (PUBMED:22712877). Most European DMPs do not yet have a solid answer to the problem of multimorbidity, and while some are introducing methods to link DMPs or build extra modules for prevalent comorbidities, rehabilitation, psychosocial, and reintegration support are not included in all DMPs.
The involvement of the social environment of the patient is uncommon, and interventions tailored to specific social or cultural patient groups are mostly not available. Few DMPs provide access to individualized patient information to strengthen self-management, including active engagement in decision-making (PUBMED:22712877). Moreover, the needs for self-management support do not necessarily relate to the duration of the illness but rather to how patients perceive the course of their illness and actual changes in self-rated health. Patients who perceive their illness as episodic or progressively deteriorating have greater self-management support needs than those who perceive their illness as stable (PUBMED:24029582). Patients' expectations and needs regarding self-management and eHealth for self-management purposes vary among different chronic conditions. Patients with diabetes reported the most needs and benefits regarding self-management and were most willing to use eHealth, followed by the COPD group. In contrast, those with a cardiovascular condition reported fewer needs for self-management support (PUBMED:27391471). In the context of inflammatory arthritis, patients' educational needs remain constant over time, indicating a continual need for patient education throughout their disease trajectory (PUBMED:34091994). Similarly, patients with advanced cancer have a wide range of self-management behaviors and support needs, suggesting that interventions should be based on a sound understanding of the core skills required for effective self-management (PUBMED:30121581). In a South African township, patients with chronic diseases expressed challenges related to health-system and service provision, healthcare provider attitudes, medication adherence, and lifestyle changes, highlighting the need for contextually adapted self-management programs (PUBMED:28156143). Patients with diabetes, particularly those insulin-dependent, have expressed a variety of fears and needs related to the diagnosis, treatment, and management of their disease, indicating the importance of care that enhances health levels and provides continuous information and training in self-care (PUBMED:26261390). Patients' experiences with nurse case-managed osteoporosis care revealed the importance of providing acceptable information for decision-making, reasonable and accessible care, and appropriate information to meet patient needs (PUBMED:32851148). In the case of inflammatory bowel disease (IBD), despite advancements in diagnosis and management, patients still face significant challenges and unmet needs throughout the various stages of chronic disease management (PUBMED:34988431). A cross-sectional study investigating the needs of patients with chronic diseases found that these patients suffer from a range of physical, mental, emotional, and cognitive problems, emphasizing the importance of addressing their unmet needs for the benefit of both patients and caregivers (PUBMED:37581816). In conclusion, while Chronic Disease Management Programmes are a step towards improving care for chronic disease patients, they may not yet be fully adequate in responding to the diverse and evolving needs of these patients. The responsiveness of DMPs could be improved by monitoring patient-relevant outcomes, adopting a goal-oriented approach, and ensuring that interventions are tailored to the specific needs of different patient groups.
Additionally, the integration of self-management support and eHealth tailored to patient preferences and disease controllability could enhance the effectiveness of these programmes.
Instruction: Are the World Health Organisation case definitions for severe acute respiratory distress syndrome sufficient at initial assessment? Abstracts: abstract_id: PUBMED:16049612 Are the World Health Organisation case definitions for severe acute respiratory distress syndrome sufficient at initial assessment? Introduction: On March 13, 2003, Singapore doctors were alerted about an outbreak of atypical pneumonia that became known as severe acute respiratory syndrome (SARS). We now describe a series of patients that did not fit World Health Organisation (WHO) case definitions for SARS at initial assessment. Methods: The Ministry of Health, Singapore centralised SARS cases in the study hospital and its emergency department (ED) became the national screening centre. A screening questionnaire and a set of admission criteria based on WHO case definitions were applied. Patients discharged from ED were tracked via telephone surveillance and recalled if necessary. A retrospective review was done of patients who did not fit WHO definitions initially, were discharged and had re-attended. Results: During the outbreak, 11,461 people were screened for SARS. Among 10,075 (87.9 percent) discharged from the ED, there were 28 re-attendees diagnosed to have SARS later, giving an undertriage rate of 0.3 percent. Among the 28, six (21.4 percent) did not complain of fever and 22 (78.6 percent) had temperatures less than 38.0 degrees Celsius during their first ED visit. One patient was screened to have all three criteria but during consultation, the contact history was found to be unrelated to the known "hot spots". The initial mean temperature was 37.6 degrees Celsius (standard deviation [SD] 0.8), which increased significantly (p-value equals 0.04) to 38.0 degrees Celsius (SD 0.8) during their subsequent visit. Chest radiographs with infective changes increased significantly (p-value equals 0.009) from 16 percent to 52.4 percent over the two ED visits. Conclusion: The WHO case definitions were helpful in evaluating the majority of SARS patients initially. However under-triage at ED is inevitable, with a 0.3 percent under-triage in our study population. In this group and asymptomatic individuals who came for screening, a tracking and recall system helped to ensure their timely return to the ED. abstract_id: PUBMED:36762166 Ameliorating effect of erythropoietin in a severe case of COVID-19: case report. The COVID-19 pandemic is arguably one of the greatest public health crises since the 1918 influenza pandemic. Although several vaccines have been approved and rolled out, effective antiviral treatment options are very limited. Here, we present a case of severe COVID-19 that failed to respond to the standard interventions and continued to deteriorate. On day 22 of his illness, after informed consent, the patient was administered 4000 IU of erythropoietin (EPO) subcutaneously, in the hope of improving his O2 saturation. Positive response was observed in the patient within 24 hours. This prompted us to continue EPO treatment for a total of 42 days until full recovery and discharge. Our findings warrant further studies to ascertain the use of EPO in severe cases of COVID-19. abstract_id: PUBMED:24720856 Typical or atypical pneumonia and severe acute respiratory symptoms in PICU. Background And Aims: Mycoplasma pneumoniae (MP) is a common childhood pathogen associated with atypical pneumonia (AP). It is often a mild disease and seldom results in paediatric intensive care (PICU) admission.
In 2003, World Health Organization (WHO) coined the word SARS (severe acute respiratory syndrome) in patients with severe acute respiratory symptoms (sars) for an outbreak of AP in Hong Kong due to a novel coronavirus. In 2012, another outbreak of coronavirus AP occurred in the Middle East. Confusing case definitions such as MERS (Middle East respiratory syndrome) and SARI (severe acute respiratory infections) were coined. This paper aims to present a case of MP with sars, ARDS, pneumonia and pleural effusion during the MERS epidemics, and review the incidence and mortality of severe AP with MP. Methods: We presented a case of MP with sars, acute respiratory distress syndrome (ARDS), pneumonia and pleural effusion during the MERS epidemics, and performed a literature review on the incidence and mortality of severe AP with MP requiring PICU care. Results: In early 2013, an 11-year-old girl presented with sars, ARDS (acute respiratory distress syndrome), right-sided pneumonia and pleural effusion. She was treated with multiple antibiotics. Streptococcus pneumoniae was not isolated in this girl with 'typical' pneumonia by symptomatology and chest radiography, but tracheal aspirate identified MP instead. The respiratory equations are computed with PaO2/FiO2 consistent with severe lung injury. Literature on the incidence and mortality of severe AP with MP requiring PICU care is reviewed. Six, 165 and 293 articles were found when PubMed (a service of the U.S. National Library of Medicine) was searched for the terms 'mycoplasma' and 'ICU', 'mycoplasma' and 'mortality', and 'mycoplasma' and 'severe'. Mortality and PICU admission associated with MP are generally low and rarely reported. Experimental and clinical studies have suggested that the pathogenesis of lung injuries in MP infection is associated with a cell-mediated immune reaction, and high responsiveness to corticosteroid therapy has been reported especially for severe disease. Management of severe mycoplasma infection in the PICU includes general cardiopulmonary support and specific antimicrobial treatment. Macrolide resistance genotypes have been detected. Conclusion: We urge health organizations to refrain from the temptation of coining unnecessary new terminology to describe essentially the same conditions each and every time when outbreaks of AP occur. abstract_id: PUBMED:34316169 Severe Acute Respiratory Infection Surveillance during the Initial Phase of the COVID-19 Outbreak in North India: A Comparison of COVID-19 to Other SARI Causes. Introduction: World Health Organization proposes severe acute respiratory infection (SARI) case definition for coronavirus disease 2019 (COVID-19) surveillance; however, early differentiation between SARI etiologies remains challenging. We aimed to investigate the spectrum and outcome of SARI and compare COVID-19 to non-COVID-19 causes. Patients And Methods: A prospective cohort study was conducted between March 15, 2020, and August 15, 2020, at an adult medical emergency in North India. SARI was diagnosed using a "modified" case definition: febrile respiratory symptoms or radiographic evidence of pneumonia or acute respiratory distress syndrome of ≤14 days duration, along with a need for hospitalization and in the absence of an alternative etiology that fully explains the illness. COVID-19 was diagnosed with reverse transcription-polymerase chain reaction testing. Results: In total, 95/212 (44.8%) cases had COVID-19.
Community-acquired pneumonia (n = 57), exacerbation of chronic lung disease (n = 11), heart failure (n = 11), tropical febrile illnesses (n = 10), and influenza A (n = 5) were common non-COVID-19 causes. No between-group differences were apparent in age ≥60 years, comorbidities, oxygenation, leukocytosis, lymphopenia, acute physiology and chronic health evaluation (APACHE)-II score, CURB-65 score, and ventilator requirement at 24-hour. Bilateral lung distribution and middle-lower zones involvement in radiography predicted COVID-19. The median hospital stay was longer with COVID-19 (12 versus 5 days, p = 0.000); however, mortality was similar (31.6% versus 28.2%, p = 0.593). Independent mortality predictors were higher mean APACHE II in COVID-19 and early ventilator requirement in non-COVID-19 cases. Conclusions: COVID-19 has similar severity and mortality as non-COVID-19 SARI but requires an extended hospital stay. Including radiography in the SARI definition might improve COVID-19 surveillance. How To Cite This Article: Pannu AK, Kumar M, Singh P, Shaji A, Ghosh A, Behera A, et al. Severe Acute Respiratory Infection Surveillance during the Initial Phase of the COVID-19 Outbreak in North India: A Comparison of COVID-19 to Other SARI Causes. Indian J Crit Care Med 2021;25(7):761-767. abstract_id: PUBMED:32963432 Initial Experience of Critically Ill Patients with COVID-19 in Western India: A Case Series. Background: The novel coronavirus, named SARS-CoV-2, was first described in December 2019 as a cluster of pneumonia cases in Wuhan, China. It has since been declared a pandemic, with substantial mortality. Materials And Methods: In our case series, we describe the clinical presentation, characteristics, and outcomes of our initial experience of managing 24 critically ill COVID-19 patients at a designated COVID-19 ICU in Western India. Results: Median age of the patients was 54 years, and 58% were males. All patients presented with moderate to severe acute respiratory distress syndrome (ARDS); however, only 37.5% failed trials of awake proning and required mechanical ventilation. Patients who received mechanical ventilation typically matched the H-phenotype of COVID-19 pneumonia, and 55.5% of these patients were successfully extubated. Conclusion: The most common reason for ICU admission in our series of 24 patients with severe COVID-19 was hypoxemic respiratory failure, which responded well to conservative measures such as awake proning and oxygen supplementation. Mortality in our case series was 16.7%. How To Cite This Article: Shukla U, Chavali S, Mukta P, Mapari A, Vyas A. Initial Experience of Critically Ill Patients with COVID-19 in Western India: A Case Series. Indian J Crit Care Med 2020;24(7):509-513. abstract_id: PUBMED:15800722 Surveillance of severe acute respiratory syndrome (SARS) in the post-outbreak period. Introduction: This retrospective one-month survey evaluated the practicality of post-severe acute respiratory syndrome (SARS) surveillance recommendations in previously SARS-affected countries, namely Singapore. These included staff medical sick leave for febrile illness, inpatient fevers, inpatient pneumonia, atypical pneumonia, febrile illnesses with significant travel history and sudden unexplained deaths from pneumonia/ adult respiratory distress syndrome (ARDS). 
Methods: Surveillance data on medical sick leave of staff, all inpatient fevers, all febrile (temperature greater than or equal to 38 degrees Celsius) inpatient pneumonia, including atypical pneumonia, and deaths from pneumonia were collected from sick leave reports, ward reports, isolation room rounds and mortuary reports from 1 to 28 September 2003. Results: Baseline results show 167 (1.4/1000 staff-days) observed in staff sick leave for febrile illnesses, and 1798 (71.3/1000 bed-days) observed for inpatient fever. There were 40, 31 and 12 instances, respectively, of staff having temperatures of high fever (greater than or equal to 38 degrees Celsius), prolonged sick leave (3 days or more), and repeated sick leave (within 7 days) for febrile illnesses. An average of 4.6 wards a day potentially fulfilled the World Health Organisation SARS alert criteria. Of 27 cases with fever, pneumonia and a total white count of less than 10,000 cells per cubic mm as per Ministry of Health, Singapore criteria for the diagnosis of atypical pneumonia, only five were identified by clinicians. Conclusion: Surveillance is time-consuming and current recommendations are not specific enough to be used practically. Surveillance indicators for inpatients must overcome a high degree of background noise. abstract_id: PUBMED:32251794 Extracorporeal membrane oxygenation (ECMO): does it have a role in the treatment of severe COVID-19? The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has emerged since December 2019 in Wuhan city, and has quickly spread throughout China and other countries. To date, no specific treatment has been proven to be effective for SARS-CoV-2 infection. According to the World Health Organization (WHO), management of coronavirus disease 19 (COVID-19) has mainly focused on infection prevention, case detection and monitoring, and supportive care. Given the previous experience, extracorporeal membrane oxygenation (ECMO) has been proven to be an effective therapy in the treatment of respiratory failure or acute respiratory distress syndrome (ARDS). On the basis of a similar principle, ECMO may also be an effective therapy in the treatment of severe COVID-19. In this study, we described and discussed the clinical outcomes of ECMO for ARDS patients, ECMO use for severe COVID-19 in China, the indications of ECMO use, and some important issues associated with ECMO.
Acute respiratory distress syndrome due to severe acute respiratory syndrome coronavirus 2 with respiratory insufficiency is rare. Therefore, information about the best intensive care strategy for neonates requiring mechanical ventilation is lacking. We report a neonatal case of severe acute respiratory distress syndrome, probably due to vertical transmission of severe acute respiratory syndrome coronavirus 2, complicated by Staphylococcus aureus sepsis. We aim to inform pediatric providers on the clinical course and acute management considerations in coronavirus disease-related neonatal acute respiratory distress syndrome. Case Presentation: A late preterm (gestational age 36 0/7 weeks) Caucasian girl was born from a severe acute respiratory syndrome coronavirus 2-positive mother and tested positive for severe acute respiratory syndrome coronavirus 2 at 19 hours after birth. She developed acute respiratory distress syndrome requiring intensive care admission and mechanical ventilation. The clinical course was complicated by S. aureus pneumonia and bacteremia. Multimodal management included well-established interventions for respiratory distress syndrome such as surfactant therapy, high-frequency oscillatory ventilation, and inhaled nitric oxide, combined with therapies extrapolated from adult care for severe acute respiratory syndrome coronavirus 2 patients such as dexamethasone, coronavirus disease 2019-specific immunoglobulins, and prophylactic low-molecular-weight heparin. The neonate was successfully weaned from the ventilator and improved clinically. Conclusion: This case shows a rare but serious neonatal severe acute respiratory syndrome coronavirus 2 infection, leading to severe acute respiratory distress syndrome. Because of limited therapy guidelines for neonates, we suggest multimodal management with awareness of the possibility of S. aureus coinfection, to treat this age group successfully. abstract_id: PUBMED:16243192 Severe imported malaria: clinical presentation at the time of hospital admission and outcome in 42 cases diagnosed from 1996 to 2002. The objectives of this retrospective study were to describe initial clinical profiles and subsequent outcome of adult patients in France who were diagnosed with severe imported malaria, as defined by the World Health Organization (WHO). Forty-two patients diagnosed from 1996 to 2002 were included (median age: 30 years, men: 78%, non-immune persons: 74%, return from Africa: 100%, inappropriate antimalarial chemoprophylaxis: 95%). At the time of hospital admission, jaundice (62%), hyperparasitemia (56%), and prostration (52%) were the most frequent findings, followed by acute renal failure (31%). Other findings, as described by the WHO criteria, were less common. Twenty-three patients presented only with jaundice, hyperparasitemia, or prostration in isolation, or in combination. Of these 23, five non-immune persons subsequently developed coma, shock, acute respiratory distress syndrome or acute renal failure; this led to death in 2 of these cases. This suggests that non-immune persons with imported malaria who present with jaundice, hyperparasitemia, or prostration should be admitted to the intensive care unit for close monitoring. Answer: The World Health Organisation (WHO) case definitions for severe acute respiratory distress syndrome (SARS) have been helpful in evaluating the majority of SARS patients initially. However, evidence suggests that these definitions may not be entirely sufficient at initial assessment.
A study conducted during the SARS outbreak in Singapore found that among patients discharged from the emergency department (ED), there was an undertriage rate of 0.3 percent, with some patients not fitting the WHO definitions initially and being diagnosed with SARS upon re-attendance. Specifically, 21.4 percent of these patients did not complain of fever, and 78.6 percent had temperatures less than 38.0 degrees Celsius during their first ED visit. This indicates that the WHO case definitions may not capture all cases at initial assessment, and a tracking and recall system was found to be helpful in ensuring timely return to the ED for these individuals (PUBMED:16049612). Additionally, the WHO SARS surveillance recommendations in the post-outbreak period were found to be not specific enough to be used practically, as surveillance is time-consuming and must overcome a high degree of background noise. The study suggested that surveillance indicators for inpatients need to be more refined (PUBMED:15800722). Moreover, the WHO proposes a SARI (severe acute respiratory infection) case definition for COVID-19 surveillance, but early differentiation between SARI etiologies remains challenging. A study in North India found that including radiography in the SARI definition might improve COVID-19 surveillance, as bilateral lung distribution and middle-lower zones involvement in radiography predicted COVID-19 (PUBMED:34316169). In conclusion, while the WHO case definitions for SARS and related severe respiratory conditions have been instrumental in guiding initial assessments and surveillance, they may not be entirely sufficient due to their inability to capture all cases, particularly those with atypical presentations or mild symptoms. Additional measures such as tracking systems, refinement of surveillance indicators, and inclusion of radiographic evidence may enhance the effectiveness of initial assessments and surveillance efforts.
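To make the screening-performance terms used in the answer above concrete, the following minimal sketch (not taken from the cited studies; all counts are hypothetical placeholders) shows how sensitivity, specificity and an undertriage proportion are computed from a simple 2x2 classification of patients against a case definition.

```python
# Illustrative only: hypothetical counts, not data from the cited studies.
def screening_metrics(tp, fn, tn, fp):
    """Performance of a case definition against the true disease status."""
    sensitivity = tp / (tp + fn)   # true cases correctly flagged by the definition
    specificity = tn / (tn + fp)   # non-cases correctly not flagged
    undertriage = fn / (fn + tn)   # true cases among patients not flagged (e.g., discharged)
    return sensitivity, specificity, undertriage

if __name__ == "__main__":
    sens, spec, under = screening_metrics(tp=95, fn=5, tn=1660, fp=240)
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, undertriage={under:.3f}")
```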
Instruction: Does axial view still play an important role in dealing with calcaneal fractures? Abstracts: abstract_id: PUBMED:25887075 Does axial view still play an important role in dealing with calcaneal fractures? Background: The study aimed to analyze the role of the axial view in different phases of treatment and demonstrate whether the axial view is still useful in evaluating calcaneal fractures. Methods: 156 patients with suspected unilateral calcaneal fractures were enrolled in the study; axial and lateral views of the affected foot and a single axial view of the unaffected foot were obtained. 16 were excluded due to unsatisfactory axial radiographs. The remaining 140 patients were eventually included in the study. Two separate assessments were conducted on two occasions with a three-week interval to diagnose fractures. Lateral views were assessed first, and lateral combined with axial views were assessed three weeks later. Each of the 140 sets was evaluated by one of 6 surgeons randomly. Sensitivity and specificity values were compared between the two assessments. A new value, angle Z, which directly reflects the degree of bulge of the calcaneal lateral wall on the axial view, was introduced into the study. The angle Z of the 140 unaffected feet was measured. Data from another group of 31 patients whose lateral hindfoot pain was confirmed to be caused by widening of the calcaneus were reviewed. Linear regression was employed to analyze the relationship between angle Z and the severity of lateral pain. Results: According to the two assessments, without the axial view, specificity was significantly lower in diagnosing calcaneal fractures (p = 0.024) and sensitivity was significantly lower in distinguishing intra-articular fractures (p < 0.001). The normal threshold of angle Z was estimated from 98.06° to 100.64° (p < 0.001). Linear regression showed that lateral hindfoot pain clearly worsened as the angle Z value increased (p < 0.001). Conclusions: The axial view is useful in diagnosing a patient with a suspected calcaneal fracture, especially for distinguishing intra-articular fractures and selecting patients for CT scanning. With the introduction of angle Z, the axial view performs excellently in intra-operative assessment as well as in post-operative follow-up. The axial view can still play an irreplaceable role in assessing and evaluating calcaneal fractures, and can be employed as an essential reference during surgical procedures. abstract_id: PUBMED:32660271 Anterior-Posterior (AP) Calcaneal Profile View: A Novel Radiographic Image to Assess Varus Malalignment. Background: Assessing and correcting malalignment is important when treating calcaneus fractures. The Harris axial view is commonly utilized to assess varus deformity but may be inherently inaccurate due to its tangential nature. The anterior-posterior (AP) calcaneal profile view is a novel radiographic view that is easily obtained with demonstrated increased accuracy for assessing calcaneal axial alignment. Methods: Five nonpaired ankle cadaveric specimens were used in this investigation. Oblique osteotomies were created in relation to the long axis, and varus deformities were produced by inserting solid radiolucent wedges into the osteotomies to create models of 10, 20, and 30 degrees of angulation of the calcaneal tuberosity. Specimens were imaged using both the Harris axial view and the AP calcaneal profile view.
Results: For cadavers with 10 degrees of actual varus angulation, the mean Harris axial view angle and the AP calcaneal profile view angle were 10.9 ± 4.8 (range, 5.5-16.0) degrees and 13.0 ± 5.5 (range, 7.3-20.9) degrees, respectively. For cadavers with 20 degrees of actual varus angulation, the mean Harris view angle and the AP calcaneal profile view angle were 11.5 ± 2 (range, 8.2-13.6) degrees and 18.1 ± 4.8 (range, 11.7-23.5) degrees, respectively (P = .005). On pairwise comparison with Bonferroni correction, there was a significant difference between the Harris axial view angle and both the AP calcaneal profile view angle (P = .012) and actual angulation (P = .011). For cadavers with 30 degrees of actual varus angulation, the mean Harris axial view angle and the AP calcaneal profile view angle were 18.3 ± 4.3 (range, 13.3-23.6) degrees and 28.3 ± 2.9 (range, 24.4-31.1) degrees, respectively (P < .001). On pairwise comparison with Bonferroni correction, there was a significant difference between the Harris axial view angle and both the AP calcaneal profile view angle (P = .001) and actual angulation (P < .001). There was no significant difference between the AP calcaneal profile view angle and actual angulation (P > .999). Conclusion: The AP calcaneal profile view is a novel radiographic view that is easily obtained with demonstrated increased accuracy for assessing calcaneal axial alignment. While both views demonstrated similar measurement error for lesser degrees of varus malalignment, the AP calcaneal profile view demonstrated more accurate measurement of increasing heel varus compared with the Harris view. Clinical Relevance: The AP calcaneal profile view could be used in addition to other radiographic views when treating displaced, intra-articular calcaneus fractures to help optimize correction of hindfoot alignment. abstract_id: PUBMED:10775685 The value of the axial view in assessing calcaneal fractures. We studied the value of the axial view of the calcaneum in diagnosing fractures. Fifty sets of calcaneal radiographs were studied by four senior trauma staff and four orthopaedic trainees on two occasions 2-3 weeks apart. On the first occasion only the lateral view was studied; on the second, both lateral and axial views were studied. The axial view did not improve the sensitivity or specificity of the lateral view alone. Senior staff were more accurate in assessing the radiographs. We suggest that the axial view should not be used routinely in assessing a patient with a possible calcaneal fracture. abstract_id: PUBMED:36175879 Axial and frontal X-ray fluoroscopy technique of the sustentaculum tali can improve the accuracy of sustentacular screw placement. Introduction: Calcaneal fractures, especially those involving the articular surface, should be anatomically reduced as much as possible. Fixing the fracture by placing a screw into the sustentaculum tali from the lateral side of the calcaneus is generally considered to be the key to successful surgery. However, due to the limited visibility during surgery, it is not easy to place screws into the sustentaculum tali accurately. The purpose of this study was to explore a new fluoroscopy method for the sustentaculum tali and verify the value of this method in improving screw placement accuracy. Methods: In this study, a total of 42 human foot and ankle specimens were dissected and measured. The shape and position of the sustentaculum tali were observed, and the influence of adjacent bones on imaging findings was analysed.
An axial and frontal X-ray fluoroscopy method to view the sustentaculum tali was formulated, and the appropriate projection angle was explored through anatomical and image measurements. Thirty specimens were randomly selected for screw placement, and the direction of the screw was dynamically adjusted under the new imaging method. The success rate of sustentacular screw placement was evaluated. Results: The anteversion angles of the sustentaculum tali were 30.81 ± 2.21° and 30.68 ± 2.86° by anatomical and imaging measurements, respectively. There was no statistically significant difference in the anteversion angle between the two measurement methods. Harris heel views should be obtained at 30° to identify the sustentaculum tali on axial X-ray images. Frontal X-ray imaging was performed perpendicular to this projection angle. Through frontal and axial X-ray imaging, the position and shape of the sustentaculum tali can be clearly observed, and these factors are seldom affected by adjacent bones. Under the new fluoroscopy method, the screws were placed from the anterior region of the lateral wall of the calcaneus to the sustentaculum tali. A total of 60 screws were placed in the 30 specimens; of these, 54 screws were in good position, 2 screws penetrated the cortical bone, and 4 screws did not enter the sustentaculum tali. The success rate of sustentacular screw placement was 90% (54/60). Conclusions: Axial and frontal X-ray images of the sustentaculum tali can clearly show the shape of the structure, which improves sustentacular screw placement accuracy. abstract_id: PUBMED:22949954 Revisit of Broden's view for intraarticular calcaneal fracture. Background: This study was performed to investigate the relationship between coronal computed tomography (CT) and Broden's view in terms of the location of the fracture line and the fracture pattern. Methods: Forty-five feet of 45 patients with intraarticular calcaneal fractures were evaluated. The mean age of the patients was 46.3 years (standard deviation, 18.1; range, 15 to 80 years), and there were 34 men and 11 women. The Broden's views were acquired using the ray sum projection, reviewed, and correlated with the coronal CT image to determine the location of the fracture on the posterior facet and the fracture pattern described by the Sanders classification. The quantified location of the fracture line was defined as the distance between the medial margin of the posterior facet and the fracture line divided by the whole length of the posterior facet, expressed as a percentage. Results: The fracture line on the Broden's view was positioned 22.3% (standard deviation, 29.6) more laterally compared with that on coronal CT (p < 0.01). Although all cases showed posterior facet involvement on the CT scan, the fracture line was positioned lateral to the posterior facet in 6 cases (13.3%) in the Broden's view. The coronal CT and Broden's view showed a low level of agreement in the fracture pattern according to the Sanders classification, with a kappa value of 0.23. Conclusions: Surgeons should be aware that the fracture line on the Broden's view is positioned laterally compared with coronal CT, and that a fracture line lateral to the posterior facet on the Broden's view might still be an intraarticular fracture line. There are some limitations when applying the Sanders classification with the Broden's view.
abstract_id: PUBMED:31147185 Tuber-to-Anterior Process Angle (TAPA): A cadaveric study and surgical technique for placing axial calcaneal screws. We describe the results of a cadaveric study and an accompanying surgical technique that simplifies posterior-to-anterior axial screw placement into the calcaneus, often utilized during fixation of displaced intra-articular calcaneus fractures or calcaneal osteotomies. By defining the Tuber-to-Anterior Process Angle (TAPA), this technique facilitates axial screw placement, thereby decreasing reliance on intraoperative fluoroscopy and reducing operative time. abstract_id: PUBMED:26015613 Measurement technique of calcaneal varus from axial view radiograph. Background: A medially displaced posterior calcaneal tubercle creates the varus deformity of an intraarticular calcaneal fracture. The fracture involves the posterior calcaneal facet and the calcaneal body, so we developed a measurement technique representing the angle between the posterior facet and the long axis of the calcaneus, using the lateral malleolus and the longitudinal bone trabeculae of the posterior calcaneal tubercle as references to obtain the calcaneal varus angle. Materials And Methods: 52 axial view calcaneal radiographs of 26 volunteers were studied. Angles between the posterior facet and the long axis of the calcaneus were measured using measurements 1 and 2. The angle of measurement 1, as the gold standard, was obtained from the long axis and posterior facet of the calcaneus, whereas measurement 2 was obtained from a line perpendicular to the apex of the curve of the lateral cortex of the lateral malleolus and a line parallel to the longitudinal bone trabeculae of the posterior calcaneal tubercle. No more than 3° of difference between the angles of the two measurements was accepted. The reliability of measurement 2 was statistically tested. Results: The angles of measurements 1 and 2 were 90.04° ± 4.00° and 90.58° ± 3.78°, respectively. The mean difference between the two measurements was 0.54° ± 2.31°, with a 95% confidence interval of 0.10°-1.88°. Statistical analysis of measurements 1 and 2 showed an ICC of more than 0.75 and a Pearson correlation coefficient of 0.826. Conclusion: The measurement 2 technique, using the lateral malleolus and the longitudinal bone trabeculae of the posterior calcaneal tubercle as references, has strong reliability for representing the angle between the long axis and posterior facet of the calcaneus and thus for obtaining the calcaneal varus angle. abstract_id: PUBMED:30957544 Identification of Postoperative Step-Offs and Gaps With Brodén's View Following Open Reduction and Internal Fixation of Calcaneal Fractures. Background: To date, there is no consensus regarding which postoperative imaging technique should be used after open reduction and internal fixation of an intra-articular calcaneal fracture. The aim of this study was to clarify whether Brodén's view is sufficient as a postoperative radiologic examination to assess step-offs and gaps of the posterior facet. Methods: Six observers estimated the size of step-offs and gaps on Brodén's view in 42 surgically treated intra-articular calcaneal fractures. These findings were compared to postoperative CT scans (gold standard). Inter- and intraobserver reliability were calculated and compared using intraclass correlation coefficients (ICCs). Results: An accuracy of approximately 75% for both step-offs and gaps was found among foot and ankle experts. Less experienced observers correctly identified step-offs and gaps in approximately 62% of cases on fluoroscopy and in 48% on radiographs.
Interobserver reliability for intraoperative fluoroscopy as well as postoperative radiographs was fair for step-offs, whereas interobserver reliability for gaps was excellent. Intraobserver reliability showed a low level of agreement for intraoperative fluoroscopy, in contrast to postoperative radiographs, with excellent agreement for step-offs and good agreement for gaps. Conclusion: Our results show that, especially for more experienced foot and ankle surgeons, in the majority of fractures Brodén's view accurately showed step-offs and gaps following open reduction and internal fixation. Interobserver reliability showed a fair level of agreement for step-offs and excellent agreement for gaps. Intraobserver reliability was sufficient only for radiographs, not for fluoroscopy. Level Of Evidence: Level IV, case series. abstract_id: PUBMED:28501960 A modified minimally invasive technique for intra-articular displaced calcaneal fractures fixed by transverse and axial screws. The management of displaced, intra-articular calcaneal fractures represents a surgical challenge to even an experienced orthopedic surgeon. Plate osteosynthesis using an extended lateral approach is complicated by soft tissue problems, while closed reduction and percutaneous pinning cannot address all the intra-articular fragments sufficiently. The objective of our study was to evaluate restoration of the subtalar joint and long-term functional outcomes in intra-articular displaced calcaneal fractures treated with transverse subchondral screws through a small incision on the lateral aspect of the calcaneus and percutaneously placed axial screws through the calcaneal tuberosity. Forty-five intra-articular calcaneal fractures were managed with this minimally invasive technique. Calcaneal height, width, length, Bohler's angle, and Gissane angle were measured preoperatively and at the last follow-up visit. Functional outcomes were assessed on the basis of the American Orthopedic Foot and Ankle Society (AOFAS) ankle/hindfoot score. Preoperative calcaneal length, height, width, Bohler's angle, and Gissane angle improved from 68.62 ± 2.64 to 72.44 ± 2.63 mm, 39.28 ± 2.72 to 32.37 ± 2.65 mm, 47.04 ± 2.56 to 49.55 ± 2.45 mm, 12.66° ± 2.86° to 26.93° ± 2.57°, and 123.91° ± 3.13° to 96.06° ± 3.92°, respectively, after surgery, with P value < 0.001. There were 21 (46.7%) excellent, 17 (37.8%) good, 4 (8.8%) fair, and 3 (6.7%) poor outcomes based on AOFAS ankle/hindfoot scores. Time to fracture union was 11.06 ± 1.82 weeks (range 8-16 weeks), and all fractures united without major complications. This minimally invasive technique through a small incision on the lateral aspect of the calcaneus gives moderately good exposure for anatomical restoration of Sanders type II and III calcaneal fractures fixed with both transverse and axial screws under fluoroscopic guidance. abstract_id: PUBMED:35987107 Calcaneus fracture pattern and severity: Role of local trabecular bone density. Calcaneus fracture is the most common tarsal bone fracture and is associated with external loads resulting from vehicle crashes, underbody blasts, or sports. Almost 50% of weight bearing by the foot occurs through the calcaneus, and its surgical fixation remains a challenging procedure. Postmortem human subjects were used to measure the regional trabecular BMD of the calcaneus. Mean age, height and weight of the included 14 specimens were 69 years, 177 cm and 80 kg, respectively.
Using a custom mode within Quantitative Computed Tomography clinical software, calcaneal trabecular BMD in the anterior and posterior regions was quantified. Tolerance data and calcaneus fracture patterns were also available for these specimens from previous tests. The posterior region of the calcaneus had a higher mean BMD (114 mg/cc) than the anterior region (81 mg/cc). These BMD differences also paralleled the injury outcomes of specimens under axial loading, with 50% of specimens sustaining high-severity anterior region calcaneal fractures and 36% of specimens sustaining low-severity posterior calcaneal fractures. These findings may reflect the lower BMD in the anterior region, although the load was uniformly distributed across the plantar surface of the foot. Severity of fracture was greater (intraarticular/crush) in the anterior region as compared to fractures of the posterior region. The BMD ratio between the anterior and posterior regions differed significantly (p = 0.02) between anterior region fractures and posterior region fractures. The ratio parameter may indicate that the disparity in trabecular BMD between the anterior and posterior calcaneus regions is more important in predicting injury outcome than the absolute BMD value of each region. Answer: Yes, the axial view continues to play an important role in dealing with calcaneal fractures. According to the study with PUBMED:25887075, the axial view is useful in diagnosing patients with suspected calcaneal fractures, particularly for distinguishing intra-articular fractures and selecting patients for CT scans. The introduction of the angle Z, which reflects the degree of bulge on the calcaneal lateral wall on the axial view, has shown that the axial view can perform excellently in intra-operative assessment as well as in post-operative follow-up procedures. The study concluded that the axial view has an irreplaceable role in assessing and evaluating calcaneal fractures and can be employed as an essential reference during surgical procedures. However, it is worth noting that there are differing opinions on the utility of the axial view. For instance, the study with PUBMED:10775685 suggested that the axial view did not improve the sensitivity or specificity of the lateral view alone in diagnosing fractures and recommended that it should not be used routinely in assessing a patient with a possible calcaneal fracture. Despite this, other studies have explored and suggested modifications or alternative views to improve the accuracy of assessments. The study with PUBMED:32660271 introduced the Anterior-Posterior (AP) Calcaneal Profile View as a novel radiographic view that demonstrated increased accuracy for assessing calcaneal axial alignment compared to the Harris axial view, especially for higher degrees of varus malalignment. This suggests that while the axial view is important, there may be more accurate alternatives or supplementary views that can be used in specific situations.
Instruction: Is HIV infection a risk factor for complications of surgery? Abstracts: abstract_id: PUBMED:12415327 Is HIV infection a risk factor for complications of surgery? Background: The literature is inconsistent as to whether HIV-infected patients have higher rates of surgical complication rates than HIV-uninfected patients. This inconsistency reflects the failure to control for confounding variables in many of the previous studies. Methods: A retrospective cohort study of records of HIV-infected individuals who underwent surgical procedures between 1990 and 1995 was matched with the records of HIV-uninfected control patients. We performed a logistic regression analysis to determine the independent effects of HIV infection and other potential risk factors for surgical complications. Results: The crude rates of death and infectious and hematologic complications were higher among HIV-infected patients than among uninfected patients. Although the crude risk of having any complication was higher among the HIV-infected (odds ratio [OR]=2.47, p=0.015), the adjusted risk was not (OR=0.72 [p&lt;0.613]). Variables significantly associated with complications were American Society of Anesthesiology (ASA) risk class (OR=2.7), age (OR=1.06 per year), and weight (OR=0.96 per kg). Conclusions: HIV sero-status was not found to be an independent risk factor for complications of surgery. The most important risk factor for complication of surgery in HIV-infected patients is ASA risk class. abstract_id: PUBMED:7576327 The incidence of complications after caesarean section in 156 HIV-positive women. Objective: To investigate the risks of post-operative complications in HIV-positive mothers who undergo a caesarean section (CS) because the delivery cannot be safely accomplished by the vaginal route or to protect the infant from viral infection. Design: In a multicentre study, we reviewed the incidence and type of post-operative complications in 156 HIV-positive women who underwent a CS. These results were compared with those observed in an equal number of HIV-uninfected women who matched for the indication requiring a caesarean delivery, the stage of labour, the integrity or rupture of membranes, and the use of antibiotic prophylaxis. Setting: Seven teaching hospitals providing obstetrical care for mothers infected with HIV. Results: We found that six HIV-infected mothers suffered a major complication (two cases of pneumonia, one pleural effusion, two severe anaemia and one sepsis) compared with only one HIV-negative woman who required blood transfusion after surgery. Minor complications like post-operative fever, endometritis, wound and urinary tract infections were significantly more frequent in HIV-positive women than controls. Multivariate analysis revealed that in HIV-infected women the only factor associated with a significant increase in the rate of complications was a CD4 lymphocyte count &lt; 200 x 10(6)/l. Conclusions: The results of our study indicate that HIV-positive mothers are at an increased risk of post-operative complications when delivered by CS. The risk of post-operative complications is higher in HIV-infected women who are severely immunodepressed. abstract_id: PUBMED:33433725 Compromised Gut Associated Lymphoid Tissue is a Risk Factor for Postoperative Septic Complications in HIV-Seropositive Trauma Patients. 
Background: The gut associated lymphoid tissue (GALT) is an important part of the immune system and is compromised in HIV treatment-naïve patients as well as in HIV-seropositive patients on antiretroviral treatment (ART) due to HIV-induced changes. The influence of the impaired GALT on the postoperative complication rate after surgery for penetrating abdominal trauma has not been investigated, and the hypothesis that the HIV-induced changes of the GALT contribute to septic complications postoperatively was tested. Material And Methods: This prospective study included patients who required a small bowel resection due to abdominal gunshot wounds. A bowel specimen was obtained in the index operation, and the T-lymphocytic quantity in the specimen was analyzed via immunohistochemistry to scrutinize whether these lymphocyte numbers had an impact on the postoperative outcome. Septic and postoperative complications were documented during the in-hospital course and the first month after discharge. Results: In total, 62 patients were included in the study, of which 38 patients were HIV-seronegative and 24 were HIV-seropositive. HIV-seropositive patients had a significantly lower quantity of CD4 + T cells in the GALT compared to the HIV-seronegative patients (p = 0.0001), which was also associated with a significantly higher rate of septic complications in the postoperative course. In the HIV-seropositive group, no significant differences were detected for T-lymphocytic quantity in the GALT between the HIV-treatment naïve and antiretroviral treatment groups. Conclusion: The compromised GALT in HIV-seropositive patients may predispose these patients to postoperative septic complications. Antiretroviral therapy does not result in an adequate immune reconstitution in this tissue. abstract_id: PUBMED:10507107 Knee replacement arthroplasty in hemophilia: results, complications and predictive elements of their occurrence. Purpose Of The Study: To determine the effect of supplementation rate and HIV disease on results and complications in total knee replacement in hemophilia. Material: Twenty-nine total knee arthroplasties were performed in twenty-one disabled patients with major hemophilia, from 1986 to 1995. All the implants were postero-stabilised prostheses. The average age of patients at the time of surgery was 40.8 years, and the average follow-up was 4.8 years. Preoperatively, all patients complained of severe pain, with stage IV or V arthropathies according to the Arnold classification. Twelve patients were HIV sero-positive. Method: Functional and radiological results and the postoperative complication rate were analyzed in relation to mean deficient factor titer, HIV status, and periarticular soft tissue quality. Results: Results of the total knee arthroplasties, as determined by the IKS scoring system, were 86.2/100 for Pain, Motion, Stability and 88.7/100 for function. The average gain of motion was twenty-four degrees. One patient required amputation, and one an arthrodesis after deep infection. Postoperative complications, in addition to infections, included intra-articular bleeding in nine patients, one peroneal nerve palsy, two instances of factor VIII inhibition, four cases of superficial skin necrosis, and one major gastric bleeding episode which required surgical treatment. A high titer of deficient factor (> 70 per cent of the average normal concentration) seemed correlated with a lower complication rate. In this study, surgery had no effect on the evolution of HIV disease.
Discussion: The authors emphasize the high rate of postoperative infections (6/29), particularly in patients with HIV infection (5/12), frequently following superficial skin necrosis, as well as the high overall postoperative complication rate. Conclusion: Although good results were ultimately observed, the association of HIV disease, insufficient deficient factor concentration, and altered quality of the periarticular soft tissues particularly increased the occurrence of complications. With particular attention to these factors, and despite frequent complications, total knee arthroplasty in hemophilia restores good function and allows patient satisfaction. abstract_id: PUBMED:16941774 Dentistry. Is HIV a risk factor for complications of dentoalveolar surgery? N/A abstract_id: PUBMED:32627360 Lung cancer surgery in HIV-infected patients: An analysis of postoperative complications and long-term survival. Background: The purpose of this study was to investigate the risk factors of postoperative complications and reliable prognostic factors of long-term survival in HIV-infected patients with non-small cell lung cancer (NSCLC). Methods: HIV-infected patients with NSCLC who underwent surgical treatment were retrospectively studied; a single-institutional analysis was conducted from November 2011 to August 2018. Pre- and postoperative clinical data, including age, gender, smoking history, highly active antiretroviral therapy (HAART), CD4+ T cell count, HIV viral load, cancer histology, clinical and pathological stage (p-stage), surgical result, Glasgow Prognostic Score (GPS), the Charlson comorbidity index (CCI), survival time and postoperative complications, were collected. Results: A total of 33 HIV-infected patients with NSCLC were enrolled, of which 18 (54.7%) had preoperative comorbidities; postoperative complications were observed in 22 (66.7%) patients. Thirty-day mortality was not observed in these patients. Median survival time (MST) after surgery was 65 months: the MST of p-stage I patients was 65 months; p-stage II MST was unestimable; p-stage III MST was 21 months. Univariate analyses showed that postoperative complications were associated with HIV viral load (P = 0.002), CCI (P = 0.027), HAART (P = 0.028) and CD4+ T cell count (P = 0.045). However, multiple logistic regression analysis showed no correlation between HAART and postoperative complications. The p-stage was an independent prognostic factor for survival time. Conclusions: In our single-arm retrospective analysis, the risk factors for postoperative complications in HIV-infected patients with NSCLC were HIV viral load, CCI and CD4+ T cell counts. The p-stage was a predictive factor for long-term survival. abstract_id: PUBMED:35944567 Human Immunodeficiency Virus Status Does Not Independently Predict 2-Year Complications Following Total Knee Arthroplasty. With improved treatment for human immunodeficiency virus (HIV), the demand for total knee arthroplasty (TKA) in this population has increased. Studying the relationship between HIV and postoperative complications following TKA will allow orthopaedic surgeons to accurately assess their patients' surgical risk and provide appropriate counseling. This study aims to understand how HIV impacts surgical and medical complications following TKA for osteoarthritis (OA). Patients identified in a national insurance database who underwent TKA for OA from 2010 to 2019 were divided into three cohorts: no HIV, asymptomatic HIV, and acquired immunodeficiency syndrome (AIDS).
Univariate and multivariable regression analyses were performed to determine 90-day postoperative complications as well as 2-year surgical complications (revision surgery, prosthetic joint infection, aseptic loosening, and manipulation under anesthesia). A total of 855,373 patients were included, of whom 1,338 had asymptomatic HIV and 268 had AIDS. After multivariable regression analysis, patients with HIV had no difference in 2-year surgical complications relative to the control cohort. Within 90 days postoperatively, patients with asymptomatic HIV had increased odds of arrhythmia without atrial fibrillation and lower odds of anemia. Patients with AIDS had increased odds of anemia and renal failure. Patients with HIV and AIDS are at an increased risk for developing 90-day medical complications and 2-year surgical complications. However, after accounting for their comorbidities, the risk of 90-day complications was only mildly increased and the risk of 2-year surgical complications approximated the control cohort. Surgeons should pay particular attention to these patients' overall comorbidities, which appear to be more closely associated with postoperative risks than HIV status alone. Level of evidence: III. abstract_id: PUBMED:11850864 Rates of postoperative complications among human immunodeficiency virus-infected women who have undergone obstetric and gynecologic surgical procedures. Clinical observations indicate that human immunodeficiency virus (HIV)-positive women experience more postoperative problems than do HIV-negative women. To obtain a better estimate of the individual risk of postoperative morbidity among HIV-infected women, and to determine which procedures pose the greatest risk, we performed a retrospective case-control study in which we assessed the outcomes after 235 obstetric and gynecologic surgical procedures. For purposes of comparison, an HIV-negative control patient was matched for each of the 235 surgical procedures performed, on the basis of the type of procedure and patient age. We found a significantly greater number of postoperative complications among the HIV-positive women. Higher complication rates occurred after abdominal surgery (odds ratio [OR], 3.6; P=.001) and curettage (OR, 7.7; P=.06). Among HIV-infected women, the risk of complications was associated with immune status. Antiretroviral therapy and standard perioperative antibiotic prophylaxis did not decrease the risk of complications. Indications for performing abdominal surgery and curettage on HIV-infected women should be carefully weighed against the potential risk of postoperative complications. abstract_id: PUBMED:34347141 Rotator cuff repair in HIV-positive patients ages 65 and older: only slight increase in risk of general postoperative surgical complications. Purpose: To examine postoperative complications associated with rotator cuff repair (RCR) in HIV-positive patients ages 65 and older. Methods: Data were collected from the Medicare Standardized Analytic Files between 2005 and 2015 using the PearlDiver Patient Records Database. Subjects were selected using Current Procedural Terminology (CPT) and International Classification of Diseases (ICD) codes. Demographics including age, sex, medical comorbidities, and smoking status were collected. Complications were examined at 7-day, 30-day, and 90-day postoperative time points. Data were examined with univariate and multivariate analyses. 
Results: The study included 152,114 patients who underwent RCR, with 24,486 (16.1%) patients who were HIV-positive. Following univariate analysis, patients with HIV were observed to be more likely to develop 7-day, 30-day, and 90-day postoperative complications. However, the absolute risk of each complication was quite low for HIV-positive patients. Univariate and multivariate analysis showed that within 7 days following surgery, patients with HIV were more likely to develop myocardial infarction (OR 2.5, AR 0.1%) and sepsis (OR 2.5, AR 0.04%). Within 30 days, HIV-positive patients were at increased risk for postoperative anemia (OR 2.8, AR 0.1%), blood transfusion (OR 3.3, AR 0.1%), heart failure (OR 2.3, AR 0.8%), and sepsis (OR 2.7, AR 0.1%). Within 90 days, mechanical complications (OR 2.1, AR 0.1%) were increased in the HIV-positive group. Conclusion: Postoperative complications of RCR occurred at increased rates in the HIV-positive group compared to the HIV-negative group in patients ages 65 and older. In particular, increased risk for myocardial infarction, sepsis, heart failure, anemia, and mechanical complications was noted in HIV-positive patients. However, the actual percentage of patients who experienced each complication was low, indicating RCR is likely safe to perform even in older HIV-positive patients. As more older adults living with HIV present for elective orthopedic procedures, the results of the present study may reassure physicians who are considering RCR as an option for patients in this particular population, while also informing providers about potential complications. Level Of Evidence: III. abstract_id: PUBMED:29198873 Short and Long-Term Postoperative Complications Following Total Joint Arthroplasty in Patients With Human Immunodeficiency Virus, Hepatitis B, or Hepatitis C. Introduction: Due to advancement in treatment against human immunodeficiency virus (HIV), hepatitis B virus (HBV), and hepatitis C virus (HCV), the prevalence of this patient population electing to undergo total joint arthroplasty (TJA) is increasing. Current literature is scarce and conflicting especially when evaluating long-term surgical complications. The purpose of this study is to assess the postoperative medical and surgical complications following TJA in these patient populations. Methods: Using a nationwide database between 2005 and 2012, 4 cohorts were created: patients with HIV, HCV, HBV, and HIV and HBV or HCV who underwent TJA. Cohorts were matched to a control group by age, gender, and Charlson Comorbidity Index. Thirty-day and 90-day medical complications and 90-day and 2-year surgical complications were evaluated using odds ratios with 95% confidence intervals. Results: Following TJA, patients with HCV or HBV had increased risk of pneumonia, sepsis, joint infection, and revision surgery at 90 days and 2 years. Patients with HIV did not have increased risk of infection at 90 days and 2 years but did have increased risk of revision at 90 days (odds ratio 3.21, 95% confidence interval 1.31-7.84) following total hip arthroplasty. Conclusions: Patients with HIV, HBV, or HCV have an overall increased risk of postoperative medical and surgical complications following TJA. Patients with HBV or HCV are at risk of more complications than patients with HIV especially for infection within 90 days after TJA. Patients with HIV are at risk of mechanical complications but do not appear to be at significant risk for infection following total hip arthroplasty. 
Answer: The relationship between HIV infection and surgical complications is complex and varies depending on the type of surgery and the patient's overall health status. Some studies have found that HIV infection is not an independent risk factor for complications of surgery when controlling for other variables. For instance, a retrospective cohort study found that after adjusting for confounding factors, HIV serostatus was not an independent risk factor for surgical complications, with the most important risk factor being the American Society of Anesthesiology (ASA) risk class (PUBMED:12415327). However, other studies have reported increased risks of postoperative complications in HIV-positive patients. For example, HIV-positive mothers who underwent caesarean section were found to be at an increased risk of postoperative complications, particularly if they were severely immunodepressed (PUBMED:7576327). Similarly, compromised gut-associated lymphoid tissue (GALT) in HIV-seropositive patients was associated with a higher rate of septic complications postoperatively (PUBMED:33433725). In the context of knee replacement arthroplasty in hemophilia, HIV-positive patients had a higher level of postoperative infections and complications (PUBMED:10507107). In lung cancer surgery, HIV-infected patients with non-small cell lung cancer (NSCLC) experienced postoperative complications associated with HIV viral load, comorbidity index, and CD4+ T cell counts (PUBMED:32627360). Another study found that HIV status did not independently predict 2-year complications following total knee arthroplasty after accounting for comorbidities (PUBMED:35944567). Moreover, HIV-infected women were found to have a significantly greater number of postoperative complications after obstetric and gynecologic surgical procedures (PUBMED:11850864). In older HIV-positive patients undergoing rotator cuff repair, there was a slight increase in the risk of general postoperative surgical complications (PUBMED:34347141). Lastly, patients with HIV, HBV, or HCV undergoing total joint arthroplasty had an increased risk of postoperative medical and surgical complications, with HBV or HCV patients at higher risk for infection and HIV patients at risk for mechanical complications but not significantly for infection following total hip arthroplasty (PUBMED:29198873).
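To illustrate the crude-versus-adjusted distinction that several of the studies above rely on (for example, the logistic regression in PUBMED:12415327), the following minimal sketch uses simulated data only; the variable names, coefficients, and counts are assumptions for demonstration, not values from the cited studies. It shows how an apparent (crude) odds ratio for HIV status can largely disappear once confounders such as ASA class and age enter the model.

```python
# Illustrative sketch on simulated data; not results from the cited studies.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
asa = rng.integers(1, 5, n)                        # ASA class 1-4 (confounder)
age = rng.normal(55, 12, n)                        # age in years (confounder)
hiv = rng.binomial(1, 0.05 + 0.04 * (asa - 1))     # HIV made more common at higher ASA class
logit = -6 + 0.9 * asa + 0.04 * age                # complications driven by ASA and age only
comp = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # observed complication indicator

crude = sm.Logit(comp, sm.add_constant(hiv)).fit(disp=False)
adjusted = sm.Logit(comp, sm.add_constant(np.column_stack([hiv, asa, age]))).fit(disp=False)

print("crude OR for HIV:   ", round(float(np.exp(crude.params[1])), 2))
print("adjusted OR for HIV:", round(float(np.exp(adjusted.params[1])), 2))
```

Because HIV status is correlated with ASA class in this toy data while complications depend only on ASA class and age, the crude odds ratio overstates the HIV effect and the adjusted estimate falls back toward 1, mirroring the pattern reported in the first abstract.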
Instruction: Does active surveillance for men with localized prostate cancer carry psychological morbidity? Abstracts: abstract_id: PUBMED:34037450 The psychological impact of active surveillance in men with prostate cancer: implications for nursing care. Introduction: Active surveillance is a conservative management approach to treating prostate cancer involving regular testing and close monitoring by the health professional. The aim of this literature review is to establish whether men experience a psychological impact of active surveillance and what the prevalent effects might be. Method: The search was carried out in three databases: CINAHL, Medline and PsycINFO. Articles published in English, from October 2015 to March 2018, which focused on the psychological impact of active surveillance, were included. Findings: A total of eight quantitative studies were included in this report. The review identified key psychological impacts of active surveillance, including anxiety, sub-clinical depression, illness uncertainty and hopelessness. Active surveillance was seen by some patients as a positive treatment approach that limited the side effects associated with active treatment. Conclusion: The evidence found that a negative impact of active surveillance might be felt by men at any stage during treatment and at differing levels of severity. The article highlights key demographic areas, including ethnicity and age, for future research and recommends that more qualitative studies be conducted. abstract_id: PUBMED:17550414 Does active surveillance for men with localized prostate cancer carry psychological morbidity? Objectives: To investigate, in a cross-sectional study, the prevalence of anxiety and depression in patients with localised prostate cancer managed by active surveillance, compared with those receiving immediate treatment, as active surveillance is a relatively new approach to managing this disease, designed to avoid 'unnecessary' treatment, but it is unclear whether the approach contributes to psychological distress, given that men are living with untreated cancer. Patients And Methods: A consecutive series of 764 patients with prostate cancer were approached in outpatient clinics. Of these, 329 men with localized disease (cT1/2, N0/NX, M0/MX) meeting the study entry criteria completed the Hospital Anxiety and Depression Scale (HADS); 100 were on active surveillance, 81 were currently receiving radical treatment (radiotherapy + neoadjuvant hormone therapy) and 148 had previously received radical radiotherapy. Results: Overall, 16% (51/329) of patients met the HADS criteria for anxiety and 6% (20/329) for depression. Analyses indicated that higher anxiety scores were significantly associated with younger age (P < 0.01) and a longer interval since diagnosis (P < 0.01), but not with management by active surveillance (P = 0.38). Higher depression scores were significantly associated with a longer interval since diagnosis (P < 0.05), but not with management by active surveillance (P = 0.83). Conclusion: Active surveillance for managing localized prostate cancer was not associated with greater psychological distress than more immediate treatment for prostate cancer. abstract_id: PUBMED:22357407 Psychological aspects of active surveillance. Purpose Of Review: Active surveillance is emerging as a serious alternative to radical therapy for low-risk prostate cancer.
In a situation in which the difference in effects on disease morbidity and mortality of different treatment options for these malignancies is likely to be small, quality of life and psychological aspects may be decisive in treatment choice. Recent Findings: The following three are the main issues covered in the literature on psychological aspects of active surveillance. First, the process of consultation with the physician and treatment choice in men diagnosed with low-risk prostate cancer. Second, the effect of active surveillance on physical domains and resulting anxiety and distress, and on quality of life in general. And third, the possible supportive and educational interventions for patients on active surveillance. Observations are scarce and derived from nonrandomized studies with a limited follow-up after diagnosis. Summary: At the moment of treatment choice, fear of disease progression is the main reason to reject active surveillance. Active surveillance may spare physical domains and does not cause much anxiety or distress in the short term in men who choose this strategy. Once men opt for active surveillance, only a minority of them switch to radical treatment due to psychological reasons. Supportive and educational interventions should be considered. abstract_id: PUBMED:25467109 Wellbeing during Active Surveillance for localised prostate cancer: a systematic review of psychological morbidity and quality of life. Background: Active Surveillance (AS) is recommended for the treatment of localised prostate cancer; however, this option may be under-used, at least in part because of expectations of psychological adverse events in those offered or accepting AS. Objective: (1) Determine the impact on psychological wellbeing when treated with AS (non-comparative studies). (2) Compare AS with active treatments for the impact on psychological wellbeing (comparative studies). Method: We used the PRISMA guidelines and searched Medline, PsychInfo, EMBASE, CINHAL, Web of Science, Cochrane Library and Scopus for articles published January 2000-2014. Eligible studies reported original quantitative data on any measures of psychological wellbeing. Results: We identified 34 eligible articles (n=12,497 individuals); 24 observational, eight RCTs, and two other interventional studies. Studies came from North America (16), Europe (14), Australia (3) and North America/Europe (1). A minority (5/34) were rated as high quality. Most (26/34) used validated instruments, whilst a substantial minority (14/34) used watchful waiting or no active treatment rather than Active Surveillance. There was modest evidence of no adverse impact on psychological wellbeing associated with Active Surveillance, and no differences in psychological wellbeing compared to active treatments. Conclusion: Patients can be informed that Active Surveillance involves no greater threat to their psychological wellbeing as part of the informed consent process, and clinicians need not limit access to Active Surveillance based on an expectation of adverse impacts on psychological wellbeing. abstract_id: PUBMED:28847461 Active surveillance of prostate cancer. Several prospective studies have demonstrated the safety of active surveillance as a first treatment of prostate cancer. It spares many patients an unnecessary treatment, with its potential sequelae. Patients with a low-risk cancer are all candidates for this approach, as recommended by the American Society of Clinical Oncology (ASCO).
Some patients with intermediate-risk disease could also be considered for active surveillance, but this is still under discussion. Currently, the presence of grade 4 lesions on biopsy is a contra-indication. Modalities include repeated prostate-specific antigen testing and systematic rebiopsy during the first year after diagnosis. MRI is now proposed to better select patients at inclusion and also during surveillance. No lifestyle changes or drugs are significantly associated with a longer duration of surveillance. abstract_id: PUBMED:25374902 Active surveillance in men with low-risk prostate cancer: current and future challenges. Introduction: The implementation of prostate-specific antigen (PSA) screening has coincided with a decrease in the mortality rate from prostate cancer at the cost of overtreatment. Active surveillance has thus emerged to address the concern for over-treatment in men with low-risk prostate cancer. Methods: A contemporary review of the literature with respect to low-risk prostate cancer and active surveillance was conducted. The premise of active surveillance, ideal candidates, follow-up practices, treatment triggers, and the observed outcomes of delayed interventions are reviewed. Various institutional protocols are compared and contrasted. Results: Eligibility criteria from various institutions share similar principles. Candidates are followed with PSA kinetics and/or repeat biopsies to identify those who require intervention. Various triggers for intervention have been recognized, achieving overall and cancer-specific survival rates > 90% in most protocols. New biomarkers, imaging modalities and genetic tests are also currently being investigated to enhance the efficacy of active surveillance programs. Conclusion: Active surveillance has been shown to be safe and effective in managing men with low-risk prostate cancer. Although as many as 30% of men on surveillance will eventually need intervention, survival rates with delayed intervention remain reassuring. Long-term studies are needed for further validation of current active surveillance protocols. abstract_id: PUBMED:28482875 Qualitative insights into how men with low-risk prostate cancer choosing active surveillance negotiate stress and uncertainty. Background: Active surveillance is a management strategy for men diagnosed with early-stage, low-risk prostate cancer in which their cancer is monitored and treatment is delayed. This study investigated the primary coping mechanisms for men following the active surveillance treatment plan, with a specific focus on how these men interact with their social network as they negotiate the stress and uncertainty of their diagnosis and treatment approach. Methods: Thematic analysis of semi-structured interviews at two academic institutions located in the northeastern US. Participants include 15 men diagnosed with low-risk prostate cancer following active surveillance. Results: The decision to follow active surveillance reflects the desire to avoid potentially life-altering side effects associated with active treatment options. Men on active surveillance cope with their prostate cancer diagnosis both by maintaining a sense of control over their daily lives and by relying on the support provided to them by their social networks and the medical community. Social networks support men on active surveillance by encouraging lifestyle changes and serving as a resource to discuss and ease cancer-related stress.
Conclusions: Support systems for men with low-risk prostate cancer do not always interface directly with the medical community. Spousal and social support play important roles in helping men understand and accept their prostate cancer diagnosis and chosen care plan. It may be beneficial to highlight the role of social support in interventions targeting the psychosocial health of men on active surveillance. abstract_id: PUBMED:31663180 A systematic review of the unmet supportive care needs of men on active surveillance for prostate cancer. Objective: Understanding the unmet supportive care needs of men on active surveillance for prostate cancer may enable researchers and health professionals to better support men and prevent discontinuation when there is no evidence of disease progression. This review aimed to identify the specific unmet supportive care needs of men on active surveillance. Methods: A systematic review following PRISMA guidelines was conducted. Databases (Pubmed, Embase, PsycINFO, and CINAHL) were searched to identify qualitative and/or quantitative studies that reported unmet needs specific to men on active surveillance. Quality appraisals were conducted before results were narratively synthesised. Results: Of the 3613 unique records identified, only eight articles were eligible (five qualitative and three cross-sectional studies). Unmet Informational, Emotional/Psychological, Social, and "Other" needs were identified. Only three studies had a primary aim of investigating unmet supportive care needs. Small active surveillance samples, use of nonvalidated measures, and minimal reporting of author reflexivity in qualitative studies were the main quality issues identified. Conclusions: The unmet needs of men on active surveillance are an under-researched area. Preliminary evidence suggests the information available and provided to men during active surveillance is perceived as inadequate and inconsistent. Men may also be experiencing unmet psychological/emotional, social, and other needs; however, further representative, high-quality research is required to understand the magnitude of this issue. Reporting results specific to treatment type and utilising relevant theories/models (such as the social ecological model [SEM]) is recommended to ensure factors that may facilitate unmet needs are appropriately considered and reported. abstract_id: PUBMED:33388921 Psychological aspects of active surveillance. Aim: The objective of this paper was to discuss the psychological impact of active surveillance (AS) for prostate cancer (PCa) and the resulting implications of psychological wellbeing for treatment decision making and acceptance of AS protocols. Method: Qualitative and quantitative research in the area of anxiety, depression, and distress is discussed, drawing from the PCa literature as well as other health conditions from which parallels can be drawn. Results: Attention is given to the role of the clinician in treatment decision making, including the value of information provision and perceived trust in AS as a management approach. Conclusion: Given that research is conflicted regarding the psychological impact of AS for PCa, it is suggested that the focus shift away from debating the 'true' experience of AS; rather, researchers and clinicians should seek to identify the factors associated with positive and negative psychological responses to diagnosis and AS to improve psychological and physical outcomes. Recommendations for clinical practice are discussed.
abstract_id: PUBMED:29732278 Quality of care and economic considerations of active surveillance of men with prostate cancer. The current health care climate mandates the delivery of high-value care for patients considering active surveillance for newly diagnosed prostate cancer. Value is defined by increasing benefits (e.g., quality) for acceptable costs. This review discusses quality of care considerations for men contemplating active surveillance, and highlights cost implications at the patient, health-system, and societal level related to pursuit of non-interventional management of men diagnosed with localized prostate cancer. Most quality measures focus on prostate cancer care in general rather than on active surveillance patients specifically. However, most prostate cancer quality measures are pertinent to men seeking close observation of their prostate tumors with active surveillance. These include accurate documentation of clinical stage, informed discussion of all treatment options, and appropriate use of imaging for less-aggressive prostate cancer. Furthermore, interventions that may help improve the quality of care for active surveillance patients are reviewed (e.g., quality collaboratives, judicious antibiotic use, etc.). Finally, the potential economic impact and benefits of broad acceptance of active surveillance strategies are highlighted. Answer: Active surveillance for men with localized prostate cancer does not appear to be associated with greater psychological distress compared to more immediate treatment for prostate cancer. A cross-sectional study found that active surveillance was not associated with higher anxiety or depression scores when compared to immediate treatment (PUBMED:17550414). Similarly, a systematic review of psychological morbidity and quality of life found no adverse impact on psychological wellbeing associated with active surveillance, and no differences in psychological wellbeing compared to active treatments (PUBMED:25467109). However, the psychological impact of active surveillance can vary among individuals. Some men may experience anxiety, sub-clinical depression, illness uncertainty, and hopelessness (PUBMED:34037450). Fear of disease progression is a main reason some men reject active surveillance, but once men opt for it, only a minority switch to radical treatment due to psychological reasons (PUBMED:22357407). Men on active surveillance may cope with their diagnosis by maintaining a sense of control and relying on support from their social networks and the medical community (PUBMED:28482875). It is important to note that the unmet supportive care needs of men on active surveillance are an under-researched area, and preliminary evidence suggests that the information available to men during active surveillance is perceived as inadequate and inconsistent, which may contribute to unmet psychological/emotional, social, and other needs (PUBMED:31663180). Additionally, the psychological impact of active surveillance and the resulting implications for treatment decision making and acceptance of AS protocols are complex, and research is conflicted regarding the psychological impact of AS for prostate cancer (PUBMED:33388921). In conclusion, while active surveillance for localized prostate cancer does not inherently carry greater psychological morbidity compared to immediate treatment, individual experiences can vary, and there may be unmet supportive care needs that can affect psychological wellbeing.
Clinicians should be aware of these potential issues and provide appropriate support and information to men undergoing active surveillance.
Instruction: Do traditional VT zones improve outcome in primary prevention ICD patients? Abstracts: abstract_id: PUBMED:20727095 Do traditional VT zones improve outcome in primary prevention ICD patients? Aims: We reviewed outcomes in our primary prevention implantable cardioverter defibrillator (ICD) population according to whether the device was programmed with a single ventricular fibrillation (VF) zone or with two zones including a ventricular tachycardia (VT) zone in addition to a VF zone. Methods: This retrospective study examined 137 patients with primary prevention ICDs implanted at our institution between 2004 and 2006. Device programming and events during follow-up were reviewed. Outcomes included all-cause mortality, time to first shock, and incidence of shocks. Results: Eighty-seven ICDs were programmed with a single VF zone (mean >193 ± 1 beats per minute [bpm]) comprising shocks only. Fifty ICDs had two zones (mean VT zone >171 ± 2 bpm; VF zone >205 ± 2 bpm), comprising antitachycardia pacing (100%), shocks (96%), and supraventricular (SVT) discriminators (98%). Discriminator "time out" functions were disabled. Mean follow-up was 30 ± 0.5 months and similar in both groups. All-cause mortality (12.6% and 12.0%) and time to first shock were similar. However, the two-zone group received more shocks (32.0% vs 13.8%, P = 0.01). Five of 16 shocks in these patients were inappropriate for SVT rhythms. The single-zone group had no inappropriate shocks for SVTs. Eighteen of 21 appropriate shocks were for ventricular arrhythmias at rates >200 bpm (three VF, 15 VT). This suggests that primary prevention ICD patients infrequently suffer ventricular arrhythmias at rates <200 bpm and that ATP may play a role in terminating rapid VTs. Conclusions: Patients with two-zone devices received more shocks without any mortality benefit. abstract_id: PUBMED:34430946 Assessment of primary prevention patients receiving an ICD - Systematic evaluation of ATP: APPRAISE ATP. Background: The value of antitachycardia pacing (ATP) in the overall cohort of primary prevention patients who receive implantable cardioverter-defibrillators (ICDs) remains uncertain. ATP success reported in prior trials potentially included a large number of patients receiving unnecessary ATP for arrhythmias that may have self-terminated owing to the prematurity of the intervention. Although some patients derive benefit from initial ATP in terminating rapid ventricular arrhythmias and thereby preventing shocks, there are limited data allowing us to identify those patients a priori. Objective: The purpose of APPRAISE ATP is to understand the role of ATP in primary prevention patients currently indicated for ICD therapy in a large prospective randomized controlled trial with modern programming parameters. Methods: The study is a global, prospective, randomized, multicenter clinical trial conducted at up to 150 sites globally, enrolling approximately 2600 subjects. The primary endpoint of the trial is time to first all-cause shock in a 2-arm study with an equivalent study design in which the incidence of all-cause shocks will be compared between primary prevention subjects programmed with shocks only vs subjects programmed to standard therapy (ATP and shock). Results: An Electrogram and Device Interrogation Core Laboratory will review interrogation data to determine primary endpoints that occur in APPRAISE ATP. Their decisions are based on independent physician review of the data from device interrogation.
Conclusion: The ultimate purpose of the study is to aid clinicians in the selection of ICD technologies based on hard endpoint evidence across the spectrum of indications for primary prevention implantation. abstract_id: PUBMED:38073733 Indications and Effectiveness of ICD for Primary and Secondary Prevention in Patients Admitted in Ahvaz Imam Khomeini Hospital since 2017. Background: Implantable cardioverter-defibrillators (ICDs) have been established for primary and secondary prevention of fatal arrhythmias and effectively reduce the rate of sudden cardiac death (SCD). This study aims to evaluate the indications and effectiveness of ICD for primary and secondary prevention of SCD. Materials And Methods: This retrospective study was conducted on 229 patients (136 for primary and 93 for secondary prevention) with ICD implantations in Imam Khomeini Hospital, Ahvaz, between 2017 and 2020. The incidence of arrhythmic events after implantation of ICDs was saved in electrograms, and the performed treatments (antitachycardia pacing (ATP)/shock) were recorded from the device memory. Results: The indications for ICD implantation in primary and secondary prevention were different (P < 0.0001). The most common cause of ICD implantation for primary prevention was ischemic cardiomyopathy (ICMP, 90.4%) and for secondary prevention was ICMP (58.1%) followed by dilated cardiomyopathy (31.2%). During ICD implantation, 54 patients (39.7%) with ICD implantation for primary prevention and 50 patients (53.8%) for secondary prevention had arrhythmia (P = 0.043). The rate of appropriate therapies was higher in the secondary prevention group than in the primary prevention group (57.9% vs. 42.1%), while the rate of inappropriate treatments was higher in the primary prevention group than in the secondary prevention group (63% vs. 37%) (P = 0.060). Conclusions: ICMP was the main cause of ICD implantation for the prevention of SCD in both groups. At follow-up, a high prevalence of appropriate ICD therapy was observed in both groups, and this risk was slightly higher in the secondary prevention group. abstract_id: PUBMED:31082539 Understanding Outcomes with the EMBLEM S-ICD in Primary Prevention Patients with Low EF Study (UNTOUCHED): Clinical characteristics and perioperative results. Background: The subcutaneous implantable cardioverter-defibrillator (S-ICD) has shown favorable outcomes in large registries with broad inclusion criteria. The cohorts reported had less heart disease and fewer comorbidities than standard ICD populations. Objective: The purpose of this study is to characterize acute performance for primary prevention patients with a left ventricular ejection fraction (LVEF) ≤35% (primary prevention ≤35%). Methods: Primary prevention ≤35% patients with no prior documented sustained ventricular tachycardia (VT), pacing indication, end-stage heart failure, or advanced renal failure were prospectively enrolled. Analyses included descriptive statistics, Kaplan-Meier time to event, and multivariable linear and logistic regression. Results: In 1112 of 1116 patients, an S-ICD was successfully implanted (99.6%). Predictors for longer procedure time included 3-incision technique, higher body mass index (BMI), performing defibrillation testing (DFT), imaging, younger age, black race, and European vs North American centers. Patients undergoing DFT (82%) were successfully converted (99.2%; 93.5% converting at ≤65 J). Higher BMI was predictive of failing DFT at ≤65 J.
The rate of 30-day freedom from complications was 95.8%. Most complications involved postoperative healing (45%) or interventions after DFT or impedance check (19%). Conclusion: The procedural outcome data of UNTOUCHED reinforce that S-ICD therapy has low perioperative complication rates and high conversion efficacy of induced ventricular fibrillation, even in a higher-risk cohort with low LVEF and more comorbidities than previous S-ICD studies. Higher BMI warrants more careful attention to implant technique. abstract_id: PUBMED:23080327 Single-chamber ICD, single-zone therapy in primary and secondary prevention patients: the simpler the better? Background: It is now well established that implantable cardioverter defibrillator (ICD) implantation reduces mortality in patients at increased risk of sudden cardiac death. However, the best programming parameters remain controversial. Our traditional policy has followed a simple approach in the vast majority of patients. In accordance with ICD programming in the major randomized clinical trials, we programmed a single high-rate, shock-only therapy zone. We aimed to demonstrate in this observational study that simple programming is not associated with higher shock rates or mortality when compared to other published studies. Methods: Consecutive patients who underwent single-chamber ICD implantation with single-zone, high-rate programming at our institution between 1993 and 2008 were retrospectively studied. Data were collected prospectively in a database regarding details of ICD implantation, demographic data, and indication. Results: Three hundred thirty-two patients were included in our study, 31 % primary prevention and 68 % secondary prevention. Mean ejection fraction (EF) is 33.7 ± 15.3. Over a mean follow-up period of 62.5 ± 38.1 months, 135 patients experienced ICD shock (annualized event rate 7.7 %); 89 patients (26.8 %) appropriate shock in VT-ventricular fibrillation (VF), 68 patients (20.5 %) inappropriate shocks, and 22 patients (6.6 %) both. Twenty-nine patients (8.7 %) were reprogrammed to additional VT-ATP zones. Twenty-two (6.6 %) patients underwent heart transplantation. Sixty-two patients (18.6 %) died during follow-up, 43.6 % out of them due to cardiac cause, mainly progressive heart failure. Conclusion: Our results show that simpler settings with single-zone, high-rate programming is associated with ICD shock rates and long-term mortality that does not appear to be worse when compared with contemporary studies which include multizone ICD programming with antitachycardia pacing activated. abstract_id: PUBMED:26002818 Reduction of inappropriate ICD therapies in patients with primary prevention of sudden cardiac death: DECREASE study. Background: A significant number of patients with an implantable cardioverter/defibrillator (ICD) for primary prevention receive inappropriate shocks. Previous studies have reported a reduction of inappropriate therapies with simple modifications of ICD detection settings, however, inclusion criteria and settings varied markedly between studies. Our aim was to investigate the effect of raising the ICD detection zone in the entire primary prevention ICD population. Methods And Results: 543 patients receiving an ICD for primary prevention were randomized to either conventional or progressive ICD programming. 
The detection rate was programmed at 171 bpm for ventricular tachycardia (VT) and 214 bpm for ventricular fibrillation (VF) in the Conventional group and 187 bpm for VT and 240 bpm for VF in the Progressive group. 43 % of patients received single-chamber and 57 % dual-chamber detection devices (DDD-ICD 19 %; CRT-D 38 %). The primary endpoint consisted of inappropriate therapies and untreated VT/VF. The primary endpoint was reached in 35 patients (13 %) in the Conventional group and 17 patients (6 %) in the Progressive group (p = 0.004). Progressive ICD programming led to significantly fewer patients with ICD therapies (26 vs. 14 %; p < 0.001) and shocks (11 vs. 5 %; p = 0.023) compared to conventional ICD programming. Sub-analyses showed the greatest reduction of inappropriate therapies and shocks in dual-chamber detection devices with progressive compared to single-chamber detection devices with conventional ICD programming (p < 0.001). Conclusions: Progressive ICD programming reduces the number of inappropriate therapies and shocks in a broad primary prevention ICD population, particularly in combination with dual-chamber detection algorithms. Clinical Trial Registration: http://clinicaltrials.gov; ClinicalTrials.gov identifier NCT01217528. abstract_id: PUBMED:32320087 Underdiagnosis of VT due to cycle length variation among cardiac sarcoidosis patients having ICD: Problem with stability discriminator. Background: Implantable cardioverter defibrillator (ICD) is recommended for patients with ventricular tachycardia (VT) due to cardiac sarcoidosis (CS). Programming supraventricular tachycardia (SVT) discriminators (onset, stability, and morphology/template match) is generally recommended to minimize inappropriate therapies. However, VT in patients with CS is known to show cycle length variability (CLV) and pleomorphism. Objective: To determine whether the stability criterion, designed to prevent inappropriate therapy during atrial fibrillation with rapid ventricular rates, could potentially lead to incorrect classification of VT as SVT and inappropriately delay or inhibit ICD therapy. Methods: Cases of biopsy-proven CS with VT were analyzed. For patients with implanted devices, all recorded electrograms of tachycardia episodes and ICD therapies were analyzed at last follow up. Results: A total of 142 patients were included (mean age 38 years, 87 males). One hundred and three of 142 patients had implanted devices (ICD or CRT-D). Thirty-eight of 103 (36.9%) patients received appropriate ICD therapies over 3 ± 2.2 years follow up. Four of 38 (10.5%) patients experienced delayed detection or underdetection of VT related to CLV, resulting in VT counters being repeatedly "reset" (classified as "unstable" rhythms). Retrospective analysis of other VT episodes in 70 of 103 (68%) patients revealed that 25 of 80 (31.3%) episodes had > 50 ms cycle length oscillations. Conclusion: Among CS patients with VT, CLV is a common occurrence seen in two-thirds of VT episodes. Routine programming of the stability criterion may result in underdetection of VT in a subset of such patients. We recommend that the stability criterion should be programmed "OFF" for patients with CS, unless the patient has documented atrial fibrillation. abstract_id: PUBMED:31087157 Prognostic relevance of new onset arrhythmia and ICD shocks in primary prophylactic ICD patients. Background: The prognostic relevance of new onset arrhythmias compared to ICD shocks in ICD patients is not well known.
Objectives: Aim of the study was to evaluate the prognostic relevance of new onset atrial fibrillation (AF) or ventricular arrhythmias (VT/VF) compared to ICD shocks in primary prophylactic ICD-patients. Methods: A total of 622 of 1955 (32%) patients of the prospective single-centre ICD-registry Ludwigshafen with primary prophylactic ICD indication and sinus rhythm (SR) at baseline without history of AF were analyzed. All patients underwent an ICD implantation between 1992 and 2012. Results: During the median follow-up time of 6 years, 200 (32%) ICD patients developed new AF and 249 (40%) patients new VT/VF. There was an approximately 10% increase of 5-year mortality rate depending on the type of new onset arrhythmia (no arrhythmia 19%, new AF 28%, new VT 36% and new VF 55% 5-year mortality). In a multivariate analysis, new onset of AF or VT/VF was an independent predictor for increased mortality whereas VT shocks and inappropriate ICD shocks were not. Conclusion: More than half of primary prophylactic ICD patients with SR at baseline develop new AF or VT/VF after 6 years. New onset arrhythmias of AF and VT/VF are independent prognostic factors for increased mortality in primary prophylactic ICD patients. ICD shocks itself, inappropriate or appropriate, are not additionally associated with a worse outcome. These results support the hypothesis that in clinical practice rather the arrhythmia than the ICD shock itself is responsible for a deteriorated prognosis. abstract_id: PUBMED:27943348 The Design of the Understanding Outcomes with the S-ICD in Primary Prevention Patients with Low EF Study (UNTOUCHED). Background: The UNTOUCHED study will assess the safety and efficacy of the subcutaneous implantable cardioverter defibrillator (S-ICD) in the most common cohort of patients receiving ICDs. The primary goal is to evaluate the inappropriate shock (IAS)-free rate in primary prevention patients with a reduced ejection fraction (EF) and compare with a historical control of transvenous ICD patients with similar programming. Methods And Results: The UNTOUCHED study is a global, multicenter, prospective, nonrandomized study of patients undergoing de novo S-ICD implantation for primary prevention of sudden cardiac death with a left ventricular EF ≤35%. The primary end point of this trial is freedom from IAS at 18 months. The lower 95% confidence bound of the observed incidence will be compared to a performance goal of 91.6%, which was derived from the IAS rate in MADIT-RIT. The secondary end points are all-cause shock-free rate at 18 months, and system- and procedure-related complication-free rate at 1 month and 6 months. Enrollment of a minimum of 1,100 subjects from up to 200 centers worldwide is planned based on power calculations of the primary and principal secondary end points. Conclusions: This trial will provide important data regarding the rates of inappropriate and appropriate shock therapy in real-world use of the S-ICD in the most common group of patients receiving ICDs. abstract_id: PUBMED:34829824 Comparable Efficacy in Ischemic and Non-Ischemic ICD Recipients for the Primary Prevention of Sudden Cardiac Death. (1) Background: In patients suffering from heart failure, the main causes of death are either hemodynamic failure, or ventricular arrhythmias. The only tool to significantly reduce arrhythmic sudden death is the implantable cardioverter defibrillator (ICD), but not all patients benefit to the same extent from these devices. 
(2) Methods: The primary outcome of this single-center study was defined as cardiovascular death in patients with ischemic and non-ischemic heart failure who have benefited from ICD therapy. The secondary outcomes were death from any cause, sudden cardiac death, ICD-related therapies (appropriate antitachycardia pacing or shock therapy for ventricular tachycardia or fibrillation) and recurrences of ventricular tachyarrhythmias. (3) Results: A total of 403 consecutive ICD recipients-symptomatic heart failure patients with ICD for the primary prevention of sudden cardiac death-were included retrospectively: 59% ischemic cardiomyopathy (ICMP) and 41% non-ischemic cardiomyopathy (NICMP) patients. Within a median follow-up period of 36 months, the incidence of cardiovascular mortality was not significantly different in patients with NICMP and ICMP: the primary outcome had occurred in 9 patients (5.4%) in the NICMP group and in 14 patients (5.9%) in the ICMP group (hazard ratio 1; 95% confidence interval (CI) 0.45 to 2.28; p = 0.97). All-cause mortality occurred in 14 of 166 patients (8.4%) in the NICMP group and 18 of 237 patients (7.6%) in the ICMP group. Sudden cardiac death occurred in two patients (1.2%) in the NICMP group and in four patients (1.7%) in the ICMP group (hazard ratio 0.71; 95% CI, 0.13 to 3.88; p = 0.69). The rate of appropriate device therapies was comparable in both groups. (4) Conclusions: In this study, ICD implantation for primary prevention of sudden cardiac death in patients with symptomatic systolic heart failure was associated with similar rates of cardiovascular and all-cause mortality in patients with ischemic heart disease, and in patients with heart failure from other causes. NICMP and ICMP showed comparable rates of recurrent ventricular tachyarrhythmias and appropriate ICD therapies. Answer: Traditional VT zones in primary prevention ICD patients do not appear to improve outcomes in terms of mortality benefit. A retrospective study examining 137 patients with primary prevention ICDs found that patients with two-zone devices received more shocks but did not show any mortality benefit compared to those with a single VF zone (PUBMED:20727095). Furthermore, the study indicated that primary prevention ICD patients infrequently suffer ventricular arrhythmias at rates below 200 bpm, suggesting that antitachycardia pacing (ATP) may play a role in terminating rapid VTs, but the overall benefit in terms of reducing mortality was not evident. Another study, the DECREASE study, showed that progressive ICD programming, which involves raising the ICD detection zone, led to significantly fewer inappropriate therapies and shocks compared to conventional ICD programming in a broad primary prevention ICD population, particularly when combined with dual-chamber detection algorithms (PUBMED:26002818). This suggests that while traditional VT zones may not improve mortality outcomes, optimizing ICD programming can reduce inappropriate therapies. Additionally, a study on single-chamber ICD with single-zone therapy in primary and secondary prevention patients suggested that simpler settings with single-zone, high-rate programming are associated with ICD shock rates and long-term mortality that do not appear to be worse when compared with studies that include multizone ICD programming with ATP activated (PUBMED:23080327). 
In summary, the evidence suggests that traditional VT zones do not necessarily improve mortality outcomes in primary prevention ICD patients, and more shocks may be associated with two-zone devices without a mortality benefit. However, optimizing ICD programming to reduce inappropriate therapies is beneficial.
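To make the detection-zone programming discussed in this entry concrete, the sketch below shows, in simplified form, how a detected ventricular rate maps onto a VT or VF zone under the conventional and progressive cutoffs reported in the DECREASE abstract (PUBMED:26002818). It is an illustrative toy, not device firmware: real ICDs also apply duration counters, SVT discriminators, and ATP sequences that are omitted here, and the function and dictionary names are invented for this example.

```python
# Toy illustration of rate-based zone classification.
# Cutoffs (bpm) are the detection rates reported for the two DECREASE arms;
# everything else (names, structure) is hypothetical.

CUTOFFS_BPM = {
    "conventional": {"vt": 171, "vf": 214},
    "progressive": {"vt": 187, "vf": 240},
}

def classify_zone(rate_bpm: float, programming: str = "conventional") -> str:
    """Assign a detected ventricular rate to a therapy zone."""
    cutoffs = CUTOFFS_BPM[programming]
    if rate_bpm >= cutoffs["vf"]:
        return "VF zone (shock)"
    if rate_bpm >= cutoffs["vt"]:
        return "VT zone (ATP, then shock)"
    return "below detection (no therapy)"

if __name__ == "__main__":
    for rate in (160, 180, 200, 250):
        print(rate, classify_zone(rate, "conventional"), "|", classify_zone(rate, "progressive"))
```

Running the toy shows, for example, that a rhythm at 180 bpm falls into the VT zone under conventional programming but below detection under progressive programming, which is the mechanism by which raising the cutoffs reduces therapies and shocks in the trial data summarized above.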
Instruction: Oral premalignant lesions: is a biopsy reliable? Abstracts: abstract_id: PUBMED:27713748 Indomethacin Treatment of Mice with Premalignant Oral Lesions Sustains Cytokine Production and Slows Progression to Cancer. Current treatment options for head and neck squamous cell carcinoma (HNSCC) patients are often ineffective due to tumor-localized and systemic immunosuppression. Using the 4-NQO mouse model of oral carcinogenesis, this study showed that premalignant oral lesion cells produce higher levels of the immune modulator, PGE2, compared to HNSCC cells. Inhibiting prostaglandin production of premalignant lesion cells with the pan-cyclooxygenase inhibitor indomethacin stimulated their induction of spleen cell cytokine production. In contrast, inhibiting HNSCC prostaglandin production did not stimulate their induction of spleen cell cytokine production. Treatment of mice bearing premalignant oral lesions with indomethacin slowed progression of premalignant oral lesions to HNSCC. Flow cytometric analysis of T cells in the regional lymph nodes of lesion-bearing mice receiving indomethacin treatment showed an increase in lymph node cellularity and in the absolute number of CD8+ T cells expressing IFN-γ compared to levels in lesion-bearing mice receiving diluent control treatment. The cytokine-stimulatory effect of indomethacin treatment was not localized to regional lymph nodes but was also seen in the spleen of mice with premalignant oral lesions. Together, these data suggest that inhibiting prostaglandin production at the premalignant lesion stage boosts immune capability and improves clinical outcomes. abstract_id: PUBMED:25419481 An Inflammatory Cytokine Milieu is Prominent in Premalignant Oral Lesions, but Subsides when Lesions Progress to Squamous Cell Carcinoma. While head and neck squamous cell carcinomas (HNSCC) are associated with profound immune suppression, less is known about the immunological milieu of premalignant oral lesions. The present study shows dynamic shifts in the immune milieu within premalignant oral lesions and when they have progressed to HNSCC. Specifically, this study showed that the premalignant lesion environment consists of inflammatory mediators and IL-17, but this inflammatory phenotype declines when premalignant oral lesions have progressed to HNSCC. The cytokine profiles of human tissues did not correspond with plasma cytokine profiles. A murine carcinogen-induced premalignant lesion model that progresses to HNSCC was used to examine cytokine profiles released from tissues as well as regional lymph nodes. As in human tissues, murine premalignant lesions and regional lymph nodes released high levels of inflammatory cytokines and, very prominently, IL-17. Also similar to human tissues, release of inflammatory cytokines declined in HNSCC tissues of mice and in the regional lymph nodes of mice with HNSCC. Studies focusing on IL-17 showed that mediators from premalignant lesions stimulated normal spleen cells to produce increased levels of IL-17, while mediators from HNSCC were less stimulatory toward IL-17 production. IL-17 production by Th17-skewed CD4+ cells was strongly inhibited by normal oral epithelium as well as HNSCC. In contrast, premalignant lesion-derived mediators further increased IL-17 production by Th17-skewed cells. The stimulation of IL-17 production by premalignant lesions was dependent on IL-23, which premalignant lesions released in higher amounts than control tissues or HNSCC. 
HNSCC tissues instead produced increased levels of TGF-β compared to premalignant lesions, and skewed normal spleen cells toward the Treg phenotype. This skewing was blocked by supplementation with IL-23. These studies suggest IL-23 to be a significant contributor to the inflammatory IL-17 phenotype in premalignant oral lesions and suggest the decline in IL-23 in HNSCC leads to a decline in Th17 cells. abstract_id: PUBMED:22851837 A brief review of common oral premalignant lesions with emphasis on their management and cancer prevention. Unlabelled: Long-term outcomes associated with oral cancer and its management over the past several decades have caused concern, and the value of mass oral cancer screenings has come under scrutiny. Though not all oral carcinomas are preceded by premalignant lesions, as clinically visible morphological alterations occur secondary to the cellular or molecular changes, certain high-risk lesions have been identified. Their management remains controversially polarized between surgical excision to prevent malignant change and conservative medical or surveillance techniques. Though oral cancer is one of the "major killers" of modern times, there seem to be no widely accepted criteria for decision making in clinical practice, the evidence base is scanty, and uncertainty persists throughout investigation, diagnosis, and treatment. In this article, we have briefly discussed the common premalignant lesions, with an emphasis on their evidence-based management and prevention. Electronic Supplementary Material: The online version of this article (doi:10.1007/s12262-011-0286-6) contains supplementary material, which is available to authorized users. abstract_id: PUBMED:36452814 Expression of Vascular Endothelial Growth Factor in Patients With Premalignant Lesions and Squamous Cell Carcinoma of Oral Cavity. To evaluate and compare expression of VEGF in patients with premalignant lesions and squamous cell carcinoma of the oral cavity. This cross-sectional observational study was undertaken at the departments of otorhinolaryngology and pathology, PGIMER and Dr RML Hospital, New Delhi, from 1st Nov 2017 to 31st March 2019, with a sample size of 30 cases each of premalignant lesions and oral squamous cell carcinoma, using immunohistochemistry by the polymer method. In the participants with oral SCC, VEGF expression of Score 1 was observed in verrucous and well differentiated tumor, Score 2 in moderately differentiated SCC and Score 3 in poorly differentiated SCC, with a p value of 0.0001. The observed difference in proportions is statistically significant. In this study we concluded that VEGF expression increases as the lesion progresses from premalignant lesions to oral squamous cell carcinoma and is strongly associated with lymph node status (N-staging). Thus, VEGF can be a target in chemotherapy, and its therapeutic implications in HNSCC need further research. Levels of Evidence 1A: Systematic review of randomized control trials. abstract_id: PUBMED:34703140 Association between Smokeless Tobacco and risk of malignant and premalignant conditions of oral cavity: A systematic review of Indian literature. Causative linkages of tobacco use with oral potentially malignant disorders and cancers of the oral cavity have been studied. Oral squamous cell carcinoma is one of the most common cancers in India. The International Agency for Research on Cancer (IARC) monograph found a significant association between smokeless tobacco (SLT) use and oral cancer.
However, only a few limited studies have been represented on the IARC monograph. Published meta-analyses have provided pooled risk estimates for oral cancers caused by tobacco, both on global and regional levels. This systematic review was aimed at summarizing all the available studies exclusively in India by collecting data from PubMed and Medline. Emphasis was laid on cohort and case-control studies, and a few cross-sectional studies for premalignant lesions were also discussed. A significant association was noticed on SLT and premalignant and malignant oral cavity lesions. abstract_id: PUBMED:26120967 Cytokine and Adipokine Levels in Patients with Premalignant Oral Lesions or in Patients with Oral Cancer Who Did or Did Not Receive 1α,25-Dihydroxyvitamin D3 Treatment upon Cancer Diagnosis. Differences in levels of inflammation-modulating cytokines and adipokines in patients with premalignant oral lesions versus in patients that develop squamous cell carcinoma of the head and neck (HNSCC) were assessed. Also assessed was the impact of treating HNSCC patients with the immune regulatory mediator, 1α,25-dihydroxyvitamin D3 [1,25(OH)2D3], on modulators of inflammation. Compared to healthy controls, patients with premalignant oral lesions had increases in their systemic levels of the inflammatory cytokines IL-6 and IL-17, and increases in the adipokine, leptin. However, levels of these pro-inflammatory cytokines and adipokine were reduced in patients with HNSCC. Treatment of HNSCC patients with 1,25(OH)2D3 increased levels of each of the measured immune mediators. Levels of the anti-inflammatory adipokine, adiponectin, were shifted inversely with the levels of the pro-inflammatory cytokines and with leptin. These studies demonstrate heightened immune reactivity in patients with premalignant lesions, which wanes in patients with HNSCC, but which is restored by treatment with 1,25(OH)2D3. abstract_id: PUBMED:30524889 Immunological and classical subtypes of oral premalignant lesions. Oral squamous cell carcinoma (OSCC) is a major cause of cancer-associated morbidity and mortality and may develop from oral premalignant lesions (OPL). An improved molecular classification of OPL may help refining prevention strategies. We identified two main OPL gene-expression subtypes, named immunological and classical, in 86 OPL (discovery dataset). A gene expression-based score was then developed to classify OPL samples from three independent datasets, including 17 (GSE30784),13 (GSE10174) and 15 (GSE85195) OPLs, into either one of the two gene-expression subtypes. Using the single sample gene set enrichment analysis, enrichment scores for immune-related pathways were different between the two OPL subtypes. In OPL from the discovery set, loss of heterozygosities (LOH) at 3p14, 17p13, TP53, 9p21 and 8p22 and miRNA gene expression profiles were analyzed. Deconvolution of the immune infiltrate was performed using the Microenvironment Cell Populations-counter tool. A multivariate analysis revealed that decreased miRNA-142-5p expression (P = 0.0484) and lower T-cell, monocytic and myeloid dendritic cells (MDC) immune infiltration (T-cells, P = 0.0196; CD8 T cells, P = 0.0129; MDC, P = 0.0481; and monocytes, P = 0.0212) were associated with oral cancer development in the immunological subtype only. In contrast, LOH at 3p14 (P = 0.0241), 17p13 (P = 0.0348) and TP53 (P = 0.004) were associated with oral cancer development in the classical subtype only. 
In conclusion, we identified 2 subtypes of OPLs, namely immune and classical, which may benefit from different and specific personalized prevention interventions. abstract_id: PUBMED:36213468 Role of Vimentin and E-cadherin Expression in Premalignant and Malignant Lesions of Oral Cavity. To determine the role of Vimentin and E-cadherin expression in oral premalignant and malignant lesions, 68 histopathologically confirmed cases of premalignant and malignant oral cavity lesions were enrolled. Biopsy specimens were taken from the lesions of all cases and subjected to immunohistochemical evaluation of E-cadherin and Vimentin expression. The relationships between the expression of these markers and specific clinicopathological features were analyzed. Out of 68 cases, 28 showed high vimentin expression (3+ and 4+ grade) and 40 showed low vimentin expression (1+ and 2+ grade). Twenty cases out of 68 presented with high E-cadherin expression (3+ and 4+) and the remaining 48 with low expression (1+ and 2+). Smoking and tobacco chewing showed no significant association with the expression of either marker. In this study, all 28 patients (100%) with high vimentin expression had malignant lesions and 17 (60.7%) presented with metastatic lymph nodes. Out of 20 patients with high E-cadherin expression, 8 (40.0%) had malignant lesions, 12 (60.0%) had premalignant lesions, and 4 (20%) showed nodal metastasis. As tumor stage (TNM) progressed, vimentin expression increased and E-cadherin expression decreased, and vice versa. We concluded that increased vimentin and decreased E-cadherin expression in oral cancers are associated with metastasis and disease progression in terms of upstaging of disease. Cellular expression of vimentin and E-cadherin may be used for early diagnosis of disease. abstract_id: PUBMED:35257930 Comparative study of effectiveness of colposcopic examination versus visual examination for determining the biopsy site of potentially premalignant oral epithelial lesions. Diagnosis of oral malignant and potentially premalignant oral epithelial lesions (PPOELs) cannot be based solely on clinical findings. Tissue biopsy with histopathologic examination remains the gold standard in diagnosis. Selection of a representative biopsy site becomes essential to arrive at an early and precise diagnosis, which substantially reduces the incidence of morbidity and mortality from oral cancer. The site for biopsy, however, is always a subjective choice that sometimes raises doubts about its representativeness. Among various techniques developed to complement clinical examination and facilitate early diagnosis, colposcopy can be a simple, reliable diagnostic method available for the selection of the most appropriate biopsy site based on the vascular patterns. Hence, this study has been planned to assess the role of colposcopic examination in the selection of the most representative specimen for histopathologic examination from the oral lesions. abstract_id: PUBMED:28383824 Implication for second primary cancer from visible oral and oropharyngeal premalignant lesions in betel-nut chewing related oral cancer. Background: Visible oral and oropharyngeal premalignant lesions may be used to monitor for a second primary oral cancer. To control for bias, we focused on the visible oral and oropharyngeal premalignant lesions of patients with oral cancer with a positive betel-nut chewing habit. Visible oral and oropharyngeal premalignant lesions that can predict second primary oral cancers were studied.
Methods: Nine hundred ninety-seven patients with positive betel-nut chewing habits and oral cancer were enrolled in this retrospective cohort study. We analyzed the relevance of their visible oral and oropharyngeal premalignant lesion incidence and relative clinicopathological variables to the development of a second primary oral cancer. Results: Second primary oral cancer risk was significantly higher in patients with positive visible oral and oropharyngeal premalignant lesions (P < .0001), especially in younger patients (P = .0023; ≤40 years: adjusted odds ratio [OR] 2.66; 40-60 years: adjusted OR 2.61). The risk associated with heterogeneous leukoplakia (adjusted OR 2.17) was higher than with homogeneous leukoplakia. Conclusion: The predictive value and practicality of visible oral and oropharyngeal premalignant lesions make them a potentially valuable marker in follow-ups of patients with a positive betel-nut chewing habit and oral cancer, especially young patients with heterogeneous leukoplakia. Answer: The diagnosis of oral malignant and potentially premalignant oral epithelial lesions (PPOELs) cannot be based solely on clinical findings, and tissue biopsy with histopathologic examination is considered the gold standard for diagnosis (PUBMED:35257930). However, the selection of a representative biopsy site is crucial for an early and precise diagnosis, which can substantially reduce the incidence of morbidity and mortality from oral cancer (PUBMED:35257930). The site for biopsy is a subjective choice that sometimes raises doubts about its representativeness (PUBMED:35257930). Techniques like colposcopy have been developed to complement clinical examination and facilitate early diagnosis by selecting the most appropriate biopsy site based on vascular patterns (PUBMED:35257930). Therefore, while a biopsy is reliable for diagnosing oral premalignant lesions, the effectiveness of the biopsy can be influenced by the accuracy of the site selection, and adjunctive methods like colposcopy may enhance the reliability of the biopsy (PUBMED:35257930).
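The adjusted odds ratios quoted in the last abstract (e.g., 2.66, 2.61, 2.17) come from multivariable models, but the basic quantity can be illustrated with the crude odds ratio from a 2×2 table. The counts below are hypothetical and chosen only to show the arithmetic; they are not data from PUBMED:28383824.

\[
\mathrm{OR} = \frac{a/b}{c/d} = \frac{a\,d}{b\,c}, \qquad
\text{e.g. } a=30,\; b=70,\; c=15,\; d=85 \;\Rightarrow\; \mathrm{OR} = \frac{30 \times 85}{70 \times 15} \approx 2.4,
\]

where, in this hypothetical layout, a and b are patients with visible premalignant lesions who did and did not develop a second primary cancer, and c and d are the corresponding counts in patients without such lesions; an adjusted OR is the analogous quantity after controlling for covariates in a regression model.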
Instruction: Does delivery improve maternal condition in the respiratory-compromised gravida? Abstracts: abstract_id: PUBMED:9464731 Does delivery improve maternal condition in the respiratory-compromised gravida? Objective: To describe the effect of delivery on respiratory status and outcome in the respiratory-compromised pregnant woman. Methods: During 1990-1994, 10 patients requiring intubation for respiratory compromise who delivered during ventilatory support were identified by International Classification of Diseases, Ninth Revision codes. Charts were reviewed retrospectively for cardiorespiratory variables and outcome. Results: Pneumonia led to intubation in all but one case. The onset of labor was spontaneous in eight. Three were delivered by cesarean. Mechanical ventilation was used for a median (range) of 7 (2-22) days in surviving patients. Fraction of inspired oxygen requirements decreased an average of 28% by 24 hours after delivery. Positive end-expiratory pressure requirements remained unaltered. Surviving patients remained intubated for a median (range) of 2.6 (1-19) days postpartum. Three women died, all after vaginal delivery (days 4-14). Conclusion: Delivery of respiratory-compromised gravidas resulted in a 28% reduction in fraction of inspired oxygen requirement within 24 hours after delivery. Although most patients were then able to be maintained below critical fraction of inspired oxygen requirement levels (under 0.6), dramatic improvement in overall respiratory function was not observed uniformly. Given the limited benefit of delivery on maternal oxygenation, along with the inherent risks of labor induction in this critically ill population, caution should be exercised in initiating the induction process electively. abstract_id: PUBMED:17399980 Temporizing treatment for the respiratory-compromised gravida: an observational study of maternal and neonatal outcome. Acute lung disease may originate in pregnancy because of the pregnancy itself or because of an intercurrent etiology. The purpose of this study was to describe the effect of prolonged antepartum mechanical ventilatory support on the mother and the neonate when the strategy was to prolong the pregnancy rather than deliver preterm. Among 72 312 parturients over eight years, three gravidae required mechanical ventilation 12-48 h after admission for different conditions, 45-91 days before delivery. Gestational age at intubation was 21-28 weeks. Appropriate analgesia, broad-spectrum antibiotics, vasopressors and betamethasone for fetal lung maturity were used in all cases. None received tocolysis. Despite uterine distension, respiratory support provided adequate oxygenation and FiO2 could be maintained below critical levels, obviating the need for early delivery. All women survived, were weaned from ventilatory support, discharged, and delivered healthy neonates at term. Mode of delivery was dictated by obstetrical indicators only. All five infants (two sets of twins) are healthy at 12-36 months with appropriate developmental milestones. We conclude that when the maternal condition is amenable to therapy, and given the risks of labor induction and of prematurity, there is only limited benefit of delivery while on mechanical ventilation. abstract_id: PUBMED:36073129 Developing and prioritising strategies to improve the implementation of maternal healthcare guidelines in South Africa: The nominal group technique.
Background: In South Africa, maternal healthcare guidelines are distributed to primary health care (PHC) facility for midwives to refer and implement during maternal healthcare services. Different training was offered for the use of maternal care guidelines. However, poor adherence and poor implementation of guidelines were discovered. Aim: This study aimed to develop and prioritise strategies to improve the implementation of maternal healthcare guidelines at PHC facilities of Limpopo province, South Africa. Method: Strengths, weaknesses, opportunities and threats analysis and its matrix together with the nominal group technique were used to develop the current strategy. Midwives, maternal, assistant and operational managers from PHC facilities of the two selected district of the Limpopo province were selected. Criterion-based purposive sampling was used to select participants. Data collection and analysis involved the four steps used in the nominal group technique. Results: Strategies related to strengths and weaknesses such as human resources, maternal health services and knowledge deficit were identified. Opportunities and threats such as availability of guidelines, community involvement and quality assurance as factors that influenced the provision of maternal healthcare services were identified. Conclusion: Researchers formulated actions that could build on identified strengths, overcome weaknesses such as human resources, explore opportunities and mitigate the threats such as quality assurance. Implementation of the developed strategies might lead to the reduction of the maternal mortality rate. abstract_id: PUBMED:28161686 Acute abdomen in a patient with paraesophageal hernia, resulting in acute compromised respiratory function: A case report. Introduction: We present a case of acute abdomen, causing increased intra-abdominal pressure, leading to further herniation of an existing paraesophageal hernia, and consequently acute compromised respiratory function. This acute respiratory complication to a paraesophageal hernia has not previously been reported. Presentation Of Case: We present a case of a 75-year-old female who was acutely admitted with stridor. The patient was known to have a paraesophageal hernia monitored using watchful waiting, and dyspnoea. The patient's condition deteriorated, leading to intubation. Diagnostic imaging revealed a paraesophageal hernia pressing onto the trachea as well as appendicitis and ileus. Surgery confirmed perforated appendicitis, peritonitis, and ileus causing high intra-abdominal pressure, resulting in further herniation of the paraesophageal hernia as a cause for acute compromised respiratory function. Appendectomy and gastropexy were performed. The patient was later discharged to rehabilitation. Discussion: Patients with pulmonary symptoms caused by a paraesophageal hernia, especially patients with sizeable hernias, could potentially be in greater risk of severe airway affection if complicated by acute abdomen. These patients could benefit from elective hernia repair, rather than watchful waiting, as it would eliminate pulmonary symptoms and prevent similar cases. Patients monitored using watchful waiting should be informed that acute abdomen could cause acute compromised respiratory function. Conclusion: Any case of acute abdomen causing high intra-abdominal pressure could potentially cause further herniation of an existing paraesophageal hernia, resulting in acute compromised respiratory function. 
In patients known to have a paraesophageal hernia, similar cases should be suspected if the patient presents with acute breathing difficulties. abstract_id: PUBMED:36220591 Effect of compromised skin barrier on delivery of diclofenac sodium from brand and generic formulations via microneedles and iontophoresis. Application of drugs on skin with compromised barrier can significantly alter permeation of drugs with the possibility of increased adverse side effects or even toxicity. In this study, we tested in vitro delivery of diclofenac sodium from marketed brand and generic formulations across normal and compromised skin using microneedles and iontophoresis, alone and in combination. Ten tape strips on dermatomed human skin were used to create a compromised skin model, as demonstrated by changes in skin resistance and transepidermal water loss. Histology studies further confirmed creation of a compromised skin barrier. There was no significant difference between brand and generic formulations for delivery of diclofenac sodium into and across normal and compromised skin. Compromised skin showed higher total delivery (µg/sq.cm) of diclofenac sodium for all groups - microneedles (brand: 79.45 ± 8.81, generic: 92.15 ± 8.63), iontophoresis (brand: 233.13 ± 8.32, generic: 242.07 ± 11.17), combination (brand: 186.88 ± 6.76, generic: 193.8 ± 5.69) as compared to intact normal skin for same groups, microneedles (brand: 21.83 ± 1.96, generic: 20.38 ± 0.91), iontophoresis (brand: 149.78 ± 18.43, generic: 145.53 ± 12.61), and combination (brand: 80.97 ± 9.86, generic: 70.76 ± 6.56). These results indicate the effect of barrier integrity on delivery of diclofenac sodium which suggests increased absorption and systemic exposure of the drug across skin with compromised skin barrier. abstract_id: PUBMED:20672908 Maternal ethnicity influences on neonatal respiratory outcomes after antenatal corticosteroid use for anticipated preterm delivery. Objective: To explore the influence of maternal ethnicity on neonatal outcomes after antenatal corticosteroid administration. Methods: A retrospective review of ethnicity, maternal factors, and neonatal birth outcomes was performed for preterm births at a single institution. Cases were limited to women who received antenatal corticosteroids. The impact of ethnicity on specific neonatal respiratory outcomes and mortality was analyzed by bivariate comparisons and by logistic regression analysis. Results: Complete ethnicity data were obtained for 548 women. Controlling for gestational age at delivery, diabetes, whether the subject completed a course of steroids, and the dosing of the steroids, logistic regression demonstrated that ethnicity was independently associated with respiratory distress syndrome (compared to Caucasians: African-Americans OR 0.49 (95% CI 0.29-0.85); Filipinos OR 0.45 (95% CI 0.21-0.96). Conclusions: Ethnicity is independently associated with neonatal respiratory outcomes after antenatal corticosteroid use. Perhaps individualized dosing of antenatal corticosteroids is needed to further improve neonatal outcomes. abstract_id: PUBMED:31768744 Does parity affect pregnancy outcomes in the elderly gravida? Purpose: To identify whether older primiparas have more complications than do women who continue to deliver children into their late reproductive age. Patients of at least 35 years of age at delivery were included. Within this cohort, data from primiparous and multiparous women were compared. 
Methods: This retrospective study was based on electronic medical records from a single academic center, with more than 7000 deliveries annually. The impact of parity on maternal complications was assessed using a multivariate logistic regression model that adjusted for baseline maternal characteristics and medical history. Results: During the study period, there were 54 283 deliveries in our medical center. A total of 13,982 (25.7%) patients were at least 35 years old at delivery. The rate of twin pregnancy was higher in the primiparous group (1.9%) as compared to the multiparous group (0.8%, 95% CI 0.30-0.64, P < 0.001), as was the incidence of delivery prior to 34 weeks (6.1% of the primiparas versus 2.9% of the multiparas, P < 0.001, OR 2.16, 95% CI 1.75-2.68); hypertensive disorders (3.9% versus 1.7%, P < 0.001, 95% CI 0.33-0.57); diabetes (4.6% versus 3.2%, P = 0.003, 95% CI 0.55-0.88); and IUGR (10.5% versus 4.7%, P < 0.001, 95% CI 0.35-0.49), respectively. The increased risk for pre-term delivery, hypertensive disorders, diabetes, and IUGR was maintained after logistic regression analysis. Conclusion: We found that pregnancy complications typical of older parous women are significantly more common among primiparas, indicating that not only older age, but also having a first child relatively late in the reproductive period, contributes to adverse pregnancy outcomes. abstract_id: PUBMED:28568739 MATERNAL INHERITANCE OF CONDITION AND CLUTCH SIZE IN THE COLLARED FLYCATCHER. Maternal effects may strongly influence evolutionary response to natural selection but they have been little studied in the wild. We use a novel combination of experimental and statistical methods to estimate maternal effects on condition and clutch size in the collared flycatcher, where we define "condition" to be the nongenetic component of clutch size. We found evidence of two maternal effects. The first (m) was the negative effect of mother's clutch size on daughter's condition, when mother's condition was held constant. The second (M) was the positive effect of mother's condition on daughter's condition, when mother's clutch size was held constant. These two effects oppose one another because mothers in good condition also lay many eggs. The maternal effects were large: Experimentally adding an egg to a mother's nest reduced clutch sizes of her daughters by 1/4 egg (i.e., m = -0.25). Measured degree of resemblance between mother and daughter clutch sizes yielded M = 0.43. The results weakly support the presence of heritable genetic variation in clutch size: additive genetic variance/total phenotypic variance = 0.33. This estimate was highly variable probably because, as we show, mother-daughter resemblance may depend hardly at all on the amount of genetic variance when maternal effects are present. Daughter-mother regression (a standard method for estimating heritability) is consequently a poor guide to the amount of genetic variance in clutch size. Our results emphasize the value of combining field experiments with observations for studying inheritance. abstract_id: PUBMED:38269123 The Combined Influence of Maternal Medical Conditions on the Risk of Primary Cesarean Delivery. Background Common maternal medical comorbidities such as hypertensive disorders, diabetes, tobacco use, and extremes of maternal age, body mass index, and gestational weight gain are known individually to influence the rate of cesarean delivery.
Numerous studies have estimated the risk of individual conditions on cesarean delivery. Objective To examine the risk for primary cesarean delivery in women with multiple maternal medical comorbidities to determine the cumulative risk they pose on mode of delivery. Study Design In this population-based retrospective cohort study, we analyzed data from Ohio live birth records from 2006 to 2015 to estimate the influence of individual and combinations of maternal comorbidities on rates of singleton primary cesarean delivery. The exposures were individual and combinations of maternal medical conditions (chronic hypertension [CHTN], gestational hypertension, pregestational diabetes, gestational diabetes, tobacco use, advanced maternal age, and maternal obesity) and outcomes were rates and adjusted relative risk (aRR) of primary cesarean delivery. Results There were 1,463,506 live births in Ohio during the study period, of which 882,423 (60.3%) had one or more maternal medical condition, and of those 243,112 (27.6%) had primary cesarean delivery. The range of rates and aRR range of primary cesarean delivery were 13.9 to 29.3% (aRR 0.78-1.68) in singleton pregnancies with a single medical condition, and this increased to 21.9 to 48.6% (aRR 1.34-3.87) in pregnancies complicated by multiple medical comorbidities. The highest risk for primary cesarean occurred in advanced maternal age, obese women with pregestational diabetes, and CHTN. Conclusion A greater number of maternal medical comorbidities during pregnancy is associated with increasing cumulative risk of primary cesarean delivery. These data may be useful in counseling patients on risk of cesarean during pregnancy. abstract_id: PUBMED:36149126 An observational study on maternal mortality and maternal near miss in a selected facility of West Bengal. An observational study was performed at Purba Medinipur District Hospital, West Bengal, from April 1, 2018, to December 31, 2020, with an aim to find out the magnitude of maternal mortality and near miss cases and to assess the utilization of available maternal health care services by the deceased women and near miss cases. Result showed 4.5% women developed potentially life-threatening condition (PLTC) of which 21% women developed LTC. Maternal Near Miss (MNM) ratio was 9.46/1000 live birth and the MNM-to-Maternal Mortality ratio was 8.3:1 and the leading causes of MNM and maternal death were hemorrhage, pregnancy induced hypertension/eclampsia. The utilization of maternal health-care services revealed that there is a scope to increase the service delivery. Study finding indicates that health-care programs need to enhance the existing efforts to improve timely health seeking behavior of women. Answer: Delivery in respiratory-compromised gravidas can result in a reduction in the fraction of inspired oxygen requirement within 24 hours after delivery. Specifically, a study found a 28% decrease in the fraction of inspired oxygen requirements on average after delivery. However, positive end-expiratory pressure requirements remained unchanged, and not all patients experienced a dramatic improvement in overall respiratory function. The study concluded that while most patients could be maintained below critical fraction of inspired oxygen requirement levels after delivery, the benefit of delivery on maternal oxygenation was limited. 
Therefore, caution was advised in electively initiating the induction process in this critically ill population due to the limited benefit and inherent risks associated with labor induction (PUBMED:9464731). Another study observed that when the maternal condition is amenable to therapy, and given the risks associated with labor induction and prematurity, there is only limited benefit to delivery while on mechanical ventilation. In cases where prolonged antepartum mechanical ventilatory support was used with the strategy to prolong pregnancy rather than deliver preterm, all women survived, were weaned from ventilatory support, discharged, and delivered healthy neonates at term. This suggests that in certain cases, temporizing treatment and delaying delivery can be beneficial for both maternal and neonatal outcomes (PUBMED:17399980). In summary, while delivery can lead to some improvement in the respiratory status of the compromised gravida, the overall benefit may be limited, and the decision to deliver should be carefully weighed against the potential risks and the possibility of managing the condition with continued medical support.
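As a rough worked illustration of what the 28% average reduction in FiO2 requirement reported in PUBMED:9464731 means alongside the 0.6 critical threshold mentioned in the same abstract (the pre-delivery value of 0.80 below is an assumed example, not a figure from the study):

\[
\mathrm{FiO_2^{post}} \approx \mathrm{FiO_2^{pre}} \times (1 - 0.28) = 0.80 \times 0.72 \approx 0.58 < 0.6
\]

That is, a woman ventilated on an assumed FiO2 of 0.80 before delivery would, on the reported average, drop just under the 0.6 level described as critical, which matches the abstract's statement that most patients could be kept below that threshold even though overall respiratory function did not improve dramatically.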
Instruction: Does esophageal dysfunction affect the course of treadmill stress test in patients with recurrent angina-like chest pain? Abstracts: abstract_id: PUBMED:21178904 Does esophageal dysfunction affect the course of treadmill stress test in patients with recurrent angina-like chest pain? Introduction: cardioesophageal reflex may increase severity of chest pain and signs of myocardial ischemia on electrocardiogram (ECG), both in patients with and without significant coronary artery stenosis. Objectives: the aim of the study was to evaluate the relationships between esophageal pH and pressure and clinical and electrocardiographic signs of myocardial ischemia. Patients And Methods: in 129 consecutive patients with recurrent chest pain, 77 without significant coronary artery lesions in coronary angiography and 52 with myocardial ischemia, panendoscopy, pH-metry, manometry, and treadmill stress test were performed. Results: The prevalence of esophageal disorders was similar in patients with and without significant coronary artery narrowing. Subjects with significant ST interval depression in the stress test had a higher rate of simultaneous esophageal contractions. There were no differences in the results of the treadmill test between patients with and without esophageal disorders. Forty percent of patients with significant coronary artery lesions, who had to stop the test because of chest pain, did not present significant ST interval depression on ECG; however, such depression was observed in 60% of patients with normal coronary angiography. Patients with exercise-provoked chest pain had more pronounced abnormalities in esophageal pH, together with the amplitude and coordination of esophageal contractions. Demographic and clinical factors associated with chest pain and changes in exercise ECG were not evaluated. Conclusions: esophageal disorders are an important cause of chest pain, potentially affecting the results of the treadmill stress test. However, further research is needed to determine the predictors of the cardioesophageal loop activity. abstract_id: PUBMED:20845510 Exercise-provoked esophageal motility disorder in patients with recurrent chest pain. Aim: To investigate the relationship between exercise-provoked esophageal motility disorders and the prognosis for patients with chest pain. Methods: The study involved 63 subjects with recurrent angina-like chest pain non-responsive to empirical therapy with proton pump inhibitor (PPI). In all, a coronary artery angiography, panendoscopy, 24-h esophageal pH-metry and manometry, as well as a treadmill stress test with simultaneous esophageal pH-metry and manometry monitoring, were performed. Thirty-five subjects had no significant coronary artery lesions, and 28 had more than 50% coronary artery narrowing. In patients with hypertensive esophageal motility disorders, a calcium antagonist was recommended. The average follow-up period was 977 ± 249 d. Results: The prevalence of esophageal disorders, such as gastroesophageal reflux or diffuse esophageal spasm, was similar in patients both with and without significant coronary artery narrowing. Exercise prompted esophageal motility disorders, such as a decrease in the percentage of peristaltic and effective contractions and their amplitude, as well as an increase in the percentage of simultaneous and non-effective contractions. In 14 (22%) patients the percentage of simultaneous contractions during the treadmill stress test exceeded the value of 55%. 
Using Kaplan-Meier analysis and the proportional hazard Cox regression model, it was shown that the administration of a calcium channel antagonist in patients with such an esophageal motility disorder significantly decreased the risk of hospitalization as a result of a suspicion of acute coronary syndrome after the 2.7-year follow-up period. Conclusion: In patients with chest pain non-responsive to PPIs, a diagnosis of exercise-provoked esophageal spasm may have the effect of lowering the risk of the next hospitalization. abstract_id: PUBMED:15328486 Suppression of gastric acid production may improve the course of angina pectoris and the results of treadmill stress test in patients with coronary artery disease. Background: Coronary artery disease (CAD) and gastro-esophageal reflux disease (GERD) often coexist in the same patients. The aim of this study was to estimate the effects of gastric acid output suppression with rabeprazole on the course of angina pectoris and the results of the treadmill stress test in patients with CAD. Material/methods: We studied 34 patients with stable angina pectoris. In all subjects a medical history, a physical examination, and a stress test were performed at the beginning of the study and after two weeks of add-on rabeprazole therapy (20 mg b.i.d.). Results: Rabeprazole therapy significantly improved the outcome of the stress test in 27 patients (79%), prolonging mean stress exercise time (449+/-147 vs. 489+/-156 s, p=0.027) and the exercise time to maximum ST interval depression (360+/-167 vs. 467+/-148 s, p=0.001), and also decreasing the ST interval depression delta (1.9+/-1.1 vs. 1.5+/-0.9; p=0.013). Conclusions: In 79% of our study subjects, rabeprazole improved stress test results in CAD patients, which implies that at least some of their symptoms were related to GERD. A proton pump inhibitor exerted a favorable effect on the frequency of angina-like chest pain and the results of the treadmill stress test. abstract_id: PUBMED:20818814 Exertional esophageal pH-metry and manometry in recurrent chest pain. Aim: To investigate the diagnostic efficacy of 24-h and exertional esophageal pH-metry and manometry in patients with recurrent chest pain. Methods: The study included 111 patients (54% male) with recurrent angina-like chest pain, non-responsive to therapy with proton pump inhibitors. Sixty-five (59%) had non-obstructive lesions in coronary artery angiography, and in 46 (41%) significant coronary artery narrowing was found. In all patients, 24-h esophageal pH-metry and manometry, and treadmill stress tests with simultaneous esophageal pH-metry and manometry monitoring were performed. During the 24-h examination the percentage of spontaneous chest pain (sCP) episodes associated with acid reflux or dysmotility (symptom index, SI) was calculated. Patients with SI > 50% for acid gastroesophageal reflux (GER) were classified as having GER-related sCP. The remaining symptomatic individuals were determined as having non-GER-related sCP. During the stress test, the occurrence of chest pain, episodes of esophageal acidification (pH < 4 for 10 s) and esophageal spasm with more than 55% simultaneous contractions (exercise-provoked esophageal spasm or EPES) were noted. Results: Sixty-eight (61%) individuals reported sCP during 24-h esophageal function monitoring. Eleven of these (16%) were classified as having GER-related sCP and 53/68 (84%) as having non-GER-related sCP. Exercise-provoked chest pain during the stress test occurred in 13/111 (12%) subjects.
In order to compare the clinical usefulness of full 24-h esophageal function monitoring with monitoring limited to the treadmill stress test, the standard parameters of diagnostic test evaluation were determined. The occurrence of GER-related or non-GER-related sCP was taken as the "gold standard". Afterwards, accuracy, sensitivity and specificity were calculated. These parameters expressed the prediction of GER-related or non-GER-related sCP occurrence by the presence of chest pain, esophageal acidification and EPES. Accuracy, sensitivity and specificity of chest pain during the stress test predicting any sCP occurrence were 28%, 35% and 80%, respectively; predicting GER-related sCP they were 42%, 0% and 83%, respectively; and predicting non-GER-related sCP they were 57%, 36% and 83%, respectively. Similar values were obtained for exercise-related acidification with pH < 4 for longer than 10 s in the prediction of GER-related sCP (44%, 36% and 92%, respectively) and for EPES in relation to non-GER-related sCP (48%, 23% and 84%, respectively). Conclusion: The presence of chest pain, esophageal acidification and EPES had greater than 80% specificity to exclude the GER-related and non-GER-related causes of recurrent chest pain. abstract_id: PUBMED:7234836 Esophageal disease in patients with angina-like chest pain. To assess the frequency of esophageal disease in patients with angina-like chest pain and normal coronary arteriograms, 16 patients underwent esophageal manometric studies, acid perfusion (Bernstein) tests, upper gastrointestinal series and cholecystograms. Five patients had evidence of esophageal disease. Three of the five had manometric criteria of increased nonperistalsis; one patient had idiopathic diffuse esophageal spasm, while the other two patients had acid infusion tests which reproduced the presenting chest pain, and their manometric findings were regarded as a motor disturbance of the esophagus secondary to chronic gastroesophageal reflux. The remaining two patients had symptomatic gastroesophageal reflux--one with an acid infusion test positive for pressure-like chest pain and the other with a decreased resting lower esophageal sphincter pressure associated with reflux of barium on upper gastrointestinal series. All five patients had improvement of symptoms during a follow-up period of seven to 17 months. Manometric studies in 18 normal subjects of similar age revealed no evidence of esophageal disease. Since esophageal disorders capable of causing chest pain were diagnosed in one-third of the patients (5/16 or 31%), it is suggested that investigations for esophageal disease, specifically directed at gastroesophageal reflux-induced abnormalities and idiopathic diffuse esophageal spasm, be included in the evaluation of patients with angina-like chest pain of uncertain origin. abstract_id: PUBMED:17689732 The effect of double dose of omeprazole on the course of angina pectoris and treadmill stress test in patients with coronary artery disease--a randomised, double-blind, placebo controlled, crossover trial. Background: Gastroesophageal reflux (GER) and coronary artery disease (CAD) frequently overlap, making the proper diagnosis of chest pain more difficult. GER symptoms may mimic anginal chest pain, and oesophageal acidification may induce myocardial ischaemia both at rest and during exertion. Increasing oesophageal pH should prevent these conditions.
Aim: To estimate the effect of a double dose of omeprazole on the course of angina pectoris and the treadmill stress test in patients with coronary artery disease (CAD), using a double-blind, crossover, randomised, placebo-controlled study design. Methods: We studied 48 patients with angina pectoris symptoms and significant narrowing of coronary vessels on angiography. After a baseline examination and treadmill stress test, all subjects were randomised to treatment with either omeprazole (20 mg b.i.d.) or placebo for 14 days, using a double-blind, crossover, placebo-controlled design. Results: Seventeen (35%) subjects reported a more than 50% decrease in symptom severity after omeprazole and 6 (12%) after placebo (p=0.01). Omeprazole significantly decreased the number of chest pain episodes and the number of nitroglycerin doses taken in the second week of both study phases, as well as the percentage of subjects with significant ST interval depression during the stress test (64% vs. 73%, p<0.05). However, the majority of other stress test parameters (i.e. test duration, DUKE index) improved both after omeprazole and after placebo administration (by 9-38%). Conclusion: A double dose of omeprazole significantly decreased symptom severity in 35% of patients with CAD, as well as the frequency of some electrocardiographic signs of myocardial ischaemia during the stress test. abstract_id: PUBMED:16808312 The effect of hypolipemic therapy with diet and simvastatin on the course of angina pectoris and the results of exercise stress test in patients with coronary artery disease Unlabelled: Statins act in multiple ways in the prevention of atherosclerosis and decrease overall and cardiovascular mortality. The aim of this study was to estimate the effect of six months of hypolipemic therapy with diet and 20 mg of simvastatin on the clinical intensity of angina pectoris and the course of the exercise stress test. Patients And Methods: We studied 44 patients with typical anginal chest pain. In all patients, blood sampling and a treadmill stress test were performed, after which a hypolipemic diet and simvastatin 20 mg were recommended. After four weeks and six months of treatment, clinical assessment and the exercise test were repeated. Results: After four weeks and after the six-month observation period, decreases in total and LDL cholesterol, triglycerides and fibrinogen were found. Moreover, we observed improvement in the frequency of anginal symptoms, their intensity in the CCS classification and the number of nitroglycerin tablets taken per week. The course of the exercise test also improved: the percentage of patients in whom the stress test was stopped because of chest pain decreased, the duration of chest pain after exercise cessation was shorter, the percentage of patients with significant ST interval depression diminished, and maximal ST interval depression as well as the duration of significant ST interval depression also decreased. Despite the improvement in these parameters, after six months of simvastatin therapy the percentage of patients with a Duke treadmill score indicating intermediate cardiovascular risk (between -10 and +4) increased.
In conclusion, therapy with a hypolipemic diet and simvastatin decreased plasma lipid and fibrinogen levels and improved the course of angina pectoris and the exercise stress test after as little as four weeks, suggesting that it is effective not only as a treatment improving atherosclerosis risk factors but also as a prompt and clinically important means of ameliorating impaired coronary reserve. abstract_id: PUBMED:8373833 Detection of abnormal esophageal motility and gastroesophageal reflux in patients with angina-like chest pain by a radionuclide esophageal transit test. A modified radionuclide esophageal transit test including the esophageal mean transit time (MTT), residual fraction (RF) and retrograde index (RI) was carried out to evaluate esophageal motility and to detect gastroesophageal reflux in three groups: (A) 25 patients (13 males, 12 females, age: 45-65 years) with angina-like chest pain but a normal coronary angiogram; (B) 31 patients (14 males, 17 females, age: 42-63 years) with coronary artery disease (CAD) demonstrated by abnormal coronary angiographic findings and intractable angina-like chest pain even after treatment; and (C) 25 normal volunteers (10 males, 15 females, age: 39-67 years). In groups A and B abnormal results were found in 60% (15/25) and 39% (12/31) for MTT; in 28% (7/25) and 39% (12/31) for RF and in 36% (9/25) and 58% (18/31) for RI (i.e., higher than the mean +/- 2 SD of normal values; MTT: 5.72 +/- SD 0.91, RF: 0.129 +/- SD 0.057, RI: 0.055 +/- SD 0.054), respectively. We conclude that the causes of non-cardiac chest pain in group A patients with normal coronary arteries were primarily esophageal dysmotility or spasm (prolonged MTT). However, in group B patients, decreased pressure of the lower esophageal sphincter due to the side effects of anti-angina drugs induced delayed clearance of the esophagus (higher RF) or gastroesophageal reflux (higher RI). abstract_id: PUBMED:6705641 Coronary sinus lactate estimation and esophageal motor anomalies in angina with normal coronary angiogram. Twenty-three patients (ages 44-82) with angina-like chest pain with a negative stress test and a normal coronary angiogram were studied for myocardial lactate metabolism during atrial pacing and for esophageal motor function with manometric measurements and an acid infusion test. Eight patients had abnormal myocardial lactate metabolism during maximal atrial pacing. Esophageal motor anomalies were observed in six of these eight patients and in nine of the 15 subjects with normal myocardial lactate metabolism. The frequency and type of these anomalies were not different between the two groups of patients. Perfusion of acid into the esophagus reproduced the spontaneous pain syndrome in three patients, independent of any simultaneous motor dysfunction. These results suggest that esophageal motor anomalies must be interpreted with caution in patients with angina-like chest pain before affirming an exclusively esophageal origin of the pain. abstract_id: PUBMED:12236930 Prognostic value of normal myocardial radionuclide scan in patients with positive treadmill test. The purpose of this study was to determine the prognostic value of a normal 201Tl scan in patients with a positive treadmill exercise stress test. We retrospectively analyzed 85 consecutive patients with a positive treadmill exercise test and a normal 201Tl scan who were studied for the diagnosis of coronary artery disease from January 1996 to December 2000.
The pretest probability of coronary artery disease was intermediate or high in all patients. Seventy-eight patients (89%) were followed up for 2.03 years. Forty patients (51%) were men. The average age was 56 years. The cardiac events considered were angina, revascularization, nonfatal myocardial infarction, and cardiac death. During the follow-up period, none of the patients suffered a major cardiac event. Eight patients had episodes of chest pain and one had angina that required revascularization. Patients with an intermediate probability of coronary artery disease before the test and a normal myocardial radionuclide scan had a very good prognosis even with a positive treadmill exercise test. Answer: Yes, esophageal dysfunction can affect the course of treadmill stress tests in patients with recurrent angina-like chest pain. Studies have shown that esophageal disorders are an important cause of chest pain and can potentially affect the results of treadmill stress tests (PUBMED:21178904). Patients with exercise-provoked chest pain have been found to have more pronounced abnormalities in esophageal pH, as well as in the amplitude and coordination of esophageal contractions (PUBMED:21178904). Additionally, exercise-provoked esophageal motility disorders, such as a decrease in the percentage of peristaltic and effective contractions and an increase in the percentage of simultaneous and non-effective contractions, have been observed in patients with recurrent chest pain (PUBMED:20845510). Furthermore, the administration of a calcium channel antagonist in patients with esophageal motility disorders significantly decreased the risk of hospitalization due to suspicion of acute coronary syndrome after a follow-up period (PUBMED:20845510). Suppression of gastric acid production with rabeprazole has also been shown to improve the outcome of stress tests in patients with coronary artery disease (CAD), suggesting that some symptoms may be related to gastroesophageal reflux disease (GERD) (PUBMED:15328486). In summary, esophageal dysfunction, including disorders such as GERD and esophageal motility disorders, can influence the clinical presentation and the results of treadmill stress tests in patients presenting with angina-like chest pain.
Instruction: Do cardiac output and serum lactate levels indicate blood transfusion requirements in anemia of prematurity? Abstracts: abstract_id: PUBMED:8796405 Do cardiac output and serum lactate levels indicate blood transfusion requirements in anemia of prematurity? Background: Whether and when to transfuse in anemia of prematurity is highly controversial. Some authors suggest transfusions simply if the hemoglobin (Hb) level is below a defined normal range. Others propose the use of clinical or laboratory parameters in anemic patients to decide whether to transfuse or not. Hypothesis: A decreasing amount of circulating Hb should cause a compensatory increase in cardiac output (CO) and an increase in arterial serum lactate. Materials And Methods: In 56 anemic preterm infants (not in respiratory or hemodynamic failure) we analyzed CO after the first week of life using a Doppler sonographic method. At the same time serum lactate levels, Hb levels and oxygen saturation were registered. Nineteen of these patients were given a transfusion when they demonstrated clinical signs of anemia, namely tachycardia > 180/min, tachypnea, retractions, apneas and centralization (group 2). The remaining 37 patients were not transfused (group 1). Serum lactate, CO, heart rate (HR), oxygen delivery, respiratory rate, capillary refill and Hb were analyzed in both groups and in group 2 before and 12-24 h after transfusion. Data between groups 1 and 2 and in group 2 before and after transfusion were compared. Results: In the 56 patients studied no linear correlation between Hb and CO or between Hb and serum lactate was found. Nor could any correlation be demonstrated between the other variables studied. Examining the subgroups separately, a negative linear correlation was demonstrated between serum lactate and oxygen delivery in group 2. No other significant correlations were detected. However, when the pre- and post-transfusion data were compared in group 2 (increase of Hb from 9.45 (SD 3.44) to 12.5 (SD 3.8) g/100 ml), the CO decreased from 281.3 (SD 162.6) to 224 (SD 95.7) ml/kg per min (p < 0.01) and serum lactate decreased significantly from 3.23 mmol/l (SD 2.07) before to 1.71 (SD 0.83) after transfusion. Oxygen delivery was 35.8 (+/- 0.19) ml/kg per min in group 1, and 27.8 (+/- 0.05) pre- and 43.4 (+/- 0.07) post-transfusion in group 2 (p < 0.01). Conclusions: CO measurements and serum lactate levels add little information to the decision-making process for blood transfusions, as neither CO nor serum lactate levels correlate with Hb levels in an otherwise asymptomatic population of preterm infants. In infants where the indication for blood transfusion is made based on traditionally accepted clinical criteria, serum lactate is an additional laboratory indicator of impaired oxygenation, as it correlates significantly with oxygen delivery. A significantly lower oxygen delivery in patients in whom blood transfusion is indicated, and the increase in oxygen delivery induced by transfusion, demonstrate the value of these criteria in identifying preterm infants who benefit from transfusion. abstract_id: PUBMED:31313334 Liberal hemoglobin threshold affects cerebral arterial pulsed Doppler and cardiac output, not cerebral tissue oxygenation: a prospective cohort study in anemic preterm infants. Background: Red blood cell (RBC) transfusion is a standard treatment for anemia of prematurity.
Cerebral tissue oxygenation and blood flow velocities improve when a restrictive transfusion threshold is followed, but little is known about the effect of practicing a liberal transfusion threshold on cerebral tissue oxygenation, cerebral blood flow velocities, and cardiac output measurements. Study Design And Methods: A prospective observational study of preterm infants under 32 weeks' gestation who received RBC transfusion. Monitoring was performed immediately before, immediately after, and 24 hours after transfusion. Data obtained included physiologic parameters, cerebral tissue oxygenation index (TOI), anterior and middle cerebral artery pulsed Doppler ultrasound measurements, and cardiac output measurements. Data were analyzed using analysis of variance for repeated measures. Results: Fifty RBC transfusion episodes in 40 preterm infants were monitored. The mean gestational age was 26.72 weeks (±1.6 weeks), and the mean birth weight was 855.25 g (±190.7 g). We did not observe significant changes in cerebral TOI (pretransfusion mean TOI = 70.5 [11.54], immediately after transfusion = 71.38 [12.51], [p = 0.924; 95% confidence interval (CI), -4.64 to 6.39], and 24 hours after transfusion = 75.64 [14.4]; [p = 0.07; 95% CI, -0.37 to 10.65]), cerebral fractional tissue oxygen extraction (pretransfusion = 0.25 [0.12], immediately after transfusion = 0.24 [0.13], and 24 hours after transfusion = 0.20 [0.15]), cerebral resistive index, cerebral pulsatility index, or right ventricular output. Statistically significant changes were observed immediately after transfusion in peak systolic velocity, end-diastolic velocity and time-averaged maximum velocity in the cerebral arterial circulation. Left ventricular output (pretransfusion = 374.32 mL/kg/min, immediately after transfusion = 346.67 mL/kg/min [p = 0.000; 95% CI, -39.61 to -15.68], and 24 hours after transfusion = 361.17 mL/kg/min [p = 0.027; 95% CI, -25.11 to -1.18]) and heart rate (pretransfusion = 163.37 [9.49], immediately after transfusion = 157.29 [10.2] [p = 0.000; 95% CI, -8.96 to -3.20], and 24 hours after transfusion = 160.40 [10.4] [p = 0.041; 95% CI, -5.85 to -0.09]) showed statistically significant changes throughout the monitoring period. Conclusion: Our findings show that practicing liberal transfusion thresholds did not improve cerebral TOI in preterm infants who have mild anemia, but it did improve the compensatory response in cerebral arterial blood flow and cardiac output. abstract_id: PUBMED:28783992 Evaluation of serum ischemia-modified albumin levels in anemia of prematurity. Purpose: Ischemia-modified albumin (IMA) is used to determine tissue hypoxia. We aimed to evaluate the serum IMA levels in preterm infants requiring transfusion due to anemia of prematurity, a clinical condition that can cause tissue hypoxia. Materials And Methods: This prospective study was performed in Etlik Zubeyde Hanim Hospital, Turkey. Preterm infants with birth weight less than 1500 g and born between 25 and 32 weeks were included during assessment for anemia of prematurity. The transfused infants with anemia of prematurity formed the "transfusion group"; the control group consisted of gender-, gestational- and postnatal age-matched infants without a transfusion requirement. Serum samples of the control group and pre-transfusion and post-transfusion samples of the transfusion group were analyzed for IMA (ABS unit).
Serum IMA levels were compared between the control group and the pre-transfusion samples of the transfusion group, and were also evaluated for the significance of change after transfusion. Results: Sixty-two infants were included (transfusion group: 31, control group: 31). The pretransfusion serum IMA levels were higher than those of infants in the control group [ABS unit; transfusion group pre-transfusion: 1.00 (0.76-1.09) and control group: 0.81 (0.52-1.04); p = .03]. Serum IMA levels decreased significantly to 0.79 (0.59-0.95) after transfusion; p = .007. Infants with hematocrit higher than 30% had lower IMA levels [0.69 (0.54-0.96)] than infants with lower hematocrit [0.96 (0.75-1.05)]; p = .002. Conclusions: Clinicians may bear in mind that serum IMA levels could be utilized as a marker in deciding on erythrocyte transfusion in premature anemia. abstract_id: PUBMED:11280639 The value of capillary whole blood lactate for blood transfusion requirements in anaemia of prematurity. Objective: To evaluate the usefulness of blood lactate as an indication for blood transfusion in anaemia of prematurity by means of a study protocol which considers the site of blood sampling and the repeatability of lactate measurements. Design: Prospective clinical study. Setting: Multidisciplinary, neonatal-paediatric intensive care unit of a non-university, teaching children's hospital. Patients And Methods: Comparison of pre- and 48-h post-transfusion capillary whole blood lactate in 18 anaemic premature babies. In 30 neonates the agreement between capillary and arterial lactate was analysed using the Bland-Altman plot. In 30 stable premature infants four capillary lactate measurements were carried out within 24 h and analysed with regard to variability (coefficient of variation (CV); association between SD and mean) and to establish normal values. Results: In the transfused infants, haematocrit increased from 23 (SD 3)% to 37 (SD 3)%. Mean lactate decreased from 2.5 (SD 1.0) to 1.7 (SD 0.5) mmol/l (p = 0.003). Pretransfusion lactate did not correlate with pre-transfusion haematocrit, heart rate, respiratory rate, number of apnoeas/bradycardias or weight gain (multiple regression). The mean difference between capillary and arterial lactate was 0.17 (SD 0.24) mmol/l and the 95% confidence interval (CI) was -0.31 to 0.65 mmol/l. The CV of repetitive measurements was 19.8 (SD 9.8)% and the SD correlated positively with mean lactate values (p = 0.001); the 95% CI (normal range for premature infants) was 1.56-1.90 mmol/l. Conclusions: Capillary whole blood lactate measurements in newborn babies agree excellently with arterial values. Lactate measurements add little information to the decision whether to transfuse or not, considering the variability of this parameter in stable premature infants and the lack of correlation with other possible clinical indicators of compromised oxygen delivery. abstract_id: PUBMED:19816209 Erythrocyte transfusions and serum prohepcidin levels in premature newborns with anemia of prematurity. Hepcidin is a regulatory peptide hormone that acts by limiting intestinal iron absorption and promoting iron retention. Determining the level of hepcidin in anemia of prematurity might be important in preventing iron overload.
This study aimed to determine serum levels of prohepcidin in newborns with anemia of prematurity, to assess the effect of a single erythrocyte transfusion on serum prohepcidin levels, and to determine the possible relationships between prohepcidin levels and serum iron and complete blood count parameters. Nineteen premature newborns with anemia of prematurity who had been treated with erythrocyte transfusions were included in this study. Just before, and 48 hours after, each transfusion, venous blood samples were collected from patients. Serum prohepcidin levels before and after erythrocyte transfusion were 206.5+/-27.3 and 205.7+/-47.1 ng/mL, respectively; no statistically significant differences were found. No significant differences existed before or after transfusion regarding serum total iron and ferritin levels, iron-binding capacity, or mean corpuscular hemoglobin concentration. No significant correlations existed between serum prohepcidin levels and other parameters, either before or after transfusions. Our results showed that there were no statistically significant differences between serum prohepcidin levels before and after a single erythrocyte transfusion in premature newborns. abstract_id: PUBMED:9580761 Myocardial, erythropoietic, and metabolic adaptations to anemia of prematurity in infants with bronchopulmonary dysplasia. Objectives: The myocardial, metabolic and erythropoietic effects of anemia of prematurity in infants with bronchopulmonary dysplasia (BPD) were determined before and after a transfusion. Fourteen anemic (Hb range: 65-88 gm/L), oxygen-dependent (fraction of inspired oxygen ≤ 35%), nonventilated, preterm infants with BPD were studied at a postnatal age of 6 +/- 2 weeks. Study Design: Cardiac output, heart rate, mean velocity of circumferential fiber shortening, shortening fraction (SF), and stroke volume were assessed by pulsed and continuous wave Doppler echocardiography. Values for resting oxygen consumption, carbon dioxide production, and energy expenditure were obtained by indirect calorimetry. The affinity of oxygenated hemoglobin was determined by a blood oxygen dissociation analyzer. Results: An increased hemoglobin level resulted in a suppression of erythropoietin secretion (p < 0.001), whereas heart rate, cardiac output, stroke volume, and SF decreased (p < 0.05). Weight gain before and after transfusion was similar. Plasma lactate levels decreased from 1.6 +/- 0.3 to 1.2 +/- 0.3. Oxygen consumption, carbon dioxide production, and energy expenditure were not affected. Conclusions: Anemia of prematurity and BPD increase heart rate, cardiac output, stroke volume, and SF. These hemodynamic compensatory responses are normalized by transfusion. abstract_id: PUBMED:37866846 Thresholds for Red Blood Cell Transfusion in Preterm Infants: Evidence to Practice. Rapid blood loss with circulatory shock is dangerous for the preterm infant as cardiac output and oxygen-carrying capacity are simultaneously imperilled. This requires prompt restoration of circulating blood volume with emergency transfusion. It is recommended that clinicians use both clinical and laboratory responses to guide transfusion requirements in this situation. For preterm infants with anemia of prematurity, it is recommended that clinicians use a restrictive algorithm from one of two recently published clinical trials. Transfusion outside these algorithms in very preterm infants is not evidence-based and is actively discouraged.
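The cardiac output figures quoted in the first two abstracts above (roughly 280 mL/kg/min before transfusion in PUBMED:8796405, and left ventricular output around 350-375 mL/kg/min in PUBMED:31313334) are Doppler echocardiographic estimates. Neither abstract spells out its exact measurement protocol, so the following Python sketch only illustrates the standard Doppler stroke-volume calculation (velocity-time integral times outflow-tract cross-sectional area); the function name and the numeric inputs are the editor's own illustrative assumptions, not values taken from either study.

import math

def doppler_output_ml_kg_min(vti_cm, lvot_diameter_cm, heart_rate_bpm, weight_kg):
    """Weight-indexed left ventricular output from pulsed-wave Doppler.

    vti_cm: velocity-time integral of the outflow-tract Doppler trace (cm)
    lvot_diameter_cm: left ventricular outflow tract diameter (cm)
    heart_rate_bpm: heart rate (beats/min)
    weight_kg: body weight (kg)
    """
    csa_cm2 = math.pi * (lvot_diameter_cm / 2.0) ** 2  # outflow-tract area, cm^2
    stroke_volume_ml = vti_cm * csa_cm2                # 1 cm^3 equals 1 mL
    return stroke_volume_ml * heart_rate_bpm / weight_kg

# Illustrative values for a 1 kg preterm infant (editor's assumptions):
print(round(doppler_output_ml_kg_min(10.0, 0.45, 160, 1.0), 1), "mL/kg/min")

With infant-sized outflow tracts and heart rates, weight-indexed outputs in the range reported above fall out of this arithmetic naturally.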
abstract_id: PUBMED:8740313 Decreased ferritin levels, despite iron supplementation, during erythropoietin therapy in anaemia of prematurity. Erythropoietin (rHuEPO) therapy has been shown to be beneficial in preventing and treating anaemia of prematurity and to decrease the need for blood transfusions. There are, however, only scant data on the effect of rHuEPO therapy on iron metabolism. We studied 29 preterm infants (age 34 +/- 14 days) who were randomly assigned to receive either rHuEPO 900 U kg-1 week-1 with 6 mg kg-1 day-1 of iron for 4 weeks (n = 15) or no therapy. The following parameters were evaluated and compared between and within groups at the beginning, during and at the end of the study: haematocrit (SI), reticulocytes (10(9) l-1), serum ferritin (microgram l-1) and iron (micromol l-1). The results were as follows. At baseline, erythropoietin levels were similar in both groups: 7.2 +/- 5.6 versus 6.2 +/- 3.2 mU ml-1 (NS). In the treated infants the haematocrit remained stable during the study and was significantly higher than in the control group by the end of the study: 0.34 +/- 0.03 versus 0.28 +/- 0.05 (p = 0.001). rHuEPO therapy increased the reticulocyte count from 130 +/- 70 to 430 +/- 200 (p = 0.0002). However, rHuEPO therapy depleted both serum ferritin and iron levels, from 321 +/- 191 to 76 +/- 58 micrograms l-1 (p = 0.04) and from 18 +/- 5 to 13 +/- 4 micromol l-1 (p = 0.03), respectively. We conclude that rHuEPO therapy prevented anaemia and its sequelae; however, serum ferritin and iron levels were depleted. We suggest that the effect of rHuEPO may be further increased by higher iron supplementation. abstract_id: PUBMED:9659643 Recombinant human erythropoietin therapy for treatment of anemia of prematurity in very low birth weight infants: a randomized, double-blind, placebo-controlled trial. Objective: To evaluate the efficacy and safety of recombinant human erythropoietin (rHuEPO) in very low birth weight infants with anemia of prematurity. Study Design: Thirty infants were randomly assigned to receive either rHuEPO (300 U/kg per dose) or placebo twice a week. Hematologic parameters, transfusion requirements, caloric intake, and growth were monitored. Results: The number and volume of erythrocyte transfusions were significantly lower in infants treated with rHuEPO. Serum ferritin levels, similar in both groups at study entry, fell and were significantly lower in rHuEPO-group infants at the completion of the study. An inverse correlation was observed between reticulocyte count and absolute neutrophil count both at entry and at completion of the study. Conclusion: Twice-a-week administration of rHuEPO significantly reduces the need for erythrocyte transfusion in very low birth weight infants in stable condition. A significant decrease in serum ferritin levels in infants receiving rHuEPO suggests the need to determine the optimal dose of iron supplementation in these infants. abstract_id: PUBMED:575929 Exchange transfusion and serum vitamin E (tocopherol) concentration in the newborn. 17 babies suffering from haemolytic disease or "hyperbilirubinaemia of prematurity" underwent exchange transfusion in the first 4 days of life. In 14 babies, including all those who had abnormally low pre-exchange serum vitamin E levels (<0.5 mg/100 ml), the vitamin E concentration rose as a result of exchange transfusion. In 3 babies the serum vitamin E concentration fell as a result of exchange transfusion, but in no case was the post-exchange level below 0.5 mg/100 ml.
The normal range of plasma vitamin E in stored ACD blood was 0.44--1.24 mg/100 ml. It was concluded that although vitamin E deficiency might be a factor in the aetiology of the anaemia of prematurity, prior subjection to exchange transfusion is unlikely to make a significant contribution to the vitamin E deficiency. Answer: Cardiac output (CO) measurements and serum lactate levels provide limited information for determining blood transfusion requirements in anemia of prematurity. In a study of 56 anemic preterm infants, no linear correlation was found between hemoglobin (Hb) levels and CO or between Hb and serum lactate levels in the overall patient population. However, in a subgroup of patients who demonstrated clinical signs of anemia and were transfused, a significant decrease in CO and serum lactate was observed after transfusion, indicating that these measures may have some relevance in symptomatic infants (PUBMED:8796405). Another study involving 50 red blood cell (RBC) transfusion episodes in 40 preterm infants under 32 weeks' gestation found that liberal transfusion thresholds did not improve cerebral tissue oxygenation index (TOI) in preterm infants with mild anemia, but there were improvements in cerebral arterial blood flow and cardiac output immediately after transfusion (PUBMED:31313334). Furthermore, serum ischemia-modified albumin (IMA) levels, which are used to determine tissue hypoxia, were found to be higher in preterm infants requiring transfusion due to anemia of prematurity compared to a control group, and these levels decreased significantly after transfusion. This suggests that serum IMA levels could be a marker for deciding on erythrocyte transfusion in premature anemia (PUBMED:28783992). In contrast, a study evaluating the usefulness of blood lactate as an indication for blood transfusion in anemia of prematurity found that lactate measurements add little information to the decision-making process for transfusions, considering the variability of this parameter in stable premature infants and the lack of correlation with other possible clinical indicators of compromised oxygen delivery (PUBMED:11280639). Overall, while certain parameters like serum lactate and CO may show changes in response to transfusion in symptomatic infants, they are not consistently reliable indicators for blood transfusion requirements in anemia of prematurity. Clinicians are encouraged to use a combination of clinical and laboratory responses, including a restrictive transfusion algorithm, to guide transfusion decisions in this population (PUBMED:37866846).
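The first abstract in this group (PUBMED:8796405) judges the value of transfusion partly through oxygen delivery, the product of cardiac output and arterial oxygen content. The abstract does not show that arithmetic, so here is a minimal Python sketch using the textbook formula; the saturation value and the rounded inputs are editorial assumptions rather than the study's measured data.

def arterial_o2_content_ml_dl(hb_g_dl, sao2_fraction, pao2_mmhg=0.0):
    # 1.34 mL O2 per gram of saturated haemoglobin, plus dissolved O2 (usually negligible)
    return 1.34 * hb_g_dl * sao2_fraction + 0.003 * pao2_mmhg

def oxygen_delivery_ml_kg_min(co_ml_kg_min, hb_g_dl, sao2_fraction, pao2_mmhg=0.0):
    # convert content from mL O2 per dL of blood to mL O2 per mL of blood
    cao2_ml_per_ml = arterial_o2_content_ml_dl(hb_g_dl, sao2_fraction, pao2_mmhg) / 100.0
    return co_ml_kg_min * cao2_ml_per_ml

# Rounded inputs in the vicinity of the group 2 values in PUBMED:8796405 (editor's assumptions):
pre = oxygen_delivery_ml_kg_min(280, 9.5, 0.95)    # before transfusion
post = oxygen_delivery_ml_kg_min(224, 12.5, 0.95)  # after transfusion
print(f"pre-transfusion  ~{pre:.0f} mL O2/kg/min")
print(f"post-transfusion ~{post:.0f} mL O2/kg/min")

With these assumed inputs, delivery rises after transfusion because the gain in haemoglobin outweighs the fall in cardiac output, which is the qualitative pattern the abstract reports; the exact published figures also depend on the saturations actually measured.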
Instruction: Access to coronary catheterisation: fair shares for all? Abstracts: abstract_id: PUBMED:37623339 Minimal-Access Coronary Revascularization: Past, Present, and Future. Minimal-access cardiac surgery appears to be the future. It is increasingly desired by cardiologists and demanded by patients who perceive superiority. Minimal-access coronary artery revascularisation has been increasingly adopted throughout the world. Here, we review the history of minimal-access coronary revascularization and see that it is almost as old as the history of cardiac surgery. Modern minimal-access coronary revascularization takes a variety of forms, namely minimal-access direct coronary artery bypass grafting (MIDCAB), hybrid coronary revascularisation (HCR), and totally endoscopic coronary artery bypass grafting (TECAB). It is noteworthy that there is significant variation in the nomenclature and approaches for minimal-access coronary surgery, and this truly presents a challenge for comparing the different methods. However, these approaches are increasing in frequency, and proponents demonstrate clear advantages for their patients. The challenge that remains, as for all areas of surgery, is demonstrating the superiority of these techniques over tried and tested open techniques, which is very difficult. There is a paucity of randomised controlled trials to help answer this question, and the future of minimal-access coronary revascularisation, to some extent, is dependent on such trials. Thankfully, some are underway, and the results are eagerly anticipated. abstract_id: PUBMED:38387541 Complications of Radial vs Femoral Access For Coronary Angiography and Intervention: What Do The Data Tell Us? In recent decades, radial access, as an alternative to femoral access, has rapidly evolved and emerged as the preferred vascular access for coronary angiography and percutaneous coronary intervention (PCI). The use of radial access for PCI can reduce access-site bleeding, particularly retroperitoneal bleeding, and the risk of developing pseudoaneurysm, while also improving patient comfort after the procedure (e.g., early ambulation). However, radial access requires a longer learning curve to develop technical skills, and the impact of prior radial access on use of the radial artery as a graft for coronary artery bypass grafting (CABG) remains unknown. Further, recent clinical trials have shown conflicting results regarding whether radial access is associated with lower mortality in patients with STEMI. Despite these recent investigations, it is still debated whether there are benefits associated with radial access over femoral access for PCI. In this review, we evaluate radial access compared with femoral access for PCI on clinical outcomes and further discuss the usefulness of radial access. abstract_id: PUBMED:35235530 Vascular Access-Site Choice and Outcomes in Patients With Previous Coronary Artery Bypass Surgery Undergoing Coronary Catheterization in a High-Volume Transradial Center. Background: Transradial access for coronary angiography has been observed to be superior to femoral access. Nevertheless, femoral artery access is still frequently used, especially in challenging subgroups with high procedural complexity, such as patients with previous coronary artery bypass grafting (CABG). Purpose: We analyzed access-site choice and outcomes of CABG patients undergoing coronary catheterization in different clinical settings. Methods: A total of 1206 consecutive CABG patients undergoing coronary angiography and intervention were included in this study.
Procedural and clinical outcomes were compared between transradial and transfemoral access. Multivariate logistic regression analysis was performed to identify predictors of access-site choice. Results: Coronary catheterization was performed via radial access in 753 patients (63.1%) and via femoral access in 442 patients (36.9%). During the study period, femoral artery utilization dropped from 55.2% to a minimum of 28.2% per year (P<.01). Short stature (odds ratio [OR], 1.62; P<.01), peripheral artery disease (OR, 1.42; P=.04), cardiopulmonary resuscitation (CPR) (OR, 4.17; P<.001), ST-segment elevation myocardial infarction (STEMI) (OR, 2.56; P=.01), and coexisting left and right internal mammary artery (LIMA/RIMA) bypass grafts (OR, 2.67; P<.001) were independently associated with femoral access-site choice. Study outcomes including access-site complications (4.3% vs 1.6%; P<.01) as well as short- and long-term mortality (30-day mortality: 6.8% vs 2.0%; hazard ratio, 3.52; 95% confidence interval, 1.84-6.70; P<.001) were more likely to occur with femoral access. Length of stay was shorter in the radial cohort (3.7 ± 5.1 days vs 5.3 ± 7.2 days; P<.001). Conclusion: Radial access appears to be favorable even in complex CABG patients. Although radial access was set as the standard vascular approach, femoral access was chosen in one-third of all patients. Independent predictors for femoral access were short stature, peripheral artery disease, acute settings like CPR and STEMI, as well as coexisting LIMA and RIMA grafts. abstract_id: PUBMED:32969389 Feasibility and Safety of Distal Radial Artery Access in Anatomical Snuffbox for Coronary Angiography and Coronary Intervention. Background: There are limited data on the feasibility and safety of coronary interventions performed using the radial artery at the anatomical snuffbox as the vascular access point in the South Asian region. Our study attempts to evaluate the feasibility and safety of coronary angiography and percutaneous coronary intervention using transradial access at the anatomical snuffbox. Methods: Transradial access at the anatomical snuffbox was attempted in 128 consecutive patients who were planned for coronary angiography and/or percutaneous coronary intervention. Success in vascular access, completion of the planned procedure and complications encountered, including patency of the radial artery after the procedure, were investigated. Results: A total of 128 patients (76 males [59.4%]; 52 females [40.6%]) between 44-78 years of age (mean age, 59.0 +/- 10.2 years) were included in the study. Distal radial artery puncture and sheath placement were successful in all patients; however, the planned procedure was completed in 126 (98.4%) patients. In total, 90 coronary angiographies and 36 percutaneous coronary interventions were performed, of which five were primary percutaneous coronary interventions. We encountered brachial artery spasm in two patients (1.5%) and significant pain and swelling in three patients (2.3%). No bleeding complications, numbness or paresthesia were observed on follow-up. Patients had an average pain rating of 2.4 +/- 1.1 on the visual analogue pain rating scale. There were no instances of radial artery occlusion after the procedure. Conclusions: The distal radial artery, at the anatomical snuffbox, is a safe and feasible alternative vascular access site for coronary angiography and percutaneous coronary intervention.
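The CABG cohort above (PUBMED:35235530) reports odds ratios from multivariable logistic regression for predictors of femoral access-site choice. As a reminder of what an odds ratio expresses, the sketch below computes an unadjusted odds ratio and its Wald 95% confidence interval from a 2x2 table; the counts are invented for illustration, and the published ORs are adjusted estimates, so the two are not directly comparable.

import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table.

    a: exposed (e.g. short stature) and femoral access
    b: exposed and radial access
    c: unexposed and femoral access
    d: unexposed and radial access
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Invented illustrative counts, not data from the study:
print(odds_ratio_with_ci(a=60, b=90, c=380, d=670))

An odds ratio above 1 with a confidence interval excluding 1 is the pattern the study reports for predictors such as CPR and coexisting LIMA/RIMA grafts; the multivariable model additionally adjusts each estimate for the other predictors.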
abstract_id: PUBMED:26332022 Transulnar versus transradial access for coronary angiography or percutaneous coronary intervention: A meta-analysis of randomized controlled trials. Background: Although transfemoral access (TFA) remains the standard of care for patients undergoing coronary angiography (CA) or percutaneous coronary intervention (PCI) in the USA, transradial access (TRA) is increasingly used over TFA due to the lower bleeding and mortality rates reported in meta-analyses and in the recently published MATRIX trial. In patients with unsuccessful ipsilateral radial access, transulnar access (TUA) has been used as an alternative approach. The randomized controlled trials (RCTs) comparing TUA and TRA have reached mixed conclusions regarding the use of the transulnar approach for coronary procedures. Objectives: To systematically review and perform a meta-analysis of published RCTs comparing the safety and efficacy of transulnar access (TUA) vs. transradial access (TRA) in patients undergoing CA or PCI. Methods: PubMed, EMBASE, and CENTRAL databases were searched for RCTs from inception through December 2014. The meta-analysis was performed using a random-effects model. Results: Five RCTs involving 2,744 total patients were included in the meta-analysis. TUA compared with TRA had similar risks of MACE [risk ratio (RR): 0.87; 95% confidence interval (CI): 0.56-1.36; P = 0.54] and access-related complications [RR: 0.92 (0.67-1.27); P = 0.62]. Higher rates of access cross-over [RR: 2.31 (1.07-4.98); P = 0.003] and number of punctures [1.57 vs. 1.4; mean difference (MD): 0.17; 95% CI: 0.08-0.26; P = 0.0002] were noted with TUA. There was no difference in arterial access time [12.8 vs. 10.9 min; MD: 1.86 (-1.35-5.7); P = 0.26], fluoroscopy time [7.6 vs. 7.2 min; MD: 0.37 (-0.39 - 1.13); P = 0.34] and contrast volume [151 vs. 153.7 ml; MD: -2.74 (-17.21 - 11.73); P = 0.71]. Conclusion: For patients requiring CA or PCI, TUA compared with TRA has similar efficacy and safety except for higher puncture rates and access cross-over. abstract_id: PUBMED:37370923 Ultrasound-Guided Femoral Vascular Access for Percutaneous Coronary and Structural Interventions. Radial access has largely replaced femoral access for coronary interventions. Nevertheless, the femoral artery remains indispensable for structural interventions such as transcatheter aortic valve implantation and for complex percutaneous coronary interventions such as chronic total occlusion procedures. Ultrasound-guided femoral puncture is a broadly available, inexpensive, and relatively easy-to-learn technique. According to the existing evidence, ultrasound guidance for gaining femoral access has improved the effectiveness and safety of the technique. In the present paper, we sought to review the current literature in order to provide the reader with up-to-date data regarding the benefits of ultrasound-guided femoral access compared with the conventional technique, as well as to describe the state-of-the-art technique for gaining femoral access under ultrasound guidance. abstract_id: PUBMED:33507598 Radial versus femoral access for coronary interventions: An updated systematic review and meta-analysis of randomized trials. Objective: It is still debated if benefits associated with radial versus femoral access for coronary angiography and percutaneous coronary interventions (PCI) are due to the access site selection itself, operator expertise or other underlying mechanisms.
Methods: We searched PubMed, Embase, and meeting abstracts for randomized trials comparing radial versus femoral access site for coronary angiography and PCI. The primary safety endpoint was major bleeding. Coprimary efficacy endpoints were stroke and myocardial infarction (MI). This study is registered with PROSPERO. Results: We identified 31 trials (30,096 patients, PCI performed in 21,225 patients). Radial compared to femoral access was associated with a significant risk reduction in major bleeding (OR 0.53, 95% CI 0.42-0.66, I2 = 3.3%). Findings were consistent regardless of clinical characteristics or whether coronary angiography was performed with or without PCI. The benefit of radial access was significantly increased in studies published before 2010 and in patients with chronic coronary syndrome. Risks for stroke (OR 1.11, 95% CI 0.76-1.64, I2 = 0%) and MI (OR 0.90, 95% CI 0.79-1.04, I2 = 0%) were comparable between the groups. Risks for mortality and vascular complications were significantly lower with radial than femoral access. Conclusion: In patients undergoing coronary angiography and PCI, radial access is associated with a significant risk reduction in bleeding, vascular complications, and mortality compared to femoral access. The risks of stroke and MI were comparable in patients with radial or femoral access. abstract_id: PUBMED:36182561 A Meta-Analysis of Traditional Radial Access and Distal Radial Access in Transradial Access for Percutaneous Coronary Procedures. Introduction: Radial approaches are classified into traditional radial access (TRA) and the more contemporary distal radial access (DRA), with recently published comparative studies reporting inconsistent outcomes. As there have been several recent randomized controlled trials (RCTs), we assessed the totality of evidence in an updated meta-analysis to compare outcomes of DRA and TRA. Methods: We searched PubMed, CENTRAL, Web of Science, EMBASE, and the Cochrane Database of Systematic Reviews from inception to August 2022 for studies comparing DRA and TRA for coronary angiography. Primary outcomes were the rate of radial artery occlusion (RAO) and access failure. Secondary outcomes included hematomas and puncture site bleeding. The pooled risk ratio (RR) with 95% confidence interval (95% CI) was calculated for each outcome. Results: A total of 14,071 patients undergoing coronary angiography from 23 studies were included, among them 5488 patients from 10 RCTs. The mean age of the study population was 59.8 ± 5.9 years with 66.2% men. Outcomes for a total of 6796 (48.3%) patients undergoing DRA and 7166 (50.9%) patients undergoing TRA were compared. DRA was associated with a lower rate of RAO (RR = 0.36, 95% CI [0.27, 0.48], I2 = 0%) but an increased risk of vascular access failure (RR = 2.38, 95% CI [1.46, 3.87], I2 = 82.7%). There was no significant difference in the rate of bleeding or hematoma formation. Conclusion: In an updated meta-analysis, DRA is associated with lower rates of RAO but with higher rates of access failure. abstract_id: PUBMED:24269362 Procedural volume and outcomes with radial or femoral access for coronary angiography and intervention. Objectives: The study sought to evaluate the relationship between procedural volume and outcomes with the radial and femoral approach. Background: RIVAL (RadIal Vs.
femorAL) was a randomized trial of radial versus femoral access for coronary angiography/intervention (N = 7,021), which overall did not show a difference in primary outcome of death, myocardial infarction, stroke, or non-coronary artery bypass graft major bleeding. Methods: In pre-specified subgroup analyses, the hazard ratios for the primary outcome were compared among centers divided by tertiles and among individual operators. A multivariable Cox proportional hazards model was used to determine the independent effect of center and operator volumes after adjusting for other variables. Results: In high-volume radial centers, the primary outcome was reduced with radial versus femoral access (hazard ratio [HR]: 0.49; 95% confidence interval [CI]: 0.28 to 0.87) but not in intermediate- (HR: 1.23; 95% CI: 0.88 to 1.72) or low-volume centers (HR: 0.83; 95% CI: 0.52 to 1.31; interaction p = 0.021). High-volume centers enrolled a higher proportion of ST-segment elevation myocardial infarction (STEMI). After adjustment for STEMI, the benefit of radial access persisted at high-volume radial centers. There was no difference in the primary outcome between radial and femoral access by operator volume: high-volume operators (HR: 0.79; 95% CI: 0.48 to 1.28), intermediate (HR: 0.87; 95% CI: 0.60 to 1.27), and low (HR: 1.10; 95% CI: 0.74 to 1.65; interaction p = 0.536). However, in a multivariable model, overall center volume and radial center volume were independently associated with the primary outcome but not femoral center volume (overall percutaneous coronary intervention volume HR: 0.92, 95% CI: 0.88 to 0.96; radial volume HR: 0.88, 95% CI: 0.80 to 0.97; and femoral volume HR: 1.00, 95% CI: 0.94 to 1.07; p = 0.98). Conclusions: Procedural volume and expertise are important, particularly for radial percutaneous coronary intervention. (A Trial of Trans-radial Versus Trans-femoral Percutaneous Coronary Intervention [PCI] Access Site Approach in Patients With Unstable Angina or Myocardial Infarction Managed With an Invasive Strategy [RIVAL]; NCT01014273). abstract_id: PUBMED:33223100 Access Selection for Chronic Total Occlusion Percutaneous Coronary Intervention and Complication Management. Dual access for chronic total occlusion percutaneous coronary intervention is considered best practice by many experts. There are 2 access sites: radial and femoral. Both accesses have important advantages and disadvantages. Determining the ratio risk/benefit-efficacy/safety of each access for each patient in a specific procedure should be based on procedural and clinical variables. Given the safety benefit and the minimal procedural disadvantages, radial access should be the standard approach, especially in procedures of low complexity and in patients at high risk of vascular complications. Nonetheless, mastering both approaches is important because they are needed in multiple occasions. Answer: The evolution of vascular access for coronary catheterization has been marked by a shift from the traditional femoral access to the radial approach, which has become increasingly popular due to its association with reduced access-site bleeding, improved patient comfort, and early ambulation post-procedure (PUBMED:38387541). Radial access has been shown to be superior to femoral access, particularly in patients with previous coronary artery bypass grafting (CABG), with radial access associated with fewer access-site complications, shorter hospital stays, and lower short- and long-term mortality (PUBMED:35235530). 
Despite the advantages of radial access, femoral access remains indispensable for certain complex percutaneous coronary interventions and structural interventions, such as transcatheter aortic valve implantation and chronic total occlusion interventions. Ultrasound-guided femoral puncture has improved the effectiveness and safety of femoral access (PUBMED:37370923). The choice between radial and femoral access is influenced by procedural volume and expertise. High-volume radial centers have shown better outcomes with radial access compared to femoral access, highlighting the importance of procedural volume and expertise, particularly for radial percutaneous coronary intervention (PUBMED:24269362). However, both radial and femoral access are important, and mastering both is crucial, as they are needed on many occasions (PUBMED:33223100). Recent studies have also explored the feasibility and safety of distal radial artery access in the anatomical snuffbox for coronary angiography and intervention, which has been found to be a safe and feasible alternative vascular access site (PUBMED:32969389). A meta-analysis comparing traditional radial access (TRA) and distal radial access (DRA) found that DRA is associated with lower rates of radial artery occlusion but with higher rates of access failure (PUBMED:36182561). In summary, the choice of access for coronary catheterization should be tailored to the patient's clinical scenario, the complexity of the procedure, and the operator's expertise. While radial access is generally preferred due to its safety profile, femoral access remains a critical option for certain interventions. The goal is to ensure fair shares for all by selecting the most appropriate access site for each individual patient, balancing the benefits and risks associated with each approach.
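Several of the abstracts above (PUBMED:26332022, PUBMED:33507598, PUBMED:36182561) pool risk ratios or odds ratios across trials with random-effects models and summarise heterogeneity with I2. The sketch below is an editor-written illustration of the usual inverse-variance and DerSimonian-Laird calculations applied to invented two-by-two counts; it is not the software, method variant, or data of any of the cited meta-analyses.

import math

def log_rr_and_var(events_t, n_t, events_c, n_c):
    """Log risk ratio and its variance for one trial (treatment vs control)."""
    rr = (events_t / n_t) / (events_c / n_c)
    var = 1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c
    return math.log(rr), var

def dersimonian_laird(trials):
    """Pool log risk ratios with a DerSimonian-Laird random-effects model.

    trials: list of (events_t, n_t, events_c, n_c) tuples.
    Returns the pooled RR, its 95% CI, and I2 in percent.
    """
    ys, vs = zip(*(log_rr_and_var(*t) for t in trials))
    w = [1 / v for v in vs]                                    # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, ys))  # Cochran's Q
    k = len(trials)
    tau2 = max(0.0, (q - (k - 1)) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
    w_re = [1 / (v + tau2) for v in vs]                         # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return math.exp(y_re), math.exp(y_re - 1.96 * se), math.exp(y_re + 1.96 * se), i2

# Invented counts for three hypothetical trials (event = major bleeding; radial vs femoral):
trials = [(12, 1000, 24, 1000), (30, 2500, 52, 2500), (8, 800, 15, 800)]
print(dersimonian_laird(trials))

With counts like these, the pooled risk ratio comes out below 1 with a confidence interval excluding 1, which is the same qualitative message the cited reviews report for bleeding with radial access.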
Instruction: Does the number of trauma lists provided affect care and outcome of patients with fractured neck of femur? Abstracts: abstract_id: PUBMED:19220949 Does the number of trauma lists provided affect care and outcome of patients with fractured neck of femur? Introduction: Delay in surgery for fractured neck of femur is associated with increased mortality; it is recommended that patients with fractured neck of femur are operated on within 48 h. North West hospitals provide dedicated trauma lists, as recommended by the British Orthopaedic Association, to allow rapid access to surgery. We investigated trauma list provision by each trust and its effects on the time taken to get neck of femur patients to surgery and on patient survival. Patients And Methods: The number of trauma lists provided by 13 acute trusts was determined by telephone interview with the theatre manager. Data on operating delays, reasons for delay and 30-day mortality were obtained from the Greater Manchester and Wirral fractured neck of femur audit. Results: A total of 883 patients were included in the audit (35-126 per hospital). Overall, 5-15 trauma lists were provided each week, and 80% of lists were consultant-led. Of patients, 31.8% were operated on within 24 h and 36.9% were delayed more than 48 h; 37.7% of delays were for non-medical reasons. The 30-day mortality rates varied between 5% and 19% (mean, 11.8%). There were no significant relationships between the number of trauma lists and these variables. When divided into hospitals with > 10 lists per week (n = 6) and those with < 10 lists per week (n = 7), there were no significant differences in 48-h delay, non-medical delay or mortality. However, 24-h delay showed a trend towards being lower in those with > 10 lists (34.6% of patients versus 28.9%; P = 0.09). Conclusions: Most trusts provided at least one dedicated daily list. This study shows that extra lists may enable trusts to cope better with fractured neck of femur but do not change mortality. abstract_id: PUBMED:15640304 POSSUM scoring for patients with fractured neck of femur. Background: POSSUM scoring is validated as an audit tool in general and orthopaedic surgery. It is also used for preoperative triage to assess perioperative risk. However, its ability to predict mortality in specific surgical subgroups, such as patients with fractured neck of the femur, has not been studied. This study assessed the predictive capability of POSSUM for 30-day mortality after surgery for fractured neck of femur. Methods: A cohort study was conducted in Queen's Medical Centre, Nottingham over a period of nearly 2 yr. Complete data from 1164 patients were analysed to compare the mortality predicted by POSSUM and the observed mortality. POSSUM risk of death was calculated using the original POSSUM equation, with modifications to the operative score appropriate for orthopaedic surgery. Results: POSSUM predicted 181 (15.6%) deaths and the observed mortality was 119 (10.2%). The area under the receiver operating characteristic curve was 0.62, indicating poor performance by the POSSUM equation. Conclusion: POSSUM overpredicts mortality in hip fracture patients. It should be used with caution whether as an audit tool or for preoperative triage. abstract_id: PUBMED:19397061 Surgical management of fractured neck of femur. The fractured neck of femur is the classically described fracture in osteoporotic elderly patients. Further, the fracture has a strong predominance in post-menopausal women and, although relatively uncommon in both children and young adults, where present in this age group it is usually the result of significant trauma.
Further, the fracture has a strong predominance in post-menopausal women and, although relatively uncommon in both children and young adults, where present in this age group it is usually the result of significant trauma. In elderly patients, with an already weakened bone, even minimal trauma may be sufficient to cause fracture and as such a fractured neck of femur is often referred to as a fragility fracture. abstract_id: PUBMED:23558794 Fractured neck of femur patient care improved by simulated fast-track system. Background: Fractured neck of femur patients represent a large demand on trauma services, and timely management results in improvements in morbidity and mortality. NICE guidance, advocating surgery on the day of admission or the following day, emphasises this. We set out to investigate whether a simulated fast-track management system could improve neck of femur fracture patient care. Materials And Methods: This prospective study was performed in a district general hospital in South West England, following a change in practice. We studied 429 patients over a 1-year period. Patients were phoned through, by the ambulance crew, to a trauma coordinator who arranged prompt radiological assessment and review. Patients with confirmed fractures were transferred to an optimisation area for orthopaedic and anaesthetic assessment prior to surgery the same day or early the following day. Our primary outcome measures were time to theatre (h) and length of hospital stay (days/h). Results: Time to theatre reduced from 44.95 (±27.42) to 29.28 (±21.23) h. Length of stay reduced from 10 days (245.92 (±131.02) h) to 9 days (225.30 (±128.75) h). Both of these improvements were statistically significant (P < 0.05). Despite operating on virtually all patients, no increase in adverse events was seen, there was no increase in 30-day mortality and there were no perioperative deaths. Conclusions: This coordinated management pathway improves the efficiency of the service and reduces inpatient length of stay. Increased productivity may lead to financial savings and improve our ability to meet guidelines. abstract_id: PUBMED:17987305 Use of the Abbreviated Mental Test Score by junior doctors on patients with fractured neck of femur. Introduction: The mental state of patients with fractured neck of femur is important as a predictor of post-operative outcome. The Hodgkinson Abbreviated Mental Test Score (AMTS) is a validated and simple method of assessing the pre-operative mental state of patients with fractured neck of femur. This survey investigated whether or not orthopaedic junior doctors (SHOs) appreciated the importance of mental state assessment in patients with fractured neck of femur and whether they were able to recall the questions used in the AMTS. Method: A total of 47 on-call orthopaedic and trauma SHOs from the UK were randomly contacted by telephone and agreed to answer questions from a standard questionnaire to assess awareness of the ten-question AMTS. Results: A total of 96% of SHOs claimed awareness of the importance of mental state assessment; 89% used the AMTS in their practice, of which 26% were aided by a pro forma. A mean of five (out of the ten) standard questions on the AMTS was correctly identified (95% CI = 0.68); 11% correctly identified all 10 questions. There was no correlation between use of a pro forma and correct identification of questions. Conclusions: Patients with fractured neck of femur and low AMTS have higher morbidity and mortality.
If the AMTS is to be used as an assessment tool in this setting, then SHOs need to be better informed and educated as to its use. Furthermore, the validity of data collection for research and audit purposes is potentially flawed, as data collected using such scoring systems may be inaccurate. abstract_id: PUBMED:24757887 Use of lean principles to improve flow of patients with fractured neck of femur--the HOPE study. We describe the implementation of a care pathway for patients with fractured neck of femur (NOF) using Lean and Six Sigma principles. After introduction of the Lean pathway, 32 patients out of a total of 86 (37%) with fractured NOF were admitted to the Trauma Ward within 4 hours of presentation to the hospital; prior to implementation this was 16 patients out of a total of 59 (27%). Post-Lean an earlier mean theatre start time of 8.40am was achieved, resulting in a 38 minute increase in daily theatre time. An additional 52 patients (12%) received surgery within 24 hours of admission, resulting in a 1-night reduction in length of stay. Lean methodology proved an effective method to guide change resulting in an improved journey for the patient and significant workflow gains. abstract_id: PUBMED:30112942 Hemiarthroplasty compared with total hip arthroplasty in fractured neck of femur: a shift in national practice? Introduction: The aim of this study was to determine the trends in national practice regarding total hip arthroplasty compared with hemiarthroplasty in fractured neck of femur between 2010 and 2016. Materials And Methods: A retrospective review was conducted of NHS Digital data (England) between 2010 and 2016. 'Emergency' neck of femur fracture admissions, hemiarthroplasties and total hip arthroplasties were included. Elective total hip arthroplasties, revisions and prostheses relocations were excluded. Annual percentages for each operation were calculated. Trends were tabulated and displayed graphically for analysis. Results: The total number of emergency neck of femur diagnoses was 257,789. Total hip arthroplasty was performed in 2217, 2737, 3305, 3686, 3670 and 3825 patients and hemiarthroplasty was performed in 21,335, 21,744, 21,115, 21,798, 21,804 and 22,163 patients for each year between 2011 and 2016, respectively. The rate of change for total hip arthroplasty slowed from 24.54% increase/year (2011-2013) to 5.24% increase/year (2013-2016). Uncemented arthroplasties decreased over the same time period. Discussion: Increasing numbers of total hip arthroplasties are conducted for hip fractures; however, this trend has slowed since 2013. Possible explanations include all eligible fractures being treated with total hip arthroplasty, trauma surgeon preference for hemiarthroplasty due to lower surgical specialism or publication of individual surgeon data (National Joint Registry) which may lead to surgeons favouring hemiarthroplasties which have a lower complication rate compared to elective total hip arthroplasties. abstract_id: PUBMED:31528078 Simultaneous bilateral hip fractures following a simple fall in an elderly patient: A case report. Bilateral fractured necks of femur are rare, particularly in the absence of high energy trauma or metabolic bone disease. We describe a case of an 89 year old man with no history of metabolic bone disease who presented with bilateral neck of femur fractures following a simple fall. Clinicians must be vigilant to ensure that bilateral neck of femur fractures are identified and treated appropriately.
abstract_id: PUBMED:34423096 The Impact of COVID-19 on Neck of Femur Fracture Care: A Major Trauma Centre Experience, United Kingdom. Background: The aim of this study was to investigate the impact of the COVID-19 pandemic on the management and outcome of patients with neck of femur fractures. Methods: Data was collected for 96 patients with neck of femur fractures who presented to the emergency department between March 1, 2020 and May 15, 2020. This data set included information about their COVID-19 status. Parameters including inpatient complications, hospital quality measures, mortality rates, and training opportunities were compared between the COVID-19 positive and COVID-19 negative groups. Furthermore, our current cohort of patients was compared against a historical control group of 95 patients who presented with neck of femur fractures before the COVID-19 pandemic. Results: Seven (7.3%) patients were confirmed COVID positive by RT-PCR testing. The COVID positive cohort, when compared to the COVID negative cohort, had higher rates of postoperative complications (71.4% vs 25.9%), increased length of stay (30.3 days vs 12 days) and quicker time to surgery (0.7 days vs 1.3 days). The 2020 cohort, compared to the 2019 cohort, had an increased 30-day mortality rate (13.5% vs 4.2%), increased number of delayed cases (25% vs 11.8%) as well as reduced training opportunities for Orthopaedic trainees to perform the surgery (51.6% vs 22.8%). Conclusion: COVID-19 has had a profound impact on the care and outcome of neck of femur fracture patients during the pandemic with an increase in 30-day mortality rate. There were profound adverse effects on patient management pathways and outcomes while also affecting training opportunities. abstract_id: PUBMED:20589205 Fractured neck of femur--internal fixation versus arthroplasty. Background: Surgery is the treatment of choice for fractured neck of femur. For middle-aged patients (aged ca. 40 to 65), there is considerable debate over the indications for arthroplasty or internal fixation. The choice of surgical technique varies widely from one region to another. In this article, we discuss the main criteria that should be used in making this decision. Methods: We selectively reviewed the literature on the diagnosis and treatment of fractured neck of femur, including the current guideline of the German Society for Trauma Surgery (Deutsche Gesellschaft für Unfallchirurgie, DGU) and recent findings from the field of health services research. Results: The treatment of middle-aged patients with dislocated fractures should be based on rational decision-making. The patient's level of activity before the accident should be judged in terms of his or her previous mobility, independence in daily activities, and mental status. Internal fixation is recommended if the fracture can be adequately repositioned, the bone is of good quality, and there is no evidence of osteoarthritis. Fractures that are more than 24 hours old should be treated with total hip arthroplasty. Hemiprostheses are appropriate for very old patients. Physically frail, bedridden, and/or demented patients should undergo internal fixation of the fracture. For non-displaced or impacted fractures, functional treatment (i.e., prophylactic securing of the fracture with screws or nails) is indicated. Rapid diagnosis and a short time in bed before surgery lower the rate of complications.
Internal fixation with preservation of the femoral head should ideally be performed within the first 6 hours of trauma, and within the first 24 hours at most. Conclusion: Despite the increasing scarcity of resources, treatment should still be based on well-founded clinical guidelines. Minimally invasive surgery enables better function in the early postoperative phase and can thereby lower complication rates. An interdisciplinary concept for the postoperative care of elderly patients also has a major effect on the outcome. Answer: The number of trauma lists provided does not appear to significantly affect the care and outcome of patients with fractured neck of femur in terms of mortality rates. A study investigating the provision of trauma lists by 13 acute trusts found that while most trusts provided at least one dedicated daily list, there were no significant relationships between the number of trauma lists and variables such as operating delays, reasons for delay, and 30-day mortality rates. Even when hospitals were divided into those with more than 10 lists per week and those with fewer than 10 lists per week, there were no significant differences in 48-hour delay, non-medical delay, or mortality. However, there was a trend toward a lower 24-hour delay in hospitals with more than 10 lists per week (PUBMED:19220949). This suggests that while extra lists may help trusts cope better with the volume of fractured neck of femur cases, they do not change the mortality outcome for these patients. Other factors, such as the efficiency of the management system and the speed of getting patients to surgery, may have a more direct impact on patient care. For instance, the implementation of a simulated fast-track management system was shown to significantly reduce the time to theatre and length of hospital stay without increasing adverse events or 30-day mortality, indicating that coordinated management pathways can improve service efficiency (PUBMED:23558794). Additionally, the use of Lean principles to improve the flow of patients with fractured neck of femur resulted in more patients being admitted to the Trauma Ward within 4 hours of presentation to the hospital, earlier theatre start times, and a reduction in the length of stay, demonstrating that workflow improvements can have a positive impact on patient care (PUBMED:24757887). Overall, while the number of trauma lists alone may not directly affect mortality outcomes, the organization and efficiency of care pathways play a crucial role in improving the care of patients with fractured neck of femur.
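For readers who want to see the arithmetic behind a "trend" such as the 24-hour figure quoted above (34.6% versus 28.9% of patients, P = 0.09), the sketch below shows a standard two-sided two-proportion z-test in Python. The per-group patient denominators are not reported in the abstract, so the counts used here are hypothetical and the resulting p-value is illustrative only, not a reproduction of the published analysis.

import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
    return p1, p2, z, p_value

# Hypothetical split of the 883 audited patients between trusts with >10 and
# <10 lists per week; the abstract reports only the percentages (34.6% vs
# 28.9%), not the group denominators, so these counts are assumptions.
n_more_lists, n_fewer_lists = 450, 433
x_more = round(0.346 * n_more_lists)
x_fewer = round(0.289 * n_fewer_lists)
print(two_proportion_z_test(x_more, n_more_lists, x_fewer, n_fewer_lists))

With different assumed denominators the p-value moves accordingly, which is exactly why the audit's own group sizes, not the percentages alone, determine whether such a difference is statistically significant.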
Instruction: Can a six-week exercise intervention improve gross motor function for non-ambulant children with cerebral palsy? Abstracts: abstract_id: PUBMED:29303007 Active exercise interventions improve gross motor function of ambulant/semi-ambulant children with cerebral palsy: a systematic review. Purpose: Evaluate effectiveness of active exercise interventions for improving gross motor activity/participation of school-aged, ambulant/semi-ambulant children with cerebral palsy (CP). Method: A systematic review was conducted following PRISMA guidelines. Five databases were searched for papers including school-aged children with CP, participating in active, exercise interventions with gross motor outcomes measured at the Activity/Participation level. Interventions with previous systematic reviews were excluded (e.g. hippotherapy). Evidence Level and conduct were examined by two raters. Results: Seven interventions (34 studies) met criteria. All studies reported on gross motor function, however, a limited number investigated participation outcomes. Strong positive evidence was available for Gross Motor Activity Training (n= 6, Evidence Level II-IV), and Gross Motor Activity Training with progressive resistance exercise plus additional physiotherapy (n = 3, all Evidence Level II). Moderate positive evidence exists for Gross Motor Activity Training plus additional physiotherapy (n = 2, all Evidence Level II) and Physical Fitness Training (n = 4, Evidence Level II-V). Weak positive evidence was available for Modified Sport (n = 3, Evidence Level IV-V) and Non-Immersive Virtual Reality (n = 12, Evidence Level II-V). There was strong evidence against Gross Motor Activity Training plus progressive resistance exercise without additional physiotherapy (n = 4, all Evidence Level II). Interpretation: Active, performance-focused exercise with variable practice opportunities improves gross motor function in ambulant/semi-ambulant children with CP. Implications for rehabilitation Active exercise interventions improve gross motor function of ambulant/semi-ambulant children with cerebral palsy. Gross Motor Activity Training is the most common and effective intervention. Practice variability is essential to improve gross motor function. Participation was rarely measured and requires further research, particularly in interventions that embed real-world participation opportunities like Modified Sport. abstract_id: PUBMED:22850757 Can a six-week exercise intervention improve gross motor function for non-ambulant children with cerebral palsy? A pilot randomized controlled trial. Objective: To determine the effect of a six-week exercise intervention on gross motor function for non-ambulant children with cerebral palsy. Design: A parallel arm randomized controlled trial. Setting: Four special schools. Participants: Thirty-five children aged 8-17 with bilateral cerebral palsy; Gross Motor Function Classification System levels IV-V. Method: Participants were randomly allocated to a static bike group, a treadmill group or control group. Participants in the bike and treadmill groups received exercise training sessions, three times weekly for six weeks. The control group received their usual care. Blinded assessments were performed at baseline and six weeks and followed up at 12 and 18 weeks. Outcome Measures: Gross Motor Function Measures GMFM-66, GMFM-88D and GMFM-88E. 
Results: At six weeks significant differences were found in GMFM-88D scores between the bike group and the control group, and the treadmill group and the control group (P < 0.05). The mean change (SD) in GMFM-88D score was 5.9 (6.8) for the bike group; 3.7 (4.4) for the treadmill group and 0.5 (1.9) for the control group. No significant differences were found for GMFM-66 or GMFM-88E scores between the bike group and control group, or the treadmill group and control group, although trends of improvement were observed for both exercise groups. The improvements observed declined during the follow-up period. Conclusion: This study provides preliminary evidence that exercising on a bike or treadmill may provide short-term improvements in gross motor function for non-ambulant children with cerebral palsy. This needs to be tested in a large-scale randomized trial. abstract_id: PUBMED:31263742 Effect of a 10-Week Aquatic Exercise Training Program on Gross Motor Function in Children With Spastic Cerebral Palsy. Introduction. Cerebral palsy (CP) is caused by an injury to the developing brain, and abnormal gross motor function is a hallmark of CP. Properly structured exercises on land have been reported to be effective in improving functional performance in children with CP while only a few have been documented on aquatic therapy. Objective. To investigate the effect of a 10-week aquatic exercise training program on gross motor function in children with spastic CP. Methods. Thirty participants aged 1 to 12 years were randomized into the experimental and control groups. Both groups received manual passive stretching and functional training exercises, depending on their level of motor impairment, either in water (temperature 28°C to 32°C) or on land. Each exercise training session lasted for about 1 hour 40 minutes, twice per week for 10 weeks in both groups. Measurement of gross motor function was done using Gross Motor Function Measure (GMFM-88) at baseline and after 4 weeks, 8 weeks, and 10 weeks of intervention. Both groups were compared for differences in change in gross motor function using Mann-Whitney U test. The level of significance was set at P < .05. Results. Only the experimental group showed significant improvement (P < .05) in all dimensions of gross motor function except for walking, running, and jumping (P = .112). Statistically significant difference (P < .05) was found between both groups for all dimensions of gross motor function after 10 weeks of intervention. Conclusion. Aquatic exercise training program is effective in the functional rehabilitation of children with spastic CP. abstract_id: PUBMED:31813423 Effect of task-oriented training on balance and motor function of ambulant children with cerebral palsy. Introduction And Objectives: The study evaluated the effect of task-oriented training (TOT) on the motor function (MF) and balance of ambulant children with cerebral palsy (CP). Materials And Methods: A total of 46 children were randomised into TOT group (n=23) and Control Group (CG [n=23]), but 39 children completed the study. Balance and MF were assessed at baseline, 6th and 12th weeks and 6 weeks post-intervention. Data were analysed with repeated measures ANOVA, Friedman's, Mann-Whitney U, Student's-t and post hoc tests at α≤0.05. Results: The two groups were comparable in all baseline scores (P>0.05). At the 6th week, significant between-group difference was observed in MF only [TOT=81.9 (18.5); CG=72.8 (19.4)] (P<0.05).
There were significant between-group differences in MF [TOT=88.8 (9.4); CG=75.5 (18.5); P<0.05] and balance (TOT=9.4±4.5; CG=13.6±6.9; P<0.05) at the 12th week (P<0.05) and 6 weeks post-intervention (P<0.05). Conclusion: TOT improved the balance and MF of ambulant children with CP. abstract_id: PUBMED:37649564 The efficacy of Equine Assisted Therapy intervention in gross motor function, performance, and spasticity in children with Cerebral Palsy. Purpose: To evaluate the efficacy of Equine Assisted Therapy in children with Cerebral Palsy, in terms of gross motor function, performance, and spasticity as well as whether this improvement can be maintained for 2 months after the end of the intervention. Methods: Children with Cerebral Palsy participated in this prospective cohort study. The study lasted for 28 weeks, of which the equine assisted therapy lasted 12 weeks taking place once a week for 30 min. Repeated measures within the subject design were used for the evaluation of each child's physical performance and mental capacity consisting of six measurements: Gross Motor Function Measure-88 (GMFM-88), Gross Motor Performance Measure (GMPM), Gross Motor Function Classification System (GMFCS), Modified Ashworth Scale (MAS) and Wechsler Intelligence Scale for Children (WISC III). Results: Statistically significant improvements were achieved for 31 children in Gross Motor Function Measure and all its subcategories (p < 0.005), also in total Gross Motor Performance Measure and all subcategories (p < 0.005). These Gross Motor Function Measure results remained consistent for 2 months after the last session of the intervention. Regarding spasticity, although an improving trend was seen, this was not found to be statistically significant. Conclusion And Implications: Equine Assisted Therapy improves motor ability (qualitatively and quantitatively) in children with Cerebral Palsy, with clinical significance in gross motor function. abstract_id: PUBMED:26221161 Effects of Neurodevelopmental Therapy on Gross Motor Function in Children with Cerebral Palsy. Objective: Neurodevelopmental treatments are an advanced therapeutic approach practiced by experienced occupational therapists for the rehabilitation of children with cerebral palsy. The primary challenge in children with cerebral palsy is gross motor dysfunction. We studied the effects of neurodevelopmental therapy on gross motor function in children with cerebral palsy. Materials & Methods: In a quasi-experimental design, 28 children with cerebral palsy were randomly divided into two groups. Neurodevelopmental therapy was given to a first group (n=15) with a mean age of 4.9 years; and a second group with a mean age of 4.4 years (n=13) who were the control group. All children were evaluated with the Gross Motor Function Measure. Treatments were scheduled for three one-hour sessions per week for 3 months. Results: We obtained statistically significant differences in the values between the baseline and post treatment in two groups. The groups were significantly different in lying and rolling (P=0.000), sitting (0.002), crawling and kneeling (0.004), and standing abilities (P=0.005). However, there were no significant differences in walking, running, and jumping abilities between the two groups (0.090). Conclusion: We concluded that the neurodevelopmental treatment improved gross motor function in children with cerebral palsy in four dimensions (lying and rolling, sitting, crawling and kneeling, and standing).
However, walking, running, and jumping did not improve significantly. abstract_id: PUBMED:27512281 Effects of concentric and eccentric control exercise on gross motor function and balance ability of paretic leg in children with spastic hemiplegia. [Purpose] This study examines the effect of concentric and eccentric control training of the paretic leg on balance and gross motor function in children with spastic hemiplegia. [Subjects and Methods] Thirty children with spastic hemiplegia were randomly divided into experimental and control groups. In the experimental group, 20 min of neurodevelopmental therapy and 20 min of concentric and eccentric control exercise were applied to the paretic leg. In the control group, 40 min of neurodevelopmental therapy was applied. The Pediatric Balance Scale test and standing and gait items of the Gross Motor Function Measure were evaluated before and after intervention. [Results] In the experimental group, Gross Motor Function Measure and Pediatric Balance Scale scores statistically significantly increased after the intervention. The control group showed no statistically significant difference in either score after the intervention. [Conclusion] Concentric and eccentric control exercise therapy in children with spastic hemiplegia can be effective in improving gross motor function and balance ability, and can be used to solve functional problems in a paretic leg. abstract_id: PUBMED:36571210 Synergistic effect of functional strength training and cognitive intervention on gross motor function in children with cerebral palsy. Background: Cerebral palsy (CP) is a posture and movement disorder, however; it often includes disturbance of different aspects of cognitive function. This study aimed to investigate if combined functional strength training (FST) and cognitive intervention are more effective than either of them alone on gross motor function in children with spastic diplegic CP. Methods: Sixty-four children with spastic diplegic CP, with ages ranging from 8 to 12 years, were assigned randomly into four treatment groups; Group I; FST, group II; cognitive training, group III; combined FST and cognitive training, group IV; conventional physical therapy. The Gross Motor Function Measure (GMFM-88) was used to assess gross motor function at baseline, post-treatment, and 6 months follow-up. Results: Group III achieved a significant improvement in GMFM-88 when compared to other groups post-treatment and at follow-up. Conclusion: This study suggests that combined lower limb FST and cognitive intervention had the potential to produce significantly more favorable effects than the single use of either of them on gross motor function in children with spastic diplegia. abstract_id: PUBMED:27390440 Effect of physical therapy frequency on gross motor function in children with cerebral palsy. [Purpose] This study attempted to investigate the effect of physical therapy frequency based on neurodevelopmental therapy on gross motor function in children with cerebral palsy. [Subjects and Methods] The study sample included 161 children with cerebral palsy who attended a convalescent or rehabilitation center for disabled individuals or a special school for children with physical disabilities in South Korea. Gross Motor Function Measure data were collected according to physical therapy frequency based on neurodevelopmental therapy for a period of 1 year. 
[Results] The correlation between physical therapy frequency and Gross Motor Function Measure scores for crawling and kneeling, standing, walking, running and jumping, and rolling, and the Gross Motor Function Measure total score was significant. The differences in gross motor function according to physical therapy frequency were significant for crawling, kneeling, standing, and Gross Motor Function Measure total score. The differences in gross motor function according to frequency of physical therapy were significant for standing in Gross Motor Function Classification System Level V. [Conclusion] Intensive physical therapy was more effective for improving gross motor function in children with cerebral palsy. In particular, crawling and kneeling, and standing ability showed greater increases with intensive physical therapy. abstract_id: PUBMED:27967298 Effects of Aquatic Intervention on Gross Motor Skills in Children with Cerebral Palsy: A Systematic Review. Aims: To review the literature on the effects of aquatic intervention on gross motor skills for children with cerebral palsy (CP). Data Sources: Six databases were searched from inception to January 2016. Review Methods: Aquatic studies for children aged 1-21 years with any type or CP classification and at least one outcome measuring gross motor skills were included. Information was extracted on study design, outcomes, and aquatic program type, frequency, duration, and intensity. Quality was rated using the Centre of Evidence-Based Medicine: Levels of Evidence and the PEDro scale. Results: Of the 11 studies which met inclusion criteria, only two used randomized control trial design, and the results were mixed. Quality of evidence was rated as moderate to high for only one study. Most studies used quasi-experimental designs and reported improvements in gross motor skills for within group analyses after aquatic programs were held for two to three times per week and lasting for 6-16 weeks. Participants were classified according to the Gross Motor Function Classification System (GMFCS) levels I-V, and were aged 3-21 years. Mild to no adverse reactions were reported. Conclusions: Evidence on aquatic interventions for ambulatory children with CP is limited. Aquatic exercise is feasible and adverse effects are minimal; however, dosing parameters are unclear. Further research is needed to determine aquatic intervention effectiveness and exercise dosing across age categories and GMFCS levels. Answer: Yes, a six-week exercise intervention can improve gross motor function for non-ambulant children with cerebral palsy. A pilot randomized controlled trial that included thirty-five children aged 8-17 with bilateral cerebral palsy and Gross Motor Function Classification System levels IV-V found significant differences in GMFM-88D scores between the intervention groups (static bike group and treadmill group) and the control group at six weeks. The mean change in GMFM-88D score was 5.9 for the bike group and 3.7 for the treadmill group, compared to 0.5 for the control group. Although no significant differences were found for GMFM-66 or GMFM-88E scores, trends of improvement were observed for both exercise groups. However, the improvements observed declined during the follow-up period, indicating that the benefits may not be sustained without ongoing intervention (PUBMED:22850757).
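To put the pilot-trial figures quoted in the answer above into effect-size terms, the sketch below computes Cohen's d from the reported GMFM-88D mean changes and standard deviations. Because the trial reports only 35 children split across three arms without per-arm change-score denominators, a simple unweighted pooled SD is used here; that simplification, and the resulting values (on the order of 1, i.e. large by conventional benchmarks), are illustrative approximations rather than published results.

import math

def cohens_d(mean1, sd1, mean2, sd2):
    """Effect size using a simple (unweighted) pooled SD.

    Group sizes are ignored because per-arm denominators for the change
    scores are not reported, so this is an approximation for illustration.
    """
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (mean1 - mean2) / pooled_sd

# Mean change (SD) in GMFM-88D reported by the pilot trial (PUBMED:22850757)
bike, treadmill, control = (5.9, 6.8), (3.7, 4.4), (0.5, 1.9)
print("bike vs control:      d = %.2f" % cohens_d(*bike, *control))
print("treadmill vs control: d = %.2f" % cohens_d(*treadmill, *control))

Such back-of-the-envelope effect sizes help explain why the authors call for a larger trial: the point estimates look substantial, but the wide SDs and small groups leave considerable uncertainty.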
Instruction: Follow-up after cystectomy: regularly scheduled, risk adjusted, or symptom guided? Abstracts: abstract_id: PUBMED:24462548 Follow-up after cystectomy: regularly scheduled, risk adjusted, or symptom guided? Patterns of recurrence, relapse presentation, and survival after cystectomy. Aims: To evaluate the efficacy of follow-up based on the patterns of recurrence, relapse presentation and survival after cystectomy, and to define a risk adjusted follow-up schedule. Patients And Methods: The records of 343 patients with regular follow-up after cystectomy were reviewed for primary site of recurrence, accompanying symptoms, means of recurrence diagnosis, and clinicopathological factors. Based on Cox proportional hazard models, and the results of imaging studies low and high risk groups are identified and a risk adjusted follow-up protocol is proposed. Results: The risk of a recurrence was related to increasing pT, tumour positive lymph nodes, tumour positive surgical margins, and pre-operative dilatation of the upper urinary tract, and low and high risk groups were defined consequently. 84% of all recurrences occurred within 2 years, with only one recurrence beyond 2 years in the low risk group. Although the minority of all patients (34%) is asymptomatic at time of recurrence, symptomatic recurrences were adversely associated with survival. CT-scans and chest X-rays accounted for 90% of the diagnostic tools to detect a recurrence in patients without symptoms. Conclusions: Asymptomatic patients may benefit from early treatment after disease recurrence. A risk adjusted follow-up strategy based on stage of disease and additional clinicopathological factors can dichotomise patients at high and low risk for recurrence. The small benefit in survival after early detection has to be confirmed in future studies, and weighed against the available treatment options of recurrences and their subsequent costs. abstract_id: PUBMED:31580507 Efficacy and safety of scheduled early endoscopic ultrasonography-guided ethanol reinjection for patients with pancreatic neuroendocrine tumors: Prospective pilot study. Endoscopic ultrasonography (EUS)-guided ethanol injection was recently proposed for treatment of patients with small pancreatic neuroendocrine tumors (p-NET); however, tips on how to carry out safe and effective procedures are unclear. We launched a pilot study for scheduled early EUS-guided ethanol reinjection for small p-NET. Major eligibility criteria were presence of pathologically diagnosed grade (G) 1 or G2, tumor size ≤2 cm and being a poor or rejected candidate for surgery. For the treatment, we used a 25-gauge needle and pure ethanol. Contrast-enhanced computed tomography (CE-CT) was carried out on postoperative day 3, and if enhanced areas of the tumor were still apparent, an additional session was scheduled during the same hospitalization period. Primary endpoint was complete ablation rate at 1 month after treatment, and secondary endpoint was procedure-related adverse events. A total of five patients were treated. Median size of the tumor was 10 (range: 7-14) mm. Of the five patients, three underwent an additional session. Median volume of ethanol injection per session was 0.8 (range: 0.3-1.0) mL, and the total was 1.0 (0.9-1.8) mL. Complete ablation was achieved in four of the five tumors (80%) with no adverse events. During 1 year of follow up, none of the patients reported any procedure-related adverse events, and no recurrence of tumor. 
Scheduled early EUS-guided ethanol reinjection appears to be safe and effective for treating small p-NET (UMIN number: 000018834). abstract_id: PUBMED:38350116 Guided Imagery for Symptom Management of Patients with Life-Limiting Illnesses: A Systematic Review of Randomized Controlled Trials. Background: Patients with life-limiting illnesses receiving palliative care have a high symptom burden that can be challenging to manage. Guided imagery (GI), a complementary and integrative therapy in which patients are induced to picture mental images with sensory components, has proven in quasi-experimental studies to be effective as a complementary therapy for symptom management. Objective: To systematically review randomized controlled trials that report evidence of guided imagery for symptom management in patients with life-limiting illnesses. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses guideline was followed for this review and the search strategy was applied in Medline, CINHAL, and Web of Science. The quality of articles was evaluated using the Cochrane Collaboration's Risk-of-Bias Tool 2 (RoB 2). The results are presented using the Guidance on the Conduct of Narrative Synthesis in Systematic Reviews. Results: A total of 8822 studies were initially identified through the search strategy, but after applying exclusion criteria, 14 randomized controlled trials were included in this review. The quality assessment revealed that four studies had a high risk of bias, nine had some concerns, and one had a low risk of bias. Out of the 14 studies, 6 evaluated oncological diagnosis, while the remaining 8 focused on nononcological diagnoses across 6 different diseases. GI was found to be effective in managing symptoms in 10 out of the 14 studies. Regardless of the disease stage, patients who received guided imagery experienced relief from anxiety, depression, pain, sleep disturbances, and fatigue. Conclusion: GI therapy has shown promising results regarding symptom management in palliative care patients with life-limiting illnesses at different stages. abstract_id: PUBMED:35168565 Participation in scheduled asthma follow-up contacts and adherence to treatment during 12-year follow-up in patients with adult-onset asthma. Background: Poor treatment compliance is a common problem in the treatment of asthma. To our knowledge, no previous long-term follow-up studies exist on how scheduled asthma follow-up contacts occur in primary health care (PHC) versus secondary care and how these contacts relate to adherence to medication and in participation to further scheduled asthma contacts. The aim of this study was to evaluate occurrence of scheduled asthma contacts and treatment compliance in PHC versus secondary care, and to identify the factors associated with non-participation to scheduled contacts. Methods: Patients with new adult-onset asthma (n = 203) were followed for 12 years in a real-life asthma cohort of the Seinäjoki Adult Asthma Study (SAAS). The first contacts were mainly carried out in secondary care and therefore the actual follow-up time including PHC visits was 10 years. Results: A majority (71%) of the patients had ≥ 2 scheduled asthma contacts during 10-year follow-up and most of them (79%) mainly in PHC. Patients with follow-up contacts mainly in PHC had better adherence to inhaled corticosteroid (ICS) medication during the whole 12-year period compared to patients in secondary care. 
In the study population, 29% of the patients had only 0-1 scheduled asthma contacts during the follow-up. Heavy alcohol consumption predicted poor participation in scheduled contacts. Conclusions: Patients with mainly PHC scheduled asthma contacts were more adherent to ICS medication than patients in the secondary care. Based on our results it is necessary to pay more attention to actualization of asthma follow-up visits and systematic assessment of asthma patients including evaluation of alcohol consumption. Trial registration: Seinäjoki Adult Asthma Study is retrospectively registered at www.ClinicalTrials.gov with identifier number NCT02733016. Registered 11 April 2016. abstract_id: PUBMED:35694172 Association of the Weight-Adjusted-Waist Index With Risk of All-Cause Mortality: A 10-Year Follow-Up Study. Background: To explore the relationship between weight-adjusted-waist index (WWI) and the risk of all-cause mortality in one urban community-dwelling population in China. Methods: This is a prospective cohort study with a sample of 1,863 older adults aged 60 years or over in Beijing who completed baseline examinations in 2009-2010 and a 10-year follow-up in 2020. WWI was calculated as waist circumference (cm) divided by the square root of weight (kg). Cox regression analysis was performed to investigate the significance of the association of WWI with all-cause mortality. The areas under the receiver operating characteristic (ROC) curves were used to compare the ability of each obesity index to predict mortality. Results: During a median follow-up of 10.8 years (1.0 to 11.3 years), 339 deaths occurred. After adjustment for covariates, the hazard ratios (HRs) for all-cause mortality progressively increased across the tertiles of WWI. Compared with the lowest WWI category (tertile 1, <10.68 cm/√kg), the HRs (95% confidence intervals (CIs)) for all-cause mortality were 1.58 (1.12-2.22) for WWI 10.68 to 11.24 cm/√kg and 2.66 (1.80-3.92) for WWI ≥11.25 cm/√kg. In stratified analyses, the relationship between WWI and the risk of all-cause mortality persisted. The area under ROC for WWI was higher for all-cause mortality than for BMI, WHtR, and WC. Conclusion: WWI was associated with a higher risk for all-cause mortality, and the association was more robust with the highest WWI category. abstract_id: PUBMED:34744872 Predicting Treatment Outcomes in Guided Internet-Delivered Therapy for Anxiety Disorders-The Role of Treatment Self-Efficacy. Aim: Guided Internet-delivered therapy has been shown to be an effective treatment format for anxiety disorders. However, not all patients experience improvement, and although predictors of treatment outcome have been identified, few are consistent over time and across studies. The current study aimed to examine whether treatment self-efficacy (self-efficacy regarding the mastery of obstacles during treatment) in guided Internet-delivered therapy for anxiety disorders in adults could be a predictor of lower dropout rates and greater symptom reduction. Method: The analyzed data come from an open effectiveness study including 575 patients receiving guided Internet-delivered therapy for panic disorder or social anxiety disorder. Treatment self-efficacy was measured at pre-treatment. Symptom reduction was measured at 10 measurement points, including a 6-month follow-up. A mixed linear model was applied in the analysis. Results: The results showed that high treatment self-efficacy was a predictor of both lower dropout rates and greater symptom reduction.
Significant interaction effects between time and treatment self-efficacy were found for several of the nine modules that constitutes the treatment program, suggesting that treatment self-efficacy could be a moderator of symptom reduction. Three of nine modules in the panic disorder treatment and six of nine in the social anxiety disorder treatment showed significant interaction effects. Conclusion: The results suggest that measuring treatment self-efficacy may be a valuable tool to identify patients at risk of dropping out, and that treatment self-efficacy could be a predictor and moderator of symptom reduction in guided Internet-delivered therapy. The implications of the results are discussed. abstract_id: PUBMED:33895431 Guided graded exercise self-help for chronic fatigue syndrome: Long term follow up and cost-effectiveness following the GETSET trial. Objective: The GETSET trial found that guided graded exercise self-help (GES) improved fatigue and physical functioning more than specialist medical care (SMC) alone in adults with chronic fatigue syndrome (CFS) 12 weeks after randomisation. In this paper, we assess the longer-term clinical and health economic outcomes. Methods: GETSET was a randomised controlled trial of 211 UK secondary care patients with CFS. Primary outcomes were the Chalder fatigue questionnaire and the physical functioning subscale of the short-form-36 survey. Postal questionnaires assessed the primary outcomes and cost-effectiveness of the intervention 12 months after randomisation. Service costs and quality-adjusted life years (QALYs) were combined in a cost-effectiveness analysis. Results: Between January 2014 and March 2016, 164 (78%) participants returned questionnaires 15 months after randomisation. Results showed no main effect of intervention arm on fatigue (chi2(1) = 4.8, p = 0.03) or physical functioning (chi2(1) = 1.3, p = 0.25), adjusting for multiplicity. No other intervention arm or time*arm effect was significant. The short-term fatigue reduction was maintained at long-term follow-up for participants assigned to GES, with improved fatigue from short- to long-term follow up after SMC, such that the groups no longer differed. Healthcare costs were £85 higher for GES and produced more QALYs. The incremental cost-effectiveness ratio was £4802 per QALY. Conclusions: The short-term improvements after GES were maintained at long-term follow-up, with further improvement in the SMC group such that the groups no longer differed at long-term follow-up. The cost per QALY for GES compared to SMC alone was below the usual threshold indicating cost-effectiveness, but with uncertainty around the result. abstract_id: PUBMED:16037901 Surveillance in women with early breast cancer, systematic versus symptom guided follow-up Nearly all national (AGO, DKG) and international guide lines (e. g. ASCO) for follow-up of breast cancer patients do not explicitly recommend regular laboratory and radiological/ultrasound screening procedures. According to these guide lines, follow-up should be focused on the breast, only patients with possibly tumour related symptoms should be screened for metastatic disease. The rejection of more time-consuming and costlier follow-up examinations remains a contradiction to established follow-up guide lines for other solid tumours. In addition, treatment options for metastatic breast cancer disease have improved continuously over the last years. However, treatment options are considerably limited in advanced disease, if e. g. 
symptoms like dyspnoea or jaundice are already present at first diagnosis of metastatic disease. Therefore we will review available data of older studies as well as discuss arguments for a systematic surveillance in high-risk breast cancer patients. Overall, symptom guided follow-up seems to be adequate for patients with small primary tumours, no lymph node involvement and therefore a high curative probability, whereas in the authors' opinion systematic surveillance should be recommended for high risk patients even in the absence of symptoms. All patients, however, should be fully informed about the possibility of metastatic disease development and should be enabled to select the quality of their postoperative follow-up. abstract_id: PUBMED:25671122 Factors associated with non-adherence to scheduled medical follow-up appointments among Cameroonian children requiring HIV care: a case-control analysis of the usual-care group in the MORE CARE trial. Background: A better understanding of why HIV-exposed/infected children fail to attend their scheduled follow-up medical appointments for HIV-related care would allow for interventions to enhance the delivery of care. The aim of this study was to determine characteristics of the caregiver-child dyad (CCD) associated with children's non-adherence to scheduled follow-up medical appointments in HIV programs in Cameroon. Methods: We conducted a case-control analysis of the usual-care group of CCDs from the MORE CARE trial, in which the effect of mobile phone reminders for HIV-exposed/infected children in attending follow-up appointments was assessed from January to March 2013. For this study, the absence of a child at their appointment was considered a case and the presence of a child at their appointment was defined as a control. We used three multivariate binary logistic regression analyses. The best-fit model was the one which had the smallest chi-square value with the Hosmer-Lemeshow test (HLχ²). Magnitudes of associations were expressed by odds ratio (OR), with a p-value <0.05 considered as statistically significant. Results: We included 30 cases and 31 controls. Our best-fit model which considered the sex of the adults and children separately (HL χ²=3.5) showed that missing scheduled medical appointments was associated with: lack of formal education of the caregiver (OR 29.1, 95% CI 1.1-777.0; p=0.044), prolonged time to the next appointment/follow-up (OR [1 week increase] 1.4, 95% CI 1.03-2.0; p=0.032), and being a female child (OR 5.2, 95% CI 1.2-23.1; p=0.032). One model (HLχ²=10.5) revealed that woman-boy pairs adhered less to medical appointments compared to woman-girl pairs (OR 4.9, 95% CI 1.05-22.9; p=0.044). Another model (HLχ²=11.1) revealed that man-boy pairs were more likely to attend appointments compared to woman-girl pairs (OR 0.23, 95% CI 0.06-0.93; p=0.039). There were no statistical associations for the ages of the children or the caregivers, the study sites, or the HIV status (confirmed vs. suspected) of the children. Conclusion: The profile of children who would not attend follow-up medical appointments in an HIV program was: a female, with a caregiver who has had no formal education, and with a longer follow-up appointment interval. There is a possibility that female children are favored by female caregivers and that male children are favored by male caregivers when they come to medical care.
abstract_id: PUBMED:27189862 Prevalence of α(+)-Thalassemia in the Scheduled Tribe and Scheduled Caste Populations of Damoh District in Madhya Pradesh, Central India. This study was carried out to ascertain the allelic frequency of α(+)-thalassemia (α(+)-thal) in Scheduled caste and scheduled tribe populations of the Damoh district of Madhya Pradesh, India. Random blood samples of Scheduled tribe (267) and Scheduled caste (168), considering the family as a sampling unit, were analyzed for the presence of the -α(3.7) (rightward) (NG_000006.1: g.34164_37967del3804) and -α(4.2) (leftward) (AF221717) deletions. α(+)-Thal was significantly higher in the Scheduled tribals (77.9%) as compared to the scheduled caste population (9.0%). About 58.0% scheduled tribals carried at least one chromosome with the -α(3.7) deletion and 20.0% scheduled tribals carried the -α(4.2) deletion. Frequency for the -α(3.7) allele was 0.487 in the scheduled tribal populations in comparison to 0.021 in scheduled castes. Allelic frequency for -α(4.2) was 0.103 and 0.024, respectively, in the above communities. No Hardy-Weinberg equilibrium for α-thal gene (p < 0.05) was detected in the tribal population, indicating the presence of selection pressures in favor of α-thal mutation and adaptation. Answer: The follow-up after cystectomy can be approached in different ways, including regularly scheduled, risk-adjusted, or symptom-guided strategies. The study by PUBMED:24462548 suggests that a risk-adjusted follow-up strategy based on stage of disease and additional clinicopathological factors can dichotomize patients into high and low risk for recurrence. This study found that the risk of recurrence was related to factors such as increasing pT, tumor-positive lymph nodes, tumor-positive surgical margins, and pre-operative dilatation of the upper urinary tract. The majority of recurrences (84%) occurred within 2 years, with only one recurrence beyond 2 years in the low-risk group. The study also noted that symptomatic recurrences were adversely associated with survival, and CT scans and chest X-rays were the primary diagnostic tools for detecting recurrence in asymptomatic patients. The authors concluded that asymptomatic patients might benefit from early treatment after disease recurrence and that the small benefit in survival after early detection needs to be confirmed in future studies and weighed against the costs of treatment options for recurrences. In summary, the study supports a risk-adjusted follow-up after cystectomy, which takes into account the stage of the disease and other clinicopathological factors to determine the frequency and type of follow-up required for each patient. This approach aims to balance the benefits of early detection of recurrences with the costs and potential overtreatment associated with more intensive follow-up schedules.
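The risk-adjusted logic described above lends itself to a simple decision sketch. The Python fragment below encodes the dichotomisation into low- and high-risk groups using the factors named in PUBMED:24462548; note that the exact published rule, the weighting of individual factors, and the surveillance intervals are not given in the abstract, so the any-adverse-factor rule and the placeholder schedules here are assumptions made purely for illustration and are not clinical guidance.

from dataclasses import dataclass

@dataclass
class CystectomyCase:
    # Clinicopathological factors reported in PUBMED:24462548 as associated
    # with recurrence risk after cystectomy.
    advanced_pt: bool                    # increasing pT stage
    node_positive: bool                  # tumour-positive lymph nodes
    positive_margins: bool               # tumour-positive surgical margins
    preop_upper_tract_dilatation: bool   # pre-operative upper urinary tract dilatation

def risk_group(case: CystectomyCase) -> str:
    """Dichotomise into 'high' vs 'low' risk.

    Treating *any* adverse factor as high risk is an illustrative assumption;
    the abstract defines the groups from these factors but does not publish
    the exact rule.
    """
    adverse = (case.advanced_pt or case.node_positive or
               case.positive_margins or case.preop_upper_tract_dilatation)
    return "high" if adverse else "low"

def follow_up_intensity(group: str) -> str:
    # Illustrative placeholders: the abstract reports that 84% of recurrences
    # occur within 2 years (with only one recurrence beyond 2 years in the
    # low-risk group) and that CT scans and chest X-rays detected 90% of
    # asymptomatic recurrences, which motivates front-loading imaging into
    # the first 2 years for high-risk patients.
    if group == "high":
        return "regular cross-sectional imaging (CT chest/abdomen) concentrated in the first ~2 years"
    return "less frequent, longer-interval surveillance"

case = CystectomyCase(advanced_pt=True, node_positive=False,
                      positive_margins=False, preop_upper_tract_dilatation=False)
group = risk_group(case)
print(group, "->", follow_up_intensity(group))

The point of the sketch is simply to show how the factors the abstract identifies could be turned into a reproducible stratification step; any real schedule would need the study's published protocol and multidisciplinary input.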
Instruction: Patterns of care in hilar node-positive (N1) non-small cell lung cancer: A missed treatment opportunity? Abstracts: abstract_id: PUBMED:27207124 Patterns of care in hilar node-positive (N1) non-small cell lung cancer: A missed treatment opportunity? Background: For patients with non-small cell lung cancer (NSCLC) metastatic to hilar lymph nodes (N1), guidelines recommend surgery and adjuvant chemotherapy in operable patients and chemoradiation (CRT) for those deemed inoperable. It is unclear how these recommendations are applied nationally, however. Methods: The National Cancer Database was queried to identify patients with a tumor <7 cm (T1/T2) with clinically positive N1 nodes. Patients undergoing CRT (comprising chemotherapy and radiation >45 Gy) or surgical resection were considered adequately treated. Remaining patients were classified as receiving inadequate or no treatment. Results: Of the 20,366 patients who met the study criteria, 63% underwent adequate treatment (48% surgical resection, 15% CRT). The remainder received inadequate treatment (23%) or no treatment (14%). In univariate analysis, the patients receiving inadequate or no treatment were older, tended to be non-Caucasian, had a lower income, and had a higher comorbidity score. Patients undergoing adequate treatment had improved overall survival (OS) compared with those receiving inadequate or no treatment (median OS, 34.0 months vs 11.7 months; P < .001). Of those receiving adequate treatment, logistic regression identified several variables associated with surgical resection, including treatment at an academic facility, Caucasian race, and annual income >$35,000. Increasing age and T2 stage were associated with nonoperative management. Following propensity score matching of 2308 patient pairs undergoing surgery or CRT, resection was associated with longer median OS (34.1 months vs 22.0 months; P < .001). Conclusions: Despite the established guidelines, many patients with T1-2N1 NSCLC do not receive adequate treatment. Surgery is associated with prolonged survival in selected patients. Surgical input in the multidisciplinary evaluation of these patients should be mandatory. abstract_id: PUBMED:35681659 Stereotactic Body Radiation Therapy (SBRT) for Oligorecurrent/Oligoprogressive Mediastinal and Hilar Lymph Node Metastasis: A Systematic Review. Introduction: Mediastinal or hilar lymph node metastases are a challenging condition in patients affected by solid tumors. Stereotactic body radiation therapy (SBRT) could play a crucial role in the therapeutic management and in the so-called "no-fly zone", delivering high doses of radiation in relatively few treatment fractions with excellent sparing of healthy surrounding tissues and low toxicity. The aim of this systematic review is to evaluate the feasibility and tolerability of SBRT in the treatment of mediastinal and hilar lesions with particular regard to the radiotherapy doses, dose constraints for organs at risk, and clinical outcomes. Materials And Methods: Two blinded investigators performed a critical review of the Medline, Web of Knowledge, Google Scholar, Scopus, and Cochrane databases according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement (PRISMA), starting from a specific question: What is the clinical impact of SBRT for the treatment of oligorecurrent/oligoprogressive mediastinal and hilar metastasis?
All retrospective and prospective clinical trials published in English up to February 2022 were analyzed. Results: A total of 552 articles were identified and 12 of them were selected with a total number of 478 patients treated with SBRT for mediastinal or hilar node recurrence. All the studies are retrospective, published between 2015 and 2021 with a median follow-up ranging from 12 to 42.2 months. Studies following SBRT for lung lesions or retreatments after thorax radiotherapy for stage III lung cancer were also included. The studies showed extensive heterogeneity in terms of patient and treatment characteristics. Non-small cell lung cancer was the most frequently reported histology. Different dose schemes were used, with a higher prevalence of 4-8 Gy in 5 or 6 fractions, but dose escalation was also used up to 52 Gy in 4 fractions with dose constraints mainly derived from RTOG 0813 trial. The radiotherapy technique most frequently used was volumetric modulated arc therapy (VMAT) with a median PTV volume ranging from 7 to 25.7 cc. The clinical outcome seems to be very encouraging with 1-year local control (LC), overall survival (OS) and progression-free survival (PFS) rates ranging from 84 to 94%, 53 to 88% and 23 to 53.9%, respectively. Half of the studies did not report toxicity greater than G3 and only five cases of fatal toxicity were reported. Conclusions: From the present review, it is not possible to draw definitive conclusions because of the heterogeneity of the studies analyzed. However, SBRT appears to be a safe and effective option in the treatment of mediastinal and hilar lymph node recurrence, with a good toxicity profile. Its use in clinical practice is still limited, and there is extensive heterogeneity in patient selection and fractionation schedules. Good performance status, small PTV volume, absence of previous thoracic irradiation, and administration of a high biologically effective dose (BED) seem to be factors that correlate with greater local control and better survival rates. In the presence of symptoms related to the thoracic lymph nodes, SBRT determines a rapid control that lasts over time. We look forward to the prospective studies that are underway for definitive conclusions. abstract_id: PUBMED:30746239 Evaluation of lobar lymph node metastasis in non-small cell lung carcinoma using modified total lesion glycolysis. Background: Volumetric parameters based on 3-dimensional reconstruction have recently been introduced for cancer staging. We aimed to improve the ability to diagnose hilar lymph node metastasis in patients with non-small cell lung cancer. Methods: We evaluated 142 patients with non-small cell lung cancer who underwent right upper lobectomy and radical lymph node dissection. Metastatic involvement of right upper lobar lymph nodes was assessed using high-resolution computed tomography (HRCT) and 18F-2-floro-2-deoxyglucose-positron emission tomography/computed tomography (FDG-PET/CT). Results: On receiver operating characteristic (ROC) curve analysis, the area under the curves (AUC) for short axis, maximum of standardized uptake value (SUVmax), total lesion glycolysis (TLG) and modified TLG (mTLG) were 0.79, 0.77, 0.76, and 0.87, respectively. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of mTLG, using the optimal cut off value (2.45), for diagnosis of lobar lymph node metastasis were 71%, 88%, 44%, and 96%, respectively. 
Hilar asymmetric uptake (HAU) of FDG was larger in true-positive cases than in false-negative cases (P<0.01). Furthermore, the size of metastatic foci in the lymph node was smaller in false-negative cases (P=0.012). Conclusions: Modified TLG is a good parameter to diagnose metastatic right upper lobar lymph nodes. Micrometastasis in the lymph node is difficult to predict using the current diagnostic method. However, more careful evaluation is required in patients with symmetric FDG accumulation at hilar region because hilar lymph nodes respond to various causes such as benign pulmonary diseases. abstract_id: PUBMED:29767232 Free-floating cancer cells in lymph node sinuses of hilar lymph node-positive patients with non-small cell lung cancer. Previous studies demonstrated that free-floating cancer cells (FFCCs) in the lymph node sinuses were of prognostic significance for colorectal and gastric cancer. The present study investigated the clinical significance of detecting FFCCs using Fast Red staining for cytokeratin in stage I/II non-small cell lung cancer (NSCLC) patients and hilar lymph node positive NSCLC patients who underwent curative resection. Between 2002 and 2011, a total of 164 patients (including 22 hilar lymph node positive patients) were investigated. Resected lymph nodes were stained for cytokeratin using an anti-cytokeratin antibody. In order to achieve a clear distinction from coal dust, an anti-cytokeratin antibody was labeled with a secondary antibody conjugated with alkaline phosphatase, which was detected by a reaction with Fast Red/naphthol that produced a red color. Patients were considered to be positive for FFCCs (FFCCs+) if one or more than one free-floating cytokeratin-positive cell was detected in the lymph node sinuses, which could not be detected by hematoxylin and eosin staining. Among all 164 patients, a significant difference was observed in 5-year relapse-free survival (5Y-RFS) rates, with 76.9 and 33.3% being achieved by FFCCs- and FFCCs+ patients, respectively (P<0.001). Similarly, the 5-year overall survival (5Y-OS) rate was significantly lower in FFCCs+ patients, with 86.6% being achieved by FFCCs- and 65.8% by FFCCs+ patients, respectively (P=0.014). Among 22 hilar lymph node-positive patients, a significant difference was also observed in 5Y-RFS, with 53.8 and 0.0% being achieved by FFCCs- and FFCCs+ patients, respectively (P=0.006). The 5Y-OS tended to be lower in FFCCs+ patients, with 69.2 and 53.3% being achieved by FFCCs- and FFCCs+ patients, respectively (P=0.463). The findings of the present study suggested the presence of FFCCs in stage I/II NSCLC patients was associated with a poor prognosis. In addition, FFCCs in hilar lymph node-positive patients may potentially be a useful marker in foreseeing the recurrence of cancer. abstract_id: PUBMED:29880414 Surgically Treated Unsuspected N2-Positive NSCLC: Role of Extent and Location of Lymph Node Metastasis. Background: The role of surgery in the treatment of non-small-cell lung cancer that has spread to ipsilateral mediastinal or hilar lymph nodes (LNs) is controversial. We examined whether the location of LNs positive for non-small-cell lung cancer in mediastinum or hilum influences the survival of these patients. Patients And Methods: We reviewed data from 881 patients and analyzed those with unsuspected N2 disease or hilar (station 10) LNs.
The patients were stratified into the following groups: group A, positive hilar Naruke 10; group B, superior mediastinal and aortic nodes (Naruke 1, 2, 3, 4, 5, and 6); group C, inferior mediastinal nodes (Naruke 7, 8, and 9), and multilevel group D (2 or more positive N2 levels). Results: A total of 69 pN2 and 19 pN1 patients were included. Progression-free survival (PFS) was significantly better in group B versus group C (P = .044) and group B versus group D (P = .0086). The overall survival (OS) of group A did not differ from that of group C. A significantly better OS was found between groups B and D (P = .051). Conclusion: Inferior positive mediastinal N2 node patients seem to have an OS and PFS as poor as multilevel N2 disease patients. The OS and PFS of patients with positive hilar disease are similar to those in the inferior mediastinal positive N2 group. Superior positive mediastinal N2 node patients have better OS and PFS than the inferior mediastinal positive N2 group. abstract_id: PUBMED:28050149 Treatment patterns and survival in patients with ALK-positive non-small-cell lung cancer: a Canadian retrospective study. Background: Crizotinib was the first agent approved for the treatment of anaplastic lymphoma kinase (ALK)-positive (+) non-small-cell lung cancer (nsclc), followed by ceritinib. However, patients eventually progress or develop resistance to crizotinib. With limited real-world data available, the objective of the present work was to evaluate treatment patterns and survival after crizotinib in patients with locally advanced or metastatic ALK+ nsclc in Canada. Methods: In this retrospective study at 6 oncology centres across Canada, medical records of patients with locally advanced or metastatic ALK+ nsclc were reviewed. Demographic and clinical characteristics, treatments, and outcomes data were abstracted. Analyses focused on patients who discontinued crizotinib treatment. Results: Of the 97 patients included, 9 were crizotinib-naïve, and 39 were still receiving crizotinib at study end. The 49 patients who discontinued crizotinib treatment were included in the analysis. Of those 49 patients, 43% received ceritinib at any time, 20% subsequently received systemic chemotherapy only (but never ceritinib), and 37% received no further treatment or died before receiving additional treatment. Median overall survival from crizotinib discontinuation was shorter in patients who did not receive ceritinib than in those who received ceritinib (1.7 months vs. 20.4 months, p < 0.001). In a multivariable analysis, factors associated with poorer survival included lack of additional therapies (particularly ceritinib), male sex, and younger age, but not smoking status; patients of Asian ethnicity showed a nonsignificant trend toward improved survival. Conclusions: A substantial proportion of patients with ALK+ nsclc received no further treatment or died before receiving additional treatment after crizotinib. Treatment with systemic agents was associated with improved survival, with ceritinib use being associated with the longest survival. abstract_id: PUBMED:18788639 Significance of dual-time-point 18F-FDG PET imaging in evaluation of hilar and mediastinal lymph node metastasis in non-small-cell lung cancer Objective: To explore the diagnostic value of dual-time-point 18F-FDG PET-CT imaging in detecting hilar and mediastinal lymph node metastasis in non-small-cell lung cancer (NSCLC).
Methods: Forty-six patients with NSCLC underwent standard whole body single-time 18F-FDG PET-CT scans and a delayed imaging for the thorax alone before surgery, meanwhile, the standard uptake value (SUV) and retention index (RI) were calculated. Results: A total number of 584 lymph nodes were excised in the 46 patients. Of these, 134 metastatic lymph nodes were pathologically confirmed in 31 patients. There were 189 lymph nodes detected and suspected to be metastatic by standard single-time 18 F-FDG PET-CT imaging, and 161 by dual-time-point imaging. Therefore, the sensitivity, specificity, diagnostic accuracy, positive predictive value and negative predictive value in the detection of hilar and mediastinal lymph node metastasis were 87.3%, 84.0%, 84.8%, 61.9% and 95.7% by standard single-time 18F-FDG PET-CT imaging, versus 94.8%, 92.2%, 92.8%, 78.9% and 98.1%, respectively, by dual-time-point imaging. There was a statistically significant difference in the detection of lymph node metastasis between the standard single-time imaging and dual-time-point 18F-FDG PET-CT imaging. Conclusion: Dual-time-point 18F-FDG PET-CT imaging is more sensitive, specific and accurate than standard single-time 18F-FDG PET-CT imaging in the detection of hilar and mediastinal lymph node metastasis, and may provide more information for diagnosis, staging and treatment of non-small cell lung cancer. abstract_id: PUBMED:35048499 New PET/CT criterion for predicting lymph node metastasis in resectable advanced (stage IB-III) lung cancer: The standard uptake values ratio of ipsilateral/contralateral hilar nodes. Background: The aim of the present study was to use surgical and histological results to develop a simple noninvasive technique to improve nodal staging using preoperative PET/CT in patients with resectable lung cancer. Methods: Preoperative PET/CT findings (pStage IB-III 182 patients) and pathological diagnoses after surgical resection were evaluated. Using PET/CT images to determine the standardized uptake value (SUV) ratio, the SUVmax of a contralateral hilar lymph node (on the side of the chest opposite to the primary tumor) was measured simultaneously. The I/C-SUV ratio was calculated as ipsilateral hilar node SUV/contralateral hilar node SUV. Receiver operating characteristic (ROC) curves were then used to analyze those data. Results: Based on ROC analyses, the cutoff I/C-SUV ratio for diagnosis of lymph node metastasis was 1.34. With a tumor ipsilateral lymph node SUVmax ≥2.5, an IC-SUV ratio ≥1.34 had the highest accuracy for predicting N1/N2 metastasis; the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and accuracy of nodal staging were 60.66, 85.11, 84.09, 62.5 and 71.29%, respectively. Conclusions: When diagnosing nodal stage, a lymph node I/C-SUV ratio ≥1.34 can be an effective criterion for determining surgical indications in advanced lung cancer. abstract_id: PUBMED:12027193 Hilar lymph nodes in N2 disease: survival analysis of patients with non-small cell lung cancers and regional lymph node metastasis. Purpose: This study was conducted to accurately define the N status of non-small cell lung carcinoma (NSCLC). Methods: We retrospectively reviewed 147 patients with NSCLC and pathologically positive regional lymph nodes who underwent major pulmonary resections with complete mediastinal lymph node dissections. Results: The overall 5-year survival rate was 41% after a median follow-up period of 33 months. 
The survival rate of patients with hilar N1 disease (26%) was significantly lower (P = 0.002) than that of those with interlobar and intrapulmonary N1 disease (60%). The survival rate of patients with hilar N1 disease (26%) was similar to that of those with N2 disease (33%; P = 0.56). Cox proportional hazards analysis with the covariates of age, sex, cell type, site of resection, pathological T factor, and pathological N factor revealed that pathological N factor indicated a relative risk for N2 disease of 1.76 (P = 0.028). Grouping hilar N1 disease with N2 disease showed that the relative risk of this "new N2 disease" with the same covariates was 2.65 (P = 0.002). Conclusion: According to our data, hilar N1 disease should be grouped with N2 disease because this combined category accurately reflects surgical outcome. abstract_id: PUBMED:22429676 Fusion positron emission/computed tomography underestimates the presence of hilar nodal metastases in patients with resected non-small cell lung cancer. Background: The 5-year survival for patients with resected stage II (N1) non-small cell lung cancer ranges from 40% to 55%. No data exist addressing the benefit of neoadjuvant therapy for patients with stage II disease. This is largely in part due to the lack of a reliable, minimally invasive method to assess hilar nodes. This study is aimed at determining the ability of fusion positron emission/computed tomography (PET/CT) to identify hilar metastases in patients with resected non-small cell lung cancer. Methods: A retrospective review of surgically resected patients with fusion PET/CT within 30 days of resection was performed. The sensitivity, specificity, positive predictive value, and negative predictive value for PET/CT in detecting hilar nodal metastases was calculated for a range of maximum standardized uptake values (SUVmax). Hilar nodes from patients with falsely positive PET/CT scans were analyzed for the presence of histoplasmosis. Additionally, the impact of hilar node size greater than 1 centimeter on the calculated values was assessed. Results: There were 119 patients evaluated. The number of lymph nodes resected ranged from 1 to 12 (X=2.98). There was decreased sensitivity and increased specificity with higher SUVmax cutoff values. At the standard SUVmax value of 2.5, the sensitivity and specificity were only 48.5% and 80.2%. The addition of size of hilar node by CT led to a modest improvement in sensitivity at all SUVmax cutoff values. Conclusions: Fusion PET/CT lacks sensitivity and specificity in identifying hilar nodal metastasis in patients with resected non-small cell lung cancer. Further prospective studies assessing the utility of PET/CT versus alternative sampling techniques are warranted. Answer: The study identified in PUBMED:27207124 highlights that many patients with T1-2N1 non-small cell lung cancer (NSCLC) do not receive adequate treatment according to established guidelines. Despite recommendations for surgery and adjuvant chemotherapy in operable patients or chemoradiation for those deemed inoperable, the study found that only 63% of the 20,366 patients who met the study criteria underwent adequate treatment (48% surgical resection, 15% CRT). The remaining patients received inadequate treatment (23%) or no treatment (14%). Factors such as older age, non-Caucasian race, lower income, and higher comorbidity score were associated with inadequate or no treatment. 
Adequate treatment was associated with improved overall survival, and surgery was linked to prolonged survival in selected patients. The study concludes that surgical input should be mandatory in the multidisciplinary evaluation of these patients to avoid missed treatment opportunities. This finding is supported by the observation in PUBMED:29767232 that the presence of free-floating cancer cells (FFCCs) in hilar lymph node-positive NSCLC patients is associated with a poor prognosis, suggesting that more aggressive treatment could be beneficial for these patients. Furthermore, PUBMED:35681659 discusses the role of Stereotactic Body Radiation Therapy (SBRT) in treating mediastinal and hilar lymph node metastasis, indicating that SBRT is a safe and effective option with a good toxicity profile, although its use in clinical practice is still limited and heterogeneous. The study in PUBMED:30746239 introduces modified total lesion glycolysis (mTLG) as a good parameter to diagnose metastatic right upper lobar lymph nodes, which could potentially aid in the selection of patients for adequate treatment. PUBMED:29880414 suggests that the location of lymph node metastasis influences survival, with superior mediastinal N2 node patients having better outcomes than those with inferior mediastinal or hilar disease, which could inform treatment decisions. PUBMED:28050149 highlights the importance of subsequent therapies, particularly ceritinib, after crizotinib discontinuation in ALK-positive NSCLC, which could be relevant for patients with hilar node involvement. PUBMED:18788639 and PUBMED:35048499 both discuss the use of PET/CT imaging to improve the accuracy of staging in NSCLC, which could lead to better treatment selection.
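For readers who want to see the nodal-staging criterion from PUBMED:35048499 as a concrete calculation, a minimal sketch is given below. It assumes only the two cutoffs reported in that abstract (ipsilateral hilar SUVmax >= 2.5 and an ipsilateral/contralateral SUV ratio >= 1.34); the function and variable names are illustrative rather than taken from the study, and the snippet is not a validated diagnostic tool.

```python
def predicts_nodal_metastasis(ipsilateral_suvmax: float,
                              contralateral_suvmax: float,
                              suv_cutoff: float = 2.5,
                              ratio_cutoff: float = 1.34) -> bool:
    """Illustrative I/C-SUV rule (assumed form; cutoffs from PUBMED:35048499).

    Flags probable N1/N2 disease when the ipsilateral hilar node is FDG-avid
    (SUVmax >= suv_cutoff) AND the ipsilateral/contralateral SUVmax ratio
    meets the reported ratio cutoff.
    """
    if contralateral_suvmax <= 0:
        raise ValueError("contralateral SUVmax must be positive")
    ic_ratio = ipsilateral_suvmax / contralateral_suvmax
    return ipsilateral_suvmax >= suv_cutoff and ic_ratio >= ratio_cutoff

# Hypothetical example: ipsilateral SUVmax 4.2, contralateral 2.8 -> ratio 1.5 -> positive
print(predicts_nodal_metastasis(4.2, 2.8))  # True
```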
Instruction: Laparoscopic versus open fecal diversion: does laparoscopy offer better outcomes in short term? Abstracts: abstract_id: PUBMED:25796388 Laparoscopic versus open fecal diversion: does laparoscopy offer better outcomes in short term? Background: Laparoscopic fecal diversion is performed in patients with complicated colon and rectal diseases. We aim to compare operative and short-term outcomes between laparoscopic and open fecal diversion. Methods: After obtaining institutional review board approval, patients undergoing laparoscopic or open fecal diversion between February 2010 and September 2012 were reviewed. A straight comparison of the open and laparoscopic groups was made initially; then, patients who underwent laparoscopic fecal diversion were case-matched with open counterparts based on stoma type and primary diagnosis. Results: While body mass index (BMI) was higher in the laparoscopy group (p = 0.04), American Society of Anesthesiologists (ASA) score (p = 0.33) and gender (p = 0.74) were comparable between the study groups in the straight comparison. In the case-matched analysis, type of prior operations (p > 0.05), age (p = 0.79), gender (p > 0.99), BMI (p = 0.1), and ASA (p = 0.25) score were comparable between the groups. Open surgery was associated with increased estimated blood loss (p = 0.01), longer hospital stay (p = 0.0002), higher postoperative ileus (p = 0.03), and higher readmission rates (p = 0.002). Conclusions: Considering the short-term benefits as regards postoperative recovery and morbidity, fecal diversions should be performed laparoscopically when feasible. abstract_id: PUBMED:29323615 Short-Term and Long-Term Outcomes of Laparoscopic Versus Open Surgery for Low Rectal Cancer. Aim: To compare the short-term and long-term outcomes of laparoscopic versus open surgery for low rectal cancer. Methods: Patients with low rectal cancer who underwent laparoscopic or open surgery at our department from January 2009 to December 2013 were enrolled in this retrospective study. The primary end points were 3-year local recurrence and overall and disease-free survival (DFS) rates. Secondary end points were intraoperative and postoperative outcomes. Results: Laparoscopic group had longer operative time (165.0 versus 140.0, P < .001), less blood loss (20.0 versus 40.0, P < .001), shorter length of incision (5.0 versus 18.0, P < .001), and more lymph node harvested (11.0 versus 9.0, P = .002). However, time to first flatus (P = .941), postoperative hospital stay (P = .095), postoperative complications (P = .155), and 30-day mortality (P = .683) was similar between two groups. With the median follow-up period of 65 months, the 3-year local recurrence rate was 4.3% in laparoscopic group and 7.5% in open group (P = .077); the 3-year overall and DFS rates were similar in two groups (85.9% versus 88.8%, P = .229 and 76.9% versus 79.2%, P = .448, respectively); and the overall and DFS curves were comparable between two groups (hazard ratio [HR] = 0.858, 95% confidence intervals [CI] 0.709-1.037, P = .112 and HR = 1.076, 95% CI 0.834-1.389, P = .275, respectively). Conclusions: Laparoscopic surgery is safe and has equivalent long-term oncologic outcomes for low rectal cancer when compared to open surgery. Furthermore, large-scale, prospective randomized clinical trials are needed to confirm the present findings. abstract_id: PUBMED:32566013 Open vs.
laparoscopic surgery for locally advanced gastric cancer after neoadjuvant therapy: Short-term and long-term survival outcomes. The aim of the present study was to compare the short-term and long-term survival outcomes of laparoscopic gastrectomy vs. open gastrectomy in treating locally advanced gastric cancer (LAGC) after neoadjuvant therapy. This study retrospectively reviewed the medical records of 270 patients with LAGC, who underwent laparoscopic (n=49) or conventional open (n=221) surgery following neoadjuvant therapy between January 2007 and December 2016 in China National Cancer Center. Postoperative parameters and survival outcomes including overall survival and disease-free survival were analyzed. Patients who underwent laparoscopic gastrectomy (LG) had significantly shorter postoperative stay and a decreased number of metastatic lymph nodes harvested compared to those who underwent open surgery. The 75% disease-free survival (DFS) time in the laparoscopic surgery group (25.7 months) was higher compared with the open surgery group (15.6 months). However, no significant difference was observed in 5-year overall survival and DFS between the two groups. In conclusion, LG provides non-inferior short- and long-term survival outcomes compared with open surgery, suggesting a laparoscopic approach may be justified for patients with LAGC receiving neoadjuvant therapy. More randomized controlled trials are required to investigate the positive effects of LG for LAGC following neoadjuvant therapy. abstract_id: PUBMED:33319158 Short-term and long-term outcomes of laparoscopic colectomy with multivisceral resection for surgical T4b colon cancer: Comparison with open colectomy. Aim: In response to the rising use of laparoscopic surgery, recent studies have shown that laparoscopic multivisceral resections for locally advanced colon cancer are safe, feasible, and provide acceptable oncological outcomes. However, the usefulness of laparoscopic multivisceral resection remains controversial. Here, we aimed to compare short-term and long-term outcomes between laparoscopic and open multivisceral resection approaches for treating locally advanced colon cancer. Methods: We retrospectively collected data on 1315 consecutive patients admitted to the National Hospital Organization, Osaka National Hospital, for surgical treatment of colorectal cancer between 2010 and 2017. We assessed invasiveness in terms of operating times, blood loss, and complications. Oncological outcomes included 5-year survival rates and recurrences. Results: We included 85 patients that underwent a colectomy with a multivisceral resection for locally advanced colon cancer; of these, 38 were treated with a laparoscopic approach and 47 were treated with an open approach. Compared to the open surgery group, the laparoscopic group had significantly less blood loss (median volume: 25 vs 140 mL, P <0.001), a lower complication rate (10.5% vs 29.8%, P = 0.036), and shorter hospital stays (12 vs 15 days, P = 0.028). After excluding patients with stage IV colon cancer, the groups showed similar pathologic outcomes and no significant differences in 5-year disease-free survival (73.9% vs 67.4%; P = 0.664) or 5-year overall survival (75.8% vs 67.7%; P = 0.695). Conclusion: A laparoscopic approach for locally advanced colon cancer could be less invasive than an open approach without affecting oncological outcomes in selected patients.
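The long-term comparisons above (and several of the abstracts that follow) summarize outcomes as 3- and 5-year overall and disease-free survival rates. None of the studies publish their analysis code, so the sketch below is only the textbook Kaplan-Meier product-limit estimator applied to made-up toy data; it illustrates how such rates are typically derived from follow-up times and event indicators, not any study's actual analysis.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate (product-limit form).

    times  : follow-up time for each patient (e.g., months)
    events : 1 if the event (death/recurrence) was observed, 0 if censored
    Returns (event_times, survival_probabilities).
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]

    surv, out_t, out_s = 1.0, [], []
    n_at_risk = len(times)
    for t in np.unique(times):
        mask = times == t
        d = int(events[mask].sum())       # events observed at time t
        if d > 0:
            surv *= 1.0 - d / n_at_risk   # product-limit step
            out_t.append(t)
            out_s.append(surv)
        n_at_risk -= int(mask.sum())      # drop events and censored subjects
    return np.array(out_t), np.array(out_s)

# Toy usage with invented follow-up data (months), not study data:
t_lap = [6, 12, 18, 24, 36, 60, 60]
e_lap = [0, 1, 0, 1, 0, 0, 1]
print(kaplan_meier(t_lap, e_lap))
```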
abstract_id: PUBMED:27487979 Short-term and long-term outcomes of laparoscopic hepatectomy, microwave ablation, and open hepatectomy for small hepatocellular carcinoma: a 5-year experience in a single center. Aim: Laparoscopic hepatectomy (LH), microwave ablation (MWA), and open hepatectomy (OH) are three widely used methods to treat small hepatocellular carcinoma (HCC). However, few studies have compared the short- and long-term outcomes of these three treatments. The aim of this study was to investigate their effectiveness. Methods: The data were reviewed from 280 patients with HCCs measuring ≤3 cm (Barcelona Clinic Liver Cancer stage 0 or A) who received LH (n = 133), OH (n = 87), or MWA (n = 60) in our research center from 2005 to 2010. Short-term outcomes included intraoperative blood loss, operation time, and length of hospital stay. The disease-free survival and overall survival rates were analyzed as long-term outcomes. Results: The patients in the MWA and LH groups showed better short-term outcomes compared with those in the OH group. There were no significant differences in overall survival rates among the three treatments. The LH group showed significantly lower recurrence rates than the MWA group (P = 0.0146). Conclusions: Laparoscopic hepatectomy may be a better option for patients with small HCC located on the liver surface and left lateral lobe. The short-term outcome of MWA is promising, although the high risk of local recurrence after the operation should be considered when planning treatment. abstract_id: PUBMED:28726165 Long-term outcomes of laparoscopic versus open splenectomy for immune thrombocytopenia. Purpose: Splenectomy is the standard therapy for medically refractory immune thrombocytopenia (ITP). Laparoscopic splenectomy (LS) has gained wide acceptance; however, the long-term outcomes of LS versus open splenectomy (OS) for patients with ITP remain unclear. Methods: We analyzed, retrospectively, 32 patients who underwent splenectomy, as LS in 22 and OS in 10, for refractory ITP at our institute. Data were evaluated based on the American Society of Hematology 2011 evidence-based practice guidelines for ITP. Results: Although the operation time was significantly longer in the LS group (p < 0.01), LS was associated with less blood loss (p < 0.01), infrequent blood transfusion during surgery (p < 0.01), quicker resumption of oral intake (p < 0.01), and shorter hospital stay (p < 0.01) than OS. Positive responses, including complete and partial remission, were achieved in 90% of the OS group patients and 77% of the LS group patients. The mean follow-up periods were 183 and 92 months, respectively. Relapse-free survival rates, 15 years after the operation were 63% in the OS group and 94% in the LS group. Conclusions: LS can provide better short-term results and comparable long-term results to those of OS for ITP. abstract_id: PUBMED:33924366 Video-Laparoscopic versus Open Surgery in Obese Patients with Colorectal Cancer: A Propensity Score Matching Study. Background: Minimally invasive surgery in obese patients is still challenging, so exploring one more item in this research field ranks among the main goals of this research. We aimed to compare short-term postoperative outcomes of open and video-laparoscopic (VL) approaches in CRC obese patients undergoing colorectal resection.
Methods: We performed a retrospective analysis of a surgical database including 138 patients diagnosed with CRC, undergoing VL (n = 87, 63%) and open (n = 51, 37%) colorectal surgery. As a first step, propensity score matching was performed to balance the comparison between the two intervention groups (VL and open) in order to avoid selection bias. The matched sample (N = 98) was used to run further regression models in order to analyze the observed VL surgery advantages in terms of postoperative outcome, focusing on hospitalization and severity of postoperative complications, according to the Clavien-Dindo classification. Results: The study sample was predominantly male (N = 86, 62.3%), and VL was more frequent than open surgery (63% versus 37%). The two subgroup results obtained before and after the propensity score matching showed comparable findings for age, gender, BMI, and tumor staging. The specimen length and postoperative time before discharge were longer in open surgery (OS) patients; the number of harvested lymph nodes was higher than in VL patients as well (p < 0.01). Linear regression models applied separately on the outcomes of interest showed that VL-treated patients had a shorter hospital stay by almost two days and about one point less Clavien-Dindo severity than OS patients on average, given the same exposure to confounding variables. Tumor staging was not found to have a significant role in influencing the short-term outcomes investigated. Conclusion: Comparing open and VL surgery, improved postoperative outcomes were observed for VL surgery in obese patients after surgical resection for CRC. Both postoperative recovery time and Clavien-Dindo severity were better with VL surgery. abstract_id: PUBMED:37138243 Laparoscopic versus open surgery for perihilar cholangiocarcinoma: a multicenter propensity score analysis of short-term outcomes. Background: Laparoscopic surgery (LS) has been increasingly applied in perihilar cholangiocarcinoma (pCCA). In this study, we intend to compare the short-term outcomes of LS versus open operation (OP) for pCCA in a multicentric practice in China. Methods: This real-world analysis included 645 pCCA patients receiving LS and OP at 11 participating centers in China between January 2013 and January 2019. A comparative analysis was performed before and after propensity score matching (PSM) in LS and OP groups, and within Bismuth subgroups. Univariate and multivariate models were performed to identify significant prognostic factors of adverse surgical outcomes and postoperative length of stay (LOS). Results: Among 645 pCCAs, 256 received LS and 389 received OP. Reduced hepaticojejunostomy (30.89% vs 51.40%, P = 0.006), biliary plasty requirement (19.51% vs 40.16%, P = 0.001), shorter LOS (mean 14.32 vs 17.95 d, P < 0.001), and lower severe complication (CD ≥ III) (12.11% vs. 22.88%, P = 0.006) were observed in the LS group compared with the OP group. Major postoperative complications such as hemorrhage, biliary fistula, abdominal abscess, and hepatic insufficiency were similar between LS and OP (P > 0.05 for all). After PSM, the short-term outcomes of two surgical methods were similar, except for shorter LOS in LS compared with OP (mean 15.19 vs 18.48 d, P = 0.0007). A series of subgroup analyses demonstrated that LS was safe and had advantages in shortening LOS. Conclusion: Although the surgical procedures are complex, LS generally seems to be safe and feasible for experienced surgeons.
Trial Registration: NCT05402618 (date of first registration: 02/06/2022). abstract_id: PUBMED:33194009 Long-term outcomes of laparoscopic versus open donor nephrectomy for kidney transplantation: a meta-analysis. Laparoscopic surgery is widely used for living donor nephrectomy and has demonstrated superiority over open surgery by improving several outcomes, such as length of hospital stay and morphine requirements. The purpose of the present study was to compare the long-term outcomes of open donor nephrectomy (ODN) versus laparoscopic donor nephrectomy (LDN) using meta-analytical techniques. The Web of Science, PubMed and Cochrane Library databases were searched, for relevant articles published between 1980 and January 20, 2020. Lists of reference articles retrieved in primary searches were manually screened for potentially eligible studies. Outcome parameters were explored using Review Manager version 5.3. The evaluated outcomes included donor serum creatinine levels, incidence of hypertension or proteinuria at 1 year postoperative, donor health-related quality of life, donation attitude, and graft survival. Thirteen of the 111 articles fulfilled the inclusion criteria. The LDN group demonstrated similar 1 year outcomes compared with ODN with respect to serum creatinine levels (weighted mean difference [WMD] -0.02 mg/dL [95% confidence interval (CI) -0.18-0.13]; P=0.77); hypertension (odds ratio [OR] 1.21 [95% CI 0.48-3.08]; P=0.68); proteinuria (OR 0.28 [95% CI 0.02-3.11]; P=0.30); and donation attitude (OR 4.26 [95% CI 0.06-298.27]; P=0.50). Donor health-related quality of life and recipient graft survival were also not significantly different between the groups analyzed. Thus, the long-term outcomes between LDN and ODN for living donor kidney transplantation are similar. abstract_id: PUBMED:30002744 Is the laparoscopic approach for rectal cancer superior to open surgery? A systematic review and meta-analysis on short-term surgical outcomes. Introduction: Over the past years the incidence of colorectal cancers has increased worldwide. Currently it is the most common gastrointestinal malignancy worldwide. The laparoscopic approach has become the gold standard for surgical treatment. However, a recently published meta-analysis showed no difference in short- and long-term oncological outcomes of laparoscopy for treating rectal cancer. Aim: To assess current literature on short-term outcomes of rectal cancer treatment using laparoscopic surgery in comparison to the open approach. Material And Methods: We performed a systematic review and meta-analysis according to the PRISMA guidelines. The primary outcomes of interest were morbidity and short-term complications. Results: We identified 4,328 potential references. In the end we included 13 randomized controlled trials (RCTs). We did not find any significant differences in terms of morbidity, haemorrhage, ureter injury, anastomotic leakage, mortality, intra-abdominal abscess or postoperative ileus. We found significant differences in the rate of surgical site infections, operative time, blood loss, length of hospital stay and time to first bowel movement. Conclusions: This systematic review based on available RCTs confirms that laparoscopic rectal cancer surgery is associated with short-term outcomes comparable to the open approach. Moreover, in some aspects it provides better results (e.g. functional postoperative recovery, lower rate of surgical site infections (SSIs)). 
The quality of evidence is high; therefore in our opinion it is very unlikely that future trials will alter these results, and for this reason the laparoscopic approach can be considered the gold standard for the treatment of the majority of patients. Answer: Yes, laparoscopy offers better short-term outcomes compared to open fecal diversion. According to a study that compared operative and short-term outcomes between laparoscopic and open fecal diversion, the laparoscopic group had a higher body mass index (BMI) but comparable American Society of Anesthesiologists (ASA) score and gender distribution. The case-matched analysis showed that open surgery was associated with increased estimated blood loss, longer hospital stay, higher postoperative ileus, and higher readmission rates. Therefore, the study concluded that considering the short-term benefits regarding postoperative recovery and morbidity, fecal diversions should be performed laparoscopically when feasible (PUBMED:25796388).
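Several of the comparisons above rely on case matching or propensity score matching (for example, PUBMED:33924366 and PUBMED:37138243) to balance laparoscopic and open groups before comparing outcomes. A minimal sketch of greedy 1:1 nearest-neighbour matching on the logit of the propensity score is shown below; the covariate names, caliper choice, and matching details are assumptions for illustration only and do not reproduce any study's methodology.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_one_to_one(df: pd.DataFrame, treatment_col: str, covariates: list,
                     caliper: float = 0.2) -> pd.DataFrame:
    """Greedy 1:1 nearest-neighbour matching on the logit of the propensity score.

    `caliper` is expressed in standard deviations of the logit (a commonly
    assumed choice); treated subjects with no control inside the caliper
    are left unmatched and dropped.
    """
    X, t = df[covariates].to_numpy(), df[treatment_col].to_numpy()
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    ps = np.clip(ps, 1e-6, 1 - 1e-6)          # avoid infinite logits
    logit = np.log(ps / (1 - ps))
    max_dist = caliper * logit.std()

    treated = list(np.flatnonzero(t == 1))
    controls = set(np.flatnonzero(t == 0))
    pairs = []
    for i in treated:
        if not controls:
            break
        j = min(controls, key=lambda c: abs(logit[i] - logit[c]))
        if abs(logit[i] - logit[j]) <= max_dist:
            pairs.extend([i, j])
            controls.remove(j)
    return df.iloc[pairs]

# Usage with hypothetical column names:
# matched = match_one_to_one(cohort, "laparoscopic",
#                            ["age", "bmi", "asa_score", "tumor_stage"])
```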
Instruction: Dexmedetomidine use in a pediatric cardiac intensive care unit: can we use it in infants after cardiac surgery? Abstracts: abstract_id: PUBMED:25939906 Dexmedetomidine in combination with midazolam after pediatric cardiac surgery. Objective: Although midazolam is one of the most commonly used sedatives for infants in the intensive care unit, it has well-known disadvantages including a dose-dependent potential to induce tolerance, withdrawal, and hemodynamic depression. The aim of this study was to evaluate the clinical effects of dexmedetomidine combined with midazolam in postoperative intensive care following pediatric cardiac surgery. Methods: Forty consecutive infants who underwent cardiac surgery for isolated ventricular septal defects from January 2011 to July 2013 were enrolled in this retrospective study. They were divided into two groups according to postoperative sedation regimen: dexmedetomidine sedation with midazolam (n = 20), or midazolam sedation without dexmedetomidine (control group, n = 20). Perioperative variables were compared between the two groups. Results: There were no significant differences in patient characteristics between the two groups. During the first 24 h after intensive care unit admission, heart rate and serum lactate levels were significantly lower in the dexmedetomidine group compared to the control group (p = 0.0292 and p = 0.0027, respectively). The maximal midazolam dose was also significantly lower in the dexmedetomidine group (0.12 ± 0.09 vs. 0.20 ± 0.08 mg/kg/h, p = 0.0059). There were no adverse effects of dexmedetomidine such as bradycardia, hypotension, agitation, or seizures. Three (15%) patients in the control group and none in the dexmedetomidine group experienced sudden cardiopulmonary decompensation. Conclusions: Dexmedetomidine can provide favorable sedative properties with a reduced requirement for concomitant midazolam and stable hemodynamics with tachycardia prevention, for postoperative intensive care following pediatric cardiac surgery. abstract_id: PUBMED:19295456 Dexmedetomidine use in a pediatric cardiac intensive care unit: can we use it in infants after cardiac surgery? Objective: To assess clinical response of dexmedetomidine alone or in combination with conventional sedatives/analgesics after cardiac surgery. Design: Retrospective study. Setting: Pediatric cardiac intensive care unit. Patients: Infants and neonates after cardiac surgery. Measurements And Main Results: We identified 80 patients including 14 neonates, at mean age and weight of 4.1 +/- 3.1 months and 5.5 +/- 2 kg, respectively, who received dexmedetomidine for 25 +/- 13 hours at an average dose of 0.66 +/- 0.26 µg/kg/hr. Overall normal sleep to moderate sedation was documented 94% of the time and no pain to mild pain for 90%. Systolic blood pressure (SBP) decreased from 89 +/- 15 mm Hg to 85 +/- 11 mm Hg (p = .05), heart rate (HR) from 149 +/- 22 bpm to 129 +/- 16 bpm (p < .001), and respiratory rate (RR) remained unchanged. When baseline arterial blood gases were compared with the most abnormal values, pH decreased from 7.4 +/- 0.07 to 7.37 +/- 0.05 (p = .006), Po2 from 91 +/- 67 mm Hg to 66 +/- 29 mm Hg (p = .005), and CO2 increased from 45 +/- 8 mm Hg to 50 +/- 12 mm Hg (p = .001). At the beginning of the study, 37 patients (46%) were mechanically ventilated; and at 48 hours, 13 patients (16%) were still intubated and five patients failed extubation.
Three groups of patients were identified: A, dexmedetomidine only (n = 20); B, dexmedetomidine with sedatives/analgesics (n = 38); and C, dexmedetomidine with both sedatives/analgesics and fentanyl infusion (n = 22). The doses of dexmedetomidine and rescue sedatives/analgesics were not significantly different among the three groups but duration of dexmedetomidine was longer in group C vs. A (p = .03) and C vs. B (p = .002). Pain, sedation, SBP, RR, and arterial blood gases were similar. HR was higher in group C vs. B (p = .01). Comparison between neonates and infants showed that infants required higher dexmedetomidine doses, 0.69 +/- 0.25 µg/kg/hr vs. 0.47 +/- 0.21 µg/kg/hr (p = .003), and had lower HR (p = .01), and RR (p = .009), and higher SBP (p < .001). Conclusions: Dexmedetomidine use in infants and neonates after cardiac surgery was well tolerated in both intubated and nonintubated patients. It provides an adequate level of sedation/analgesia either alone or in combination with low-dose conventional agents. abstract_id: PUBMED:18154474 Use of dexmedetomidine in the pediatric intensive care unit. Study Objective: To determine the safety, effectiveness, and dosing of dexmedetomidine in intensive care infants and children who require sedation, and the rationale for patient selection. Design: Prospective observational study. Setting: Eleven-bed pediatric intensive care unit in a university-affiliated children's hospital. Patients: Seventeen infants and children who received dexmedetomidine consecutively between May 4, 2005, and May 4, 2006. Measurements And Main Results: Data were collected on demographics, blood pressure and heart rate measurements, and adverse effects. The rationale for dexmedetomidine use, its dosing, use of other sedatives, and treatment duration were also recorded. Twenty treatment courses in 17 patients (median age 5 mo, range 1 mo-17 yrs) were evaluated. Ten patients (59%) had chronic neurologic impairments (including Down syndrome in nine [53%]). Thirteen (76%) had undergone cardiac surgery, two (12%) had respiratory failure, one (6%) had endocarditis, and one (6%) had undergone scoliosis repair. In 15 (75%) of 20 cases, dexmedetomidine was started to minimize the use of midazolam before extubation; in 13 (87%) of these cases, the patients were extubated within 24 hours. The remaining patients could not tolerate midazolam, and dexmedetomidine was used as an alternative. No loading doses were given. The mean +/- SD starting dose was 0.2 +/- 0.2 microg/kg/hour, with a maximum of 0.5 +/- 0.2 microg/kg/hour. Mean +/- SD duration was 32 +/- 21 hours (range 3-75 hrs); 10 courses exceeded 24 hours. Mean arterial pressures before and after starting treatment were not significantly different (p=0.76), nor were values at discontinuation (p=0.31) or 12 hours later (p=0.29). No significant differences were noted in heart rate at the start (p=0.09), at discontinuation (p=0.06), or 12 hours later (p=0.17). One patient (6%) developed hypotension; no other adverse effects were noted. Conclusion: With careful patient selection and a conservative approach to dosing, dexmedetomidine was a useful sedative in children requiring mechanical ventilation. It allowed for a reduction or elimination of other sedatives, and it was particularly useful in children with chronic neurologic impairments. Dexmedetomidine was well tolerated, with no clinically significant effects on blood pressure or heart rate.
abstract_id: PUBMED:30672840 Reducing Exposure to Opioid and Benzodiazepine Medications for Pediatric Cardiac Intensive Care Patients: A Quality Improvement Project. Objectives: To evaluate the effect of implementation of a comfort algorithm on infusion rates of opioids and benzodiazepines in postneonatal postoperative pediatric cardiac surgery patients. Design: A quality improvement project, using statistical process control methodology. Setting: Twenty-five-bed tertiary care pediatric cardiac ICU in an urban academic Children's hospital. Patients: Postoperative pediatric cardiac surgery patients. Interventions: Implementation of a guided comfort medication algorithm which consisted of key components; a low dose opioid continuous infusion, judicious use of frequent as needed opioids, initiation of dexmedetomidine infusion postoperatively, and minimal use of benzodiazepines. Measurements And Main Results: Among the baseline group admitted over the 18 month period prior to comfort algorithm implementation, 58 of 116 intubated patients (50%) received a continuous opioid infusion, compared with 30 of 41 (73%) for the implementation group over the 9-month period following implementation. Following algorithm implementation, opioid infusion rates were decreased and benzodiazepine infusions were nearly eliminated. Dexmedetomidine use and infusion rates did not change. Although mean duration of sedative drug infusions did not change with implementation, the frequency of high outliers was diminished. Duration of mechanical ventilation, length of ICU stay (outcome measures), and the frequency of unplanned extubation (balancing measure) were not affected by implementation. Conclusions: Implementation of a pediatric comfort algorithm reduced opioid and benzodiazepine dosing, without compromising safety for postoperative pediatric cardiac surgical patients. abstract_id: PUBMED:37882809 Intensive Care Unit Analgosedation After Cardiac Surgery in Children with Williams Syndrome : a Matched Case-Control Study. Objective: Cardiovascular abnormalities are common in patients with Williams syndrome and frequently require surgical intervention necessitating analgesia and sedation in a population with a unique neuropsychiatric profile, potentially increasing the risk of adverse cardiac events during the perioperative period. Despite this risk, the overall postoperative analgosedative requirements in patients with WS in the cardiac intensive care unit have not yet been investigated. Our primary aim was to examine the analgosedative requirement in patients with WS after cardiac surgery compared to a control group. Our secondary aim was to compare the frequency of major ACE and mortality between the two groups. Design: Matched case-control study. Setting: Pediatric CICU at a Tertiary Children's Hospital. Patients: Patients with WS and age-matched controls who underwent cardiac surgery and were admitted to the CICU after cardiac surgery between July 2014 and January 2021. Interventions: None. Measurements And Main Results: Postoperative outcomes and total doses of analgosedative medications were collected in the first six days after surgery for the study groups. Median age was 29.8 (12.4-70.8) months for WS and 23.5 (11.2-42.3) months for controls. 
Across all study intervals (48 h and first 6 postoperative days), there were no differences between groups in total doses of morphine equivalents (5.0 mg/kg vs 5.6 mg/kg, p = 0.7 and 8.2 mg/kg vs 10.0 mg/kg, p = 0.7), midazolam equivalents (1.8 mg/kg vs 1.5 mg/kg, p = 0.4 and 3.4 mg/kg vs 3.8 mg/kg, p = 0.4), or dexmedetomidine (20.5 mcg/kg vs 24.4 mcg/kg, p = 0.5 and 42.3 mcg/kg vs 39.1 mcg/kg, p = 0.3). There was no difference in frequency of major ACE or mortality. Conclusions: Patients with WS received similar analgosedative medication doses compared with controls. There was no significant difference in the frequency of major ACE (including cardiac arrest, extracorporeal membrane oxygenation, and surgical re-intervention) or mortality between the two groups, though these findings must be interpreted with caution. Further investigation is necessary to elucidate the adequacy of pain/sedation control, factors that might affect analgosedative needs in this unique population, and the impact on clinical outcomes. abstract_id: PUBMED:19593247 Dexmedetomidine sedation in children after cardiac surgery. Objective: To study the efficacy and safety of dexmedetomidine before and after early extubation after pediatric cardiac surgery. Design: Prospective, observational study. Setting: University hospital pediatric intensive care. Participants: Infants and children undergoing cardiac surgery. Interventions: The 141 patients, depending on the treatment period, were divided between: 1) usual, postoperative, continuous, intravenous sedation with chlorpromazine, midazolam, or fentanyl (n = 85); and 2) treatment with dexmedetomidine, 0.4 to 0.6 microg/kg/hr (n = 56). Sedation was titrated to reach a Ramsay score of 4 or 5 by administering rescue boluses, as needed. Measurements And Main Results: The primary and secondary study end points were efficacy of sedation and frequency of adverse events, respectively. The numbers of rescue boluses needed and the proportion of ineffectively sedated patients were similar in both groups. The frequency of bradycardia or hypotension in the dexmedetomidine group was 21.4% (8.2% in usual sedative group, p = .04), requiring interventions to restore hemodynamic stability in 5.3% of patients (0% in usual sedative group, p = .06). Rates of respiratory depression (8.2% vs. 0%, p = .04) and involuntary movements (15.3% vs. 3.6%, p = .01) were higher in the usual sedation group. Conclusions: A usual sedation regimen and dexmedetomidine were similarly efficacious. Although dexmedetomidine was associated with a lower rate of respiratory depression, it caused a higher rate of adverse hemodynamic events, which might be a concern in hemodynamically unstable patients. abstract_id: PUBMED:33891134 Changes in Sedation Practices in Association with Delirium Screening in Infants After Cardiopulmonary Bypass. Sedation in the cardiac intensive care unit (CICU) is necessary to keep critically ill infants safe and comfortable. However, long-term use of sedatives may be associated with adverse neurodevelopmental outcomes. We aimed to examine sedation practices in the CICU after the implementation of the Cornell Assessment of Pediatric Delirium (CAPD). We hypothesize the use of the CAPD would be associated with a decrease in sedative weans at CICU discharge. This is a single institution, retrospective cohort study. The study inclusion criteria were term infants, birthweight > 2.5 kg, cardiopulmonary bypass (CPB), and mechanical ventilation (MV) on postoperative day zero.
During the study period, 50 and 35 patients respectively, met criteria pre- and post-implementation of CAPD screening. Our results showed a statistically significant increase in the incidence of sedative habituation wean at CICU discharge after CAPD implementation (24% vs. 45.7%, p = 0.036). There was a statistically significant increase in exposure to opiate (56% vs. 88.6%, p = 0.001) and dexmedetomidine infusions (52% vs 80%, p = 0.008), increased likelihood of clonidine use at CICU discharge (OR 9.25, CI 2.39-35.84), and increase in the duration of intravenous sedative infusions (8.1 days vs. 5.1 days, p = 0.04) No statistical difference was found in exposure to fentanyl (42% vs. 58.8%, p = 0.13) or midazolam infusions (22% vs. 25.7%, p = 0.691); and there was no change in benzodiazepine or opiate use at CICU discharge or dosage. The prevalence of delirium in the CAPD cohort was 92%. CAPD implementation in the CICU was associated with changes in sedation practices, specifically an increase in the use of dexmedetomidine, which possibly explains the increased clonidine weans at CICU discharge. This is the first report of the association between CAPD monitoring and changes in sedative practices. Multi-center prospective studies are recommended to evaluate sedative practices, delirium, and its effects on neurodevelopment. abstract_id: PUBMED:16149752 Sedation and analgesia in the pediatric intensive care unit. Various clinical situations may arise in the PICU that necessitate the use of sedation, analgesia, or both. Although there is a large clinical experience with midazolam in the PICU population and it remains the most commonly used benzodiazepine in this setting, lorazepam may provide an effective alternative, with a longer half-life and more predictable pharmacokinetics without the concern of active metabolites. However, there are limited reports regarding its use in the PICU population, and concerns exist regarding the potential for toxicity related to its diluent, propylene glycol. Although the synthetic opioid fentanyl frequently is chosen for use in the PICU setting because of its hemodynamic stability, preliminary data suggest morphine may have a slower development of tolerance and may cause fewer withdrawal symptoms than fentanyl. Morphine's safety profile includes long-term follow-up studies that have demonstrated no adverse central nervous system developmental effects from its use in neonates and infants. In the critically ill infant at risk following surgery for congenital heart disease, clinical experience supports the use of the synthetic opioids, given their ability to modulate PVR and prevent pulmonary hypertensive crisis. Alternatives to the benzodiazepines and opioids include ketamine, pentobarbital, or dexmedetomidine. Ketamine may be useful for patients with hemodynamic instability or airway reactivity. There are limited reports regarding the use of pentobarbital in the PICU, with one study raising concerns of a high incidence of adverse effects associated with its use. Propofol has gained great favor in the adult population as a means of providing deep sedation while allowing for rapid awakening; however, its routine use is not recommended because of its potential association with "propofol infusion syndrome." As the pediatric experience increases, it appears that there will be a role for newer agents such as dexmedetomidine. 
abstract_id: PUBMED:36120285 Propofol in the Pediatric Intensive Care Unit, a Safe and Effective Agent in Reducing Pain and Sedation Infusions: A Single-Center Retrospective Study. Introduction Propofol has long been used as an anesthetic agent during pediatric surgery. Its use in pediatric intensive care units has been largely controversial. A beneficial use of propofol is to facilitate weaning of other pain and sedation infusions such as opiates and benzodiazepines. However, some have advocated to not use propofol due to fear of possible adverse effects including propofol infusion syndrome and hemodynamic instability. The purpose of this study was to determine both the safety of propofol infusions in critically ill pediatric patients, as well as the change in the requirement of other pain and sedation infusions by use of a propofol infusion. Methods Single-center, retrospective data (January 2011 to January 2020) was obtained manually using a study-specific data extraction tool created for electronic medical records. The data obtained included variables of interest that measured physiological parameters and pain/sedation infusion (morphine, fentanyl, hydromorphone, midazolam, and dexmedetomidine) rates during three time periods: before propofol initiation, immediately after discontinuation, and four hours after discontinuation. The physiological parameters were then compared to the pain and sedation infusion rates using paired Wilcoxon signed-rank tests. Results There was a total of 33 patients with an average age of 11.1 years who were given a median initial propofol infusion of 50 mcg/kg/min with a peak dose of 75 mcg/kg/min over an average of eight hours. Age had a weak and insignificant correlation with initial rate and duration and a moderate and significant correlation with peak rate and duration. Physiological parameters did not vary at any time point measured. There was a significant reduction in other pain and sedation infusions after discontinuation of propofol. Conclusion Propofol infusions are hemodynamically tolerated and the majority of patients who are on other pain and sedation infusions tolerate complete discontinuation of these infusions following propofol discontinuation. abstract_id: PUBMED:31060934 Selected 2018 Highlights in Congenital Cardiac Anesthesia. THIS ARTICLE IS a review of the highlights of pertinent literature published during the 12 months of 2018 that is of interest to the congenital cardiac anesthesiologist. During a search of the US National Library of Medicine PubMed database, several topics that displayed significant contributions to the field in 2018 emerged. The authors of the present review consider the following topics noteworthy: the patient with high-risk congenital heart disease (CHD) presenting for noncardiac surgery, cardiopulmonary resuscitation in infants and children with CHD, dexmedetomidine use in pediatric patients, point-of-care lung ultrasound, and regional anesthesia in pediatric cardiac surgery. Answer: Yes, dexmedetomidine can be used in infants after cardiac surgery in a pediatric cardiac intensive care unit (CICU). Several studies have evaluated the clinical response of dexmedetomidine alone or in combination with conventional sedatives/analgesics after cardiac surgery in this patient population. A retrospective study assessed the use of dexmedetomidine in infants and neonates after cardiac surgery and found it to be well tolerated in both intubated and nonintubated patients. 
It provided an adequate level of sedation/analgesia either alone or in combination with low-dose conventional agents (PUBMED:19295456). Another study found that dexmedetomidine was a useful sedative in children requiring mechanical ventilation, allowing for a reduction or elimination of other sedatives, and was particularly useful in children with chronic neurologic impairments. The study reported no clinically significant effects on blood pressure or heart rate (PUBMED:18154474). Furthermore, a study that evaluated the clinical effects of dexmedetomidine combined with midazolam in postoperative intensive care following pediatric cardiac surgery found that dexmedetomidine provided favorable sedative properties with a reduced requirement for concomitant midazolam and stable hemodynamics with tachycardia prevention (PUBMED:25939906). Another study highlighted that dexmedetomidine sedation was efficacious and associated with a lower rate of respiratory depression, although it caused a higher rate of adverse hemodynamic events, which might be a concern in hemodynamically unstable patients (PUBMED:19593247). Additionally, a quality improvement project that implemented a comfort algorithm, which included the initiation of dexmedetomidine infusion postoperatively, resulted in reduced opioid and benzodiazepine dosing without compromising safety for postoperative pediatric cardiac surgical patients (PUBMED:30672840). In summary, dexmedetomidine is a viable option for sedation in infants after cardiac surgery in the CICU, and it can be used alone or in combination with other sedatives/analgesics. It is important to monitor for potential adverse effects, particularly in hemodynamically unstable patients, and to adjust dosing accordingly.
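The dexmedetomidine regimens above are reported as weight-based infusion rates (for example 0.4-0.6 µg/kg/hr). Purely as an arithmetic illustration of how such a rate maps to an infusion-pump setting, a sketch follows; the drug concentration is an assumed example value, the numbers are not drawn from the studies, and nothing here is clinical guidance.

```python
def infusion_pump_rate_ml_per_hr(dose_mcg_per_kg_hr: float,
                                 weight_kg: float,
                                 concentration_mcg_per_ml: float = 4.0) -> float:
    """Convert a weight-based dose (mcg/kg/hr) into a pump rate (mL/hr).

    The default concentration of 4 mcg/mL is an assumed example value;
    actual preparations vary and must follow local protocols.
    """
    return dose_mcg_per_kg_hr * weight_kg / concentration_mcg_per_ml

# Example using values in the reported ranges: 0.5 mcg/kg/hr for a 5.5 kg infant
# -> 0.5 * 5.5 / 4.0 = 0.69 mL/hr (rounded)
print(round(infusion_pump_rate_ml_per_hr(0.5, 5.5), 2))
```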
Instruction: The use of composite meshes in laparoscopic repair of abdominal wall hernias: are there differences in biocompatibily? Abstracts: abstract_id: PUBMED:18806942 The use of composite meshes in laparoscopic repair of abdominal wall hernias: are there differences in biocompatibily?: experimental results obtained in a laparoscopic porcine model. Introduction: In recent years, laparoscopic repair of abdominal wall hernias has become increasingly established in routine clinical practice thanks to the myriad advantages it confers. Apart from the risk of intestinal damage following adhesiolysis, to date no information is available on the best way of preventing the formation of new adhesions in the vicinity of the implanted meshes. Numerous experimental investigations, mainly conducted on an open small-animal model, have demonstrated the advantages of coating meshes, inter alia with absorbable materials, compared with uncoated polypropylene meshes. In our established laparoscopic porcine model we set about investigating three of these meshes, which are already available on the market. Materials And Methods: In total, 18 domestic pigs underwent laparoscopic surgery and three different composite meshes were tested in each case on six animals (Dynamesh IPOM, Proceed, Parietene Composite). At 4 months, postmortem diagnostic laparoscopy was carried out, followed by full-wall excision of the specimens. Planimetric analysis was conducted to investigate the size of the entire surface area and the extent of adhesions. Histological investigations were performed on five sections for each specimen. These focused on the partial volumes of inflammatory cells, the proliferation marker Ki67, apoptotic index, inflammatory cell marker CD68 and transforming growth factor beta (TGF-beta) as a marker of the extracellular matrix. Results: A similar value of 14% was obtained for shrinkage of Dynamesh IPOM and Parietene Composite, while Proceed showed a 25% reduction in its surface area. Markedly lower values of 12.8% were obtained for Parietene Composite in respect of adhesions to the greater omentum, compared with 31.7% for Proceed and 33.2% for Dynamesh IPOM (p = 0.01). Overall, Parietene Composite performed best in the histological and immunhistochemistry tests. Conclusions: On the whole, all composite meshes showed evidence of good biocompatibility. However, none of the coatings was completely able to prevent adhesions. Coating of polypropylene meshes with collagen appears to confer significant advantages compared with other coatings. abstract_id: PUBMED:29388080 What is the evidence for the use of biologic or biosynthetic meshes in abdominal wall reconstruction? Introduction: Although many surgeons have adopted the use of biologic and biosynthetic meshes in complex abdominal wall hernia repair, others have questioned the use of these products. Criticism is addressed in several review articles on the poor standard of studies reporting on the use of biologic meshes for different abdominal wall repairs. The aim of this consensus review is to conduct an evidence-based analysis of the efficacy of biologic and biosynthetic meshes in predefined clinical situations. Methods: A European working group, "BioMesh Study Group", composed of invited surgeons with a special interest in surgical meshes, formulated key questions, and forwarded them for processing in subgroups. In January 2016, a workshop was held in Berlin where the findings were presented, discussed, and voted on for consensus. 
Findings were set out in writing by the subgroups followed by consensus being reached. For the review, 114 studies and background analyses were used. Results: The cumulative data regarding biologic mesh under contaminated conditions do not support the claim that it is better than synthetic mesh. Biologic mesh use should be avoided when bridging is needed. In inguinal hernia repair biologic and biosynthetic meshes do not have a clear advantage over the synthetic meshes. For prevention of incisional or parastomal hernias, there is no evidence to support the use of biologic/biosynthetic meshes. In complex abdominal wall hernia repairs (incarcerated hernia, parastomal hernia, infected mesh, open abdomen, enterocutaneous fistula, and component separation technique), biologic and biosynthetic meshes do not provide a superior alternative to synthetic meshes. Conclusion: The routine use of biologic and biosynthetic meshes cannot be recommended. abstract_id: PUBMED:38005054 A Review of Abdominal Meshes for Hernia Repair-Current Status and Emerging Solutions. Abdominal hernias are common issues in the clinical setting, burdening millions of patients worldwide. Associated with pain, decreased quality of life, and severe potential complications, abdominal wall hernias should be treated as soon as possible. Whether an open repair or laparoscopic surgical approach is tackled, mesh reinforcement is generally required to ensure a durable hernia repair. Over the years, numerous mesh products have been made available on the market and in clinical settings, yet each of the currently used meshes presents certain limitations that reflect on treatment outcomes. Thus, mesh development is still ongoing, and emerging solutions have reached various testing stages. In this regard, this paper aims to establish an up-to-date framework on abdominal meshes, briefly overviewing currently available solutions for hernia repair and discussing in detail the most recent advances in the field. Particularly, there are presented the developments in lightweight materials, meshes with improved attachment, antimicrobial fabrics, composite and hybrid textiles, and performant mesh designs, followed by a systematic review of recently completed clinical trials. abstract_id: PUBMED:29501796 Biological meshes for abdominal hernia: Lack of evidence-based recommendations for clinical use. Background: In the clinical literature on abdominal hernia repair, no sound criteria have been established to support the use of biological meshes as opposed to synthetic ones. Furthermore, the information on biological meshes is quite scarce, and so their place in therapy has not yet been defined. Methods: The treatment of primary and incisional ventral hernia was the target intervention evaluated in our analysis. Our study consisted of the following phases: a) Identification of the biologic meshes available on the market; b) Literature search focused on efficacy and safety of these meshes; c) Analysis of the findings derived from the literature search. The information collected this way was reviewed narratively, and presented according to standard meta-analysis. The main end-points of our analysis included infection of surgical wound at 1 month and recurrence at 12 months. Results: Our clinical literature comprised 11 trials that evaluated 5 biological meshes: Permacol (706 patients), Strattice (324 patients), Surgisis (44 patients), Tutomesh (38 patients) and Xenmatrix (22 patients). These studies generally showed a poor methodological quality. 
Surgical wound infection showed a wide between-study variability (95%CI: from 12.0% to 22.9%). Also the 12-month relapse rate demonstrated a wide 95%CI (from 5.0% to 19.9%). A significantly lower rate of recurrence at 12 months was found for Permacol compared with Strattice (rate difference: -14.2%; 95%CI: -22.1% to -6.2%). Discussion: Our analysis provided an overview of 5 biological meshes currently available on the market. The different types of meshes showed a marked statistical variability in the clinical outcomes. Hence, nearly all comparisons between different meshes in the two clinical end-points did not reach statistical significance. One exception was represented by the finding that cross-linked meshes had a significantly lower recurrence rate at 12 months than non-cross-linked meshes. abstract_id: PUBMED:28751222 The use of a composite synthetic mesh in the vicinity of bowel - For repair and prophylaxis of parastomal hernias. Does it increase the risk of short term infective complications? Aims: The use of synthetic meshes in potentially infected operative fields such as in the vicinity of large bowel, is controversial. This study describes our experience with the use of a synthetic composite mesh for prophylaxis and repair of parastomal hernias. Methods: Data were collected retrospectively over a 7-year period from 2008 to 2015. An IPOM (DynaMesh™) was used either during the formation of the stoma to reinforce the abdominal wall around the stoma or during the surgical repair of existing parastomal hernias, using keyhole or sandwich technique. Majority of meshes were placed laparoscopically. Clinical data and outcomes any stoma wound complications were collected. Results: Forty seven patients were included with a male to female ratio of 34:13. Median age was 66 years (38-91 years) with median follow-up of 17 months (3-73 months). Twenty seven patients had a prophylactic mesh placement (PMP) around colostomy after resection of colorectal cancer. None of these patients had any wound complications. Twenty patients had repair of parastomal hernias (RPH). One patient (1/20) in this group had a superficial wound infection around the stoma site and underwent an incision and drainage. One patient developed seroma and one had parastomal wound haematoma. Conclusions: The use of a composite synthetic mesh using a laparoscopic IPOM technique for the prophylaxis and treatment of parastomal hernias, even in a clean contaminated surgical field, is safe and feasible. abstract_id: PUBMED:30500445 Meshes in a mess: Mesenchymal stem cell-based therapies for soft tissue reinforcement. Surgical meshes are frequently used for the treatment of abdominal hernias, pelvic organ prolapse, and stress urinary incontinence. Though these meshes are designed for tissue reinforcement, many complications have been reported. Both differentiated cell- and mesenchymal stem cell-based therapies have become attractive tools to improve their biocompatibility and tissue integration, minimizing adverse inflammatory reactions. However, current studies are highly heterogeneous, making it difficult to establish comparisons between cell types or cell coating methodologies. Moreover, only a few studies have been performed in clinically relevant animal models, leading to contradictory results. Finally, a thorough understanding of the biological mechanisms of mesenchymal stem cells in the context of foreign body reaction is lacking. 
This review aims to summarize in vitro and in vivo studies involving the use of differentiated and mesenchymal stem cells in combination with surgical meshes. According to preclinical and clinical studies and considering the therapeutic potential of mesenchymal stem cells, it is expected that these cells will become valuable tools in the treatment of pathologies requiring tissue reinforcement. STATEMENT OF SIGNIFICANCE: The implantation of surgical meshes is the standard procedure to reinforce tissue defects such as hernias. However, an adverse inflammatory response secondary to this implantation is frequently observed, leading to a strong discomfort and chronic pain in the patients. In many cases, an additional surgical intervention is needed to remove the mesh. Both differentiated cell- and stem cell-based therapies have become attractive tools to improve biocompatibility and tissue integration, minimizing adverse inflammatory reactions. However, current studies are incredibly heterogeneous and it is difficult to establish a comparison between cell types or cell coating methodologies. This review aims to summarize in vitro and in vivo studies where differentiated and stem cells have been combined with surgical meshes. abstract_id: PUBMED:26859026 EFFECTS OF ETHYLENE OXIDE RESTERILISATION AND IN-VITRO DEGRADATION ON MECHANICAL PROPERTIES OF PARTIALLY ABSORBABLE COMPOSITE HERNIA MESHES. Background: Prosthetic mesh repair for abdominal wall hernias is widely used because of its technical simplicity and low hernia recurrence rates. The most commonly used material is pure polypropylene mesh; however, newer composite materials are recommended by some centers because of their advantages. However, these meshes are more expensive than pure polypropylene meshes. Resterilisation of a pure polypropylene mesh has been shown to be quite safe, and many centers prefer slicing a large mesh into smaller pieces that are suitable for the hernia type or defect size. Nevertheless, there are no data about the safety of composite meshes after resterilisation. Objective: To investigate the effects of resterilisation and in vitro degradation in phosphate buffered saline solution on the physical structure and the mechanical properties of partially absorbable lightweight meshes. Design: Laboratory-based research. Subjects: Two composite meshes were used in the study: one mesh consisted of monofilament polypropylene and monofilament polyglecaprone--a copolymer of glycolide and epsilon (ε)-caprolactone--(Ultrapro®, 28 g/m2, Ethicon, Hamburg, Germany), and the other one consisted of multifilament polypropylene and multifilament polyglactine (Vypro II®, 30 g/m2, Ethicon, Hamburg, Germany). Two large meshes were cut into rectangular specimens sized 50x20 mm for mechanical testing and 20x20 mm for in vitro degradation experiments. Meshes were divided into a control group with no resterilisation and a gas resterilisation group. Ethylene oxide gas sterilisation was performed at 55°C for 4.5 hours. In vitro degradation in 0.01 M phosphate buffered saline (PBS, pH 7.4) solution at 37 ± 1°C for 8 weeks was applied to one subgroup in each mesh group. Tensiometric measurements and scanning electron microscopic evaluations were completed for control and resterilisation specimens. Results: Regardless of resterilisation, when meshes were exposed to in vitro degradation, all mechanical parameters decreased significantly.
The highest reduction in mechanical properties was observed for Ultrapro due to the degradation of the absorbable polyglecaprone and polyglactin parts of these meshes. It was observed that resterilisation by ethylene oxide did not make a significant difference to the degradation characteristics, and almost similar physical structures were observed for resterilised and non-resterilised meshes. For Vypro II meshes, no significant mechanical difference was observed between resterilised and non-resterilised meshes after degradation, while resterilised Ultrapro meshes exhibited stronger characteristics than non-resterilised counterparts after degradation. Conclusion: Resterilisation with ethylene oxide did not affect the mechanical properties of partially absorbable composite meshes. No important surface changes were observed in scanning electron microscopy after resterilisation. abstract_id: PUBMED:22772495 Correction of parastomal hernia using meshes. The incidence of parastomal hernia in ileal conduit urinary diversion ranges from 4% to 16%. Surgical correction is necessary in about one third of cases and different techniques of surgical reconstruction have been described. Primary fascial repair has a high recurrence rate of 46-100% whereas stoma translocation is associated with complication rates of up to 88%. The use of alloplastic material (usually polypropylene meshes) has reduced the recurrence rate from up to 100% for primary fascial repair and 71% for stoma translocation down to 33%. Composite meshes consist of two layers, a polypropylene layer and an expanded polytetrafluoroethylene (ePTFE) layer. The former is placed against the abdominal wall for permanent reinforcement by ingrowing connective tissue and the ePTFE layer is placed against the abdominal organs, preventing adhesions with the bowel. The intraperitoneal placement of such composite meshes is a standardized, simplified, gentle and controllable surgical procedure. This article reports experiences with the surgical correction of parastomal hernias in ileal conduits using composite meshes. abstract_id: PUBMED:26111309 Emerging Trends in Abdominal Wall Reinforcement: Bringing Bio-Functionality to Meshes. Abdominal wall hernia is a recurrent issue world-wide and requires the implantation of over 1 million meshes per year. Because permanent meshes such as polypropylene and polyester are not free of complications after implantation, many mesh modifications and new functionalities have been investigated over the last decade. Indeed, mesh optimization is the focus of intense development and the biomaterials utilized are now envisioned as being bioactive substrates that trigger various physiological processes in order to prevent complications and to promote tissue integration. In this context, it is of paramount interest to review the most relevant bio-functionalities being brought to new meshes and to open new avenues for the innovative development of the next generation of meshes with enhanced properties for functional abdominal wall hernia repair. abstract_id: PUBMED:34635415 Cell-based therapies for reinforcing the treatment efficacy of meshes in abdominal wall hernias: A systematic review and meta-analysis. To achieve a tension-free repair and reduce the recurrence rate of abdominal wall hernias (AWHs), various kinds of meshes have been applied in surgery. However, these meshes are reported to have problems with adhesion, infection, chronic pain and foreign body sensation.
Recently, the introduction of cellular components on meshes seems to provide a new alternative to resolve these problems. This study aimed to evaluate the treatment efficacy of meshes seeded with cells (mesh-cell group) for AWHs, compared to meshes without cells (mesh group). Cochrane Library, Web of Science and PubMed were searched for studies that provided data about meshes, cells and AWHs. Twenty-six studies involving 578 animals were included. We found that the mesh-cell group could better control hernia recurrent than the mesh group (OR = 0.25, 95% CI = 0.15-0.42). Although the mesh-cell group did not reduce the incidence of adhesions (OR = 0.67, 95% CI = 0.26-1.74), it alleviated the extent of adhesions (WMD = -1.48, 95% CI = -1.86 to -1.10). In addition, the capillary density of mesh-cell group was also higher than that of mesh group (WMD = 26.27, 95% CI = 14.45-38.09). For incidence of infection, the two groups had no significant differences (OR = 0.94, 95% CI = 0.39-2.31). On the basis of our current evidence, AWHs were likely to receive a satisfied outcome in animal models when treated by meshes seeded with cells. Future studies with human trial data are needed to validate these findings. Answer: The use of composite meshes in laparoscopic repair of abdominal wall hernias does show differences in biocompatibility. In an experimental study using a laparoscopic porcine model, three different composite meshes were tested: Dynamesh IPOM, Proceed, and Parietene Composite. The results indicated that while all composite meshes demonstrated good biocompatibility, there were differences among them. Parietene Composite had the lowest percentage of adhesions to the greater omentum (12.8%) compared to Proceed (31.7%) and Dynamesh IPOM (33.2%), and it also performed best in histological and immunohistochemistry tests. The study concluded that coating polypropylene meshes with collagen may offer significant advantages over other coatings (PUBMED:18806942). However, it is important to note that none of the coatings completely prevented adhesions, indicating that while there are differences in biocompatibility, the issue of adhesion formation is not entirely resolved by current composite mesh technologies. Additionally, the routine use of biologic and biosynthetic meshes, which are alternatives to synthetic meshes, cannot be recommended based on the cumulative data, as they do not provide a superior alternative to synthetic meshes in complex abdominal wall hernia repairs (PUBMED:29388080). Emerging solutions in abdominal mesh development aim to address the limitations of current meshes, including biocompatibility issues. These solutions include lightweight materials, improved attachment, antimicrobial fabrics, and composite and hybrid textiles (PUBMED:38005054). Despite these advancements, the clinical use of biological meshes lacks evidence-based recommendations, as the information on their efficacy and safety is scarce (PUBMED:29501796). In summary, there are differences in biocompatibility among composite meshes used in laparoscopic repair of abdominal wall hernias, but challenges remain in preventing adhesion formation and establishing clear advantages over traditional synthetic meshes.
Instruction: Is there a role for tamsulosin after shock wave lithotripsy in the treatment of renal and ureteral calculi? Abstracts: abstract_id: PUBMED:17509314 Is there a role for tamsulosin in shock wave lithotripsy for renal and ureteral calculi? Purpose: We evaluated the effect of the alpha-blocker tamsulosin on stone clearance, analgesic requirements and steinstrasse in shock wave lithotripsy for solitary renal and ureteral calculus. Materials And Methods: A prospective, double-blind, randomized placebo controlled study was performed during 1 year involving 60 patients with a solitary renal or ureteral calculus undergoing shock wave lithotripsy. The study group (30) received 0.4 mg tamsulosin and the control group (30) received placebo daily until stone clearance or for a maximum of 30 days. An oral preparation of dextropropoxyphene hydrochloride and acetaminophen was the analgesic used on an on-demand basis. The parameters assessed were stone size, position, clearance time, effect on steinstrasse and analgesic requirement. Results: The overall clearance rate was 96.6% (28 of 29) in the study group and 79.3% (23 of 29) in the control group (p = 0.04). With larger stones 11 to 24 mm the difference in the clearance rate was significant (p = 0.03) but not so with the smaller stones 6 to 10 mm (p = 0.35). The average dose of analgesic used was lower with tamsulosin than with controls, without statistical significance. Steinstrasse resolved spontaneously in the tamsulosin group whereas 25% (2 of 8) required intervention in the placebo group. There was no difference between the 2 groups with regard to age, stone size or location. Conclusions: The alpha-blocker tamsulosin seemed to facilitate stone clearance, particularly with larger stones during shock wave lithotripsy for renal and ureteral calculus. It also appeared to improve the outcome of steinstrasse. Tamsulosin may have a potential role in routine shock wave lithotripsy. abstract_id: PUBMED:33294076 Does silodosin offer better results than tamsulosin as medical expulsive treatment after shock wave lithotripsy for single distal ureteric stones? Introduction: Different antagonists of α-adrenergic receptors (α-blockers) have been used as medical expulsive treatment (MET) after extracorporeal shock wave lithotripsy (ESWL). Aim: To retrospectively evaluate the expulsion rate of fragments after extracorporeal shock wave lithotripsy performed for single ureteral stones followed by different medical expulsive treatments. Material And Methods: We retrospectively analyzed stone expulsion rates of 190 patients treated by shock wave lithotripsy (SWL) for single, 5 to 10 mm, symptomatic and uncomplicated distal ureteric stones, treated with tamsulosin 0.4 mg, silodosin 8 mg or silodosin 4 mg as MET. Besides the stone-free rate after 4 weeks of treatment, we also investigated the pain intensity using the visual analogue scale (VAS), adverse events induced by the medication, safety of drug administration and the reasons for possible early treatment discontinuation. Results: Silodosin 8 mg and tamsulosin 0.4 mg have similar results in terms of stone-free rate. For silodosin 4 mg the stone-free rate was significantly lower than for the previous two drugs. In patients treated with silodosin 4 mg the VAS was significantly higher than in patients treated with silodosin 8 mg or tamsulosin 0.4 mg, for all the follow-up visits.
Conclusions: Alpha-blocker treatment after ESWL with silodosin 8 mg offers a similar stone-free rate compared with tamsulosin 0.4 mg, being well tolerated. A lower dose of silodosin (4 mg) has significantly poorer results, irrespective of ureteric stone size, with more frequent renal colic and severe pain. abstract_id: PUBMED:35576073 A randomized trial of adjuvant tamsulosin as a medical expulsive therapy for renal stones after shock wave lithotripsy. Adjuvant medical expulsive therapy (MET) for shock wave lithotripsy (SWL) is controversial. With limited use of computed tomography (CT), the stone-free rate (SFR) becomes overestimated. Herein we evaluate tamsulosin post-SWL for renal stones, using CT to assess the SFR. A randomized controlled trial (NCT05032287) was carried out for renal stone patients amenable to SWL. Patients were allocated after the 1st session of SWL to receive tamsulosin 0.4 mg or placebo once daily from the 1st day of SWL, for 3 months or until becoming stone free. The primary outcome was SFR, defined by the presence of residual fragments (RF) ≤ 3 mm (3C-SFR). The 3C-SFRs were 73.8% and 59.6% in the tamsulosin and placebo groups, respectively (p = 0.03). The median (IQR) pain scores were 3 (3, 5) and 5 (3, 6) in the tamsulosin and placebo groups, respectively (p = 0.04). However, post-SWL complications and the need for add-on analgesia showed no significant differences between groups. The median times to stone-free status were 30 days (95% CI: 27.29-32.71) in the tamsulosin arm and 36 days (95% CI: 31.01-40.99) in the placebo arm, HR = 1.42 (95% CI: 1.02-1.98). Tamsulosin had more reversible adverse effects compared with placebo (p = 0.03). In our study, the use of tamsulosin as MET following SWL facilitates expulsion of retained residual fragments. Tamsulosin shortens the time to reach stone-free status and decreases pain scores. However, tamsulosin does not affect the need for add-on IV analgesics and has more reversible adverse effects compared with placebo. abstract_id: PUBMED:25909212 Extracorporeal shock wave lithotripsy in the treatment of renal and ureteral stones. The use of certain technical principles and the selection of favorable cases can optimize the results of extracorporeal shock wave lithotripsy (ESWL). The aim of this study is to review how ESWL works, its indications and contraindications, predictive factors for success, and its complications. A search was conducted on the Pubmed® database between January 1984 and October 2013 using "shock wave lithotripsy" and "stone" as keywords. Only articles with a high level of evidence, in English, and conducted in humans, such as clinical trials or review/meta-analysis, were included. To optimize ESWL results, several technical factors including the type of lithotripsy device, energy and frequency of pulses, coupling of the patient to the lithotriptor, location of the calculus, and type of anesthesia should be taken into consideration. Other factors related to the patient, such as stone size and density, skin-to-stone distance, anatomy of the excretory path, and kidney anomalies, are also important. Antibiotic prophylaxis is not necessary, and routine double J stent placement before the procedure is not routinely recommended. Alpha-blockers, particularly tamsulosin, are useful for stones >10 mm. Minor complications may occur following ESWL, which generally respond well to clinical interventions. The relationship between ESWL and hypertension/diabetes is not well established.
abstract_id: PUBMED:24385155 The effect of tamsulosin on pain and clearance according to ureteral stone location after shock wave lithotripsy. Background: Medical expulsion therapy has shown encouraging results in facilitating spontaneous clearance of ureteral stones after extracorporeal shock wave lithotripsy. However, no other study has yet determined the benefit of medical expulsion therapy for stones in different ureteral locations. Objective: The aim of the study was to evaluate tamsulosin as adjunctive therapy to extracorporeal shock wave lithotripsy (SWL) in terms of pain clearance of stones in the upper, middle, and lower ureter. Methods: Between June 2008 and July 2011, patients with a solitary ureteral stone that was ≥6 mm up to 15 mm and located in the upper, middle, or lower ureter undergoing SWL were evaluated. The patients were randomly allocated to a conservative treatment (group 1) and a tamsulosin treatment group (group 2). Administration of the drug was started immediately after SWL and was continued for a maximum of 28 days. Patients were evaluated for stone clearance, time to stone clearance, and number of SWL sessions. The pain intensity was evaluated by visual analog scale. Results: There were 64 patients in the control group and 59 in the tamsulosin group. The average stone sizes were 10.70 (3.20) mm and 11.40 (3.01) mm (P = 0.24). Group 1 and group 2 received 2507 (984) and 2759 (775) shock waves (P = 0.86), 1.53 (0.8) and 1.49 (0.75) sessions (P = 0.85), respectively. Mean visual analog scale scores and times to clearance were 3.81 (2.74) and 2.73 (2.28) (P = 0.00) and 12.59 (8.63) days and 8.34 (7.60) days (P = 0.00), respectively, for all stones in groups 1 and 2. Only the clearance time of upper ureteral stones between groups showed statistical significance (13.54 [8.32] days vs 7.10 [6.40] days; P = 0.00). Conclusions: Tamsulosin may help in the treatment of all ureteral stones after SWL, particularly stones in the upper ureter, with a shorter time to clearance and less need for analgesic drugs. abstract_id: PUBMED:27216432 Shock-wave lithotripsy: variance within UK practice. The objectives of this study are to determine the current treatment policies of UK shock-wave lithotripsy centres. Fixed-site lithotripter centres in the UK were identified via the national Therapeutic Interventions for Stones of the Ureter (TISU) study (n = 25). Questionnaires were completed regarding current SWL protocols for each centre, including management of anticoagulation, use of antibiotics and analgesia, urine testing, pacemakers, and arterial aneurysms. Data were collected regarding service delivery. Responses were obtained for 21 centres. Most centres use the Storz Modulith (85.7 %). Wide variation was observed in clinical contraindications to SWL, with 47.6 % centres performing SWL in patients with an abdominal aortic aneurysm, 66.7 % performing SWL in patients with a pacemaker, and 66.7 % of centres not performing SWL in asymptomatic patients with a urine dipstick positive for nitrites and leucocytes. The management of anticoagulation pre- and post-SWL showed wide variation, with the omission of anticoagulation ranging from 0 to 10 days pre-SWL. Seventeen distinct analgesia regimens were reported and prophylactic antibiotics are routinely administered in 25.0 % of centres. Tamsulosin is prescribed to all patients in 20.0 % of centres and a further 15.0 % of centres routinely prescribe tamsulosin post-SWL of ureteric stones. 
The included centres undertake SWL a median of 4 days per week and treat a median of six patients per list. Emergency SWL is unavailable in 30.0 % of centres. This observational real-life study has identified a significant disparity in the delivery of SWL throughout the UK, despite high numbers of patients with renal and ureteric stones being treated with this modality. Further studies should address the key areas of controversy, including an assessment of technical training, and facilitate the development of national guidelines to ensure a high level of standardized care for SWL patients. abstract_id: PUBMED:21166579 Is there a role for tamsulosin after shock wave lithotripsy in the treatment of renal and ureteral calculi? Introduction: Our study aimed at defining the role of tamsulosin as adjunctive therapy after extracorporeal shock wave lithotripsy (ESWL) in patients with stones in the kidney and ureter. Materials And Methods: A placebo-controlled, randomized, double-blind clinical trial prospectively performed between February 2008 and September 2009 on 150 patients with 4-20 mm in diameter renal and ureteral stones referred to our ESWL center. After ESWL, all patients randomly assigned to two groups (placebo and tamsulosin). The drugs administration was started immediately after ESWL and was continued for a maximum of 30 days. Results: From 150 patients, 71 in control group and 70 in case group completed the study. Of 71 patients (60.56%) in control group, 43 patients became stone free; and other patients (39.44%) did not succeed in stone expulsion during 12 weeks after ESWL. In case group of 70 patients (71.4%), 50 patients became stone free. Time of stone passage in most of the patients happened between 20th and 30th day in control group (32.6%) and between 10th and 20th day (50%) in case group after ESWL. There is no statistically significant difference between stone passage in two groups (p = 0.116) and location of stone (p = 0.114), but there is statistically significant difference in time of stone passage from onset of treatment in case and control groups (p = 0.002). Conclusion: At last, this study suggested that tamsulosin facilitate earlier clearance of fragments after ESWL. abstract_id: PUBMED:18413683 Efficacy of tamsulosin with extracorporeal shock wave lithotripsy for passage of renal and ureteral calculi. Objective: To review the evidence for the safety and efficacy of adjunctive tamsulosin in enhancing the efficacy of renal and ureteral stone clearance when used with extracorporeal shock wave lithotripsy (ESWL). Data Sources: A search of MEDLINE (1950-January 2008), PubMed (1950-January 2008), and the Iowa Drug Information System (1966-January 2008) was performed using the search terms tamsulosin and extracorporeal shock wave lithotripsy. MeSH headings included lithotripsy and adrenergic alpha-antagonists. Additional references were found by searching bibliographic references of resulting citations. Study Selection And Data Extraction: All studies utilizing tamsulosin therapy after a single session of ESWL or after the development of steinstrasse, an accumulation of stone fragments that obstructs the ureter, were included. Data Synthesis: To date, 5 prospective studies have evaluated the efficacy of tamsulosin combined with ESWL in enhancing the passage of renal and ureteral stones. In one trial, 12-week renal stone clearance was 60% in the control group compared with 78.5% in the tamsulosin group (p = 0.037). 
Among trials that evaluated overall ureteral stone clearance, efficacy rates were 33.3-79.3% in the control groups compared with 66.6-96.6% in the tamsulosin groups. Reports of pain and supplemental analgesic dosing were consistently lower with tamsulosin, but data on the incidence of subsequent retreatment with ESWL or ureteroscopy was rarely reported. Adjunctive tamsulosin particularly enhanced the passage of renal stones 10-24 millimeters in diameter. Overall, tamsulosin was well tolerated. Conclusions: Overall, evidence suggests that adjunctive tamsulosin therapy combined with ESWL is safe and effective in enhancing stone clearance in patients with renal stones 10-24 millimeters in diameter. Evidence regarding ureteral stone clearance is inconclusive, although adjunctive tamsulosin has been reported to reduce painful episodes. Larger prospective trials evaluating different dosages and stone locations, as well as the ability of tamsulosin to reduce repeat ESWL or more invasive methods such as ureteroscopy should be performed. abstract_id: PUBMED:16286100 Role of tamsulosin in treatment of patients with steinstrasse developing after extracorporeal shock wave lithotripsy. Objectives: To evaluate whether tamsulosin, as an alpha(1)-blocker, was effective for the treatment of steinstrasse in the lower ureter after shock wave lithotripsy. Methods: A total of 67 patients (43 men and 24 women) with steinstrasse in the lower portion of the ureters were randomly divided into two groups. Only hydration and tenoxicam (20 mg orally once daily) was given to group 1 (35 patients). Group 2 (32 patients) was also given tamsulosin (0.4 mg daily). All patients were reevaluated and questioned about the number of episodes and severity of ureteral colic and the rates of spontaneous resolution of steinstrasse 6 weeks after beginning treatment. They were asked to score the severity of pain according to a visual analog scale. Results: In 23 (65.7%) of 35 patients in group 1 and in 24 (75%) of 32 patients in group 2, steinstrasse resolved during the first 6 weeks. The resolution rates were not significantly different (P > 0.05) between groups 1 and 2. Group 1 had more ureteral colic episodes than did group 2 while passing their stones. This difference was statistically significant (P < 0.01). Group 1 patients reported significantly greater (P < 0.001) visual analog scale scores than did group 2 patients. Conclusions: The addition of tamsulosin to conservative treatment seemed to be beneficial in terms of the reduced number of ureteral colic episodes and the severity of pain in the patients who developed steinstrasse after shock wave lithotripsy. abstract_id: PUBMED:20497881 Effect of alpha1-adrenergic antagonists on lower ureteral stones with extracorporeal shock wave lithotripsy. Objective: To evaluate the efficiency of alpha1-adrenergic antagonists on stone clearance after extracorporeal shock wave lithotripsy (ESWL) in patients with lower ureteral stones. Methods: A total of 107 patients with solitary lower ureteral stones and who underwent single ESWL sessions were divided into two groups. Group 1 received our standard medical therapy, and group 2 was treated with 0.4 mg/day tamsulosin for a maximum of 2 weeks. All patients were re-evaluated with plain film radiography and ultrasound each week during treatment. Results: Twenty-four of the 52 patients in group 1 (46.2%) and 41 of 55 patients in group 2 (74.5%) (p = 0.002) were found to be stone-free.
Among patients with stones 10-15 mm in diameter, the stone-free rate was 36.4% in group 1 and 73.0% in group 2 (p = 0.003). Average stone expulsion time was 11.6 days and 8.1 days in groups 1 and 2, respectively (p = 0.000). Ureteral colic occurred in 10 patients (19.2%) in group 1 but only 3 patients (5.5%) in group 2 (p = 0.043). The only side effect of tamsulosin was slight dizziness in 2 of the 55 (3.6%) patients in group 2. Conclusion: Adjunctive therapy with alpha1-adrenergic antagonists after ESWL is more effective than, and equally as safe as lithotripsy alone in the treatment of patients with lower ureteral stones. The use of alpha1-adrenergic antagonists is more useful for stones with a large dimension, and can also reduce stone expulsion time and episodes of ureteral colic. Answer: Yes, there is a role for tamsulosin after shock wave lithotripsy (SWL) in the treatment of renal and ureteral calculi. Several studies have evaluated the efficacy of tamsulosin as an adjunctive therapy post-SWL and have found it to be beneficial in various aspects of stone management. Tamsulosin, an alpha-blocker, has been shown to facilitate stone clearance, particularly with larger stones during SWL for renal and ureteral calculus. It also appears to improve the outcome of steinstrasse, a condition where fragments of stones obstruct the ureter after SWL (PUBMED:17509314). Another study found that tamsulosin may help in the treatment of all ureteral stones after SWL, particularly stones in the upper ureter, with a shorter time to clearance and less need for analgesic drugs (PUBMED:24385155). A randomized controlled trial indicated that tamsulosin post-SWL for renal stone using computed tomography (CT) to assess stone-free rates (SFR) showed that tamsulosin facilitated the expulsion of retained residual fragments, shortened the time to reach stone-free status, and decreased pain scores (PUBMED:35576073). Moreover, tamsulosin has been reported to reduce painful episodes and is well tolerated, particularly enhancing the passage of renal stones 10-24 millimeters in diameter (PUBMED:18413683). Furthermore, tamsulosin has been suggested to facilitate earlier clearance of fragments after SWL, although the difference in stone passage between the tamsulosin and control groups was not statistically significant in one study (PUBMED:21166579). Additionally, the use of tamsulosin has been associated with reduced number of ureteral colic episodes and the severity of pain in patients who developed steinstrasse after SWL (PUBMED:16286100). In summary, the evidence suggests that tamsulosin can be an effective adjunctive therapy after SWL, aiding in stone clearance, reducing pain and analgesic requirements, and potentially decreasing the time to achieve a stone-free state.
Instruction: Incidental head and neck (18)F-FDG uptake on PET/CT without corresponding morphological lesion: early predictor of cancer development? Abstracts: abstract_id: PUBMED:29160180 Incidental Findings on FDG PET/CT in Head and Neck Cancer. Objective 18F-fluoro-deoxy-glucose positron emission tomography/computed tomography (FDG PET/CT) imaging is common in head and neck cancer and often identifies incidental findings that necessitate additional patient evaluations. Our goal was to assess the frequency and nature of these incidental imaging findings on FDG-PET/CT. Study Design Retrospective cohort study. Setting Tertiary medical center. Subjects and Methods All patients with head and neck cancer who had undergone FDG-PET/CT imaging between January 2014 and June 2015 at our institution were evaluated for incidental findings. Results A total of 293 patients met criteria; more than one-third (n = 103) had at least 1 finding unrelated to their head and neck cancer, for a total of 134 incidental findings. Incidental findings within the head and neck (33.5% of all) excluding the thyroid were most common: 35% incidental findings were concerning for malignancy; of these, 25.5% were malignant with further workup. Recommendations were given by the head and neck radiologist on 72 (53.7%) findings: 74.5% of potentially malignant findings and 42.5% of benign findings had recommendations for follow-up. Significantly more patients with findings described as malignant were given recommendations for follow-up ( P = .0004). Conclusion Incidental findings on FDG-PET/CT are present in more than one-third of patients with head and neck cancer. More than one-third of incidental findings were concerning for malignancy. This study illustrates how the incidental findings discovered on FDG PET/CT frequently necessitate additional evaluations unrelated to the index head and neck cancer. The impact of these additional assessments on the cost and quality of health care warrants future evaluation. abstract_id: PUBMED:25466367 Correlation of (18)F-BPA and (18)F-FDG uptake in head and neck cancers. Background And Purpose: The aim of this study was to compare the accumulation of 4-borono-2-(18)F-fluoro-phenylalanine ((18)F-BPA) with that of (18)F-fluorodeoxyglucose ((18)F-FDG) in head and neck cancers, and to assess the usefulness of (18)F-FDG PET for screening candidates for boron neutron capture therapy (BNCT). Material And Methods: Twenty patients with pathologically proven malignant tumors of the head and neck were recruited from March 2012 to January 2014. All patients underwent both whole-body (18)F-BPA PET/CT and (18)F-FDG PET/CT within 2weeks of each other. The uptakes of (18)F-BPA and (18)F-FDG at 1h after injection were evaluated using the maximum standardized uptake value (SUVmax). Results: The accumulation of (18)F-FDG was significantly correlated with that of (18)F-BPA. The SUVmax of (18)F-FDG ⩾5.0 is considered to be suggestive of high (18)F-BPA accumulation. Conclusions: (18)F-FDG PET might be an effective screening method performed prior to (18)F-BPA for selecting patients with head and neck cancer for treatment with BNCT. abstract_id: PUBMED:27716085 Clinical significance of incidental [18 F]FDG uptake in the gastrointestinal tract on PET/CT imaging: a retrospective cohort study. 
Background: The frequency and clinically important characteristics of incidental (18)F-fluorodeoxyglucose ([18 F]FDG) positron emission tomography (PET) uptake in the gastrointestinal tract (GIT) on PET/CT imaging in adults remain elusive. Methods: All PET/CT reports from 1/1/2000 to 12/31/2009 at a single tertiary referral center were reviewed; clinical information was obtained from cases with incidental (18)F-FDG uptake in the GIT, with follow-up through October, 2012. Results: Of the 41,538 PET/CT scans performed during the study period, 303 (0.7 %) had incidental GIT uptake. The most common indication for the PET/CT order was cancer staging (226 cases, 75 %), with 74 % for solid and 26 % for hematologic malignancies. Of those with solid malignancy, only 51 (17 %) had known metastatic disease. The most common site of GIT uptake was the colon, and of the 240 cases with colonic uptake, the most common areas of uptake were cecum (n = 65), sigmoid (n = 60), and ascending colon (n = 50). Investigations were pursued for the GIT uptake in 147 cases (49 %), whereas 51 % did not undergo additional studies, largely due to advanced disease. There were 73 premalignant colonic lesions diagnosed in 56 cases (tubular adenoma, n = 36; tubulovillous adenoma with low grade dysplasia, n = 27; sessile serrated adenoma, n = 4; tubulovillous adenoma with high grade dysplasia, n = 3; villous adenoma, n = 3), and 20 cases with newly diagnosed primary colon cancer. All 20 (100 %) patients with malignant colonic lesions had a focal pattern of [18 F]FDG uptake. Among cases with a known pattern of [18 F]FDG uptake, 98 % of those with premalignant lesions had focal [18 F]FDG uptake. Eighteen (90 %) of the cases with newly diagnosed colon cancer were not known to have metastatic disease of their primary tumor. Areas of incidental uptake in the ascending colon had the greatest chance (42 %) of being malignant and premalignant lesions than in any other area. Conclusion: Focality of uptake is highly sensitive for malignant and premalignant lesions of the GIT. In patients without metastatic disease, incidental focal [18]FDG uptake in the GIT on PET/CT imaging warrants further evaluation. abstract_id: PUBMED:25182627 Incidental focal FDG uptake in the parotid glands on PET/CT in patients with head and neck malignancy. Objectives: To evaluate the prevalence and clinical significance of focal parotid lesions identified by (18)F- FDG PET/CT in patients with nonparotid head and neck malignancies. Methods: From 3,638 PET/CT examinations using (18)F-FDG conducted on 1,342 patients with nonparotid head and neck malignancies, we retrospectively identified patients showing incidental focal FDG uptake in the parotid glands. The diagnosis of parotid lesions was confirmed histopathologically or on imaging follow-up. Patient demographics, clinical features, maximum standardized uptake value (SUV(max)) on PET images, size and attenuation on corresponding contrast-enhanced CT images were assessed and correlated with the final diagnosis. Results: The prevalence of incidental focal parotid FDG uptake on PET/CT was 2.1% (95% CI 1.4 - 3.0%). Among 21 patients with focal parotid lesions confirmed histologically or on imaging follow-up, 7 (33.3%) had malignant lesions (all metastases) and 14 (66.7%) had benign lesions (four pleomorphic adenomas, two Warthin's tumours, one benign lymph node, one granulomatous lesion, six lesions without histopathological confirmation). 
There were no significant differences in age, sex, SUV(max) or CT findings between patients with benign and those with malignant lesions. Conclusion: Focal parotid FDG uptake on PET/CT in patients with head and neck malignancy warrants further investigations to ensure adequate therapy for incidental parotid lesions. Key Points: • The prevalence of parotid incidentaloma on PET in head and neck malignancy was 2.1% • The malignancy rate of incidental focal parotid FDG uptake was 33.3% • SUV max could not reliably differentiate malignant from benign incidental parotid lesions. abstract_id: PUBMED:24929443 Clinical values for abnormal ¹⁸F-FDG uptake in the head and neck region of patients with head and neck squamous cell carcinoma. Purpose: Fluorine 18-fluorodeoxyglucose ((18)F-FDG) positron emission tomography (PET)/computed tomography (CT) is used to identify index or second primary cancer (SP) of the head and neck (HN) through changes in (18)F-FDG uptake. However, both physiologic and abnormal lesions increase (18)F-FDG uptake. Therefore, we evaluated (18)F-FDG uptake in the HN region to determine clinical values of abnormal tracer uptake. Methods: A prospective study approved by the institutional review board was conducted in 314 patients with newly diagnosed HN squamous cell carcinoma (HNSCC) and informed consent was obtained from all enrolled patients. The patients received initial staging workups including (18)F-FDG PET/CT and biopsies. All lesions with abnormal HN (18)F-FDG uptake were recorded and most of those were confirmed by biopsies. Diagnostic values for abnormal (18)F-FDG uptake were calculated. Results: Abnormal (18)F-FDG uptake was identified in primary tumors from 285 (91.9%) patients. False-negative results were obtained for 22.3% (23/103) of T1 tumors and 2.2% (2/93) of T2 tumors (P < 0.001). Thirty-eight regions of abnormal (18)F-FDG uptake were identified in 36 (11.5%) patients: the thyroid (n=13), maxillary sinus (n=7), palatine tonsil (n=6), nasopharynx (n=5), parotid gland (n=2) and others (n=5). Synchronous SP of the HN was identified in eight (2.5%) patients: the thyroid (n=5), palatine tonsil (n=2), and epiglottis (n=1). The sensitivity and specificity of (18)F-FDG PET/CT for identification of SPs were 75.0% and 98.7%, respectively. Conclusions: (18)F-FDG PET/CT is a reliable method for tumor staging and identifying SP in the HN region, promoting appropriate therapeutic planning. Additional examinations may be required to identify superficial or small-volume tumors. abstract_id: PUBMED:19305995 Incidental head and neck (18)F-FDG uptake on PET/CT without corresponding morphological lesion: early predictor of cancer development? Purpose: To retrospectively determine whether increased/asymmetric FDG uptake on PET without a correlating morphological lesion on fully diagnostic CT indicates the development of a head and neck malignancy. Methods: In 590 patients (mean age 55.4 +/- 13.3 years) without a head and neck malignancy/inflammation, FDG uptake was measured at (a) Waldeyer's ring, (b) the oral floor, (c) the larynx, and (d) the thyroid gland, and rated as absent (group A), present (group B), symmetric (group B1) or asymmetric (group B2). Differences between groups A and B and between B1 and B2 were tested for significance with the U-test (p < 0.05). An average follow-up of about 2.5 years (mean 29.5 +/- 13.9 months) served as the reference period to determine whether patients developed a head and neck malignancy.
Results: Of the 590 patients, 235 (40%) showed no evidence of enhanced FDG uptake in any investigated site, and 355 (60%) showed qualitatively elevated FDG uptake in at least one site. FDG uptake values (SUV(max), mean+/-SD) for Waldeyer's ring were 3.0 +/- 0.89 in group A (n = 326), 4.5 +/- 2.18 in group B (n = 264; p < 0.01), 5.4 +/- 3.35 in group B1 (n = 177), and 4.1 +/- 1.7 in group B2 (n = 87; p < 0.01). Values for the oral floor were 2.8 +/- 0.74 in group A (n = 362), 4.7 +/- 2.55 in group B (n = 228; p < 0.01), 4.4 +/- 3.39 in group B1 (n = 130), and 5.1 +/- 2.69 in group B2 (n = 98, p = 0.01). Values for the larynx were 2.8 +/- 0.76 in group A (n = 353), 4.2 +/- 2.05 in group B (n = 237; p < 0.01), 4.0 +/- 2.02 in group B1 (n = 165), and 4.6 +/- 2.8 in group B2 (n = 72; p = 0.027). Values for the thyroid were 2.4 +/- 0.63 in group A (n = 404), 3.0 +/- 1.01 in group B (n = 186; p < 0.01), 2.6 +/- 0.39 in group B1 (n = 130), and 4.0 +/- 1.24 in group B2 (n = 56; p < 0.01). One patient developed a palatine tonsil carcinoma (group B1, SUV(max) 3.2), and one patient developed an oral floor carcinoma (group B1, SUV(max) 3.7). Conclusion: Elevated/asymmetric head and neck FDG accumulation without a correlating morphological lesion can frequently be found and does not predict cancer development. In populations in which goitre is endemic, FDG uptake by the thyroid is common and not associated with thyroid cancer. abstract_id: PUBMED:36060082 Incidental Findings on 18F-Fluorocholine PET/CT for Parathyroid Imaging. Introduction 18F-choline positron emission tomography/computed tomography (PET/CT) is an upcoming imaging technique for the localization of hyperfunctioning parathyroid glands. However, 18F-choline is a nonspecific tracer that also accumulates in malignancies, inflammatory lesions, and several other benign abnormalities. The aim of this study was to determine the occurrence and relevance of incidental findings on 18F-choline PET/CT for parathyroid localization. Materials and Methods 18F-choline PET/CTs performed in our center for parathyroid localization from 2015 to 2019 were reviewed. Abnormal uptake of 18F-choline, with or without anatomical substrate on the co-registered low-dose CT and also incidental findings on CT without increased 18F-choline uptake were recorded. Each finding was correlated with follow-up data from the electronic medical records. Results A total of 388 18F-choline PET/CTs were reviewed, with 247 incidental findings detected in 226 patients (58%): 82 18F-choline positive findings with corresponding pathology on CT, 16 without CT substrate, and 149 18F-choline negative abnormalities on CT. Malignant lesions were detected in 10/388 patients (2.6%). Of all 98 detected 18F-choline positive lesions, 15 were malignant (15.3%), concerning 4 metastases and 11 primary malignancies: breast carcinoma (n = 7), lung carcinoma (n = 2), thyroid carcinoma (n = 1), and skin melanoma (n = 1). Conclusion Clinically relevant incidental findings were observed in a substantial number of patients. In 15.3% of the incidental 18F-choline positive findings, the lesions were malignant. These data contribute to better knowledge of 18F-choline distribution, enhance interpretation of 18F-choline PET/CT, and guide follow-up of incidental findings. Attention should especially be paid to breast lesions in this particular patient group with hyperparathyroidism in which women are typically over-represented.
abstract_id: PUBMED:27372808 In vivo spatial correlation between (18)F-BPA and (18)F-FDG uptakes in head and neck cancer. Background And Purpose: Borono-2-(18)F-fluoro-phenylalanine ((18)F-BPA) has been used to estimate the therapeutic effects of boron neutron capture therapy (BNCT), while (18)F-fluorodeoxyglucose ((18)F-FDG) is the most commonly used positron emission tomography (PET) radiopharmaceutical in routine clinical use. The aim of the present study was to evaluate the spatial correlation between (18)F-BPA and (18)F-FDG uptakes using a deformable image registration-based technique. Material And Methods: Ten patients with head and neck cancer were recruited from January 2014 to December 2014. All patients underwent whole-body (18)F-BPA PET/computed tomography (CT) and (18)F-FDG PET/CT within a 2-week period. For each patient, (18)F-BPA PET/CT and (18)F-FDG PET/CT images were aligned based on a deformable image registration framework. The voxel-by-voxel spatial correlation of standardized uptake value (SUV) within the tumor was analyzed. Results: Our image processing framework achieved accurate and validated registration results for each PET/CT image. In 9/10 patients, the spatial distribution of SUVs between (18)F-BPA and (18)F-FDG showed a significant, positive correlation in the tumor volume. Conclusions: Deformable image registration-based voxel-wise analysis demonstrated a spatial correlation between (18)F-BPA and (18)F-FDG uptakes in head and neck cancer. A tumor sub-volume with a high (18)F-FDG uptake may predict high accumulation of (18)F-BPA. abstract_id: PUBMED:24560597 Interpretation of thyroid incidentalomas in (18)F-FDG PET/CT studies. Objective: Thyroid findings or incidentalomas in (18)F-FDG PET/CT studies are relatively frequent, although their clinical significance is a subject of controversy. The aim of this study was to show our experience in the detection of thyroid incidentalomas by PET/CT studies as well as their follow-up. Material And Methods: A retrospective and descriptive review was conducted on patients who had thyroid incidentalomas detected in (18)F-FDG PET/CT studies between June 2010 and March 2013. Patients' medical records were reviewed for age, gender, maximum standardized uptake value (SUVmax), thyroid diseases, TSH and antithyroid antibodies levels, ultrasound, fine-needle aspiration (FNA) and cytology. Results: 4085 PET/CT studies for several purposes were performed. Eighty-three of these studies (2.03%) showed thyroid incidentalomas. Thirty-seven patients showed a diffuse increase of glucose metabolism in the thyroid gland and 46 showed a focal increase of glucose metabolism. Five out of 46 patients with focal uptake were diagnosed with a neoplastic disease by cytology (11%). The SUVmax of malignant pathology did not differ from that of benign thyroid diseases (mean: 10.26 and 5.92, respectively). Conclusion: In our experience, focal thyroid incidentalomas detected in (18)F-FDG PET/CT studies are related to a significant risk of malignancy (11%). Therefore, in these situations, an ultrasound study with fine needle biopsy should be recommended. Moreover, a diffuse increase of glucose metabolism in the thyroid gland is often associated with benign thyroid pathology. abstract_id: PUBMED:36060084 Comparison of the Sensitivity of 68 Ga-DOTATATE PET/CT with Other Imaging Modalities in Detecting Head and Neck Paraganglioma: Experience from Western India.
Background This study aimed to compare the sensitivity of 68 Ga-DOTATATE positron emission tomography/computed tomography (PET/CT) with other imaging modalities in the detection of head and neck paraganglioma (HNPGL). Methods The data of consecutive HNPGL patients (n = 34) who had undergone at least 68 Ga-DOTATATE PET/CT and anatomical imaging (contrast-enhanced computed tomography/magnetic resonance imaging [CECT/MRI]) were retrospectively reviewed. The diagnosis of HNPGL (the primary tumor) was confirmed either by histopathology (n = 10) or was based on clinical follow-up and correlation of anatomical with functional imaging in whom histopathology was not available (n = 24). The sensitivities of 68 Ga-DOTATATE PET/CT, 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG-PET/CT), 131I-metaiodobenzylguanidine (131I-MIBG) scintigraphy, and CECT/MRI for primary HNPGL, associated primary pheochromocytoma + sympathetic paraganglioma (PCC + sPGL), and metastatic lesions were analyzed. Results Thirty-four patients (males: 15) [isolated HNPGL: 26, HNPGL + PCC: 04, HNPGL + sPGL: 03, HNPGL + PCC + sPGL: 01] harboring 50 primary lesions were included. For total lesions, 68 Ga-DOTATATE PET/CT (99.3%) had significantly higher lesion-wise sensitivity than 18F-FDG PET/CT (81.6%, p = 0.0164), 131I-MIBG (15.2%, p ≤ 0.0001), CECT (46.3%, p ≤ 0.0001) but similar sensitivity as MRI neck (97%, p = 0.79). On head-to-head comparison (21 primary HNPGL and 39 metastatic lesions), 68 Ga-DOTATATE PET/CT had significantly higher lesion-wise sensitivities for the detection of metastatic (100 vs. 71.9%, p = 0.04) and total lesions (100 vs. 77.2%, p ≤ 0.0001). Conclusion 68 Ga-DOTATATE PET/CT was the most sensitive imaging modality for the detection of HNPGL and related lesions with significantly higher lesion-wise sensitivities than those of 18F-FDG PET/CT, 131I-MIBG, and CECT. Answer: The retrospective study by PUBMED:19305995 aimed to determine whether increased/asymmetric FDG uptake on PET without a correlating morphological lesion on fully diagnostic CT indicates the development of a head and neck malignancy. The study involved 590 patients without a known head and neck malignancy/inflammation and measured FDG uptake at various sites in the head and neck region. The patients were followed up for an average of about 2.5 years to determine whether they developed a head and neck malignancy. The results showed that 40% of patients had no evidence of enhanced FDG uptake, while 60% showed qualitatively elevated FDG uptake in at least one site. However, only two patients developed a head and neck malignancy during the follow-up period. One patient developed a palatine tonsil carcinoma, and another developed an oral floor carcinoma. Both cases had symmetric FDG uptake (group B1) with SUV(max) values of 3.2 and 3.7, respectively. The study concluded that elevated/asymmetric head and neck FDG accumulation without a correlating morphological lesion is frequently found and does not predict cancer development. In populations where goitre is endemic, FDG uptake by the thyroid is common and not associated with thyroid cancer. Therefore, incidental head and neck (18)F-FDG uptake on PET/CT without corresponding morphological lesion is not an early predictor of cancer development according to the findings of this study.
Instruction: Does medical certification of workers with injuries influence patterns of health service use? Abstracts: abstract_id: PUBMED:27286076 Does medical certification of workers with injuries influence patterns of health service use? Background: Among workers with injuries who seek compensation, a general practitioner (GP) usually plays an important role in a person's return to work (RTW) by advising whether the worker is unfit for work (UFW), is able to work on alternate (ALT) duties, or is fit for work, and by providing referrals to other health service providers. Objective: To examine patterns of health service utilization (HSU) in workers with injuries by condition and type of certificate issued by the GP. Methods: Zero-inflated negative binomial and logistic regressions were conducted for major healthcare services accessed over the 12-month period post-initial medical examination. Services included GP consultations, pharmacy, physiotherapy, occupational rehabilitation and psychology. Results: The average number of physiotherapy services was greater in workers with musculoskeletal disorders, back pain and fractures. In contrast, the median number of psychological services was greater in mental health conditions (MHC). Workers with ALT certificates were more likely to use GPs, pharmacy and physiotherapy services. Conclusion: HSU in the 12 months post-initial medical certification varied substantially according to the worker's condition, certificate type, age, gender and residential location. Understanding these factors can facilitate more appropriate resource allocation and strategic thinking on the optimal use of particular health services, and enables better targeting of particular provider groups for more education on the health benefits of RTW. abstract_id: PUBMED:24099209 Sickness certification of workers compensation claimants by general practitioners in Victoria, 2003-2010. Objective: To examine patterns of the sickness certification of workers compensation claimants by general practitioners in Victoria, Australia, by nature of injury or illness. Design, Setting And Patients: Retrospective analysis of Victorian workers compensation data for all injured and ill workers with an accepted workers compensation claim between 2003 and 2010. Main Outcome Measures: Type (unfit for work, alternative duties, or fit for work) and duration of initial medical certificates relating to workers compensation claims that were issued by GPs, in six categories of injury and illness. Results: Of 124,424 initial medical certificates issued by GPs, 74.1% recommended that workers were unfit for work and 22.8% recommended alternative duties. Unfit-for-work certificates were issued to 94.1% of workers with mental health conditions, 81.3% of those with fractures, 79.1% of those with other traumatic injuries, 77.6% of those with back pain and strains, 68.0% of those with musculoskeletal conditions and 53.0% of those with other diseases. Alternative-duties certificates were significantly longer in duration than unfit-for-work certificates in all injury and illness categories (P < 0.001), but certificates for workers with musculoskeletal injuries and diseases, back pain and strains and other traumatic injuries were of lesser duration than those for workers with fractures, mental health conditions and other diseases.
Conclusion: The high proportion of medical certificates recommending complete absence from work presents major challenges in terms of return to work, labour force productivity, the viability of the compensation system, and long-term social and economic development. There is substantial variation in the type and duration of medical certificates issued by GPs. People with mental health conditions are unlikely to receive a certificate recommending alternative duties. Further research is required to understand GP certification behaviour. abstract_id: PUBMED:34495446 Timing of Health Service Use Among Truck Drivers After a Work-Related Injury or Illness. Purposes Timely delivery of treatment and rehabilitation is generally acknowledged to support injury recovery. This study aimed to describe the timing of health service use by injured truck drivers with work-related injury and to explore the association between demographic and injury factors and the duration of health service use. Methods Retrospective cohort study of injured truck drivers with accepted workers' compensation claims in the state of Victoria, Australia. Descriptive analyses examined the percentage of injured truck drivers using health services by service type. Logistic regression model examined predictors of any service use versus no service use, and predictors of extended service use (≥ 52 weeks) versus short-term use. Results The timing of health service use by injured truck drivers with accepted workers' compensation claims varies substantially by service type. General practitioner, specialist physician, and physical therapy service use peaks within the 14 weeks after compensation claim lodgement, whilst the majority of mental health services were accessed in the persistent phase beyond 14 weeks after claim lodgement. Older age, being employed by small companies, and claiming compensation for mental health conditions were associated with greater duration of health service use. Conclusions Injured truck drivers access a wide range of health services during the recovery and return to work process. Delivery of mental health services is delayed, including for those making mental health compensation claims. Health service planning should take into account worker and employer characteristics in addition to injury type. abstract_id: PUBMED:31725184 Patterns of health service use following work-related injury and illness in Australian truck drivers: A latent class analysis. Objectives: To identify patterns of health service use (HSU) in truck drivers with work-related injury or illness and to identify demographic and work-related factors associated with patterns of care. Method: All accepted workers' compensation claims from truck drivers lodged between 2004 and 2013 in Victoria were included. Episodes of HSU were categorised according to practitioner type. Latent class analysis was used to identify the distinct profiles of users with different patterns of HSU. Multinomial logistic regression was used to examine the associations between latent class and predictors. 
Results: Four profiles of HSU were identified: (a) Low Service Users (55% of the sample) were more likely to be younger, have an injury that did not result in time off work and have conditions other than a musculoskeletal injury; (b) High Service Users (10%) tended to be those aged between 45 and 64 years, living in major cities with musculoskeletal conditions that resulted in time off work; (c) Physical Therapy Users (25%) were more likely to be aged between 45 and 64 years, live in major cities and have nontraumatic injuries that resulted in time off work; and (d) GP/Mental Health Users (10%) were more likely to be over 24 years of age, from the lowest socioeconomic band, be employed by smaller organizations and be claiming benefits for a mental health condition. Conclusions: This study identified distinct categories of HSU among truck drivers following work-related injury. The results can be used to prioritize occupational health and safety promotion to maintain a healthy truck driver workforce. abstract_id: PUBMED:36684010 Trajectories of medical service use among girls and boys with and without early-onset conduct problems. Background: Children with conduct problems (CP) have been found to be heavy and costly medical service users in adulthood. However, there is little knowledge on how medical service use develops during childhood and adolescence among youth with and without childhood CP. Knowing whether differences in developmental trajectories of medical service use for specific types of problems (e.g., injuries) are predicted by childhood CP would help clinicians identify developmental periods during which they might intensify interventions for young people with CP in order to prevent later problems and associated increased service use. Methods: Participants were drawn from an ongoing longitudinal study of boys and girls with and without childhood CP as rated by parents and teachers. Medical service use was assessed using administrative data from a public single payer health plan. Latent growth modeling was used to estimate the mean trajectory of four types of medical visits (psychiatric, injury-related, preventative, total visits) across time and evaluate the effect of CP and other covariates. Results: The results support the hypothesis that early CP predicts higher medical service use at nine years old, and that this difference persists in a chronic manner over time, even when controlling for the effects of ADHD and family income. Girls had fewer medical visits for psychiatric reasons than boys at baseline, but this difference diminished over time. Conclusions: Clinicians should be aware that childhood CP already predicts increased medical service use in elementary school. Issues specific to different contexts in which injuries might occur and sex differences are discussed. abstract_id: PUBMED:36924662 Driven by need, shaped by access: Heterogeneity in patient profiles and patterns of service utilization in patients with alcohol use disorders. Background: Patients with alcohol-use disorders (AUDs) are highly heterogeneous and account for an increasing proportion of general medical hospital visits. However, many patients with AUDs do not present with severe medical or psychiatric needs requiring immediate attention. There may be a mismatch between some patients' needs and the available services, potentially driving re-admissions and re-encounters. The current study aims to identify subgroups of AUD patients and predict differences in patterns of healthcare service use (HSU) over time.
Methods: Latent class analysis (LCA) was conducted using hospital data incorporating sociodemographic, health behavior, clinical, and service use variables to identify subtypes of AUD patients; class membership was then used to predict patterns of HSU. Results: Four classes were identified with the following characteristics: (1) Patients with acute medical injuries (30%); (2) Patients with socioeconomic and psychiatric risk factors (11%); (3) Patients with chronic AUD with primarily non-psychiatric medical needs (18%); and (4) Patients with primary AUDs with low medical-treatment complexity (40%). Negative binomial models showed that Class 4 patients accounted for the highest frequency of service use, including significantly higher rates of emergency department reencounters at 30 days and 12 months. Conclusions: The profile and patterns of HSU exhibited by patients in class 4 suggest that these patients have needs which are not currently being addressed in the emergency department. These findings have implications for how resources are allocated to meet the needs of patients with AUDs, including those who make frequent visits to the emergency department without high-acuity medical needs. abstract_id: PUBMED:31523422 Multi-objective semi-supervised clustering to identify health service patterns for injured patients. Purpose: This study develops a pattern recognition method that identifies patterns based on their similarity and their association with the outcome of interest. The practical purpose of developing this pattern recognition method is to group patients injured in transport accidents in the early stages post-injury. This grouping is based on distinctive patterns in health service use within the first week post-injury. The groups also provide predictive information about the total cost of the medication process. As a result, the group of patients who have undesirable outcomes can be identified as early as possible based on health service use patterns. Methods: We propose a multi-objective optimization model to group patients. One objective function is the cost function of k-medians clustering, which recognizes similar patterns. Another objective function is the cross-validated root-mean-square error, which examines the association with the total cost. The best grouping is obtained by minimizing both objective functions. As a result, the multi-objective optimization model is a semi-supervised clustering method that learns health service use patterns in both unsupervised and supervised ways. We also introduce an evolutionary computation approach that includes stochastic gradient descent and Pareto optimal solutions to find the optimal solution. In addition, we use the decision tree method to reproduce the optimal groups using an interpretable classification model. Results: The results show that the proposed multi-objective semi-supervised clustering identifies distinct groups of health service use and contributes to predicting the total cost. The performance of the multi-objective model has been examined using two metrics: the average silhouette width and the cross-validation error. This examination shows that the multi-objective model outperforms the single-objective ones. In addition, the interpretable classification model shows that imaging and therapeutic services are critical services in the first week post-injury for grouping injured patients.
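The bi-objective idea described in this abstract — trading off a clustering cost against the cross-validated error of predicting the outcome from cluster membership — can be illustrated with a simplified sketch. This is not the authors' implementation: it substitutes k-means for k-medians, scans a plain grid of candidate cluster counts instead of running an evolutionary search, and uses hypothetical arrays X (first-week service-use features) and y (total cost).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def evaluate_partition(X, y, k, seed=0):
    """Return (clustering cost, CV prediction error, labels) for a k-cluster partition."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    cost = km.inertia_                              # objective 1: within-cluster dispersion
    Z = np.eye(k)[km.labels_]                       # one-hot cluster membership
    rmse = -cross_val_score(LinearRegression(), Z, y,
                            scoring="neg_root_mean_squared_error", cv=5).mean()
    return cost, rmse, km.labels_                   # objective 2: CV error predicting the outcome

def pareto_indices(points):
    """Indices of non-dominated (obj1, obj2) pairs, both objectives minimized."""
    keep = []
    for i, (a1, a2) in enumerate(points):
        dominated = any((b1 <= a1 and b2 <= a2) and (b1 < a1 or b2 < a2)
                        for j, (b1, b2) in enumerate(points) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical data: service-use counts in the first week and a continuous total cost.
rng = np.random.default_rng(1)
X = rng.poisson(2.0, size=(300, 6)).astype(float)
y = X @ rng.uniform(50, 400, size=6) + rng.normal(0, 100, size=300)

ks = list(range(2, 9))
candidates = [evaluate_partition(X, y, k)[:2] for k in ks]
front = pareto_indices(candidates)
print("Pareto-optimal cluster counts:", [ks[i] for i in front])
```

Solutions on the Pareto front are those for which neither the clustering cost nor the cross-validated prediction error can be improved without worsening the other, which is the sense in which the grouping is "semi-supervised".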
Conclusion: The proposed multi-objective semi-supervised clustering finds optimal clusters that are not only well separated from each other but also provide informative insights regarding the outcome of interest. It also overcomes two drawbacks of clustering methods: sensitivity to the initial cluster centers and the need to specify the number of clusters. abstract_id: PUBMED:27048576 Factors associated with sickness certification of injured workers by General Practitioners in Victoria, Australia. Background: Work-related injuries resulting in long-term sickness certification can have serious consequences for injured workers, their families, society, compensation schemes, employers and healthcare service providers. The aim of this study was to establish which factors are potentially associated with the type of sickness certification that General Practitioners (GPs) provide to injured workers following work-related injury in Victoria, Australia. Methods: This was a retrospective population-based cohort study of compensation claims lodged by adults from 2003 to 2010. A logistic regression analysis was performed to assess the impact of various factors on the likelihood that an injured worker would receive an alternate/modified-duties (ALT, n = 28,174) vs. unfit-for-work (UFW, n = 91,726) certificate from their GP. Results: A total of 119,900 claims were analysed. The majority of the injured workers were males, mostly aged 45-54 years. Nearly half of the workers (49.9%) with UFW and 36.9% with ALT certificates had musculoskeletal injuries. The multivariate regression analysis revealed that for most occupations older men (55-64 years) were less likely to receive an ALT certificate (OR = 0.86, 95% CI 0.81-0.91). Workers suffering musculoskeletal injuries or occupational diseases had nearly two to three times higher odds of receiving an ALT certificate compared with those with fractures. Being seen by a GP experienced with workers' compensation increased the odds of receiving an ALT certificate (OR = 1.16, 95% CI 1.11-1.20). Occupation and industry types were also important factors determining the type of certificate issued to the injured worker. Conclusions: This study suggests that specific groups of injured workers (i.e., older workers, workers with mental health issues, and those in rural areas) are less likely to receive ALT certificates. abstract_id: PUBMED:26275607 General practitioners and sickness certification for injury in Australia. Background: Strong evidence supports an early return to work after injury as a way to improve recovery. In Australia, General Practitioners (GPs) see about 96% of injured workers, making them the main gatekeepers to workers' entitlements. Most people with compensable injuries in Australia are certified as "unfit to work" by their GP, with a minority of patients certified for modified work duties. The reasons for this apparent dissonance between evidence and practice remain unexplored. Little is known about the factors that influence GP sickness certification behaviour in Australia. The aim of this study is to describe the factors influencing Australian GPs' certification practice through qualitative interviews with four key stakeholders. Methods: From September to December 2012, 93 semi-structured interviews were undertaken in Melbourne, Australia. Participants included GPs, injured workers, employers and compensation agents. Data were thematically analysed.
Results: Five themes describing factors influencing GP certification were identified: 1. Divergent stakeholder views about the GP's role in facilitating return to work; 2. Communication between the four stakeholder groups; 3. Conflict between the stakeholder groups; 4. Allegations of GPs and injured workers misusing the compensation system; and 5. The layout and content of the sickness certificate itself. Conclusion: By exploring GP certification practice from the perspectives of four key stakeholders, this study suggests that certification is an administrative and clinical task underpinned by a host of social and systemic factors. The findings highlight opportunities such as practice guideline development and improvements to the sickness certificate itself that may be targeted to improve GP sickness certification behaviour and return to work outcomes in an Australian context. abstract_id: PUBMED:36064341 Pre-injury health status of truck drivers with a workers' compensation claim. Truck drivers are a vulnerable population due to the high number of workplace injuries and fatalities predominant in their occupation. In Australia, the road freight transportation industry has been identified as a national priority area in terms of creating preventative measures to improve the health and safety of its workers. With an environment conducive to poor nutritional food choices and unhealthy lifestyle behaviours, many barriers exist to creating a safe and healthy workforce. Thus, the current study aimed to describe the pre-injury hospital-recorded health conditions and health service use of truck drivers with a workers' injury compensation claim(s) when compared to workers in other industries. Data were obtained from a compensation claims database and linked with hospital admissions data recorded five years prior to the injury claim. Health and lifestyle behaviour data for the occupational code of truck drivers were compared to other occupational drivers, as well as to all other occupations. Analysis was conducted via logistic regression. The results found that when compared to other occupational drivers, truck drivers were significantly more likely to have a hospital-recorded diagnosis of diabetes and/or hypertension, as well as being significantly more likely to have a hospital record of tobacco use and/or alcohol misuse/abuse. The findings show that there is a need to review and revise existing health strategies to promote the health and wellbeing of truck drivers, especially given their challenging work environment. Answer: Yes, medical certification of workers with injuries does influence patterns of health service use. Studies have shown that the type of certificate issued by a general practitioner (GP) can affect the health service utilization (HSU) of injured workers. For instance, workers with musculoskeletal disorders, back pain, and fractures tend to have a higher average number of physiotherapy services, while those with mental health conditions have a greater median number of psychological services. Workers who receive certificates for alternate duties (ALT) are more likely to use GP, pharmacy, and physiotherapy services (PUBMED:27286076). The type and duration of initial medical certificates issued by GPs also vary significantly by the nature of injury or illness. For example, unfit-for-work certificates are more commonly issued to workers with mental health conditions, fractures, and other traumatic injuries.
Certificates recommending alternative duties are generally longer in duration than unfit-for-work certificates across all injury and illness categories (PUBMED:24099209). Moreover, the timing of health service use among injured workers, such as truck drivers, can vary substantially by service type. General practitioner, specialist physician, and physical therapy services are often used within the first 14 weeks after a compensation claim is lodged, while mental health services are accessed more in the persistent phase beyond 14 weeks (PUBMED:34495446). Patterns of HSU among truck drivers following work-related injury also show distinct categories, with some drivers being high service users, particularly those with musculoskeletal conditions that resulted in time off work, and others being low service users, often younger and with injuries not resulting in time off work (PUBMED:31725184). In summary, medical certification by GPs following work-related injuries influences the patterns and timing of health service use among injured workers, with variations observed based on the type of injury, the nature of the certification, and demographic factors.
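Several of the abstracts in this entry model service-use counts with zero-inflated negative binomial regression (PUBMED:27286076) or related count models. A minimal sketch of that approach follows; it is illustrative only, assumes a hypothetical claims table with made-up column names (gp_visits, alt_certificate, age, female), and uses statsmodels' ZeroInflatedNegativeBinomialP as one available implementation rather than the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

# Hypothetical claims data: one row per injured worker, with many zero counts.
rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "gp_visits": rng.negative_binomial(2, 0.3, n) * rng.binomial(1, 0.7, n),
    "alt_certificate": rng.binomial(1, 0.25, n),   # 1 = alternate-duties certificate
    "age": rng.integers(20, 65, n),
    "female": rng.binomial(1, 0.4, n),
})

X = sm.add_constant(df[["alt_certificate", "age", "female"]])
zinb = ZeroInflatedNegativeBinomialP(df["gp_visits"], X, exog_infl=X, p=2)
res = zinb.fit(maxiter=500, disp=False)
print(res.summary())
# exp(coef) in the count part is read as a rate ratio for the number of services used;
# the inflation part models the probability of using no services at all.
```

The zero-inflation component is what distinguishes this from an ordinary negative binomial model: it allows a separate process to generate the excess of workers who use no services during follow-up.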
Instruction: Are occupational factors important determinants of socioeconomic inequalities in musculoskeletal pain? Abstracts: abstract_id: PUBMED:18815713 Are occupational factors important determinants of socioeconomic inequalities in musculoskeletal pain? Objectives: The aim of this study was to quantify socioeconomic inequalities in low-back pain, neck-shoulder pain, and arm pain in the general working population in Oslo and to examine the impact of job characteristics on these inequalities. Methods: All economically active 30-, 40-, and 45-year-old persons who attended the Oslo health study in 2000-2001 and answered questions on physical job demands, job autonomy, and musculoskeletal pain were included (N=7293). Occupational class was used as an indicator of socioeconomic status. The lower occupational classes were compared with higher grade professionals, and prevalences, prevalence ratios, prevalence differences, and population attributable fractions were calculated. Results: There were marked, stepwise socioeconomic gradients for musculoskeletal pain, steeper for the men than for the women. The relative differences (prevalence ratios) were larger for low-back pain and arm pain than for neck-shoulder pain. The absolute differences (prevalence differences) were the largest for low-back pain. Physical job demands explained a substantial proportion of the absolute occupational class inequalities in low-back pain, while job autonomy was more important in explaining the inequalities in neck-shoulder pain and arm pain. The estimated population attributable fractions supported the impact of job characteristics at the working population level, especially for low-back pain. Conclusions: In this cross-sectional study, physical job demands and job autonomy explained a substantial proportion of occupational class inequalities in self-reported musculoskeletal pain in the working population in Oslo. This finding indicates that the workplace may be an important arena for preventive efforts to reduce socioeconomic inequalities in musculoskeletal pain. abstract_id: PUBMED:22244269 Gender inequalities in occupational health in Spain Objectives: To analyze gender inequalities in employment and working conditions, the work-life balance, and work-related health problems in a sample of the employed population in Spain in 2007, taking into account social class and the economic sector. Methods: Gender inequalities were analyzed by applying 25 indicators to the 11,054 workers interviewed for the VI edition of the National Working Conditions Survey. Multivariate logistic regression models were used to calculate odds ratios (OR) and 95% confidence intervals (95% CI), stratifying by occupational social class and economic sector. Results: More women than men worked without a contract (OR=1.83; 95% CI: 1.51-2.21) and under high-effort/low-reward conditions (1.14:1.05-1.25). Women also experienced more sexual harassment (2.85:1.75-4.62), discrimination (1.60:1.26-2.03) and musculoskeletal pain (1.38:1.19-1.59). More men than women carried out shift work (0.86:0.79-0.94), with high noise levels (0.34:0.30-0.40), and high physical demands (0.58:0.54-0.63). Men also suffered more injuries due to occupational accidents (0.67:0.59-0.76). Women white-collar-workers were more likely than their male counterparts to have a temporary contract (1.34:1.09-1.63), be exposed to psychosocial hazards and discrimination (2.47:1.49-4.09) and have occupational diseases (1.91:1.28-2.83). 
Gender inequalities were higher in the industry sector. Conclusions: There are substantial gender inequalities in employment, working conditions, and work-related health problems in Spain. These gender inequalities are influenced by social class and the economic sector, and should be considered in the design of public policies in occupational health. abstract_id: PUBMED:24009006 Musculoskeletal pain in Europe: the role of personal, occupational, and social risk factors. Objectives: The prevalence of musculoskeletal pain in European countries varies considerably. We analyzed data from the fifth European Working Conditions Survey (EWCS) to explore the role of personal, occupational, and social risk factors in determining the national prevalence of musculoskeletal pain. Methods: Over the course of 2010, 43 816 subjects from 34 countries were interviewed. We analyzed the one-year prevalence of back and neck/upper-limb pain. Individual-level risk factors studied included: sex; age; educational level; socioeconomic status; housework or cooking; gardening and repairs; somatizing tendency; job demand-control; six physical occupational exposures; and occupational group. Data on national socioeconomic variables were obtained from Eurostat and were available for 28 countries. We fitted Poisson regression models with random intercept by country. Results: The main analysis comprised 35 550 workers. Among individual-level risk factors, somatizing tendency was the strongest predictor of the symptoms. Major differences were observed by country with back pain more than twice as common in Portugal (63.8%) than Ireland (25.7%), and prevalence rates of neck/upper-limb pain ranging from 26.6% in Ireland to 67.7% in Finland. Adjustment for individual-level risk factors slightly reduced the large variation in prevalence between countries. For back pain, the rates were more homogenous after adjustment for national socioeconomic variables. Conclusions: Our analysis indicates substantial variation between European countries in the prevalence of back and neck/upper-limb pain. This variation is unexplained by established individual risk factors. It may be attributable in part to socioeconomic differences between countries, with higher prevalence where there is less risk of poverty or social exclusion. abstract_id: PUBMED:25921484 Social position modifies the association between severe shoulder/arm and knee/leg pain, and quality of life after retirement. Purpose: Musculoskeletal disorders are extremely frequent and account for an important part of the global burden of disease. Risk factors for musculoskeletal disorders include sustained occupational exposure to physically demanding jobs. The effects of sustained occupational physical exposures on knee and shoulder pain are known to persist after retirement; also, several studies have shown a socio-economic gradient in health and quality-of-life outcomes, including for musculoskeletal pain. It is thus possible that prolonged occupational exposures affect workers differently in the long-term along a socio-economic gradient. This study was conducted to investigate whether the impacts of severe shoulder/arm and knee/leg pain on the quality of life of retired workers follow a socio-economic gradient. 
Methods: Data from the French GAZEL cohort study (n = 14,249) were used to compare the impacts of severe shoulder/arm and knee/leg pain separately on the SF-36, Nottingham Health Profile and limitations in activities of daily living measured in 2006 and 2007, between four groups of social position (measured in 1989). Analyses were performed in 2014 with multiple linear and logistic regressions and stratified by sex. Results: For both pain sites, in men and women, there was a strong general tendency for the impacts of severe pain to be smaller among participants in higher social positions. The most important differences were related to pain and physical limitations. Conclusions: These results suggest inequalities in the impacts of severe joint pain by socio-economic status. The source of these inequalities is still speculative and merits scientific attention. abstract_id: PUBMED:16931462 Socioeconomic position and variations in coping strategies in musculoskeletal pain: a cross-sectional study of 1,287 40- and 50-year-old men and women. Objective: To examine the association between socioeconomic position and coping strategies in musculoskeletal pain. Design And Subjects: Cross-sectional study of a random sample of 40- and 50-year-old Danes, participation rate 69%, n=7,125. The study included 1,287 persons who reported functional limitations due to musculoskeletal pain. Methods: Data were collected by postal questionnaires, and scales were developed on problem-solving coping and avoidant coping, based on a range of preliminary studies. Multivariate logistic regression analyses were used to study the correlation with socioeconomic position, measured by occupational social class. Results: Among women, there was no correlation between social class and avoidant coping, but a significant decrease in the use of problem-solving coping by decreasing social class, adjusted odds ratio (OR) = 2.64 (95% confidence interval (CI) 1.31-5.32) in social class V vs social classes I + II. Among men, there was no correlation between social class and problem-solving coping, but a significant increase in the use of avoidant coping with decreasing social class, adjusted OR = 3.31 (95% CI 1.75-6.25) in V vs I + II. Conclusion: It is important for clinicians who advise and support patients in their response to musculoskeletal pain to be aware of socioeconomic differences in coping strategies. Gender differences in the association between socioeconomic factors and coping should be further investigated. abstract_id: PUBMED:23064208 Work health determinants in employees without sickness absence. Background: Working ability is known to be related to good physical condition, clear work tasks, positive feedback and other occupational, organizational and psychosocial factors. In Sweden, high levels of sickness absence are due to stress-related disorders and musculoskeletal pain. Aims: To identify work health characteristics in a working population with a large variety of professional skills and occupational tasks. Methods: Employers' data on occupation, sickness absence, age and gender in a working population of 11 occupational groups and questionnaire responses regarding work organization, environment, work stress, pain, health, and socio-demographic factors were collected. Employees with no history of sick-leave were compared with those with a history of sick-leave (1-182 days, mean 25 days). Results: Of 2641 employees, 1961 participated.
Those with no history of sick-leave reported less work-related pain, work-related stress, sleep disturbances, worry about their health, 'sick-presenteeism', monotonous work, bent and twisted working positions and exposure to disturbing noise than those with a history of sick-leave (P < 0.001). They also reported better health, support from superiors, having influence on their working hours and evening and weekend working, longer working hours per week (P < 0.001) and more regular physical training (P < 0.01). Socio-demographic factors were less important than gender, and differences in responses between occupational groups were also found. Conclusions: Workers without a history of sick-leave experienced less stress, sleep disturbances, worry about their own health and less neck, shoulder and back pain and more support from their superiors and influence on their working hours. abstract_id: PUBMED:21692098 Exploring the interplay between work stress and socioeconomic position in relation to common health complaints: the role of interaction. Background: This study explored the interplay between work stress and socioeconomic position and investigated whether the interaction of work stress and low socioeconomic position is associated with poorer health. Methods: A representative sample of the Swedish working population, including 2,613 employees (48.7% women) aged 19-64 years, was analyzed. The health outcomes were poor self-rated health, psychological distress, and musculoskeletal pain. Work stress was operationalized as job strain and effort-reward imbalance, and socioeconomic position as occupational class. Interaction analysis was based on departure from additivity as the criterion, and a synergy index (SI) was applied, using odds ratios (ORs) from logistic regressions for women and men. Results/conclusions: In fully adjusted models, work stress, and to a lesser extent also socioeconomic position, was associated with higher odds for the three health complaints. The prevalence of poorer health was highest among those individuals jointly exposed to high work stress and low occupational class, with ORs ranging from 1.94 to 6.77 (95% CI 1.01-18.65) for poor self-rated health, 2.42-8.44 (95% CI 1.28-27.06) for psychological distress and 1.93-3.93 (95% CI 1.11-6.78) for musculoskeletal pain. The joint influence of work stress and low socioeconomic position on health was additive rather than multiplicative. abstract_id: PUBMED:27789068 Musculoskeletal pain at various anatomical sites and socioeconomic position: Results of a national survey. Background: Prevalence of musculoskeletal pain according to sites of pain and associated factors in the community has not been thoroughly documented. The association between pain and socioeconomic position has been studied by several authors, but without details in most studies regarding sites of pain, whereas the relations with social position could differ according to the site of pain. The objective of this study was to explore these differences in the community in France. Methods: The national Health and Occupational History survey was conducted in France in 2006 in subjects aged 20-74 years. Self-assessment of pain at various sites in the previous year was recorded. Five sites were considered here: back, neck, shoulder, upper limb, and lower limb.
After a description of prevalence according to gender and age, the associations with socioeconomic position at the beginning of the subjects' working life, in seven categories, were studied with logistic models adjusted for age. The analyses were limited to those aged 30-74 years and were conducted separately for men and women. Results: Of the 5520 males and 6643 females studied, prevalence was the highest for back pain (35% for males, 37% for females). Pain was globally more frequent for women. For all sites of pain, an increase with age was significant for women. This was not observed in men for back pain (highest prevalence in the 40- to 49-year-old age group) or neck pain. Overall, prevalence of pain was the lowest for professionals (reference category in the analyses). For males, the first occupation as a farmer or blue-collar worker was associated with an increased prevalence for most sites of pain, with odds ratios close to 2. For females, prevalence was increased for more socioeconomic categories, as compared to professionals. Among the five sites, neck pain was an exception: for both men and women, no association was observed between neck pain and socioeconomic position. Conclusion: Although exploratory, these results are consistent with the available knowledge on occupational and personal risk factors for pain, which differ according to the site of pain. Other studies are needed to better understand the causal mechanisms underlying the associations observed. abstract_id: PUBMED:29175868 Do socioeconomic inequalities in pain, psychological distress and oral health increase or decrease over the life course? Evidence from Sweden over 43 years of follow-up. Background: Inequalities over the life course may increase due to accumulation of disadvantage or may decrease because ageing can work as a leveller. We report how absolute and relative socioeconomic inequalities in musculoskeletal pain, oral health and psychological distress evolve with ageing. Methods: Data were combined from two nationally representative Swedish panel studies: the Swedish Level-of-Living Survey and the Swedish Panel Study of Living Conditions of the Oldest Old. Individuals were followed up to 43 years in six waves (1968, 1974, 1981, 1991/1992, 2000/2002, 2010/2011) from five cohorts: 1906-1915 (n=899), 1925-1934 (n=906), 1944-1953 (n=1154), 1957-1966 (n=923) and 1970-1981 (n=1199). The participants were 15-62 years at baseline. Three self-reported outcomes were measured as dichotomous variables: teeth not in good condition, psychological distress and musculoskeletal pain. The fixed-income groups were: (A) never poor and (B) poor at least once in life. The relationship between ageing and the outcomes was smoothed with locally weighted ordinary least squares, and the relative and absolute gaps were calculated with Poisson regression using generalised estimating equations. Results: All outcomes were associated with ageing, birth cohort, sex and being poor at least once in life. Absolute inequalities increased up to the age of 45-64 years, and then they decreased. Relative inequalities were already large in individuals aged 15-25 years, showing a declining trend over the life course. Selective mortality did not change the results. The socioeconomic gap was larger for current poverty than for being poor at least once in life. Conclusion: Inequalities persist into very old age, though they are more salient in midlife for all three outcomes observed.
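The gap estimates described in the preceding abstract (PUBMED:29175868) rely on Poisson regression with generalised estimating equations over repeated survey waves. A minimal sketch of that kind of model follows, using statsmodels on a synthetic long-format panel; the column names (pain, age, poor_ever, pid) and the data are hypothetical and are not the authors' data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic long-format panel standing in for the real survey data.
rng = np.random.default_rng(3)
n_people, n_waves = 400, 4
pid = np.repeat(np.arange(n_people), n_waves)
age = np.tile(np.arange(n_waves) * 10, n_people) + rng.integers(15, 35, n_people).repeat(n_waves)
poor_ever = rng.binomial(1, 0.3, n_people).repeat(n_waves)
prob = 1 / (1 + np.exp(-(-2.0 + 0.02 * age + 0.5 * poor_ever)))
pain = rng.binomial(1, prob)
df = pd.DataFrame({"pid": pid, "age": age, "poor_ever": poor_ever, "pain": pain})

# Poisson GEE with exchangeable working correlation for repeated measures within persons.
model = smf.gee("pain ~ age + poor_ever", groups="pid", data=df,
                family=sm.families.Poisson(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())
# exp(coef) on poor_ever approximates the relative gap (prevalence ratio); an
# identity-link fit would instead target the absolute gap (prevalence difference).
```

Using a Poisson family with a binary outcome is a standard trick for estimating prevalence ratios directly, with the GEE machinery handling the within-person correlation across waves.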
abstract_id: PUBMED:17576763 Piecework, musculoskeletal pain and the impact of workplace psychosocial factors. Aim: To investigate the impact of piecework on musculoskeletal pain and general health, and the influence of perceived workplace psychosocial factors on any such associations, in a general UK population. Methods: A questionnaire was mailed to an age-stratified random sample of 10 000 adults aged 18-75 in North Staffordshire, UK. Respondents reporting a current main job were asked whether this job was paid by a piecework system. Health measures were (i) number of pain areas according to a body manikin and (ii) general health (SF-12v2). Other measures included questions on occupational history and psychosocial aspects of the work environment. Results: The adjusted response was 54%. A total of 1193 respondents reported a current main job, of whom 201 (17%) reported piecework. Pieceworkers were more likely to be older (P < 0.05), male (P < 0.001) and in lower socioeconomic groups (P < 0.001) than non-pieceworkers. Piecework was associated with perceptions of a poor psychosocial working environment, more pain areas (P < 0.05), more elbow (P < 0.01), forearm (P < 0.001) and hand pain (P < 0.05), and a lower physical health score (SF-12v2; P < 0.01), but no difference in mental health score (P = 0.60), compared with non-pieceworkers. After controlling for psychosocial factors and socioeconomic group, the associations between piecework and pain areas or physical health were no longer statistically significant. Conclusions: These results show that piecework was associated with poorer self-reported general physical health and more areas of pain, which may be attributed to low socioeconomic group, and workplace perceptions of little job control, high physical demand and little supervisor support. Answer: Yes, occupational factors are important determinants of socioeconomic inequalities in musculoskeletal pain. Studies have shown that job characteristics such as physical job demands and job autonomy can explain a substantial proportion of occupational class inequalities in self-reported musculoskeletal pain (PUBMED:18815713). For instance, physical job demands were found to be a significant factor in explaining the absolute occupational class inequalities in low-back pain, while job autonomy was more important in explaining inequalities in neck-shoulder pain and arm pain (PUBMED:18815713). Gender inequalities in employment and working conditions also contribute to work-related health problems, including musculoskeletal pain. Women are more likely to work without a contract, under high-effort/low-reward conditions, and experience more sexual harassment, discrimination, and musculoskeletal pain compared to men (PUBMED:22244269). Additionally, socioeconomic position modifies the association between severe shoulder/arm and knee/leg pain and quality of life after retirement, with a general tendency for the impacts of severe pain to be smaller among participants in higher social positions (PUBMED:25921484). Furthermore, socioeconomic position is associated with variations in coping strategies in musculoskeletal pain, with differences observed between men and women in the use of problem-solving and avoidant coping strategies (PUBMED:16931462).
Work stress and low socioeconomic position have been shown to have an additive influence on health, with higher odds for poor self-rated health, psychological distress, and musculoskeletal pain among those exposed to both high work stress and low occupational class (PUBMED:21692098). The prevalence of musculoskeletal pain at various anatomical sites also shows an association with socioeconomic position, with lower prevalence among professionals and higher prevalence among farmers or blue-collar workers, especially for men (PUBMED:27789068). Lastly, piecework, which is often associated with lower socioeconomic groups, has been linked to perceptions of a poor psychosocial working environment, more areas of pain, and lower physical health scores, although these associations may be attributed to low socioeconomic group and workplace psychosocial factors (PUBMED:17576763).
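The synergy index mentioned in PUBMED:21692098 quantifies whether joint exposure to two risk factors exceeds what additivity of their separate excess odds would predict. A minimal worked sketch of Rothman's synergy index computed from odds ratios follows; the numbers in the usage example are purely hypothetical and are not estimates from the study.

```python
def synergy_index(or_joint, or_a, or_b):
    """Rothman's synergy index: (OR_AB - 1) / ((OR_A - 1) + (OR_B - 1)).

    or_joint : odds ratio for joint exposure to both factors (vs. exposure to neither).
    or_a     : odds ratio for exposure to factor A only (vs. neither).
    or_b     : odds ratio for exposure to factor B only (vs. neither).
    SI > 1 suggests a more-than-additive (synergistic) joint effect;
    SI = 1 indicates exact additivity of the excess odds.
    """
    return (or_joint - 1.0) / ((or_a - 1.0) + (or_b - 1.0))

# Hypothetical illustration: joint OR of 3.9 with single-exposure ORs of 1.8
# (high work stress) and 1.5 (low occupational class).
print(round(synergy_index(3.9, 1.8, 1.5), 2))  # -> 2.23, i.e. more than additive
```

In practice the three odds ratios are taken from one logistic regression with a four-level joint exposure variable, and confidence intervals for the synergy index are usually obtained by a delta method or bootstrap.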
Instruction: Surgical procedures for posterior fossa tumors in children: does craniotomy lead to fewer complications than craniectomy? Abstracts: abstract_id: PUBMED:12405369 Surgical procedures for posterior fossa tumors in children: does craniotomy lead to fewer complications than craniectomy? Object: Traditionally, access to the posterior fossa involved a suboccipital craniectomy. More recently, posterior fossa craniotomies have been described, although the long-term benefits of this procedure are not clear. The authors compared the postoperative complications of craniectomies and craniotomies in children with posterior fossa tumors. Methods: From a total of 110 children undergoing surgery for posterior fossa tumors, 56 underwent craniectomy and 54 had a craniotomy. The mean duration of the hospital stay was longer in the craniectomy group (17.5 compared with 14 days). At operation, similar numbers of patients in both groups had total macroscopic clearance of the tumor, complete dural closure, and duraplasty. Postoperatively, more patients in the craniectomy group were noted to have cerebrospinal fluid (CSF) leakage (27% compared with 4%; p < 0.01) and pseudomeningoceles (23% compared with 9%; p < 0.05). There was no significant difference between the two groups in the numbers of patients with CSF infections, wound infections, or hydrocephalus requiring permanent CSF drainage. Patients with CSF leaks had a longer duration of hospital stay (20.7 compared with 14.9 days; p < 0.01), and were more likely to have CSF infections (35% compared with 12%; p < 0.01) and wound infections (24% compared with 1%; p < 0.01) than patients without CSF leaks. Postoperatively, wound exploration and reclosures for CSF leakage were more likely in the craniectomy group (11% compared with 0%; p < 0.01). Multivariate analysis revealed that the only predictor of CSF leakage postoperatively was the type of surgery (that is, craniotomy compared with craniectomy; odds ratio 10.8; p = 0.03). Conclusions: Craniectomy was associated with postoperative CSF leaks, pseudomeningocele, increased wound reclosures, and thus prolonged hospital stays. In turn, CSF leakage was associated with infections of the CSF and wound. The authors propose mechanisms that may explain why CSF leakage is less likely if the bone flap is replaced. abstract_id: PUBMED:24078114 Craniotomy vs. craniectomy for posterior fossa tumors: a prospective study to evaluate complications after surgery. Background: Posterior fossa surgery traditionally implies permanent bone removal. Although suboccipital craniectomy offers an excellent exposure, it could lead to complications. Thus, some authors proposed craniotomy as a valuable alternative to craniectomy. In the present study we compare postoperative complications after craniotomy or craniectomy for posterior fossa surgery. Methods: We prospectively collected data for a consecutive series of patients who underwent either posterior fossa craniotomy or craniectomy for tumor resection. We divided patients into two groups based on the surgical procedure performed, and safety, complication rates and length of hospitalization were analyzed. Craniotomies were performed with a Control-Depth-Attachment(®) drill and chisel, while craniectomies were performed with a perforator and rongeurs. Results: One hundred fifty-two patients were included in the study (craniotomy n = 100, craniectomy n = 52). We detected no dural damage after bone removal in either group.
The total complication rate related to the technique itself was 7% for the craniotomy group and 32.6% for the craniectomy group (p < 0.0001). Pseudomeningocele occurred in 4% vs. 19.2% (p = 0.0009), CSF leak in 2% vs. 11.5% (p = 0.006) and wound infection in 1% vs. 1.9% (p = 0.33), respectively. Post-operative hydrocephalus, a multi-factorial complication which could affect our results, was also calculated and occurred in 4% of the craniotomy group vs. 9.6% of the craniectomy group (p = 0.08). The mean length of in-hospital stay was 9.3 days for the craniotomy group and 11.8 days for the craniectomy group (p = 0.10). Conclusions: The present study suggests that fashioning a suboccipital craniotomy is as effective and safe as performing a craniectomy; both procedures showed similar results in preserving dural integrity, while post-operative complications were fewer when a suboccipital craniotomy was performed. abstract_id: PUBMED:29074422 Suboccipital Craniotomy Versus Craniectomy: A Survey of Practice Patterns. Objective: Open surgical access to the posterior fossa traditionally has been achieved by permanent bone removal and remains the mainstay of posterior fossa surgery, although craniotomy is an alternative. Considerable variation exists at both the national and international levels within a variety of neurologic and neurosurgical disciplines. In this study, we surveyed current practice patterns regarding preference of suboccipital craniotomy or craniectomy. Methods: The membership directory of the American Academy of Neurological Surgeons was reviewed. SurveyMonkey was used to distribute the survey to members of the American Academy of Neurological Surgeons via a modified Dillman method for e-mail correspondence. Comparisons of frequency distributions, means, and medians, as well as multiple logistic regression were used to determine surgical preferences for craniotomy versus craniectomy. Results: We received 1102 responses (19.6%). Overall, 542 (49.7%) respondents prefer craniotomy and 548 (50.3%) prefer craniectomy. Respondents who prefer craniotomy had completed a residency more recently than respondents who preferred craniectomy (15.9 vs. 21.1 years, P < 0.0001) and were more likely to practice outside of North America (P < 0.01). Some 81.4% of pediatric neurosurgeons prefer craniotomy compared with 43.6% of adult neurosurgeons (P < 0.0001). Craniotomy was most highly preferred for tumor resection and vascular malformation. Within the United States, there was significant variation in preference for craniotomy based on geographic region, with New England most commonly preferring craniotomy and the Mid-Atlantic region most commonly preferring craniectomy. Conclusions: Our results show that preference for suboccipital craniotomy or craniectomy varies according to geographic location of practice, time since completing residency, and age of patient population.
The mechanism implicated in acute sialadenitis in the patient described in this report was obstruction of the salivary duct due to surgical positioning. The course of this complication is typically benign if it is identified early in the postoperative period. abstract_id: PUBMED:8871727 Spinal subdural enhancement after suboccipital craniectomy. Purpose: To characterize transient intraspinal subdural enhancement (potentially mimicking the subarachnoid spread of tumor) seen on MR images in some children after suboccipital craniectomy for posterior fossa tumor resection. Methods: Radiologic and medical records of 10 consecutive children who had MR imaging for spinal staging after resection of posterior fossa tumor during a 9-month period were reviewed retrospectively. In addition, one case with similar findings of intraspinal enhancement on spinal staging MR images obtained at another institution was included in the review. Results: Intraspinal enhancement thought to be subdural was seen in four of 10 patients undergoing spinal staging MR imaging 6 to 12 days after surgery. In these four patients, MR studies 50 to 18 days later, without intervening treatment, showed resolution of the abnormal enhancement. A fifth patient (from another institution) with similar intraspinal enhancement underwent CT myelography 4 days later, which showed no subarachnoid lesions. No metastases have developed in any of these five patients during the 2.5- to 3.5-year follow-up period. Conclusions: From analysis of the MR appearance and on the basis of prior myelographic experience, we suggest an extraarachnoid, probably subdural, location of this enhancement. Awareness of this phenomenon will reduce the rate of false-positive diagnoses of metastatic disease. Preoperative spinal staging should be considered for patients undergoing suboccipital craniectomy. abstract_id: PUBMED:36693620 Predictive Factors for the Occurrence of Perioperative Complications in Pediatric Posterior Fossa Tumors. Objective: Central nervous system tumors are the most common solid neoplasm in children, with 60%-70% occurring in the posterior fossa. Surgery is the mainstay of treatment, but surgery in the pediatric population is associated with a high risk of perioperative complications. We aimed to analyze the perioperative complications after posterior fossa surgery in a pediatric population and to identify the associated risk factors. Methods: Retrospective study of all pediatric patients undergoing surgery for resection of a posterior fossa tumor between 1999 and 2019 at the University Hospital of Lausanne. Data were collected including age, clinical presentation, tumor localization, presence of preoperative hydrocephalus, timing of surgery, surgical approach, surgical team, extent of surgical resection, perisurgical complications, and histopathological diagnosis. Statistical analysis was performed to correlate the data with the risk of complications. Results: Sixty-seven patients were included. Perisurgical complications were identified in 39 patients (58.2%), of which 14 (35.9%) required corrective interventions. The perioperative mortality rate was zero. In the univariate analysis, surgery performed under emergency conditions, as well as transvermian and telovelar approaches, were statistically correlated with an increased rate of complications. Extent of resection, hydrocephalus, and Lansky index at presentation were not predictive of perioperative complications.
Midline tumor, tumor volume >25 cm3, and surgery performed by a nonspecialized pediatric onconeurosurgeon were found to be independent risk factors in the multivariate analysis. Conclusions: Surgery in the posterior fossa in the pediatric population harbors a high risk of complications. Identifying the variables contributing to these complications is important in order to improve surgical management of these patients. abstract_id: PUBMED:830815 Management of hydrocephalus secondary to posterior fossa tumors. The records of children with hydrocephalus secondary to posterior fossa tumors were reviewed and the methods of treatment compared with their subsequent clinical course. Of 86 patients evaluated, 47 had no treatment for hydrocephalus prior to tumor removal, 12 had external ventricular drainage, and 27 had cerebrospinal fluid (CSF) shunts before suboccipital craniectomy. Children with CSF shunts before tumor removal had significantly better postoperative conditions than the children without shunts (p less than 0.01). Operative mortality of children without treatment of hydrocephalus before tumor surgery was 12.8%; it was 3.7% in the children with preexisting shunts. Treatment of hydrocephalus with a CSF shunt prior to suboccipital craniectomy was a safe procedure that significantly lowered the morbidity and mortality of subsequent tumor removal. abstract_id: PUBMED:31853546 Natural History of Untreated Transverse/Sigmoid Sinus Thrombosis Following Posterior Fossa Surgery: Case Series and Literature Review. Background: Transverse or sigmoid sinus thrombosis occurs in 4% to 11% of patients following posterior fossa surgery. Anticoagulation has been the mainstay treatment, mostly based on extrapolation from the literature on spontaneous sinus thrombosis. Objective: To analyze the rate and associated complications of postoperative transverse/sigmoid sinus thrombosis for patients undergoing posterior fossa tumor resection. In this series, no antithrombotic therapy was initiated, and no postoperative treatment alterations were made following thrombosis diagnosis. Methods: Prospectively accrued cases from a single surgeon operating at a single academic center were retrospectively reviewed to determine the natural history of untreated transverse/sigmoid sinus thrombosis following posterior fossa surgery. Inclusion criteria were patients 18 years or older undergoing resection of a posterior fossa tumor. A total of 538 patients were analyzed. Results: In all, 26 of 538 patients (4.8%) were diagnosed with transverse/sigmoid sinus thrombosis on routine postoperative imaging. The early postoperative complication rate was 38% in the sinus thrombosis group, as compared to 15% in the no-thrombosis group (P = .02). Significantly higher rates of pseudomeningocele, dysphagia requiring gastrostomy, and cerebellar stroke signs were noted in patients with postoperative sinus thrombosis. However, only 3 of the 26 patients (12%) with postoperative sinus occlusion suffered prolonged central nervous system complications. Conclusion: Transverse/sigmoid sinus thrombosis following suboccipital craniectomy results in a higher rate of early complications; however, most of these complications resolve without anticoagulation. It may be reasonable, therefore, to manage these patients conservatively in order to avoid the risks associated with anticoagulation in the perioperative period. abstract_id: PUBMED:10955668 Factors influencing surgical complications of intra-axial brain tumours.
Object: Extensive surgical resection remains the best available treatment for most intra-axial brain tumours. However, postoperative sequelae can outweigh the potential benefits of surgery. The goal of this study was to review the results of this treatment in our Department in order to quantify morbidity and mortality and determine predictive risk factors for each patient. Method: We report a retrospective study of 200 patients who underwent a craniotomy for intra-axial brain tumours, including gliomas and metastases. Major postoperative complications were analysed and related to different variables. An exhaustive review of the literature concerning the main points of controversy in primary and metastatic brain tumour surgery was also performed. Findings: The overall major complication rate was 27.5%, with neurological complications being the most frequently encountered. We did not find a statistically significant relation between them and the grade of eloquence of the tumoural area. Infratentorial tumour location, previous radiotherapy and reoperations were factors strongly related to the incidence of regional complications. Age over 60 and severe concomitant disease were risk factors for systemic complications. Interpretation: The results from published series concerning surgical complications after craniotomies for brain tumours are not comparable because of the lack of homogeneity between them. Knowledge of the complication rate in each particular neurosurgical department is essential for providing the patient with tailored information about risks before surgery. abstract_id: PUBMED:28289766 Complications and monitoring standards after elective craniotomy in Germany. Background: The increasing endeavors to make inpatient treatment processes more effective lead to a reduction of the length of stay in hospital and minimization of postoperative monitoring. Therefore, the aim of our study was to determine potential postoperative complications for neurosurgical patients undergoing elective surgery with respect to the assessment of their relevance for intensive medical care. Furthermore, our approach was compared with the standard of postoperative care of such patients in Germany. Methods: All 499 patients scheduled for elective craniotomy at our institute from 2010 to 2013 were included, covering various treatment categories: vascular diseases such as aneurysms, arteriovenous malformations (AVM) and cavernous hemangiomas, as well as supratentorial and infratentorial tumors, transsphenoidally operated pituitary adenomas and stereotactic biopsies. All complications were collated and categorized as major or minor. Furthermore, a survey was conducted among 155 neurosurgical hospitals and departments with respect to the preferred postoperative monitoring strategy for the named treatment categories. Results: The rates of major complications were in accordance with data from other studies, and although minor complications (13.4% in our cohort) are rarely recorded in the literature, they do, however, indicate the relevance of adequate postoperative inpatient monitoring. The results of the survey showed a broad preference for intensive care unit monitoring of patients undergoing elective craniotomy in Germany. Conclusion: The undisputed gold standard of postoperative monitoring of neurosurgical patients undergoing elective surgery is still the intensive care unit.
Although more flexible surveillance modalities are available, a cost-driven restructuring of postoperative monitoring, and in particular any reduction of the length of stay in hospital, must be subjected to detailed scrutiny. Answer: Based on the provided abstracts, craniotomy appears to lead to fewer complications than craniectomy in surgical procedures for posterior fossa tumors in children. In a study comparing postoperative complications of craniectomies and craniotomies in children with posterior fossa tumors, craniectomy was associated with higher rates of cerebrospinal fluid (CSF) leaks, pseudomeningocele, increased wound reclosures, and thus prolonged hospital stays. CSF leakage was also associated with infections of the CSF and wound. The authors suggest that CSF leakage is less likely if the bone flap is replaced, which is a characteristic of craniotomy (PUBMED:12405369). Another prospective study evaluating complications after surgery for posterior fossa tumors found that the total complication rate was significantly lower in the craniotomy group compared to the craniectomy group. Specifically, pseudomeningocele, CSF leak, and wound infection rates were lower in the craniotomy group. The mean length of in-hospital stay was also shorter for the craniotomy group, although this difference was not statistically significant (PUBMED:24078114). A survey of practice patterns among members of the American Association of Neurological Surgeons indicated a nearly even split in preference for craniotomy versus craniectomy. However, those who preferred craniotomy had completed their residency more recently and were more likely to practice outside of North America. Pediatric neurosurgeons showed a strong preference for craniotomy (PUBMED:29074422). In summary, the evidence from the abstracts suggests that craniotomy may lead to fewer postoperative complications compared to craniectomy in the context of pediatric posterior fossa tumor surgeries.
Instruction: Pneumonectomy vs lobectomy in the treatment of pathologic N1 NSCLC: could the type of surgical resection dictate survival? Abstracts: abstract_id: PUBMED:12627083 Pneumonectomy vs lobectomy in the treatment of pathologic N1 NSCLC: could the type of surgical resection dictate survival? Aim: To define the prognostic significance of specific types of N1 lymph node involvement in patients operated on for stage II (N1) NSCLC and to evaluate if the extent of resection affects survival. Methods: Of 1117 patients operated on from 1985 to 1998, a homogeneous group of 124 consecutive patients with pathologic T1-T2 N1 disease who had undergone a complete resection with systematic nodal dissection was analysed. No patients received adjuvant radio- or chemotherapy. Results: The overall 5-year survival rate was 48.8%. Survival was not related to pathologic T factor, histology, number, percentage or level of N1 involved, visceral pleura involvement, or number of lymph nodes dissected. Patients were then divided into 3 groups depending on the level of lymph node involvement (stations 10, 11 and 12-13) and survival was analysed according to the extent of resection (pneumonectomy vs lobectomy). No significant difference was found overall; however, in the group with level 10 involvement, patients treated by pneumonectomy showed a better 5-year survival (58%) than patients treated by lobectomy (33%), with a median survival of 110 versus 58 months. This finding was supported by a lower incidence of local recurrence in the pneumonectomy group than in the lobectomy group (0% vs 24%), whereas the same incidence of distant metastases was observed in the two groups (29% vs 23%). Conclusions: In patients with stage II (N1) NSCLC, only in the case of station 10 involvement could pneumonectomy allow better survival by lowering the incidence of local recurrence. However, most patients with stage II (N1) NSCLC die of distant metastasis. This supports the necessity to develop a specific systemic treatment. abstract_id: PUBMED:36752515 Surgical outcome of ipsilateral anatomical resection for lung cancer after pulmonary lobectomy. Objectives: Ipsilateral reoperation after pulmonary lobectomy is often challenging because of adhesions from the previous operation. We retrospectively examined the surgical outcome and prognosis of ipsilateral anatomical resection for lung cancer after pulmonary lobectomy using a multicentre database. Methods: We evaluated the perioperative outcomes and overall survival of 51 patients who underwent pulmonary lobectomy followed by ipsilateral anatomical resection for lung cancer between January 2012 and December 2018. In addition, patients with stage I non-small-cell lung cancer (NSCLC) were compared with 3411 patients with stage I lung cancer who underwent pulmonary resection without a prior ipsilateral lobectomy. Results: Ipsilateral anatomical resections included 10 completion pneumonectomies, 19 pulmonary lobectomies and 22 pulmonary segmentectomies. Operative time was 312.2 ± 134.5 min, and intraoperative bleeding was 522.2 ± 797.5 ml. Intraoperative and postoperative complications occurred in 9 and 15 patients, respectively. However, the 5-year overall survival rate after anatomical resection following ipsilateral lobectomy was 83.5%. Furthermore, in patients with c-stage I NSCLC, anatomical resection following ipsilateral lobectomy was not associated with worse survival than anatomical resection without prior ipsilateral lobectomy.
Conclusions: Anatomical resection following ipsilateral lobectomy is associated with a high frequency of intraoperative and postoperative complications. However, the 5-year overall survival in patients with c-stage I NSCLC who underwent ipsilateral anatomical resection after pulmonary lobectomy is comparable to that in patients who underwent anatomical resection without prior pulmonary lobectomy. abstract_id: PUBMED:37324102 Analyzing the impact of minimally invasive surgical approaches on post-operative outcomes of pneumonectomy and sleeve lobectomy patients. Background: Some patients with non-small cell lung cancer (NSCLC) have superior short- and long-term outcomes with sleeve lobectomy rather than pneumonectomy. Originally, sleeve lobectomy was reserved for patients with limited pulmonary function; however, the reported superior results allowed sleeve lobectomy to be performed in expanded patient populations. In a further attempt to improve post-operative outcomes, surgeons have adopted minimally invasive techniques. Minimally invasive approaches have potential benefits to patients such as decreased morbidity and mortality while maintaining the same caliber of oncologic outcomes. Methods: We identified patients at our institution who underwent sleeve lobectomy or pneumonectomy to treat NSCLC from 2007 to 2017. We analyzed these groups with respect to 30- and 90-day mortality, complications, local recurrence, and median survival. We included multivariate analysis to determine the impact of a minimally invasive approach, sex, extent of resection, and histology. Differences in mortality were analyzed using the Kaplan-Meier method, with the log-rank test used to compare the groups. A two-tailed Z test for difference in proportions was done to analyze complications, local recurrence, 30-day and 90-day mortality. Results: A total of 108 patients underwent sleeve lobectomy (n=34) or pneumonectomy (n=74) for treatment of NSCLC with 18 undergoing open pneumonectomy, 56 undergoing video-assisted thoracoscopic surgery (VATS) pneumonectomy, 29 undergoing open sleeve lobectomy, and 5 undergoing VATS sleeve lobectomy. There was no significant difference in 30-day mortality (P=0.064) but there was a difference in 90-day mortality (P=0.007). There was no difference in complication rates (P=0.234) or local recurrence rates (P=0.779). The pneumonectomy patients had a median survival of 23.6 months (95% CI: 3.8-43.4 months). The sleeve lobectomy group had a median survival of 60.7 months (95% CI: 43.3-78.2 months) (P=0.008). On multivariate analysis, extent of resection (P<0.001) and tumor stage (P=0.036) were associated with survival. There was no significant difference between the VATS approach and the open surgical approach (P=0.053). Conclusions: Among patients undergoing surgery for NSCLC, sleeve lobectomy resulted in lower 90-day mortality and better 3-year survival compared with pneumonectomy. Having a sleeve lobectomy rather than a pneumonectomy and having earlier-stage disease led to significantly improved survival on multivariate analysis. Having a VATS operation leads to a non-inferior post-operative outcome compared to open surgery. abstract_id: PUBMED:11803340 Survival after bronchoplastic lobectomy for non small cell lung cancer compared with pneumonectomy according to nodal status.
Background: In this retrospective study we have compared the results after sleeve lobectomy and pneumonectomy performed for non small cell lung cancer in the period January 1990-December 1995 at the Thoracic Surgery Unit, University Hospital of Siena. Follow-up was updated until December 2000. Methods: In that period, 38 patients underwent sleeve lobectomy and 127 underwent pneumonectomy. The bronchoplasty was a full sleeve in 30 patients and a bronchial wedge resection in eight. Systematic nodal dissection was undertaken routinely. Results: The 30-day postoperative mortality was 5.2% (2/38) in the sleeve lobectomy group and 3.9% (5/127) in the pneumonectomy group. Postoperative complications occurred in 23.6% of patients in the sleeve lobectomy group and in 23.2% of those in the pneumonectomy group. Local recurrences occurred in 5.2% of patients in the sleeve lobectomy group and in 4.8% of those in the pneumonectomy group. The overall 5-year survival for the sleeve lobectomy group was 38% whereas that for the pneumonectomy group was 25% (p=0.03). Regarding lymph-node involvement, in the sleeve lobectomy group, the 5-year survival for N0, N1 and N2 was 62.5, 17.5 and 12.5%, respectively. Conclusions: Our data confirm that sleeve lobectomy, when performed in selected patients with non small cell lung cancer, provides at least similar overall long-term survival to that seen after pneumonectomy. Long-term results are chiefly related to nodal stage, with a significantly lower survival for patients with nodal involvement. As most patients with nodal involvement die from distant metastases, adjuvant treatment, instead of type of resection, would play a major role in prolonging survival. abstract_id: PUBMED:15382293 Pulmonary resection after pneumonectomy. Patients who have a lung cancer in the residual lung after pneumonectomy should not be automatically excluded from surgical consideration. These patients should be carefully staged and evaluated physiologically. The most important initial differentiation is to distinguish a true second primary lung cancer from metastatic recurrent lung cancer. Meticulous staging with chest CT, PET, brain MRI, and mediastinoscopy should be able to successfully exclude metastatic disease, multifocal disease, or locally advanced tumors. Only patients who have stage I disease are candidates for this type of extended resection. Ideally, these patients should have small peripheral tumors that can be encompassed with a low-volume wedge resection. More extended resections, such as segmentectomy or right middle lobectomy, may be considered in some patients but seem to bear a higher operative morbidity and mortality. The need for an upper or lower lobectomy after contralateral pneumonectomy is probably an absolute contraindication to surgical resection. To tolerate pulmonary resection after pneumonectomy, and to obtain the desired survival benefit, patients should have a good to excellent performance status, no serious comorbidities, and a ppoFEV1 greater than 1.0 L/second. In these highly selected patients, pulmonary resection after pneumonectomy can be accomplished with an acceptable operative morbidity and mortality and, in true cases of metachronous second primary lung cancers, may achieve a 5-year survival rate of up to 50%. abstract_id: PUBMED:34281703 Sublobar resection is comparable to lobectomy for screen-detected lung cancer.
Objective: Sublobar resection is frequently offered to patients with small, peripheral lung cancers, despite the lack of outcome data from ongoing randomized clinical trials. Sublobar resection may be a particularly attractive surgical strategy for screen-detected lung cancers, which have been suggested to be less biologically aggressive than cancers detected by other means. Using prospective data collected from patients undergoing surgery in the National Lung Screening Trial, we sought to determine whether extent of resection affected survival for patients with screen-detected lung cancer. Methods: The National Lung Screening Trial database was queried for patients who underwent surgical resection for confirmed lung cancer. Propensity score matching analysis (lobectomy vs sublobar resection) was done (nearest neighbor, 1:1 matching with no replacement, caliper 0.2). Demographics, clinicopathologic and perioperative outcomes, and long-term survival were compared in the entire cohort and in the propensity-matched groups. Multivariable logistic regression analysis was done to identify factors associated with increased postoperative morbidity or mortality. Results: We identified 1029 patients who underwent resection for lung cancer in the National Lung Screening Trial, including 821 patients (80%) who had lobectomy and 166 patients (16%) who had sublobar resection, predominantly wedge resection (n = 114, 69% of sublobar resection). Patients who underwent sublobar resection were more likely to be female (53% vs 41%, P = .004) and had smaller tumors (1.5 cm vs 2 cm, P < .001). The sublobar resection group had fewer postoperative complications (22% vs 32%, P = .010) and fewer cardiac complications (4% vs 9%, P = .033). For stage I patients undergoing sublobar resection, there was no difference in 5-year overall survival (77% for both groups, P = .89) or cancer-specific survival (83% for both groups, P = .96) compared with patients undergoing lobectomy. On multivariable logistic regression analysis, sublobar resection was the only factor associated with lower postoperative morbidity/mortality (odds ratio, 0.63; 95% confidence interval, 0.40-0.98). To compare surgical strategies in balanced patient populations, we propensity matched 127 patients from each group undergoing sublobar resection and lobectomy. There were no differences in demographics or clinical and tumor characteristics among matched groups. There was again no difference in 5-year overall survival (71% vs 65%, P = .40) or cancer-specific survival (75% vs 73%, P = .89) for patients undergoing lobectomy and sublobar resection, respectively. Conclusions: For patients with screen-detected lung cancer, sublobar resection confers survival similar to lobectomy. By decreasing perioperative complications and potentially preserving lung function, sublobar resection may provide distinct advantages in a screened patient cohort.
Methods: The Surveillance, Epidemiology and End Results database was queried for all patients with ≤2 cm peripheral SPLC diagnosed between 2004 and 2015 who underwent prior lobectomy for the first primary and surgical resection only for the SPLC. American College of Chest Physicians guidelines were used to classify SPLC. Kaplan-Meier analysis and multivariable Cox regression were used to compare overall survival. Results: A total of 356 patients met the inclusion criteria with 203 (57%) treated with wedge resection and 153 (43%) treated with anatomic resection. Significantly better median survival was observed with anatomic resection than with wedge resection using a Kaplan-Meier analysis (124 vs 63 months; P < .001). With multivariable Cox regression, improved long-term survival was observed for anatomic resection (hazard ratio: 0.44, confidence interval: 0.27-0.70; P = 0.001). Improvement in survival was demonstrated with wedge resection when lymph node sampling was done. Lastly, we calculated the average treatment effect on the treated with inverse probability weighting for a subgroup of patients and found that those with wedge resection and lymph node sampling had shorter long-term survival times. Conclusions: Anatomic resections may provide better long-term survival than wedge resections for patients with early-stage peripheral SPLC after prior lobectomy. Significant improvement in survival was observed with wedge resection for SPLC when adequate lymph node dissection was performed. abstract_id: PUBMED:33434540 Lobectomy With Artery Reconstruction and Pneumonectomy for Non-Small Cell Lung Cancer: A Propensity Score Weighting Study. Background: The treatment of non-small cell lung cancer is based, when suitable, on surgical resection. Pneumonectomy has been considered the standard surgical procedure for locally advanced lung cancers but it is associated with high mortality and morbidity rates. Reconstruction of the pulmonary artery, associated with parenchyma-sparing techniques, is meant to be an alternative to pneumonectomy. Methods: This retrospective single-center study is based on a detailed and comprehensive analysis of the clinical and oncologic data of patients treated between 2004 and 2016 through pneumonectomy or lobectomy with reconstruction of the pulmonary artery. A propensity score weighting approach, based on the preoperative characteristics of two groups of 124 patients each was performed. The subsequent statistical analysis evaluated long-term and short-term clinical outcomes together with risk factors analysis. Results: The comparison between pneumonectomy and pulmonary artery reconstructions showed a higher 30-day (P = .02) and 90-day (P = .03) mortality rate in the pneumonectomy group, together with a higher incidence of major complications (P = .004). Long-term results have shown comparable outcomes, both in terms of 5-year disease-free survival (52.2% for pneumonectomy vs 46% for pulmonary artery reconstructions, P = .57) and overall 5-year survival (41.9% vs 35.6%, respectively; P = .57). Risk factors analysis showed that cancer-specific survival was related to lymph node status (P < .01) and absence of adjuvant therapy (P = .04). Lymph node status also influenced the risk of recurrence (P < .01). Conclusions: Lobectomy with reconstruction of the pulmonary artery is a valuable and oncologically safe alternative to pneumonectomy, with lower short-term mortality and morbidity, without affecting long-term oncologic results.
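The two preceding abstracts follow the same analytic pattern: balance the comparison groups (propensity-score matching or propensity-score weighting) and then compare survival with Kaplan-Meier curves and a log-rank test or a Cox model. The sketch below is a minimal, hypothetical illustration of the matching-plus-log-rank part of that pattern on synthetic data; the column names, the caliper rule, and the scikit-learn/lifelines calls are assumptions for illustration only, not the code or data used in the cited studies.

```python
# Minimal sketch: greedy 1:1 propensity-score matching followed by a
# Kaplan-Meier / log-rank comparison. All data below are synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(65, 8, n),
    "tumor_cm": rng.normal(1.8, 0.5, n),
    "female": rng.integers(0, 2, n),
})
# Hypothetical treatment assignment (1 = sublobar resection) that depends on covariates.
logit = -4 + 0.03 * df["age"] - 0.5 * df["tumor_cm"] + 0.3 * df["female"]
df["sublobar"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))
# Synthetic follow-up times (months) and event indicators (1 = death observed).
df["months"] = rng.exponential(60, n)
df["event"] = rng.binomial(1, 0.4, n)

# 1. Propensity score from a logistic model of treatment on covariates.
X = df[["age", "tumor_cm", "female"]]
df["ps"] = LogisticRegression(max_iter=1000).fit(X, df["sublobar"]).predict_proba(X)[:, 1]

# 2. Greedy nearest-neighbour 1:1 matching on the propensity score with a caliper
#    (simplified: duplicate controls are not removed, unlike strict no-replacement matching).
treated = df[df["sublobar"] == 1]
control = df[df["sublobar"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
dist, idx = nn.kneighbors(treated[["ps"]])
caliper = 0.2 * df["ps"].std()          # assumed caliper rule, for illustration
keep = dist.ravel() <= caliper
matched = pd.concat([treated[keep], control.iloc[idx.ravel()[keep]]])

# 3. Kaplan-Meier curves and a log-rank test in the matched sample.
a = matched[matched["sublobar"] == 1]
b = matched[matched["sublobar"] == 0]
km = KaplanMeierFitter().fit(a["months"], event_observed=a["event"], label="sublobar")
print("median survival (sublobar):", km.median_survival_time_)
result = logrank_test(a["months"], b["months"],
                      event_observed_A=a["event"], event_observed_B=b["event"])
print(f"log-rank p = {result.p_value:.3f}")
```

Published analyses typically add covariate-balance diagnostics before comparing survival; the sketch omits them for brevity.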
abstract_id: PUBMED:21092990 Survival of patients with clinical stage IIIA non-small cell lung cancer after induction therapy: age, mediastinal downstaging, and extent of pulmonary resection as independent predictors. Background: In clinical stage IIIA non-small cell lung cancer, the role of surgical resection, particularly pneumonectomy, after induction therapy remains controversial. Our objective was to determine factors predictive of survival after postinduction surgical resection. Methods: We retrospectively reviewed a prospectively collected database of 136 patients who underwent surgical resection after induction chemotherapy (n = 119) or chemoradiation (n = 17) from June 1990 to January 2010. Results: One hundred five lobectomies or bilobectomies and 31 pneumonectomies were performed. There was 1 perioperative death (pneumonectomy). Seventy-one patients had downstaging to N0 or N1 nodal status (52%). There were 2 complete pathologic responses. Median follow-up was 42 months (range, 0.69-136 months). Overall 5-year survival for the entire cohort was 33% (36% lobectomy, 22% pneumonectomy, P = .001). Patients with pathologic downstaging to pN0 or pN1 had improved 5-year survival (45% vs 20%, P = .003). For patients with pN0 or pN1 disease, survival after lobectomy was better than after pneumonectomy (48% vs 27%, P = .011). In patients with residual N2 disease, there was no statistically significant survival difference between lobectomy and pneumonectomy (5-year survival, 21% vs 19%; P = .136). Multivariate analysis identified age (hazard ratio, 1.05; P = .002), extent of resection (hazard ratio, 2.01; P = .026), and presence of residual pN2 (hazard ratio, 1.60; P = .047) as independent predictors of survival. Conclusions: After induction therapy for patients with clinical stage IIIA disease, both pneumonectomy and lobectomy can be safely performed. Although survival after lobectomy is better, long-term survival can be accomplished after pneumonectomy for appropriately selected patients. Nodal downstaging is an important determinant of survival, particularly after lobectomy. abstract_id: PUBMED:24498478 Survival after Pneumonectomy for Stage III Non-small Cell Lung Cancer. Objectives: Stage III non-small cell lung cancer (NSCLC) has a poor prognosis. Reports suggest that five-year survival after current treatment is between 14 and 24 percent. The purpose of this retrospective study was to investigate the morbidity and mortality of patients diagnosed with stage III NSCLC and treated with pneumonectomy at the University of Kentucky Medical Center in Lexington, KY. Methods: We reviewed the medical record and tumor registry follow-up data on 100 consecutive patients who underwent pneumonectomy for lung cancer at the University of Kentucky. Results: We identified thirty-six patients in stage III who underwent pneumonectomy. Ten patients had surgery only, eight patients received adjuvant chemotherapy, and eighteen patients received neoadjuvant therapy. There was one surgical death in this series. Mean follow-up was 2.9 years. One-, three-, and five-year survival was 66%, 38%, and 38%, respectively. Five-year survival for the group with adjuvant therapy was 60%. Conclusion: Most lung cancer patients present with advanced disease and the prognosis remains poor. Our experience indicates resection offers an above average chance of long-term survival when supplemented with neoadjuvant and/or adjuvant therapy.
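The multivariable analysis reported just above, with hazard ratios for age, extent of resection, and residual pN2, is the kind of result typically produced by a Cox proportional-hazards model. The following is a minimal, hypothetical sketch of such a model using the lifelines library on synthetic data; the variable names and values are illustrative assumptions, not the study's actual dataset.

```python
# Minimal sketch of a Cox proportional-hazards model with covariates analogous
# to the predictors reported above. All data below are synthetic.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 136
df = pd.DataFrame({
    "age": rng.normal(62, 9, n),
    "pneumonectomy": rng.integers(0, 2, n),   # 1 = pneumonectomy, 0 = lobectomy
    "residual_pN2": rng.integers(0, 2, n),    # 1 = residual N2 disease after induction
    "months": rng.exponential(40, n),         # follow-up time
    "death": rng.binomial(1, 0.6, n),         # event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
# Prints hazard ratios (exp(coef)) with confidence intervals and p-values,
# analogous to the reported HRs for age, extent of resection, and residual pN2.
cph.print_summary()
```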
Answer: The type of surgical resection, whether pneumonectomy or lobectomy, can indeed influence survival outcomes in the treatment of pathologic N1 non-small cell lung cancer (NSCLC). According to a study by PUBMED:12627083, the overall 5-year survival rate for patients with stage II (N1) NSCLC was 48.8%, and survival was not related to various factors such as pathologic T factor, histology, or number of lymph nodes dissected. However, in the case of station 10 lymph node involvement, patients treated by pneumonectomy showed a better 5-year survival (58%) compared to those treated by lobectomy (33%), with a median survival of 110 against 58 months. This was also confirmed by a lower incidence of local recurrence in the pneumonectomy group than in the lobectomy group (0% vs 24%), although the incidence of distant metastases was similar in both groups. Another study, PUBMED:37324102, found that sleeve lobectomy resulted in lower 90-day mortality and better 3-year survival compared to patients undergoing pneumonectomy. The median survival for the pneumonectomy group was 23.6 months, while the sleeve lobectomy group had a median survival of 60.7 months. Multivariate analysis showed that the extent of resection and tumor stage were associated with survival, and that minimally invasive surgical approaches did not significantly differ from open surgery in terms of post-operative outcomes. PUBMED:11803340 also supports the notion that sleeve lobectomy, when performed in selected patients with NSCLC, provides at least similar overall long-term survival to that seen after pneumonectomy, with a 5-year survival rate of 38% for the sleeve lobectomy group compared to 25% for the pneumonectomy group. In summary, the type of surgical resection can dictate survival in patients with pathologic N1 NSCLC, with evidence suggesting that pneumonectomy may be associated with better survival in certain cases, such as station 10 lymph node involvement. However, sleeve lobectomy also shows promising survival outcomes and may be associated with lower mortality and morbidity, particularly when minimally invasive techniques are used.
Instruction: Does detection of carotid plaque affect physician behavior or motivate patients? Abstracts: abstract_id: PUBMED:18035077 Does detection of carotid plaque affect physician behavior or motivate patients? Background: Imaging techniques to identify subclinical atherosclerosis are becoming more widespread, but few data exist regarding their influence on patient or physician behavior. We evaluated the impact of ultrasound screening to identify carotid artery plaques on physician treatment plans and patient motivation. Methods: Subjects included asymptomatic patients without known vascular disease who had 2 or more cardiac risk factors. Circumferential scanning of the right and left carotid arteries to identify carotid plaques was performed using a handheld ultrasound device in an office setting. The physician's initial treatment recommendations were assessed before and after the results of the carotid scan were reported. Subjects completed a survey to assess motivation to make lifestyle changes before and after the results of the scan were provided. Results: Fifty subjects were enrolled over 9 months. Their mean (SD) age was 54.0 (10.4) years and their mean Framingham 10-year cardiovascular risk was 7.8% (7.9%). More than half (58%) of the subjects had at least one carotid plaque. When carotid plaque was identified, physicians were more likely to prescribe aspirin (P = .031) and lipid-lowering therapy (P = .004). Although subjects with carotid plaque reported an increase in their perceived likelihood of developing heart disease (P = .013), they did not report increased motivation to make lifestyle changes. Conclusions: Ultrasound screening for carotid plaque in an office setting can alter physician treatment plans. Although the presence of plaque increased patient perception of cardiovascular risk, it did not motivate patients to make lifestyle changes. abstract_id: PUBMED:18558473 Ultrasound detection of increased carotid intima-media thickness and carotid plaque in an office practice setting: does it affect physician behavior or patient motivation? Background: The aim of this multicenter study was to determine if identifying increased carotid intima-media thickness (CIMT) or carotid plaque during office-based ultrasound screening examinations could alter physicians' treatment plans and patients' motivation regarding health-related behaviors. Methods: Carotid ultrasound studies were performed by a nonsonographer clinician using a handheld system. Changes in physicians' treatment plans and patients' motivation on the basis of scan results were analyzed using multivariate regression. Results: There were 253 subjects (mean age, 58.1 +/- 6.6 years). When increased CIMT or carotid plaque was detected, physicians were more likely to prescribe aspirin and lipid-lowering therapy (P < .001). Subjects were more likely to report increases in plans to take cholesterol-lowering medication (P = .002) and the perceived likelihood of having or developing heart disease (P = .004). Conclusions: Findings from office-based carotid ultrasound studies can influence physicians' prescriptions of evidence-based interventions. Patients with abnormal ultrasound findings recognize their increased cardiovascular risk and plan to take cholesterol-lowering medication. abstract_id: PUBMED:24624735 Primary prevention and screening in adults: update 2014. This article provides an update on the recommendations for the routine check-up and the primary and secondary prevention of cancer and cardiovascular disease.
Changes in cancer screening affect mainly colorectal, lung and prostate cancers. In the area of cardiovascular disease prevention, screening for carotid artery stenosis is still not recommended. The current evidence is insufficient to recommend screening for coronary heart disease or peripheral artery disease in asymptomatic patients. Shared information and decision making between physician and patient is recommended when there is uncertainty regarding the effectiveness of an intervention. abstract_id: PUBMED:23009226 Dialysis methods may affect carotid intima-media thickness in Chinese end-stage renal disease patients. Atherosclerosis is the most common cause of cardiovascular morbidity in end-stage renal disease (ESRD) patients and carotid intima-media thickness (IMT) is an early independent predictor of atherosclerosis. The aim of this study is to compare continuous ambulatory peritoneal dialysis (CAPD) and maintenance hemodialysis (MHD) with respect to carotid IMT in Chinese ESRD patients. A total of 72 CAPD patients, 92 MHD patients, and 50 age- and sex-matched healthy controls were included. Dialysis patients were divided into five subgroups according to dialysis duration: 3-6, 7-12, 13-59, 60-119, and 120-179 months. Carotid IMT and carotid plaques were detected for each patient. The carotid IMT and total plaque detection rate in the CAPD and MHD groups were considerably higher than in the healthy control group (p < 0.01). No significant difference was found in the carotid IMT and total plaque detection rate between the CAPD group and the MHD group (p > 0.05). However, after stratification by dialysis duration, the total carotid IMT in the CAPD subgroup was higher than in the MHD subgroup at dialysis durations of 60-119 and 120-179 months (p < 0.05), and there was no significant difference in the total plaque detection rate between the CAPD and MHD subgroups for the same dialysis durations (p > 0.05). Our study showed that both CAPD and MHD affect carotid IMT in Chinese ESRD patients, and the degree of atherosclerosis in CAPD patients might be higher than that in MHD patients after 5 years of dialysis. abstract_id: PUBMED:28903431 Detection of specific Chlamydia pneumoniae and cytomegalovirus antigens in human carotid atherosclerotic plaque in a Chinese population. To explore the relationship between certain pathogens, such as Chlamydia pneumoniae (Cpn) and cytomegalovirus (CMV), and carotid atherosclerosis (AS) in a Chinese population. Twenty-five carotid atherosclerotic stenosis patients from the Beijing Tiantan Hospital (affiliated with Capital Medical University) participated in the study. After undergoing digital subtraction angiography (DSA) and/or computed tomography angiography (CTA), the degree of carotid artery stenosis was over 70% in all cases, and the patients underwent carotid endarterectomy. Plaque specimens were obtained during surgery. The streptavidin-peroxidase (SP) method was used to test the Cpn and CMV antigens in the specimens, and the relationship between the Cpn and CMV pathogen infections and AS was analyzed based on the test results. In the group of 25 carotid atherosclerotic specimens, the detection rate of the Cpn-specific antigens was 84.0% (21/25). In the control group, the detection rate was 13.3% (2/15) in the ascending aortic intima. Thus, the between-group difference was significant (P<0.01).
The CMV-specific antigen detection rate was 72.0% (18/25) using the same experimental group specimens, and the detection rate was zero in the control group. Thus, there were significant between-group differences (P<0.01). Due to the high detection rate of Cpn- and CMV-specific antigens in carotid atherosclerotic plaque in a Chinese population, it can be inferred that pathogens such as Cpn and CMV are one factor associated with carotid atherosclerosis. abstract_id: PUBMED:23146344 Detection of micro-embolic signals: a review of the literature. Background: The detection of micro-embolic signals (MES) by transcranial Doppler sonography might be useful for risk stratification in patients with symptomatic and asymptomatic carotid or cerebral artery stenosis, dissections, aortic atheroma, interventional procedures, and right to left cardiac shunts. Aim: Review of the technique and clinical situations of MES detection. Methods: PubMed search from 1990 to 2012. Results: MES were found in 0%, 19%, and 48% versus 0%, 3%, and 12% of patients with symptomatic and asymptomatic carotid stenosis of less than 30%, 30 to 69%, and 70 to 99%, respectively. MES were related to the risk of recurrent stroke or transient ischemic attack (TIA). In the ACES study, the absolute annual risk of stroke or TIA after 2 years was 7% with vs 3% without MES. In patients with intracranial stenosis, the risk of stroke recurrence was 48% with vs 7% without MES at 13.6 months follow-up. MES were reported in 25% of the symptomatic versus none of the asymptomatic patients with intracranial stenosis. Conclusion: Detection of MES is feasible and reproducible for multicenter studies, using rigorous methodology and long-lasting recordings. It may contribute to risk stratification, especially in patients with extra- or intracranial stenosis. abstract_id: PUBMED:26432281 Physician specialty and variation in carotid revascularization technique selected for Medicare patients. Objective: Carotid artery stenting (CAS) has become an alternative to carotid endarterectomy (CEA) for select patients with carotid atherosclerosis. We hypothesized that the choice of CAS vs CEA varies as a function of treating physician specialty, which would result in regional variation in the relative use of these treatment types. Methods: We used Medicare claims (2002-2010) to calculate annual rates of CAS and CEA and examined changes by procedure type over time. To assess regional preferences surrounding CAS, we calculated the proportion of revascularizations by CAS, across hospital referral regions, defined according to the Dartmouth Atlas of Healthcare. We then examined relationships between patient factors, physician specialty, and regional use of CAS. Results: The annual number of all carotid revascularization procedures decreased by 30% from 2002 to 2010 (3.2 to 2.3 per 1000; P = .005). Whereas rates of CEA declined by 35% during these 8 years (3.0 to 1.9 per 1000; P < .001), CAS utilization increased by 5% during the same interval (0.30 to 0.32 per 1000; P = .014). Utilization of carotid revascularization varied across the United States, with some regions performing as few as 0.7 carotid procedures per 1000 beneficiaries (Honolulu, Hawaii) and others performing nearly 8 times as many (5.3 per 1000 in Houma, La). Variation in procedure type (CEA vs CAS) was evident as well, as the proportion of carotid revascularization procedures that were constituted by CAS varied from 0% (Casper, Wyo, and Meridian, Miss) to 53% (Bend, Ore).
The majority of CAS procedures were performed by cardiologists (49% of all CAS cases), who doubled their rates of CAS during the study period from 0.07 per 1000 in 2002 to 0.15 per 1000 in 2010. Conclusions: Variation in rates of carotid revascularization exists. Whereas rates of carotid revascularization have declined by more than 30% in recent years, utilization of CAS has increased. The proportion of all carotid revascularization procedures performed as CAS varies markedly by geographic region, and regions with the highest proportion of cardiologists perform the most CAS procedures. Evidence-based guidelines for carotid revascularization will require a multidisciplinary approach to ensure uniform adoption across specialties that care for patients with carotid artery disease. abstract_id: PUBMED:6474121 Carotid stenosis: early detection. Ultrasound techniques are currently the most reliable and precise methods for noninvasive detection of extracranial carotid artery disease. Continuous-wave Doppler, associated with spectrum analysis, and high-resolution real-time B-mode sonography are complementary. This is demonstrated by correlating ultrasound examinations and angiographies in 112 internal carotids. The value of B-mode sonography for the detection of atheromatous plaques is emphasized, this technique being superior even to angiography. The indications for ultrasound techniques are reviewed. They make possible the prevention of cerebrovascular disease due to extracranial carotid stenoses and the study of the natural history of atheromatous plaques. abstract_id: PUBMED:11885436 The role of the dentist in detection of carotid atherosclerosis. Cerebrovascular accidents (CVA), or stroke, afflict 731,000 Americans each year, with 165,000 of these individuals dying. Stroke is a major cause of death and disability throughout the world, including southern Africa. Atherosclerosis-related formation of thrombi and emboli at the bifurcation of the common carotid artery and proximal internal carotid artery represents a common cause of stroke. The detection of carotid atherosclerosis by dentists using panoramic radiographs recently has been presented to the public through television news stories and the press, but many dentists still do not know how to interpret panoramic radiographs for detection of this condition. This communication illustrates examples in which carotid atherosclerosis was detected using panoramic radiography. Differential diagnoses are presented. Since not every carotid plaque calcifies, panoramic radiography should never be used alone to exclude the possibility of carotid atherosclerosis. It should also be remembered that the mere presence of calcified carotid plaque is not necessarily a reflection of the degree of carotid stenosis. Definitive diagnosis and treatment require referral of patients deemed to be at risk to an appropriate physician. A variety of advanced diagnostic methods, including gadolinium-enhanced MRI, Duplex Doppler sonography and angiography are used to confirm carotid stenosis. abstract_id: PUBMED:8784115 US National Survey of Physician Practices for the Secondary and Tertiary Prevention of Ischemic Stroke. Medical therapy in patients with carotid artery stenosis. Background And Purpose: Aspirin or other platelet antiaggregants and anticoagulants are commonly used in many types of patients at elevated stroke risk.
However, relatively little is known concerning how practicing physicians use these medications in their patients with extracranial carotid artery stenosis. The identification of variations in practice may help to both direct specific educational efforts and guide further research. Methods: Between August 1993 and February 1994, we surveyed the stroke prevention practices of a stratified random sample of 2000 US physicians. The survey included clinical scenarios that probed the use of aspirin or other platelet antiaggregants and anticoagulants in symptomatic and asymptomatic patients with carotid artery stenoses of 50% to 70% or more than 70%, with and without known surgical contraindications. Results: Sixty-seven percent of those eligible completed the survey (n = 1006). More than 85% of physicians responded that they always or often prescribe aspirin or other platelet antiaggregants regardless of degree of carotid artery stenosis, symptom status, or presence of surgical contraindications. However, the reported frequency of use of these medications varied independently according to physician specialty (P = .044). In contrast, in addition to physician specialty, the reported frequency of anticoagulant use varied independently with degree of carotid artery stenosis, symptom status, and presence of surgical contraindications (P < .0001 for each variable). Fifteen percent of physicians responded that they always or often use anticoagulants for asymptomatic patients with 50% to 70% carotid artery stenosis versus 43% who reported doing so for symptomatic patients with a similar degree of stenosis (P < .001); 28% often or always prescribe anticoagulants for asymptomatic patients with more than 70% carotid artery stenosis versus 49% who do so if symptoms are present (P < .001). The odds of noninternist primary care physicians responding that they always or often use anticoagulants were more than five times higher (odds ratio, 5.32; 95% confidence interval [CI], 3.79 to 7.45) than surgical specialists. Compared with surgical specialists, the odds ratios for the use of anticoagulants were 3.65 for internists (95% CI, 2.63 to 5.06) and 1.88 (95% CI, 1.40 to 2.53) for neurologists. Conclusions: These data show the following: (1) Aspirin or other platelet antiaggregants are used by most physicians regardless of degree of carotid artery stenosis, symptom status, or presence of surgical contraindications; (2) anticoagulants are prescribed selectively, with each of these variables influencing their use; and (3) the use of both classes of agents varies with physician specialty training. Answer: The detection of carotid plaque does affect physician behavior. When carotid plaque is identified, physicians are more likely to prescribe aspirin and lipid-lowering therapy (PUBMED:18035077; PUBMED:18558473). This indicates that physicians adjust their treatment plans based on the presence of carotid plaque, likely due to the association of plaque with increased cardiovascular risk. However, the impact on patient motivation appears to be less straightforward. While subjects with carotid plaque did report an increase in their perceived likelihood of developing heart disease, they did not report increased motivation to make lifestyle changes (PUBMED:18035077).
In contrast, another study found that patients with abnormal ultrasound findings, which included increased carotid intima-media thickness or carotid plaque, were more likely to report increases in plans to take cholesterol-lowering medication and the perceived likelihood of having or developing heart disease (PUBMED:18558473). This suggests that while the detection of carotid plaque can increase awareness of cardiovascular risk, it may not consistently lead to increased patient motivation for lifestyle changes, although it may influence intentions regarding medication adherence. In summary, the detection of carotid plaque does influence physician behavior by prompting them to prescribe medications that can mitigate cardiovascular risk. The effect on patient motivation is less clear, with some patients reporting increased concern about heart disease but not necessarily translating that concern into lifestyle modifications.
Instruction: First series of mechanical circulatory support in non-compaction cardiomyopathy: Is LVAD implantation a safe alternative? Abstracts: abstract_id: PUBMED:26126056 First series of mechanical circulatory support in non-compaction cardiomyopathy: Is LVAD implantation a safe alternative? Background: Left ventricular non-compaction (LVNC) is a rare cardiac disorder characterized by prominent trabeculae and deep recesses of the ventricular myocardium. Patients with LVNC may develop severe congestive heart failure refractory to medical therapy. However, heart transplantation is strongly limited due to donor organ shortage. Thus mechanical circulatory support by left ventricular assist devices (LVADs) is a promising alternative. Nevertheless, hypertrabeculation and proarrhythmogenic potential in LVNC might represent important hurdles for success of LVAD therapy in these patients. Methods And Results: We retrospectively analyzed the data of a total of 5 patients (3 HVAD, Heartware®; 2 HeartMate II, Thoratec®) with LVNC who underwent LVAD implantation in our institution between 2010 and 2014. Mean follow-up time was 86.5 weeks. 30-day survival was 100% without major intrahospital complications. During follow-up, 3 patients developed pump thrombosis requiring pump replacement. Arrhythmias were not detected during follow-up as assessed by ICD interrogation. Conclusions: LVAD implantation in LVNC can be performed with low intrahospital complication rates. However, we observed a high incidence of pump thrombosis during follow-up, possibly related to thromboembolic predisposition by the underlying LVNC. Therefore, careful management of anticoagulation appears to be critical in these patients. abstract_id: PUBMED:30968544 Minimally invasive biventricular mechanical circulatory support with Impella pumps as a bridge to heart transplantation: a first-in-the-world case report. Cardiogenic shock from biventricular failure that requires acute mechanical circulatory support carries high 30-day mortality. Acute mechanical circulatory support can serve as a bridge to orthotopic heart transplant (OHT) in selected patients. We report a patient with biventricular failure secondary to rapidly progressive cardiac sarcoidosis refractory to medical management who was bridged to OHT with Impella 5.0 and Impella RP (temporary left and right ventricular assist devices, respectively). This is the first successful bridge to transplantation using these devices in biventricular heart failure and cardiogenic shock. We discuss considerations for using this strategy over veno-arterial extracorporeal membrane oxygenation or surgically implanted assist devices in patients with cardiogenic shock and biventricular failure as a bridge to OHT. abstract_id: PUBMED:32902101 Transcaval access for the emergency delivery of 5.0 liters per minute mechanical circulatory support in cardiogenic shock. Objectives: The purpose of this study was to describe the feasibility and early outcomes of transcaval access for delivery of emergency mechanical circulatory support (MCS) in cardiogenic shock. Background: Vascular access for implantation of MCS in patients with cardiogenic shock is often challenging due to peripheral arterial disease and vasoconstriction. Transcaval delivery of MCS may be an alternative. We describe a series of patients in whom we implanted an Impella 5.0 device, on-table without CT planning, through a percutaneous transcaval access route.
Methods: Ten patients with progressive or refractory cardiogenic shock underwent Impella 5.0 implantation via transcaval access. Demographic, clinical and procedural variables and in-hospital outcomes were collected. Results: All ten underwent emergency implantation of the 7 mm diameter Impella 5.0 device via transcaval access. Six were women, with a median age of 55.5 years (range, 29-69). Cardiogenic shock was attributed to idiopathic nonischemic cardiomyopathy (n = 4), myocarditis (n = 2), ischemic cardiomyopathy (n = 2), heart transplant rejection (n = 1), and unknown etiology (n = 1). Median duration of support was 92.1 hr (range, 21.2-165.4). Seven (70%) survived to device explant, with six (60%) surviving to access port closure and discharge. Among survivors, five recovered heart function and one received a destination therapy left ventricular assist device. Conclusions: Transcaval access is feasible for emergency nonsurgical implantation of the Impella 5.0 device in cardiogenic shock with small or diseased iliofemoral arteries. This allows early institution of higher-flow MCS than conventional femoral artery implantation of the 3.5 L Impella CP device, and enables a bridge-to-recovery or bridge-to-destination strategy. abstract_id: PUBMED:25674024 Temporary mechanical circulatory support: a review of the options, indications, and outcomes. Cardiogenic shock remains a challenging disease entity and is associated with significant morbidity and mortality. Temporary mechanical circulatory support (MCS) can be implemented in an acute setting to stabilize acutely ill patients with cardiomyopathy in a variety of clinical situations. Currently, several options exist for temporary MCS. We review the indications, contraindications, clinical applications, and evidence for a variety of temporary circulatory support options, including the intra-aortic balloon pump (IABP), extracorporeal membrane oxygenation (ECMO), CentriMag blood pump, and percutaneous ventricular assist devices (pVADs), specifically the TandemHeart and Impella. abstract_id: PUBMED:30854315 Temporary mechanical circulatory support for refractory heart failure: the German Heart Center Berlin experience. Background: Temporary mechanical circulatory support (MCS) offers a valuable option for treatment of refractory heart failure. We present our experience with selected MCS devices in cardiogenic shock of different etiologies. Methods: We retrospectively studied patients who were treated in our institution between 01/2016 and 07/2018. Patients receiving only veno-arterial extracorporeal membrane oxygenation (VA-ECMO) support were excluded. For left ventricular support, patients received an Impella; right ventricular support was provided using the Levitronix CentriMag. Results: Thirty-seven patients received an Impella left ventricular assist device (LVAD). Etiology was: acute on chronic ischemic cardiomyopathy (ICMP; n=12), acute myocardial infarction (AMI; n=11), dilated cardiomyopathy (DCMP; n=7) and toxic cardiomyopathy (TCMP; n=2). Two patients presented with postcardiotomy shock and acute myocarditis, respectively. In one case, Takotsubo cardiomyopathy was diagnosed. Impella was used solely in 28 patients (Impella group) with an in-hospital survival of 37%. In nine patients, Impella was used in combination with extracorporeal life support (ECLS) implantation (ECMELLA group); in-hospital survival was 33%. In the Impella group six patients recovered, six received a long-term VAD and 16 died on device.
In the ECMELLA group, one patient recovered, three received a long-term VAD and five died. The majority of CentriMag implantations as a right ventricular assist device (RVAD) were necessary after LVAD implantation (n=52); of these patients, 14 recovered, eight received long-term VAD and 30 died. The remaining 17 patients were supported by RVAD due to AMI (n=7), postcardiotomy (n=7), right heart failure after heart transplantation (n=2) and ICMP (n=1). Six of these patients recovered, two required long-term VAD and nine died. Conclusions: Survival after MCS implantation for left as well as right heart failure in cardiogenic shock remains low, but is superior to that of patients without mechanical support. Short-term MCS remains an option of choice if right, left or biventricular support is needed. abstract_id: PUBMED:34725740 Outcomes of severe peripartum cardiomyopathy and mechanical circulatory support: a case series. Background: We present three cases of severe peripartum cardiomyopathy (PPCM) that required mechanical circulatory support. Case Presentation: Case 1: A 33-year-old woman developed acute heart failure (AHF) after normal spontaneous delivery. An intra-aortic balloon pump (IABP) was inserted on postpartum day (PD) 10 for peripartum cardiomyopathy (PPCM) and was withdrawn on PD 30 after medical treatment including anti-prolactin drugs. Case 2: A 44-year-old woman developed AHF 1 month after vaginal delivery. Neither IABP nor extra-corporeal membrane oxygenation (ECMO) was effective, and a biventricular assist device was inserted. It was withdrawn on PD 85 after improvement of left ventricular ejection fraction (LVEF). Case 3: A 37-year-old woman was transferred with a diagnosis of PPCM. Cardiac function did not improve with IABP or ECMO, and a left ventricular assist device was implanted. It was withdrawn on PD 386 after recovery of LVEF. Conclusion: All three patients with PPCM recovered after mechanical circulatory support and resumed their social lives. abstract_id: PUBMED:26979140 A contemporary review of paediatric heart transplantation and mechanical circulatory support. Improvements in the care of children with cardiomyopathy, CHDs, and acquired heart disease have led to an increased number of children surviving with advanced heart failure. In addition, the advent of more durable mechanical circulatory support options in children has changed the outcome for many patients who otherwise would have succumbed while waiting for heart transplantation. As a result, more children with end-stage heart failure are being referred for heart transplantation, and there is increased demand for a limited donor organ supply. A review of important publications in recent years related to paediatric heart failure, transplantation, and mechanical circulatory support shows a trend towards pushing the limits of the current therapies to address the needs of this growing population. There have been a number of publications focussing on previously published risk factors perceived as barriers to successful heart transplantation, including elevated pulmonary vascular resistance, medication non-adherence, re-transplantation, transplantation of the failed Fontan patient, and transplantation in an infant or child bridged with mechanical circulatory support. This review will highlight some of these key articles from the last 3 years and describe recent advances in the understanding, diagnosis, and management of children with end-stage heart disease.
abstract_id: PUBMED:29168549 First Polish analysis of the treatment of advanced heart failure in children with the use of BerlinHeart EXCOR mechanical circulatory support. Background: The treatment of advanced heart failure (HF) in children and infants poses a serious management problem. Heart failure in that patient group is usually of congenital aetiology. The treatment schedules for paediatric patients are in most cases adapted from the guidelines for treatment of adults. Up to 2009, the treatment of that extremely difficult group of patients was limited to pharmacological therapy and occasional heart transplantations. Constantly increasing problems with recruiting donors, especially for the paediatric group, contribute to the fact that mechanical support with the use of ventricular assist devices is for many children the only chance of surviving the period of waiting for a heart donor. Aim: The aim of the study was to analyse the outcomes of circulatory support in Poland and to assess the advisability of this method for treatment of children with severe HF. Methods: This treatment is currently used for paediatric patients in three Polish centres. From December 28, 2009 to August 1, 2015, 27 implantations of the BerlinHeart EXCOR® mechanical circulatory support system were performed in children aged from one month to 16 years (10 patients below one year of age; 37%). Left ventricular assist devices were implanted in 21 patients, whereas the remaining children received biventricular support. The most common reason for using this method was HF developed in the course of cardiomyopathy. In one case, HF after Fontan operation was the indication. Results: The duration of the circulatory support period ranged from six to 1215 days. Support was followed by successful heart transplantation in 10 (37%) patients; in five (18.1%) it resulted in regeneration of the heart, enabling explantation of the device; three children are still waiting for transplantation. Nine (33%) children died during the therapy because of thromboembolic complications. Conclusions: As follows from our data, circulatory support utilising the BerlinHeart EXCOR® system is an effective and promising method used as a bridge to cardiac transplantation, or for regeneration of the myocardium in paediatric patients. In the group of the youngest and the most difficult patients, the method requires close cooperation of the medical and nursing personnel. abstract_id: PUBMED:37536790 Mechanical Circulatory Support Therapy in the Cardiac Intensive Care Unit. Mechanical circulatory support (MCS) includes temporary and durable mechanical devices used for two sets of indications: 1. acute heart failure (HF) secondary to sepsis, myocardial infarction, or pulmonary emboli, and 2. chronic end-stage HF secondary to worsening cardiomyopathy despite guideline-driven medical treatment. This article aims to aid cardiac intensive care unit (ICU) nurses in understanding the history of MCS therapy, the care of the MCS patient in the cardiac ICU, the critical and collaborative role of transplant teams with MCS therapy, educational needs for a successful discharge, and implications for education and shared decision-making when placing these devices. abstract_id: PUBMED:26905181 Paediatric mechanical circulatory support with Berlin Heart EXCOR: development and outcome of a 23-year experience.
This paper reviews the development and establishment of the Berlin Heart EXCOR® (BHE®) as a paediatric mechanical circulatory support and reports our entire experience with regard to indications, timing of implantation and explantation and outcome. The Berlin group reported the first successful paediatric bridge to transplantation using a pulsatile pneumatic paracorporeal biventricular assist device, the BHE®, in 1990 in an 8-year-old boy with end-stage heart failure and coarctation of the aorta. This experience prompted them to develop miniaturized pump systems for children through the company Berlin Heart Mediproduct GmbH. The development and production of BHE® to support paediatric patients with heart failure then began. Between 1990 and 2013, the BHE® was implanted in 122 patients (median age 8.64 years, range 3 days to 17 years) with heart failure, who were inotrope-dependent or switched from extracorporeal membrane oxygenation support or had postcardiotomy low-output syndrome. Thirty-five patients were <1 year old (median 125 days). The aetiology of heart failure included cardiomyopathy in 56 (median age 9.14 years), fulminant myocarditis in 17 (median age 8.2 years), end-stage congenital heart disease in 18 (median age 6.4 years), postcardiotomy heart failure (after correction of congenital heart disease) in 28 (median age 9.6 years) and transplant graft failure in 3 (median age 12.5 years). The overall median duration of implantation was 63.6 (range 1-841) days. Fifty-six children eventually underwent orthotopic heart transplantation. Eighteen patients had myocardial recovery and were weaned successfully. They had entirely normal cardiac function 4-10 years after surgery. At the time of this report, five patients were still on support, with a duration of 354-369 days. Forty-three patients died on the system from loss of peripheral circulatory resistance, multiorgan damage, sepsis or haemorrhagic or thrombotic complications. Re-exploration because of bleeding was necessary in 22 patients. Pump exchange because of thrombus formation in the valves was necessary 35 times. With the introduction of a modified anticoagulation regimen in 2000, the pump exchange rate has decreased. The BHE® can reliably support the circulation at any age for long periods with good results. It is now an established treatment for children with heart failure of any aetiology. Answer: The first series of mechanical circulatory support in non-compaction cardiomyopathy (LVNC) suggests that left ventricular assist device (LVAD) implantation can be a safe alternative for patients with LVNC who develop severe congestive heart failure refractory to medical therapy. In a retrospective analysis of 5 patients with LVNC who underwent LVAD implantation, a 100% 30-day survival rate was observed without major intrahospital complications. However, during follow-up, a high incidence of pump thrombosis was noted, necessitating pump replacement in 3 patients. No arrhythmias were detected during follow-up as assessed by ICD interrogation. The study concludes that while LVAD implantation in LVNC can be performed with low intrahospital complication rates, the high incidence of pump thrombosis during follow-up indicates a possible thromboembolic predisposition associated with the underlying LVNC. Therefore, careful management of anticoagulation is critical in these patients (PUBMED:26126056).
Instruction: Does the addition of recombinant luteinizing hormone to progesterone for luteal supplementation improve IVF outcomes in high-responder patients? Abstracts: abstract_id: PUBMED:28062551 New stimulation regimens: endogenous and exogenous progesterone use to block the LH surge during ovarian stimulation for IVF. Background: The advent of embryo and oocyte vitrification today gives reproductive specialists an opportunity to consider new strategies for improving the practice and results of IVF attempts. As the freezing of entire cohorts does not compromise, and may even improve, the results of IVF attempts, it is possible to break away from the standard sequence of stimulation-retrieval-transfer. The constraints associated with ovarian stimulation in relation to the potential harmful effects of the hormonal environment on endometrial receptivity can be avoided. Objective And Rationale: This review will look at the new stimulation protocols where progesterone is used to block the LH surge. Thanks to 'freeze all' strategies, the increase in progesterone may actually no longer be a cause for concern. There are two ways of using progesterone, whether it be endogenous, as in luteal phase stimulation, or exogenous, as in the use of progesterone in the follicular phase, i.e. progestin-primed ovarian stimulation. Search Methods: A literature search was carried out (until September 2016) on MEDLINE. The following text words were utilized to generate the list of citations: progestin primed ovarian stimulation, luteal phase stimulation, luteal stimulation, duostim, double stimulation, random start. Articles and their references were then examined in order to identify other potential studies. All of the articles are reported in this review. Outcomes: The use of progesterone during ovarian stimulation is effective in blocking the LH surge, whether endogenous or exogenous, and it does not affect the number of oocytes collected or the quality of the embryos obtained. Its main constraint is that it requires total freezing and delayed transfer. A variety of stimulation protocols can be derived from these two methods, and their implications are discussed, from fertility preservation to ovarian response profiles to organization for the patients and clinics. These new regimens enable more flexibility and are of emerging interest in daily practice. However, their medical and economic significance remains to be demonstrated. Wider Implications: The use of luteal phase or follicular phase protocols with progestins could rapidly develop in the context of oocyte donation and fertility preservation not related to oncology. Their place could develop even more in the general population of patients in IVF programs. The strategy of total freezing continues to develop, thanks to technical improvements, in particular vitrification and PGS on blastocysts, and thanks to studies showing improvements in embryo implantation when the transfer takes place far removed from the hormonal changes caused by ovarian stimulation. abstract_id: PUBMED:24517720 Luteal phase support with estrogen in addition to progesterone increases pregnancy rates in in vitro fertilization cycles with poor response to gonadotropins. In this study, our objective was to determine the effect of adding estradiol hemihydrate (E2) to progestin (P) for luteal phase support on pregnancy outcome in in vitro fertilization (IVF) cycles with poor response to gonadotropins.
Ninety-five women with poor ovarian response who underwent controlled ovarian hyperstimulation (COH) with gonadotropin releasing hormone (GnRH) agonist or GnRH antagonist plus gonadotropin protocol for IVF were prospectively randomized into three groups of luteal phase support after oocyte retrieval. Group 1 (n = 33) received only intravaginal progesterone gel (Crinone 8% gel). Group 2 (n = 27) and Group 3 (n = 35) received intravaginal progesterone plus oral 2 and 6 mg estradiol hemihydrate, respectively. Main outcome measures were overall and clinical pregnancy rates (PRs) per patient. Serum LH, E2 and P levels at 7th and 14th days of luteal phase were also measured. Overall and clinical PRs were significantly higher in the 2 mg E2 + P group than in the P-only group (44% versus 18% and 37% versus 12.1%, respectively). There were no statistically significant differences between 6 mg E2 + P versus P-only and 2 mg E2 + P versus 6 mg E2 + P groups regarding PRs. The addition of 2 mg/day E2 to P for luteal support significantly increases overall and clinical PRs in cycles with poor response to gonadotropins after IVF. abstract_id: PUBMED:34904585 Dydrogesterone supplementation in addition to routine micronized progesterone administration for luteal support in cycles triggered with lone GnRH agonist results in an acceptable pregnancy rate and avoids the need to freeze embryos. Background: Ovarian hyperstimulation syndrome (OHSS) is reduced when using an antagonist cycle with a gonadotrophin releasing hormone (GnRH) agonist trigger before ovum pick-up. This trigger induces short luteinizing hormone (LH) and follicle stimulating hormone (FSH) peaks, resulting in an inadequate luteal phase and a reduced implantation rate. We assessed whether the luteal phase can be rescued by supplementing with oral dydrogesterone (duphaston) in antagonist cycles after a lone GnRH agonist trigger. Methods: A retrospective cohort study. The study group (N=123) included women who underwent IVF. Patients received a GnRH-antagonist with a lone GnRH-agonist trigger due to imminent OHSS. The control group (N=374) included patients who underwent a standard antagonist protocol with a dual trigger of a GnRH-agonist and human chorionic gonadotrophin (hCG). All the patients were treated with micronized progesterone (utrogestan) for luteal phase support. Study patients were given duphaston in addition. Results: The fertilization rate was comparable between the two groups. The mean number of embryos transferred, the clinical pregnancy rate and the take-home baby rate were comparable between groups (1.5±0.6 vs. 1.5±0.5 and 46.3% vs. 41.2%, and 66.7% vs. 87.7%, respectively). No OHSS event was reported in either group. Conclusions: This study was the first to evaluate outcomes of duphaston supplementation for luteal support in an antagonist cycle with lone GnRH agonist trigger. The functionality of the luteal phase of those cycles could be restored by adding duphaston. This approach was found to be safe and prevented the need to postpone embryo transfer in case of pending OHSS. abstract_id: PUBMED:3932570 Support of the luteal phase in in vitro fertilization programs: results of a controlled trial with intramuscular Proluton. There is disagreement among in vitro fertilization (IVF) programs as to the need to administer exogenous progesterone to support the luteal phase of patients undergoing embryo transfer after IVF.
We examined the effect on pregnancy rates of Proluton, 50-mg daily injections given on days 7-16 following oocyte recovery, in 186 women undergoing IVF treatment using a combined stimulation regime of clomiphene and human menopausal gonadotropin (hMG). One group was deliberately selected for treatment on the possible criterion of luteal-phase deficiency and two other groups were randomly selected into a treatment and a control group. No effect on pregnancy rate was noted in any of these groups. These results indicate that extension of the luteal phase with exogenous progesterone is unlikely to have a significant effect on increasing the pregnancy rate in IVF programs using similar treatment regimes. abstract_id: PUBMED:3203757 Effects of progesterone on luteinizing hormone release and estradiol/progesterone ratio in the luteal phase of women superovulated for in vitro fertilization and embryo transfer. The present study extends the information on the effects of progesterone (P) on the luteinizing hormone (LH) release, and estradiol (E2)/P ratio in the luteal phase in women superovulated for in vitro fertilization and embryo transfer (IVF-ET). Two groups of 34 patients were induced for ovulation with clomiphene citrate and human menopausal gonadotropins. One group was given 25 mg P (Gesterol, Steris Laboratory Inc., Phoenix, AZ) at the time of, or 4 to 6 hours before human chorionic gonadotropin (hCG) administration and another group served as control (no Gesterol). Of the 34 patients in the Gesterol group, 10 had Gesterol 4 to 6 hours before the administration of hCG, 13 at the time of hCG, and 11 after the spontaneous LH surge. Administration of Gesterol 4 to 6 hours before hCG significantly increased the LH values (19.0 +/- 10.3) compared with those who had Gesterol at the time of hCG (6.8 +/- 2.8, P = 0.0006). A single dose of Gesterol (25 mg P) significantly reduced the E2/P ratio during the luteal phase (P = 0.0005). However, the outcome of IVF-ET was the same in the Gesterol and no-Gesterol groups. It is concluded that a significant increase in P triggers an LH surge and a single dose of Gesterol decreases E2/P ratio in the luteal phase of women after ovarian stimulation. The biochemical mechanisms are unclear. abstract_id: PUBMED:9626129 Luteal-phase estradiol relates to symptom severity in patients with premenstrual syndrome. Premenstrual syndrome (PMS) is characterized by distressing somatic and behavioral symptoms that develop after ovulation, reach a maximum during the premenstrual days, and disappear within 4 days after the onset of menstruation. Corpus luteum formation is necessary for the presence of symptoms, but the role of luteal hormones is unclear. The aim of this work was to investigate the relationship between sex hormone serum concentrations and premenstrual symptom severity in patients with PMS. Mental and physical symptoms were marked on a validated visual analog scale by 30 PMS patients every evening. Daily blood samples were taken in the luteal phase and in most of the follicular phase. Estradiol, progesterone, FSH, and LH were analyzed. Symptom severity was calculated as the number of negative symptoms expressed per day and as summarized scores of negative ratings. Based on premenstrual hormone concentrations and using the median split method, patients were divided into groups with high and low hormone levels. The pattern of expressed symptoms and summarized scores during the menstrual cycle was similar for the 2 groups. 
High concentrations of luteal-phase estradiol and LH were related to the severity of negative premenstrual symptoms. abstract_id: PUBMED:11594558 Ovulation induction disrupts luteal phase function. Abnormalities in the luteal phase have been detected in virtually all the stimulation protocols used in in vitro fertilization, on both the hormonal and endometrial levels. Supraphysiological follicular or luteal sex steroid serum concentrations, altered estradiol:progesterone (E2/P) ratio, and disturbed luteinizing hormone pituitary secretion leading to corpus luteum insufficiency or a direct drug effect have been postulated as the main etiologic factors. Luteinizing hormone supports corpus luteum function, and low LH levels have been described after human menopausal gonadotropin treatment, after gonadotropin-releasing hormone (GnRH)-agonist treatment, or after GnRH-antagonist treatment. These low luteal LH levels may lead to an insufficient corpus luteum function and consequently to a shortened luteal phase or to the low luteal progesterone concentrations frequently described after ovulation induction. A direct effect of the GnRH agonist or GnRH antagonist on human corpus luteum or on human endometrium and thus on endometrial receptivity cannot be excluded, as GnRH receptors have been described in both compartments. Endometrial histology has revealed a wide range of abnormalities during the various stimulation protocols. In GnRH-agonist cycles, mid-luteal biopsies have revealed increased glandulo-stromal dyssynchrony and delay in endometrial development, strong positivity of endometrial glands for progesterone receptors, decreased alphavbeta3-integrin subunit expression, and earlier appearance of surface epithelium pinopodes. These factors suggest a shift forwards of the implantation window. Progesterone supplementation improves endometrial histology, and its necessity has been well established, at least in cycles using GnRH agonists. abstract_id: PUBMED:17938943 Oral progestogen versus intramuscular progesterone for luteal support after assisted reproductive technology treatment: a prospective randomized study. Objectives: To evaluate the efficacy of oral progestogen, chlormadinone acetate, and intramuscular (IM) progesterone for luteal support in patients undergoing assisted reproductive technology (ART) treatment who were treated with a gonadotropin-releasing hormone agonist (GnRHa). Methods: This was a prospective randomized study of 40 patients with normal and high response (serum estradiol > 2,000 pg/ml) in GnRHa down-regulation. Patients were randomized to receive either oral chlormadinone acetate or IM progesterone. The outcomes of ART treatment, including pregnancy and embryo implantation rates, were analyzed. Results: There were no significant differences in the clinical pregnancy rates (25 vs. 20%) and in the implantation rates (12.7 vs. 9.1%) of patients who received IM progesterone and oral chlormadinone acetate. Endometrial thickness was also comparable between oral chlormadinone acetate and IM progesterone. Conclusion: The oral progestogen chlormadinone acetate showed pregnancy and live birth rates comparable with IM progesterone as luteal support for the high responders. The optimal methods for luteal support may be dependent on responses to stimulation with gonadotropin, although it is not concluded that oral chlormadinone acetate is recommended as an option for luteal support in high responders.
abstract_id: PUBMED:37727454 Impact of growth hormone on IVF/ICSI outcomes and endometrial receptivity of patients undergoing GnRH antagonist protocol with fresh embryo transfer: a pilot study. Introduction: Gonadotropin-releasing hormone antagonist (GnRH-ant) protocol is widely used in the world for controlled ovarian hyperstimulation (COH). However, previous studies have shown that pregnancy outcomes of fresh embryo transfer with GnRH-ant protocol are not ideal. Current studies have demonstrated the value of growth hormone (GH) in improving the pregnancy outcome of elderly women and patients with diminished ovarian reserve, but no prospective studies have confirmed the efficacy of GH in fresh embryo transfer with GnRH-ant protocol, and its potential mechanism is still unclear. This study intends to evaluate the impact of GH on IVF/ICSI outcomes and endometrial receptivity of patients undergoing GnRH-ant protocol with fresh embryo transfer, and preliminarily explore the possible mechanism. Methods: We designed a randomized controlled trial of 120 infertile patients with normal ovarian response (NOR) who will undergo IVF/ICSI from April 2023 to April 2025, at Department of Reproductive Medicine, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology. The patients will be divided into the depot gonadotropin-releasing hormone agonist (GnRH-a) protocol group, GnRH-ant protocol control group, and GnRH-ant protocol plus GH intervention group at a ratio of 1:1:1 by block randomization design. Patients will be followed on enrollment day, trigger day, embryo transfer day, 7 days after oocytes pick-up, 15 days after embryo transfer, 28 days after embryo transfer, and 12 weeks of gestation. The primary outcome is the ongoing pregnancy rate. Secondary outcomes include the gonadotropin dosage, duration of COH, endometrial thickness and pattern, luteinizing hormone, estradiol, progesterone level on trigger day, numbers of retrieved oocytes, high-quality embryo rate, biochemical pregnancy rate, clinical pregnancy rate, implantation rate, ectopic pregnancy rate, early miscarriage rate, multiple pregnancy rate and incidence of moderate and severe ovarian hyperstimulation syndrome. The endometrium of certain patients will be collected and tested for endometrial receptivity. Ethics And Dissemination: The study was approved by the Ethics Committee of Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology [approval number: TJ-IRB20230236; approval date: February 10, 2023]. The research results will be presented at scientific/medical conferences and published in academic journals. Clinical Trial Registration: Chinese Clinical Trial Registry; identifier: ChiCTR2300069397. abstract_id: PUBMED:36304702 Luteal Phase in Assisted Reproductive Technology. Luteal phase (LP) is the period of time beginning shortly after ovulation and ending either with luteolysis, shortly before menstrual bleeding, or with the establishment of pregnancy. During the LP, the corpus luteum (CL) secretes progesterone and some other hormones that are essential to prepare the uterus for implantation and further development of the embryo, the function known as uterine receptivity. LP deficiency (LPD) can occur when the secretory activity of the CL is deficient, but also in cases of normal CL function, where it is caused by a defective endometrial response to normal levels of progesterone. LPD is particularly frequent in treatments using assisted reproductive technology (ART). 
Controlled ovarian stimulation usually aims to obtain the highest number possible of good-quality oocytes and requires the use of gonadotropin-releasing hormone (GnRH) analogs, to prevent premature ovulation, as well as an ovulation trigger to achieve timed final oocyte maturation. Altogether, these treatments suppress pituitary secretion of luteinizing hormone (LH), required for the formation and early activity of the CL. In addition to problems of endometrial receptivity for embryos, LPD also leads to dysfunction of the local uterine immune system, with an increased risk of embryo rejection, abnormally high uterine contractility, and restriction of uterine blood flow. There are two alternatives for LPD prevention: a direct administration of exogenous progesterone to restore the physiological progesterone serum concentration independently of the CL function, on the one hand, and treatments aimed at stimulating CL activity so as to increase endogenous progesterone production, on the other hand. In case of pregnancy, some kind of LP support is often needed until the luteal-placental shift occurs. If LPD is caused by defective response of the endometrium and uterine immune cells to normal concentrations of progesterone, a still poorly defined condition, symptomatic treatments are the only solution currently available. Answer: The provided abstracts do not specifically address the question of whether the addition of recombinant luteinizing hormone (LH) to progesterone for luteal supplementation improves in vitro fertilization (IVF) outcomes in high responder patients. The abstracts discuss various aspects of luteal phase support and the use of different hormones and protocols to improve IVF outcomes, but none of them mention the use of recombinant LH in combination with progesterone for luteal supplementation in high responder patients. For instance, one study examines the effect of adding estradiol to progesterone for luteal phase support in IVF cycles with poor response to gonadotropins, finding that the addition of estradiol improved pregnancy rates (PUBMED:24517720). Another study looks at the supplementation of dydrogesterone in addition to routine micronized progesterone for luteal support, which resulted in an acceptable pregnancy rate without the need to freeze embryos (PUBMED:34904585). Other abstracts discuss the use of different forms of progesterone for luteal support (PUBMED:3932570, PUBMED:3203757, PUBMED:17938943), the impact of growth hormone on IVF outcomes (PUBMED:37727454), and the general challenges associated with luteal phase support in ART treatments (PUBMED:36304702). However, none of these studies specifically mention the combination of recombinant LH with progesterone for luteal phase support in high responder patients. Therefore, based on the provided abstracts, it is not possible to determine whether the addition of recombinant LH to progesterone for luteal supplementation would improve IVF outcomes in this particular patient group. Additional research and studies would be needed to answer this question.
Instruction: Do racial and ethnic differences in contraceptive attitudes and knowledge explain disparities in method use? Abstracts: abstract_id: PUBMED:22958659 Do racial and ethnic differences in contraceptive attitudes and knowledge explain disparities in method use? Context: Sustained efforts have not attenuated racial and ethnic disparities in unintended pregnancy and effective contraceptive use in the United States. The roles of attitudes toward contraception, pregnancy and fertility remain relatively unexplored. Methods: Knowledge of contraceptive methods and attitudes about contraception, pregnancy, childbearing and fertility were assessed among 602 unmarried women aged 18-29 at risk for unintended pregnancy who participated in the 2009 National Survey of Reproductive and Contraceptive Knowledge. The contribution of attitudes to racial and ethnic disparities in effective method use was assessed via mediation analysis, using a series of regression models. Results: Blacks and Latinas were more likely than whites to believe that the government encourages contraceptive use to limit minority populations (odds ratio, 2.5 for each). Compared with white women, Latinas held more favorable attitudes toward pregnancy (2.5) and childbearing (coefficient, 0.3) and were more fatalistic about the timing of pregnancy (odds ratio, 2.3); blacks were more fatalistic about life in general (2.0). Only one attitude, skepticism that the government ensures contraceptive safety, was associated with contraceptive use (0.7), but this belief did not differ by race or ethnicity. Although blacks and Latinas used less effective methods than whites (0.3 and 0.4, respectively), attitudes did not explain disparities. Lower contraceptive knowledge partially explained Latinas' use of less effective methods. Conclusions: Providing basic information about effective methods might help to decrease ethnic disparities in use. Research should examine other variables that might account for these disparities, including health system characteristics and provider behavior. abstract_id: PUBMED:23697702 Racial and ethnic differences in men's knowledge and attitudes about contraception. Background: Little is known about racial/ethnic differences in men's contraceptive knowledge and attitudes. Study Design: We used multivariable logistic regression to examine racial/ethnic differences in contraceptive knowledge and attitudes among 903 men aged 18-29 in the 2009 National Survey of Reproductive and Contraceptive Knowledge. Results: Black and Hispanic men were less likely than Whites to have heard of most contraceptive methods, including female and male sterilization, and also had lower knowledge about hormonal and long-acting reversible methods. They were less likely to know that pills are ineffective when 2-3 pills are missed [Blacks: adjusted odds ratio (aOR)=0.42; Hispanics: aOR=0.53] and that fertility was not delayed after stopping the pill (Blacks: aOR=0.52; Hispanics: aOR=0.27). Hispanics were less likely to know that nulliparous women can use the intrauterine device (aOR=0.47). Condom knowledge was similar by race/ethnicity, but Blacks were less likely to view condoms as a hassle than Whites (aOR=0.46). Conclusions: Efforts to educate men, especially men of color, about contraceptive methods are needed. abstract_id: PUBMED:26738619 Racial and ethnic differences in women's preferences for features of contraceptive methods. 
Objectives: To understand women's preferences for specific features of contraceptive methods, the extent to which features of existing methods match women's preferences and whether this match differs by racial and ethnic subgroups. Study Design: Using data from 1783 women in family planning and abortion clinics across the United States, we performed analyses of racial and ethnic differences in contraceptive features reported to be "extremely important" by participants. We explored how preferences vary for more and less effective contraceptive methods. Results: In multivariate analysis, non-Hispanic Black, Latinas and Asian Pacific Islander women were more likely to report the following features as extremely important compared to non-Hispanic Whites (p<0.05): being able to stop using the method at any time, using a method only with intercourse and the method not changing her menstrual periods. Non-Hispanic Black and Latina women were statistically more likely to report that protection against sexually transmitted diseases, having control over when and whether to use the method and being able to become pregnant after stopping use were extremely important. The contraceptive feature preferences of racial and ethnic minority women in our study had a relatively lower match with high efficacy methods and higher match for low efficacy methods compared to White women (p<0.05). Conclusions: High rates of unintended pregnancy among minority women may be due in part to differences in contraceptive feature preferences and discrepancy between their preferences and the features of currently available highly effective methods. Implications: In the context of disparities in rates of unintended pregnancy by racial and ethnic group, this variation in preferences for contraceptive features by race/ethnicity may explain differences in contraceptive use and can inform the development of more acceptable methods of contraception. abstract_id: PUBMED:24020775 Racial and ethnic differences in U.S. women's choice of reversible contraceptives, 1995-2010. Context: In the United States, unintended pregnancies disproportionately affect minority populations. Persistent disparities in contraceptive use between black and Hispanic women and white women have been identified, but it is unclear whether racial and ethnic differences in use of the most effective methods have changed. Methods: Data on 4,727 women from the 1995 National Survey of Family Growth and 5,775 women from the 2006-2010 cycle were used to examine the association between race and ethnicity and women's choice of reversible contraceptives according to level of method effectiveness. Stepwise multinomial logistic regressions were used to identify changes in this association between cycles. Analyses controlled for demographic, socioeconomic, family, religious, behavioral and geographic characteristics. Results: The proportion of women using the most effective reversible contraceptive methods increased from 46% in 1995 to 53% in 2006-2010. In 1995, black and Hispanic women's use of the most effective reversible contraceptives did not differ from that of white women. By 2006-2010, however, black women were substantially less likely than white women to use highly effective reversible contraceptive methods rather than no method (relative risk ratio, 0.6).
An analysis that combined the two data sets and included a term for the interaction between survey year and race and ethnicity found that relative to white women, black women were less likely in 2006-2010 than in 1995 to use more effective methods rather than no method (0.6). Conclusions: Further research is needed to identify factors that may be causing racial and ethnic disparities in contraceptive decisions to widen. abstract_id: PUBMED:28322769 Racial and ethnic disparities in contraceptive knowledge among women veterans in the ECUUN study. Objective: To assess whether racial/ethnic disparities in contraceptive knowledge observed in the general US population are also seen among women Veterans served by the Veterans Affairs (VA) healthcare system. Study Design: We analyzed data from a national telephone survey of 2302 women Veterans aged 18-44 who had received care within VA in the prior 12 months. Twenty survey items assessed women's knowledge about various contraceptive methods. Multivariable logistic regression was used to examine racial/ethnic variation in contraceptive knowledge items, adjusting for age, marital status, education, income, parity, and branch of military service. Results: Contraceptive knowledge was low among all participants, but black and Hispanic women had lower knowledge scores than whites in almost all knowledge domains. Compared to white women, black women were significantly less likely to answer correctly 15 of the 20 knowledge items, with the greatest adjusted difference observed in the item assessing knowledge about the reversibility of tubal sterilization (adjusted percentage point difference (PPD): -23.0; 95% CI: -27.8, -18.3). Compared to white women, Hispanic women were significantly less likely to answer correctly 11 of the 20 knowledge items, with the greatest adjusted difference also in the item assessing tubal sterilization reversibility (PPD: -13.1; 95% CI: -19.5, -6.6). Conclusion: Contraceptive knowledge among women Veterans served by VA is suboptimal, especially among racial/ethnic minority women. Improving women's knowledge about important aspects of available contraceptive methods may help women better select and effectively use contraception. Implications: Providers in the VA healthcare system should assess and address contraceptive knowledge gaps as part of high-quality, patient-centered reproductive health care. abstract_id: PUBMED:35671011 Racial and ethnic disparities in access to gynecologic care. Purpose Of Review: Despite efforts to minimize patient barriers to equitable care, health disparities persist in gynecology. This paper seeks to highlight racial and ethnic disparities in gynecologic care as represented by recent literature. Recent Findings: Disparities exist among many areas including preventive screenings, vaccination rates, contraception use, infertility, and oncologic care. These can be identified at the patient, physician, and institutional levels. Summary: As we identify these social disparities in healthcare, we gain valuable knowledge of where our efforts are lacking and where we can further improve the health of women. Future research should focus on identifying and combating such disparities with measurable changes in health outcomes. abstract_id: PUBMED:21884385 Racial and ethnic disparities in contraceptive method choice in California. Context: Unintended pregnancy, an important public health issue, disproportionately affects minority populations. 
Yet, the independent associations of race, ethnicity and other characteristics with contraceptive choice have not been well studied. Methods: Racial and ethnic disparities in contraceptive use among 3,277 women aged 18-44 and at risk for unintended pregnancy were assessed using 2006-2008 data from the California Women's Health Survey. Sequential logistic regression analyses were used to examine the independent and cumulative associations of racial, ethnic, demographic and socioeconomic characteristics with method choice. Results: Differences in contraceptive use persisted in analyses controlling for demographic and socioeconomic characteristics. Blacks and foreign-born Asians were less likely than whites to use high-efficacy reversible methods, that is, hormonals or IUDs (odds ratio, 0.5 for each). No differences by race or ethnicity were found specifically for IUD use in the full model. Blacks and U.S.-born Hispanics were more likely than whites to choose female sterilization (1.9 and 1.7, respectively), while foreign-born Asians had reduced odds of such use (0.4). Finally, blacks and foreign-born Asians were less likely than whites to rely on male sterilization (0.3 and 0.1, respectively). Conclusions: Socioeconomic factors did not explain the disparities in method choice among racial and ethnic groups. Intervention programs that focus on improving contraceptive choice among black and, particularly, Asian populations need to be developed, as such programs have the potential to reduce the number of unintended pregnancies that occur among these high-risk groups. abstract_id: PUBMED:30994399 Racial and Ethnic Disparities in Desire for Reversal of Sterilization Among U.S. Women. Purpose: Racial and ethnic disparities in rates of female sterilization, a prominent method of contraception, have been consistently observed for decades. Such disparities are also evident in subsequent desire for reversal of the procedure. Additional work is needed to better understand these patterns, particularly given the historical context of coercive sterilization patterns in minority and low-income women. Materials and Methods: Two cycles of the National Survey of Family Growth data are pooled (2011-2013 and 2006-2010) and used to estimate odds ratios (ORs) for race and ethnicity, controlling for payment method, age at sterilization, number of long-term partners, and other known covariates. Results: After adjusting for other factors, the odds of desire for reversal were 70% higher (OR 1.70, confidence interval [95% CI] 1.26-2.29) in non-Hispanic (NH) Black and 54% (OR 1.54, 95% CI 1.14-2.08) in Hispanic women compared to their NH White counterparts. In addition, the likelihood of desire for reversal was substantially increased with lower age at sterilization, a higher number of partners, and lower education. Conclusions: Robust findings of desire for reversal among racial and ethnic minorities, taken together with increased desire for reversal on the basis of specific personal characteristics, merit attention to the possibility that disproportionate outcomes reflect a lack of access to desired contraception and an inability to achieve desired fertility goals in marginalized populations. abstract_id: PUBMED:34058223 Racial and ethnic differences in family planning telehealth use during the onset of the COVID-19 response in Arkansas, Kansas, Missouri, and Oklahoma. Objectives: To explore racial/ethnic disparities in family planning telehealth use.
Study Design: We analyzed telehealth and in-clinic visits (n = 3142) from ten family planning clinics (April 1-July 31, 2020) by race/ethnicity and month. Results: Telehealth comprised 1257/3142 (40.0%) of overall visits. Telehealth was used by 242/765 (31.6%) of Black/African American and 31/106 (29.2%) multiracial patients. Patients with unknown (162/295, 54.9%), White (771/1870, 41.2%), and other (51/106, 48.1%) identities comprised the majority of telehealth visits. Conclusions: Our study found differences in telehealth use during the COVID-19 pandemic response. Implications: Understanding barriers and facilitators to telehealth is critical to reducing disparities in access. abstract_id: PUBMED:23565127 Race-Ethnic Differences in Sexual Health Knowledge. Despite extensive research examining the correlates of unintended fertility, it remains a puzzle as to why racial and ethnic minorities are more likely to experience an unintended birth than non-Hispanic whites. This paper focuses on sexual literacy, a potential precursor of unintended fertility. Analyses use a unique dataset of unmarried young adults aged 18-29, the 2009 Survey of Unmarried Young Adults' Contraceptive Knowledge and Practices, to examine beliefs regarding pregnancy risks, pregnancy fatalism, and contraceptive side effects. At the bivariate level, foreign-born Hispanics hold more erroneous beliefs about the risk of pregnancy than other groups, and non-Hispanic blacks are more likely to believe in contraceptive side effects than non-Hispanic whites. Both foreign-born Hispanics and non-Hispanic blacks are more likely than non-Hispanic whites to hold a fatalistic view towards pregnancy. Race-ethnic differences are attenuated for pregnancy misperceptions and fatalism in multivariate models controlling for sources of health information, sexual and fertility experiences, and sociodemographic characteristics. However, non-Hispanic blacks remain more likely than non-Hispanic whites to believe there is a high chance of reduced sexual desire and serious health consequences when using hormonal contraceptives. These differences may contribute to race-ethnic variation in contraceptive use and, ultimately, unintended fertility. Answer: Racial and ethnic differences in contraceptive attitudes and knowledge do appear to contribute to disparities in contraceptive method use, but they do not fully explain these disparities. Studies have found that Black and Latina women are more likely than white women to hold certain beliefs about contraception, such as the idea that the government encourages contraceptive use to limit minority populations, and they may have more favorable attitudes toward pregnancy and childbearing. They also may be more fatalistic about the timing of pregnancy (PUBMED:22958659). However, these attitudes did not fully account for the disparities in effective contraceptive method use observed between these groups. Additionally, Black and Hispanic men have been found to have lower knowledge about various contraceptive methods compared to white men, which could influence contraceptive practices (PUBMED:23697702). Women's preferences for specific features of contraceptive methods also vary by race and ethnicity, with racial and ethnic minority women in one study reporting a lower match with high efficacy methods and a higher match for low efficacy methods compared to white women (PUBMED:26738619). 
Over time, racial and ethnic disparities in the use of the most effective reversible contraceptives have widened, with Black women becoming less likely than white women to use these methods (PUBMED:24020775). Similarly, Black and Hispanic women veterans have been shown to have lower contraceptive knowledge than white women veterans (PUBMED:28322769). These disparities in knowledge and attitudes are compounded by broader issues of access to gynecologic care, which also exhibit racial and ethnic disparities (PUBMED:35671011). Even when controlling for socioeconomic factors, disparities in contraceptive method choice among racial and ethnic groups persist (PUBMED:21884385). Furthermore, racial and ethnic disparities have been observed in the desire for reversal of sterilization, with non-Hispanic Black and Hispanic women more likely to express this desire than their non-Hispanic white counterparts, suggesting possible issues with access to desired contraception and the ability to achieve fertility goals (PUBMED:30994399). The onset of the COVID-19 pandemic also highlighted differences in telehealth use for family planning services, with Black/African American and multiracial patients using telehealth less than those with unknown, White, and other identities (PUBMED:34058223).
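Aside on the statistics quoted above: the abstracts and the answer repeatedly cite odds ratios and adjusted odds ratios (e.g., odds ratio 2.5, aOR 0.42) to express how much more or less likely an outcome is in one group than in another. The short Python sketch below shows how an unadjusted odds ratio and its approximate 95% confidence interval follow from a simple 2x2 table. The helper name and the counts used here are purely hypothetical illustrations, not data from any of the cited studies, and the adjusted estimates reported in the abstracts additionally control for covariates through multivariable logistic regression.

    import math

    def odds_ratio(a, b, c, d):
        """Unadjusted odds ratio and Wald 95% CI for a 2x2 table:
        a/b = group 1 with/without the outcome, c/d = group 2 with/without."""
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
        low = math.exp(math.log(or_) - 1.96 * se_log)
        high = math.exp(math.log(or_) + 1.96 * se_log)
        return or_, (low, high)

    # Hypothetical counts: 40 of 100 women in group 1 hold a given belief vs 20 of 100 in group 2
    or_value, ci = odds_ratio(40, 60, 20, 80)
    print(f"OR = {or_value:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")  # OR = 2.67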
Instruction: A comparison of the diagnostic performances of visceral organ-targeted versus spine-targeted protocols for the evaluation of spinal fractures using sixteen-channel multidetector row computed tomography: is additional spine-targeted computed tomography necessary to evaluate thoracolumbar spinal fractures in blunt trauma victims? Abstracts: abstract_id: PUBMED:20699755 A comparison of the diagnostic performances of visceral organ-targeted versus spine-targeted protocols for the evaluation of spinal fractures using sixteen-channel multidetector row computed tomography: is additional spine-targeted computed tomography necessary to evaluate thoracolumbar spinal fractures in blunt trauma victims? Background: It remains to be determined whether spine-targeted computed tomography (thoracolumbar spine computed tomography [TLS-CT]) images and visceral organ-targeted CT (abdominopelvic [AP]-CT) images are comparable for the evaluation of thoracolumbar spinal fractures using 16-channel multidetector row CT. The elimination of an additional spine-targeted CT protocol would substantially reduce time, the storage burden, and potential patient radiation exposure. Methods: A total of 420 vertebrae in 72 consecutive patients who underwent AP-CT to assess blunt traumatic injury and an additional CT examination using a TLS-CT protocol to evaluate spinal fractures were retrospectively evaluated. The AP-CT set (set A, reconstructed using a wide display field of view [FOV] and a soft algorithm) and the TLS-CT set (set S, reconstructed using a narrow display FOV and a hard algorithm) were composed of axial plus reformatted sagittal or coronal images or both. Three radiologists independently reviewed all CT data retrospectively. Performances for detecting and typing fractures were compared by using areas under receiver operating characteristic curves and by determining concordance rates. Results: The overall areas under the curves for sets S and A for fracture detection were 0.996 and 0.995, respectively; no significant difference was found between the two sets. Concordance rates for typing performance also showed no statistical significance between the two sets for any of the three observers. Conclusion: Sixteen-channel multidetector row CT images reconstructed using a soft algorithm and a wide display FOV that cover the entire abdomen using a visceral organ-targeted protocol with 1.5-mm collimation are sufficient for the evaluation of spine fractures in trauma patients, given that multiplanar-reformatted images are provided. abstract_id: PUBMED:35714491 Imaging of thoracolumbar spine traumas. Spine trauma is an ominous event with a high morbidity, frequent mortality, and significant psychological, social, and financial consequences for patients, their relatives and society. On average three out of four spinal fractures involve the thoracolumbar spine and up to one-third are complicated by spinal cord injury. Spinal cord injuries (SCI) are a significant cause of disability in the US and in all western countries. Knowledge of the main principles of biomechanics is essential in understanding the patho-morphology of spinal injuries, and the evolution of the various classification systems. Classification systems should be able to create a common language between specialists in order to improve patients' prognosis, guide treatment and compare treatment outcomes. Imaging has always been crucial in the evaluation of the injury type and accompanied the development of different classification systems.
Thoracolumbar spine (TLS) trauma has a wide spectrum ranging from minor isolated fractures to highly unstable fracture-dislocations. Early classification systems were based on the analysis of the pattern of bony injuries on radiographs and CT. Traditionally, conventional radiographs are performed to confirm the clinical suspicion and to depict the level and type of bone injury. However, because of their inherent limitations, radiographs are often more helpful in proving the existence of a suspected bony spinal injury rather than excluding it. Multidetector computed tomography (MDCT) is superior in evaluating bone anatomy and, especially in polytrauma patients, it is the first line imaging modality. Morphological bone damage may be accurately shown and classified on CT. The most recent classifications also incorporate the integrity of soft tissue structures, which is considered equally relevant to spinal stability. Injuries to ligaments and discs can only be suspected on radiographs and conventional CT, although dual-energy CT is offering new insights on collagen mapping of damaged discs. Magnetic resonance imaging (MRI) may directly assess disc and ligamentous injuries, but also subtle osseous injuries, playing a complementary role in defining the whole spinal damage and an eventual instability. MRI is the only valid modality to assess the spinal cord (SC) and is indicated whenever a neurologic injury is suspected. Advanced MRI techniques, such as diffusion weighted imaging (DWI) and tractography, may provide further information regarding the integrity of the white matter which may improve outcome prognostication. Despite challenges in terms of costs, availability, accessibility and specificity, MRI and advanced MRI techniques are increasingly being used in spinal injuries. We present a review on TLS traumas, discussing the development of the different classification systems used in their evaluation, the role of imaging for their detection and the correlation to the patients' outcomes and treatment options. abstract_id: PUBMED:16394914 Are plain radiographs of the spine necessary during evaluation after blunt trauma? Accuracy of screening torso computed tomography in thoracic/lumbar spine fracture diagnosis. Background: Fracture of the thoracolumbar (TL) spine is reported in 8 to 15% of victims of blunt trauma. Current screening of these patients is done with conventional radiography. This may require repeated sets of films and take hours to days. It is imperative that these patients get timely, accurate evaluation to allow for treatment planning and early mobilization; alternatives to plain films would aid in this. The objective of this study is to determine whether the data obtained from admission chest/abdomen/pelvis (CAP) computed tomography (CT) scans after blunt trauma has utility in thoracolumbar spine evaluation. Methods: The records of all patients admitted to a Level I trauma center over a 2-month period who underwent CAP CT were reviewed for the presence of TL spine fracture, time to completion of plain film evaluation, and clinical course. Admission CT scans were reviewed by an attending radiologist who was blinded to any previously diagnosed spine fractures. The two tests were compared for diagnostic accuracy and their discriminatory ability was compared using receiver operating characteristic (ROC) curves. Significance was defined as p < 0.05.
Results: In all, 103 patients were admitted from January 1, 2003 to February 28, 2003 and underwent CAP CT scan as part of their initial trauma evaluation. Of these, 26 (25%) had thoracolumbar fractures. Seven (27%) thoracolumbar fractures were not seen on plain radiographs taken during the trauma evaluation. Average time until plain film completion in this group was 8 hours (range, 44 minutes to 38 hours). All 26 (100%) patients with fractures, however, were diagnosed on CT scan performed shortly after admission. Of the remaining 77 patients, two (2.6%) were falsely read as positive for fracture on CT. Sensitivity and specificity of CT scan for thoracolumbar fracture were excellent at 100% and 97%, respectively, with a negative predictive value of 100%. Plain radiographs were 73% sensitive, 100% specific, and had a negative predictive value of 92%. Area under the ROC curve for CT was 0.98, but for plain film was 0.86 (p < 0.02). Conclusion: Admission CAP CT obtained as part of the routine trauma evaluation in these high-risk patients is more sensitive than plain radiographs for evaluation of the TL spine after blunt trauma. In addition, CAP CT can be performed faster. Omission of plain radiographs will expedite accurate evaluation allowing earlier treatment and mobilization. abstract_id: PUBMED:22124639 Diagnostic value of tomography of the cervical spine in victims of blunt trauma. Objective: To assess the value of computed tomography in the diagnosis of cervical spine and spinal cord injuries in victims of blunt trauma. Methods: We reviewed the charts of blunt trauma victims from January 2006 to December 2008. We analyzed the following data: epidemiology, mechanism of trauma, transportation of victims to the hospital, intra-hospital care, indication criteria for CT, diagnosis, treatment and evolution of the victims. The victims were divided into two groups: Group I - without cervical spine injury, Group II - with cervical spine injury. Results: We gathered medical records from 3,101 victims. Computed tomography was performed in 1572 (51%) patients, with male predominance (79%) and mean age of 38.53 years in Group I and 37.60 years in Group II. The distribution of trauma mechanisms was similar in both groups. Lesions found included: 53 fractures, eight vertebral listheses and eight spinal cord injuries. Sequelae included: paraplegia in three cases, quadriplegia in eight and brain injury in five. There were seven deaths in Group II and 240 in Group I. The average length of hospital stay was 11 days for Group I and 26.2 days for Group II. Conclusion: A CT scan of the cervical spine in victims of blunt trauma was effective in identifying lesions of the cervical spine and spinal cord injuries. Thus, despite the cost of neck CT and the low incidence of lesions identified by it, its indication based on the usual criteria seems justified. abstract_id: PUBMED:23856632 Safe cervical spine clearance in adult obtunded blunt trauma patients on the basis of a normal multidetector CT scan--a meta-analysis and cohort study. Background: A true gold standard to rule out a significant cervical spine injury in a subset of blunt trauma patients with altered sensorium is still to be agreed upon. The objective of this study is to determine whether, in obtunded adult patients with blunt trauma, a clinically significant injury to the cervical spine can be ruled out on the basis of a normal multidetector cervical spine computed tomography.
Methods: A comprehensive database search was conducted to include all the prospective and retrospective studies on blunt trauma patients with altered sensorium undergoing cervical spine multidetector CT scan as the core imaging modality to "clear" the cervical spine. The studies used two main gold standards, magnetic resonance imaging of the cervical spine and/or prolonged clinical follow-up. The data were extracted to report true positives, true negatives, false positives and false negatives. Meta-analysis of sensitivity, specificity, negative and positive predictive values was performed using Meta Analyst Beta 3.13 software. We also performed a retrospective investigation comparing a robust clinical follow-up and/or cervical spine MR findings in 53 obtunded blunt trauma patients, who previously had undergone a normal multidetector CT scan of the cervical spine reported by a radiologist. Results: A total of 10 studies involving 1850 obtunded blunt trauma patients with initial cervical spine CT scan reported as normal were included in the final meta-analysis. The cumulative negative predictive value and specificity of cervical spine CT of the ten studies was 99.7% (99.4-99.9%, 95% confidence interval). The positive predictive value and sensitivity was 93.7% (84.0-97.7%, 95% confidence interval). In the retrospective review of our obtunded blunt trauma patients, none was later diagnosed to have significant cervical spine injury that required a change in clinical management. Conclusion: In a blunt trauma patient with altered sensorium, a normal cervical spine CT scan is conclusive to safely rule out a clinically significant cervical spine injury. The results of this meta-analysis strongly support the removal of cervical precautions in obtunded blunt trauma patients after normal cervical spine computed tomography. Any further imaging like magnetic resonance imaging of the cervical spine should be performed on a case-by-case basis. abstract_id: PUBMED:21477178 Multidetector-row computed tomography of thoracic trauma. Imaging in trauma patients has dramatically evolved since the advent of computed tomography (CT), particularly multidetector CT (MDCT) technology. Axial MDCT images of the body can be acquired in seconds and shown in any plane, allowing immediate viewing and interpretation. These factors make CT an invaluable means to detect many injuries not previously visible by any other noninvasive imaging techniques. Potentially subtle, but significant, thoracic injuries such as pneumothorax, haemothorax, aortic injury, sternal and spinal fractures can be detected on MDCT easily. In this article, the author will discuss the use of MDCT in the diagnosis of various thoracic injuries. abstract_id: PUBMED:20006205 Computed tomographic screening for thoracic and lumbar fractures: is spine reformatting necessary? Introduction: Patients who sustain traumatic vertebral fractures often have multiple other associated injuries. Because of the mechanisms of injury, many of these patients routinely undergo chest computed tomographic (CCT) and/or abdominal/pelvic computed tomographic (APCT) scans to diagnose intrathoracic or intra-abdominal injuries. These scans are routinely reformatted to provide more detailed imaging of the spine. Although the patient does not incur more radiation, the charges associated with this are significant. This study compared the sensitivity of these CT modalities in detecting thoracolumbar spine fractures.
Methods: A retrospective chart review identified blunt trauma victims, admitted through the emergency department, with a discharge diagnosis of thoracic or lumbar spine fracture who received (1) a chest and T-spine CT, (2) an abdominal/pelvic and lumbar spine CT, or both. Final radiologic readings of these patients' CT scans were obtained, and the sensitivities of the different imaging methods were compared. Discharge diagnosis of spine fracture was considered the gold standard. Results: One hundred seventy-six APCT scans with reformatting and 175 CCT scans with reformatting were available for comparison. There were 9 of 176 false-negative APCT scans vs 3/176 false-negative lumbar spine CT scans. There were 14/175 false-negative CCT scans vs 2/175 false-negative thoracic spine CT scans. The differences in sensitivity were significant (P < .001) for both comparisons. Conclusions: Reformatting of CCT and APCT scans gives improved sensitivity in the detection of thoracic and lumbar spine fractures in trauma patients. Future study looking at clinically significant fractures or those that change clinical management decisions may find that the reformatted images are not routinely needed as a screening tool. abstract_id: PUBMED:14566120 Reformatted visceral protocol helical computed tomographic scanning allows conventional radiographs of the thoracic and lumbar spine to be eliminated in the evaluation of blunt trauma patients. Background: Patients suffering high-energy injuries are at risk for occult thoracic and lumbar spine fractures, and the standard of care includes radiographic spine screening. Most such patients require computed tomographic (CT) scanning to screen for chest and/or abdominal visceral injury. Helical CT (HCT) scanning represents a major technologic change that allows data to be reformatted after the patient has left the radiology suite. We explored the possibility of using reformatted visceral protocol HCT scanning to replace radiographs of the thoracic and lumbar spine in the evaluation of seriously injured patients. Methods: A prospective evaluation of consecutive patients with thoracic and lumbar spine fractures admitted over a 12-month period to an urban Level I trauma center was completed. The ability of conventional radiography and reformatted HCT scanning to detect spine fractures was compared. Results: Of 1,915 trauma patients admitted, 78 (4.1%), with an average Injury Severity Score of 21.3 ± 1.2, sustained one or more thoracic (n = 35 patients) or lumbar (n = 43 patients) spine fractures. The sensitivity of reformatted HCT scanning as a screening test for spine fractures was 97% for thoracic and 95% for lumbar spine fractures, compared with a sensitivity of 62% for thoracic and 86% for lumbar conventional radiographs. Conclusion: Data obtained from HCT scanning performed to evaluate seriously injured multiple trauma patients for thoracic and abdominal visceral injury can be reformatted to screen for thoracic and lumbar spine fractures, providing accurate screening while eliminating the time, expense, and radiation exposure associated with conventional film radiography. abstract_id: PUBMED:21825938 Canadian Cervical Spine rule compared with computed tomography: a prospective analysis. Background: The Canadian cervical spine rule (CCS) has been found to be an effective tool to determine the need for radiographic evaluation of the cervical spine (c-spine) incorporating both clinical findings and mechanism.
Previously, it has been validated only through clinical follow-up or selective use of X-rays. The purpose of this study was to validate it using computed tomography (CT) as the gold standard to identify fractures. Methods: Prospective evaluation was performed on 3,201 blunt trauma patients who were screened by CCS and were compared with a complete c-spine CT. CCS positive indicated at least one positive clinical or mechanism finding, whereas CT positive indicated presence of a fracture. Results: There were 192 patients with c-spine fractures versus 3,009 without fracture on CT. The fracture group was older (42.7 ± 19.0 years vs. 37.8 ± 17.5 years, p = 0.0006), had a lower Glasgow Coma Scale score (13.8 ± 4.2 vs. 14.4 ± 4.3, p < 0.0001), and lower systolic blood pressure (133.3 ± 23.8 mm Hg vs. 139.5 ± 23.1 mm Hg, p = 0.0023). The sensitivity of CCS was 100% (192/192), specificity was 0.60% (18/3009), positive predictive value was 6.03% (192/3183), and negative predictive value was 100% (18/18). Logistic regression identified only 8 of the 19 factors included in the CCS to be independent predictors of c-spine fracture. Conclusions: CCS is very sensitive but not very specific to determine the need for radiographic evaluation after blunt trauma. Based on this study, the rule should be streamlined to improve specificity while maintaining sensitivity. abstract_id: PUBMED:12131382 Evaluation of the lower spine after blunt trauma using abdominal computed tomographic scanning supplemented with lateral scanograms. Background: Patients at risk for thoracolumbar junction (TLJ) and lumbar spine (LS) injury after blunt trauma are classically evaluated using conventional radiographs. Frequently, these patients also undergo abdominal and pelvic computed tomographic (CT) scanning to exclude the presence of associated intra-abdominal injuries. Standard abdominal and pelvic CT scan usually includes an anteroposterior (AP) scout film (scanogram) obtained before the cross-sectional imaging. The objective of this study was to determine whether a lateral CT scanogram and axial CT views would provide adequate imaging to allow for evaluation of the TLJ and LS and therefore eliminate the need for conventional screening computed lumbar spine radiographs (CLSRs). Methods: Patients who sustained blunt injury and required both CLSRs as well as abdominal and pelvic CT scans were prospectively identified. The study protocol (CT + S) added lateral CT scanograms to all helical abdominal and pelvic CT scan studies. The AP and lateral CT scanograms were included with the axial images, and these views were reviewed together during final radiographic interpretation and diagnosis. The results of CT + S were compared with readings of the CLSRs (AP and lateral) in a blinded fashion by a trauma radiologist. Results: Lateral scanograms were generated for 71 patients. All scanograms were technically adequate, with image quality equal to or superior to computed plain radiographs. Ten patients were found to have 20 fractures, 19 acute and 1 chronic. All abnormalities identified by plain radiographs were seen using CT + S (sensitivity, 100%; specificity, 100%). Eight transverse process and two spinous process fractures not seen on CLSRs were identified using CT + S. Conclusion: Our CT + S protocol (axial CT images plus AP and lateral scanograms) outperformed screening CLSRs in the detection of fractures of the lower spine (TLJ + LS) after blunt trauma.
In addition, scanogram imaging is less dependent on body habitus and adds no additional cost or time to abdominal and pelvic CT scanning. Further study is required to determine whether CT + S can routinely replace conventional radiographs of the lower spine after blunt trauma. Answer: The study described in abstract PUBMED:20699755 investigated whether additional spine-targeted computed tomography (CT) is necessary for evaluating thoracolumbar spinal fractures in blunt trauma victims when compared to visceral organ-targeted CT protocols using sixteen-channel multidetector row CT. The study retrospectively evaluated 420 vertebrae in 72 consecutive patients who underwent both abdominopelvic CT (AP-CT) and an additional CT examination using a thoracolumbar spine CT (TLS-CT) protocol. The AP-CT images (set A) were reconstructed using a wide display field of view (FOV) and a soft algorithm, while the TLS-CT images (set S) were reconstructed using a narrow display FOV and a hard algorithm. Both sets included axial plus reformatted sagittal or coronal images. Three radiologists independently reviewed all CT data, and the performances for detecting and typing fractures were compared using areas under receiver operating characteristic curves and by determining concordance rates. The results showed that the overall areas under the curves for sets S and A for fracture detection were 0.996 and 0.995, respectively, with no significant difference between the two sets. Concordance rates for typing performance also showed no statistical significance between the two sets for any of the three observers. The conclusion of the study was that sixteen-channel multidetector row CT images reconstructed with a soft algorithm and a wide display FOV that cover the entire abdomen using a visceral organ-targeted protocol with 1.5-mm collimation are sufficient for the evaluation of spine fractures in trauma patients, provided that multiplanar-reformatted images are available. Therefore, additional spine-targeted CT may not be necessary for evaluating thoracolumbar spinal fractures in blunt trauma victims.
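The abstracts above report diagnostic accuracy as sensitivity, specificity, and predictive values derived from simple 2x2 counts. As an illustration only, the short Python sketch below recomputes the figures quoted for the Canadian cervical spine rule (192/192, 18/3009, 192/3183, 18/18); the reconstruction of the 2x2 table and the variable names are assumptions made for this example, not part of the original study.

# Worked example of the diagnostic accuracy arithmetic used in the abstracts above.
# Counts are reconstructed from the Canadian C-spine rule study (PUBMED:21825938):
# 192 fractures, all rule-positive; 3,009 non-fractures, of which 18 were rule-negative.
tp, fn = 192, 0        # fractures flagged / missed by the rule
tn, fp = 18, 2991      # non-fractures cleared / flagged by the rule

sensitivity = tp / (tp + fn)   # 192/192  = 1.000
specificity = tn / (tn + fp)   # 18/3009  = 0.006
ppv = tp / (tp + fp)           # 192/3183 = 0.060
npv = tn / (tn + fn)           # 18/18    = 1.000

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.2%}, "
      f"PPV {ppv:.2%}, NPV {npv:.2%}")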
Instruction: Comparison of hospital pharmacy practice in France and Canada: can different practice perspectives complement each other? Abstracts: abstract_id: PUBMED:17457689 Comparison of hospital pharmacy practice in France and Canada: can different practice perspectives complement each other? Objective: To compare hospital pharmacy practice in France and Canada by identifying similarities and differences in the two institution's pharmacy activities, resources, drug dispensing processes and responsibilities. Setting: Centre hospitalier universitaire Sainte-Justine (SJ), Montréal, Québec, Canada and Hôpital Robert Debré (RD), Paris, France, are two maternal-child teaching hospitals. They share a similar mission focused on patient care, teaching and research. Method: The data were gathered from annual reports, department strategic plans and by direct observation. Main Outcome Measure: The description and comparison of the legal environment, hospital demographics, pharmacy department data, drug dispensing processes and pharmacist activities in the two institutions. Results: The Centre hospitalier universitaire Sainte-Justine and Hôpital Robert Debré are similar with respect to their mission and general demographics; number of beds, annual hospital expenditures, number of admissions, visits and childbirths. The respective pharmacy departments differ in allocated resources. The main operational differences concern compounding, quality control programs and clinical activities. The French department also manages medical devices, medical gases, blood derivatives and the sterilisation unit. These comparisons highlight the more patient-oriented Canadian hospital pharmacy practice against the more product-oriented French hospital practice Factors contributing to these differences include academic curriculum, the attention paid to the legal environment by professional bodies, staffing patterns and culture. Conclusion: There are differences between the hospital pharmacy practice in the studied hospitals in Canada and France. Hospital pharmacy practice in France seems to be more product oriented, and the practice in Canada seems more patient oriented. abstract_id: PUBMED:24780836 Determinants of the evolution of hospital pharmacy in France and Quebec: Perception of hospital pharmacists Background: Hospital pharmacy practice has evolved differently between France and Quebec. While this development is part of broader systems, French and Quebec hospitals have undergone significant changes over the years to cope with challenges, among others, the economic and demographic realities. Purpose: The main objective is to evaluate and compare the perception of French and Quebec hospital pharmacists about the factors that have contributed to the evolution of pharmacy practice in their respective context. Methods: This is a descriptive cross-sectional study. The study focuses on a sample of experienced hospital pharmacists in France and Quebec. We targeted a convenience sample of 50 respondents per country. An online questionnaire with 15 pharmaceutical activities to which are connected nine factors that may have influenced the implementation of each of these activities in each country was used. The mean score was calculated for each of the nine factors for each activity. The perception of French and Quebec hospital pharmacists was then compared. A P value less than 0.05 was considered statistically significant. Results: Two hundred and sixty hospital pharmacists were directly contacted in France and 79 in Quebec. 
Seventy-eight French pharmacists and 77 Quebec pharmacists responded to the survey, for response rates of 30% and 97%, respectively. The hierarchy of factors that contributed to the evolution of pharmacy practice was similar between the two countries: legislative and regulatory factors, as well as the concern for risk management and quality, dominated; scientific, human, and economic factors and training held relatively similar positions. In contrast, the news factor (6th position in France against 10th in Quebec) and the academic factor (10th position in France against 6th in Quebec) were ranked inversely between France and Quebec. Conclusion: There are few data on the determinants of the evolution of hospital pharmacy in France and Quebec. The hierarchy of factors that contributed to the evolution of pharmacy practice is similar between the two countries, although differences of rank were found for the news and academic factors. Further studies are needed to better understand the factors that influence the evolution of pharmacy practice in health care institutions. abstract_id: PUBMED:25220227 Hospital pharmacy residency in France in 2014: towards a recognition of the specialization? The current format of French residency in hospital pharmacy was created in 1983 and is a 4-year specialized training. So far, training has not been recognized as a prerequisite for hospital pharmacy practice. Since 2011, the representative structures of pharmacy residents and hospital pharmacists have lobbied for that recognition, and the government has worked in that direction. The concept was validated after a probationary period, and the regulatory procedure began in late 2012. Two key elements were initially identified as obstacles: first, the European legislation on recognition of professional qualifications, and second, the fear that there might not be enough trained hospital pharmacists to fulfil the care missions of hospital pharmacies in France. The European legislation has now been amended to recognize professional qualifications, and a demographic analysis of hospital pharmacists leads to the conclusion that these items are no longer obstacles. In 2014, hospital pharmacy residency, through the Specialized Studies degree, should be recognized as a prerequisite for hospital pharmacy practice. abstract_id: PUBMED:1611167 Hospital nuclear pharmacy--a comparison of physician and pharmacist practice. Objective: Nuclear pharmacy is practiced in every hospital with a nuclear medicine clinic. Pharmacists control this practice in fewer than four percent of these institutions. The authors wish to bring to the attention of hospital pharmacists an area of practice in which they can make a significant contribution to the state of pharmacy practice. Method: The current state of the physician practice of nuclear pharmacy is described and compared with the accepted standards of pharmacy practice. Conclusions: Hospital pharmacists can improve pharmaceutical care administered in nuclear medicine by their participation in nuclear pharmacy practice and by the application of hospital pharmacy practice standards. It is also suggested that nuclear pharmacy should be integrated into the pharmacy curriculum at schools of pharmacy. abstract_id: PUBMED:6734438 Hospital pharmacy practice in India. The status of pharmacy practice was evaluated at six hospitals in India.
Common drugs were available at private hospitals but the pharmacies at government hospitals had fewer than half of the needed drugs. Selection of the best generic drug appeared difficult because the bioavailability and pharmacokinetic data generally were not available. The hospitals did not have formularies. No unit dose or intravenous admixture services had been implemented. Patient profiles were not maintained. The pharmacists did not appear to provide any professional, educational, or clinical services to patients or physicians. Serum concentrations of drugs were not measured for monitoring therapy. A lack of clinical education and training of pharmacists, lower status and salaries in hospital pharmacy compared with industry and government, and overall limited resources appear to be the important reasons for the present status of pharmacy practice. abstract_id: PUBMED:15598965 Scope of international hospital pharmacy practice. Objective: To review the published English literature regarding international hospital pharmacy practice. Data Sources: A computer search of all English-language articles in MEDLINE (1966-June 2004) and other Internet sources and International Pharmaceutical Abstracts (1971-June 2004). Study Selection And Data Extraction: All studies that discussed hospital pharmacy or clinical hospital pharmacy activities outside of the US were considered for inclusion. Data Synthesis: The scope of international hospital pharmacy practice is quite varied, both inter- and intra-country, and varying degrees of specialization exist. Although clinical pharmacy is well developed in some countries, it is still in its infancy in others. In addition, there is disparity in the actual definition of clinical pharmacy throughout the world. Conclusions: Since very few data have been published regarding hospital pharmacy practice on an international scale, we suggest a survey be conducted to objectively capture this information and increase awareness of clinical pharmacy in this setting. abstract_id: PUBMED:23622695 Student assessment of pharmacy practice experiences in France: a national survey. Pharmacy practice experience (PPE) aims to help trainees to become independent professional practitioners and is a prerequisite for developing skills and competence. In France, no study exists that describes student satisfaction related to preregistration training. The aim of this study was to assess student perceptions related to the four PPE types that are planned along the course of study. A questionnaire was sent to each student. Concerning the first and second year introductory PPE, among the 8491 responses, 73% of the students stated that the placement was too long relative to their actual knowledge and that the objectives were not always achieved. Four thousand and eleven responses regarding third and fourth year PPE were analyzed. Sixty-two percent of the students did not dispense drugs related to the placement's subject, and 57% reported performing tasks similar to those assigned during the first-year PPE, especially storage tasks. One thousand six hundred and seven questionnaires regarding hospital practices were received. Forty-one percent of the trainees were not asked to perform tasks that were related to their actual knowledge. Additional comments focused on excessive shadowing. Among the 853 responses related to the 6th year PPE, 88% of the students considered themselves overall satisfied with the training.
However, 30% felt they had acquired insufficient skills and professionalism and declared they were not ready to enter the workforce. These results should prompt decision makers to thoroughly revise PPE programs in France in order to improve trainees' professionalism and skills acquisition. Promoting and developing research in the field of PPE is urgently needed. abstract_id: PUBMED:30537431 Hospital pharmacy technicians' practice and perceptions in France and Quebec, Canada. Objectives: To describe the practice and perceptions of hospital pharmacy technicians (HPTs) in France and in Quebec, Canada. The secondary objective was to compare both work settings to identify differences. Methods: A cross-sectional online survey was conducted in December 2016 and February 2017. The survey comprised four sections: demographics; factors contributing to career choice and satisfaction; perceptions regarding training, skills, and recognition; and interest in new opportunities. The proportion of responses from respondents in France and Quebec was compared with a chi-squared test. Key Findings: There were 101 respondents from France and 224 from Quebec. In comparison with Quebec respondents, French respondents more often came from large hospitals (France: 87%, 84/97 versus Quebec: 50%, 112/223, P < 0.001). Few HPTs supported pharmacists' clinical activities (France: 4%, 4/97 versus Quebec: 29%, 65/222, P < 0.001). A majority of HPTs indicated that working in the healthcare field contributed to their job satisfaction (France: 94%, 87/93 versus Quebec: 90%, 188/209). Respondents found their training sufficient (France: 54%, 49/90 versus Quebec: 78%, 159/205, P < 0.001). However, few identified having access to sufficient continuing education (France: 40%, 36/90 versus Quebec: 29%, 59/205). Not many thought that their job was well recognized in their centre (France: 13%, 12/90 versus Quebec: 13%, 26/203). However, they felt it had a direct impact on the quality of care, especially in Quebec (France: 86%, 77/90 versus Quebec: 98%, 199/203, P < 0.001). The majority were interested in supporting the pharmacists' clinical activities (France: 91%, 78/86 versus Quebec: 82%, 163/199). Conclusions: Overall, HPTs from France and Quebec shared satisfaction with their profession. They showed an interest in increased recognition and responsibilities (e.g. training, pharmacist support). abstract_id: PUBMED:26195124 Perceived facilitators to change in hospital pharmacy practice in England. Background: Traditionally, hospital pharmacists' roles have been associated with dispensing medications prescribed by doctors and offering advice about medicines to patients and other healthcare professionals. In England, significant changes in the structure of hospital pharmacy practice began in the 1970s and currently hospital pharmacists are undertaking a number of advanced roles including prescribing. Objective: This study investigated the facilitators to change in hospital pharmacy practice in England in order to identify lessons that might assist in the potential changes needed in other countries for extended clinical roles. Setting: The study was conducted in England. Methods: A qualitative study using semi-structured interviews was conducted with 28 participants, comprising 22 pharmacists and 6 pharmacy technicians from England. They were recruited through a snowball sampling technique. Transcribed interviews were entered into the QSR NVivo 10 software for data management and analysed thematically.
Main outcome measure Pharmacists and pharmacy technicians' perception of the facilitators to hospital pharmacy practice change in England. Result: Three major themes emerged from this study: drivers for change, strategies for change and efficiency. Many of the drivers identified were linked to changes in the structure of hospital pharmacy including education and training; specialisation in practice and career structure. Strategies employed to achieve practice change included broadening the role of pharmacy technicians in order to free-up pharmacists' time; seizing opportunities for extended roles; developing a relationship with the medical profession and professional leadership influence. Participants perceived that the development of pharmacists' clinical roles have resulted in a more efficient healthcare provision where patients were offered seamless services. Conclusion: Changes in the professional structure of pharmacy including education and training, specialisation, career structure and the roles of pharmacy technicians could benefit the development of pharmacists' clinical roles in other countries. abstract_id: PUBMED:32466229 Global Scope of Hospital Pharmacy Practice: A Scoping Review. The pharmacy profession has undergone tremendous changes over the past few decades. Pharmacists' roles have expanded their boundaries to encompass more patient-centered services. However, the degree to which these roles are practised may vary. This scoping review is aimed at describing the extent and range of the professional pharmacy services offered in hospital pharmacies across different countries and the barriers underlying inappropriate or incomplete implementation of these services. Studies published in the English language between 2015 and 2019 were retrieved from the following databases: PubMed, CINAHL, Scopus, EBSCO Discovery Service, and Web of Science. A thematic analysis across the included studies produced two main themes. "Scope of practice" comprised three subthemes: pharmaceutical care practice, clinical pharmacy practice, and public health services and "Multiple levels of influence" comprised five subthemes: individual, interpersonal, institutional, community, and public policy-related factors. The hospital pharmacy services across countries ranged from traditional drug-centered pharmacy practice to a more progressive, clinically oriented practice. In some countries, there is an apparent inadequacy in the clinical pharmacy services provided compared to other clinical settings. Understanding the current pharmacy practice culture across different health care systems is an essential step towards improving the profession. Answer: The comparison of hospital pharmacy practice in France and Canada reveals that there are indeed differences that could potentially complement each other. In France, the hospital pharmacy practice appears to be more product-oriented, with responsibilities extending to managing medical devices, medical gases, blood derivatives, and the sterilization unit (PUBMED:17457689). In contrast, Canadian hospital pharmacy practice is described as more patient-oriented, focusing on clinical activities and direct patient care (PUBMED:17457689). Both countries share similarities in the factors that have influenced the evolution of pharmacy practice, with legislative and regulatory factors, as well as concerns for risk management and quality, being dominant (PUBMED:24780836). 
However, there are differences in the ranking of factors such as the news factor and the academic factor, which are perceived differently in France and Quebec (PUBMED:24780836). In France, the residency in hospital pharmacy, which is a 4-year specialized training, is moving towards recognition as a prerequisite for hospital pharmacy practice, indicating a shift towards a more specialized and potentially clinical role for pharmacists (PUBMED:25220227). This could complement the already patient-oriented approach seen in Canada. Furthermore, the scope of international hospital pharmacy practice is varied, and while clinical pharmacy is well developed in some countries, it is still in its infancy in others (PUBMED:15598965). This suggests that there is potential for cross-learning and sharing of best practices between countries like France and Canada. Additionally, hospital pharmacy technicians in both France and Quebec have expressed interest in increased recognition and responsibilities, indicating a shared desire for professional growth and development (PUBMED:30537431). This could lead to a more collaborative practice where technicians support pharmacists in both product-oriented and patient-oriented activities. Overall, the different practice perspectives in France and Canada could indeed complement each other, with France potentially adopting more patient-centered approaches and Canada benefiting from the product management and specialization aspects of the French system. This synergy could lead to a more holistic and integrated hospital pharmacy practice that leverages the strengths of both systems.
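Several of the comparisons above (for example, the share of technicians supporting pharmacists' clinical activities: France 4/97 versus Quebec 65/222) were tested with a chi-squared test. The Python sketch below shows one plausible reading of that test, a Pearson chi-squared on a 2x2 table without continuity correction; the counts come from the abstract, but the function and its details are illustrative assumptions rather than the authors' actual analysis code.

# Two-proportion comparison as used in the France-vs-Quebec technician survey.
import math

def chi2_2x2(a, b, c, d):
    # Pearson chi-squared (1 df) for the 2x2 table [[a, b], [c, d]]
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))  # survival function of chi-squared with 1 df
    return chi2, p

# France: 4 of 97 technicians support clinical activities; Quebec: 65 of 222
chi2, p = chi2_2x2(4, 97 - 4, 65, 222 - 65)
print(f"chi2 = {chi2:.1f}, p = {p:.1e}")  # p falls well below 0.001, as reported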
Instruction: Do obsessive-compulsive disorder and Tourette syndrome share a common susceptibility gene? Abstracts: abstract_id: PUBMED:25771937 Do obsessive-compulsive disorder and Tourette syndrome share a common susceptibility gene? An association study of the BDNF Val66Met polymorphism in the Chinese Han population. Objectives: We explored the association between the BDNF Val66Met polymorphism and susceptibility to both obsessive-compulsive disorder (OCD) and Tourette syndrome (TS) in the Chinese Han population. Methods: Genotyping for the BDNF Val66Met polymorphism was performed in 321 OCD patients and 426 healthy control subjects and case-control association study data were analysed. Additionally, we evaluated the genetic contribution of this variant in 331 TS patients (including 267 TS trios) and 519 controls using the transmission disequilibrium test (TDT) and case-control study. Results: A statistically significant difference was found in the genetic contribution of the BDNF Val66Met polymorphism between both the OCD (χ(2) = 7.50, P = 0.023 by genotype; χ(2) = 6.67, P = 0.01 by allele) and TS (χ(2) = 6.76, P = 0.03 by genotype; χ(2) = 4.27, P = 0.04 by allele), and control groups. TDT and GHRR analysis for TS trios also showed a significant transform disequilibrium of this polymorphism (TDT: χ(2) = 3.96, P = 0.05; HHRR: χ(2) = 4.33 P = 0.04; GHRR: χ(2) = 5.74, P = 0.02; χ(2) = 0.98, P = 0.37). There was also a significant gender trend between patients and controls in female cases for OCD and in male cases for TS. Conclusions: Our study supports the involvement of the BDNF Val66Met polymorphism as a common genetic susceptibility for OCD and TS in the Chinese Han population, showing specific gender trends. abstract_id: PUBMED:15368614 Examination of the SGCE gene in Tourette syndrome patients with obsessive-compulsive disorder. Mutations in the epsilon-sarcoglycan gene (SGCE) have been reported in families with myoclonus-dystonia (M-D). In addition to abnormal movements, obsessive-compulsive disorder (OCD) has also been described in families with M-D. OCD is a common feature in another movement disorder, namely Tourette syndrome (TS). The comorbidity of these disorders suggests that common genetic factors might be involved in their susceptibility. To evaluate this, we performed two sets of experiments. An association study using a polymorphism within an intron of the SGCE gene was assessed in patients with TS and OCD versus controls, and the SGCE gene itself was screened for mutations in all TS/OCD patients, followed by direct sequencing of the gene in a limited number of these patients. No correlation was found by either method. abstract_id: PUBMED:19302771 The genetics of obsessive-compulsive disorder and Tourette's syndrome: what are the common factors? Genetic discovery in obsessive-compulsive disorder and Tourette's syndrome has made significant progress in the past decade. The two disorders are phenomenologically, epidemiologically, and probably pathophysiologically related; however, as with most neuropsychiatric disorders, gene discovery has been challenging. Genetic epidemiology studies support the existence of susceptibility genes in both disorders, and more extensive genome-wide studies are under way. Gene pathways involving neurotransmitter (serotonin, dopamine, glutamate) and neurodevelopment (synaptic, homeobox) domains have been examined, but more complex genetic mechanisms remain largely unexplored. 
This review addresses the current state of genetic research in obsessive-compulsive disorder and Tourette's syndrome, emphasizing commonalities between the disorders. Questions on common genetic substrates, the use of endophenotypes, and the utility of genetic data to inform pharmacologic treatment are also addressed. abstract_id: PUBMED:19913658 The genetics of Tourette syndrome: a review. Objectives: This article summarizes and evaluates recent advances in the genetics of Gilles de la Tourette syndrome (GTS). Methods: This is a review of recent literature focusing on (1) the genetic etiology of GTS; (2) common genetic components of GTS, attention deficit hyperactivity disorder (ADHD), and obsessive compulsive disorder (OCD); (3) recent linkage studies of GTS; (4) chromosomal translocations in GTS; and (5) candidate gene studies. Results: Family, twin, and segregation studies provide strong evidence for the genetic nature of GTS. GTS is a heterogeneous disorder with complex inheritance patterns and phenotypic manifestations. Family studies of GTS and OCD indicate that an early-onset form of OCD is likely to share common genetic factors with GTS. While there apparently is an etiological relationship between GTS and ADHD, it appears that the common form of ADHD does not share genetic factors with GTS. The largest genome wide linkage study to date observed evidence for linkage on chromosome 2p23.2 (P=3.8x10(-5)). No causative candidate genes have been identified, and recent studies suggest that the newly identified candidate gene SLITRK1 is not a significant risk gene for the majority of individuals with GTS. Conclusion: The genetics of GTS are complex and not well understood. The Genome Wide Association Study (GWAS) design can hopefully overcome the limitations of linkage and candidate gene studies. However, large-scale collaborations are needed to provide enough power to utilize the GWAS design for discovery of causative mutations. Knowledge of susceptibility mutations and biological pathways involved should eventually lead to new treatment paradigms for GTS. abstract_id: PUBMED:23630162 Common and rare alleles of the serotonin transporter gene, SLC6A4, associated with Tourette's disorder. To evaluate the hypothesis that functionally over-expressing alleles of the serotonin transporter (SERT) gene (solute carrier family 6, member 4, SLC6A4) are present in Tourette's disorder (TD), just as we previously observed in obsessive compulsive disorder (OCD), we evaluated TD probands (N = 151) and controls (N = 858). We genotyped the refined SERT-linked polymorphic region 5-HTTLPR/rs25531 and the associated rs25532 variant in the SLC6A4 promoter plus the rare coding variant SERT isoleucine-to-valine at position 425 (I425V). The higher expressing 5-HTTLPR/rs25531 LA allele was more prevalent in TD probands than in controls (χ(2) = 5.75; P = 0.017; odds ratio [OR], 1.35); and, in a secondary analysis, surprisingly, it was significantly more frequent in probands who had TD alone than in those who had TD plus OCD (Fisher's exact test; P = 0.0006; OR, 2.29). Likewise, the higher expressing LAC haplotype (5-HTTLPR/rs25531/rs25532) was more frequent in TD probands than in controls (P = 0.024; OR, 1.33) and also in the TD alone group versus the TD plus OCD group (P = 0.0013; OR, 2.14). Furthermore, the rare gain-of-function SERT I425V variant was observed in 3 male siblings with TD and/or OCD and in their father. 
Thus, the cumulative count of SERT I425V becomes 1.57% in OCD/TD spectrum conditions versus 0.15% in controls, with a recalculated, family-adjusted significance of χ(2) = 15.03 (P < 0.0001; OR, 9.0; total worldwide genotyped, 2914). This report provides a unique combination of common and rare variants in one gene in TD, all of which are associated with SERT gain of function. Thus, altered SERT activity represents a potential contributor to serotonergic abnormalities in TD. The present results call for replication in a similarly intensively evaluated sample. abstract_id: PUBMED:8751870 Family study and segregation analysis of Tourette syndrome: evidence for a mixed model of inheritance. To investigate the transmission of Tourette syndrome (TS) and associated disorders within families, complex segregation analysis was performed on family study data obtained from 53 independently ascertained children and adolescents with TS and their 154 first-degree relatives. The results suggest that the susceptibility for TS is conveyed by a major locus in combination with a multifactorial background. Other models of inheritance were definitively rejected, including strictly polygenic models, all single major locus models, and mixed models with dominant and recessive major loci. The frequency of the TS susceptibility allele was estimated to be .01. The major locus accounts for over half of the phenotypic variance for TS, whereas the multifactorial background accounts for approximately 40% of phenotypic variance. Penetrance estimates suggest that all individuals homozygous for the susceptibility allele at the major locus are affected, whereas only 2.2% of males and 0.3% of females heterozygous at the major locus are affected. Of individuals affected with TS, approximately 62% are heterozygous and approximately 38% are homozygous at the major locus. While none of the families had two parents affected with TS, 19% of families had two parents affected with the broader phenotype, which includes TS, chronic tic disorder, or obsessive-compulsive disorder. abstract_id: PUBMED:12502010 Comorbidity of Tourette's syndrome and schizophrenia--biological and physiological parallels. The authors report on five patients who first developed Tourette's syndrome (TS) and later schizophrenia with the typical positive and negative symptoms; all five had an unfavorable course of schizophrenia. These observations as well as other reported cases raise the question of whether both disorders may share a common background. This is discussed under the aspects of similar symptomatology (echolalia, motor symptoms, cognitive deficits, obsessive-compulsive symptoms), similar pathophysiological signs, genetics and signs of an underlying inflammatory process in subgroups of cases, as well as common therapeutic strategies. A genetically determined susceptibility could possibly underlie both disorders, e.g., an autoimmunologically triggered inflammation or a common pathophysiology of certain symptoms. Both disorders show disturbances of multiple functional pathways, which seem to be involved in the pathophysiology of both. The clinical overlap of TS and of schizophrenia may be due to a final common pathophysiological pathway. abstract_id: PUBMED:10441206 A frequent polymorphism in the coding exon of the human cannabinoid receptor (CNR1) gene. The central cannabinoid receptor (CB1) mediates the pharmacological activities of cannabis, the endogenous agonist anandamide and several synthetic agonists.
The cloning of the human cannabinoid receptor (CNR1) gene facilitates molecular genetic studies in disorders like Gilles de la Tourette syndrome (GTS), obsessive compulsive disorder (OCD), Parkinson's disease, Alzheimer's disease or other neuropsychiatric or neurological diseases, which may be predisposed or influenced by mutations or variants in the CNR1 gene. We detected a frequent silent mutation (1359G→A) in codon 453 (Thr) of the CNR1 gene that turned out to be a common polymorphism in the German population. Allele frequencies of this polymorphism are 0.76 and 0.24, respectively. We developed a simple and rapid polymerase chain reaction (PCR)-based assay by artificial creation of a Msp I restriction site in amplified wild-type DNA (G-allele), which is destroyed by the silent mutation (A-allele). The intragenic CNR1 polymorphism 1359(G/A) should be useful for association studies in neuropsychiatric disorders which may be related to anandamide metabolism disturbances. abstract_id: PUBMED:2209489 Gilles de La Tourette's syndrome and some forms of obsessive-compulsive disorder may share a common genetic diathesis. Family aggregation and twin studies suggest that Gilles de La Tourette's syndrome (TS) and some forms of obsessive-compulsive disorder (OCD) are etiologically related. Neuroanatomically, the structures of the basal ganglia, thalamus and cortex have also been implicated in both TS and OCD suggesting a common neural substrate for these disorders. Neurochemical and neuropharmacological studies have provided less compelling data concerning this heuristically important association. Clinical studies have largely focused on the role of the nigrostriatal, mesolimbic and mesocortical dopaminergic systems in the pathophysiology of TS. In the case of OCD, serotoninergic systems originating in the raphe nuclei and projecting rostrally, have received considerable attention. Recent neuropathological studies of TS have implicated the endogenous opioid peptide, dynorphin, in the pathophysiology of TS. Animal studies have shown that dynorphin can modulate both dopaminergic and serotonergic systems. We have undertaken a cerebrospinal fluid (CSF) study to determine if abnormalities in dynorphin A concentration can be observed in drug-free TS and OCD patients. Preliminary results from this study suggest: 1) that TS patients have an elevated level of CSF dynorphin A (1-8) compared to normal controls; 2) that their level of CSF dynorphin is correlated with the severity of their OCD symptoms; 3) that some, but not all, OCD patients also have high levels of this neuropeptide in their CSF. abstract_id: PUBMED:24549057 Intragenic deletions affecting two alternative transcripts of the IMMP2L gene in patients with Tourette syndrome. Tourette syndrome is a neurodevelopmental disorder characterized by multiple motor and vocal tics, and the disorder is often accompanied by comorbidities such as attention-deficit hyperactivity disorder and obsessive compulsive disorder. Tourette syndrome has a complex etiology, but the underlying environmental and genetic factors are largely unknown. IMMP2L (inner mitochondrial membrane peptidase, subunit 2) located on chromosome 7q31 is one of the genes suggested as a susceptibility factor in disease pathogenesis.
Through screening of a Danish cohort comprising 188 unrelated Tourette syndrome patients for copy number variations, we identified seven patients with intragenic IMMP2L deletions (3.7%), and this frequency was significantly higher (P=0.0447) compared with a Danish control cohort (0.9%). Four of the seven deletions identified did not include any known exons of IMMP2L, but were within intron 3. These deletions were found to affect a shorter IMMP2L mRNA species with two alternative 5'-exons (one including the ATG start codon). We showed that both transcripts (long and short) were expressed in several brain regions, with a particularly high expression in cerebellum and hippocampus. The current findings give further evidence for the role of IMMP2L as a susceptibility factor in Tourette syndrome and suggest that intronic changes in disease susceptibility genes should be investigated further for presence of alternatively spliced exons. Answer: Yes, obsessive-compulsive disorder (OCD) and Tourette syndrome (TS) may share a common susceptibility gene. A study exploring the association between the BDNF Val66Met polymorphism and susceptibility to both OCD and TS in the Chinese Han population found a statistically significant difference in the genetic contribution of the BDNF Val66Met polymorphism between both OCD and TS, and control groups (PUBMED:25771937). This supports the involvement of the BDNF Val66Met polymorphism as a common genetic susceptibility for OCD and TS, showing specific gender trends. Additionally, family studies of GTS and OCD indicate that an early-onset form of OCD is likely to share common genetic factors with GTS (PUBMED:19913658). Furthermore, genetic epidemiology studies support the existence of susceptibility genes in both disorders, and gene pathways involving neurotransmitter and neurodevelopment domains have been examined, suggesting common genetic substrates (PUBMED:19302771). However, it is important to note that not all studies have found a direct association between specific genes and both disorders. For example, an examination of the SGCE gene in Tourette syndrome patients with obsessive-compulsive disorder did not find a correlation by either association study or direct sequencing of the gene (PUBMED:15368614). This highlights the complexity of genetic factors in neuropsychiatric disorders and the need for further research to fully understand the genetic relationships between OCD and TS.
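The association statistics quoted throughout these genetics abstracts come from two standard calculations: an allele-wise case-control chi-squared test (with an odds ratio) and, for trios, the transmission disequilibrium test (TDT). The Python sketch below illustrates both; the allele and transmission counts are hypothetical placeholders invented for the example, because the papers report only the resulting chi-squared, P, and OR values, not the underlying tables.

import math

def chi2_p(chi2):
    # survival function of a chi-squared variable with 1 degree of freedom
    return math.erfc(math.sqrt(chi2 / 2))

# Allele-wise case-control test (hypothetical counts of a risk allele)
case_risk, case_other = 260, 382
ctrl_risk, ctrl_other = 290, 562
n = case_risk + case_other + ctrl_risk + ctrl_other
chi2 = n * (case_risk * ctrl_other - case_other * ctrl_risk) ** 2 / (
    (case_risk + case_other) * (ctrl_risk + ctrl_other)
    * (case_risk + ctrl_risk) * (case_other + ctrl_other))
odds_ratio = (case_risk * ctrl_other) / (case_other * ctrl_risk)
print(f"allele test: chi2 = {chi2:.2f}, p = {chi2_p(chi2):.3f}, OR = {odds_ratio:.2f}")

# TDT on trios: transmissions of the risk allele from heterozygous parents
transmitted, untransmitted = 148, 112   # hypothetical counts
tdt = (transmitted - untransmitted) ** 2 / (transmitted + untransmitted)
print(f"TDT: chi2 = {tdt:.2f}, p = {chi2_p(tdt):.3f}")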
Instruction: Metabolically healthy obesity and risk of mortality: does the definition of metabolic health matter? Abstracts: abstract_id: PUBMED:35910571 The Harm of Metabolically Healthy Obese and the Effect of Exercise on Their Health Promotion. Obesity and obesity-related diseases [type 2 diabetes, cardiovascular disease (CVD), and cancer] are becoming more common, which is a major public health concern. Metabolically healthy obesity (MHO) has become a type of obesity, accounting for a large proportion of obese people. MHO is still harmful to health. It was discovered that MHO screening criteria could not well reflect health hazards, whereas visceral fat, adiponectin pathway, oxidative stress, chronic inflammation, and histological indicators at the microlevel could clearly distinguish MHO from health control, and the biological pathways involved in these micro indicators were related to MHO pathogenesis. This review reveals that MHO's micro metabolic abnormality is the initial cause of the increase of disease risk in the future. Exploring the biological pathway of MHO is important in order to develop an effective mechanism-based preventive and treatment intervention strategy. Exercise can correct the abnormal micro metabolic pathway of MHO, regulate metabolic homeostasis, and enhance metabolic flexibility. It is a supplementary or possible alternative to the traditional healthcare prevention/treatment strategy as well as an important strategy for reducing MHO-related health hazards. abstract_id: PUBMED:29628481 The Association Between Metabolically Healthy Obesity and the Risk of Proteinuria: The Kansai Healthcare Study. Background: Metabolically healthy obesity seems to be a unique phenotype for the risk of cardiometabolic diseases. However, it is not known whether this phenotype is associated with the risk of proteinuria. Methods: Study subjects were 9,185 non-diabetic Japanese male workers aged 40-55 years who had no proteinuria, an estimated glomerular filtration rate ≥60 mL/min/1.73 m2, no history of cancer, and no use of antihypertensive or lipid-lowering medications at baseline. Obesity was defined as body mass index ≥25.0 kg/m2. Metabolic health was defined as the presence of no Adult Treatment Panel III components of the metabolic syndrome criteria, excluding waist circumference, and metabolic unhealth was defined as the presence of one or more metabolic syndrome components, excluding waist circumference. "Consecutive proteinuria" was considered positive if proteinuria was detected twice consecutively as 1+ or higher on urine dipstick at annual examinations to exclude chance proteinuria as much as possible. Results: During the 81,660 person-years follow-up period, we confirmed 390 cases of consecutive proteinuria. Compared with metabolically healthy non-obesity, metabolically healthy obesity was not associated with the risk of consecutive proteinuria (multiple-adjusted hazard ratio [HR] 0.86; 95% confidence interval [CI], 0.37-1.99), but metabolically unhealthy non-obesity with ≥2 metabolic syndrome components (HR 1.77; 95% CI, 1.30-2.42), metabolically unhealthy obesity with one component (HR 1.71; 95% CI, 1.12-2.61), and metabolically unhealthy obesity with ≥2 metabolic syndrome components (HR 2.77; 95% CI, 2.01-3.82) were associated with an increased risk of consecutive proteinuria. Conclusions: Metabolically healthy obesity did not increase the risk of consecutive proteinuria in Japanese middle-aged men. 
abstract_id: PUBMED:36760599 Association of Metabolically Healthy Obesity and Risk of Cardiovascular Disease Among Adults in China: A Retrospective Cohort Study. Purpose: Previous studies have shown that metabolically healthy obesity (MHO) and changes in its status are connected to an increased incidence of cardiovascular disease (CVD). Yet, fewer studies have been conducted in China, especially for the middle-aged and elderly population, a high-risk group. The purpose of the study was to investigate the association between metabolic health status and CVD events. Patients And Methods: A total of 46,055 participants were categorized into 6 subgroups with different metabolic states according to the existence of metabolic syndrome and body mass index (BMI). The changes in obesity and metabolic health status were defined from baseline to follow-up outcomes with a combination of overweight and obesity. Cox proportional hazards models estimated the association of CVD events and each BMI-metabolic groups. Results: MHO and metabolic abnormality normal weight (MANW) subjects had a higher HR of CVD, 1.62 (95% CI, 1.36-1.92) and 1.24 (95% CI, 1.07-1.44), respectively, than their metabolically healthy normal weight (MHNW) counterparts. Then, more than 50% and 30% of the metabolically healthy overweight or obesity (MHOO) populations maintained their status and converted to a metabolically unhealthy state, respectively. Stable MANW, MHOO and metabolically abnormal obesity (MAO) were associated with a higher risk for CVD, 1.68 (95% CI, 1.37-2.05),1.26 (95% CI, 1.08-1.47) and 1.65 (95% CI, 1.45-1.88), respectively, than stable MHNW. Conclusion: Despite being of normal weight, MANW status is in fact a risk factor for CVD, as well as MHO, especially for the Chinese middle-aged and elderly population. Furthermore, metabolic health is a transient state for partial middle-aged and elderly Chinese individuals, and MAO has the highest risk of CVD, including coronary heart disease (CHD) and stroke. abstract_id: PUBMED:34768636 Do Metabolically Healthy People with Obesity Have a Lower Health-Related Quality of Life? A Prospective Cohort Study in Taiwan. The association between metabolically healthy obesity (MHO) and health-related quality of life (HRQOL) has not been thoroughly evaluated. This study enrolled 906 adult participants aged 35-55 years between 2009 and 2010 in Northern Taiwan; 427 participants were followed up after eight years. Normal weight, overweight, and obesity were evaluated via body mass index. Metabolic health was defined as the absence of cardiometabolic diseases and having ≤1 metabolic risk factor. HRQOL was evaluated using the 36-Item Short Form Health Survey (SF-36), Taiwan version. Generalized linear mixed-effects models were used to analyze the repeated, measured data with adjustment for important covariates. Compared with metabolically healthy normal weight individuals, participants with metabolically unhealthy normal weight and obesity had a significantly poorer physical component summary score (β (95% CI) = -2.17 (-3.38--0.97) and -2.29 (-3.70--0.87), respectively). There were no significant differences in physical and mental component summary scores among participants with metabolically healthy normal weight, overweight, and obesity. This study showed that metabolically healthy individuals with obesity and normal weight had similar HRQOL in physical and mental component summary scores. Maintaining metabolic health is an ongoing goal for people with obesity. 
abstract_id: PUBMED:32301039 Metabolically Healthy Obesity: Criteria, Epidemiology, Controversies, and Consequences. Purpose Of Review: To present a comprehensive overview regarding criteria, epidemiology, and controversies that have arisen in the literature about the existence and the natural course of the metabolically healthy phenotype. Recent Findings: The concept of metabolically healthy obesity (MHO) implies that a subgroup of obese individuals may be free of the cardio-metabolic risk factors that commonly accompany obese subjects with adipose tissue dysfunction and insulin resistance, known as having metabolic syndrome or the metabolically unhealthy obesity (MUO) phenotype. Individuals with MHO appear to have a better adipose tissue function, and are more insulin sensitive, emphasizing the central role of adipose tissue function in metabolic health. The reported prevalence of MHO varies widely, and this is likely due to the lack of universally accepted criteria for the definition of metabolic health and obesity. Also, the natural course and the prognostic value of MHO is hotly debated but it appears that it likely evolves towards MUO, carrying an increased risk for cardiovascular disease and mortality over time. Understanding the pathophysiology and the determinants of metabolic health in obesity will allow a better definition of the MHO phenotype. Furthermore, stratification of obese subjects, based on metabolic health status, will be useful to identify high-risk individuals or subgroups and to optimize prevention and treatment strategies to combat cardio-metabolic diseases. abstract_id: PUBMED:31970099 Metabolically Healthy Obesity and Risk of Incident Chronic Kidney Disease in a Korean Cohort Study. Background: The incidence of chronic kidney disease (CKD) in metabolically healthy obesity (MHO) has not been consistently determined. Methods: This study used data from the Anseong-Ansan community-based cohort, a part of the Korean Genome and Epidemiology Study (KoGES) provided by the Korea Center for Disease Control and Prevention (KCDC). Surveys were collected from Anseong and Ansan residents every two years between 2001-2002 and 2015-2016, for a total of 7 surveys overall. The subjects were divided into 4 phenotypes based on the presence of obesity and metabolic syndrome: 1) metabolically healthy normal weight (MHNW), 2) metabolically healthy obesity (MHO), 3) metabolically abnormal normal weight (MANW), and 4) metabolically abnormal obesity (MAO). Data were analyzed using the Cox proportional hazards regression model. Results: Of 8,865 subjects, 1,551 cases over 49,995 person-years (3.1%) developed incident CKD. With an adjusted hazard ratio (HR) of 1.13 (95% confidence interval (CI): 0.92-1.41, P = 0.234, using MHNW as the reference), the MHO group was not associated with a higher risk of incident CKD. The adjusted HRs of the MANW and MAO groups for incident CKD were significantly higher than those of the MHNW group: 1.31 (95% CI: 1.05-1.64, P = 0.017) for MANW and 1.49 (95% CI: 1.23-1.79, P < 0.001) for MAO. Conclusion: MHO is not associated with a high risk of CKD, whereas MANW and MAO increase the risk of incident CKD. Thus, it is important to consider metabolic health status rather than obesity when evaluating CKD risk. abstract_id: PUBMED:33374826 Association of Metabolically Healthy Obesity and Future Depression: Using National Health Insurance System Data in Korea from 2009-2017.
(1) Background: The health implications associated with the metabolically healthy obese (MHO) phenotype, in particular related to symptoms of depression, are still not clear. The purpose of this study was to examine whether depression and metabolic status are related by classifying participants into four groups in accordance with the MHO diagnostic standard. Differences between sexes and the effect of MHO on the occurrence of depression were also examined. (2) Methods: A sample of 3,586,492 adult individuals from the National Health Insurance Database of Korea was classified into four categories by their metabolic status and body mass index: (1) metabolically healthy non-obese (MHN); (2) metabolically healthy obese (MHO); (3) metabolically unhealthy non-obese (MUN); and (4) metabolically unhealthy obese (MUO). Participants were followed for six to eight years for new incident cases of depression. The statistical significance of the general characteristics of the four groups, as well as the mean differences in metabolic syndrome risk factors, was assessed with the use of a one-way analysis of variance (ANOVA). (3) Results: The MHN ratio in women was higher than in men (men 39.3%, women 55.2%). In both men and women, depression incidence was the highest among MUO participants (odds ratio (OR) = 1.01 in men; OR = 1.09 in women). Among the risk factors of metabolic syndrome, waist circumference was the most strongly related to depression. Among the four groups, the MUO phenotype was the most related to depression. Furthermore, among women, MHO was also related to a higher risk of depressive symptoms. These findings indicate that MHO is not a totally benign condition in relation to depression in women. (4) Conclusion: Reducing the prevalence of metabolic syndrome and obesity in Korea will therefore likely reduce the incidence of depression. abstract_id: PUBMED:37536710 Association between Metabolically Healthy Status and Risk of Gastrointestinal Cancer. Purpose: Although obesity is associated with numerous diseases, the risks of disease may depend on metabolically healthy status. Nevertheless, it is unclear whether metabolically healthy status affects the risk of gastrointestinal (GI) cancer in the general Chinese population. Materials And Methods: A total of 114,995 participants who met the criteria were included from the Kailuan Study. The study participants were divided into four groups according to body mass index (BMI)/waist circumference (WC) and metabolic status. Incident GI cancers (esophageal cancer, gastric cancer, liver cancer, biliary cancer, pancreatic cancer, and colorectal cancer) during 2006-2020 were confirmed by review of medical records. Cox proportional hazards regression models were used to assess the association of metabolically healthy status with the risk of GI cancer by calculating hazard ratios (HR) and 95% confidence intervals (CI). Results: During a mean 13.76 years of follow-up, we documented 2,311 GI cancers. Multivariate Cox regression analysis showed that compared with the metabolically healthy normal-weight group, metabolically healthy obese (MHO) participants demonstrated an increased risk of developing GI cancer (HR, 1.54; 95% CI, 1.11 to 2.13) by BMI categories. However, such associations were not found for the WC category. These associations were moderated by age, sex, and anatomical site of the tumor. Individuals with the metabolically unhealthy normal-weight or metabolically unhealthy obesity phenotype also had an increased risk of GI cancer.
Conclusion: The MHO phenotype was associated with an increased risk of GI cancer. Moreover, individuals with a metabolically unhealthy status have an increased risk of developing GI cancer. Hence, clinicians should consider the risk of incident GI cancer in people with an abnormal metabolic status and counsel them about metabolic fitness and weight control. abstract_id: PUBMED:33910543 Natural course of metabolically healthy phenotype and risk of developing cardiometabolic diseases: a three-year follow-up study. Background: Whether the metabolically healthy obese (MHO) phenotype is a single, stable or a transitional, fluctuating state is currently unknown. The Mexican-Mestizo population has a genetic predisposition for the development of type 2 diabetes (T2D) and other cardiometabolic complications. Little is known about the natural history of metabolic health in this population. The aim of this study was to analyze the transitions over time among individuals with different degrees of metabolic health and body mass index, and evaluate the incidence of cardiometabolic outcomes according to phenotype. Methods: The study population consisted of a metabolic syndrome cohort with at least 3 years of follow up. Participants were apparently-healthy urban Mexican adults ≥20 years with a body mass index (BMI) ≥20 kg/m2. The metabolically healthy phenotype was defined using the National Cholesterol Education Program (NCEP) Adult Treatment Panel III (ATP III) metabolic syndrome criteria, and the subjects were stratified into 4 groups according to their BMI and metabolic health. We estimated the incidence of cardiometabolic outcomes and standardized them per 1,000 person-years of follow-up. Finally, to evaluate the risk for transition and development of cardiometabolic outcomes, we fitted Cox proportional hazards regression models. Results: Amongst the 5541 subjects, 54.2% were classified as metabolically healthy and 45.8% as unhealthy. The MHO prevalence was 39.3%. Up to a third of the population changed from their initial category to another, and the highest transition rate was observed in MHO (42.9%). We also found several novel factors associated with transition to the metabolically unhealthy phenotype: socioeconomic status, number of pregnancies, a high carbohydrate intake, history of obesity, and consumption of sweetened beverages. Similarly, visceral adipose tissue (VAT) was a main predictor of transition; loss of VAT ≥5% was associated with reversion from metabolically unhealthy to metabolically healthy phenotype (hazard ratio (HR) 1.545, 95% CI 1.266-1.886). Finally, we observed higher incidence rates and risk of incident T2D and hypertension in the metabolically unhealthy obesity (MUHO) and metabolically unhealthy lean (MUHL) phenotypes compared to MHO. Conclusions: Metabolic health is a dynamic and continuous process, at high risk of transition to metabolically unhealthy phenotypes over time. It is imperative to establish effective processes in primary care to prevent such transitions. abstract_id: PUBMED:26915289 Metabolically Healthy Obesity: Personalised and Public Health Implications. Obesity is a heterogeneous condition; thus, metabolic abnormalities and cardiometabolic risk vary among obese individuals, with a significant proportion considered to be metabolically healthy.
However, whether these individuals are truly healthy remains controversial and, therefore, a better understanding of such phenotypes may offer opportunities to improve current obesity diagnosis, intervention, and treatment. Answer: The concept of metabolically healthy obesity (MHO) suggests that a subgroup of obese individuals may not exhibit the cardio-metabolic risk factors typically associated with obesity, such as insulin resistance and metabolic syndrome. This has led to debates about whether individuals with MHO are at an increased risk of mortality compared to their metabolically unhealthy counterparts or those with normal weight, regardless of metabolic health (PUBMED:32301039). The definition of metabolic health indeed matters when assessing the risk of mortality among individuals with MHO. The criteria used to define metabolic health can vary, and this lack of universally accepted criteria contributes to the wide range of reported prevalence of MHO and differing conclusions about its associated risks (PUBMED:32301039). For instance, some studies have found that individuals with MHO are at an increased risk of transitioning to metabolically unhealthy obesity (MUO) over time, which carries a higher risk for cardiovascular disease and mortality (PUBMED:33910543). Furthermore, the natural course of the MHO phenotype is not well understood, and it may evolve towards MUO, increasing the risk for cardiovascular disease and mortality over time (PUBMED:32301039). This suggests that MHO may not be a stable or benign condition and that metabolic health can be a transient state, especially in populations with a genetic predisposition for cardiometabolic diseases (PUBMED:33910543). In summary, the definition of metabolic health is crucial when evaluating the risk of mortality associated with MHO. The lack of a standardized definition complicates the assessment of long-term health outcomes for individuals with MHO. It is important to recognize that MHO may not be a permanently benign condition and that individuals with MHO should be monitored for potential transitions to metabolic unhealthiness, which is associated with increased risks of mortality and cardiometabolic diseases.
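Most of the cohort studies summarized above report adjusted hazard ratios from Cox proportional hazards models (for example, MHO versus MHNW for incident CVD, CKD, or GI cancer). The sketch below, run on synthetic data, shows the general shape of such an analysis; it assumes the third-party Python packages numpy, pandas, and lifelines, and none of the numbers correspond to the cited studies.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
mho = rng.integers(0, 2, n)                  # 1 = MHO, 0 = MHNW (synthetic)
age = rng.normal(55, 8, n)
# synthetic event times with a higher hazard for MHO and older age
time = rng.exponential(10, n) / np.exp(0.5 * mho + 0.03 * (age - 55))
event = (time < 8).astype(int)               # administrative censoring at 8 years
time = np.minimum(time, 8)

df = pd.DataFrame({"years": time, "event": event, "mho": mho, "age": age})
cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="event")
cph.print_summary()  # exp(coef) for "mho" plays the role of the adjusted hazard ratio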
Instruction: The mitochondrial phenotype of peripheral muscle in chronic obstructive pulmonary disease: disuse or dysfunction? Abstracts: abstract_id: PUBMED:36818563 Skeletal Muscle Mitochondrial Dysfunction in Chronic Obstructive Pulmonary Disease: Underlying Mechanisms and Physical Therapy Perspectives. Skeletal muscle dysfunction (SMD) is a prevalent extrapulmonary complication and a significant independent prognostic factor in patients with chronic obstructive pulmonary disease (COPD). Mitochondrial dysfunction is one of the core factors that damage structure and function in COPD skeletal muscle and is closely related to smoke exposure, hypoxia, and insufficient physical activity. The currently known phenotypes of mitochondrial dysfunction are reduced mitochondrial content and biogenesis, impaired activity of mitochondrial respiratory chain complexes, and increased mitochondrial reactive oxygen species production. Significant progress has been made in research on physical therapy (PT), which has broad prospects for treating the abovementioned potential mitochondrial-function changes in COPD skeletal muscle. In terms of specific types of PT, exercise therapy can directly act on mitochondria and improve COPD SMD by increasing mitochondrial density, regulating mitochondrial biogenesis, upregulating mitochondrial respiratory function, and reducing oxidative stress. However, improvements in mitochondrial-dysfunction phenotype in COPD skeletal muscle due to different exercise strategies are not entirely consistent. Therefore, based on the elucidation of this phenotype, in this study, we analyzed the effect of exercise on mitochondrial dysfunction in COPD skeletal muscle and the regulatory mechanism thereof. We also provided a theoretical basis for exercise programs to rehabilitate this condition. abstract_id: PUBMED:18755922 The mitochondrial phenotype of peripheral muscle in chronic obstructive pulmonary disease: disuse or dysfunction? Rationale: Peripheral muscle alterations have been recognized to contribute to disability in chronic obstructive pulmonary disease (COPD). Objectives: To describe the mitochondrial phenotype in a moderate to severe COPD population and age-matched controls. Methods: Three primary aspects of mitochondrial function were assessed in permeabilized locomotor muscle fibers. Measurements And Main Results: Respiration rates per milligram of fiber weight were significantly lower in COPD muscle compared with healthy age-matched control muscle under various respiratory states. However, when variations in mitochondrial volume were taken into account by normalizing respiration per unit of citrate synthase activity, differences between the two groups were abolished, suggesting the absence of specific mitochondrial respiratory impairment in COPD. H(2)O(2) production per mitochondrion was higher both under basal and ADP-stimulated states, suggesting that mitochondria from COPD muscle have properties that potentiate H(2)O(2) release. Direct assessment of mitochondrial sensitivity to Ca(2+)-induced opening of the permeability transition pore (PTP) indicated that mitochondria from patients with COPD were more resistant to PTP opening than their counterparts in control subjects. 
Conclusions: Comparison of these results with those of studies comparing healthy glycolytic with oxidative muscle suggests that these differences may be attributable to greater type II fiber expression in COPD muscle, as mitochondria within this fiber type have respiratory function similar to that of mitochondria from type I fibers, and yet are intrinsically prone to greater release of H(2)O(2) and more resistant to PTP opening. These results thus argue against the presence of pathological mitochondrial alterations in this category of patients with COPD. abstract_id: PUBMED:31119467 Mitochondrial dysfunction and chronic lung disease. The body's functions gradually decline with age, leading to a higher incidence of age-related diseases. Diseases associated with aging in the respiratory system include chronic obstructive pulmonary disease (COPD), idiopathic pulmonary fibrosis (IPF), asthma, lung cancer, and others. Mitochondrial dysfunction is not only a sign of aging but also a disease trigger. This article aims to explain mitochondrial dysfunction as an aging marker and its role in aging-related diseases of the lung. We also discuss whether mitochondria can be used as a target for the treatment of aging-related lung disease. abstract_id: PUBMED:29357496 Altered skeletal muscle mitochondrial phenotype in COPD: disease vs. disuse. Patients with chronic obstructive pulmonary disease (COPD) exhibit an altered skeletal muscle mitochondrial phenotype, which often includes reduced mitochondrial density, altered respiratory function, and elevated oxidative stress. As this phenotype may be explained by the sedentary lifestyle that commonly accompanies this disease, the aim of this study was to determine whether such alterations are still evident when patients with COPD are compared to control subjects matched for objectively measured physical activity (PA; accelerometry). Indexes of mitochondrial density [citrate synthase (CS) activity], respiratory function (respirometry in permeabilized fibers), and muscle oxidative stress [4-hydroxynonenal (4-HNE) content] were assessed in muscle fibers biopsied from the vastus lateralis of nine patients with COPD and nine PA-matched control subjects (CON). Despite performing similar levels of PA (CON: 18 ± 3, COPD: 20 ± 7 daily minutes moderate-to-vigorous PA; CON: 4,596 ± 683, COPD: 4,219 ± 763 steps per day, P > 0.70), patients with COPD still exhibited several alterations in their mitochondrial phenotype, including attenuated skeletal muscle mitochondrial density (CS activity; CON 70.6 ± 3.8, COPD 52.7 ± 6.5 U/mg, P < 0.05), altered mitochondrial respiration [e.g., ratio of complex I-driven state 3 to complex II-driven state 3 (CI/CII); CON: 1.20 ± 0.11, COPD: 0.90 ± 0.05, P < 0.05], and oxidative stress (4-HNE; CON: 1.35 ± 0.19, COPD: 2.26 ± 0.25 relative to β-actin, P < 0.05). Furthermore, CS activity (r = 0.55), CI/CII (r = 0.60), and 4-HNE (r = 0.49) were all correlated with pulmonary function, assessed as forced expiratory volume in 1 s (P < 0.05), but not PA (P > 0.05). In conclusion, the altered mitochondrial phenotype in COPD is present even in the absence of differing levels of PA and appears to be related to the disease itself. NEW & NOTEWORTHY Chronic obstructive pulmonary disease (COPD) is associated with debilitating alterations in the function of skeletal muscle mitochondria.
By comparing the mitochondrial phenotype of patients with COPD to that of healthy control subjects who perform the same amount of physical activity each day, this study provides evidence that many aspects of the dysfunctional mitochondrial phenotype observed in COPD are not merely due to reduced physical activity but are likely related to the disease itself. abstract_id: PUBMED:33000352 Recent progress in the use of mitochondrial membrane permeability transition pore in mitochondrial dysfunction-related disease therapies. Mitochondria have various cellular functions, including ATP synthesis, calcium homeostasis, cell senescence, and death. Mitochondrial dysfunction has been identified in a variety of disorders correlated with human health. Among the many underlying mechanisms of mitochondrial dysfunction, the opening of the mitochondrial permeability transition pore (mPTP) is one that has drawn increasing interest in recent years. It plays an important role in apoptosis and necrosis; however, the molecular structure and function of the mPTP have still not been fully elucidated. In recent years, abnormal opening of the mPTP has been implicated in the development and pathogenesis of diverse diseases including ischemia/reperfusion injury (IRI), neurodegenerative disorders, tumors, and chronic obstructive pulmonary disease (COPD). This review provides a systematic introduction to the possible molecular makeup of the mPTP, summarizes the diseases correlated with mitochondrial dysfunction, and highlights possible underlying mechanisms. Since the mPTP is an important target in mitochondrial dysfunction, this review also summarizes potential treatments, which may be used to inhibit pore opening via the molecules composing mPTP complexes, thus suppressing the progression of mitochondrial dysfunction-related diseases. abstract_id: PUBMED:26867569 Analysis of mitochondrial DNA alteration in new phenotype ACOS. Background: Mitochondria contain their own DNA (MtDNA), which is very sensitive to oxidative stress and whose quantity can, as a consequence, be altered. Oxidative stress is largely recognized to play a key role in the pathogenesis of asthma and COPD and might have a role in the new intermediate phenotype ACOS (asthma-COPD overlap syndrome). The aim of this study was to investigate MtDNA alterations, as an expression of mitochondrial dysfunction, in ACOS and to verify whether they might help in the identification of this new phenotype and in its differentiation from asthma and COPD. Methods: Ten (10) ACOS patients according to Spanish guidelines, 13 ACOS patients according to GINA guidelines, 13 COPD patients, 14 asthmatic patients and ten normal subjects were enrolled. They further underwent blood, induced sputum and exhaled nitric oxide collection. The content of MtDNA and nuclear DNA (nDNA) was measured in the blood cells of patients by Real Time PCR. Results: ACOS patients showed an increase in the MtDNA/nDNA ratio. Dividing ACOS according to guidelines, patients defined by the Spanish criteria showed a higher value of MtDNA/nDNA compared to those defined by GINA/GOLD (92.69 ± 7.31 vs 80.68 ± 4.16). Spanish ACOS patients presented an MtDNA/nDNA ratio closer to COPD than to asthma. MtDNA was higher in asthmatic, COPD, GINA and Spanish ACOS patients compared to healthy subjects (73.30 ± 4.47-137.0 ± 19.45-80.68 ± 4.16-92.69 ± 7.31 vs 65.97 ± 20.56). Conclusion: We found an increased MtDNA/nDNA ratio in ACOS subjects, leading us to conclude that mitochondrial dysfunction is present in this disease and that this makes it closer to COPD than to asthma.
Although the MtDNA/nDNA ratio appears to be a useful marker for the differential diagnosis among asthma, COPD and ACOS, further studies are needed to confirm the potential of the MtDNA/nDNA ratio and to better characterize ACOS. abstract_id: PUBMED:26592738 Mitochondrial Dysfunction Launches Dexamethasone-Induced Skeletal Muscle Atrophy via AMPK/FOXO3 Signaling. Muscle atrophy occurs in several pathologic conditions such as diabetes and chronic obstructive pulmonary disease (COPD), as well as after long-term clinical administration of synthesized glucocorticoid, where increased circulating glucocorticoid accounts for the pathogenesis of muscle atrophy. We and others previously reported mitochondrial dysfunction in muscle atrophy-related conditions and that mitochondria-targeting nutrients efficiently prevent several kinds of muscle atrophy. However, whether and how mitochondrial dysfunction is involved in glucocorticoid-induced muscle atrophy remains unclear. Therefore, in the present study, we measured mitochondrial function in dexamethasone-induced muscle atrophy in vivo and in vitro, and we found that mitochondrial respiration was compromised on the 3rd day after dexamethasone administration, earlier than the increases of MuRF1 and Fbx32 and the dexamethasone-induced loss of mitochondrial components and key mitochondrial dynamics proteins. Furthermore, dexamethasone treatment caused intracellular ATP deprivation and robust AMPK activation, which further activated the FOXO3/Atrogenes pathway. By directly impairing mitochondrial respiration, FCCP leads to similar readouts in C2C12 myotubes as dexamethasone does. On the contrary, resveratrol, a mitochondrial nutrient, efficiently reversed dexamethasone-induced mitochondrial dysfunction and muscle atrophy in both C2C12 myotubes and mice, by improving mitochondrial function and blocking AMPK/FOXO3 signaling. These results indicate that mitochondrial dysfunction plays a central role in dexamethasone-induced skeletal muscle atrophy and that nutrients or drugs targeting mitochondria might be beneficial in preventing or curing muscle atrophy. abstract_id: PUBMED:31593750 Mitochondrial dysfunction is associated with Miro1 reduction in lung epithelial cells by cigarette smoke. Cigarette smoke (CS) is known to cause mitochondrial dysfunction leading to cellular senescence in lung cells. We determined the mechanism of mitochondrial dysfunction by CS in lung epithelial cells. CS extract (CSE) treatment differentially affected mitochondrial function, such as membrane potential, mitochondrial reactive oxygen species (mtROS) and mitochondrial mass as analyzed by FACS, and was associated with altered oxidative phosphorylation (OXPHOS) protein levels (Complexes I-IV) in primary lung epithelial cells (SAEC and NHBE), and (complexes I and II) in BEAS2B cells. There were dose- and time-dependent changes in mitochondrial respiration (oxygen consumption rate parameters, i.e., maximal respiration, ATP production and spare capacity, measured by the Seahorse analyzer) in control vs. CSE-treated BEAS2B and NHBE/DHBE cells. Electron microscopy (EM) analysis revealed perinuclear clustering by localization and increased mitochondrial fragmentation by fragment length analysis. Immunoblot analysis revealed a CS-mediated increase in Drp1 and decrease in Mfn2 levels, proteins that are involved in the mitochondrial fission/fusion process. CSE treatment reduced Miro1 and Pink1 abundance, which play a crucial role in the intercellular transfer mechanism and mitophagy process.
Overall, these findings highlight the role of Miro1 in the context of CS-induced mitochondrial dysfunction in lung epithelial cells, which may contribute to the pathogenesis of chronic inflammatory lung diseases. abstract_id: PUBMED:35280400 Myostatin is involved in skeletal muscle dysfunction in chronic obstructive pulmonary disease via Drp-1 mediated abnormal mitochondrial division. Background: Skeletal muscle dysfunction (SMD) is one of the most prominent extrapulmonary effects of chronic obstructive pulmonary disease (COPD). Myostatin negatively regulates the growth of skeletal muscle. We confirmed that myostatin expression is significantly increased in the quadriceps femoris muscle tissue of rats with COPD and is involved in the development of SMD in COPD, but the mechanism by which this occurs has yet to be uncovered. Dynamin-related protein 1 (Drp-1) has been shown to promote apoptosis and affect cellular energy metabolism by mediating enhanced mitochondrial division. Preliminary findings from our group illustrated that mitochondrial division and Drp-1 expression were increased in COPD quadriceps femoris cells. However, it is not yet clear whether mitochondrial dynamics are affected by myostatin in COPD quadriceps myocytes. Methods: The study sought to explore the effects and potential mechanisms of myostatin on skeletal muscle atrophy, mitochondrial dynamics, apoptosis, and the links between related processes in COPD. Results: Our findings showed that cigarette smoke exposure stimulated an increase in myostatin, increased superoxide production, decreased mitochondrial membrane potential, significantly promoted Drp-1-mediated mitochondrial fission, and promoted apoptosis. Conclusions: In summary, our study demonstrated that cigarette smoke led to increased Drp-1 expression and enhanced mitochondrial division by upregulating myostatin, which in turn promoted apoptosis and affected cellular energy metabolism, leading to the development of SMD in COPD. This study extends understanding of skeletal muscle function in COPD and provides a basis for the use of myostatin and Drp-1 as novel therapeutic targets for SMD in COPD. abstract_id: PUBMED:37905633 Targeting Mitochondrial Dysfunction With LncRNAs in a Wistar Rat Model of Chronic Obstructive Pulmonary Disease. Background/aim: Chronic obstructive pulmonary disease (COPD) has become a prominent healthcare issue in recent years. Cigarette smoking (CS) and fine particulate matter (PM2.5) are important causative factors for COPD. This study assessed the aberrant lncRNA profiles in the lung tissue of rats with COPD caused by CS or PM2.5. Materials and Methods: A COPD rat model was developed using CS (CSM) or PM2.5 (PMM), and lung tissue RNA was extracted. The Gene Ontology (GO) and Kyoto Encyclopaedia of Genes and Genomes (KEGG) databases were used to investigate the correlations between the distinct lncRNAs and mRNA pathways. A coding-non-coding gene co-expression network (CNC) was constructed by establishing connections between differentially expressed long non-coding RNAs (lncRNAs) and messenger RNAs (mRNAs) associated with mitochondrial dysfunction and the inflammatory response. Results: A quantitative real-time reverse transcription PCR (qRT-PCR) experiment was performed to verify the expression of the particular lncRNAs. Microarray analysis of lung tissue from the COPD model revealed that 123 and 444 lncRNAs were substantially raised and reduced in PMM vs. the control group (Ctrl), respectively, as were 621 and 1,178 mRNAs.
Meanwhile, 81 and 340 lncRNAs were consistently raised and lowered in CSM vs. Ctrl, respectively, as were 408 and 931 mRNAs. GO enrichment and KEGG pathway analysis indicated that the COPD model was connected to inflammatory responses, mitochondrial dysfunction, and others. Conclusion: XR_340674, ENSRNOT00000089642, XR_597045, and XR_340651 were decreased, and XR_592469 was elevated. These lncRNAs were shown to be related to mitochondrial dysfunction in the lung tissue of animals exposed to CS or PM2.5. Answer: The mitochondrial phenotype of peripheral muscle in chronic obstructive pulmonary disease (COPD) appears to be more related to dysfunction rather than disuse. Studies have shown that patients with COPD exhibit an altered skeletal muscle mitochondrial phenotype, which includes reduced mitochondrial density, altered respiratory function, and elevated oxidative stress (PUBMED:29357496). This altered phenotype persists even when patients with COPD are compared to control subjects matched for objectively measured physical activity, suggesting that the mitochondrial changes are related to the disease itself rather than a sedentary lifestyle (PUBMED:29357496). Furthermore, mitochondrial dysfunction in COPD is closely related to smoke exposure, hypoxia, and insufficient physical activity, with phenotypes including reduced mitochondrial content and biogenesis, impaired activity of mitochondrial respiratory chain complexes, and increased mitochondrial reactive oxygen species production (PUBMED:36818563). The presence of pathological mitochondrial alterations in COPD patients has been debated, with some studies suggesting that the observed differences may be attributable to greater type II fiber expression in COPD muscle, which have properties that potentiate H2O2 release and are more resistant to permeability transition pore (PTP) opening (PUBMED:18755922). Mitochondrial dysfunction is also recognized as a sign of aging and a disease trigger, playing a role in aging diseases of the lung, including COPD (PUBMED:31119467). Additionally, the opening of the mitochondrial permeability transition pore (mPTP) has been implicated in the development and pathogenesis of various diseases, including COPD (PUBMED:33000352). In summary, the evidence suggests that mitochondrial dysfunction is a significant factor in the skeletal muscle alterations observed in COPD, and it is not solely a consequence of disuse or reduced physical activity. The dysfunction is likely related to the disease pathology and may be a target for therapeutic interventions (PUBMED:29357496; PUBMED:36818563; PUBMED:18755922; PUBMED:31119467; PUBMED:33000352).
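A key step in the argument of PUBMED:18755922 is normalization: respiration expressed per milligram of fiber differed between groups, but the difference disappeared when respiration was expressed per unit of citrate synthase (CS) activity, a marker of mitochondrial content. The Python sketch below illustrates that arithmetic with invented numbers (only the CS activities echo the group means quoted in PUBMED:29357496); it is not a re-analysis of either study.

# Illustrative sketch of the normalization step described in PUBMED:18755922:
# respiration expressed per mg of muscle fiber can differ between groups
# simply because mitochondrial content (proxied by citrate synthase, CS,
# activity) differs, while respiration per unit CS activity may not.
# The numbers below are made up for illustration; they are not study data.

groups = {
    # (respiration per mg fiber, CS activity in U/mg)
    "control": (10.0, 70.6),
    "copd":    (7.5, 52.7),   # CS values echo PUBMED:29357496 means
}

for name, (resp_per_mg, cs_activity) in groups.items():
    resp_per_cs = resp_per_mg / cs_activity  # respiration per unit CS
    print(f"{name}: per-mg = {resp_per_mg:.1f}, per-CS = {resp_per_cs:.3f}")

# A lower per-mg value with a similar per-CS value suggests reduced
# mitochondrial content rather than an intrinsic respiratory defect.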
Instruction: Is the reduction of the plasma levels of endothelin in the acute and sub-acute stage of myocardial infarct one of the beneficial effects of early treatment with ace inhibitors? Abstracts: abstract_id: PUBMED:8803588 Is the reduction of the plasma levels of endothelin in the acute and sub-acute stage of myocardial infarct one of the beneficial effects of early treatment with ace inhibitors? Background: Studies showed that endothelin-1 (ET-1) was increased in acute myocardial infarction (AMI). Experimental studies reported that captopril was able to reduce ET-1 secretion. In addition, increased levels of ET-1 were reported as a negative prognostic index. The study aimed to verify whether captopril was able to reduce plasma ET-1 levels in the acute and subacute phases of AMI. Methods: Forty-five patients, hospitalized for suspected anterior AMI within 4 h of the onset of symptoms, suitable for thrombolysis (first episode), in Killip class 1-2, were randomized (double blind) into two groups: Group A (23 patients, pts), 7 females and 16 males, received captopril 6.25 mg orally (as first dose) 2-4 h after starting thrombolysis, and the doses of captopril were successively increased up to 25 mg every 8 h. Group B (22 pts), 5 females and 17 males, received placebo after thrombolysis. All the patients met the reperfusion criteria. Results: The two groups were similar for age, sex, CK peak, ejection fraction, end-systolic volume and risk factors. Plasma ET levels were checked on admission and 2, 12, 24, 48, and 72 hours after starting thrombolysis. Mean concentrations of ET +/- SD: Group A: basal 1.50 +/- 0.67, at 2 h 2.31 +/- 1.24, 12 h 1.84 +/- 1.45, 24 h 1.30 +/- 0.72, 48 h 0.95 +/- 0.50, 72 h 0.60 +/- 0.15 fmol/ml (p < 0.001). Group B: basal 1.58 +/- 0.83, at 2 h 2.38 +/- 1.35, 12 h 2.33 +/- 1.71, 24 h 1.80 +/- 1.41, 48 h 1.46 +/- 0.88, 72 h 0.93 +/- 0.44 fmol/ml (p < 0.001). Difference between the two groups was significant at 48 h (p < 0.05) and 72 h (p < 0.001). Conclusions: Our data suggest that captopril affects plasma endothelin levels in the acute and subacute phases of AMI. In addition, our results seem to provide additional support for the beneficial effects of early captopril treatment in patients with AMI. abstract_id: PUBMED:9736439 Neurohormonal markers of clinical outcome in cardiovascular disease: is endothelin the best one? Endothelin-1 (ET-1) is the most potent vasoconstrictor yet described. The active 21-amino-acid peptide is derived from the conversion of the inactive precursor "Big ET-1" by an enzyme called endothelin-converting enzyme. In addition to its potent action as a vasoconstrictor, endothelin promotes growth and proliferation of smooth muscle and myocardial hypertrophy. ET-1 levels are elevated in acute myocardial infarction (MI), atherosclerosis, renal failure, diabetes, pulmonary hypertension, and congestive heart failure (CHF). ET-1 levels correlate extremely well with the seriousness of the pathophysiologic condition. ET-1 levels at 72 h post MI accurately predict long-term survival. In patients with heart failure, ET-1 levels also predict long-term outcome, with the prognosis being severely compromised in patients with elevated ET-1 levels. Levels of plasma big ET-1 have been demonstrated to predict 1-year mortality and have been shown to be a better predictor of 1-year outcome than plasma atrial natriuretic peptide and norepinephrine, NYHA class, age, and echocardiographic left ventricular parameters.
Although a small number of studies have reported beneficial effects of ACE inhibitors on ET-1 levels in animal models, most reports in humans have not found an effect of ACE inhibitors on ET-1 levels. Only one ACE inhibitor, fosinopril, has been shown to be effective in normalizing ET-1 levels in clinically relevant situations, such as the long-term study of patients with CHF. This observation may point to a superior role of fosinopril compared with other ACE inhibitors in CHF patients and may indicate beneficial effects of fosinopril beyond blood pressure control. abstract_id: PUBMED:9057069 Early captopril treatment reduces plasma endothelin concentrations in the acute and subacute phases of myocardial infarction: a pilot study. It has been reported that endothelin-1 (ET-1) increases in acute myocardial infarction (AMI). Experimental studies showed that captopril administration reduces ET-1 secretion. In addition, it was reported that the increased ET-1 levels are a negative prognostic index. The study sought to verify whether captopril can reduce plasma ET levels in the acute and subacute phases of reperfused anterior AMI. Forty-five patients, hospitalized for suspected anterior AMI within 4 h from the onset of symptoms, suitable for thrombolysis (first episode), Killip class 1-2, were randomized (double blind) into two groups: group A (23; seven women/16 men) received captopril (as first dose) 2-4 h after starting thrombolysis (the dose was then increased up to 25 mg every 8 h). Group B (22; five women/17 men) received placebo after thrombolysis. All the patients met the reperfusion criteria. The two groups were similar with regard to age, sex, CK peak, ejection fraction, end-systolic volume and risk factors. Plasma ET levels were measured at entry, and 2, 12, 24, 48, and 72 h after starting thrombolysis. Mean concentrations of ET +/- SD: Group A basal, 1.50 +/- 0.67; at 2 h, 2.31 +/- 1.24; 12 h, 1.84 +/- 1.45; 24 h, 1.30 +/- 0.72; 48 h, 0.95 +/- 0.50; 72 h, 0.60 +/- 0.15 fmol/ml; p < 0.001. Group B basal, 1.58 +/- 0.83; at 2 h, 2.38 +/- 1.35; 12 h, 2.33 +/- 1.71; 24 h, 1.80 +/- 1.41; 48 h, 1.46 +/- 0.88; 72 h, 0.93 +/- 0.44 fmol/ml; p < 0.001. Difference between the two groups was significant at the beginning of the test (between 2 and 12 h, p = 0.002). After that, the values of the plasma endothelin decreased in parallel, p < 0.001. Our data suggest that captopril affects plasma ET levels in the acute and subacute phases of AMI. Moreover, these results provide additional evidence for a beneficial effect of early captopril treatment. abstract_id: PUBMED:8668893 ACE-inhibitors in acute heart infarct. The use of ACE-inhibitors in heart failure has been established over the past years. Their use is of uncertain value in the early phases of myocardial infarction, where they are supposed to prevent left ventricular dilatation. More recent studies (ISIS-4, GISSI-3) have tested early treatment with ACE-inhibitors in the acute phase of myocardial infarction. On one hand, it was possible to disprove reservations about risks (hypotension) in a large cohort; on the other hand, a further reduction of mortality in hospitalized patients by 7% has been shown, corresponding to five patient lives saved for 1000 treated patients.
Thus, after institution of the customary therapy of myocardial infarction (inhibitor of platelet aggregation, thrombolysis, beta-blocker) and after exclusion of specific contraindications (hypotension < 100 mmHg, renal failure), ACE-inhibitors could be administered in the acute phase of myocardial infarction. An analysis of the results from these large trials will show whether ACE-inhibitors may noticeably benefit groups of patients at particular risk (Killip > 1, age > 70 years, preceding renal failure). ACE-inhibitors remain the treatment of choice in patients with developing left ventricular failure. abstract_id: PUBMED:9547443 Effects of angiotensin-converting enzyme inhibitor on plasma B-type natriuretic peptide levels in patients with acute myocardial infarction. Background: Plasma levels of B-type natriuretic peptide (BNP) are markedly increased in patients with heart failure and acute myocardial infarction. The changes in plasma BNP levels in the treatment of acute myocardial infarction with angiotensin-converting enzyme inhibitors have not been examined well. This study was designed to examine the effects of early angiotensin-converting enzyme inhibitor therapy on plasma BNP levels in patients with acute myocardial infarction. Methods And Results: We measured the plasma levels of B-type natriuretic peptide over a time course of 2 weeks in 30 patients with acute myocardial infarction in whom either imidapril (n = 15) or placebo (n = 15) was given at random immediately after admission. In the placebo group, plasma BNP levels increased and reached a peak of 192 +/- 28 pg/mL 16 hours after administration; thereafter, the levels decreased and then again increased, forming a second peak of 217 +/- 38 pg/mL on the fifth day (biphasic pattern). On the other hand, plasma BNP levels increased and reached a peak level of 190 +/- 22 pg/mL 16 hours after admission and then decreased from 2 days after admission until the second week in the imidapril group (monophasic pattern). Left ventricular ejection fraction measured in the second week was significantly higher in the imidapril group than in the control group (62.2 +/- 1.1% vs 51.2 +/- 3.6%, P < .01). Conclusion: It is concluded that plasma BNP levels followed a monophasic pattern after imidapril treatment, whereas a biphasic pattern was followed after placebo, and that plasma BNP levels constitute a marker of ventricular dysfunction in the treatment of acute myocardial infarction with angiotensin-converting enzyme inhibitors. abstract_id: PUBMED:11163738 Early neurohormonal effects of trandolapril in patients with left ventricular dysfunction and a recent acute myocardial infarction: a double-blind, randomized, placebo-controlled multicentre study. Angiotensin-converting enzyme inhibitors improve long-term survival in patients with left ventricular dysfunction after a myocardial infarction, but their mechanism of action is not entirely clear. The neurohormonal effects may be important in this respect, as well as an early hemodynamic unloading induced by these drugs. The primary objective was to assess the effect of trandolapril on plasma levels of atrial natriuretic peptide. A secondary objective was to assess the effects of trandolapril on selected neurohormones, vasoactive peptides and enzymes, which may be important in the development of left ventricular remodeling and heart failure following an acute myocardial infarction.
A total of 119 patients with an acute myocardial infarction and a wall motion index ≤1.2 (16-segment echocardiographic model) were randomized to double-blind treatment with trandolapril or placebo within 3-7 days after the onset of infarction. Blind treatment was discontinued 21 days after the index infarction. Venous blood samples were collected at rest, before randomization and on the day after treatment was discontinued. At the end of the study, there were no differences in plasma levels of atrial natriuretic peptide between the two treatment groups. Angiotensin-converting enzyme activity was suppressed and plasma renin activity was higher in the trandolapril group. No differences in plasma levels of N-terminal pro-atrial natriuretic peptide, brain natriuretic peptide, aldosterone, noradrenaline, adrenaline, vasopressin, big endothelin-1 and neuropeptide Y were found between the two treatment groups. There were positive correlations between several markers of neurohormonal activation at baseline and variables expressing left ventricular dysfunction and clinical heart failure. Neurohormonal activation is related to left ventricular dysfunction. The effects of 2-3 weeks of angiotensin-converting enzyme inhibition on neurohormonal activation do not predict the already established beneficial long-term effects after myocardial infarction. Thus, early modulation of circulatory neurohormone levels may not be a major mechanism for the efficacy of angiotensin-converting enzyme inhibitors in these patients. Selected plasma hormone markers may still be used to identify patients who might get the greatest benefit from treatment. abstract_id: PUBMED:2483242 Potential use of ACE inhibitors after acute myocardial infarction. Many studies in recent years have dealt with the use of angiotensin converting enzyme (ACE) inhibitors for the treatment of ischemic heart disease. The renin-angiotensin system is widely distributed in plasma and peripheral tissues and is activated following certain conditions including myocardial ischemia. The effects of ACE inhibitors on the ischemic heart are apparently many and achieved through a blockade of plasma and tissue ACE as well as through their property of scavenging free radicals. Captopril seems to exert a beneficial effect in preventing heart failure after myocardial infarction, probably by restoring the contractile function of infarcted myocardium. An antianginal effect of ACE inhibitors has been demonstrated in patients with coronary artery disease along with a potentiation of isosorbide dinitrate coronary vasodilator capacity. The antiarrhythmic efficacy, clearly evident in animal models, deserves further investigation in humans. The above-listed effects of ACE inhibitors and the suppressive action demonstrated for captopril on platelet aggregation could represent a very useful tool for the future treatment of patients with coronary artery disease. abstract_id: PUBMED:9468078 Effect of early captopril treatment on blood adrenaline levels in acute myocardial infarction (the substudy of ISIS-4). International Study of Infarct Survival-4. Among patients with acute myocardial infarction eligible for the International Study of Infarct Survival-4 who were randomized to captopril (n = 30) or placebo (n = 33), the captopril group had a significant decrease in blood adrenaline on day 3 compared with baseline values.
Results suggest that suppression of sympathetic activity contributes to the beneficial effects of treatment with angiotensin-converting enzyme inhibitors in the early phase of acute myocardial infarction. abstract_id: PUBMED:9244210 Effects of ramipril on plasma fibrinolytic balance in patients with acute anterior myocardial infarction. HEART Study Investigators. Background: The long-term administration of ACE inhibitors to selected patients with left ventricular dysfunction appears to reduce the incidence of recurrent myocardial infarction (MI) and unstable angina pectoris. The mechanisms responsible for the reduction in ischemic events are unknown, but likely candidates include effects on the atherosclerotic process, thrombosis, and/or vascular tone. Methods And Results: The effects of ACE inhibitor therapy with ramipril on plasma fibrinolytic variables were assessed in 120 subjects participating in the Healing and Early Afterload Reduction Therapy (HEART) study, a double-blind, placebo-controlled trial of acute anterior MI patients who were randomly assigned within 24 hours of the onset of symptoms to receive low-dose ramipril (0.625 mg daily), full-dose ramipril (1.25 mg titrated to 10 mg/d), or placebo for 14 days. Plasma levels of plasminogen activator inhibitor-1 (PAI-1) activity and PAI-1 antigen and tissue plasminogen activator (TPA) antigen were measured before randomization and on day 14. Clinical characteristics of the three study groups were similar, as were the prerandomization plasma levels of PAI-1 antigen, PAI-1 activity, and TPA antigen. Compared with the placebo group, PAI-1 antigen levels were 44% lower (P=.004) at day 14 in the ramipril-treated patients, and PAI-1 activity levels were 22% lower (P=.02). In contrast, plasma TPA levels were not significantly different between the placebo-treated and ramipril-treated groups. Conclusions: Treatment with ramipril has a significant impact on plasma fibrinolytic variables during the recovery phase after acute MI. The renin-angiotensin system appears to play an important role in the regulation of vascular fibrinolysis, and interruption of this regulatory pathway may contribute to the clinical benefits of ACE inhibitors. abstract_id: PUBMED:28065766 Changes in One-Year Mortality in Elderly Patients Admitted with Acute Myocardial Infarction in Relation with Early Management. Background: Elderly patients are underrepresented in acute myocardial infarction trials. Our aim was to determine whether, in elderly patients, changes in management in the past 15 years are associated with improved 1-year mortality after hospital admission for myocardial infarction. Methods: We used data from 4 1-month French registries, conducted 5 years apart from 1995 to 2010, including 3389 elderly patients (≥75 years of age). Results: From 1995 to 2010, mean age remained stable (82.1 years), similar in ST- and non-ST-elevation myocardial infarction patients. Obesity, diabetes, hypertension, and hypercholesterolemia increased. History of prior myocardial infarction, stroke, and peripheral artery disease remained stable, while history of heart failure decreased. Major changes in management were noted: early percutaneous coronary intervention, early treatment with antiplatelet agents, low-molecular-weight heparin, beta-blockers, angiotensin-converting enzyme inhibitor or angiotensin receptor blocker, and statins all increased. Early mortality after hospital admission decreased from 25.0% to 8.4%. 
One-year mortality decreased from 36.2% to 20.0% (adjusted hazard ratio 2010 vs 1995: 0.47, 0.39-0.57), both for ST-elevation myocardial infarction (36.8% to 21.1%) and non-ST-elevation myocardial infarction (34.8% to 19.1%). Mortality reduction was observed in all age groups, including those ≥85 years of age (from 46.2% to 31.4%). The study period, however, was no longer associated with decreased mortality when variables reflecting management changes were taken into account. Conclusions: Early and 1-year mortality after hospital admission of elderly patients with acute myocardial infarction has substantially decreased over the past 15 years. This improvement is likely mediated by increasing use of recommended management strategies. These data support the application of guidelines derived from trials mostly including younger patients to elderly populations as well. Answer: Yes, the reduction of plasma levels of endothelin (ET-1) in the acute and subacute stages of myocardial infarction (MI) is one of the beneficial effects of early treatment with ACE inhibitors. Studies have shown that ET-1 levels are elevated in acute myocardial infarction and are associated with a negative prognosis (PUBMED:9736439). Captopril, an ACE inhibitor, has been demonstrated to reduce plasma ET-1 levels in the acute and subacute phases of AMI (PUBMED:8803588; PUBMED:9057069). This reduction was significant at 48 and 72 hours post-thrombolysis when compared to placebo, suggesting that captopril affects ET levels during these critical phases of AMI (PUBMED:8803588). Moreover, the reduction in ET-1 levels by captopril provides additional evidence for the beneficial effects of early ACE inhibitor treatment in patients with AMI (PUBMED:9057069). While most reports in humans have not found an effect of ACE inhibitors on ET-1 levels, fosinopril, another ACE inhibitor, has been shown to normalize ET-1 levels in patients with congestive heart failure (CHF), indicating potential beneficial effects beyond blood pressure control (PUBMED:9736439). Additionally, the use of ACE inhibitors in the early phases of myocardial infarction is associated with a reduction in mortality and prevention of left ventricular dilatation (PUBMED:8668893). Overall, the evidence suggests that the modulation of ET-1 levels is one of the mechanisms by which ACE inhibitors exert their beneficial effects in the context of acute myocardial infarction, contributing to improved clinical outcomes.
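The captopril effect described above can be summarized with simple arithmetic on the published group means: the relative fall in plasma ET-1 from baseline to 72 h in each arm of PUBMED:8803588/PUBMED:9057069. The sketch below merely restates those means; it is illustrative and not a statistical re-analysis.

# Simple arithmetic on the group means reported in PUBMED:8803588 /
# PUBMED:9057069 (fmol/ml): relative fall in plasma ET-1 from baseline
# to 72 h in the captopril and placebo arms.

et_levels = {
    "captopril (group A)": {"baseline": 1.50, "72h": 0.60},
    "placebo (group B)":   {"baseline": 1.58, "72h": 0.93},
}

for arm, values in et_levels.items():
    fall = values["baseline"] - values["72h"]
    pct = 100 * fall / values["baseline"]
    print(f"{arm}: {values['baseline']:.2f} -> {values['72h']:.2f} fmol/ml "
          f"({pct:.0f}% decrease)")

# Captopril: 60% decrease; placebo: 41% decrease, consistent with the
# significant between-group differences reported at 48 h and 72 h.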
Instruction: Does endoscopic ultrasound have anything to offer in the diagnosis of idiopathic acute pancreatitis? Abstracts: abstract_id: PUBMED:30984704 An investigation into the sensitivity of endoscopic ultrasound in the diagnosis of malignant bile duct in patients with idiopathic acute pancreatitis. Introduction And Objective: Acute pancreatitis (AP) is an inflammatory process of the pancreas characterized by abdominal pain and increased pancreatic enzymes. This disease is diagnosed clinically. Endoscopic ultrasound (EUS), which is a technique with high sensitivity and specificity, is used to diagnose biliary disease. This study aimed to determine the sensitivity of EUS in the diagnosis of malignant bile duct in patients with idiopathic AP. Methods: This descriptive study was performed on 146 patients with pancreatitis hospitalized in the gastrointestinal tract section of the Imam Khomeini Hospital of Ahwaz Jundishapur University of Medical Sciences. The collected data were analyzed with SPSS 22.0, and the significance level of the test was <0.05. Results: According to the results, 79 (54%) out of the 146 patients were female and 67 (46%) were male. The mean and standard deviation of the patients' age were 52.5 and 19.6 years, respectively. The findings showed that the sensitivity and specificity of the EUS were 33% and 99%, respectively. Compared to endoscopic retrograde cholangiopancreatography (ERCP), the sensitivity and specificity of the abdominal ultrasound were 62% and 62.5%, respectively. Compared to ERCP, the sensitivity and specificity of EUS were 92% and 50%, respectively. Conclusion: The findings of this study showed that the sensitivity and specificity of EUS were higher than those of abdominal ultrasound. Moreover, EUS was the preferred method to detect common bile duct stones (CBDS). abstract_id: PUBMED:29097868 Role of endoscopic ultrasound in idiopathic pancreatitis. Recurrent acute pancreatitis (RAP) is defined based on the occurrence of two or more episodes of acute pancreatitis. The initial evaluation fails to detect the cause of RAP in 10%-30% of patients, whose condition is classified as idiopathic RAP (IRAP). Idiopathic acute pancreatitis (IAP) is a diagnostic challenge for gastroenterologists. In view of associated morbidity and mortality, it is important to determine the aetiology of pancreatitis to provide early treatment and prevent recurrence. Endoscopic ultrasound (EUS) is an investigation of choice for imaging of the pancreas and biliary tract. In view of the high diagnostic accuracy and safety of EUS, an EUS-based management strategy appears to be a reasonable approach for the evaluation of patients with single/recurrent idiopathic pancreatitis. The most common diagnosis by EUS in IAP is biliary tract disease. The present review aims to discuss the role of EUS in the clinical management and diagnosis of patients with IAP. It elaborates the diagnostic approach to IAP in relation to EUS and other different modalities. Controversial issues in IAP, such as when to perform EUS, whether to perform it after the first episode or only after recurrent episodes, and how the different investigations compare in light of the latest evidence, are detailed. abstract_id: PUBMED:31197695 Endoscopic Ultrasound for Routine Assessment in Idiopathic Acute Pancreatitis. Background: Acute pancreatitis (AP) is one of the most common general acute surgical presentations. Current recommendations are that idiopathic acute pancreatitis (IAP) should account for no more than 20% of AP cases.
Some studies suggest gallbladder microlithiasis is the aetiology in up to 75% of IAP patients. Endoscopic ultrasound (EUS) has been reported to be effective in the detection of microlithiasis and choledocholithiasis as well as pancreatic parenchymal, ductal and ampullary disorders. The aims of this study were to evaluate the usefulness of EUS in establishing aetiology in IAP patients and to assess whether there is a role for EUS in the selection criteria for laparoscopic cholecystectomy to treat a potential biliary cause in IAP patients. Methods: A systematic review following PRISMA guidelines was performed to gather data on patients with IAP undergoing EUS for further investigation. Three databases (MEDLINE, PubMed, and EMBASE) were searched to 28 July 2018. Results: Our systematic review included 28 studies, comprising 1850 patients with an initial diagnosis of IAP prior to having EUS. Diagnosis of a potential aetiology or associated pancreatic pathology was established in 1095 (62%, p < 0.001) of cases. A biliary aetiology (microlithiasis or choledocholithiasis) was found in 37%. Chronic pancreatitis and associated pancreatic findings (dilated pancreatic duct, pancreatic duct stricture or stone) were found in 21%. Pancreatic neoplasms were found in 6%. Of the patients who had identifiable biliary pathology on EUS and proceeded to cholecystectomy, 2% had a recurrence of AP during a mean follow-up period of 20.5 months. Conclusions: There is a likely role for the routine use of EUS in the assessment of patients with IAP. The routine use of EUS may decrease the proportion of cases with a diagnosis of IAP. EUS may provide better selection criteria for laparoscopic cholecystectomy in patients with an initial diagnosis of IAP. abstract_id: PUBMED:37892077 Endoscopic Ultrasound to Identify the Actual Cause of Idiopathic Acute Pancreatitis: A Systematic Review. Idiopathic acute pancreatitis (IAP) presents a diagnostic challenge and refers to cases where the cause of acute pancreatitis remains uncertain despite a comprehensive diagnostic evaluation. Endoscopic ultrasound (EUS) has emerged as a valuable tool in the diagnostic workup of IAP. This review explores the pivotal role of EUS in detecting the actual cause of IAP and assessing its accuracy, timing, safety, and future technological improvement. In this review, we investigate the role of EUS in identifying the actual cause of IAP by examining the available literature. We aim to assess possible existing evidence regarding EUS accuracy, timing, and safety and explore potential trends of future technological improvements in EUS for diagnostic purposes. Following PRISMA guidelines, 60 pertinent studies were selected and analysed. EUS emerges as a crucial diagnostic tool, particularly when conventional imaging fails. It can offer intricate visualization of the pancreas, biliary system, and adjacent structures. Microlithiasis, biliary sludge, chronic pancreatitis, and small pancreatic tumors seem to be much more accurately identified with EUS in the setting of IAP. The optimal timing for EUS is post-resolution of the acute phase of the disease. With a low rate of complications, EUS poses minimal safety concerns. EUS-guided interventions, including fine-needle aspiration, collection drainage, and biopsies, aid in the cytological analysis. With high diagnostic accuracy, safety, and therapeutic potential, EUS is able to improve patient outcomes when managing IAP.
Further refinement of EUS techniques and cost-effectiveness assessment of EUS-guided approaches need to be explored in multicentre prospective studies. This review underscores EUS as a transformative tool in unraveling IAP's enigma and advancing diagnostic and therapeutic strategies. abstract_id: PUBMED:27375362 Comparing the Roles of EUS, ERCP and MRCP in Idiopathic Acute Recurrent Pancreatitis. Acute recurrent pancreatitis (ARP) is defined as more than two attacks of acute pancreatitis with complete or almost complete resolution of symptoms and signs of pancreatitis between episodes. The initial evaluation fails to detect the cause of ARP in 10%-30% of patients, whose condition is classified as idiopathic ARP. Endoscopic ultrasound (EUS) has gained increasing attention as a useful imaging modality for the pancreas and the extrahepatic biliary tree. The close proximity of the pancreas to the digestive tract allows EUS to obtain detailed images of this organ. This review aims to record pancreaticobiliary endoscopic ultrasound (EUS) and other imaging modalities in the clinical management of patients with idiopathic ARP. abstract_id: PUBMED:31529412 Role of Endoscopic Ultrasound in Detecting Pancreatic Cancer Missed on Cross-Sectional Imaging in Patients Presenting with Pancreatitis: A Retrospective Review. Introduction: Pancreatic cancer is the fourth leading cause of cancer-related death in the USA. Early detection of pancreatic cancer may help improve patient survival. It has been hypothesized that acute idiopathic or chronic pancreatitis is associated with an increased risk of pancreatic cancer; however, these conditions may also represent an early manifestation of pancreatic cancer, rather than just being risk factors. Endoscopic ultrasound (EUS) is a sensitive diagnostic modality for the detection of small, early-stage pancreatic tumors. The aim of this study was to evaluate the diagnostic yield of EUS for pancreatic cancer in patients with acute idiopathic or chronic pancreatitis when cross-sectional imaging (CT and/or MRI) was negative for a mass lesion in the pancreas. Methods: This study was an IRB-approved retrospective chart review conducted for the period of August 2005 to September 2018. Any patient presenting with acute idiopathic or chronic pancreatitis with a CT and/or MRI imaging negative for a pancreatic mass lesion that underwent an EUS during the study period was selected for inclusion. A retrospective review was performed to evaluate the outcomes of patients who had pancreatic cancer diagnosed from an EUS-FNA (fine needle aspiration) sample. Data were collected on patient demographics and clinical characteristics, inclusive of specific post-diagnosis treatment course. An "event rate" was calculated and is defined as the number of positive pancreatic cancer diagnoses on EUS-FNA from all patients presenting with acute idiopathic or chronic pancreatitis who underwent an EUS examination following a CT and/or MRI study negative for pancreatic mass lesion. Results: A total of 565 patients met inclusion criteria, with 30 cases of confirmed pancreatic cancer diagnosed with EUS-FNA from this group. The event rate for EUS diagnosis of pancreatic cancer was 5.3%. The majority of patients (52.0%) diagnosed with cancer were stages I-II. 
Conclusions: Endoscopic ultrasound should be a routine part of the diagnostic algorithm when evaluating a patient with acute idiopathic or chronic pancreatitis of unclear etiology, particularly when cross-sectional imaging is negative for a mass lesion and clinical suspicion is high for neoplasia. Further prospective studies are needed to evaluate the role of EUS in this setting. abstract_id: PUBMED:21160725 Endoscopic ultrasonography and idiopathic acute pancreatitis. Idiopathic acute pancreatitis is a diagnostic challenge for gastroenterologists. The possibility of finding a cause for pancreatitis usually relies on how far the diagnostic study is taken. Endoscopic explorations such as endoscopic retrograde cholangiopancreatography and endoscopic ultrasonography can help to determine the cause of pancreatitis. Furthermore, microscopic bile examination and magnetic resonance cholangiopancreatography can also be helpful in the work up of these patients. In this article an approximation to the diagnostic approach to patients with idiopathic acute pancreatitis is made, taking into account the reported evidence with which to choose between the different available explorations. abstract_id: PUBMED:11280538 Endoscopic ultrasound in idiopathic acute pancreatitis. Objective: The aim of this study was to determine the utility of endoscopic ultrasound (EUS) in patients with unexplained acute pancreatitis, and whether endoscopic retrograde cholangiopancreatography (ERCP) is subsequently needed. Methods: Subjects who underwent EUS for assessment of idiopathic acute pancreatitis were identified, their medical records were reviewed, and they were contacted for a follow-up telephone interview. EUS diagnosis was compared with the final diagnosis and outcome. Results: EUS revealed a cause of pancreatitis in 21 of the 31 subjects (68%), including microlithiasis in five (16%), chronic pancreatitis in 14 (45%), pancreas divisum in two (6.5%), pancreatic cancer in one (3.2%), and was not diagnostic in 10 (32%). During a mean follow-up period of 16 months, diagnosis changed in four subjects (13%), and nine subjects (29%) had ERCP because of persistent symptoms or recurrent pancreatitis. Conclusion: EUS, a less invasive test than ERCP, demonstrated an etiology in two-thirds of patients with idiopathic acute pancreatitis. Most patients did not require ERCP during the follow-up period. EUS can be an alternative to ERCP in patients with unexplained acute pancreatitis. abstract_id: PUBMED:33128884 Endoscopic Ultrasound. Endoscopic ultrasound provides high-resolution, real-time imaging of the gastrointestinal tract and surrounding extramural structures. In recent years, endoscopic ultrasound has played an increasing role as an adjunct or alternative method to conventional surgical therapies. The role of endoscopic ultrasound in diagnosis and management of gastrointestinal malignancy, pancreatic diseases, and biliary diseases continues to evolve. Therapeutic endoscopic ultrasound procedures for a variety of pancreatic and biliary indications shows a high technical and clinical success rate, with low rate of adverse events. Endoscopic ultrasound plays a key role in multidisciplinary management of complex surgical and oncology patients and those with pancreaticobiliary disorders. abstract_id: PUBMED:35978715 Endoscopic ultrasound diagnostic gain over computed tomography and magnetic resonance cholangiopancreatography in defining etiology of idiopathic acute pancreatitis. 
Background: About 10%-30% of acute pancreatitis cases remain idiopathic (IAP) even after clinical and imaging tests, including abdominal ultrasound (US), contrast-enhanced computed tomography (CECT) and magnetic resonance cholangiopancreatography (MRCP). This is a relevant issue, as up to 20% of patients with IAP have recurrent episodes and 26% of them develop chronic pancreatitis. Few data are available on the role of EUS in clarifying the etiology of IAP after failure of one or more cross-sectional techniques. Aim: To evaluate the diagnostic gain after failure of one or more previous cross-sectional exams. Methods: We retrospectively collected data about consecutive patients with AP and at least one negative test among US, CECT and MRCP, who underwent linear EUS between January 2017 and December 2020. We investigated the EUS diagnostic yield and the EUS diagnostic gain over different combinations of these cross-sectional imaging techniques for the etiologic diagnosis of AP. Types and frequency of EUS diagnosis were also analyzed, and EUS diagnosis was compared with the clinical parameters. After EUS, patients were followed up for a median of 31.5 mo to detect cases of pancreatitis recurrence. Results: We enrolled 81 patients (63% males, mean age 61 ± 18, 23% with previous cholecystectomy, 17% with recurrent pancreatitis). Overall EUS diagnostic yield for AP etiological diagnosis was 79% (20% lithiasis, 31% acute on chronic pancreatitis, 14% pancreatic solid or cystic lesions, 5% pancreas divisum, 5% autoimmune pancreatitis, 5% ductal abnormalities), while 21% remained idiopathic. US, CECT and MRCP, taken alone or in combination, led to AP etiological diagnosis in 16 (20%) patients; among the remaining 65 patients, 49 (75%) obtained a diagnosis at EUS, with an overall EUS diagnostic gain of 61%. Sixty-eight patients had negative US; among them, EUS allowed etiological diagnosis in 59 (87%). Sixty-three patients had a negative CECT; among them, 47 (74%) obtained a diagnosis with EUS. Twenty-four had a negative MRCP; among them, 20 (83%) had EUS diagnosis. Twenty-one had negative CT + MRCP, of which 17 (81%) had EUS diagnosis, with an EUS diagnostic gain of 63%. Patients with biliary etiology and without previous cholecystectomy had higher median values of alanine aminotransferase (154 vs 25, P = 0.010), aspartate aminotransferase (95 vs 29, P = 0.018), direct bilirubin (1.2 vs 0.6, P = 0.015), gamma-glutamyl transpeptidase (180 vs 48, P = 0.006) and alkaline phosphatase (150 vs 72, P = 0.015). Chronic pancreatitis diagnosis was more frequent in patients with recurrent pancreatitis at baseline (82% vs 21%, P < 0.001). During the follow-up, AP recurred in 3 patients, one of whom remained idiopathic. Conclusion: EUS is a good test to define AP etiology. It showed a 63% diagnostic gain over CECT + MRCP. In suitable patients, EUS should always be performed in cases of IAP. Further prospective studies are needed. Answer: Yes, endoscopic ultrasound (EUS) has a significant role in the diagnosis of idiopathic acute pancreatitis (IAP). EUS is an important imaging modality for the pancreas and biliary tract, and it has been shown to have high diagnostic accuracy and safety in evaluating patients with IAP. It is particularly useful when the initial evaluation fails to detect the cause of recurrent acute pancreatitis (RAP) or IAP, which can occur in 10%-30% of patients (PUBMED:29097868).
EUS has been reported to be effective in detecting microlithiasis, choledocholithiasis, pancreatic parenchymal, ductal, and ampullary disorders, which are potential etiologies in IAP patients. A systematic review found that EUS established a potential etiology or associated pancreatic pathology in 62% of cases, with a biliary etiology found in 37% of patients (PUBMED:31197695). Additionally, EUS has been shown to have a higher sensitivity and specificity than abdominal ultrasound in detecting common bile duct stones (CBDS) (PUBMED:30984704). The review of literature also indicates that EUS can identify causes of IAP that are not detected by conventional imaging, such as small pancreatic tumors, chronic pancreatitis, and biliary sludge. The optimal timing for EUS is after the resolution of the acute phase of the disease, and it poses minimal safety concerns (PUBMED:37892077). Furthermore, EUS can be an alternative to more invasive tests like endoscopic retrograde cholangiopancreatography (ERCP) in patients with unexplained acute pancreatitis (PUBMED:11280538). In cases where cross-sectional imaging such as computed tomography (CT) and magnetic resonance imaging (MRI) are negative for a mass lesion, EUS has been shown to have a diagnostic yield of 5.3% for pancreatic cancer in patients presenting with pancreatitis, suggesting its utility in detecting neoplasia in such scenarios (PUBMED:31529412). Overall, EUS is recommended as a routine part of the diagnostic algorithm for patients with IAP, especially when other imaging modalities fail to identify the cause, and it may help in better selecting patients for therapeutic interventions like laparoscopic cholecystectomy (PUBMED:31197695).
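The sensitivity and specificity figures quoted for EUS (for example, 92% and 50% against ERCP in PUBMED:30984704) follow from a standard 2x2 comparison against a reference test. The sketch below shows that calculation with invented counts chosen to reproduce the same percentages, since the abstract reports proportions rather than the raw table.

# Minimal sketch of how sensitivity and specificity, as quoted for EUS
# in the abstracts above, are derived from a 2x2 table against a
# reference standard (e.g. ERCP). The counts below are invented for
# illustration only; the studies report percentages, not raw tables.

def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # true positives / all with disease
    specificity = tn / (tn + fp)  # true negatives / all without disease
    return sensitivity, specificity

# Hypothetical example: 25 patients with a biliary cause on the
# reference standard, 23 detected by EUS; 50 without a biliary cause,
# 25 correctly called negative.
sens, spec = sensitivity_specificity(tp=23, fn=2, tn=25, fp=25)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
# This mirrors the 92% sensitivity / 50% specificity quoted for EUS
# versus ERCP in PUBMED:30984704.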
Instruction: Can a glucagon stimulation test characterized by lower GH cut-off value be used for the diagnosis of growth hormone deficiency in adults? Abstracts: abstract_id: PUBMED:26129876 Can a glucagon stimulation test characterized by lower GH cut-off value be used for the diagnosis of growth hormone deficiency in adults? Objective: The aim of this study was to assess diagnostic values of insulin tolerance test (ITT), glucagon stimulation test (GST), and insulin like growth factor-I (IGF-I) level, to find optimal GH cut-off values for GST, and to evaluate efficiencies of patient age, gender, body-mass index (BMI), and additional pituitary hormone deficiencies (PHDs) in the diagnosis of growth hormone deficiency (GHD). Study Design: This retrospective study involved 216 patients with a pituitary disease and 26 healthy controls. Age, gender, BMI, medical histories, and hormonal data including baseline and stimulated hormone values were evaluated. Three cut-off values for peak GH responses to stimulation tests were evaluated: (a) 3.00 µg/L on ITT, (b) 3.00 µg/L on GST, and (c) 1.07 µg/L on GST. Results: According to the ITT, GST with 3.00 µg/L cut-off, and GST with 1.07 µg/L cut-off, GHD was present in 86.1, 74.5, and 54.2 % patients, respectively. Patient age, BMI, and number of PHDs, but not gender, were found to be correlated with IGF-I and peak GH concentrations. All patients with an IGF-I concentration ≤95 ng/ml or ≥3 PHD had GHD. None of the patients with adequate GH response to the GST with 1.07 µg/L cut-off, but blunted responses to ITT and GST with 3.00 µg/L cut-off, had ≥3 PHDs. 12 out of 26 (46.2 %) healthy subjects failed the GST with 3.00 µg/L cut-off, but not with 1.07 µg/L cut-off. Conclusions: Patient age, IGF-I, BMI, and number of PHDs are efficient factors associated with the diagnosis of GHD. A 4 h GST with a diagnostic GH threshold of 1.07 µg/L seems to be a good diagnostic method for GHD. abstract_id: PUBMED:26897383 Revised GH and cortisol cut-points for the glucagon stimulation test in the evaluation of GH and hypothalamic-pituitary-adrenal axes in adults: results from a prospective randomized multicenter study. Context: Recent studies suggest using lower GH cut-points for the glucagon stimulation test (GST) in diagnosing adult GH deficiency (GHD), especially in obese patients. There are limited data on evaluating GH and hypothalamic-pituitary-adrenal (HPA) axes using weight-based dosing for the GST. Objective: To define GH and cortisol cut-points to diagnose adult GHD and secondary adrenal insufficiency (SAI) using the GST, and to compare fixed-dose (FD: 1 or 1.5 mg in patients &gt;90 kg) with weight-based dosing (WB: 0.03 mg/kg). Response to the insulin tolerance test (ITT) was considered the gold standard, using GH and cortisol cut-points of ≥3 ng/ml and ≥18 µg/dL, respectively. Design: 28 Patients with hypothalamic-pituitary disease and 1-2 (n = 14) or ≥3 (n = 14) pituitary hormone deficiencies, and 14 control subjects matched for age, sex, estrogen status and body mass index (BMI) underwent the ITT, FD- and WB-GST in random order. Results: Age, sex ratio and BMI were comparable between the three groups. The best GH cut-point for diagnosis of GHD was 1.0 (92 % sensitivity, 100 % specificity) and 2.0 ng/mL (96 % sensitivity and 100 % specificity) for FD- and WB-GST, respectively. Age negatively correlated with peak GH during FD-GST (r = -0.32, P = 0.04), but not WB-GST. 
The best cortisol cut-point for diagnosis of SAI was 8.8 µg/dL (92 % sensitivity, 100 % specificity) and 11.2 µg/dL (92 % sensitivity and 100 % specificity) for FD-GST and WB-GST, respectively. Nausea was the most common side effect, and one patient had a seizure during the FD-GST. Conclusion: The GST correctly classified GHD using GH cut-points of 1 ng/ml for FD-GST and 2 ng/ml for WB-GST, hence using 3 ng/ml as the GH cut-point will misclassify some GH-sufficient adults. The GST may also be an acceptable alternative to the ITT for evaluating the HPA axis utilizing cortisol cut-points of 9 µg/dL for FD-GST and 11 µg/dL for WB-GST. abstract_id: PUBMED:38107725 Growth Hormone Cut-Off Post Glucagon Stimulation Test in an Indian Cohort of Overweight/Obese Hypopituitary Patients for the Diagnosis of Adult Growth Hormone Deficiency. Obesity has been associated with reduced growth hormone (GH) secretion, which might lead to the over-diagnosis of adult GH deficiency (GHD) in overweight (OW)/obese hypopituitary patients. Currently, there are no body mass index (BMI)-specific peak GH cut-offs for the glucagon stimulation test (GST) for assessing adult GHD in India, given that the BMI cut-offs vary for Asians. The study's main objective was to determine a peak GH cut-off level for the diagnosis of adult GHD in overweight (OW)/obese individuals utilizing the GST. Forty OW/obese subjects were studied in two groups of 20 each. The first group included 20 OW/obese hypopituitary adults and the second group included 20 control subjects. The intervention consisted of a 3 h GST. The main outcome measured was the peak GH level on GST. The mean age of control subjects was lower (33.15 ± 7.67 v/s. 42.10 ± 13.70 years; P = 0.017) in comparison with hypopituitary adults. The mean BMI (27.93 ± 1.63 v/s. 25.81 ± 1.66 kg/m2; P < 0.001), mean IGF1 (272.81 ± 38.57 v/s. 163.75 ± 42.42; P < 0.001), and mean HOMA IR (11.8 ± 9.7 v/s. 6.02 ± 3.14; P = 0.02) were greater in OW/obese controls. The mean GH peak was significantly higher in control subjects (5.41 ± 3.59 ng/mL v/s. 1.49 ± 1.25 ng/mL; P < 0.001) compared to hypopituitary subjects. ROC curve analysis demonstrated a GH cut-off of 3.3 ng/mL with a moderate sensitivity of 70% and high specificity of 95%, with an AUC of 0.838 (P < 0.001; 95% confidence interval [CI] of 0.710-0.965) for the diagnosis of GHD in overweight/obese hypopituitary adults. This study demonstrates that a cut-off of 3.3 ng/mL would diagnose GHD in Indian overweight/obese hypopituitary adults. abstract_id: PUBMED:37052176 Diagnosis and testing for growth hormone deficiency across the ages: a global view of the accuracy, caveats, and cut-offs for diagnosis. Growth hormone deficiency (GHD) is a clinical syndrome that can manifest either as isolated or associated with additional pituitary hormone deficiencies. Although diminished height velocity and short stature are useful and important clinical markers to consider testing for GHD in children, the signs and symptoms of GHD are not always so apparent in adults. Quality of life and metabolic health are often impacted in patients with GHD; thus, making an accurate diagnosis is important so that appropriate growth hormone (GH) replacement therapy can be offered to these patients.
Screening and testing for GHD require sound clinical judgment that follows after obtaining a complete medical history of patients with a hypothalamic-pituitary disorder and a thorough physical examination with specific features for each period of life, while targeted biochemical testing and imaging are required to confirm the diagnosis. Random measurements of serum GH levels are not recommended to screen for GHD (except in neonates) as endogenous GH secretion is episodic and pulsatile throughout the lifespan. One or more GH stimulation tests may be required, but existing methods of testing might be inaccurate, difficult to perform, and can be imprecise. Furthermore, there are multiple caveats when interpreting test results including individual patient factors, differences in peak GH cut-offs (by age and test), testing time points, and heterogeneity of GH and insulin-like growth factor 1 assays. In this article, we provide a global overview of the accuracy and cut-offs for diagnosis of GHD in children and adults and discuss the caveats in conducting and interpreting these tests. abstract_id: PUBMED:30963752 Assessing the adrenal axis by the glucagon stimulation test in children with idiopathic growth hormone deficiency. Approximately 30% of children with idiopathic growth hormone deficiency (IGHD) also suffer from other pituitary hormone deficiencies. Of children with IGHD, approximately 10% are unable to generate appropriate ACTH levels in response to stress. This study was prospectively designed to test the integrity of the adrenal axis in patients with an established diagnosis of IGHD using the glucagon stimulation test (GST). The study population comprised 39 patients with established childhood-onset IGHD. The diagnosis of GHD was established on the basis of failure of GH to increase over 10 ng/ml after two stimulation tests. The GST was performed by intra-muscular injection of 1 mg glucagon. The criterion followed to define adrenal deficiency was a cortisol level of less than 167 ng/ml in response to GST. The mean peak blood glucose level was 8.64 ± 1.71 mmol/l. Analysing the cohort using the cut-off of 167 ng/ml to define adrenal insufficiency under GST, 25.64% of children were diagnosed: 20% among males and 35.7% among females. Subjects with GH and ACTH deficiency had a mean peak GH of 2.07 ± 1.79 ng/ml, significantly lower than the GH peak of children with IGHD alone (p < 0.001). The frequency of children with combined somatotroph and corticotroph deficiencies with a GH peak < 3 ng/ml was 21% (p < 0.001). The current study identified a prevalence of adrenal insufficiency of 25.64%, which could predict greater risk for children if untreated, especially because a substantial proportion of patients do not present clinical symptoms. abstract_id: PUBMED:34418530 Diagnosing Growth Hormone Deficiency: Can a Combined Arginine and Clonidine Stimulation Test Replace 2 Separate Tests? Objective: Given the large number of false-positive growth hormone deficiency (GHD) diagnoses from a single growth hormone (GH) stimulation test in children, 2 different pharmacologic tests, performed on separate days or sequentially, are required. This study aimed to assess the reliability and safety of a combined arginine-clonidine stimulation test (CACST). Methods: This was a retrospective, single-center, observational study. During 2017-2019, 515 children aged >8 years underwent GH stimulation tests (CACST: n = 362 or clonidine stimulation test [CST]: n = 153).
The main outcome measures used to compare the tests were GH response (sufficiency/deficiency) and amplitude and timing of peak GH and safety parameters. Results: Population characteristics were as follows: median age of 12.2 years (interquartile range [IQR]: 10.7, 13.4), 331 boys (64%), and 282 prepubertal children (54.8%). The GHD rate was comparable with 12.7% for CACST and 14.4% for CST followed by a confirmatory test (glucagon or arginine) (P = .609). Peak GH was higher and occurred later in response to CACST compared with CST (14.6 ng/mL [IQR: 10.6, 19.4] vs 11.4 ng/mL [IQR: 7.0, 15.8], respectively, P &lt; .001; 90 minutes [IQR: 60, 90] vs 60 minutes [IQR: 60, 90], respectively, P &lt; .001). No serious adverse events occurred following CACST. Conclusion: Our findings demonstrate the reliability and safety of CACST in detecting GHD in late childhood and adolescence, suggesting that it may replace separate or sequential GH stimulation tests. By diminishing the need for the second GH stimulation test, CACST saves time, is more cost-effective, and reduces discomfort for children, caregivers, and medical staff. abstract_id: PUBMED:27409821 AMERICAN ASSOCIATION OF CLINICAL ENDOCRINOLOGISTS AND AMERICAN COLLEGE OF ENDOCRINOLOGY DISEASE STATE CLINICAL REVIEW: UPDATE ON GROWTH HORMONE STIMULATION TESTING AND PROPOSED REVISED CUT-POINT FOR THE GLUCAGON STIMULATION TEST IN THE DIAGNOSIS OF ADULT GROWTH HORMONE DEFICIENCY. Objective: The clinical features of adult GH deficiency (GHD) are nonspecific, and GH stimulation testing is often required to confirm the diagnosis. However, diagnosing adult GHD can be challenging due to the episodic and pulsatile GH secretion, concurrently modified by age, gender, and body mass index (BMI). Methods: PubMed searches were conducted to identify published data since 2009 on GH stimulation tests used to diagnose adult GHD. Relevant articles in English language were identified and considered for inclusion in the present document. Results: Testing for confirmation of adult GHD should only be considered if there is a high pretest probability, and the intent to treat if the diagnosis is confirmed. The insulin tolerance test (ITT) and glucagon stimulation test (GST) are the two main tests used in the United States. While the ITT has been accepted as the gold-standard test, its safety concerns hamper wider use. Previously, the GH-releasing hormone-arginine test, and more recently the GST, are accepted alternatives to the ITT. However, several recent studies have questioned the diagnostic accuracy of the GST when the GH cut-point of 3 μg/L is used and have suggested that a lower GH cut-point of 1 μg/L improved the sensitivity and specificity of this test in overweight/obese patients and in those with glucose intolerance. Conclusion: Until a potent, safe, and reliable test becomes available, the GST should remain as the alternative to the ITT in the United States. In order to reduce over-diagnosing adult GHD in overweight/obese patients with the GST, we propose utilizing a lower GH cut-point of 1 μg/L in these subjects. However, this lower GH cut-point still needs further evaluation for diagnostic accuracy in larger patient populations with varying BMIs and degrees of glucose tolerance. 
Abbreviations: AACE = American Association of Clinical Endocrinologists BMI = body mass index GH = growth hormone GHD = GH deficiency GHRH = GH-releasing hormone GHS = GH secretagogue GST = glucagon stimulation test IGF = insulin-like growth factor IGFBP-3 = IGF-binding protein 3 ITT = insulin tolerance test ROC = receiver operating characteristic WB-GST = weight-based GST. abstract_id: PUBMED:33567442 Diagnosis of Growth Hormone Deficiency in Children: The Efficacy of Glucagon versus Clonidine Stimulation Test. Introduction: The diagnosis of childhood growth hormone deficiency (GHD) requires a failure to respond to 2 GH stimulation tests (GHSTs) performed with different stimuli. The most commonly used tests are glucagon stimulation test (GST) and clonidine stimulation test (CST). This study assesses and compares GST and CST's diagnostic efficacy for the initial evaluation of short children. Methods: Retrospective, single-center, observational study of 512 short children who underwent GHST with GST first or CST first and a confirmatory test with the opposite stimulus in cases of initial GH peak &lt;7.5 ng/mL during 2015-2018. The primary outcome measure was the efficacy of the GST first or CST first in diagnosing GHD. Results: Population characteristics include median age of 9.3 years (interquartile range 6.2, 12.1), 78.3% prepubertal, and 61% boys. Subnormal GH response in the initial test was recorded in 204 (39.8%) children: 148 (45.5%) in GST first and 56 (30%) in CST first, p &lt; 0.001. Confirmatory tests verified GHD in 75/512 (14.6%) patients. Divergent results between the initial and confirmatory tests were more prevalent in GST first than CST first (103/148 [69.6%] vs. 26/56 [46.4%], p &lt; 0.001) indicating a significantly lower error rate for the CST first compared to the GST first. In multivariate analysis, the only significant predictive variable for divergent results between the tests was the type of stimulation test (OR = 0.349 [95% CI 0.217, 0.562], p &lt; 0.001). Conclusions: Screening of GH status with CST first is more efficient than that with GST first in diagnosing GHD in short children with suspected GHD. It is suggested that performing CST first may reduce the need for a second provocative test and avoid patients' inconvenience of undergoing 2 serial tests. abstract_id: PUBMED:18404387 Diagnosis of adult GH deficiency. Based on previous consensus statements, it has been widely accepted that the diagnosis of adult growth hormone deficiency (GHD) must be shown biochemically by provocative tests of GH secretion; in fact, the measurement of IGF-I as well as of other markers was considered unable to distinguish between normal and GHD subjects. The Insulin Tolerance Test (ITT) was indicated as that of choice and severe GHD defined by a GH peak lower than 3 microg/l. It is now recognized that, although normal IGF-I levels do not rule out severe GHD, very low IGF-I levels in patients highly suspected for GHD (i.e. patients with childhood-onset severe GHD or with multiple hypopituitarism acquired in adulthood) can be considered as definite evidence for severe GHD. However, patients suspected for adult GHD with normal IGF-I levels must be investigated by provocative tests. ITT remains a test of reference but it should be recognized that other tests are as reliable as ITT. Glucagon as classical test and, particularly, new maximal tests such as GHRH in combination with arginine or GH secretagogues (GHS) (i.e. 
GHRP-6) have well defined cut-off limits, are reproducible, able to distinguish between normal and GHD subjects. Overweight and obesity have confounding effect on the interpretation of the GH response to provocative tests. In adults cut-off levels of GH response below which severe GHD is demonstrated must be appropriate to lean, overweight and obese subjects to avoid false positive diagnosis in obese adults and false negative diagnosis in lean GHD patients. abstract_id: PUBMED:36750758 Comparison of glucagon stimulation test and low dose ACTH test in assessing hypothalamic-pituitary-adrenal (HPA) axis in children. Introduction: Children with a pituitary hormone deficiency are at risk for secondary adrenal insufficiency (AI). A stimulation test is usually performed for diagnosing AI, evaluating both the hypothalamic-pituitary-adrenal and growth hormone (GH)-IGF-1 axes. This single test is preferred by clinicians and is considerably more tolerable by patients. The objective of this study was to evaluate the glucagon stimulation test (GST), which is commonly used to assess both axes. Its diagnostic capability for GH deficiency is high and well accepted, however its utility for determining secondary AI has not been well established. Methods: This retrospective study involved 120 patients under 18 years of age with short stature who had undergone both a GST and low dose ACTH stimulation test (LDACTH test). Twenty-six children who had more than 6 months elapsed between the two tests were excluded from the study. The study was conducted on patients of the Pediatric Endocrinology Department at Soroka University Hospital, a tertiary medical centre in Beer Sheva, Israel. Statistical analyses were carried out via IBM SPSS (v. 22), with a significance level determined at p &lt; .05. Results: Different cortisol cut-off values were assessed for GST and it was determined that the highest combined sensitivity and specificity yielded a cut-off point of 320 nmol/L (56% sensitivity and 83% specificity) while the currently accepted cut-off value (500 nmol/L) yielded 100% sensitivity and 6% specificity. Conclusion: The results of this study show that GST is not an optimal tool for diagnosing secondary AI. Therefore, clinicians using this test should interpret its results with caution. Answer: Yes, a glucagon stimulation test (GST) characterized by lower growth hormone (GH) cut-off values can be used for the diagnosis of growth hormone deficiency (GHD) in adults. Several studies have investigated the diagnostic values of different GH cut-off values in GST for adults with suspected GHD. One study found that a 4-hour GST with a diagnostic GH threshold of 1.07 µg/L is a good diagnostic method for GHD, as all patients with an IGF-I concentration ≤95 ng/ml or ≥3 pituitary hormone deficiencies (PHDs) had GHD. This suggests that lower GH cut-off values in GST can be effective in diagnosing GHD in adults (PUBMED:26129876). Another study proposed revised GH cut-points for the GST in diagnosing adult GHD, suggesting that using a GH cut-point of 3 ng/ml may misclassify some GH-sufficient adults. The study recommended GH cut-points of 1 ng/ml for fixed-dose GST and 2 ng/ml for weight-based GST, which correctly classified GHD with high sensitivity and specificity (PUBMED:26897383). 
Furthermore, a study in an Indian cohort of overweight/obese hypopituitary patients suggested a GH cut-off of 3.3 ng/mL for the diagnosis of GHD in overweight/obese individuals, demonstrating that lower cut-offs can be appropriate for specific populations (PUBMED:38107725). The American Association of Clinical Endocrinologists and American College of Endocrinology also reviewed the use of GST for diagnosing adult GHD and proposed utilizing a lower GH cut-point of 1 µg/L in overweight/obese patients to reduce over-diagnosing GHD (PUBMED:27409821). In summary, lower GH cut-off values in GST have been shown to be effective for diagnosing GHD in adults, particularly when considering factors such as patient age, BMI, and the presence of additional PHDs. These findings support the use of GST with lower GH cut-off values as a diagnostic tool for adult GHD.
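The cut-off values discussed in this answer are typically derived from receiver operating characteristic (ROC) analysis of peak GH against a reference diagnosis. The sketch below shows one common way such a threshold can be chosen (maximising Youden's J statistic); it uses synthetic data and scikit-learn, and it is a generic illustration rather than the analysis performed in any of the cited studies.

```python
# Generic illustration of deriving a diagnostic cut-off from ROC analysis,
# as is typically done for peak-GH thresholds. The data are synthetic and
# the resulting numbers do not reproduce any of the cited studies.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Synthetic peak GH values (ng/mL): deficient patients tend to peak lower.
peak_gh = np.concatenate([rng.gamma(2.0, 0.6, 40),   # GHD group (label 1)
                          rng.gamma(4.0, 2.0, 40)])  # GH-sufficient group (label 0)
is_ghd = np.concatenate([np.ones(40), np.zeros(40)])

# Lower peak GH argues for deficiency, so the "score" is the negated value.
fpr, tpr, thresholds = roc_curve(is_ghd, -peak_gh)
best = np.argmax(tpr - fpr)            # Youden's J = sensitivity + specificity - 1
cutoff = -thresholds[best]             # convert the score threshold back to ng/mL

print(f"AUC = {roc_auc_score(is_ghd, -peak_gh):.2f}")
print(f"Peak GH cut-off ~= {cutoff:.1f} ng/mL "
      f"(sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%})")
```

In practice, as the abstracts above make clear, the chosen threshold also depends on the reference test (for example the ITT), on glucagon dosing, and on the BMI mix of the study population, which is why the reported cut-offs range from about 1 to 3.3 ng/mL.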
Instruction: Size of the tumor and pheochromocytoma of the adrenal gland scaled score (PASS): can they predict malignancy? Abstracts: abstract_id: PUBMED:19145205 Observer variation in the application of the Pheochromocytoma of the Adrenal Gland Scaled Score. Morphologic determination of the malignant potential of adrenal pheochromocytoma is a challenging problem in surgical pathology. A multiparameter Pheochromocytoma of the Adrenal Gland Scaled Score (PASS) was recently developed based on a comprehensive study of a single institutional cohort of 100 cases. Assignment of a PASS was proposed to be useful for identifying pheochromocytomas with potential to metastasize, which defines malignancy according to the current World Health Organization terminology. A PASS is derived by evaluating multiple morphologic parameters to obtain a scaled score based on the summed weighted importance of each. Despite the proposal of this system several years ago, few studies have since examined its robustness and, in particular, the potential for observer variation inherent in the interpretation and assessment of these morphologic criteria. We further examined the utility of PASS by reviewing an independent single institutional cohort of adrenal pheochromocytomas as evaluated by 5 multi-institutional pathologists with at least 10 years experience in endocrine pathology. We found significant interobserver and intraobserver variation in assignment of PASS with variable interpretation of the underlying components. We consequently suggest that PASS requires further refinement and validation. We cannot currently recommend its use for clinical prognostication. abstract_id: PUBMED:20703467 Size of the tumor and pheochromocytoma of the adrenal gland scaled score (PASS): can they predict malignancy? Background: Size can predict malignancy in adrenocortical tumors, but the same extrapolation for pheochromocytomas (PCC) is controversial. The goal of this study was to find a correlation between the tumor size and malignant potential of PCC and determine whether the "Pheochromocytoma of the adrenal gland scaled score" (PASS) proposed by Thompson can be applied to predict malignancy. Methods: A retrospective analysis of patients with PCC operated on from 1991 to 2007 revealed 98 PCC removed from 93 patients. Tumor size was available for 90 tumors. Six (6.4%) patients had proven malignancy. Five familial cases were excluded from the PASS analysis. Results: Of the benign cases, none developed recurrence or metastasis. There were 54 (60%) tumors &gt; 6 cm and 36 (40%) tumors ≤ 6 cm. All 12 PASS parameters were individually present in higher frequency in the &gt;6-cm group; but the difference was not statistically significant except cellular monotony (p = 0.02). Overall, a PASS ≤ 4 was found in 57 patients. Mean PASS was statistically significantly higher in the &gt;6-cm group (4.4 vs. 3.3, p = 0.04). Of the sporadic benign cases, 21 (41%) patients with tumor size &gt; 6 cm had a PASS of &gt;4, and none of them developed metastasis. PASS ≤ 4 was found in 25 (81%) PCC in the ≤6-cm group, and none developed metastases. PASS ≥ 4 was found in six (19%) patients in the ≤6-cm group, and none developed metastases. 68 patients completed 5-year follow-up, and the remaining had a mean follow-up of 28.7 months. No correlation was found between tumor size and PASS &gt; 4 and PASS ≤ 4 (7.8 cm vs. 7.1 cm; p = 0.23). Conclusions: Presently there is not enough evidence to indict a large (&gt;6 cm) PCC as malignant. 
Furthermore, PASS cannot be reliably applied to PCC for predicting malignancy. abstract_id: PUBMED:11979086 Pheochromocytoma of the Adrenal gland Scaled Score (PASS) to separate benign from malignant neoplasms: a clinicopathologic and immunophenotypic study of 100 cases. No comprehensive series has evaluated the histologic features of pheochromocytoma to separate benign from malignant pheochromocytoma by histomorphologic parameters only. Fifty histologically malignant and 50 histologically benign pheochromocytomas of the adrenal gland were retrieved from the files of the Armed Forces Institute of Pathology. The patients included 43 females and 57 males, with an age range of 3-81 years (mean 46.7 years). Patients usually experienced hypertension (n = 79 patients). The mean tumor size was 7.2 cm (weight was 222 g). Histologically, the cases of malignant pheochromocytomas of the adrenal gland more frequently demonstrated invasion (vascular [score = 1], capsular [score = 1], periadrenal adipose tissue [score = 2]), large nests or diffuse growth (score = 2), focal or confluent necrosis (score = 2), high cellularity (score = 2), tumor cell spindling (score = 2), cellular monotony (score = 2), increased mitotic figures (>3/10 high power fields; score = 2), atypical mitotic figures (score = 2), profound nuclear pleomorphism (score = 1), and hyperchromasia (score = 1) than the benign tumors. A Pheochromocytoma of the Adrenal gland Scaled Score (PASS) weighted for these specific histologic features can be used to separate tumors with a potential for a biologically aggressive behavior (PASS ≥ 4) from tumors that behave in a benign fashion (PASS < 4). The pathologic features that are incorporated into the PASS correctly identified tumors with a more aggressive biologic behavior. Application of these criteria to a large cohort of cases will help to elucidate the accuracy of this grading system in clinical practice. abstract_id: PUBMED:27790441 Risk Stratification in Paragangliomas with PASS (Pheochromocytoma of the Adrenal Gland Scaled Score) and Immunohistochemical Markers. Introduction: Paragangliomas (PGLs) are rare tumours that arise in sympathetic and parasympathetic paraganglia and are derived from neural crest cells. Presence of metastasis is the only absolute criterion for malignancy. There is no single histo-morphological feature indicating malignant potential and multiple parameters have been proposed to prognosticate the individual case. This includes studies conducted using Pheochromocytoma of the Adrenal Gland Scaled Score (PASS) and Immunohistochemical (IHC) markers. Aim: We have studied ten cases of paraganglioma and attempted to correlate the prognosis with multiple clinicopathological variables. Materials And Methods: This study was done in a tertiary care general hospital over a period of five years. Available clinical records and histopathology slides of all patients were reviewed. Using Pheochromocytoma of the Adrenal Gland Scaled Score (PASS), we divided the cases into two groups: tumours showing high-risk behaviour (PASS ≥ 4) and tumours showing benign behaviour (PASS < 4). IHC analysis was done using synaptophysin, chromogranin, S100 and Ki67. We correlated S100 immunoreactivity and Ki67 proliferative index with PASS score. Both PASS score and IHC markers were also correlated with clinical outcome. Results: There were six Pheochromocytomas (PHC) and four Paragangliomas (PGL).
Two paragangliomas were retroperitoneal and one each was located in the ear (HNPGL) and the broad ligament. PASS score was ≥4 in five cases and <4 in five cases. Out of five cases in which PASS was ≥4, three cases showed clinical evidence of malignancy and two cases were benign. All the cases in which PASS was <4 were clinically benign. S100 immunoreactivity was grade 1 in two cases, grade 2 in six cases and grade 3 in two cases. The cases in which S100 immunoreactivity was grade 1 were malignant. One case in which S100 was grade 2 was clinically malignant. Ki67 labeling index was raised (>3%) in two cases, which were malignant and correlated with a malignant PASS score. Conclusion: We conclude that the following clinicopathological parameters should be taken into account for risk assessment of malignant behaviour of paragangliomas: location, size, PASS score, S100 immunoreactivity and Ki67 labeling index. abstract_id: PUBMED:725049 Computed tomography of the adrenal gland. Computed tomography (CT) easily and accurately demonstrates both the normal and abnormal adrenal gland. The normal adrenal gland can be seen in almost 95% of patients. With this technique, 29 of 29 proved adrenal masses were demonstrated; one case of bilateral adrenal hyperplasia could not be recognized, another showed equivocal enlargement. CT is an excellent screening and often definitive radiologic test for evaluating the adrenal gland. abstract_id: PUBMED:6435195 Technic of real-time ultrasonic examination of the adrenal glands and adrenal gland tumors. A prospective examination was carried out to determine the optimum technique for demonstrating the adrenal glands; there were 60 normal persons, 16 with small adrenal tumours (average size 13 mm.) and 10 with large adrenal tumours (average size 38 mm.). A normal adrenal gland was identified only once amongst the 60 patients as a hypoechoic structure. Fifteen of the 16 small, and eight of nine large tumours could be demonstrated sonographically. An intercostal approach was particularly suitable for showing the suprarenal region and for small tumours. Large tumours could be shown by a ventral, lateral or dorsal approach. abstract_id: PUBMED:27894345 Ultrasonographic features of adrenal gland lesions in dogs can aid in diagnosis. Background: Ultrasonography to visualize adrenal gland lesions and evaluate incidentally discovered adrenal masses in dogs has become more reliable with advances in imaging techniques. However, correlations between sonographic and histopathological changes have been elusive. The goal of our study was to investigate which ultrasound features of adrenal gland abnormalities could aid in discriminating between benign and malignant lesions. To this end, we compared diagnosis based on ultrasound appearance and histological findings and evaluated ultrasound criteria for predicting malignancy. Results: Clinical records of 119 dogs that had undergone adrenal gland ultrasound and histological examination were reviewed. Of these, 50 dogs had normal adrenal glands whereas 69 showed pathological ones. Lesions based on histology were classified as cortical adrenal hyperplasia (n = 67), adenocarcinoma (n = 17), pheochromocytoma (n = 10), metastases (n = 7), adrenal adenoma (n = 4), and adrenalitis (n = 4). Ultrasonographic examination showed high specificity (100%) but low sensitivity (63.7%) for identifying the adrenal lesions, which improved with increasing lesion size.
Analysis of ultrasonographic predictive parameters showed a significant association between lesion size and malignant tumors. All adrenal gland lesions >20 mm in diameter were histologically confirmed as malignant neoplasms (pheochromocytoma and adenocarcinoma). Vascular invasion was a specific but not sensitive predictor of malignancy. As nodular shape was associated with benign lesions and irregular enlargement with malignant ones, this parameter could be used as a diagnostic tool. Bilaterality of adrenal lesions was a useful ultrasonographic criterion for predicting benign lesions, such as cortical hyperplasia. Conclusions: Abnormal appearance of structural features on ultrasound images (e.g., adrenal gland lesion size, shape, laterality, and echotexture) may aid in diagnosis, but these features alone were not pathognomonic. Lesion size was the most direct ultrasound predictive criterion. Large and irregular masses seemed to be better predictors of malignant neoplasia, and lesions <20 mm in diameter and nodular in shape were often identified as cortical hyperplastic nodules or adenomas. abstract_id: PUBMED:7413972 Computer assisted tomography of the normal and abnormal adrenal gland (author's transl). Initially, the hitherto existing radiographic methods of investigating the adrenal gland are demonstrated. Whole body CT opens a third dimension--in addition to being a non invasive method without risk. The normal CT findings of the adrenal gland are described, as well as the normal variants in shape and position. With help of morphometry and image manipulation, measurements of the size of the adrenal gland in 20 healthy patients were made and are being listed; and the respective normal variants as well as hypo- and hyperplasia are pointed out. Some examples serve to illustrate pathologic conditions, such as inflammation and benign and malignant primary and secondary neoplasia. Finally, the value of adrenal gland CT is discussed in reference to the other existing respective radiographic methods. abstract_id: PUBMED:7076919 CT configuration of the enlarged adrenal gland. Computed tomography was used to study 31 enlarged adrenal glands in 26 patients. The specific diagnoses were: 12 metastases, 7 pheochromocytomas, 7 adenomas, 2 carcinomas, 2 adrenal hemorrhages, and 1 cyst. Enlarged adrenal glands were frequently found to have an elongated, ovoid cross-sectional appearance. Twenty-four of the 31 adrenal glands (77%) had a length to width ratio of 1.2 or greater. The limited space available for uniform concentric expansion of the enlarging gland appears to be the chief factor causing adrenal masses to assume an ovoid shape. Inasmuch as the upper pole of the unenhanced kidney and the inferior vena cava may both exhibit a similar cross-sectional appearance, cognizance of the potential ovoid configuration of the enlarged adrenal gland should facilitate detection of adrenal masses. abstract_id: PUBMED:23588637 Prognostic value of Pheochromocytoma of the Adrenal Gland Scaled Score (Pass score) tests to separate benign from malignant neoplasms. Background: Differentiating malignant from benign pheochromocytoma has been challenging when based on histologic features. This is due to the definition of malignant pheochromocytomas, which are defined by the presence of metastases. A PASS score was developed and, according to many authors, a PASS score ≥ 4 identified potentially malignant tumors.
Aim: To assess the prognostic value of PASS score in differentiating benign pheochromocytomas from malignant ones. Methods: The records of 11 patients with tumors diagnosed as "pheochromocytoma" were identified from 1970 to 2010 in the files of the pathology, internal medicine and biochemistry departments of the Charles Nicolle hospital and Pasteur Institute. Receiver operating characteristics (ROC) curve analysis was performed to evaluate the diagnostic performance of PASS. The logistic model was developed using the 11 predictive variables. Its performance was evaluated by calculating the area under the ROC curve and comparing it with that of the PASS. Results: In benign tumors, the PASS score was <4 in 3 cases and ≥4 in 6 cases. In malignant tumors, the PASS score was ≥4 in both cases. According to the ROC curve analysis, a PASS equal to or greater than 4 identifies malignant pheochromocytoma with a sensitivity of 50% and a specificity of 45%. Conclusion: I think that PASS score, despite its low sensitivity, may help to reserve more aggressive treatment and closer follow-up for potentially malignant tumors. Widespread use of this scaled score, together with complete clinical data, will help to validate these findings and to add other prognostic factors of value, such as immunohistochemical findings, that could become part of this scaled score. Answer: The Pheochromocytoma of the Adrenal Gland Scaled Score (PASS) was developed to help differentiate between benign and malignant pheochromocytomas based on various histologic features. However, the ability of PASS and tumor size to predict malignancy in pheochromocytomas is controversial. One study found significant interobserver and intraobserver variation in the assignment of PASS, suggesting that the system requires further refinement and validation before it can be recommended for clinical prognostication (PUBMED:19145205). Another study concluded that there is not enough evidence to consider a large (>6 cm) pheochromocytoma as malignant and that PASS cannot be reliably applied to predict malignancy. This study found no correlation between tumor size and PASS in predicting malignancy (PUBMED:20703467). The original study that proposed the PASS system found that a score of ≥4 could separate tumors with a potential for aggressive behavior from those that behave benignly. However, this system's accuracy in clinical practice needs further validation through the application to a larger cohort of cases (PUBMED:11979086). Another study on paragangliomas, which are closely related to pheochromocytomas, suggested that multiple clinicopathological parameters, including PASS score, should be considered for risk assessment of malignant behavior. This study found that a PASS score of ≥4 was associated with malignancy in some cases, but not all, and that other factors such as S100 immunoreactivity and Ki67 labeling index should also be taken into account (PUBMED:27790441).
Instruction: Is liver biopsy necessary in the evaluation of a living donor for liver transplantation? Abstracts: abstract_id: PUBMED:29527988 Liver Biopsy Results in Potential Donor Evaluation in Living Related Liver Transplant. Objectives: The number of living-donor liver transplants has been increasing due to the growing discrepancy between the number of patients on wait lists for liver transplant and the availability of deceased donations. Evaluations of potential liver donors should ensure the safety of the surgical procedure for both the donor and recipient. Liver biopsy is the criterion standard for selecting optimal donors. In this study, we evaluated the importance of preoperative liver biopsy in selecting donor candidates. Materials And Methods: We evaluated the data of 612 living-related liver donor candidates who received liver biopsies between January 2001 and June 2017 at our center. Results: In the 612 liver donor candidates (328 male, 284 female; age range, 18-69 years), 416 liver biopsies (68%) were reported as normal and 196 liver biopsies (32%) had pathologic findings. Of 196 donors with pathologic findings, 86 (44%) had fatty changes and 24 (12%) had portal inflammation. Conclusions: The high rate of pathologic findings in liver biopsy of healthy-appearing donor candidates indicated the importance of liver biopsy in the preoperative evaluation of donors. abstract_id: PUBMED:25420828 Is liver biopsy necessary in the evaluation of a living donor for liver transplantation? Background: The role of liver biopsy in the evaluation of a candidate for living liver donation is controversial. Some authors suggest doing it routinely, but others do it only in selected cases. The aim of this work was to evaluate the usefulness of protocol liver biopsy in the evaluation of candidates for living liver donation. Methods: Ninety potential candidates for living liver donation were evaluated. In 46 cases donation was contraindicated without the need of liver biopsy. In the remaining 44 candidates, liver biopsy was done on a protocol basis. The usefulness of protocol biopsy was compared with the use of biopsy according to the recommendations of the Vancouver Forum. Results: Fifteen of the 44 biopsies were indicated according to the recommendations of the Vancouver Forum. Twelve of them were normal, and 3 had liver steatosis or steatohepatitis. Of the 29 biopsies done per protocol, 28 were normal and 1 showed liver steatosis. Donation was contraindicated according to liver biopsy findings in 3 of the 15 patients with liver biopsy done according to the Vancouver Forum recommendations and in none of the 29 patients with biopsy done per protocol (P = .034). Conclusions: Protocol liver biopsy has a limited utility in the evaluation of the candidates for living liver donation. abstract_id: PUBMED:25890612 Donor biopsy in living donor liver transplantation: is it still relevant in a developing country? Liver transplantation is an important modality of treatment for end-stage liver disease. Liver biopsy evaluation has been an important aspect of the donor evaluation protocol. With the advent of newer modalities of donor evaluation such as high resolution CT scan, fibroscan and NMR spectroscopy, the relevance of the liver biopsy appears to be diminishing. We investigated the usefulness of donor liver biopsy evaluation in patients who had been cleared by radiological investigations. 
We evaluated 184 donor liver biopsies performed over a one-year period and found that 18% showed &gt;5% steatosis and around 40% showed portal inflammation, which was, however, minimal to mild. Fibrosis was detected in 10 cases (5.4%), 7 being in stage 1 and 3 in stage 2. Donors with these findings were not considered for transplantation. We conclude that the liver biopsy still continues to be relevant especially in a developing country and does add additional information to the diagnostic work-up of a liver donor. abstract_id: PUBMED:16035060 Preoperative donor liver biopsy for adult living donor liver transplantation: risks and benefits. The role of liver biopsy (LB) in donor selection for adult living donor liver transplantation remains controversial, since the procedure is associated with additional potential risks for the donor. From April 1998 to August 2004, 730 potential living donors for 337 adult recipients underwent our multistep evaluation program. In 144 candidates, LB was performed. LB was obtained in a percutaneous ultrasound-guided fashion by means of Menghini needle (32 cases) or Tru-cut needle (112 cases). The biopsy specimen was preserved in 5% formalin and processed with hematoxylin &amp; eosin-stained sections. Thirty-one (21%) of 144 candidates who underwent an LB had a positive finding at histological examination that induced their exclusion from donation, of whom 21 had liver steatosis of varying kind and grade (10%-80%) and 10 had a nonsteatotic hepatopathy (non-A-D hepatitis in 6 cases, diffuse granulomatosis in 2, schistosomiasis in 1, fibrosis in 1). The only observed major complications related to LB were 2 intraparenchymal haematomas, both of which resolved spontaneously within a few months. In conclusion, based on these findings, we believe that preoperative LB in the donor selection for adult LDLT is necessary, once the initial donor screening and noninvasive evaluation is complete. Because other screening modalities can be unreliable, without preoperative LB a fraction of potential donors will be operated on inappropriately, risking both donor and recipient. The main objective of LB should be to ensure the donor's safety more than the preservation of the graft function. abstract_id: PUBMED:19633759 Chances and risks in living donor liver transplantation. Introduction: Liver transplantation is the first-line therapy in treatment of end-stage liver diseases. Due to the mismatch of available donor organs and growing waiting lists in Germany, live donation is of great interest. Methods: Selective literature review. Results And Discussion: Pediatric living donor liver transplantation almost eliminated waiting list mortality in children and achieved excellent short and long term survival. The situation in adult-to-adult living donor liver transplantation is different, due to the need for extensive donor resection and smaller graft volume for the recipient. Careful donor evaluation and defined selection criteria are essential to minimize the donor's risk and to achieve results comparable to whole organ transplantation. Living donor liver transplantation offers the recipient certain advantages such as superior graft quality, but the procedure should be reserved for selected patients. Donor safety is the highest priority in this procedure. Living donor transplantation should remain in the hands of experienced centers. 
abstract_id: PUBMED:17614979 Discontinuation of living donor liver transplantation for PSC due to histological abnormalities in intraoperative donor liver biopsy. Liver transplantation is the only curative treatment known to date for end-stage liver disease occurring as a result of primary sclerosing cholangitis (PSC). Here, we report a case in which living donor liver transplantation (LDLT) for PSC was cancelled because of histological abnormalities in intraoperative biopsy of the donor liver. The donor was the mother of the recipient, and her preoperative evaluation revealed no abnormalities. In the donor operation, the donor liver biopsy revealed expansion of the portal zone with lymphocytic infiltration and dense concentric fibrosis developed around a bile duct. These histological findings were identical to those of early-stage PSC; therefore, the LDLT was called off. The experience in this case suggests that preoperative liver biopsy may be useful to exclude first-degree relative donors with potential PSC prior to LDLT for PSC. abstract_id: PUBMED:16889543 Outcomes of donor evaluations for adult-to-adult right hepatic lobe living donor liver transplantation. The purpose of this study is to determine the role of liver biopsy and outcome of patients undergoing donor evaluation for adult-to-adult right hepatic lobe living donor liver transplantation (LDLT). Records of patients presenting for a comprehensive donor evaluation between 1997 and February 2005 were reviewed. Liver biopsy was performed only in patients with risk factors for abnormal histology. Two hundred and sixty patients underwent a comprehensive donor evaluation and 116 of 260 (45%) were suitable for donation, 14 of 260 (5.4%) did not complete evaluation and 130 of 260 (50%) were rejected. Four patients underwent unsuccessful hepatectomy surgery due to discovery of intraoperative abnormalities. Between 1997 and 2001, the acceptance rate of donor candidates (63%) was higher than 2002-2005 (36%), p &lt; 0.0001. Sixty-six of the 150 eligible patients (44%) fulfilled criteria for liver biopsy and 28 of 66 (42%) had an abnormal finding. Less than half of the patients undergoing donor evaluation were suitable donors and the donor acceptance rate has declined over time. A large proportion of the patients undergoing liver biopsy have abnormal findings. Our evaluation process failed to identify 4 of 103 who had aborted donor surgeries. abstract_id: PUBMED:38076356 Donor Evaluation Protocol for Live and Deceased Donors. Donor evaluation is a critical step before proceeding with liver transplantation (LT) in both deceased donor LT (DDLT) and living donor LT (LDLT). A good, healthy graft is necessary for the success of the transplantation. Other issues in selecting a donor include the transmission of infections and malignancies from the donor. Because of the scarcity of cadaver organs, an increasing number of extended-criteria donors, or 'marginal donors', are being utilized. LDLT also has potential risks to the donor, and donor safety needs to be kept in mind before proceeding with LT. The current review highlights the factors to be considered during donor evaluation for living and deceased donors before LT. abstract_id: PUBMED:29170994 "No go" donor hepatectomy in living-donor liver transplantation. Background: Selection of appropriate donors after rigorous evaluation is of paramount importance in living-donor liver transplantation. Despite this, donor surgery may not proceed due to unforeseen reasons. 
The aim of this paper is to study reasons for "no go" donor hepatectomy in living liver donors. Patients And Methods: Donor operations stopped after surgical start, directly due to donor safety-related reasons, qualified for inclusion as "no go" donor hepatectomy. Living-donor evaluation was performed as per standard protocol. Data for consecutive living liver donors operated between April 2012 and November 2016 were analyzed to evaluate reasons for "no go" donor hepatectomy in a liver transplantation unit at a tertiary care teaching hospital. Results: In 307 donors, the operation was aborted in 7 (2.3 %). One patient had unexpected biliary pathology with fibrosis found intraoperatively. Operations in five donors were abandoned in view of liver parenchymal abnormalities (fibrosis/steatohepatitis). One donor had hemodynamically significant bradycardia after handling the round ligament. All these donors recovered uneventfully and remained well on follow-up. Conclusions: "No go" donor hepatectomy remains a real possibility despite rigorous assessment. Although thresholds for on-table rejection of the donor after complete evaluation vary, "no go" hepatectomy is a calculated risk-avoidance approach. abstract_id: PUBMED:20524455 Evaluation of donors for living donor liver transplantation (LDLT). Liver transplantation has been the last resort of definite treatment for decompensate cirrhosis, early-stage of hepatocellular carcinoma, and acute liver failure. Organ shortage is the major obstacle of deceased-donor liver transplantation. Since the first case of living-donor liver transplantation (LDLT), many centers around the world started the LDLT program. Living donors should be informed about the possible risk of morbidity and mortality, and later give consent for liver donation without coercion. Donor selection and evaluation have become one of the important steps prior to LDLT, aiming to exclude donors who may have high risks from LDLT and to assure that LDLT recipients will receive perfect liver grafts. In Thailand, living donors must have been blood relatives or be legally married with recipients for at least three years. Donor evaluation can be divided into three step-by-step phases. Psychological evaluation of living donors is also included in pre-transplant assessment. Answer: The necessity of liver biopsy in the evaluation of a living donor for liver transplantation is a subject of debate. Some studies and experts argue for its importance, while others question its utility. Liver biopsy is considered the criterion standard for selecting optimal donors, as it can reveal pathologic findings such as fatty changes and portal inflammation that might not be apparent through other evaluation methods. In a study evaluating 612 living-related liver donor candidates, 32% had pathologic findings on liver biopsy, indicating its importance in preoperative evaluation (PUBMED:29527988). Another study found that 18% of donor liver biopsies showed >5% steatosis and around 40% showed portal inflammation, with fibrosis detected in 5.4% of cases, leading to the conclusion that liver biopsy continues to be relevant, especially in developing countries (PUBMED:25890612). Additionally, preoperative donor liver biopsy is considered necessary by some to ensure donor safety and to avoid inappropriate operations, as other screening modalities can be unreliable (PUBMED:16035060). On the other hand, the utility of protocol liver biopsy is questioned by some studies. 
For instance, one study found that protocol liver biopsy had limited utility, as donation was contraindicated according to liver biopsy findings in none of the patients with biopsy done per protocol (PUBMED:25420828). Furthermore, the evaluation of donors for living donor liver transplantation (LDLT) involves a comprehensive assessment that includes psychological evaluation and other medical tests, suggesting that liver biopsy is just one component of a multifaceted evaluation process (PUBMED:20524455). In summary, while liver biopsy can provide valuable information that may not be captured by other evaluation methods, its routine use in all donor evaluations is debated, and its utility may be limited in certain contexts. The decision to perform a liver biopsy should be based on individual donor risk factors and the protocols of the transplant center.
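As a purely illustrative footnote, the biopsy-based exclusion findings mentioned above (for example, >5% steatosis, fibrosis, or portal inflammation in PUBMED:25890612) can be written as a simple screening rule. The sketch below is a hypothetical teaching example, not an actual transplant-centre protocol; the field names and the exact rule structure are assumptions.

```python
# Hypothetical donor-biopsy screening rule based on exclusion findings mentioned
# above (>5% steatosis, fibrosis, or portal inflammation in PUBMED:25890612).
# A teaching sketch only, not a clinical protocol; field names and the exact
# rule are assumptions.
from dataclasses import dataclass

@dataclass
class BiopsyFindings:
    steatosis_percent: float
    fibrosis_stage: int              # 0 means no fibrosis
    portal_inflammation: bool        # more than minimal inflammation

def acceptable_for_donation(b: BiopsyFindings) -> bool:
    """Return True only when none of the exclusion findings are present."""
    return (b.steatosis_percent <= 5
            and b.fibrosis_stage == 0
            and not b.portal_inflammation)

print(acceptable_for_donation(BiopsyFindings(3.0, 0, False)))   # True
print(acceptable_for_donation(BiopsyFindings(12.0, 0, False)))  # False: steatosis > 5%
```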
Instruction: Detrusor overactivity. Does it represent a difference if patients feel the involuntary contractions? Abstracts: abstract_id: PUBMED:15540754 Detrusor overactivity. Does it represent a difference if patients feel the involuntary contractions? Purpose: We evaluated the differences between patients with overactive bladder (OAB) who felt involuntary detrusor contractions during cystometry (detrusor overactivity [DO]) and those who did not feel them. Materials And Methods: We prospectively studied 45 patients with symptoms of nonneurogenic, nonobstructed overactive bladder and with DO on cystometry. All patients underwent videourodynamics, the ice water test and electrical perception threshold determination. Continence, urodynamic parameters, data from specific sensory evaluation and outcome of drug treatment were examined. Results: Almost half of our patients did feel the contractions of DO and half did not. The groups differed significantly. Those without DO sensation were more frequently incontinent, had more involuntary detrusor contractions and these occurred earlier during bladder filling. They had involuntary start of voiding more frequently, more pathological sensation of bladder filling and lower electrical sensory thresholds. The results of drug treatment were better in the group who felt DO. Conclusions: Contractions of DO are felt by some of the patients and they differ from those patients who do not feel such contractions. It is likely that this finding reflects the existence of different OAB conditions with a different neuropathological cause and a different treatment outcome. Therefore, we suggest that specific tests for the evaluation of sensation in the lower urinary tract should be part of the diagnosis of patients with DO and symptoms of OAB. abstract_id: PUBMED:32549896 Evaluation of bladder shape using transabdominal ultrasound: Feasibility of a novel approach for the detection of involuntary detrusor contractions. Conventional assessment of overactive bladder syndrome uses invasive pressure-measuring catheters to detect bladder contractions (urodynamics). We hypothesised that bladder shape changes detected and measured using transabdominal ultrasound scan could provide a non-invasive and clinically useful alternative investigation of bladder contractions. This feasibility study evaluated a novel transabdominal ultrasound scan bladder shape test during conventional urodynamics and physiological bladder filling. The aim was to initially evaluate and refine a non-invasive approach for detecting and quantifying bladder shape changes associated with involuntary bladder contractions. To develop measurement techniques and characterise bladder shape changes during bladder filling, healthy female volunteers (n=20) and women with overactive bladder symptoms who had previously undergone urodynamics (n=30) completed symptom questionnaires and bladder diaries. The bladder shape test protocol included consumption of 1 l water before undergoing serial transabdominal ultrasound scan imaging of the bladder during physiological bladder filling and during episodes of urgency. In a further group of women with overactive bladder (n=22), serial transabdominal ultrasound scan images were captured during urodynamics so that shape changes occurring with bladder contractions could be characterised. In both healthy volunteers and women with overactive bladder, the transverse view of the bladder provided the most reliable plane to characterise and measure bladder shape changes. 
A sphericity index derived from the ratio between the areas of the maximum inscribed and minimum circumscribed ellipses (πac²(inner)/πac²(outer)) offered a reliable and reproducible measurement system. Of participants undergoing transabdominal ultrasound scan during urodynamics, there were significant measurable differences in sphericity index between patients with bladder contractions (n=12) and patients with acontractile bladders (p < 0.001). Bladder shape changes detected during physiological filling and urodynamics have provided preliminary evidence to support further research into the bladder shape test as a non-invasive diagnostic tool to identify involuntary bladder contractions in patients with overactive bladder syndrome. abstract_id: PUBMED:17570436 Relationships between micturition urgency and involuntary voiding dynamics in men with urinary incontinence from idiopathic detrusor overactivity. Purpose: In men with urinary incontinence from idiopathic detrusor overactivity we determined whether bladder voiding dynamics differs between those with and without urgent micturition. Materials And Methods: We retrospectively assessed urodynamic findings in 3 groups of 22 men each. Groups 1 and 2 had idiopathic detrusor overactivity with detrusor overactivity incontinence and with micturition urgency in group 1. Group 2 showed no urgency but felt a strong voiding desire just after the onset of involuntary micturition. Control group 3 included nonneurological unobstructed men undergoing urodynamic examination for mixed reasons who proved to be urodynamically normal. Patients with detrusor overactivity and controls were assessed by nonparametric statistics for significant differences in bladder voiding dynamics. Results: Detrusor contraction strength proved to be increased in groups 1 and 2 with the highest levels in group 1. Detrusor contraction velocity had the highest levels in group 1 and it differed insignificantly in groups 2 and 3. Voiding contractions were equally well sustained in groups 1 and 3, and proved to be less well sustained in group 2. Conclusions: Detrusor overactivity involves enhanced detrusor contraction strength levels, particularly in patients who feel urgency. In urgency-free patients with detrusor overactivity detrusor contraction velocity differs insignificantly from that in controls and voiding detrusor contractions proved to be less well sustained than in controls and patients who experienced urgency. This suggests that detrusor contraction velocity may have a role in causing urgency and urgency may have a role in enhancing and sustaining involuntary voiding detrusor contractions in patients with detrusor overactivity. abstract_id: PUBMED:11385691 Involuntary detrusor contractions: correlation of urodynamic data to clinical categories. Data regarding the prevalence and urodynamic characteristics of involuntary detrusor contractions (IDC) in various clinical settings, as well as in neurologically intact vs. neurologically impaired patients, are scarce. The aim of our study was to evaluate whether the urodynamic characteristics of IDC differ in various clinical categories. One hundred eleven consecutive neurologically intact patients and 21 consecutive neurologically impaired patients, referred for evaluation of persistent irritative voiding symptoms, were prospectively enrolled. All patients were presumed by history to have IDC, and underwent detailed clinical and urodynamic evaluation.
Based on clinical evaluation, patients were placed into one of four categories according to the main presenting symptoms and the existence of neurological insult: 1) frequency/urgency; 2) urge incontinence; 3) mixed stress incontinence and irritative symptoms; and 4) neurogenic bladder. IDC was defined by detrusor pressure of ≥15 cm H2O whether or not the patient perceived the contraction, or <15 cm H2O if perceived by the patient. Eight urodynamic characteristics of IDC were analyzed and compared between the four groups. IDC were observed in all of the neurologically impaired patients, compared with 76% of the neurologically intact patients (P < 0.001). No correlation was found between amplitude of IDC and subjective report of urgency. All clinical categories demonstrated IDC at approximately 80% of cystometric capacity. Eighty-one percent of the neurologically impaired patients, compared with 97% of the neurologically intact patients, were aware of the IDC at the time of urodynamics (P < 0.04). The ability to abort the IDC was significantly higher among continent patients with frequency/urgency (77%) compared with urge incontinent patients (46%) and neurologically impaired patients (38%). In conclusion, when evaluating detrusor overactivity, the characteristics of the IDC are not distinct enough to aid in differential diagnosis. However, the ability to abort IDC and stop incontinent flow may have prognostic implications, especially for the response to behavior modification, biofeedback, and pelvic floor exercise. abstract_id: PUBMED:22534015 Changes in detrusor muscle oxygenation during detrusor overactivity contractions. Objective: To investigate changes in the oxygenated and deoxygenated haemoglobin (Hb) of the bladder wall during voluntary and involuntary detrusor contractions. Study Design: Women with lower urinary tract symptoms were recruited from a urodynamics clinic. Near infra-red spectroscopy, a non-invasive optical technique which monitors changes in tissue oxygenation, was used to measure oxygenated and deoxygenated haemoglobin simultaneously while the women underwent urodynamics. All data were compared using paired sample t-test. Results: Fifty-five women with a mean age of 52 years were enrolled into the study. In the 23 women with detrusor overactivity (15 with isolated detrusor overactivity and 8 with mixed urinary incontinence) there was a statistically significant rise in deoxygenated Hb during involuntary detrusor contractions at maximum detrusor pressure compared to the start of filling (p=0.02). There was no statistically significant change between Hb parameters measured at the start of the filling phase and those measured during voluntary detrusor contraction at pdetQmax (detrusor pressure at maximum flow rate). The mean detrusor pressure measured during voiding, however, was significantly higher than the maximum pressure during involuntary detrusor contractions (p=0.03). Conclusion: There is a significant rise in the deoxygenated Hb in the detrusor muscle during detrusor overactivity, which is not seen during voiding even when the pdetQmax was higher than the peak detrusor pressure during involuntary contractions. These interesting changes in detrusor muscle oxygenation during involuntary detrusor contraction need to be explored further to assess if deoxygenation plays a role in the pathogenesis of detrusor overactivity. abstract_id: PUBMED:31464344 ATP transients accompany spontaneous contractions in isolated guinea-pig detrusor smooth muscle.
New Findings: What is the central question of this study? Overactive bladder is associated with enhanced spontaneous contractions, but their origins are unclear. The aim of this study was to characterize the accompanying ATP transients. What is the main finding and its importance? Spontaneous detrusor contractions were accompanied by transient increases of ATP, and their appearance was delayed by previous activation of efferent nerves to the detrusor. This indicates that spontaneous ATP release from nerve terminals supports spontaneous contractions. ATP is a functional excitatory neurotransmitter in human bladder only in pathologies such as overactive bladder. A potential drug target is revealed to manage this condition. Abstract: Spontaneous contractions are characteristic of the bladder wall, but their origins remain unclear. Activity is reduced if the mucosa is removed but does not disappear, suggesting that a fraction arises from the detrusor. We tested the hypothesis that spontaneous detrusor contractions arise from spontaneous ATP release. Guinea-pig detrusor strips, without mucosa, were superfused with Tyrode solution at 36°C. Preparations were subjected to electrical field stimulation (EFS; 3 s trains at 90 s intervals) to produce nerve-mediated contractions, abolished by 1 µm TTX. Amperometric ATP electrodes on the preparation surface recorded any ATP released. Spontaneous contractions and ATP transients were recorded between EFS trains. Nerve-mediated contractions were attenuated by atropine and α,β-methylene ATP; in combination, they nearly abolished contractions, as did nifedipine. Contractions were accompanied by ATP transients that were unaffected by atropine but inhibited by TTX and greatly attenuated by nifedipine. Spontaneous contractions were accompanied by ATP transients, with a close correlation between the magnitudes of both transients. ATP and contractile transients persisted with TTX, atropine and nifedipine. Immediately after a nerve-mediated contraction and ATP transient, there was a longer interval than normal before spontaneous activity resumed. Spontaneous contractions and ATP transients are proposed to arise from ATP leakage from nerve terminals innervating the detrusor. Extracellular ATP has a greater functional significance in humans who suffer from detrusor overactivity (spontaneous bladder contractions associated with incontinence) owing to its reduced hydrolysis at the nerve-muscle interface. This study shows the origin of spontaneous activity that might be exploited to develop a therapeutic management of this condition. abstract_id: PUBMED:28609582 Mild external heating and reduction in spontaneous contractions of the bladder. Objectives: To measure the effect of external heating on bladder wall contractile function, histological structure and expression of proteins related to tissue protection and apoptosis. Material And Methods: In vitro preparations of bladder wall and ex vivo perfused pig bladders were heated from 37 to 42°C, 46 and 50°C for 15 min. Isolated preparations were heated by radiant energy and perfused bladders were heated by altering perfusate temperature. Spontaneous contractions or pressure variations were recorded, as well as responses to the muscarinic agonist carbachol or motor nerve excitation in vitro during heating. Tissue histology in control and after heating was analysed using haematoxylin and eosin staining and 4'-6-diamidino-2-phenylindole (DAPI) nuclear labelling. 
The effects of heating on protein expression levels of (i) heat shock proteins HSP27-pSer82 and inducible-HSP70 and (ii) caspase-3 and its downstream DNA-repair substrate poly-[ADP-ribose] polymerase (PARP) were measured. Results: Heating to 42°C reduced spontaneous contractions or pressure variations by ~70%; effects were fully reversible. There were no effects on carbachol or nerve-mediated responses. Tissue histology was unaffected by heating, and expression of heat shock proteins as well as caspase-3 and PARP were also unaltered. A TRPV1 antagonist had no effect on the reduction of spontaneous activity. Heating to 46°C had a similar effect on spontaneous activity and also reduced the carbachol contracture. Urothelial structure was damaged, caspase-3 levels were increased and inducible-HSP70 levels declined. At 50°C evoked contractions were abolished, the urothelium was absent and heat shock proteins and PARP expression was reduced with raised caspase-3 expression. Conclusions: Heating to 42°C caused a profound, reversible and reproducible attenuation of spontaneous activity, with no tissue damage and no initiation of apoptosis pathways. Higher temperatures caused tissue damage and activation of apoptotic mechanisms. Mild heating offers a novel approach to reducing bladder spontaneous activity. abstract_id: PUBMED:11170188 Does the method of cystometry affect the incidence of involuntary detrusor contractions? A prospective randomized urodynamic study. The International Continence Society (ICS) defines overactive detrusor as "one that is shown objectively to contract during the filling phase while the patient is attempting to inhibit micturition." The aim of the present study was to assess whether instructing the patient neither to try to void nor to inhibit micturition during filling cystometry may improve the detection rate of involuntary detrusor contractions (IDCs). Forty-two consecutive patients (mean age 65 ± 13.5 years), referred for urodynamic evaluation of persistent irritative lower urinary tract symptoms, were prospectively enrolled. All patients were presumed, by history, to have IDCs. Cystometry was performed twice at the same session, each time by using randomly different instructions: Method 1, patients were instructed to try to inhibit micturition during bladder filling; and Method 2, patients were instructed to neither try to void nor try to inhibit micturition, but simply report his or her sensations to the examiner. The occurrence, as well as the urodynamic characteristics of IDCs, were analyzed separately and compared between the two filling methods. Method 1 identified only 20 cases of IDCs, while Method 2 identified 27 cases (48 versus 64% of the study population, respectively; P = 0.02). Analysis of urodynamic characteristics revealed a clear trend of reduced bladder volume at which IDCs occurred when patients were instructed to neither try to void nor to inhibit micturition during bladder filling; however, statistical significance was not established (189 ± 122 versus 240 ± 149 mL, respectively; P = 0.13). All other urodynamic characteristics of IDCs were similar in both methods. In conclusion, better detection rates of IDCs were achieved by instructing the patient to neither try to void nor try to inhibit micturition, but simply report his or her sensations to the examiner, during filling cystometry. If the patient is instructed to inhibit micturition during bladder filling, about 26% of the IDC cases are misdiagnosed.
abstract_id: PUBMED:7514686 The influence of prostatic urethral anesthesia in overactive detrusor in patients with benign prostatic hyperplasia. We examined the effects of prostatic urethral anesthesia on cystometrography in benign prostatic hyperplasia (BPH) patients with or without neurological disorders. Although cystometrography after anesthesia showed no disappearance of involuntary detrusor contraction, it did demonstrate significant increases in first sensation volume and maximum cystometric capacity in BPH patients without neurological diseases, as well as BPH patients with a history but no physical evidence of neurological disease. Furthermore, the bladder might be augmented more efficaciously in patients with involuntary detrusor contractions. No significant differences were found in first sensation volume or maximum cystometric capacity before and after anesthesia in patients without infravesical obstruction who had documented neurological disease with physical evidence. Our results demonstrated that prostatic urethral anesthesia can be used preoperatively in patients with infravesical obstruction to discriminate whether involuntary detrusor contractions are due to infravesical obstruction or to neurological disease. abstract_id: PUBMED:31183825 Spontaneous Activity and the Urinary Bladder. The urinary bladder has two functions: to store urine, when it is relaxed and highly compliant; and void its contents, when intravesical pressure rises due to co-ordinated contraction of detrusor smooth muscle in the bladder wall. Superimposed on this description are two observations: (1) the normal, relaxed bladder develops small transient increases of intravesical pressure, mirrored by local bladder wall movements; (2) pathological, larger pressure variations (detrusor overactivity) can occur that may cause involuntary urine loss and/or detrusor overactivity. Characterisation of these spontaneous contractions is important to understand: how normal bladder compliance is maintained during filling; and the pathophysiology of detrusor overactivity. Consideration of how spontaneous contractions originate should include the structural complexity of the bladder wall. Detrusor smooth muscle layer is overlain by a mucosa, itself a complex structure of urothelium and a lamina propria containing sensory nerves, micro-vasculature, interstitial cells and diffuse muscular elements.Several theories, not mutually exclusive, have been advanced for the origin of spontaneous contractions. These include intrinsic rhythmicity of detrusor muscle; modulation by non-muscular pacemaking cells in the bladder wall; motor input to detrusor by autonomic nerves; regulation of detrusor muscle excitability and contractility by the adjacent mucosa and spontaneous contraction of elements of the lamina propria. This chapter will consider evidence for each theory in both normal and overactive bladder and how their significance may vary during ageing and development. Further understanding of these mechanisms may also identify novel drug targets to ameliorate the clinical consequences of large contractions associated with detrusor overactivity. Answer: Yes, there is a difference if patients feel the involuntary contractions associated with detrusor overactivity (DO). A study evaluated the differences between patients with overactive bladder (OAB) who felt involuntary detrusor contractions during cystometry (DO) and those who did not feel them. 
The results showed that patients who did not feel the contractions were more frequently incontinent, had more involuntary detrusor contractions which occurred earlier during bladder filling, had involuntary start of voiding more frequently, more pathological sensation of bladder filling, and lower electrical sensory thresholds. Additionally, the results of drug treatment were better in the group who felt DO. This suggests that the sensation of DO contractions may reflect different OAB conditions with varying neuropathological causes and treatment outcomes, indicating the importance of specific tests for the evaluation of sensation in the lower urinary tract as part of the diagnosis of patients with DO and symptoms of OAB (PUBMED:15540754).
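A note on the measurement described in the bladder-shape abstract above (PUBMED:32549896): the sphericity index is defined there as the ratio between the maximum inscribed and minimum circumscribed ellipses, but the symbolic expression is garbled in the source. The sketch below is a minimal illustration assuming the plain area-ratio reading (ellipse area = π × semi-axis a × semi-axis c); the function names and the example dimensions are hypothetical, not taken from the paper.

```python
import math

def ellipse_area(semi_axis_a: float, semi_axis_c: float) -> float:
    """Area of an ellipse with semi-axes a and c (pi * a * c)."""
    return math.pi * semi_axis_a * semi_axis_c

def sphericity_index(inner_a: float, inner_c: float,
                     outer_a: float, outer_c: float) -> float:
    """Ratio of the maximum inscribed ellipse area to the minimum
    circumscribed ellipse area; 1.0 means a perfectly round outline,
    smaller values mean a more deformed bladder outline."""
    return ellipse_area(inner_a, inner_c) / ellipse_area(outer_a, outer_c)

# Toy example: inscribed ellipse 40 x 30 mm, circumscribed ellipse 55 x 45 mm
# (hypothetical numbers for illustration only).
print(round(sphericity_index(40, 30, 55, 45), 3))
```

Under this reading, values close to 1 indicate a near-round transverse outline, and a drop in the index during filling would flag the kind of shape change the study attributes to an involuntary contraction.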
Instruction: Symptomatic or asymptomatic gallstone disease: is the gallbladder motility the clue? Abstracts: abstract_id: PUBMED:12239906 Symptomatic or asymptomatic gallstone disease: is the gallbladder motility the clue? Background/aims: Only a minority of gallstone patients develop biliary pain. Until now, the factors related to pain have been poorly described. Methodology: In a prospective study, gallstone patients without acute cholecystitis, pancreatitis or hepatobiliary obstruction were classified into typical symptomatic (type-I, n = 44), atypical symptomatic (type-II, n = 14) and asymptomatic (type-III, n = 29) using a standardized questionnaire (8 items for typical, 3 items for unspecific complaints). Demographic data, body mass index, number and size of gallstones, gallbladder wall thickness and motility after a standardized meal were assessed (ultrasound study) in patients; results were compared to controls without hepatobiliary disease. Results: The gallbladder contractility was similar to controls in type-I (67%) but diminished in II (55%) and III (46%, P < 0.05). Type-I showed lower fasting (P < 0.05) and postprandial gallbladder volumes (P < 0.0005) and was associated with smaller stones (P < 0.0001), younger age (P < 0.0001) and female gender (P < 0.001). Body mass index, stone number and gallbladder wall thickness were not related to pain. Conclusions: A sluggish gallbladder may protect from biliary pain. The consideration of gallbladder motility and further risk factors (small stones, younger age and female gender) may help to predict the clinical course of gallstone patients, define atypical complaints as biliary related and select patients for treatment. abstract_id: PUBMED:29217930 Symptomatic Cholelithiasis and Cholecystectomy for a 9-Month-Old Infant: A Case Report. Background: Symptomatic cholelithiasis is rare in children. Thus, a high degree of suspicion is required for diagnosis. Once a child is diagnosed with symptomatic cholelithiasis, cholecystectomy is required to relieve the symptoms and prevent complications. Case Details: A 9-month-old infant from Addis Ababa presented to the Pediatric Department of Zewditu Memorial Hospital on January 30, 2015 with irritability and abdominal pain. On workup, she was found to have gallstones, and her condition was ultimately attributed to biliary colic after months of follow-up in the Department of Pediatrics. She underwent cholecystectomy on the 31st of July 2015 and was discharged with improvement. This is the first report of symptomatic cholelithiasis and cholecystectomy in Ethiopia at 9 months of age. Conclusion: Cholelithiasis is rare in infants, and one should have a high index of suspicion for diagnosis. Cholecystectomy should be done as in adults if symptomatic. abstract_id: PUBMED:1478733 Gastrointestinal motility in pregnancy. The gallbladder and gut should be viewed as hormonally responsive organs the normal physiology of which may be altered by the hormones of pregnancy. The gallbladder enlarges and empties sluggishly in response to meals during pregnancy. Small bowel transit is slowed, and the resting pressure of the lower esophageal sphincter is reduced. All these effects are reversed by delivery; motility reverts toward normal in the postpartum period. The rapid return of normal motility suggests that the effects of pregnancy are hormonally related. Most studies have demonstrated that progesterone, not estrogen, may be the hormone responsible.
Although incompletely defined, one mechanism of the effects of pregnancy on motility may be progesterone-induced inhibition of the mobilization of intracellular calcium within smooth muscle cells. abstract_id: PUBMED:36510302 Management of symptomatic cholelithiasis: a systematic review. Background: Symptomatic cholelithiasis is a common surgical disease and accounts for half of the over one million cholecystectomies performed in the USA annually. Despite its prevalence, only one prior systematic review has examined the evidence around treatment strategies, and it had a narrow scope. The goal of this systematic review was to analyze the clinical effectiveness of treatment options for symptomatic cholelithiasis, including surgery, non-surgical therapies, and ED pain management strategies. Methods: Literature search was performed from January 2000 through June 2020, and a narrative analysis was performed as studies were heterogeneous. Results: We identified 12 publications reporting on 10 trials (9 randomized controlled trials and 1 observational study) comparing treatment methods. The studies assessed surgery, observation, lithotripsy, ursodeoxycholic acid, electro-acupuncture, and pain-management strategies in the emergency department. Only one compared surgery to observation. Conclusion: This work presents the existing data and underscores the current gap in knowledge regarding treatment for patients with symptomatic cholelithiasis. We use these results to suggest how future trials may guide comparisons between the timing of surgery and watchful waiting to create a set of standardized guidelines. Providing appropriate and timely treatment for symptomatic cholelithiasis is important to streamline care for a costly and prevalent disease. Trial Registration: PROSPERO Protocol Number: CRD42020153153. abstract_id: PUBMED:37093281 Incidence of symptomatic gallstones after bariatric surgery: the impact of expectant management. Background: Bariatric surgery is the most effective treatment for sustained weight reduction and obesity-related comorbidities. The development of gallstones as a result of rapid weight loss is a well-known consequence of bariatric procedures. It remains unclear if there is an increased risk of these gallstones becoming symptomatic. Methods: A retrospective analysis of 505 consecutive patients submitted to either Roux-en-Y Gastric Bypass or Sleeve Gastrectomy between January and December 2019 was performed. The aim of our study was to determine the incidence of symptomatic cholelithiasis in asymptomatic patients with their gallbladder in situ after bariatric surgery and to identify potential risk factors for its development. Results: Of the 505 patients included, 79 (15.6%) underwent either previous cholecystectomy (n = 67, 84.8%) or concomitant cholecystectomy during bariatric surgery (n = 12, 15.2%). Among the remaining 426 (84.4%) patients, only 8 (1.9%) became symptomatic during the 12-month follow-up period. When compared with patients who remained asymptomatic, they had a higher median preoperative BMI (47.0 vs. 42.8, p = 0.046) and prevalence of cholelithiasis on preoperative ultrasound (62.5% vs. 10.7%, p = 0.001). Multivariate analysis revealed preoperative BMI and cholelithiasis on preoperative ultrasound as independent risk factors for symptomatic biliary disease (OR 1.187, 95% CI 1.025-1.376, p = 0.022 and OR 10.720, 95% CI 1.613-71.246, p = 0.014, respectively).
Conclusion: Considering a low incidence of symptomatic gallstones after bariatric surgery, concomitant cholecystectomy should only be performed in symptomatic patients undergoing bariatric surgery. Preoperative factors, such as a higher BMI and positive ultrasound for cholelithiasis, may be related to the development of symptomatic gallstones. abstract_id: PUBMED:37711138 Evaluation and management of symptomatic duodenal diverticula: a single-center retrospective analysis of 647 patients. Aims: To explore the clinical characteristics of patients with symptomatic duodenal diverticula and to generalize how to make appropriate treatment choices for this group of patients. Materials And Methods: From January 2010 to September 2020, a total of 647 patients with duodenal diverticula (DD) were included in this study. 345 of them with relevant symptoms were divided into the symptomatic group and the other 302 patients were in the asymptomatic group. Results: Among all patients, most DD were located in the periampullary area, <1 cm in size, and single in number. The distribution of DD localized in the 2nd portion/periampullary area (P = 0.002/P < 0.001) and with a 1 cm size cut-off value (P = 0.003) was significantly different between the symptomatic and asymptomatic groups. Multivariate logistic analysis further suggests that diverticular size (<1 cm, 1-3 cm) and combined biliary comorbidities (bile duct stones and gallstones, primary bile duct stones, cholangitis without bile duct stones) may be factors influencing the choice of treatment modality. Of all patients undergoing surgical treatment, a total of 7 cases developed various postoperative complications, and no one died. Conclusions: Patients with DD ≥1 cm or located in the periampullary area were more likely to be symptomatic. The specific size of the DD and the combination of specific biliary comorbidities may have an impact on the choice of treatment modality. abstract_id: PUBMED:8041152 Estrogen inhibits sphincter of Oddi motility. Gallstones and sphincter of Oddi dysfunction are both more common in women than men, suggesting that endogenous hormones may play an important role in these conditions. Female sex hormones are known to affect cholesterol metabolism and gallbladder motility. However, the effect of these hormones on the sphincter of Oddi has not previously been studied. We therefore tested the hypothesis that exogenous estrogen administration would inhibit sphincter of Oddi motility. Twenty-three male prairie dogs fed a nonlithogenic diet were studied. Under alpha-chloralose anesthesia, a side hole pressure-monitored perfusion catheter was positioned in the sphincter of Oddi and perfused with degassed water at 0.15 ml/min. Femoral arterial and venous catheters were placed. Sphincter of Oddi phasic wave frequency (F), amplitude (A), and motility index (MI = F x A), as well as arterial blood pressure (BP), were monitored for 10-min intervals before (control), during 20-min intravenous infusions of 0.1, 1, or 10 micrograms/kg beta-estradiol, and for 20 min after estradiol infusion. No response was observed at the 0.1- or 1-micrograms doses. Sphincter of Oddi motility was significantly (P < 0.05) reduced during estrogen infusion at the higher dose of 10 micrograms, primarily due to decreased phasic wave frequency. Sphincter motility remained depressed for at least 20 min following estrogen infusion.
We conclude that estrogen effects on the sphincter of Oddi may contribute to the higher incidence of gallstones and sphincter dysfunction seen in premenopausal women. abstract_id: PUBMED:29306087 A model of gallbladder motility. Impaired gallbladder motility leads to some clinical manifestations associated with its muscle contraction and/or the rate of filling with bile. To gain a better understanding of the possible reasons for different filling/emptying patterns we developed a mathematical model of the gallbladder that takes into account the kinetics of its filling and emptying and changes in the concentration of the accumulated bile. The model is based on four parameters responsible for the maximum speed of bile evacuation (Mg), pulsation of contractions (ω), the kinetic filling rate (kg) and the maximum bile mass accumulated in the gallbladder (mtotal). The model results were fitted to different clinical results describing gallbladder motility depending on the meal composition, patient's age and health condition (obesity and gallstones). Compatibility of the model results with the experimental data allows us to draw physiological conclusions. We found that different gallbladder emptying patterns may result from differences in the amplitude of contraction of gallbladder muscles (e.g. for various meal composition), differences in the rate of bile inflow (e.g. for obese patients during filling), and differences in gallbladder muscle pulsations (e.g. for lean patients during early gallbladder emptying). The model of gallbladder motility can facilitate identification of causes of disorders, help to explore complicated physiological pathways, and can be applied in etiology analysis or studies of observable clinical indicators. abstract_id: PUBMED:10903001 Review article: gall-bladder motility after intestinal surgery. There is a close relationship in humans between gall-bladder motility and gastrointestinal motility during the fasting state, as well as in the post-prandial period. Only a minority of publications take this relationship into account in the description of biliary dysmotility after various surgical procedures. Most publications deal with post-prandial gall-bladder motility or with stimulation of gall-bladder contraction from various prokinetic drugs or gastrointestinal hormones. Impaired gall-bladder motility has been demonstrated after Billroth II gastric resection, pyloroplasty and colectomy, but the epidemiological data on the risk of gallstone formation in these patients are too scarce and equivocal to recommend prophylactic cholecystectomy. Future studies on gall-bladder motility after surgical procedures should include measurements of gall-bladder motility during the fasting state, as well as in the post-prandial period. The measurements should be related to the migrating motor complexes, and this necessitates a simultaneous recording of gastrointestinal motility. abstract_id: PUBMED:2615319 Esophageal, gastric and duodenal motility in patients with cholelithiasis Disturbed motility of the upper gastrointestinal tract presenting with phasic derangement (the absence of all motor phases or peristaltic duodenal contraction, abnormal duration of phases) has been found out in all cholelithiasis patients examined. The findings should in no way be viewed as isolated chronic duodenal insufficiency. Impairment of esophageal, gastric and duodenal motility needs identification and correction in combined treatment, rehabilitation and prevention of cholelithiasis. 
Answer: The gallbladder motility appears to be a significant factor in differentiating between symptomatic and asymptomatic gallstone disease. A prospective study found that gallbladder contractility was similar to controls in patients with typical symptomatic gallstone disease (type-I), but it was diminished in atypical symptomatic (type-II) and asymptomatic (type-III) patients. Type-I patients also had lower fasting and postprandial gallbladder volumes, were associated with smaller stones, younger age, and female gender, but body mass index, stone number, and gallbladder wall thickness were not related to pain. This suggests that a sluggish gallbladder may protect from biliary pain and that considering gallbladder motility along with other risk factors may help predict the clinical course of gallstone patients and select patients for treatment (PUBMED:12239906). Other studies have shown that gallbladder and gastrointestinal motility can be affected by various factors, including pregnancy, where the gallbladder empties sluggishly in response to meals, and small bowel transit is slowed (PUBMED:1478733). Additionally, estrogen has been found to inhibit sphincter of Oddi motility, which may contribute to the higher incidence of gallstones and sphincter dysfunction seen in premenopausal women (PUBMED:8041152). A mathematical model of gallbladder motility has also been developed to better understand the reasons for different filling/emptying patterns, which can be influenced by factors such as meal composition, patient's age, and health condition (PUBMED:29306087). Furthermore, impaired gallbladder motility has been observed after certain surgical procedures, and there is a close relationship between gall-bladder motility and gastrointestinal motility both during fasting and post-prandial periods (PUBMED:10903001). Disturbed motility of the upper gastrointestinal tract has also been found in all cholelithiasis patients examined in one study, indicating that impairment of esophageal, gastric, and duodenal motility should be identified and corrected in the treatment and prevention of cholelithiasis (PUBMED:2615319). In conclusion, gallbladder motility is indeed a key factor in the manifestation of symptomatic versus asymptomatic gallstone disease, and understanding its role can aid in the management and treatment of patients with gallstones.
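For orientation, the gallbladder-motility model abstract above (PUBMED:29306087) names four parameters (Mg, ω, kg, mtotal) but does not reproduce the governing equations. The sketch below is a purely illustrative toy filling/emptying simulation built around those four parameter names; the functional forms, units, and numerical values are assumptions for demonstration only and are not the authors' model.

```python
import math

# Illustrative toy dynamics only: the published model's equations are not
# given in the abstract, so the forms below are assumptions.
def simulate_gallbladder(m_total=30.0,   # maximum bile mass stored (arbitrary units)
                         k_g=0.05,       # kinetic filling rate (1/min)
                         M_g=2.0,        # maximum emptying speed (a.u./min)
                         omega=0.5,      # contraction pulsation (rad/min)
                         meal_at=120.0,  # time of the meal (min)
                         t_end=240.0, dt=1.0):
    m, t, series = 0.5 * m_total, 0.0, []
    while t <= t_end:
        filling = k_g * (m_total - m)                 # filling slows as the gallbladder fills
        emptying = 0.0
        if t >= meal_at:                              # post-prandial, pulsatile emptying
            pulsation = 0.5 * (1 + math.sin(omega * (t - meal_at)))
            emptying = M_g * pulsation * (m / m_total)
        m = max(0.0, m + (filling - emptying) * dt)
        series.append((t, m))
        t += dt
    return series

for t, m in simulate_gallbladder()[::60]:
    print(f"t={t:5.0f} min  bile mass={m:5.1f}")
```

Even this toy version shows how a lower kg slows refilling while a larger Mg or a different ω reshapes the post-prandial emptying curve, which is the kind of filling/emptying pattern comparison the abstract describes for different meals and patient groups.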
Instruction: Predicting computerized physician order entry system adoption in US hospitals: can the federal mandate be met? Abstracts: abstract_id: PUBMED:18053762 Predicting computerized physician order entry system adoption in US hospitals: can the federal mandate be met? Objectives: The purpose of this study is four-fold. First, the hospitals' current level of computerized physician order entry (CPOE) adoption is reported; second, internal and external influence factors' roles in determining CPOE adoption rates are described; third, the future diffusion rate of CPOE systems in US hospitals is empirically predicted; finally, the current technology's state-of-the-art is assessed. Data Source: Secondary data from 3 years of the Leapfrog Group's annual survey (2002-2004) of US tertiary-care hospitals. Study Design: This study estimates CPOE market penetration rates by applying technology diffusion theory and Bass modeling techniques; three future CPOE adoption scenarios ('Optimistic', 'Best estimate', and 'Conservative') are empirically derived. Principal Findings: Two of the CPOE adoption scenarios have diffusion S-curves that indicate a technology will achieve significant market penetration. Under current conditions, CPOE adoption in urban hospitals will not reach 80% penetration until 2029. Conclusions: The promise of improved quality of care through medication error reductions and significant cost controls prompted the Institute of Medicine to call for universal CPOE adoption by 1999. However, the CPOE products available as of 2006 represent only a 'second generation technology', characterized by many limitations. Without increased external and internal pressures, such CPOE systems are unlikely to achieve full diffusion in US hospitals in a timely manner. Alternatively, developing a new generation of CPOE technology that is more 'user-friendly' and easily integrated into hospitals' legacy systems may be a more expedient approach to achieving widespread adoption. abstract_id: PUBMED:34949924 Implementation of Computerized Physician Order Entry in Primary Care: A Scoping Review. Purpose: This scoping review aimed to assess the implementation and outcomes of computerized physician order entry (CPOE) in primary care. Methods: A scoping review was carried out in accordance with the Joanna Briggs Institute (JBI) guidelines. The databases PubMed, CINAHL, Science Direct, and Google Scholar were all searched. The full text of each article was reviewed for eligibility after the title and abstract were evaluated. JBI data extraction was used to extract data. Donabedian's framework served as the foundation for the data discussion. Results: Based on the inclusion criteria, seven studies were included. The common main goal of the studies was to analyze the outcome or impact of implementing CPOE systems in ambulatory or primary care settings. Several studies described the framework, current state of implementation, and evaluation or recommendation following CPOE system implementation. Many positive effects were felt by physicians or prescribers, pharmacists, patients, and primary care providers, with patient safety being the primary goal. Conclusion: Although this study discovered some issues and factors associated with CPOE implementation and adoption, such as infrastructure, workflow, level of engagement, and safety culture, CPOE has many positive outcomes for patients, physicians, and primary care.
To improve CPOE adoption in healthcare, particularly primary care, more research into the structure, framework, and components of CPOE deployment is required. abstract_id: PUBMED:24764707 Toward successful migration to computerized physician order entry for chemotherapy. Background: Computerized physician order entry (CPOE) systems allow for medical order management in a clinical setting. Use of a CPOE has been shown to significantly improve chemotherapy safety by reducing the number of prescribing errors. Usability of these systems has been identified as a critical factor in their successful adoption. However, there is a paucity of literature investigating the usability of CPOE for chemotherapy and describing the experiences of cancer care providers in implementing and using a CPOE system. Methods: A mixed-methods study, including a national survey and a workshop, was conducted to determine the current status of CPOE adoption in Canadian oncology institutions, to identify and prioritize knowledge gaps in CPOE usability and adoption, and to establish a research agenda to bridge those gaps. Survey respondents were representatives of cancer care providers from each Canadian province. The workshop participants were oncology clinicians, human factors engineers, patient safety researchers, policymakers, and hospital administrators from across Canada, with participation from the United States. Results: A variety of issues related to implementing and using a CPOE for chemotherapy were identified. The major issues concerned the need for better understanding of current practices of chemotherapy ordering, preparation, and administration; a lack of system selection and procurement guidance; a lack of implementation and maintenance guidance; poor CPOE usability and workflow support; and other CPOE system design issues. An additional three research themes for addressing the existing challenges and advancing successful adoption of CPOE for chemotherapy were identified: the need to investigate variances in workflows and practices in chemotherapy ordering and administration; the need to develop best-practice CPOE procurement and implementation guidance specifically for chemotherapy; and the need to measure the effects of CPOE implementation in medical oncology. Conclusions: Addressing the existing challenges in CPOE usability and adoption for chemotherapy, and accelerating successful migration to CPOE by cancer care providers, requires future research focusing on workflow variations, chemotherapy-specific CPOE procurement needs, and implementation guidance needs. abstract_id: PUBMED:16284040 U.S. adoption of computerized physician order entry systems. Computerized physician order entry (CPOE) has been shown to reduce preventable, potential adverse events. Despite this evidence, fewer than 5 percent of U.S. hospitals have fully implemented these systems. We assess empirically alternative reasons for low CPOE implementation using data from various sources. We find that CPOE is related to hospital ownership and teaching status; government and teaching hospitals are much more likely than other hospital types to invest in CPOE. Hospital profitability is not associated with CPOE investment. Although greater diffusion of CPOE is needed, it might have to await continuing publicity efforts and substantial reimbursement system changes. abstract_id: PUBMED:21802779 Increasing adoption of computerized provider order entry, and persistent regional disparities, in US emergency departments.
Study Objective: The US government provides financial incentives for "meaningful use" of health information technology, including computerized provider order entry. We assess prevalence of emergency department (ED) computerized provider order entry in 4 states, identify characteristics predicting computerized provider order entry adoption, and assess adoption in 1 state over time, all before incentive programs. Methods: We surveyed all nonfederal EDs in Massachusetts, Colorado, Georgia, and Oregon, assessing health information technology prevalence in 2008, focusing on computerized provider order entry, an enabler of other health information technology and a key element in itself. We use multivariable logistic regression to evaluate predictors of adoption. We compared the Massachusetts data with data from a similar survey we conducted for Massachusetts in 2005, using 95% confidence intervals (CIs) to assess the change in rate. Results: We identified and surveyed 351 EDs, and 290 (83%) responded to the computerized provider order entry module. Of these, 30% had adopted computerized provider order entry. Odds of computerized provider order entry in rural EDs were 0.07 relative to urban (95% CI 0.01 to 0.39). Oregon EDs had a higher likelihood of computerized provider order entry adoption than Georgia EDs, the state with the lowest adoption (odds ratio 2.9; 95% CI 1.2 to 7.3). In 2005, 15% of Massachusetts EDs reported computerized provider order entry versus 44% in 2008 (29% difference; 95% CI 26% to 32%). Conclusion: Health information technology adoption varies by state and urbanicity, with less computerized provider order entry in rural EDs. ED computerized provider order entry adoption nearly tripled in Massachusetts from 2005 to 2008, before any financial inducements. Federal resources might be more effective if they helped providers select health information technology tools, improve health information technology design, and evaluate its influence on care delivery, versus simply calling for "more". abstract_id: PUBMED:26338217 The evolution of the market for commercial computerized physician order entry and computerized decision support systems for prescribing. Objective: To understand the evolving market of commercial off-the-shelf Computerized Physician Order Entry (CPOE) and Computerized Decision Support (CDS) applications and its effects on their uptake and implementation in English hospitals. Methods: Although CPOE and CDS vendors have been quick to enter the English market, uptake has been slow and uneven. To investigate this, the authors undertook qualitative ethnography of vendors and adopters of hospital CPOE/CDS systems in England. The authors collected data from semi-structured interviews with 11 individuals from 4 vendors, including the 2 most entrenched suppliers, and 6 adopter hospitals, and 21 h of ethnographic observation of 2 user groups, and 1 vendor event. The research and analysis was informed by insights from studies of the evolution of technology fields and the emergence of generic COTS enterprise solutions. Results: Four key themes emerged: (1) adoption of systems that had been developed outside of England, (2) vendors' configuration and customization strategies, (3) localized adopter practices vs generic systems, and (4) unrealistic adopter demands. Evidence for our over-arching finding concerning the current immaturity of the market was derived from vendors' strategies, adopters' reactions to the technology, and policy makers' incomplete insights. 
Conclusions: The CPOE/CDS market in England is still in an emergent phase. The rapid entrance of diverse products, triggered by federal policy initiatives, has resulted in premature adoption of systems that do not yet adequately meet the needs of hospitals. Vendors and adopters lacked understanding of how to design and implement generic solutions to meet diverse user needs. abstract_id: PUBMED:32744148 Computerized provider order entry-related medication errors among hospitalized patients: An integrative review. The Institute of Medicine estimates that 7,000 lives are lost yearly as a result of medication errors. Computerized physician and/or provider order entry was one of the proposed solutions to overcome this tragic issue. Despite some promising data about its effectiveness, it has been found that computerized provider order entry may facilitate medication errors. The purpose of this review is to summarize current evidence of computerized provider order entry-related medication errors and address the sociotechnical factors impacting the safe use of computerized provider order entry. By using PubMed and Google Scholar databases, a systematic search was conducted for articles published in English between 2007 and 2019 regarding the unintended consequences of computerized provider order entry and its related medication errors. A total of 288 articles were screened and categorized based on their use within the review. One hundred six articles met our pre-defined inclusion criteria and were read in full, in addition to another 27 articles obtained from references. All included articles were classified into the following categories: rates and statistics on computerized provider order entry-related medication errors, types of computerized provider order entry-related unintended consequences, factors contributing to computerized provider order entry failure, and recommendations based on addressing sociotechnical factors. Identifying major types of computerized provider order entry-related unintended consequences and addressing their causes can help in developing appropriate strategies for safe and effective computerized provider order entry. The interplay between social and technical factors can largely affect its safe implementation and use. This review discusses several factors associated with the unintended consequences of this technology in healthcare settings and presents recommendations for enhancing its effectiveness and safety within the context of sociotechnical factors. abstract_id: PUBMED:30446896 Medication prescribing errors: a pre- and post-computerized physician order entry retrospective study. Background The computerization of prescriptions with a computerized physician order entry contributes to securing the error-free drug supply, but is not risk-free. Objective: To determine the impact of a computerized physician order entry system on prescribing errors immediately after its implementation and 1 year later. Setting The Cardiology and Diabetology Departments at Toulouse University Hospital, France. Method The prescriptions were analysed by pharmacists over three 30-day periods for 3 consecutive years (N: computerization period, N - 1, N + 1). For each identified error, the prescriber was informed by a pharmaceutical intervention. The pharmaceutical interventions were counted and arranged according to the classification by the French Society of Clinical Pharmacy. Their average numbers and clinical impacts were compared for each period using t-tests and Kruskal-Wallis tests.
Main outcome measure: The average numbers of pharmaceutical interventions. Results: In total, 12.1 pharmaceutical interventions per 100 patient days were made during the N - 1 period, 14.1 during N and 9.6 during N + 1. Among those, 3.6 (N) and 2.1 (N + 1) were related to the computerization itself, and 10.5 (N) and 7.5 (N + 1) were not. The average number of computerization-related pharmaceutical interventions significantly decreased from N to N + 1 (p = 0.04). The average number of classic interventions decreased from N - 1 to N + 1 (p = 0.02). The clinical impacts of the computerization-related errors were similar to those of other errors. Conclusion: The implementation of the computerized physician order entry induced the appearance of specific computerization-related errors, but the number of classic errors decreased. The entry-system-related errors were not more severe than other errors, and their number decreased after 1 year. abstract_id: PUBMED:37497185 Trends in computerized provider order entry: 20-year bibliometric overview. Background: Drug-related problems (DRPs) can lead to serious health issues and have significant economic impacts on healthcare systems. One solution to address this issue is the use of computerized physician order entry systems (CPOE), which can help prevent DRPs by reducing the risk of medication errors. Objective: The purpose of this study is to provide an analysis of scientific production over the past 20 years in order to describe trends in academic publishing on CPOE and to identify the major topics as well as the predominant actors (journals, countries) involved in this field. Methods: A PubMed search was carried out to extract articles related to computerized provider order entry during the period January 1st, 2003 to December 31st, 2022 using a specific query. Data were downloaded from PubMed in Extensible Markup Language (XML) and were processed through a dedicated parser. Results: A total of 2,946 articles were retrieved among 623 journals. One third of these articles were published in eight journals. Publications grew strongly from 2002 to 2006, with a dip in 2008 followed by an increase again in 2009. After 2009, publication numbers declined until 2022. The most productive countries were the USA, with 51.39% of publications over the period, followed by France (3.80%) and Canada (3.77%). Regarding disciplines, the top three are: "medical informatics" (21.62% of articles), "pharmacy" (19.04%), and "pediatrics" (6.56%). Discussion: This study provides an overview of publication trends related to CPOE, which exhibited a significant increase in the first decade of the 21st century followed by a decline after 2009. Possible reasons for this decline include the emergence of digital health tools beyond CPOE, as well as healthcare professionals experiencing alert fatigue with current systems. Conclusion: Future research should focus on analyzing publication trends in the field of medical informatics and decision-making tools to identify other areas of interest that may have surpassed the development of CPOE. abstract_id: PUBMED:25557572 Effect of computerized physician order entry on imaging study indication. Purpose: The effect of computerized physician order entry (CPOE) on imaging indication quality had only been measured in one institution's emergency department using a homegrown electronic health record with faculty physicians, and only with one instrument.
To better understand how many US hospitals' recent CPOE implementations had affected indication quality, we measured its effect in a generalizable inpatient setting, using one existing and one novel instrument. Methods: We retrospectively analyzed the indications for 100 randomly selected inpatient abdominal CT studies during 2 calendar months immediately prior to a 3/3/2012 CPOE implementation (1/1/2012-2/29/2012) and during 2 subsequent calendar months (5/1/2012-6/30/2012). We excluded 2 intervening months to avoid behaviors associated with adoption. We measured indication quality using a published 8-point explicit scoring scale and our own, novel, implicit 7-point Likert scale. Results: Explicit scores increased 93% from a pre-CPOE mean ± 95% confidence interval of 1.4 ± 0.2 to a CPOE mean of 2.7 ± 0.3 (P < .01). Implicit scores increased 26% from a pre-CPOE mean of 4.3 ± 0.3 to a CPOE mean of 5.4 ± 0.2 (P < .05). When presented with a statement that an indication was "extremely helpful," and choices ranging from "strongly disagree" = 1 to "strongly agree" = 7, implicit scores of 4 and 5 signified "undecided" and "somewhat agree," respectively. Conclusions: In an inpatient setting with strong external validity to other US hospitals, CPOE implementation increased indication quality, as measured by 2 independent scoring systems (one pre-existing explicit system and one novel, intuitive implicit system). CPOE thus appears to enhance communication from ordering clinicians to radiologists. Answer: The adoption of computerized physician order entry (CPOE) systems in US hospitals has been a subject of interest due to the potential for improved quality of care and cost controls. However, the federal mandate for universal CPOE adoption has faced challenges. According to a study that applied technology diffusion theory and Bass modeling techniques, under current conditions, CPOE adoption in urban hospitals is not expected to reach 80% penetration until 2029 (PUBMED:18053762). This suggests that meeting the federal mandate for widespread CPOE adoption may be difficult without significant changes. Several factors influence the adoption of CPOE systems. Hospital ownership and teaching status have been associated with CPOE investment, with government and teaching hospitals more likely to invest in these systems than other types of hospitals. However, hospital profitability does not appear to be associated with CPOE investment (PUBMED:16284040). Additionally, there are persistent regional disparities, with rural emergency departments having significantly lower odds of CPOE adoption compared to urban ones (PUBMED:21802779).
Additionally, more research into the structure, framework, and components of CPOE deployment is required, particularly in primary care settings, to enhance adoption (PUBMED:34949924). Addressing the challenges in CPOE usability and adoption for chemotherapy, for example, requires future research focusing on workflow variations, chemotherapy-specific CPOE procurement needs, and implementation guidance needs (PUBMED:24764707). In conclusion, while CPOE has many positive outcomes for patients, physicians, and primary care, the current state of technology and various challenges associated with its implementation suggest that meeting the federal mandate for CPOE adoption across US hospitals is a complex task that may not be achieved in the near term without concerted efforts to address these issues.
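As background to the forecasting abstract cited in this answer (PUBMED:18053762), the Bass diffusion model it applies has a standard closed form for the cumulative fraction of eventual adopters. The sketch below implements that textbook formula; the coefficients p and q are hypothetical placeholders, not the paper's fitted values for CPOE.

```python
import math

def bass_cumulative_adoption(t: float, p: float, q: float) -> float:
    """Standard Bass model: fraction of eventual adopters who have adopted
    by time t, with innovation coefficient p and imitation coefficient q."""
    e = math.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

# Hypothetical coefficients (not the paper's fitted values): project the
# share of eventual CPOE adopters over 30 years from a baseline year.
p, q = 0.01, 0.25
for year in range(0, 31, 5):
    share = bass_cumulative_adoption(year, p, q)
    print(f"year {year:2d}: {share:.1%} of eventual adopters")
```

With a small innovation coefficient and a moderate imitation coefficient, the curve stays flat for years before rising steeply, which is consistent with the slow early uptake and late penetration milestones discussed above.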
Instruction: Deep infiltrating endometriosis: Should rectal and vaginal opacification be systematically used in MR imaging? Abstracts: abstract_id: PUBMED:27216959 Deep infiltrating endometriosis: Should rectal and vaginal opacification be systematically used in MR imaging? Objectives: To evaluate the value of rectal and vaginal filling in vaginal and recto-sigmoid endometriosis with MR imaging. To compare the results between a senior and a junior radiologist review. Methods: Sixty-seven patients with clinically suspected deep infiltrating endometriosis were included in our MRI protocol consisting of repeated T2-weighted sequences (axial and sagittal) before and after rectal and vaginal marking with ultrasonography gel. Vaginal and recto-sigmoid endometriosis lesions were analyzed before and after opacification. The inter-reader agreement between senior and junior scores was studied. Results: Concerning vaginal and muscularis and beyond colonic involvement, no significant difference (P=0.32) was observed and the inter-reader agreement was excellent (K=0.96 and 0.97 respectively). Concerning serosa colonic lesions, a significant difference was observed (P=0.01) and the inter-reader agreement was poor (K=0). Conclusions: Rectal and vaginal filling in endometriosis staging with MRI is not necessary regardless of reader experience. abstract_id: PUBMED:29675360 Deep infiltrating endometriosis MR imaging with surgical correlation. In this pictorial review, MR imaging findings of deep infiltrating endometriosis (DIE) are illustrated together with surgical correlation. DIE can appear as irregular nodules or plaques with similar signal intensity to muscle on both T1-weighted and T2-weighted images. Hemorrhage foci and strands or stellate margins are also often noted. Restriction of diffusion can be seen on diffusion-weighted images. Fibrosis and adhesions often result in morphologic changes, such as alimentary tract tortuosity, irregular or nodular thickening of uterosacral ligaments, and partial or complete obliteration of the pouch of Douglas. After intravenous gadolinium contrast agent administration, homo- or heterogeneous mild to moderate enhancement can be observed. MR imaging can depict endometriosis lesions and extension of DIE at different anatomic locations, which is well consistent with surgical findings. Combining signal and morphological abnormalities, MR imaging can diagnose and assess the extension of DIE with high accuracy. MR imaging findings of DIE facilitate surgeons in treatment decision making and patient communication. abstract_id: PUBMED:22187631 Levator ani muscle complex: anatomic findings in nulliparous patients at thin-section MR imaging with double opacification. Purpose: To determine levator ani muscle complex anatomic findings in nulliparous patients at magnetic resonance (MR) imaging examinations performed with opacification of the vagina and rectum with ultrasonographic gel. Materials And Methods: The institutional review board approved this retrospective study, and the informed consent requirement was waived. Findings from pelvic MR imaging examinations with double opacification in 123 consecutive nulliparous patients (mean age, 32.13 years; age range, 17-45 years) who were suspected of having endometriosis were reviewed. The pubococcygeal muscles were analyzed on coronal sections obtained through the middle part of the vagina, perineal body, and anal canal. The puborectalis muscles were analyzed on coronal sections obtained through the perineal body.
The iliococcygeal muscles were analyzed on coronal sections obtained through the rectum. Miscellaneous findings such as visibility of deep transverse muscles of the perineum, perineal body, and focal muscle defects were also noted. Results: In 56% (69 of 123) of patients, at least one morphologic variant (thinning or aplasia) of a muscle of the levator ani complex was noted. Variants of puborectalis muscles were noted in 6% of patients. Variants of iliococcygeal muscles were noted in 13%. Variants of pubococcygeal muscles were noted in 32% at the anal canal level, in 49% at the perineal body level, and in 49% at the vaginal level. Variants of pubococcygeal muscles were noted on the left side in 53 patients (77% of pubococcygeal muscle variants). Conclusion: Numerous morphologic variants of the levator ani muscle complex are noted at coronal thin-section MR imaging with double opacification. Most involve the pubococcygeal muscle on the left side at perineal body and vaginal levels. Whether some of these anatomic findings may favor prolapse after vaginal birth may be questioned. abstract_id: PUBMED:31673476 The rectal vaginal opacification with water and the antiperistaltic agent in magnetic resonance scanning of the intestinal endometriosis. The diagnosis of deep intestinal endometriosis is mandatory to plan treatment and for follow-up; however, there is no consensus worldwide in the use of rectal/ vaginal opacification and anti-peristaltic agents for magnetic resonance imaging (MRI) scanning, being defined as an option for the examination. The transvaginal ultrasound images of previous MRI with the standard protocol, and recent MRI in our institution with rectal/vaginal opacification with water and the anti-peristaltic agent are presented in four cases for comparison, respectively. The technique in our institution seems to be more effective than routine pelvic MRI scans in the intestinal endometriosis. abstract_id: PUBMED:31753239 Magnetic Resonance Enema in Rectosigmoid Endometriosis. Intestinal endometriosis occurs in 4% to 37% of women with deep endometriosis (DE). Noninvasive diagnosis of presence and characteristics of rectosigmoid endometriosis permits the best counseling of patients and ensures best therapeutic planning. Magnetic resonance enema (MR-e) is accurate in diagnosing DE. After colon cleansing, rectal distention and opacification improves the performance of MR-e in diagnosing rectosigmoid endometriosis. MR imaging cannot optimally assess the depth of penetration of endometriosis in the intestinal wall. There is a need for multicentric studies with a larger sample size to evaluate reproducibility of MR-e in diagnosis of rectosigmoid endometriosis for less experienced radiologists. abstract_id: PUBMED:21813257 Deeply infiltrating endometriosis: evaluation of retro-cervical space on MRI after vaginal opacification. Objectives: To prospectively investigate diagnostic value and tolerability of MRI after intra-vaginal gel opacification for diagnosis and preoperative assessment of deeply infiltrating endometriosis. Methods: Sixty-three women with clinical suspicion of deeply infiltrating endometriosis were previously examined with trans-vaginal ultrasonography and then with MRI pre and post administration of vaginal gel. We evaluated the tolerability of this procedure with a scoring scale from 0 to 3. We also assessed with a score from 1 to 4 the visibility of four regions: Douglas-pouch, utero-sacral-ligaments, posterior-vaginal-fornix and recto-vaginal-septum. 
All patients underwent laparoscopic surgery after MRI. Results: Five patients considered the procedure intolerable. Visibility of utero-sacral-ligaments and posterior-vaginal-fornix was shown to be increased with gel (p < 0.001). In 57 out of 80 patients, MRI allowed us to diagnose deeply infiltrating endometriosis. Overall, the percentages of MRI sensitivity, specificity, positive predictive value and negative predictive value were respectively 67.8%, 95.3%, 89.4% and 83.5% without gel, and 90.8%, 94.6%, 90.8% and 94.6% with gel; trans-vaginal ultrasonography sensitivity, specificity, positive predictive value and negative predictive value were 57.5%, 96.6%, 90.9% and 79.5%. In the evaluation of utero-sacral-ligaments, trans-vaginal ultrasonography, MRI without gel and MRI with gel sensitivity was respectively 61.9%, 47.6% and 81%; for recto-vaginal-septum these values were 12.5%, 68.7% and 93.7%; for pouch of Douglas 82%, 87% and 97.4%; finally for posterior-vaginal-fornix 27.3%, 36.4% and 81.8%. Conclusions: MRI with gel opacification of the vagina should be recommended for suspicion of deep infiltrating endometriosis, in particular for the added value in evaluation of the recto-vaginal septum, utero-sacral ligaments and posterior vaginal fornix. abstract_id: PUBMED:33392249 Value of 3D MRI and Vaginal Opacification for the Diagnosis of Vaginal Endometriosis. Objective: The aim of the study was to evaluate three-dimensional (3D) T2 MRI before and after vaginal opacification (VO) by gel (3DT2VO) and the additional value of 3DT1 with fat-suppression (3DT1FS) MRI in the diagnosis of vaginal endometriosis. Methods: In this study conducted from 2010 to 2013, 51 patients scheduled for surgical treatment of endometriosis underwent MRI 1 day before surgery. Three readers (novice, intermediate, expert) were asked to retrospectively diagnose vaginal endometriosis independently and blindly using four different readings (i.e., 3DT2, 3DT2VO, 3DT2 with 3DT1FS, 3DT2VO with 3DT1FS). Vaginal endometriosis diagnosis was positive on observation of a thickening of vaginal walls on 3DT2 with or without high-signal-intensity spots on 3DT2 and/or 3DT1FS. The reference standard was surgery and histology. Descriptive analysis, Chi-square test, and ROC curves were used for statistical analysis. Results: For all readers, the combination of 3DT2 and 3DT1FS significantly improved the diagnosis of vaginal endometriosis compared with 3DT2 (p = 0.002, p = 0.02, and p = 0.003). 3DT2VO significantly improved diagnosis for the intermediate reader (p = 0.01). High-signal-intensity spots on 3DT1FS had a sensitivity of 50-63.6%, specificity of 86.2-96.6%, and high positive likelihood ratios (14.5-Inf). Conclusion: 3DT2 in association with 3DT1FS appears to be the best 3D MRI protocol for the diagnosis of vaginal endometriosis, whatever the level of experience of readers. The additional value of 3DT2VO is variable among the readers. abstract_id: PUBMED:19106831 MR imaging features of deep pelvic endometriosis: correlation with laparoscopy Unlabelled: Deep pelvic endometriosis is an invalidating disorder affecting the retrocervical region, rectosigmoid colon and urinary bladder, generally requiring surgical management. MRI is the preoperative imaging modality of choice. The purpose of this paper is to describe the MR imaging features of deep pelvic endometriosis with laparoscopic correlation. Methods: Thirty-five patients with clinical suspicion of deep pelvic endometriosis underwent pelvic MRI.
Results of MRI, including morphological and signal characteristics of the lesions, were compared to laparoscopic findings. Results: Laparoscopy detected lesions of deep pelvic endometriosis of the uterosacral ligaments (n=10), torus uterinum (n=9), rectosigmoid (n=11), Douglas pouch (n=9), recto-vaginal septum (n=6), bladder (n=4) and posterior vaginal cul-de-sac (n=2). The sensitivity, specificity, positive predictive value and negative predictive value of MRI were assessed for each localization. Conclusion: MRI allows diagnosis of deep pelvic endometriosis of the bladder, rectosigmoid and Douglas pouch, with lower sensitivity for lesions of the uterosacral ligaments, posterior vaginal cul-de-sac and rectovaginal septum. abstract_id: PUBMED:29540333 Diagnostic performance of MR imaging, coloscan and MRI/CT enterography for the diagnosis of pelvic endometriosis: CNGOF-HAS Endometriosis Guidelines The diagnostic performance of MR imaging for the diagnosis of pelvic endometriosis is good. Even if some differences in performance exist according to the location considered, the risk of misdiagnosis is lower than 10% for trained teams (NP2). The performances of pelvic MR imaging and surgery for diagnosing endometrioma are quite similar (sensitivity and specificity >90%). A negative pelvic MR imaging allows deep pelvic endometriosis to be excluded with a performance similar to surgery, but a positive MR imaging is less accurate than surgery because of a high number of false positives (23%). Pelvic MR imaging is more sensitive and less specific than ultrasonography for the diagnosis of uterosacral ligament, vagina or recto vaginal septum involvement (NP2). Pelvic ultrasonography is more sensitive than pelvic MR imaging for the diagnosis of colorectal location (NP3). Pelvic MR imaging is a reproducible technique for the diagnosis of pelvic endometriosis (NP3). Regarding quality criteria of pelvic MR imaging, there are not enough data to recommend a specific MR unit, digestive preparation, or a specific moment during the menstrual cycle to perform the examination. Vaginal and/or rectal opacification are options. Most studies are based on a protocol including 3D T2W and 3D T1W sequences. Gadolinium injection is useful to characterize a complex adnexal mass. In clinical routine, slices crossing the kidneys are useful to evaluate the presence of pyelocaliceal distension. ColoCT is an accurate technique to diagnose pelvic digestive endometriosis (rectosigmoid and ileocaecal) (NP3). abstract_id: PUBMED:19862535 Utility of vaginal and rectal contrast medium in MRI for the detection of deep pelvic endometriosis. Objective: To study the sensitivity of MRI performed utilising vaginal and rectal opacification with ultrasound gel in the detection of deep pelvic endometriosis. Material And Methods: This was a prospective monocentric study. All patients evaluated by the gynaecologist for pelvic pain, endometriosis or infertility were included. Axial and sagittal T2-weighted images were performed both with and without vaginal and rectal opacification with ultrasound gel. Three radiologists, all blinded, interpreted the images with a minimum of 15 days between the two readings. MRI performance with and without vaginal and rectal opacification was evaluated by calculating sensitivity, specificity and both positive and negative predictive values. Results: Seventy-eight patients were included. Among these, 31 patients had deep pelvic endometriosis of which 24 were confirmed by laparoscopy.
Seventy-six locations of deep pelvic endometriosis were discovered on MRI. For the three reviewers there was a significant improvement in sensitivity between pre- and post-contrast MRI (p < 0.0002). Conclusion: Opacification of the vagina and rectum significantly improved the sensitivity of MRI for the detection of deep pelvic endometriosis by expanding the vagina and rectum, thus allowing better delineation of the pelvic organs. This was especially apparent for lesions localised to the vagina and rectovaginal septum. Answer: The use of rectal and vaginal opacification in MR imaging for deep infiltrating endometriosis (DIE) is debated. Some studies suggest that rectal and vaginal filling with ultrasonography gel during MRI may not be necessary for staging endometriosis, as no significant difference was observed in the detection of vaginal and muscularis and beyond colonic involvement, regardless of the experience level of the reader (PUBMED:27216959). However, other research indicates that rectal and vaginal opacification can improve the performance of MRI in diagnosing rectosigmoid endometriosis (PUBMED:31753239), and vaginal opacification with gel can significantly increase the visibility of certain pelvic structures, potentially aiding in the diagnosis of deeply infiltrating endometriosis (PUBMED:21813257). Moreover, the use of rectal/vaginal opacification with water and an antiperistaltic agent has been shown to be more effective than routine pelvic MRI scans for intestinal endometriosis (PUBMED:31673476). Additionally, vaginal opacification (VO) with gel in 3D T2 MRI has been reported to significantly improve the diagnosis of vaginal endometriosis for some readers, although the additional value of 3DT2VO is variable among readers (PUBMED:33392249). In contrast, another study found that MRI with vaginal and rectal opacification significantly improved the sensitivity for the detection of deep pelvic endometriosis, particularly for lesions localized to the vagina and rectovaginal septum (PUBMED:19862535). This suggests that opacification may be particularly useful for delineating pelvic organs and improving lesion detection in certain locations. In summary, while some evidence suggests that rectal and vaginal opacification may not be systematically necessary for all cases of DIE (PUBMED:27216959), other studies highlight its potential benefits in improving diagnostic accuracy for specific pelvic locations and facilitating surgical planning (PUBMED:31753239, PUBMED:21813257, PUBMED:31673476, PUBMED:33392249, PUBMED:19862535). Therefore, the decision to use rectal and vaginal opacification in MR imaging for DIE may depend on the specific clinical scenario and the structures of interest.
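A note on the diagnostic metrics quoted in the abstracts and answer above: sensitivity, specificity, and the predictive values all derive from the same 2x2 cross-tabulation of the imaging finding against the surgical or histological reference standard. The short Python sketch below illustrates that arithmetic only; the counts are hypothetical and are not taken from any of the cited studies.

    def diagnostic_metrics(tp, fp, fn, tn):
        # tp/fn: diseased patients with positive/negative imaging
        # fp/tn: disease-free patients with positive/negative imaging
        sensitivity = tp / (tp + fn)   # proportion of diseased patients detected
        specificity = tn / (tn + fp)   # proportion of disease-free patients correctly cleared
        ppv = tp / (tp + fp)           # positive predictive value
        npv = tn / (tn + fn)           # negative predictive value
        return sensitivity, specificity, ppv, npv

    # Hypothetical counts for illustration only (not study data).
    sens, spec, ppv, npv = diagnostic_metrics(tp=40, fp=4, fn=8, tn=26)
    print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, PPV={ppv:.1%}, NPV={npv:.1%}")

Reading the studies with this table in mind makes it clearer why opacification mainly shifts sensitivity and the negative predictive value: distension adds true-positive detections at a given site without changing the number of disease-free patients.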
Instruction: Is naturopathy as effective as conventional therapy for treatment of menopausal symptoms? Abstracts: abstract_id: PUBMED:14499029 Is naturopathy as effective as conventional therapy for treatment of menopausal symptoms? Background: Although the use of alternative medicine in the United States is increasing, no published studies have documented the effectiveness of naturopathy for treatment of menopausal symptoms compared to women receiving conventional therapy in the clinical setting. Objective: To compare naturopathic therapy with conventional medical therapy for treatment of selected menopausal symptoms. Design: A retrospective cohort study, using abstracted data from medical charts. Setting: One natural medicine and six conventional medical clinics at Community Health Centers of King County, Washington, from November 1, 1996, through July 31, 1998. Patients: Women aged 40 years of age or more with a diagnosis of menopausal symptoms documented by a naturopathic or conventional physician. Main Outcome Measures: Improvement in selected menopausal symptoms. Results: In univariate analyses, patients treated with naturopathy for menopausal symptoms reported higher monthly incomes ($1848.00 versus $853.60), were less likely to be smokers (11.4% versus 41.9%), exercised more frequently, and reported higher frequencies of decreased energy (41.8% versus 24.4%), insomnia (57.0% versus 33.1%), and hot flashes (69.6% versus 55.6%) at baseline than those who received conventional treatment. In multivariate analyses, patients treated with naturopathy were approximately seven times more likely than conventionally treated patients to report improvement for insomnia (odds ratio [OR], 6.77; 95% confidence interval [CI], 1.71, 26.63) and decreased energy (OR, 6.55; 95% CI, 0.96, 44.74). Naturopathy patients reported improvement for anxiety (OR, 1.27; 95% CI, 0.63, 2.56), hot flashes (OR, 1.40; 95% CI, 0.68, 2.88), menstrual changes (OR, 0.98; 95% CI, 0.43, 2.24), and vaginal dryness (OR, 0.91; 95% CI, 0.21, 3.96) about as frequently as patients who were treated conventionally. Conclusions: Naturopathy appears to be an effective alternative for relief of specific menopausal symptoms compared to conventional therapy. abstract_id: PUBMED:16862739 A method for describing and evaluating naturopathic whole practice. Context: Even though complementary and alternative medicine (CAM) is generally practiced as distinct systems of medicine, almost all CAM research has focused on single therapies. In order to more adequately evaluate the effectiveness of these medical systems, studies that evaluate the outcome of intact whole systems are needed. One challenge lies in defining the whole medical system (and any medical system it is compared to) in a way that ensures treatment fidelity. Objective: This paper presents a proposed method to measure treatment fidelity (treatment criteria) in studies of the naturopathic medical system. Design: Illustrative example of the theory-based development and post-hoc "testing" of treatment criteria against an existing database of actual treatments prescribed by a random sample of naturopathic physicians. Main Outcome Measures: Treatment criteria for 3 conditions--menopausal symptoms, bowel dysfunction, and fatigue/fibromyalgia--and their comparison to actual treatments prescribed. 
Results: A set of meaningful, measurable treatment criteria based on the naturopathic practice principles were defined that could have generated the majority (82%-93%) of treatment prescriptions given at visits for these conditions. Several of the treatment criteria components are common across the 3 conditions studied, and might be appropriate for all visits to doctors of naturopathy (NDs). Others are specific to each condition. In addition to ensuring model validity, these criteria help identify critical components of care, enable study replication, provide a measure of quality of care, and are one step toward allowing CAM to be studied as it is generally practiced: as distinct systems of medicine. Setting: Work was performed at Bastyr University and the University of Arizona. abstract_id: PUBMED:26245055 How to stop hormone replacement therapy? Hormone replacement therapy is indicated for the treatment of menopausal and urogenital symptoms. The therapy is recommended to be started at the lowest effective dose for a minimum period of time. Discontinuation of the therapy or at least reduction of the dose should be considered yearly. The treatment can be stopped immediately or gradually. The risk of recurrence of menopausal symptoms is equal for both techniques of cessation. Furthermore, the number of those having restarted the therapy does not differ between cessation techniques. A woman is thus able to choose her own cessation technique. Immediate cessation is often successful, whereby complicated instructions for drug reduction are avoided. abstract_id: PUBMED:32627593 Cognitive behavioral therapy for menopausal symptoms. This article describes cognitive behavioral therapy (CBT) for women with problematic menopausal symptoms, and provides the evidence from clinical trials of women going through the menopause, women with breast cancer treatment-induced symptoms and women with problematic symptoms in a work context. The CBT focus is primarily on vasomotor symptoms (VMS) but it also targets stress, low mood and sleep problems. CBT is a brief therapy (four to six sessions) that is theory- and evidence-based; it is acceptable to women and effectively reduces the impact of VMS, improves sleep and has benefits to quality of life. VMS frequency is also reduced significantly in some trials but not others. CBT has been found to be consistently effective when delivered in groups, self-help book and on-line formats (with or without additional support). The MENOS 1 and MENOS 2 CBT protocols are recommended for the treatment of VMS by the North American Menopause Society (2015); CBT has been recommended for the treatment of anxiety and depression for women during the menopause transition and post menopause (NICE, 2015); and telephone CBT has been shown to be an effective treatment for insomnia. abstract_id: PUBMED:29693846 ALTERNATIVES OF MENOPAUSAL HORMONE THERAPY. Introduction: It has been generally accepted that the benefits of menopausal hormone therapy outweigh the risks, but there are still some concerns about the administration of menopausal hormone therapy, which has led to the introduction of alternative treatments. Pharmacological Alternatives. Central alpha-2 agonist clonidine is only marginally more effective than placebo, and significantly less effective than estrogen. Antiepileptic drug gabapentin reduces hot flashes; however, it is less effective than estrogen.
Selective serotonin reuptake inhibitors (paroxetine and fluoxetine) and selective noradrenaline reuptake inhibitors (venlafaxine) reduce vasomotor symptoms and improve depression, anxiety and sleep. Results of studies about dehydroepiandrosterone effects on menopausal symptoms are inconsistent and additional investigations are needed. Non-Pharmacological Alternatives. Stellate ganglion blockade is a successful treatment for reducing vasomotor symptoms in patients with contraindications for menopausal hormone therapy. Efficacy of acupuncture, homeopathy and reflexology should be proved by adequate studies. Phytoestrogens could reduce vasomotor symptoms but to a lesser extent than conventional menopausal hormone therapy. However, they have not yet been proved to provide cardiovascular protection and prevention of osteoporosis, nor can they be recommended instead of traditional menopausal hormone therapy. There is a concern about their undesirable effects. Adequate diet, body weight maintained within ideal values and adequate physical activities have beneficial long-term effects, first of all on preservation of bone density. Alternatives for Atrophic Changes of Vaginal Epithelium. Menopausal symptoms resulting from vaginal atrophy could be resolved by use of hydrophilic preparations, lubricants and topical lidocaine cream or 4% lidocaine water solution for dyspareunia. Conclusion: If there are contraindications to menopausal hormone therapy or patients are unwilling to take hormone therapy, alternative treatments, which can also relieve menopausal symptoms, should be considered. abstract_id: PUBMED:22831824 Compounded bioidentical menopausal hormone therapy. Although improvement in long-term health is no longer an indication for menopausal hormone therapy, evidence supporting fewer adverse events in younger women, combined with its high overall effectiveness, has reinforced its usefulness for short-term treatment of menopausal symptoms. Menopausal therapy has been provided not only by commercially available products but also by compounding, or creation of an individualized preparation in response to a health care provider's prescription to create a medication tailored to the specialized needs of an individual patient. The Women's Health Initiative findings, coupled with an increase in the direct-to-consumer marketing and media promotion of compounded bioidentical hormonal preparations as safe and effective alternatives to conventional menopausal hormone therapy, have led to a recent increase in the popularity of compounded bioidentical hormones as well as an increase in questions about the use of these preparations. Not only is evidence lacking to support superiority claims of compounded bioidentical hormones over conventional menopausal hormone therapy, but these claims also pose the additional risks of variable purity and potency and lack efficacy and safety data. The Committee on Gynecologic Practice of the American College of Obstetricians and Gynecologists and the Practice Committee of the American Society for Reproductive Medicine provide an overview of the major issues of concern surrounding compounded bioidentical menopausal hormone therapy and provide recommendations for patient counseling. abstract_id: PUBMED:17039856 Menopace nutrient therapy: an alternative approach to pharmaceutical treatments for menopause. Considerable controversy surrounds the use of hormone replacement therapy (HRT) for treatment of peri-menopausal symptoms.
Recent publications from three large, prospective randomized studies call the safety of HRT into question, and leave patients searching for answers. Nutrient therapy may provide symptomatic relief without increasing risk of chronic disease. In this study, results of a series of uncontrolled prospective studies of peri-menopausal symptom relief using Menopace nutrient therapy were combined to provide a broad perspective on the safety and effectiveness of this alternative treatment modality. Data from seven studies with a total of 766 subjects were analyzed. Subjects with specific menopausal symptoms reported improvement after three months of daily use of the therapy, ranging from 87.8% of subjects with hot flashes to 67.5% of subjects with poor concentration reporting improvement. Overall improvement in menopausal symptoms was reported in 93.2% of all subjects. These results provide consistent evidence of the effectiveness of comprehensive, nutritionally balanced nutrient therapy for treatment of menopausal symptoms. While most evidence-based practitioners focus primarily on research results from randomized, controlled clinical trials, other forms of research evidence can also guide clinicians searching for safe and effective treatment options for their patients. abstract_id: PUBMED:36470539 Drugs for the treatment of postmenopausal symptoms: Hormonal and non-hormonal therapy. Postmenopausal symptoms are systemic symptoms associated with estrogen deficiency after menopause. At present, treatments for postmenopausal symptoms include hormonal therapy (HT) and non-HT. However, the optimal regimen for balancing the benefits and risks remains unclear. This article reviewed the characteristics, regimens, and side effects of drugs used in hormonal and non-HT. However, HT is still the most effective treatment with safety in early initiation since menopause onset. Nevertheless, it is essential to evaluate the risks of related chronic diseases and customize individualized treatments. Possible estetrol preparations and more types of Tissue Selective Estrogen Complex formulations are potential directions of drug development in the future of HT. Regarding non-HT, fezolinetant, currently in phase III clinical trials, is poised to become a first-in-class therapy for vasomotor symptoms. Ospemifene, dehydroepiandrosterone (DHEA), and vaginal lasers can also be used for moderate-to-severe genitourinary syndrome of menopause. Recent data suggest a superior efficacy and safety of vaginal lasers, but more validated evidence of long-term tolerability is needed to respond to the United States Food and Drug Administration warning. Herbal medication commonly used in Asia is effective in alleviating menopausal symptoms; however, its adverse effects still require more detailed reports and standardized observation methods. This review contributes to a better understanding of drugs for the treatment of postmenopausal symptoms and provides useful information for clinical drug selection. abstract_id: PUBMED:18042008 Menopausal hormone therapy for vasomotor symptoms: balancing the risks and benefits with ultra-low doses of estrogen. Estrogen therapy is the most consistently effective treatment and the only therapy approved by the FDA for menopausal vasomotor symptoms. 
Following the safety issues reported in the primary Women's Health Initiative publications and with continued patient requests for treatment, a challenge to clinicians has been to identify the lowest effective dose of estrogen for alleviating menopausal symptoms. A number of low-dose estrogen preparations are now available, and transdermal preparations containing an ultra-low dose (25% of the previous conventional or standard dose) of estrogen have recently been approved by the FDA. These preparations effectively relieve menopausal symptoms such as vasomotor symptoms and vaginal atrophy, and potentially protect against bone loss. Compared with standard-dose estrogen therapy, these ultra-low-dose products have an improved tolerability profile and may require reduced amounts or a lower frequency of progestogen administration, potentially mitigating the apparent long-term adverse effects of estrogen-progestogen combinations, as noted in the Women's Health Initiative. abstract_id: PUBMED:22825109 Committee opinion No. 532: compounded bioidentical menopausal hormone therapy. Although improvement in long-term health is no longer an indication for menopausal hormone therapy, evidence supporting fewer adverse events in younger women, combined with its high overall effectiveness, has reinforced its usefulness for short-term treatment of menopausal symptoms. Menopausal therapy has been provided not only by commercially available products but also by compounding, or creation of an individualized preparation in response to a health care provider’s prescription to create a medication tailored to the specialized needs of an individual patient. The Women’s Health Initiative findings, coupled with an increase in the direct-to-consumer marketing and media promotion of compounded bioidentical hormonal preparations as safe and effective alternatives to conventional menopausal hormone therapy, have led to a recent increase in the popularity of compounded bioidentical hormones as well as an increase in questions about the use of these preparations. Not only is evidence lacking to support superiority claims of compounded bioidentical hormones over conventional menopausal hormone therapy, but these claims also pose the additional risks of variable purity and potency and lack efficacy and safety data. The Committee on Gynecologic Practice of the American College of Obstetricians and Gynecologists and the Practice Committee of the American Society for Reproductive Medicine provide an overview of the major issues of concern surrounding compounded bioidentical menopausal hormone therapy and provide recommendations for patient counseling. Answer: The retrospective cohort study described in abstract PUBMED:14499029 suggests that naturopathy may be an effective alternative for relief of specific menopausal symptoms compared to conventional therapy. In this study, patients treated with naturopathy were approximately seven times more likely to report improvement for insomnia and decreased energy. However, for other symptoms such as anxiety, hot flashes, menstrual changes, and vaginal dryness, naturopathy patients reported improvement about as frequently as patients who were treated conventionally. This indicates that while naturopathy may be particularly effective for certain symptoms like insomnia and decreased energy, its effectiveness for other menopausal symptoms is comparable to that of conventional treatments. 
It is important to note that the effectiveness of naturopathy, as with any treatment, may vary depending on the individual and the specific symptoms being treated. Additionally, the study in question was retrospective and based on self-reported data, which may introduce some biases or limitations in the findings. Nonetheless, the results of this study provide some evidence that naturopathy could be considered as an alternative or complementary approach to conventional therapy for the treatment of menopausal symptoms.
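A side note on the statistics cited in this answer: the odds ratios and 95% confidence intervals in PUBMED:14499029 come from multivariate logistic regression, but the unadjusted version of the same quantity can be computed directly from a 2x2 table of improvement by treatment group. The Python sketch below shows that calculation with made-up counts; it illustrates the method only and does not reproduce the study's data.

    import math

    def odds_ratio_with_ci(a, b, c, d, z=1.96):
        # a/b: improved / not improved in the naturopathy group
        # c/d: improved / not improved in the conventional-therapy group
        odds_ratio = (a * d) / (b * c)
        se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Wald standard error of log(OR)
        lower = math.exp(math.log(odds_ratio) - z * se_log_or)
        upper = math.exp(math.log(odds_ratio) + z * se_log_or)
        return odds_ratio, lower, upper

    # Hypothetical counts for illustration only (not the trial's data).
    or_, lo, hi = odds_ratio_with_ci(a=45, b=15, c=30, d=40)
    print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")

The width of such an interval is driven by the smallest cell count, which is why symptom-specific estimates in a modest cohort (for example, decreased energy in the study above) can have confidence intervals that cross 1 even when the point estimate is large.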
Instruction: Is children's or parents' coffee or tea consumption associated with the risk for type 1 diabetes mellitus in children? Abstracts: abstract_id: PUBMED:8039488 Is children's or parents' coffee or tea consumption associated with the risk for type 1 diabetes mellitus in children? Childhood Diabetes in Finland Study Group. Objective: The study was carried out to determine whether coffee or tea consumption by the child before diagnosis of diabetes or consumption by parents at the time of the child's conception or during pregnancy was associated with the risk for childhood type 1 diabetes. Design: Case-control study. Setting And Subjects: All diabetic children younger than 15 years, and diagnosed from September 1986 to the end of April 1989, were invited to participate. 600 newly diagnosed diabetic children and 536 randomly selected population-based children, and their parents took part in a nationwide study. Results: The risk for type 1 diabetes was increased in the children who consumed at least 2 cups of coffee daily [odds ratio (OR) 1.94, 95% confidence interval (CI) 1.08-3.47], and in the children who consumed 1 cup of tea (OR 1.69, 95% CI 1.21-2.37) or at least 2 cups daily (OR 2.59, 95% CI 1.60-4.18) when adjusted for mother's education, child's age and child's sex. Parents' consumption of coffee or tea during conception of the child and mother's coffee consumption during pregnancy did not affect the risk for diabetes in the children. Conclusions: We observed an increased risk for type 1 diabetes in the children who consumed coffee or tea regularly. abstract_id: PUBMED:35628987 Family and Individual Quality of Life in Parents of Children with Developmental Disorders and Diabetes Type 1. Background: This cross-sectional study assessed both family and individual quality of life (QOL), and their association with self-esteem, optimism, chronic psychological stress, anxiety, and depression in parents of children with chronic conditions. Methods: Parents of children with Down syndrome (DS), autistic spectrum disorder (ASD), cerebral palsy (CP), diabetes mellitus type 1 (DMT1), and parents of children without chronic diseases with typical development (TD) were included. Multivariate linear regression analysis was used to assess parental characteristics associated with the domains of individual and family QOL. Results: Compared to the parents of TD children, parents of children with ASD and DS were more likely to report reduced family QOL in all domains, while parents of children with DMT1 had lower parental perception. Self-esteem was positively associated with all domains of individual QOL, while optimism was associated with the overall individual QOL perception and health. Higher stress perception was negatively associated with most of the domains of individual and family QOL. Conclusions: This study confirmed that parents of children with chronic conditions are more likely to have lower perception of both individual and family QOL, which were associated with self-esteem, chronic stress, anxiety, and depression. Interventions should focus not only on the child with a chronic condition but on parents too. abstract_id: PUBMED:23977442 Behavioral Science Research Informs Bioethical Issues in the Conduct of Large-Scale Studies of Children's Disease Risk. Background: Birth cohort studies of the natural history of pediatric common disease risk raise many bioethical issues, including re-consenting participants over time as children mature and cohort retention. 
Understanding participants' study-specific knowledge, attitudes, beliefs, and behavior may offer insights into these issues from a psychological perspective. Methods: We conducted an analysis of factors associated with parent-child communication about minor children's participation in a population-based birth cohort; children's knowledge about their own participation; and parental willingness to be re-contacted for future study among Swedish parents (N = 3,605) of children originally enrolled at birth in a prospective study of type 1 diabetes risk. Results: More open parent-child communication about disease risk screening research and greater knowledge among children about their own research participation facilitated greater parent willingness to participate in further study. Parents' decisions about further study participation were most strongly favorable among those who communicated openly with their child and with high study-specific knowledge. Conclusions: Epidemiologists, bioethicists, and others involved in the design and conduct of large-scale, prospective birth cohorts may consider embedding periodic assessments of participants' study-specific attitudes and behavior to address long-term retention and willingness to engage in future research. abstract_id: PUBMED:23763537 Psychometric properties of the Pediatric Testing Attitudes Scale-Diabetes (P-TAS-D) for parents of children undergoing predictive risk screening. Objective: Examine the factor structure, reliability, and validity of the Pediatric Testing Attitudes Scale-Diabetes (P-TAS-D), a measure of parental attitudes about predictive risk screening for type 1 diabetes in children. Methods: Surveys were completed by 3720 Swedish parents of children participating in the adolescent follow-up of a birth cohort study of type 1 diabetes onset. Parents averaged 43.5 years, 42.3% were college-educated, and 10.6% of children had a family history of type 1 diabetes. The parent sample was randomly divided, an exploratory factor analysis (EFA; n = 1860) was conducted, followed by confirmatory factor analysis (CFA; n = 1860) and testing. Results: EFA/CFA revealed the P-TAS-D has three factors/scales: Attitudes and Beliefs toward type 1 diabetes predictive risk screening (α = 0.92), Communication about risk screening results (α = 0.71), and Decision Making (r = 0.19, p < 0.001). This solution fit the data well (χ²(42) = 536.0, RMSEA = 0.08, CFI = 0.95) and internal consistency for the full scale was high (α = 0.86, M = 36.2, SD = 8.2). After adjusting for covariates, more favorable attitudes toward children's risk screening were associated with greater worry about type 1 diabetes (B = 1.1, p < 0.001), less worry about health overall (B = -0.10, p = 0.001), and more positive attitudes toward (B = 0.28, p < 0.001) and less worry about (B = 0.41, p < 0.001) diabetes research. Conclusions: The P-TAS-D is a stable, reliable, and valid measure for assessing parents' type 1 diabetes risk screening attitudes. Scale data can help target parent education efforts in risk screening trials.
The objective of this review was to understand the determinants of these problems in a family context. We conducted a systematic review to investigate what lifestyle and psychological factors influence children with Type 1 diabetes and their parents. A focused literature search was performed using a combination of keywords that covered the relevant terminology for diabetes, target population, and associated emotional distress, using electronic bibliographic databases containing publications until May 2022. Methodological quality was assessed using the Quality Assessment Tools for Quantitative Studies. Twenty articles met the inclusion criteria. Quality scores were weak because of a lack of comparison groups, information about the type of therapy, or adequate sample sizes. Many of the studies included a wide age range in their sample. The majority of the studies reported that parents and their children showed depression symptoms, fear of hypoglycaemia, and higher parenting stress. We conclude that sufficiently powered studies employing appropriate control groups and measures are needed to elucidate the psychological variables associated with Type 1 diabetes in children and the effects on parents, especially considering primary-age children who are increasingly reported to suffer from poor mental health, and its implications. This should help to introduce better targeted interventions and improve behavioural outcomes. abstract_id: PUBMED:36949972 Parental Resilience and Physical Health in Parents of Children With Type 1 Diabetes in Northern Greece. Background: Type 1 diabetes mellitus (T1DM) is the most common endocrine and metabolic disorder in children. On the other hand, little is known regarding the health of parents whose children suffer from T1DM. Aim: The study aims to investigate the mental resilience and physical health of parents of children with type 1 diabetes. Methods: The sample consisted of 80 parents of children and adolescents with T1DM. The study was conducted with the contribution of associations of parents of children with type 1 diabetes in a large hospital in Northern Greece between April 2021 and September 2021. A demographic and clinical questionnaire, the Wagnild and Young Resilience Scale-14 (RS-14), and the General Health 28 Physical Health Measurement Questionnaire (GHQ-28) were used to collect the research data. Results: Of the parents, 18.8% were male while 65% were female. The mean age of the parents was 44.02±6.71 years while the age of their children with diabetes was 13.13±6.05 years. Almost half of the children followed intensive insulin treatment (47.5%) whereas 22.5% reported that their children received insulin via a pump. A higher percentage of parents reported measuring their children's blood sugar more than six times a day (46.3%) and having their glycated hemoglobin (HbA1c) levels checked four times a year (51.2%). Finally, statistically significant effects on the physical symptoms and severe depression of parents of children with type 1 diabetes were observed. Conclusions: Additional research is needed to assess the Greek parent population's resilience and physical health. This study will help healthcare providers to expand their knowledge and meet parents' needs.
Background: Parents of children with type 1 diabetes mellitus (T1DM) are under heavy caregiving stress, and parental caregivers' experience can affect the health outcomes of children with T1DM. Aim: To describe the true inner feelings of parents caring for children with T1DM. Methods: Descriptive research methods were used to classify and summarize parents' experience when adapting to the role of caregivers for children with T1DM. The data was sorted and analyzed using content analysis. Themes of parents' experience caring for children with T1DM were refined, and their feelings were deeply investigated. Results: A total of 4 themes and 12 subthemes were identified: (1) Desire for information (disease-related information, home care information, and channels of information acquisition); (2) Skill guidance needs (insulin injection techniques, skills required for symptom management, and skills for parent-child communication); (3) Seeking emotional support (family support, peer support from other parents of children with T1DM, and professional support); and (4) Lack of social support (needs for financial support and needs for social security). Conclusion: Exploring the true experience of parents caring for children with T1DM is of great significance for helping them adapt to their role as caregivers. Nurses should provide professional guidance in terms of information, skills, emotion, and social support to parental caregivers. abstract_id: PUBMED:23833609 Affective responses of the parents after diagnosis of type 1 diabetes in children. Background: These days, diabetes is deemed as one of the most important health and social-economic problems of the world. Since parents play a major role in treatment of diabetes, the most important part of managing diabetes is in the hands of the parents of children affected by diabetes. This special responsibility will increase the stress and family challenges and impacts parents' emotional responses. The affective reactions or responses of the parents can also be conveyed to the child himself and reduce self-care, increase glucose levels, increase the possibility of complications and reduce the quality of life. Thus, it is highly important to recognize the affective reactions of parents during various stages of the disease for the purpose of intervention. Materials And Methods: All parents of children diagnosed with insulin-dependent diabetes who referred to Sedigheh-ye-Tahereh Endocrinology and Metabolism Research Center, Isfahan, Iran, were selected and the Symptom Checklist-90 (SCL-90) was filled in five stages (immediately, one month, three months, six months and twelve months after diagnosis). Convenient sampling was used to select 45 consecutive subjects out of whom 10 dropped out during the study. Findings: The major problems of the study subjects at the beginning of diagnosis were depression, anxiety and physical problems, respectively. Three, six and twelve months later, they were depression, obsession and physical problems. Over time, the mean score of parents' affective reactions declined which indicated the acceptance of the disease by parents over time. Conclusions: In view of the fact that both mother and father of children with diabetes suffer from affective problems and since fathers refer to diabetes centers less than mothers, some decisions should be made to mentally support both fathers and mothers. abstract_id: PUBMED:15998896 Coffee consumption and risk of type 2 diabetes: a systematic review. 
Context: Emerging epidemiological evidence suggests that higher coffee consumption may reduce the risk of type 2 diabetes. Objective: To examine the association between habitual coffee consumption and risk of type 2 diabetes and related outcomes. Data Sources And Study Selection: We searched MEDLINE through January 2005 and examined the reference lists of the retrieved articles. Because this review focuses on studies of habitual coffee consumption and risk of type 2 diabetes, we excluded studies of type 1 diabetes, animal studies, and studies of short-term exposure to coffee or caffeine, leaving 15 epidemiological studies (cohort or cross-sectional). Data Extraction: Information on study design, participant characteristics, measurement of coffee consumption and outcomes, adjustment for potential confounders, and estimates of associations was abstracted independently by 2 investigators. Data Synthesis: We identified 9 cohort studies of coffee consumption and risk of type 2 diabetes, including 193 473 participants and 8394 incident cases of type 2 diabetes, and calculated summary relative risks (RRs) using a random-effects model. The RR of type 2 diabetes was 0.65 (95% confidence interval [CI], 0.54-0.78) for the highest (≥6 or ≥7 cups per day) and 0.72 (95% CI, 0.62-0.83) for the second highest (4-6 cups per day) category of coffee consumption compared with the lowest consumption category (0 or ≤2 cups per day). These associations did not differ substantially by sex, obesity, or region (United States and Europe). In the cross-sectional studies conducted in northern Europe, southern Europe, and Japan, higher coffee consumption was consistently associated with a lower prevalence of newly detected hyperglycemia, particularly postprandial hyperglycemia. Conclusions: This systematic review supports the hypothesis that habitual coffee consumption is associated with a substantially lower risk of type 2 diabetes. Longer-term intervention studies of coffee consumption and glucose metabolism are warranted to examine the mechanisms underlying the relationship between coffee consumption and type 2 diabetes. abstract_id: PUBMED:29501444 Association between habitual coffee consumption and metabolic syndrome in type 1 diabetes. Background And Aims: In the general population, habitual coffee consumption is inversely associated with the metabolic syndrome, a syndrome that is rather common also in patients with type 1 diabetes. However, whether coffee intake is beneficially related to the metabolic syndrome also in type 1 diabetes, is not known. We, therefore, studied the potential association between coffee consumption and the metabolic syndrome in a large population of individuals with type 1 diabetes. Furthermore, we investigated whether coffee consumption is associated with insulin resistance (estimated glucose disposal rate, eGDR), kidney function (estimated glomerular filtration rate, eGFR), and low-grade chronic inflammation (high-sensitivity C-reactive protein, hsCRP). Methods And Results: Data from 1040 participants in the Finnish Diabetic Nephropathy Study were included in these cross-sectional analyses. Metabolic syndrome was assumed if at least 3 of the following cardiovascular risk factors were present: central obesity, high blood pressure, low HDL-cholesterol concentration, high triglyceride concentration, and hyperglycaemia.
Subjects were categorized based on self-reported daily coffee intake: non-consumers (<1 cup/d), low (≥1 to <3 cups/d), moderate (≥3 to <5 cups/d), and high coffee consumption (≥5 cups/d). In multivariable logistic regression analysis, moderate and high coffee consumption was associated with increased odds of the metabolic syndrome. Moreover, any level of coffee consumption was associated with increased risk of the blood pressure component. An increasing trend was observed in the eGFR with increasing coffee consumption. Conclusions: In type 1 diabetes, high coffee intake is associated with the metabolic syndrome, and especially its blood pressure component. Answer: Children's consumption of coffee or tea is associated with an increased risk for type 1 diabetes mellitus (T1DM) in children. Specifically, the risk for T1DM was increased in children who consumed at least 2 cups of coffee daily (odds ratio [OR] 1.94, 95% confidence interval [CI] 1.08-3.47) and in children who consumed 1 cup of tea (OR 1.69, 95% CI 1.21-2.37) or at least 2 cups daily (OR 2.59, 95% CI 1.60-4.18) when adjusted for mother's education, child's age, and child's sex.
Instruction: Pathological findings after radical prostatectomy in men eligible for active surveillance (French trial SURACAP): is the misclassification rate acceptable? Abstracts: abstract_id: PUBMED:21482401 Pathological findings after radical prostatectomy in men eligible for active surveillance (French trial SURACAP): is the misclassification rate acceptable? Objective: To analyze pathological data of the radical prostatectomy specimen in patients operated for clinically-localized prostate cancer and who meet strict criteria for active surveillance necessary to be included in the French trial SURACAP. Patients And Methods: The data of patients who underwent a radical prostatectomy at our institution between 1998 and 2010 were reviewed. We only included the patients that met the usual criteria for active surveillance: clinical stage T1-2a tumor, PSA ≤ 10 ng/mL, biopsy Gleason sum inferior or equal to 6 with no pattern of grade 4 or 5, cancer involvement inferior or equal to two biopsy cores, inferior to 3 mm of malignant tissue in each positive biopsy core. From them, only those who were diagnosed from a second line biopsies cores were included for further analysis. Results: Overall, 48 patient who met the "SURACAP" criteria had a laparoscopic radical prostatectomy at out institution. Mean age was 65.4 years. The mean preoperative PSA was 6.1 ng/mL. Clinical stage of the tumor was T1c in 95% of patients and T2a in 5%. Biopsy Gleason score was 6 (3+3) in 100%. Pathological analysis of the surgical specimen showed that 19% of patients had a seminal vesicle invasion or an extracapsular extension. The Gleason score of the pathological specimen was 6 (3+3) in 57% of patients, 7 (3+4) in 38% and 8 (4+4) in 5% of patients. The Gleason score upgrading was 43% of patients. Conclusion: In our experience, 19% of patients who meet the criteria for active surveillance show an extracapsular extension or a seminal vesicle invasion on pathological analysis. Active surveillance is still under evaluation. abstract_id: PUBMED:20006888 Pathological findings and prostate specific antigen outcomes after radical prostatectomy in men eligible for active surveillance--does the risk of misclassification vary according to biopsy criteria? Purpose: We compared the pathological findings and prostate specific antigen outcome after radical prostatectomy in men eligible for active surveillance according to 3 biopsy inclusion criteria. Materials And Methods: The study population included 177 men eligible for active surveillance who fulfilled clinicobiological criteria and biopsy criteria as group 1-less than 3 positive cores and less than 3 mm total tumor length, group 2-less than 3 positive cores with cancer involvement of less than 50% in any core and group 3-less than 33% of positive cores. Prostate specific antigen density cutoffs were also studied in these groups. Pathological findings on radical prostatectomy specimens and biochemical recurrence-free survival were studied. Median followup after radical prostatectomy was 34 months. Results: A majority of Gleason score 6 disease was observed in group 1 (51.7%) whereas a majority of Gleason score 7 or greater disease was reported in groups 2 (53.6%) and 3 (55.4%). Extracapsular extension was noted in 17.5% of radical prostatectomy specimens in group 3 vs 11.2% in group 1 (p = 0.175). 
The risk of overall unfavorable disease (defined as pT3-4 stage and/or Gleason score 8 or greater) was significantly higher in men with cancer involvement of 3 mm or greater on initial biopsy (27.3% vs 13.5%, respectively, p = 0.023). The 3-year biochemical recurrence-free survival rate was 94.0% and was not affected by the 3 active surveillance definitions. Conclusions: Even with the use of a 21-core biopsy protocol the rate of unfavorable disease in radical prostatectomy specimens remains increased in men eligible for active surveillance. Patients must be informed of this risk of misclassification which ranges from 20% to 28% in men who fulfill the less stringent biopsy criteria. abstract_id: PUBMED:27744325 Evaluation of predictors of unfavorable pathological features in men eligible for active surveillance using radical prostatectomy specimens: a multi-institutional study. Objective: Active surveillance has emerged as an alternative to immediate treatment in men with favorable-risk prostate cancer; however, consensus about defining the appropriate candidates is still lacking. To examine the factors predicting unfavorable pathology among active surveillance candidates, we assessed low-risk radical prostatectomy specimens. Methods: This retrospective study included 1753 men who had undergone radical prostatectomy at six independent institutions in Japan from 2005 to 2011. Patients who met the active surveillance criteria were categorized depending on the pathological features of the radical prostatectomy specimens. 'Reclassification' was defined as upstaging (≥pT3) or upgrading (radical prostatectomy Gleason score ≥7), and 'adverse pathology' was defined as pathological stage ≥pT3 or radical prostatectomy Gleason score ≥4 + 3. Multivariate analysis was used to analyze the preoperative factors for reclassification and adverse pathology. The rates of reclassification and adverse pathology were evaluated by classifying patients according to biopsy core numbers. Results: The active surveillance criteria were met by 284 cases. Reclassification was identified in 154 (54.2%) cases, while adverse pathology was identified in 60 (21.1%) cases. Prostate-specific antigen density and percentage of positive cores were independently associated with reclassification and adverse pathology. The rates of reclassification and adverse pathology were significantly higher among patients with <10 biopsy cores than among others. Thus, focusing on 149 patients with ≥10 biopsy cores, prostate-specific antigen density was the only independent predictor of unfavorable pathological features. Receiver operating characteristic curve analysis determined an optimal cut-off value of prostate-specific antigen density of 0.15 ng/ml². Conclusions: Prostate-specific antigen density is the most important predictor of unfavorable pathological features in active surveillance candidates. abstract_id: PUBMED:24168232 Pathological upgrading and upstaging of patients eligible for active surveillance according to currently used protocols. Objectives: To investigate the ability of six contemporary active surveillance protocols to appropriately select active surveillance candidates among Korean men who underwent radical prostatectomy. Methods: Between January 2001 and December 2011, 1968 patients underwent radical prostatectomy for prostate cancer at Samsung Medical Center, Seoul, Korea.
Patients met the criteria for active surveillance according to six currently used criteria, including those from the Johns Hopkins Hospital, the University of Toronto, the University of California at San Francisco, the Prospective Prostate Cancer Research International Active Surveillance, the University of Miami and the Memorial Sloan-Kettering Cancer Center. The rates of Gleason score upgrading, upstaging and misclassification at final pathology were assessed. Results: Among 1006 assessable patients, the percentage of men eligible for active surveillance varied from 13.5% to 38.5%, depending on the criteria used. The rates of upgrading ranged from 41.6% to 50.6%. Extracapsular extension was reported in 4.1% to 8.5% of patients, whereas seminal vesicle invasion was reported in 0.5% to 1.6% of patients. The upstaging rates according to the six active surveillance criteria varied from 4.5% to 9.3%, and the rates of misclassification varied from 44.5% to 54.8%. Conclusions: Currently available active surveillance criteria might not be suitable in Korean patients with prostate cancer, as they have a high likelihood of underestimating cancer. abstract_id: PUBMED:23680308 Expanded criteria to identify men eligible for active surveillance of low risk prostate cancer at Johns Hopkins: a preliminary analysis. Purpose: At our institution the eligibility criteria used to enroll patients in active surveillance are clinical stage T1, prostate specific antigen density less than 0.15 ng/ml, biopsy Gleason score 6 or less, 2 or fewer positive biopsy cores and 50% or less involvement of any biopsy core. We hypothesized that these criteria may be excessively strict, precluding many men from active surveillance. Materials And Methods: We studied pathological outcomes in men treated with radical prostatectomy between 1995 and 2012 who met 4 or more of the 5 active surveillance criteria. Outcomes included a definition of significant tumor (pathological Gleason 7 or greater, or nonorgan confined). We compared adverse pathology rates between men who met all 5 vs 4 of 5 active surveillance criteria. Results: Of 8,261 men 1,890 (22.9%) met all active surveillance eligibility criteria and 2,133 (25.8%) met 4. Men with values exceeding prostate specific antigen density and biopsy Gleason criteria were at increased risk for adverse pathological outcomes. Clinical stage greater than T1 was not associated with adverse pathological findings. The risk of significant tumors in men with clinical stage T2 lesions, 3 or fewer positive biopsy cores and less than 60% core involvement was comparable to that of men who met all active surveillance criteria. Conclusions: Prostate specific antigen density greater than 0.15 ng/ml and biopsy Gleason score 7 or greater are strongly associated with adverse pathological findings at radical prostatectomy. Our findings suggest that active surveillance criteria should be expanded to include men with clinical stage T2 lesions and a greater number of positive biopsy cores of low grade. Based on these preliminary findings, we are in the process of reassessing active surveillance eligibility criteria using more detailed pathological analysis. abstract_id: PUBMED:20534686 Pathological findings at radical prostatectomy in Japanese prospective active surveillance cohort. Objectives: The present study was carried out to analyze pathological features of prostatectomy specimens performed at different timing and trigger during active surveillance. 
Methods: One hundred and thirty-four patients that fit a selection condition similar to the so-called Hopkins' criteria were enrolled into the present study between January 2002 and December 2003. Patients were recommended to start curative treatment when they showed a prostate-specific antigen-doubling time of 2 years or shorter or pathological progression at 1-year re-biopsy. Median observation period was 61 months. Results: Fourteen patients underwent radical prostatectomy immediately after enrollment (Group A) whereas 28 patients underwent radical prostatectomy after substantial periods of active surveillance (Group B). Of the 28 Group B patients, the trigger of radical prostatectomy was on protocol in 17 patients (Group B1) whereas 11 patients underwent radical prostatectomy by their own preference (Group B2). Upgrade from initial biopsy was observed in 43% of Group A and 68% of Group B. Upgrade was more frequently observed in Group B1 than B2 with borderline significance (P = 0.075). Perineural infiltration and positive surgical margin rates of Group B1 were significantly higher than those of B2 (P < 0.05). Conclusions: Unfavorable pathological features of surgical specimens were more frequently observed in patients who underwent radical prostatectomy due to short prostate-specific antigen-doubling time or biopsy findings than in those who underwent radical prostatectomy for other reasons including patients' preference. Rates of unfavorable pathological features at radical prostatectomy that deviated from the initial selection criteria were high enough to support integration of frequent biopsies into the active surveillance program. abstract_id: PUBMED:19913808 Eligibility for active surveillance and pathological outcomes for men undergoing radical prostatectomy in a large, community based cohort. Purpose: We analyzed competing active surveillance criteria in men who underwent radical prostatectomy in relation to outcome data in a large, community based cohort. Materials And Methods: We identified all men from the CaPSURE database who underwent radical prostatectomy from 1999 to 2007 and met inclusion criteria for the stringent prospective University of California-San Francisco and Johns Hopkins active surveillance protocols. Rates of pathological upgrading, up staging and biochemical recurrence were compared. Results: We identified 2,837 men who underwent radical prostatectomy and had complete pathological and followup data available. Of these men 1,375 and 125 met University of California-San Francisco and Johns Hopkins criteria, respectively. When comparing men who met the 2 sets of criteria vs those who met University of California-San Francisco criteria only, there were no significant differences in the rate of upgrading (20% vs 27%, p = 0.07) and up staging (6% vs 8%, p = 0.39) at radical prostatectomy. At a median 36-month followup 5-year biochemical recurrence-free estimates were similar at 92% in men who met the 2 sets of criteria and 90% in those who met the University of California-San Francisco definition only. On multivariate analysis upgrading to 7 or greater (HR 2.2, 95% CI 1.2-4.2), up staging (HR 3.5, 95% CI 1.3-9.3), and upgrading plus up staging (HR 6.9, 95% CI 3.3-14.5) were associated with a higher risk of biochemical recurrence in patients who met University of California-San Francisco criteria.
Conclusions: Men who met enrollment criteria for the 2 active surveillance protocols had a similar rate of upgrading, up staging and 5-year biochemical recurrence-free rates after radical prostatectomy. Further comparison between current protocols is warranted to establish universal inclusion criteria. abstract_id: PUBMED:22104434 Pathological outcomes of men eligible for active surveillance after undergoing radical prostatectomy: are results predictable? Introduction: To analyze pathological results in patients with prostate cancer eligible for active surveillance (AS) after radical prostatectomy and available prediction systems. Methods: A retrospective analysis was performed of 612 patients who underwent radical prostatectomy during a 14-year period. Subsequently, we selected those patients who would have been eligible for AS according to 2 different published criteria. Group AS-A matched the following criteria: ≤T2a; Gleason Score ≤6; and prostate-specific antigen <10 ng/mL, while group AS-B applied to different criteria: ≤T2a; Gleason Score <7; and prostate-specific antigen ≤15 ng/mL. Pathological outcomes were compared with results of the 2001 Partin tables. Results: Altogether, 125 (20.4%) patients were included in group AS-A and 159 (25.9%) in group AS-B. We detected 32 cases of >pT2c (25.6%) for group AS-A and 47 cases (29.6%) for AS-B, respectively. Gleason score upgrading was recorded in 34.4% (AS-A) and 38.3% (AS-B). Results of the Partin tables showed good discrimination among patients at risk for positive lymph nodes but limited discrimination for organ-confined disease, seminal vesicle. Conclusions: Overall >25% of patients eligible for AS showed either upstaging or Gleason score upgrading, which could not be measured with the examined predictive tools. Patients should be informed about the risks of inaccurate preoperative diagnostics. abstract_id: PUBMED:23321581 Pathological, oncologic and functional outcomes of radical prostatectomy following active surveillance. Purpose: We examined prostatectomy pathology, and oncologic and functional outcomes of men progressing from active surveillance to radical prostatectomy. Materials And Methods: We identified patients on active surveillance treated with radical prostatectomy. We compared patients on active surveillance ultimately treated with radical prostatectomy to age and prostate specific antigen matched men undergoing immediate radical prostatectomy after a diagnosis of low risk disease who were candidates for active surveillance (group 1). We also compared patients on active surveillance with progression to Gleason 7 disease to men treated who had similar de novo disease (group 2) to determine whether patients on active surveillance have potentially adverse outcomes. Results: Of 289 patients on active surveillance 41 (14.2%) underwent radical prostatectomy after a median of 35.2 months (IQR 22.8-46.6) on active surveillance. Compared to group 1, the radical prostatectomy after active surveillance group had expectedly worse pathological outcomes, whereas the pathological outcomes of patients undergoing radical prostatectomy after active surveillance with progression to Gleason 7 disease were similar to those of group 2. At a median of 3.5 years from radical prostatectomy (IQR 2.6-4.7), biochemical recurrence was low and comparable between the radical prostatectomy after active surveillance group and group 1 (2.6% vs 5.4%, p = 0.47), while erectile function was 29.0% and continence 89.7%, comparable to both groups.
Conclusions: Radical prostatectomy after a period of active surveillance does not appear to result in adverse pathological outcomes compared to patients with a similar preoperative pathology. abstract_id: PUBMED:29680952 Functional outcomes of robot-assisted radical prostatectomy in patients eligible for active surveillance. Objective: To assess the outcome of low risk prostate cancer (PCa) patients who were candidates for active surveillance (AS) but had undergone robot-assisted radical prostatectomy (RARP). Method: We reviewed our prospectively collected database of patients operated by RARP between 2006 and 2014. Low D'Amico risk patients were selected. Oncological outcomes were reported based on pathology results and biochemical failure. Functional outcomes on continence and potency were reported at 12 and 24 months. Continence was assessed by the number of pads per day. With respect to potency, it was assessed using the Sexual Health Inventory for Men (SHIM) and Erectile Hardness Scale (EHS). Results: Out of 812 patients, 237 (29.2%) patients were D'Amico low risk and were eligible for analysis. 44 men fit Epstein's criteria. 134 (56.5%) men had pathological upgrading. Age and clinical stage were predictors of upgrading on multivariate analysis. 220 (92.8%) patients had available follow-up for biochemical recurrence, potency, and continence for 2 years. The mean and median follow-up was 34.8 and 31.4 months, respectively. Only 5 (2.3%) men developed BCR, all of whom had pathological upgrading. Extracapsular extension and positive surgical margins were observed in 14.8 and 19.1%, respectively. 0 pad was achieved in 86.7 and 88.9% at 1 and 2 years, respectively. Proportion of patients with SHIM > 21 at 1 and 2 years was 24.8 and 30.6%, respectively. Moreover, patients having erections adequate for intercourse (EHS ≥ 3) were seen in 69.6 and 83.1% at 1 and 2 years, respectively. Functional outcomes of patients fitting Epstein's criteria (n = 44) and patients with no upgrading on final pathology (n = 103) were not significantly different compared to the overall low risk study group. Conclusion: This retrospective study showed that RARP is not without harm even in patients with low risk disease. On the other hand, considerable rate of upgrading was noted. Answer: The pathological findings after radical prostatectomy in men eligible for active surveillance in the French trial SURACAP indicate that there is a significant rate of misclassification. In the study, 19% of patients who met the criteria for active surveillance showed an extracapsular extension or a seminal vesicle invasion on pathological analysis. Additionally, the Gleason score of the pathological specimen was upgraded in 43% of patients, with 57% having a Gleason score of 6 (3+3), 38% having a score of 7 (3+4), and 5% having a score of 8 (4+4) (PUBMED:21482401). This high rate of misclassification suggests that the criteria used to select patients for active surveillance may not be stringent enough to accurately identify those with truly low-risk prostate cancer. The presence of extracapsular extension or seminal vesicle invasion, as well as the upgrading of Gleason scores, indicates that a significant proportion of patients may have more aggressive disease than initially thought, which could impact their long-term outcomes if managed with active surveillance alone.
Therefore, the misclassification rate observed in this study may not be considered acceptable, as it implies that a notable percentage of men who could potentially benefit from more aggressive treatment are instead being managed conservatively based on their preoperative assessments. This underscores the need for improved selection criteria or additional diagnostic tools to more accurately stratify the risk of prostate cancer patients and determine the most appropriate management strategy.
Instruction: Is failure to provide venous thromboprophylaxis negligent? Abstracts: abstract_id: PUBMED:18265533 Is failure to provide venous thromboprophylaxis negligent? Objectives: To report the outcome of 100 consecutive medicolegal claims referred to one of the authors (1990-2003) following the development of venous thromboembolism (VTE) in surgical patients. Methods: A retrospective analysis of the experience of a vascular surgeon acting as an expert witness in the United Kingdom. Results: Prophylaxis had been provided to 43 claimants with risk factors, who, unfortunately, still developed a VTE and alleged negligence. Twenty-nine claims involved patients who had not received prophylaxis because they were at low risk. In 25/28 claims where no prophylaxis was provided, despite identifiable VTE risk factors, the claim was successful. Claimants who developed a VTE that had been managed incorrectly were successful whether they had received prophylaxis or not. Settlement amounts, where disclosed, are reported. Conclusions: Failure to perform a risk assessment and to provide appropriate venous thromboprophylaxis in surgical patients is considered negligent. Clinicians looking after all hospitalized patients who are not assessing their patients' risk for VTE and/or not providing appropriate prophylaxis are at risk of being accused of negligence. abstract_id: PUBMED:37537891 Central venous catheter safety in pediatric patients with intestinal failure. Children with intestinal failure (IF) require long-term central venous access to provide life-sustaining parenteral nutrition. Mechanical, thrombotic, and infectious complications are potentially life-threatening and may necessitate central venous catheter (CVC) replacement. Repeated central line replacements may lead to a loss of vascular access sites and increases risk for intestinal transplantation. Children with IF face unique challenges for CVC safety given their young age, altered anatomy, and increased risk of thrombosis and infection. The following review addresses preventative, diagnostic, and treatment strategies for central line safety concerns specific to children with IF as well as recommendations for promoting catheter safety during activities, travel, and emergencies. abstract_id: PUBMED:34459379 Venous Thromboembolism in Patients with Heart Failure. Heart failure (HF) and venous thromboembolism (VTE) are common clinical entities, closely interrelated, sharing multiple pathophysiological mechanisms. Their co-incidence is associated with further worsening of the prognosis of one another. Despite their frequent co-existence, important clinical questions still remain unanswered. The risk of VTE especially in chronic HF patients appears to vary widely in clinical studies, while the VTE-associated risk in HF patients is still not well determined and cannot be accurately predicted. Although scientific guidelines recommend venous thromboprophylaxis in patients hospitalized with an acute HF syndrome, venous thromboprophylaxis has not been studied adequately in prospective trials in ambulatory HF patients. In the present review, we aimed to summarize the current knowledge on the epidemiology of VTE and HF, the risk prediction for VTE occurrence in HF patients, the impact on patient outcome, and the need for anticoagulation in certain HF subgroups to improve prognosis, while we sought to identify gaps in knowledge that need to be addressed in the future. 
abstract_id: PUBMED:21396509 Venous thromboembolism in heart failure: preventable deaths during and after hospitalization. Objective: Our aim was to compare the clinical characteristics, prophylaxis, treatment, and outcomes of patients with venous thromboembolism with and without heart failure. Methods: We studied patients with heart failure in the population-based Worcester Venous Thromboembolism Study of 1822 consecutive patients with validated venous thromboembolism. Results: Of the 1822 patients with venous thromboembolism, 319 (17.5%) had a history of clinical heart failure and 1503 (82.5%) did not. Patients with heart failure were older (mean age 75 vs 62 years, P<.0001) and more likely to have been immobilized (65.2% vs 46.1%, P<.0001). Thromboprophylaxis was omitted in approximately one third of patients with heart failure who had been hospitalized for non-venous thromboembolism-related illness or had undergone major surgery within the 3 months before diagnosis. Patients with heart failure had a higher frequency of in-hospital death (9.7% vs 3.3%, P<.0001) and death within 30 days of venous thromboembolism diagnosis (15.6% vs 6.4%, P<.0001). Heart failure (adjusted odds ratio [OR] 2.04; 95% confidence interval [CI], 1.15-3.62) and immobility (adjusted OR 4.37; 95% CI, 2.42-7.9) were associated with an increased risk of in-hospital death. Heart failure (adjusted OR 1.57; 95% CI, 1.01-2.43) and immobility (adjusted OR 3.05; 95% CI, 2.01-4.62) also were independent predictors of death within 30 days of venous thromboembolism diagnosis. Conclusion: High mortality was observed among patients with heart failure and venous thromboembolism both during and after hospitalization. Heart failure and immobility are potent risk factors for in-hospital death and death within 30 days in patients with venous thromboembolism. abstract_id: PUBMED:34509140 It's not always the shunt: Microthrombi formation in venous collaterals causing symptoms of shunt failure in the setting of shunted hydrocephalus. We present a patient with a history of shunted hydrocephalus due to neonatal iatrogenic thoracic venous occlusion with subsequent interval development of spontaneous thoracic venous collateral occlusion as a young adult presenting with symptoms of ventriculoperitoneal shunt failure. Though the patient's presenting symptoms were suggestive of shunt failure in the setting of known shunt dependent hydrocephalus, specific ophthalmologic findings, including venous engorgement, retinal and subconjunctival hemorrhages as well as periorbital edema in conjunction with papilledema, led to the correct diagnosis of cranio-orbital congestion secondary to microthrombi formation in the venous collateral anomalies of her chest wall. This pathology was successfully managed with warfarin. abstract_id: PUBMED:26765646 Heart failure and risk of venous thromboembolism: a systematic review and meta-analysis. Background: Venous thromboembolism is a major global health problem that is often secondary to other clinical situations. Many studies have investigated the association between venous thromboembolism and heart failure, but have yielded inconsistent findings. We aimed to quantify the absolute and relative risks (RR) for venous thromboembolism in patients with heart failure after hospital admission. We also assessed rates of venous thromboembolism in patients in different settings.
Methods: In this systematic review and meta-analysis, we searched for studies investigating the risk of venous thromboembolism in patients in hospital with heart failure. We searched for studies published between Jan 1, 1955, and March 31, 2015, in PubMed, Embase, Evidence-Based Medicine Reviews, Allied and Complementary Medicine Database, Ovid HealthSTAR, Global Health, Ovid Nursing Database, Web of Science, CINAHL Plus, ProQuest Central, Conference Papers Index, BIOSIS Previews, and ClinicalTrials.gov. All cohort studies and subgroup analyses of randomised controlled trials (RCTs) were eligible for inclusion if they reported venous thromboembolism rates (number of events per follow-up period) or RR estimates. We extracted data from published reports and contacted the corresponding authors of records with insufficient quantitative data. RRs and 95% CIs were pooled using a random-effects model. This study is registered with PROSPERO, number CRD42014015504. Findings: Of 8673 records identified, we included 71 studies with data from 88 cohorts in our analysis, with 59 cohorts included in the assessment of venous thromboembolism rates and 46 cohorts included in the meta-analysis of heart failure and risk of venous thromboembolism. Venous thromboembolism rates varied widely in patients in hospital with heart failure from different settings. The overall median symptomatic venous thromboembolism rate was 2·48% (IQR 0·84-5·61); rates were 3·73% (1·05-7·31) for patients who did not receive thromboprophylaxis and 1·47% (0·64-3·54) for those who did. Overall, patients with heart failure in hospital had an RR of 1·51 (1·36-1·68) for venous thromboembolism. The overall I(2) statistic was 96·1% and there was no evidence of publication bias (Egger's test, p=0·46). Interpretation: Heart failure is a common independent risk factor for venous thromboembolism. Thromboprophylaxis should be considered in clinical practice for high-risk patients. Funding: National Natural Science Foundation. abstract_id: PUBMED:27800228 Abducens Palsy Due to Cerebral Venous Sinus Thrombosis in a Patient with Heart Failure. Cerebral venous sinus thrombosis has a wide spectrum of presentation. The clinical manifestation depends on the location of the thrombus, its rate of progression, and the extent of venous collateralization. In this case report, we present the findings of cerebral venous sinus thrombosis presenting with abducens palsy and papilloedema in a patient with heart failure, an unusual etiology for cerebral venous sinus thrombosis. abstract_id: PUBMED:30952453 Development of venous thrombi in a pediatric population of intestinal failure. Background/purpose: Although pediatric intestinal failure (IF) is now a survivable diagnosis, children are still at risk for complications. Loss of venous access persists as a leading indication for intestinal transplantation. The goal of this study was to identify risk factors for loss of venous access in a pediatric intestinal failure population on long-term PN. Methods: We identified all patients who were PN dependent. Results: Patients that developed venous thrombosis had significantly more lines placed in the first 2 years of life compared to those who did not develop thrombosis. Multivariate regression analysis revealed that diagnosis (NEC and gastroschisis) and parental education were significant predictors of venous thrombosis.
Conclusion: By identifying potential risk factors for thrombus development, interventions can be developed to improve the overall outcome in pediatric IF patients. Type of Study: Diagnostic. Level of Evidence: III. abstract_id: PUBMED:34967025 Anticoagulants decrease the risk for catheter-related venous thrombosis in patients with chronic intestinal failure: A long-term cohort study. Background: Catheter-related venous thrombosis (CRVT) is a severe complication of home parenteral nutrition. Although primary prevention of CRVT is crucial, there is no consensus on anticoagulant use to prevent this adversity. The aim was to compare CRVT risk in patients with chronic intestinal failure (CIF) in the presence or absence of anticoagulants, and to identify CRVT risk factors. Methods: This retrospective cohort study comprised adult patients with CIF with a central venous access device (CVAD) between 2010 and 2020 that were treated at our national CIF referral center. Analyses were performed at a CVAD level. Results: Overall, 1188 CVADs in 389 patients were included (540,800 CVAD days). Anticoagulants were used in 403 CVADs. In total, 137 CRVTs occurred in 98 patients, resulting in 0.25 CRVTs/1000 CVAD days (95% CI, 0.22-0.29). Anticoagulant use was associated with a decreased CRVT risk (odds ratio [OR] = 0.53; 95% CI, 0.31-0.89; P = 0.02). Left-sided CVAD insertion (OR = 2.00; 95% CI, 1.36-2.94), a history of venous thrombosis (OR = 1.73; 95% CI, 1.05-2.84), and a shorter period postinsertion (OR = 0.78; 95% CI, 0.65-0.92) were independently associated with an increased CRVT risk. Conclusion: Anticoagulants decreased the CRVT risk. In addition, we identified left-sided vein insertion, a history of venous thrombosis, and a shorter period post-CVAD insertion as CRVT risk factors. Further prospective studies should provide guidance on whether prophylactic anticoagulant use, especially in higher-risk patients with a left-sided CVAD or a history of venous thrombosis, is justified. abstract_id: PUBMED:26936289 Central venous thrombosis in children with intestinal failure on long-term parenteral nutrition. Purpose: Central venous thrombosis (CVT) is a serious complication of long-term central venous access for parenteral nutrition (PN) in children with intestinal failure (IF). We reviewed the incidence of CVT and possible risk factors. Methods: Children with IF on home PN (2010-2014) with central venous imaging were reviewed. Patient demographics, catheter characteristics and related complications, and markers of liver function were compared between children with and without CVT. Serum thrombophilia markers were reviewed for patients with CVT. Results: Thirty children with central venous imaging were included. Seventeen patients had thrombosis of ≥1 central vein, and twelve had ≥2 thrombosed central veins. Patients with and without CVT had similar demographics and catheter characteristics. Patients with CVT had a significantly lower albumin level (2.76±0.38g/dL vs. 3.12±0.41g/dL, p=0.0223). The most common markers of thrombophilia in children with CVT were antithrombin, protein C and S deficiencies, and elevated factor VIII. There was a statistically significant correlation between a combined protein C and S deficiency and having >1 CVT. Conclusions: Children with IF on long-term PN are at high risk for CVT potentially owing to low levels of natural anticoagulant proteins and elevated factor VIII activity, likely a reflection of liver insufficiency and chronic inflammation.
Answer: Based on the provided abstracts, failure to perform a risk assessment and to provide appropriate venous thromboprophylaxis in surgical patients is considered negligent. In a retrospective analysis of medicolegal claims in the UK, it was found that in cases where no prophylaxis was provided despite identifiable VTE risk factors, the majority of claims were successful, indicating that the standard of care was not met (PUBMED:18265533). This suggests that clinicians are at risk of being accused of negligence if they do not assess their patients' risk for VTE and/or do not provide appropriate prophylaxis.
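As a purely illustrative arithmetic check of the magnitudes cited above (descriptive only, since these are median rates from uncontrolled cohorts in PUBMED:26765646, not a controlled estimate of treatment effect): the median symptomatic VTE rate was 3.73% without thromboprophylaxis versus 1.47% with it, an absolute difference of 3.73% - 1.47% = 2.26 percentage points; 1 / 0.0226 ≈ 44, i.e., on the order of one symptomatic event per roughly 44 hospitalized heart failure patients if that entire difference were attributable to prophylaxis, an assumption the cohort data alone cannot establish.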
Instruction: Follow-up of known carcinoid liver metastases: is respiratory-gated T(2) fast spin-echo enough? Abstracts: abstract_id: PUBMED:21474918 Follow-up of known carcinoid liver metastases: is respiratory-gated T(2) fast spin-echo enough? Purpose: To compare the reliability of T(1)-weighted, T(2)-weighted, and different phases of dynamic contrast-enhanced MRI in the detection and reproducible size assessment of known carcinoid hepatic metastases. Materials And Methods: 22 patients with known carcinoid hepatic metastases qualified for the study. Three readers reviewed MRI images twice independently at sessions that were >2 weeks apart. The best sequences for metastases conspicuity, number and size, and reproducibility of size were compared subjectively. Linear mixed models were used to compare the number and size of metastases between readers and sequences, with the significance level set at p < 0.05. Results: The best overall sequence rated was T(2) FSE (fast spin-echo). The average number of metastases was equivalent using T(1)-weighted arterial and T(2) FSE but less for T(2) FRFSE (fast-recovery, fast spin-echo) or delayed imaging. 1,067 lesions were detected and 66 were measured twice by three readers. There was no significant difference between the sequences or between the readings in size measurement when the same sequence was used. However, there was a difference among sequences for size of metastases (p < 0.001). Conclusion: T(2) FSE can be used as a basic sequence in detecting and monitoring the size of carcinoid hepatic metastases and may serve as the primary sequence in patients with contrast allergy or at risk for nephrogenic systemic fibrosis. abstract_id: PUBMED:12933487 Spectrum of MRI appearances of untreated metastases of the liver. Objective: The purpose of our study was to identify the spectrum of MRI appearances of untreated liver metastases from different primary origins. Materials And Methods: Over a period of 52 months, we used our clinical information system to retrospectively identify the first MRIs obtained in 165 consecutive patients who had untreated liver metastases. All patients had histologic confirmation of the primary tumor. Liver metastases were confirmed at histologic examination, on imaging, or at clinical follow-up. MR sequences used included T1-weighted spoiled gradient-echo, T2-weighted half-Fourier acquisition single-shot turbo spin-echo, and serial gadolinium-enhanced spoiled gradient-echo imaging. Size, signal intensity characteristics, and pattern of enhancement of the metastases on MRIs were evaluated by two radiologists in consensus. Lesions were categorized by size: smaller than 1.5 cm, between 1.5 and 3.0 cm, and larger than 3.0 cm. Results: A total of 516 metastases (size range, 5-120 mm; mean, 28 mm) were assessed. Fifty-nine patients had hypervascular lesions, and 106 patients had hypovascular lesions. A significant difference in proportion of tumor vascularity was observed between the primary tumors described as classically hypervascular and those described as classically hypovascular (chi-square test for proportions of 70.8, p < 0.0001). The most common pattern was peripheral ring (72% of patients) seen on the arterial dominant phase images, with incomplete central progression (63%) seen on the delayed phase images.
A hypointense ring seen in the periphery of the tumor during the delayed phase was the most common appearance in hypervascular metastases (27% of patients) and was particularly conspicuous in patients with neuroendocrine and carcinoid tumors. Perilesional enhancement was common (47%), mostly seen in hypovascular metastases (92%). Generally, large lesions tended to show a peripheral ring or heterogeneous enhancement, and small lesions showed homogeneous enhancement. Conclusion: MRI allows the identification of a wide spectrum of appearances of untreated liver metastases. The extent and pattern of enhancement of various histologic types of tumor are depicted on MRI. abstract_id: PUBMED:9571956 Mangafodipir trisodium (MnDPDP)-enhanced magnetic resonance imaging of the liver and pancreas. Contrast-enhanced magnetic resonance imaging (MRI) of the liver and pancreas is frequently performed to improve the sensitivity and specificity of lesion detection in these organs. The concept of using tissue-specific contrast media is to selectively enhance the normal parenchyma, but not lesions, so that the contrast between tumorous and normal tissue is increased, and lesion detectability improved. Mangafodipir trisodium (MnDPDP) has been developed as a hepatocellular-specific contrast agent, but uptake has also been found in pancreatic tissue. In this study the safety and diagnostic efficacy of MnDPDP were investigated in both healthy volunteers and in patients with liver and pancreatic tumors. In healthy volunteers (n = 8), dose-dependent enhancement in T1-weighted images was observed in the normal liver and pancreatic parenchyma after infusion of MnDPDP at doses of 5 and 10 μmol/kg. The maximal enhancement in the two dose groups was 77 and 110% in the liver, and 57 and 84% in the pancreas, respectively. The enhancement-over-time profiles demonstrated that the effective imaging window was about 2 h for the liver, and over 4 h for the pancreas. There was no measurable enhancement in brain structures protected by an intact blood-brain barrier, and no changes of clinical importance were found in vital signs or in blood and urinary chemistry variables. Compared with unenhanced images (including T2-weighted images), significantly more lesions were detected on MnDPDP-enhanced T1 images in 82 patients with liver tumors (mostly metastases). Features such as rim enhancement and the enhancement in hepatocellular carcinomas can provide information for differential diagnosis. In a study on patients with pancreatic tumors, mainly adenocarcinomas (n = 21) and islet cell tumors (n = 19), two additional lesions were found in the MnDPDP-enhanced images. The contrast enhancement in the pancreatic parenchyma can vary greatly, depending on the site of the enhancing part of the organ in relation to a large tumor. The tumors of both origins were also enhanced post-contrast, but to a lesser degree than the normal pancreatic tissue. MnDPDP enhancement was investigated in 30 liver metastases from endocrine tumors in 13 patients. These lesions showed a signal increase of about 49% post-contrast, which lasted longer than that in the normal liver tissue. The findings may help to distinguish these tumors from other metastatic tumors. T1-weighted sequences of four types, including a spin-echo and three variants of fast gradient-echo sequences, and various parameter combinations, were investigated in healthy volunteers (n = 6), with the aim of finding the optimal sequence for MnDPDP-enhanced MRI of the liver and pancreas.
The fat-and-water out-of-phase, fast field (gradient)-echo sequence was the best for imaging of both the liver and pancreas. The studies have shown that MnDPDP is safe when given as an infusion, and is effective as a liver- and pancreas-specific contrast medium, with improved lesion detection in MRI of these organs. It is also useful for the characterization of liver tumors. abstract_id: PUBMED:9626886 Uptake of mangafodipir trisodium in liver metastases from endocrine tumors. The purpose of the study was to investigate retrospectively whether mangafodipir trisodium (MnDPDP) can enhance the liver metastases from endocrine tumors. Thirteen patients with endocrine tumors and liver metastases underwent T1-weighted spin-echo (SE) and turbo gradient-echo (GRE) MRI conducted before and 20 to 60 minutes after i.v. infusion of MnDPDP. Additional 24-hour-delay scans were performed in 8 of 13 patients. MR signal intensity (SI) was measured in liver parenchyma and metastases, which was then related to that of paraspinal muscle. A total of 30 lesions on precontrast and postcontrast images and 18 lesions on 24-hour-delay images were measured. An enhancement by 49% in SE and 40% in GRE images (P = .0001) was observed in tumor tissues after MnDPDP infusion. In 24-hour-delay images, the SI of the lesions remained relatively high, but in liver parenchyma, it decreased significantly, and the tumor-liver tissue contrast was reduced. abstract_id: PUBMED:9500260 Malignant hepatic tumors: changes on MRI after hepatic arterial chemoembolization--preliminary findings. This study describes the MR appearances of malignant hypervascular liver lesions pre- and post-hepatic-arterial chemoembolization, with correlation to serial imaging and clinical responses. Eight patients with malignant hypervascular liver lesions underwent pretreatment and posttreatment MR examination on a 1.5-T MR imager. MR sequences included T1-weighted spoiled gradient echo (SGE), T2-weighted fat-suppressed spin echo or turbo spin echo, and dynamic gadolinium-enhanced SGE images. All patients underwent pretreatment, initial posttreatment, and subsequent posttreatment MR studies. The histology of primary tumors included various types of hepatocellular carcinoma (HCC) (four patients: fibrolamellar HCC [one patient], HCC [two patients], mixed HCC/cholangiocarcinoma [one patient]) and liver metastases (four patients: untyped islet cell tumor [two patients], gastrinoma [one patient], carcinoid [one patient]). Response to chemoembolization was determined by three assessments: MR response, serial imaging response, and clinical response. The appearance of MR response to chemoembolization was determined based on the correlation with clinical and serial imaging response. The MR response of lesions that showed good clinical response included: increase in signal intensity on T1-weighted images (three patients), decrease in signal intensity on T2-weighted images (three patients), and negligible or minimal enhancement on immediate postgadolinium images (four patients) after chemoembolization. The most marked change in lesion appearance was observed in lesions &lt; or = 1 cm, which had intense homogeneous enhancement on pretreatment MR studies and negligible enhancement on initial posttreatment MR examinations. MR response of lesions that showed moderate clinical response demonstrated a variety of lesion appearances from substantial change to minimal change. 
MR response of lesions that showed poor clinical response demonstrated no change in lesion appearances compared with the pretreatment MR study. Our results demonstrated change in appearance of liver lesions between pre- and post-hepatic-arterial chemoembolization MR studies. MR response correlated with response determined by serial imaging studies and clinical findings. abstract_id: PUBMED:21134563 Laparoscopic radiofrequency thermal ablation of neuroendocrine hepatic metastases: long-term follow-up. Background: Since our first report 13 years ago, laparoscopic radiofrequency ablation has been incorporated into the treatment algorithm of patients with neuroendocrine liver metastases. The aim of this study is to report long-term oncologic results. Methods: Eighty-nine patients with neuroendocrine hepatic metastases underwent 119 laparoscopic radiofrequency ablation sessions within 13 years. Data were obtained from a prospective, Institutional Review Board approved database. Univariate Kaplan Meier and multivariate Cox proportional hazards model were used for statistical analyses. Data are expressed as mean ± standard error of the mean. Results: Thirty-five women and 54 men with a mean age of 56 ± 1.4 years were included in this study. Tumor types included were carcinoid (n = 55), pancreatic islet cell (n = 23), and medullary thyroid cancer (n = 11). Mean tumor size was 3.6 ± 0.2 and the number of lesions was 6 ± 1. Perioperative morbidity was 6%, and 30-day mortality was 1%. Symptom relief was achieved in 97% of patients after radiofrequency ablation. Median follow-up was 30 ± 3 months. Twenty-two percent of patients developed local liver recurrence, 63% developed new liver lesions, and 59% developed extrahepatic disease in follow-up. Repeat radiofrequency ablation (27%) and chemoembolization (7%) were used to achieve additional local tumor control in follow up. Median disease-free survival was 1.3 years and the overall survival was 6 years after radiofrequency ablation. Liver tumor volume, symptoms, and extrahepatic disease were independent predictors of survival. Conclusion: To our knowledge, this is the largest prospective experience with radiofrequency ablation of neuroendocrine liver metastases. Effective symptom palliation and long-term local tumor control are possible in these patients with minimal morbidity. abstract_id: PUBMED:7957287 Dynamic Gd-DOTA-enhanced MR imaging of hepatic metastases from pancreatic neuroendocrine tumors. The aim of this study was to determine the MR imaging features of hepatic metastases from pancreatic neuroendocrine tumors (HMPNT), and to assess their enhancement characteristics on dynamic gadolinium-chelate-enhanced MR imaging. Twelve consecutive patients with pathologically proven HMPNT underwent spin-echo (SE) and dynamic gradient-recalled echo (GRE) MR imaging before and after intravenous administration of a gadolinium-chelate (gadolinium tetraazacyclododecanetetraacetic acid; Gd-DOTA). MR examinations were performed prospectively and interpreted retrospectively in consensus by two radiologists. Fifty-five HMPNT were identified in matching anatomic sections on the different MR sequences and included in the study. On T1-weighted SE images, 45 HMPNT (82%) were hypointense and 10 HMPNT (18%) were isointense. On T2-weighted SE images 55 HMPNT (100%) were hyperintense. On GRE images obtained 20 s after Gd-DOTA injection, 41 HMPNT (75%) showed slight peripheral enhancement, and 14 HMPNT (25%) showed internal enhancement. 
Forty-four HMPNT (80%) were heterogeneous. On GRE images obtained 4 min after Gd-DOTA injection, 37 HMPNT (67%) showed peripheral enhancement, and 18 HMPNT (33%) showed a global and almost complete enhancement. Heterogeneity of enhancement was seen in all 55 HMPNT (100%). Although HMPNT exhibit a large spectrum of MR features, early enhancement and heterogeneity on dynamic GRE MR images are suggestive features of HMPNT. abstract_id: PUBMED:16504962 Assessment of the extent of metastases of gastrointestinal carcinoid tumors using whole-body PET, CT, MRI, PET/CT and PET/MRI. Objective: To assess the diagnostic value of whole-body positron emission tomography (PET), computed tomography (CT), magnetic resonance imaging (MRI), and the fusion of PET and CT (PET/CT) and PET and MRI (PET/MRI) in the detection of metastatic disease of gastrointestinal carcinoid tumors. Materials And Methods: This prospective study included six patients with extensive nonresectable metastases of gastrointestinal carcinoid tumors which were consecutively examined from the base of the skull to the proximal thigh using a state-of-the-art PET/CT scanner and a 1.5 Tesla whole-body MRI scanner. PET was performed with a carbohydrated F-18-labeled somatostatin-receptor ligand ([(18)F]FP-Gluc-TOCA) using a Pico-3D PET scanner. CT was performed with a venous-dominant contrast-enhanced phase using a 16-slice CT scanner. MRI was performed with a coronal T2-weighted Half-Fourier Acquired Single-Shot Turbo Spin Echo (HASTE) sequence, a coronal T2-weighted Turbo-Short Tau Inversion-Recovery (STIR) sequence, a coronal T1-weighted Turbo Spin Echo (TSE) sequence and a high resolution axial T2-weighted TSE sequence. The data sets from PET and CT were fused automatically. The PET and MRI data sets were fused manually. Lesions were rated as metastases if they were not clearly identified as benign lesions according to standard radiological criteria. Results: For PET, CT, MRI, PET/CT, and PET/MRI, the lesion-by-lesion based analysis showed an overall detection rate for liver metastases (n = 391) of 49.9% (P<.001), 37.1% (P<.001), 98.2%, 50.9% (P<.001) and 100%, for lymph node metastases (n = 37) of 91.9%, 83.8%, 64.9%, 100% and 97.3% and for osseous metastases (n = 12) of 100%, 8.3% (P<.005), 66.7%, 100% and 100%. Conclusions: PET as single modality revealed the most lymph node and osseous metastases. MRI as single modality revealed the most liver metastases. The combination of molecular/metabolic with anatomical/morphological information improves the diagnostic accuracy for the detection of metastases in comparison to the single modalities. Whole-body PET/MRI is a very promising diagnostic modality for oncological imaging due to the lack of radiation exposure and the high soft tissue resolution of MRI in contrast to CT. abstract_id: PUBMED:12383582 Metastatic carcinoid tumor to the heart: echocardiographic-pathologic study of 11 patients. Objective: We sought to investigate the clinical and echocardiographic (echo) characteristics of metastatic carcinoid tumor in the heart. Background: Right-sided valvular dysfunction is the hallmark of carcinoid heart disease. Cardiac metastases are uncommon in carcinoid syndrome. Features of patients with metastatic carcinoid tumor involving the heart (MCH) have not been well described. Methods: From 1985 through 1999, 11 patients (8 male, 3 female), mean age +/- standard deviation, 58 +/- 6 years, were seen who had pathologically confirmed MCH.
All patients had echoes, which were reviewed retrospectively. Results: All patients with MCH had carcinoid syndrome. The primary carcinoid tumor was in the small bowel in 83% of patients, and all patients had hepatic metastases. On pathologic review, the 11 patients had 15 MCH tumors. All metastases were intramyocardial. The MCH involved the right ventricle in 40%, left ventricle in 53%, and ventricular septum in 7%. The average size of macroscopic tumors was 1.8 +/- 1.2 cm. Nine MCH tumors were detected by echo in 6 of the 11 patients (55%). Mean echo-detected tumor size was 2.4 cm (range, 1.2 to 4). All tumors noted by echo were well circumscribed, non-infiltrating, and homogeneous. In the 5 other patients, review of autopsy records revealed 6 macroscopic tumors, mean size 0.35 cm (range, 0.2 to 0.4), none detected by echo even retrospectively. Carcinoid valve disease was present in 8 of the 11 MCH patients. The tricuspid valve was affected in all 8 patients (73%), pulmonary valve in 7 (64%), and left-sided valves in 4 (36%). All patients with MCH identified by echo had cardiac surgery, 3 primarily for carcinoid valve disease and 2 for non-carcinoid cardiac disease; in 1 patient, MCH was the primary indication for cardiac surgery. Conclusions: MCH is uncommon but can be easily identified by echo if tumor size is ≥1.0 cm. In patients without valvular dysfunction, MCH may be the only manifestation of carcinoid heart disease. A search for MCH should be an integral part of the echo exam in patients with carcinoid syndrome. abstract_id: PUBMED:17293303 Imaging of liver metastases: MRI. Metastases are the most common malignant liver lesions and the most common indication for hepatic imaging. Specific characterization of liver metastases in patients with primary non-hepatic tumors is crucial to avoid unnecessary diagnostic work-up for incidental benign liver lesions. Magnetic resonance (MR) is rapidly emerging as the imaging modality of choice for detection and characterization of liver lesions due to the high specificity resulting from optimal lesion-to-liver contrast and no radiation exposure. Improvements in breath-hold T1-weighted fast spoiled gradient echo and rapid T2-weighted single shot echo-train acquisition enable imaging of the liver in a single breath-hold with high spatial resolution. Most metastases are hypo- to isointense on T1 and iso- to hyperintense on T2-weighted images. MR contrast agents provide critical tumor characterization and can be safely used in patients with iodine contrast allergy and renal failure. Other agents, including newly developing gadolinium-chelates or iron oxide agents may provide additional benefits in selected applications. The degree and nature of tumor vascularity form the basis for liver lesion characterization based on enhancement properties. Liver metastases may be hypovascular or hypervascular. Colon, lung, breast and gastric carcinomas are the most common tumors causing hypovascular liver metastases, and typically show perilesional enhancement. Neuroendocrine tumors including carcinoid and islet cell tumors, renal cell carcinoma, breast, melanoma, and thyroid carcinoma are tumors most commonly causing hypervascular hepatic metastases, which may develop early enhancement with variable degrees of washout and peripheral rim enhancement. Answer: The study described in abstract PUBMED:21474918 aimed to compare the reliability of different MRI sequences in the detection and reproducible size assessment of known carcinoid hepatic metastases.
The study involved 22 patients and utilized T(1)-weighted, T(2)-weighted, and dynamic contrast-enhanced MRI sequences. Three readers reviewed the MRI images independently at two sessions that were more than two weeks apart. The sequences were compared subjectively for metastases conspicuity, number and size, and reproducibility of size measurements. The results indicated that T(2) fast spin-echo (FSE) was rated as the best overall sequence. The average number of metastases detected was equivalent using T(1)-weighted arterial and T(2) FSE, but less for T(2) fast-recovery fast spin-echo (FRFSE) or delayed imaging. A total of 1,067 lesions were detected, and 66 were measured twice by three readers. There was no significant difference in size measurement between the sequences or between the readings when the same sequence was used. However, there was a significant difference among sequences for the size of metastases (p < 0.001). The conclusion of the study was that T(2) FSE could be used as a basic sequence in detecting and monitoring the size of carcinoid hepatic metastases. It may serve as the primary sequence in patients with contrast allergy or at risk for nephrogenic systemic fibrosis. Therefore, based on this study, respiratory-gated T(2) fast spin-echo appears to be sufficient for the follow-up of known carcinoid liver metastases, particularly when considering the size and detection of these lesions.
Instruction: Does an extended stroke unit service with early supported discharge have any effect on balance or walking speed? Abstracts: abstract_id: PUBMED:17067970 Does an extended stroke unit service with early supported discharge have any effect on balance or walking speed? Objective: To evaluate the effect of an extended stroke unit service with early supported discharge on balance and walking speed, and to explore the association between initial leg paresis, initial movement ability and balance one year after stroke. Design: A randomized controlled trial comparing early supported discharge with ordinary stroke unit service. Patients: A total of 62 eligible patients after stroke. Methods: The outcome measures were Berg Balance Scale and walking speed at 1, 6, 26 and 52 weeks after stroke. Results: We found no significant differences between the 2 groups during follow-up. There was a significant improvement on Berg Balance Scale (p=0.013) and walking speed (p=0.022) in the early supported discharge group, but not in the ordinary service group, from 1 to 6 weeks' follow-up. All patients with initial severe leg paresis suffered from poor balance one year after the stroke. The odds ratio for poor balance was 42.1 (95% confidence interval, 3.5-513.9) among patients with no initial walking ability. Conclusion: These results do not conclusively indicate that early supported discharge has an effect on balance. A strong association was found between initial severe leg paresis, initial inability to walk and poor balance after one year. abstract_id: PUBMED:24833680 Balance and walking after three different models of stroke rehabilitation: early supported discharge in a day unit or at home, and traditional treatment (control). Objective: To compare the effects on balance and walking of three models of stroke rehabilitation: early supported discharge with rehabilitation in a day unit or at home, and traditional uncoordinated treatment (control). Design: Group comparison study within a randomised controlled trial. Setting: Hospital stroke unit and primary healthcare. Participants: Inclusion Criteria: a score of 2-26 on National Institutes of Health Stroke Scale, assessed with Postural Assessment Scale for Stroke (PASS), and discharge directly home from the hospital stroke unit. Interventions: Two intervention groups were given early supported discharge with treatment in either a day unit or the patient's own home. The controls were offered traditional, uncoordinated treatment. Outcome Measures: Primary: PASS. Secondary: Trunk Impairment Scale-modified Norwegian version; timed Up-and-Go; 5 m timed walk; self-reports on problems with walking, balance, ADL, physical activity, pain and tiredness. The patients were tested before randomisation and 3 months after inclusion. Results: From a total of 306 randomised patients, 167 were tested with PASS at baseline and discharged directly home. 105 were retested at 3 months: mean age 69 years, 63 men, 27 patients in day unit rehabilitation, 43 in home rehabilitation and 35 in a control group. There were no group differences, either at baseline for demographic and test data or for length of stroke unit stay. At 3 months, there was no group difference in change on PASS (p>0.05). Some secondary measures tended to show better outcome for the intervention groups, that is, trunk control, median (95% CI): day unit, 2 (0.28 to 2.31); home rehabilitation, 4 (1.80 to 3.78); control, 1 (0.56 to 2.53), p=0.044; and for self-report on walking, p=0.021 and ADL, p=0.016.
Conclusions: There was no difference in change between the groups for postural balance, but the secondary outcomes indicated that improvement of trunk control and walking was better in the intervention groups than in the control group. Trial Registration: This study is part of the Early Supported Discharge after Stroke in Bergen, ClinicalTrials.gov (NCT00771771). abstract_id: PUBMED:27190448 Effects of Kinesio taping and Mcconnell taping on balance and walking speed of hemiplegia patients. [Purpose] The aim of this study was to evaluate the overlap effect of the PNF following the application of Kinesio taping and the McConnell taping, and also the impact of the taping application method on the balance and walking speed of the patients with stroke. [Subjects and Methods] Thirty-six patients who were diagnosed with hemiplegia due to stroke were selected as subjects of this study. They were randomly and evenly divided into experiment group 1 (Kinesio taping group), experiment group 2 (McConnell taping group), and the control group; each group had 12 patients. [Results] The Berg balance scale (BBS) was used to evaluate balance, and the ability in this study. A 10 m walking test (10MWT) was performed to measure the walking speed. Experiment group 1 showed a statistically significant improvement in balance and walking speed compared to experiment group 2, and the control group in week 4 and week 8. [Conclusion] Application of Kinesio taping had a more beneficial effect on the balance and walking speed than joint-fixation taping of the patients with stroke. abstract_id: PUBMED:38058183 Healthcare professionals' experiences of delivering a stroke Early Supported Discharge service - An example from Ireland. Objective: To explore healthcare professionals' experiences of the development and delivery of Early Supported Discharge for people after stroke, including experiences of the COVID-19 pandemic. Design: Qualitative descriptive study using one-to-one semi-structured interviews. Data were analysed using reflexive thematic analysis. Setting: Nine Early Supported Discharge service sites in Ireland. Participants: Purposive sampling identified 16 healthcare professionals. Results: Five key themes were identified (1) Un-coordinated development of services, (2) Staff shortages limit the potential of Early Supported Discharge, (3) Limited utilisation of telerehabilitation post COVID-19 pandemic, (4) Families need information and support, and (5) Early Supported Discharge involves collaboration with people after stroke and their families. Conclusions: Findings highlight how Early Supported Discharge services adapted during the COVID-19 pandemic and how gaps in the service impacts on service delivery. Practice implications include the need to address staff recruitment and retention issues to prevent service shortages and ensure consistent access to psychology services. Early Supported Discharge services should continue to work closely with families and address their information and support needs. Future research on how telerehabilitation can optimally be deployed and the impact of therapy assistants in Early Supported Discharge is needed. abstract_id: PUBMED:23455948 A qualitative study exploring patients' and carers' experiences of Early Supported Discharge services after stroke. Objective: To investigate patients' and carers' experiences of Early Supported Discharge services and inform future Early Supported Discharge service development and provision. 
Design And Subjects: Semi-structured interviews were completed with 27 stroke patients and 15 carers in the Nottinghamshire region who met evidence-based Early Supported Discharge service eligibility criteria. Participants were either receiving Early Supported Discharge or conventional services. Setting: Community stroke services in Nottinghamshire, UK. Results: A thematic analysis process was applied to identify similarities and differences across datasets. Themes specific to participants receiving Early Supported Discharge services were: the home-based form of rehabilitation; speed of response; intensity and duration of therapy; respite time for the carer; rehabilitation exercises and provision of technical equipment; disjointed transition between Early Supported Discharge and ongoing rehabilitation services. Participants receiving Early Supported Discharge or conventional community services experienced difficulties related to: limited support in dealing with carer strain; lack of education and training of carers; inadequate provision and delivery of stroke-related information; disjointed transition between Early Supported Discharge and ongoing rehabilitation services. Conclusions: Accelerated hospital discharge and home-based rehabilitation were perceived positively by service users. The study findings highlight the need for Early Supported Discharge teams to address information and support needs of patients and carers and to monitor their impact on carers in addition to patients, using robust outcome measures. abstract_id: PUBMED:15137554 Evaluation of an extended stroke unit service with early supported discharge for patients living in a rural community. A randomized controlled trial. Objective: To evaluate the effect of an extended stroke unit service (extended service), with early supported discharge and co-ordination of further rehabilitation in co-operation with the primary health care system in three rural municipalities. Design: A randomized controlled trial comparing extended service with ordinary stroke unit service (ordinary service). Subjects: Sixty-two eligible patients with acute stroke living in the rural municipalities of Malvik, Melhus and Klaebu. Main Measures: The primary outcome was the proportion of patients who were independent according to Modified Rankin Scale (mRS) (independence = mRS ≤2) 52 weeks after onset of stroke. Secondary outcomes were mRS at 6 and 26 weeks and Barthel Index (BI), Nottingham Health Profile (NHP) and Caregiver Strain Index (CSI) at 6, 26 and 52 weeks. Mortality and length of stay were registered during the 52 weeks. Results: Twelve patients (39%) in the extended service group versus 16 patients (52%) in the ordinary service group were independent according to mRS at 52 weeks (p = 0.444). The odds ratio for independence (extended service versus ordinary service) was 0.33 (95% confidence interval (CI) 0.088-1.234). According to outcome by secondary measures there were no significant differences except less social isolation on NHP in the extended service group at 26 weeks (p = 0.046). There were no significant differences in length of stay. Conclusion: An extended stroke unit service with early supported discharge seems to have no positive effect on functional outcome for patients living in rural communities, but might give a trend toward better quality of life. There were no significant differences in length of stay. abstract_id: PUBMED:34618617 Evaluating stroke early supported discharge using cost-consequence analysis.
Purpose: To evaluate different stroke early supported discharge (ESD) services in different geographical settings using cost-consequence analysis (CCA), which presents information about costs and outcomes in the form of a balance sheet. ESD is a multidisciplinary service intervention that facilitates discharge from hospital and includes delivery of stroke specialist rehabilitation at home. Materials And Methods: Data were collected from six purposively sampled services across the Midlands, East and North of England. All services, rural and urban, provided stroke rehabilitation to patients in their own homes. Cost data included direct and overhead costs of service provision and staff travel. Consequence data included service level adherence to an expert consensus regarding the specification of ESD service provision. Results: We observed that the most rural services had the highest service cost per patient. The main costs associated with running each ESD service were staff costs. In terms of the consequences, there was a positive association between service costs per patient and greater adherence to meeting the evidence-based ESD service specification agreed by an expert panel. Conclusions: This study found that rural services were associated with higher costs per patient, which in turn were associated with greater adherence to the expert consensus regarding ESD service specification. We suggest additional resources and costs are required in order for rural services to meet evidence-based criteria. Implications for rehabilitation: The main costs of an early supported discharge (ESD) service for stroke survivors were staff costs and these were positively associated with greater levels of rurality. Greater costs were associated with greater adherence to ESD core components, which has been previously found to enhance the effectiveness of ESD service provision. The cost-consequence analysis provides a descriptive summary for decision-makers about the costs of delivering ESD, suggesting additional resources and costs are required in order for rural services to meet evidence-based criteria. abstract_id: PUBMED:11108761 Benefit of an extended stroke unit service with early supported discharge: A randomized, controlled trial. Background And Purpose: Several trials have shown that stroke unit care improves outcome for stroke patients. The aim of the present trial was to evaluate the effects of an extended stroke unit service (ESUS), with early supported discharge, cooperation with the primary healthcare system, and more emphasis on rehabilitation at home as essential elements. Methods: In a randomized, controlled trial, 160 patients with acute stroke were allocated to the ESUS and 160 to the ordinary stroke unit service (OSUS). The primary outcome was the proportion of patients who were independent as assessed by the modified Rankin Scale (RS) (RS ≤2 = global independence) and independent in activities of daily living (ADL) as assessed by Barthel Index (BI) (BI ≥95 = independent in ADL) after 26 weeks. Secondary outcomes were RS and BI scores after 6 weeks; the proportion of patients at home, in institutions, and deceased after 6 and 26 weeks; and the length of stay in institutions. Results: After 26 weeks, 65.0% in the ESUS versus 51.9% in the OSUS group showed global independence (RS ≤2) (P=0.017), while 60.0% in the ESUS versus 49.4% in the OSUS group were independent in ADL (BI ≥95) (P=0.056).
The odds ratios for independence (ESUS versus OSUS) were as follows: RS, 1.72 (95% CI, 1.10 to 2.70); BI, 1.54 (95% CI, 0.99 to 2.39). At 6 weeks, 54.4% of the ESUS group and 45.6% of the OSUS group were independent according to RS (P=0.118), and 56.3% versus 48.8% were independent according to BI (P=0.179). The proportion of patients at home after 6 weeks was 74.4% for ESUS and 55.6% for OSUS (P=0.0004), and the proportion in institutions was 23.1% versus 40.0%, respectively (P=0.001). After 26 weeks, 78.8% in the ESUS group versus 73.1% in the OSUS were at home (P=0.239), while 13.1% versus 17.5% were in institutions (P=0.277). The mortality in the 2 groups did not differ. Average lengths of stay in an institution were 18.6 days in the ESUS and 31.1 days in the OSUS group (P=0.0324). Conclusions: An ESUS with early supported discharge seems to improve functional outcome and to reduce the length of stay in institutions compared with traditional stroke unit care. abstract_id: PUBMED:26972087 Balance impairment limits ability to increase walking speed in individuals with chronic stroke. Purpose Determine the relationship between balance impairments and the ability to increase walking speed (WS) on demand in individuals with chronic stroke. Methods WS and Berg Balance Scale (BBS) data were collected on 124 individuals with chronic stroke (>6 months). The ability to increase WS on demand (walking speed reserve, WSR) was quantified as the difference between participants' self-selected (SSWS) and maximal (MWS) walking speeds. Correlation, regression and receiver operating characteristic (ROC) analyses were performed to investigate the relationship between balance and the ability to increase WS. Results Of the sample, 58.9% were unable to increase WS on demand (WSR < 0.2 m/s). BBS scores were associated with WSR values (rs=0.74, 0.65-0.81) and were predictive of 'able/unable' to increase WS [odds ratio (OR) = 0.75, 0.67-0.84]. The AUC for the ROC curve constructed to assess the accuracy of BBS to discriminate between able/unable to increase WS was 0.85 (0.78-0.92). A BBS cutscore of 47 points was identified [sensitivity: 72.6%, specificity: 90.2%, +likelihood ratio (LR): 7.41, -LR: 0.30]. Conclusions The inability to increase WS on demand is common in individuals with chronic stroke, and balance appears to be a significant contributor to this difficulty. A BBS cutscore of 47 points can identify individuals who may benefit from balance interventions to improve the ability to increase their WS. Implications for Rehabilitation A majority of individuals with chronic stroke may be unable to increase their walking speed beyond their self-selected speed on demand. This may limit functional ambulation, as these individuals are walking "at capacity". Balance impairments contribute to the inability to increase walking speed. A Berg Balance Scale score <47 points can be used to identify individuals with chronic stroke walking "at capacity" due to balance impairments. abstract_id: PUBMED:28365632 Stroke unit care, inpatient rehabilitation and early supported discharge. Stroke units reduce death and disability through the provision of specialist multidisciplinary care for diagnosis, emergency treatments, normalisation of homeostasis, prevention of complications, rehabilitation and secondary prevention. All stroke patients can benefit from provision of high-quality basic medical care and some need high impact specific treatments, such as thrombolysis, that are often time dependent.
A standard patient pathway should include assessment of neurological impairment, vascular risk factors, swallowing, fluid balance and nutrition, cognitive function, communication, mood disorders, continence, activities of daily living and rehabilitation goals. Good communication and shared decision making with patients and their families are key to high-quality stroke care. Patients with mild or moderate disability, who are medically stable, can continue rehabilitation at home with early supported discharge teams rather than needing a prolonged stay in hospital. National clinical guidelines and prospective audits are integral to monitoring and developing stroke services in the UK. Answer: The effect of an extended stroke unit service with early supported discharge on balance and walking speed has been evaluated in several studies, but the results are not conclusive. One randomized controlled trial found no significant differences in balance or walking speed between the early supported discharge group and the ordinary stroke unit service group during follow-up. However, there was a significant improvement in the early supported discharge group from 1 to 6 weeks' follow-up for both balance and walking speed, but not in the ordinary service group. The study also found a strong association between initial severe leg paresis, initial inability to walk, and poor balance after one year (PUBMED:17067970). Another study comparing three models of stroke rehabilitation, including early supported discharge with rehabilitation in a day unit or at home, and traditional uncoordinated treatment, found no difference in change between the groups for postural balance. However, secondary outcomes indicated that improvement of trunk control and walking was better in the intervention groups than in the control group (PUBMED:24833680). A study on the effects of Kinesio taping and McConnell taping on balance and walking speed of hemiplegia patients showed that Kinesio taping had a more beneficial effect on balance and walking speed than joint-fixation taping (PUBMED:27190448). In summary, while some studies suggest that early supported discharge may lead to improvements in balance and walking speed, particularly in the short term, the evidence is not definitive. The benefits may depend on the specific characteristics of the patient population, the nature of the intervention, and the timing and duration of follow-up assessments.
Instruction: Can preterm twins breast feed successfully? Abstracts: abstract_id: PUBMED:9216605 Can preterm twins breast feed successfully? Aim: To compare the success of singleton and twin preterm infants in establishing and maintaining breast feeding, and to evaluate the effectiveness of current programmes to promote breast feeding. Methods: All infants less than 37 weeks gestation discharged in one month from the special care baby unit at National Womens Hospital were studied. Data on the infants and their in hospital course was recorded from the neonatal records. The mothers were contacted by telephone 3 to 4 months after discharge, to elicit the subsequent breast feeding rates. Results: Thirty of 33 preterm infants (29 to 36 weeks gestation) were breast fed at discharge from hospital: 93% of singletons, and 89% of twins. The twins were older and heavier at discharge (p < 0.004) due to their longer hospital stays (28.4 vs 16.3 days, p < 0.05). All but 2 infants progressed to exclusive breast feeding. There was a similar rate of decline in the rates of breast feeding in singletons and twins to 68% at 8-12 weeks and 49% at 12-16 weeks after birth. Conclusions: Preterm twins can breast feed as successfully as preterm singleton infants; with sufficient assistance and encouragement, their rates of breast feeding were comparable to those of term infants. Although the resources of this hospital do not allow preterm infants to become fully breast fed before discharge, the current programme at National Womens Hospital is effective in establishing successful breast feeding in these high risk infants. abstract_id: PUBMED:37151109 Prospective longitudinal comparative study showed that breastfeeding outcomes were comparable in preterm twins and singleton infants. Aim: We compared milk volumes, skin-to-skin contact and breastfeeding by the mothers of very preterm twins and singleton infants born at 28-32 weeks of gestation. Methods: This Norwegian longitudinal prospective comparative study was carried out in two neonatal intensive care units: one with single family rooms and one open bay unit. It comprised 49 singleton infants, 28 twins and their mothers. The mothers' milk volume and direct breastfeeding were recorded from birth until 4 months of corrected age. They also answered the breastfeeding self-efficacy scale and skin-to-skin contact was recorded. Results: The mothers of preterm twins produced double the volume of expressed milk at day 14, compared to the mothers of singletons (mean 816 ± 430 mL vs. 482 ± 372 mL, p < 0.05) and this difference was still sustained at 34 + 0 weeks/days (p < 0.02). Mothers of twins had their first breastfeeding attempt later than mothers of singletons (median of 133 h compared to 56 h; p < 0.002). Preterm twins received less daily skin-to-skin contact (mean 157 ± 66 min each vs. 244 ± 109) (p < 0.001). There were no differences in receiving mother's own milk, exclusively direct breastfeeding or perceived breastfeeding self-efficacy. Conclusion: Breastfeeding was initiated as successfully in preterm twins as in singletons, as the mothers' milk production doubled. abstract_id: PUBMED:32229675 The contribution of twins conceived by in vitro fertilization to preterm birth rate: observations from a quarter of century. Objective Little information exists related to the contribution of assisted reproductive technology (ART) twins to the preterm and very preterm birth rate.
We sought to examine this contribution over a period of more than two decades in a tertiary perinatal center. Methods We identified all preterm births from 1993 to 2017, born at <37 or <32 weeks' gestation, by mode of conception [in vitro fertilization (IVF) vs. non-IVF pregnancies]. We generated trend lines of the annual change of the dependent variable (% preterm birth). Results We evaluated 74,299 births, including 3934 (5.3%) preterm births at <37 and 826 (1.1%) at <32 weeks' gestation. In this period, 1019 (1.4%) twin pairs were born including 475 (46.6%) and 80 (7.8%) at <37 and <32 weeks, respectively. There were 213 (5.4%) IVF pregnancies among the preterm births at <37 weeks, including 88 (41.3%) twins. Fifteen (1.8%) births of all IVF gestations were at <32 weeks, and all were twins. Whereas the annual rate of spontaneous twins did not change, a significant increase over time exists for IVF twins (P < 0.05, R2 = 0.6). We demonstrated an increase in IVF twin births at <37 weeks but not for spontaneously conceived twins. Whereas the twin birth rate at <32 weeks did not change over time, all preterm births at <32 weeks following IVF were twins. Conclusions The risk of twins after ART increasingly contributes to preterm births at <37 weeks and ART twins are at significant risk for preterm births at <32 weeks. abstract_id: PUBMED:12219324 Breast-feeding in preterm twins: Development of feeding behavior and milk intake during hospital stay and related caregiving practices. In a prospective study of 13 preterm twins still in the hospital, 85% were breast-fed, of which 46% were breast-fed exclusively. Most mothers preferred simultaneous breast-feeding, using the football hold. Observations and maternal descriptions showed differences between the twins in their development of breast-feeding behavior, especially in sucking. The mothers' suggestions regarding special support for the breast-feeding mothers of preterm twins involved synchronizing feeding with the twins' behavioral states; twin cobedding; appropriate armchairs and breast-feeding pillows; experimenting with breast-feeding positions; information about breast milk production; nurses' spontaneous practical assistance, encouragement, and emotional support; the provision of privacy; the availability of parent rooms; and opportunities for fathers' presence in the hospital. abstract_id: PUBMED:36076279 Breastfeeding initiation, duration, and experiences of mothers of late preterm twins: a mixed-methods study. Background: Twins and late preterm (LPT) infants are at an increased risk of being breastfed to a lesser extent than term singletons. This study aimed to describe the initiation and duration of any and exclusive breastfeeding at the breast for mothers of LPT twins and term twins during the first 4 months and to explore the breastfeeding experiences of mothers of LPT twins. Methods: A sequential two-sample quantitative-qualitative explanatory mixed-methods design was used. The quantitative data were derived from a longitudinal cohort study in which 22 mothers of LPT twins and 41 mothers of term twins answered questionnaires at one and four months after birth (2015-2017). The qualitative data were obtained from semi-structured interviews with 14 mothers of LPT twins (2020-2021), based on results from the quantitative study and literature.
Analysis included descriptive statistics of quantitative data and deductive content analysis of the qualitative data, followed by condensation and synthesis. Results: All mothers of LPT twins (100%) and most mothers of term twins (96%) initiated breastfeeding. There was no difference in any breastfeeding during the first week at home (98% versus 95%) and at 1 month (88% versus 85%). However, at 4 months, the difference was significant (44% versus 75%). The qualitative data highlighted that mothers of LPT twins experienced breastfeeding as complex and strenuous. Key factors influencing mothers' experiences and decisions were their infants' immature breastfeeding behaviors requiring them to express breast milk alongside breastfeeding, the burden of following task-oriented feeding regimes, and the lack of guidance from healthcare professionals. As a result, mothers started to question the worth of their breastfeeding efforts, leading to changes in breastfeeding management with diverse results. Support from fathers and grandparents positively influenced sustained breastfeeding. Conclusions: Mothers of LPT twins want to breastfeed, but they face many challenges in breastfeeding during the first month, leading to more LPT twins' mothers than term twins' mothers ceasing breastfeeding during the following months. To promote and safeguard breastfeeding in this vulnerable group, care must be differentiated from routine term infant services, and healthcare professionals need to receive proper education and training. abstract_id: PUBMED:26000625 A clinical opinion on how to manage the risk of preterm birth in twins based on literature review. Twin pregnancies are prone to preterm birth and consequent morbidity. There is an increasing evidence base concerning the prediction and prevention of preterm birth in singletons, including the reduction of morbidity with therapies such as magnesium sulphate and antenatal corticosteroids. However, the research in twins is less clear, partly due to fewer numbers being investigated, but also evidence is largely based on twins without a previous history. Prophylactic interventions such as cerclage, progesterone and vaginal pessaries are increasingly showing benefit in singleton pregnancies with a prior history and when the cervix is short. Cerclage in twins has not been adequately researched in women with previous preterm birth, and as with singletons should not be used on the basis of a short cervix alone. Vaginal progesterone does not work in twins, but its value in high-risk twins, with a prior history and short cervix is uncertain. The vaginal pessary may be valuable in the twin with a short cervix. Currently, it is reasonable to extrapolate some of the evidence from singletons to twins, e.g. with antenatal corticosteroids and magnesium sulphate. Cerclage, vaginal pessaries and progesterone should not be routinely used in twin pregnancies without an additional high-risk factor such as prior history of preterm birth or short cervix, until further evidence is obtained. abstract_id: PUBMED:24360590 Prediction of preterm birth in twins. About 13% of twins are born before 34 weeks and 7% before 32 weeks. The prediction of preterm birth in twins is based on the same tests as in singleton pregnancies. 
In twin pregnancies, the cut-off for short cervix at the second trimester scan is less than 25 mm (compared with 15 mm in singletons); length less than 20 mm is associated with 42% risk for birth before 32 weeks and cervical length less than 25 mm is associated with 28% risk for birth before 28 weeks. The measurement of cervical length in pregnancies with symptoms of preterm labour may have limited accuracy in predicting preterm birth. In asymptomatic women, a positive fetal fibronectin test seems to be associated with 35% risk for birth before 32 weeks and 40% risk for birth less than 34 weeks, whereas a negative test decreases the risk to 6% and 17%, respectively. The differences in the predictive value of tests between twins and singletons reflect the diverse pathophysiology of preterm birth between the two groups. abstract_id: PUBMED:8551016 Bacterial contaminated breast milk and necrotizing enterocolitis in preterm twins. A pair of preterm twins developed fatal necrotizing enterocolitis (NEC) in association with Staphylococcus epidermidis septicaemia after receiving contaminated expressed breast milk (EBM). S. epidermidis NEC can be associated with severe bowel inflammation, high morbidity and mortality. Breast milk is the most suitable nutrient for preterm infants but EBM should undergo regular screening for bacterial overgrowth. We urge caution before administering EBM found to be heavily contaminated with S. epidermidis to preterm infants. abstract_id: PUBMED:29983834 Epigenetic signature of preterm birth in adult twins. Background: Preterm birth is a leading cause of perinatal mortality and long-term health consequences. Epigenetic mechanisms may have been at play in preterm birth survivors, and these could be persistent and detrimental to health later in life. Methods: We performed a genome-wide DNA methylation profiling in adult twins of premature birth to identify genomic regions under differential epigenetic regulation in 144 twins with a median age of 33 years (age range 30-36). Results: Association analysis detected three genomic regions annotated to the SDHAP3, TAGLN3 and GSTT1 genes on chromosomes 5, 3 and 22 (FWER: 0.01, 0.02 and 0.04) respectively. These genes display strong involvement in neurodevelopmental disorders, cancer susceptibility and premature delivery. The three identified significant regions were successfully replicated in an independent sample of twins of even older age (median age 66, range 56-80) with similar regulatory patterns and nominal p values < 5.05e-04. Biological pathway analysis detected five significantly enriched pathways all explicitly involved in immune responses. Conclusion: We have found novel evidence associating premature delivery with epigenetic modification of important genes/pathways and revealed that preterm birth, as an early life event, could be related to differential methylation regulation patterns observable in adults and even at high ages which could potentially mediate susceptibility to age-related diseases and adult health. abstract_id: PUBMED:34716772 Ductus Arteriosus of Extremely Preterm Twins is More Resistant to Cyclooxygenase Inhibitors Than Those of Singletons. Symptomatic patent ductus arteriosus (sPDA) is common among preterm infants, and can lead to several complications. This is particularly true for extremely preterm infants, as closure of the ductus arteriosus using cyclooxygenase inhibitors is often difficult.
A recent study using a preterm sheep model showed that intimal thickening, which is required for anatomical closure of the ductus arteriosus, is less developed in twins than in singletons. Therefore, this study primarily aimed to prove that the ductus arteriosus of extremely preterm twins is more resistant to cyclooxygenase inhibitors than those of extremely preterm singletons. Its secondary aim was to assess whether the resistance against cyclooxygenase inhibitors differed according to chorionicity. In this retrospective case-control study, medical records of 162 extremely preterm infants (gestational age < 28 weeks) were reviewed, and the treatment course of sPDA was subsequently compared between singletons (n = 131) and twins (n = 31). The median indomethacin doses for sPDA and the necessity for surgical ligation were significantly higher in twins than in singletons (5 vs 2 [p < 0.001] and 42% vs 21% [p = 0.018], respectively). No significant differences in sPDA treatment, including the number of indomethacin doses and the necessity for surgical ligation, were observed between monochorionic diamniotic and dichorionic diamniotic twins. This study confirms that the ductus arteriosus of extremely preterm twins is more resistant to cyclooxygenase inhibitors than those of singletons. However, there was no significant difference in sPDA treatment by chorionicity. Answer: Yes, preterm twins can breastfeed successfully. Studies have shown that with sufficient assistance and encouragement, preterm twins can establish and maintain breastfeeding rates comparable to those of term infants. A study from the National Women's Hospital indicated that 93% of singletons and 89% of twins, all preterm, were breastfed at discharge, and most of these infants progressed to exclusive breastfeeding. The decline in breastfeeding rates over time was similar for both singletons and twins (PUBMED:9216605).
Instruction: A mysterious case of gastroparesis: could the secret be found in a drink? Abstracts: abstract_id: PUBMED:23547733 A mysterious case of gastroparesis: could the secret be found in a drink? Background: Gastroparesis is a disorder characterized by delayed gastric emptying of a meal in the absence of a mechanical gastric outlet obstruction. Idiopathic gastroparesis is at least as common as diabetic gastroparesis in most case series, and the true prevalence of gastroparesis is unknown. Results: We report here an interesting case of idiopathic gastroparesis characterized by sudden onset in a female patient. The diagnosis was confirmed by ultrasonographic study of gastric emptying and electrogastrography, by gastric endoscopy/histology, and finally by allergy tests. The disorder was found to be due to a rare cause, namely an allergic predisposition. In fact, our patient, who demonstrated an allergy to gold salts, had drunk a glass of a liqueur containing gold flakes and developed an eosinophilic aggregation in the gastric mucosa observed at gastric endoscopy/histology. The symptoms disappeared after steroid administration. Conclusion: Our experience suggests that gastric histology and close enquiry into any history of allergy may be useful diagnostic tools in cases of idiopathic gastroparesis. abstract_id: PUBMED:34698151 Regurgitation under the ERAS Program: A Case Report. Introduction: Enhanced recovery after surgery (ERAS) is an evidence-based concept that reduces the recovery period after major abdominal surgery. Ingestion of carbohydrate solutions up until two hours before elective surgery has shown positive results. The authors present a case of regurgitation in a patient with apparently low risk for delayed gastric emptying who drank a carbohydrate solution two hours before induction of anaesthesia. Case Report: An 80-year-old male patient with a relevant history of ischemic heart disease, atrial fibrillation, stage 3 chronic kidney disease and hypertension, was diagnosed with rectal cancer. He was scheduled for an anterior rectal resection hand-assisted laparoscopic surgery under the ERAS program, which included a 200 mL carbohydrate drink the night before and in the morning of the surgery, no less than two hours before the induction of anaesthesia. Immediately after loss of consciousness, there was regurgitation of a significant amount of clear fluid. Discussion: Even though ingestion of oral carbohydrate drinks is considered to be safe up to two hours before anaesthesia, further evaluation (e.g., gastric ultrasonography) may be considered in non-high-risk patients. abstract_id: PUBMED:35006328 Preoperative Carbohydrate Drink Intake Increases Glycemic Variability in Patients with Type 2 Diabetes Mellitus in Total Joint Arthroplasty: A Prospective Randomized Trial. Background: Preoperative carbohydrate treatment attenuates insulin resistance and improves metabolism to an anabolic state. Despite these benefits, impaired glycemic control and aspiration risk related to gastroparesis represent concerns for patients with diabetes undergoing surgery. This randomized controlled trial investigated the effects of oral carbohydrate therapy on perioperative glucose variability, metabolic responses, and gastric volume in diabetic patients undergoing elective total hip or knee arthroplasty. Methods: Fifty diabetic patients scheduled to undergo elective total knee or hip arthroplasty during August 2019-October 2020 were randomly assigned to a control or carbohydrate therapy (CHO) group. 
Patients in the CHO group received a 400-mL carbohydrate drink 2-3 h before anesthesia; patients in the control group underwent overnight fasting from midnight, one night before surgery. Blood glucose levels were measured before intake of the carbohydrate drink, before spinal anesthesia, preoperatively, immediately postoperatively, and 1 h postoperatively. Insulin level and gastric volume were measured before spinal anesthesia. Results: The glucose variability of patients in the CHO group was significantly higher than that of those in the control group (16.5 vs. 10.1%, P = 0.008). Similarly, insulin resistance was higher in the CHO group than in the control group (8.5 vs. 2.7, P < 0.001). The gastric volume did not differ significantly between the groups (61.3 vs. 15.2 ml, P = 0.082). Conclusions: Preoperative oral carbohydrate therapy increases glucose variability and insulin resistance in diabetic patients. Therefore, carbohydrate beverages should be cautiously administered to diabetic patients, considering metabolic and safety aspects. Trial registration number ClinicalTrials.gov (No. NCT04013594). abstract_id: PUBMED:28202056 Normal gastric emptying time of a carbohydrate-rich drink in elderly patients with acute hip fracture: a pilot study. Background: Guidelines for fasting in elderly patients with acute hip fracture are the same as for other trauma patients, and longer than for elective patients. The reason is an assumed stress-induced delay in gastric emptying, with a possible risk of pulmonary aspiration. Prolonged fasting in elderly patients may have serious negative metabolic consequences. The aim of our study was to investigate whether the preoperative gastric emptying was delayed in elderly women scheduled for surgery due to acute hip fracture. Methods: In a prospective study, gastric emptying of a 400 ml 12.6% carbohydrate-rich drink was investigated in nine elderly women, aged 77-97, with acute hip fracture. The emptying time was assessed by the paracetamol absorption technique, and lag phase and gastric half-emptying time was compared with two gender-matched reference groups: ten elective hip replacement patients, aged 45-71, and ten healthy volunteers, aged 28-55. Results: The mean gastric half-emptying time in the elderly study group was 53 ± 5 (39-82) minutes with an expected gastric emptying profile. The reference groups had a mean half-emptying time of 58 ± 4 (41-106) and 59 ± 5 (33-72) minutes, indicating normal gastric emptying time in elderly with hip fracture. Conclusion: This pilot study in women with an acute hip fracture shows no evidence of delayed gastric emptying after an orally taken carbohydrate-rich beverage during the pre-operative fasting period. This implies no increased risk of pulmonary aspiration in these patients. Therefore, we advocate oral pre-operative management with carbohydrate-rich beverage in order to mitigate fasting-induced additive stress in the elderly with hip fracture. Trial Registration: ClinicalTrials.gov NCT02753010. Registered 17 April 2016, retrospectively. abstract_id: PUBMED:34220720 The Postprandial Glycaemic and Hormonal Responses Following the Ingestion of a Novel, Ready-to-Drink Shot Containing a Low Dose of Whey Protein in Centrally Obese and Lean Adult Males: A Randomised Controlled Trial. Purpose: Elevated postprandial glycaemia [PPG] increases the risk of cardiometabolic complications in insulin-resistant, centrally obese individuals. Therefore, strategies that improve PPG are of importance for this population.
Consuming large doses of whey protein [WP] before meals reduces PPG by delaying gastric emptying and stimulating the secretion of the incretin peptides, glucose-dependent insulinotropic polypeptide [GIP] and glucagon-like peptide 1 [GLP-1]. It is unclear if these effects are observed after smaller amounts of WP and what impact central adiposity has on these gastrointestinal processes. Methods: In a randomised-crossover design, 12 lean and 12 centrally obese adult males performed two 240 min mixed-meal tests, ~5-10 d apart. After an overnight fast, participants consumed a novel, ready-to-drink WP shot (15 g) or volume-matched water (100 ml; PLA) 10 min before a mixed-nutrient meal. Gastric emptying was estimated by oral acetaminophen absorbance. Interval blood samples were collected to measure glucose, insulin, GIP, GLP-1, and acetaminophen. Results: WP reduced PPG area under the curve [AUC0-60] by 13 and 18.2% in the centrally obese and lean cohorts, respectively (both p < 0.001). In both groups, the reduction in PPG was accompanied by a two- to three-fold increase in GLP-1 and delayed gastric emptying. Despite similar GLP-1 responses during PLA, GLP-1 secretion during the WP trial was ~27% lower in centrally obese individuals compared to lean (p = 0.001). In lean participants, WP increased the GLP-1ACTIVE/TOTAL ratio compared to PLA (p = 0.004), indicative of reduced GLP-1 degradation. Conversely, no treatment effects for GLP-1ACTIVE/TOTAL were seen in obese subjects. Conclusion: Pre-meal ingestion of a novel, ready-to-drink WP shot containing just 15 g of dietary protein reduced PPG in lean and centrally obese males. However, an attenuated GLP-1 response to mealtime WP and increased incretin degradation might impact the efficacy of nutritional strategies utilising the actions of GLP-1 to regulate PPG in centrally obese populations. Whether these defects are caused by an individual's insulin resistance, their obese state, or other obesity-related ailments needs further investigation. Clinical Trial Registration: ISRCTN.com, identifier [ISRCTN95281775]. https://www.isrctn.com/. abstract_id: PUBMED:20670666 Carbohydrate and fat digestion is necessary for maximal suppression of total plasma ghrelin in healthy adults. It is uncertain whether the postprandial suppression of ghrelin is dependent on digestion and absorption of nutrients or whether the presence of nutrients in the small intestine is sufficient. Twenty-four healthy young adults with a mean age of 23 ± 0.6 years were examined on 3 separate days after an overnight fast. Twelve subjects participated in Part A, and the other 12 subjects in Part B. In Part A, subjects consumed, in random order, one of three study drinks: 300 mL water; 300 mL high-fat drink, with and without 120 mg orlistat. In Part B, subjects received, in random order, one of three drinks: 300 mL water; 300 mL sucrose, with and without 100 mg acarbose. In both parts, gastric emptying was measured by 2-D ultrasound. In Part A, plasma ghrelin concentrations decreased following ingestion of the high-fat drink, but did not change with the high-fat-orlistat drink or water. In Part B, the suppression of plasma ghrelin following the sucrose drink was attenuated by acarbose. Orlistat accelerated gastric emptying of the high-fat drink, while acarbose delayed gastric emptying of the sucrose drink. In conclusion, fat and carbohydrate digestion is required for maximal suppression of ghrelin secretion. abstract_id: PUBMED:18803348 Satiety testing: ready for the clinic?
Drink tests are advocated as an inexpensive, noninvasive technique to assess gastric function in patients with a variety of upper digestive symptoms. Many patients with dyspeptic complaints will achieve satiation or develop symptoms at ingested volumes below those typically required to achieve these endpoints in controls. Substantial variation in test performance exists and a greater degree of standardization is required. Additionally, it remains unclear exactly what drink tests measure, as correlations with measures of gastric sensation, accommodation and emptying are modest at best. Finally, results of drink tests do not guide therapy. At present, these tests are best reserved for research studies and are not advocated for use in clinical practice. abstract_id: PUBMED:26399958 Ameliorating effect of transcutaneous electroacupuncture on impaired gastric accommodation induced by cold meal in healthy subjects. Background: Impaired gastric accommodation is recognized as one of the major pathophysiologies in functional dyspepsia and gastroparesis. Electroacupuncture has been shown to improve gastric accommodation in laboratory settings. It is, however, unknown whether it exerts a similar ameliorating effect in humans and whether needleless transcutaneous electroacupuncture (TEA) is also effective in improving gastric accommodation. Aim: The aim was to investigate the effects of TEA on gastric accommodation, gastric slow waves, and dyspeptic related symptoms. Methods: Thirteen healthy volunteers were studied in four randomized sessions: control, cold nutrient liquid, cold nutrient liquid + sham-TEA, and cold nutrient liquid + TEA. The subjects were requested to drink Ensure until reaching maximum satiety. The electrogastrogram (EGG) and electrocardiogram (ECG) were recorded to assess the gastric and autonomic functions, respectively. Results: 1) Gastric accommodation was reduced with the cold drink in comparison with the warm drink (P = 0.023). TEA improved the impaired gastric accommodation from 539.2 ± 133.8 ml to 731.0 ± 185.7 ml (P = 0.005). 2) The percentage of normal gastric slow waves in six subjects was significantly decreased in the cold session (P = 0.002) and improved in the TEA session (P = 0.009 vs sham; P < 0.001 vs cold). 3) TEA showed significant improvement in the bloating (80.8 ± 5.7 vs 61.2 ± 26.2, P = 0.011), postprandial fullness (48.1 ± 12.0 vs 34.2 ± 21.2, P = 0.042), and nausea (29.6 ± 10.9 vs 19.2 ± 11.2, P = 0.026) in comparison with the sham-TEA session. 4) Neither cold drink nor TEA altered vagal activities (P > 0.05). Conclusions: TEA improves impaired gastric accommodation and slow waves induced by cold drink, and the effect does not seem to be mediated via the vagal mechanisms. abstract_id: PUBMED:32710649 Functional dyspepsia in children: A study of pathophysiological factors. Background And Aim: Functional dyspepsia (FD) is common in children, and treatment targeted towards the altered pathophysiology can improve outcome. We evaluated FD children for abnormality of gastric accommodation and emptying, psychological stressors (PS), Helicobacter pylori (HP) infection, and post-infectious FD. Methods: Diagnosis of FD was based on ROME III criteria. Clinical evaluation including dyspeptic symptom scoring and assessment for PS was performed. Satiety drink test for gastric accommodation, gastroscopy with biopsy for HP infection, and solid meal gastric emptying were performed. Sixty-seven healthy children were enrolled for assessing PS and satiety drink test.
Results: Fifty-five FD children (33 boys, age 12 [6-18] years) with symptoms for 4 (2-48) months and dyspeptic score of 5 (1-13) were enrolled. PS were more common in FD than in controls (46/55 vs 9/67; P < 0.001). Median satiety drink volume was 360 mL (180-1320 mL); no patients had a satiety drink volume below the 5th centile of healthy children. The frequency (98% vs 85%; P = 0.01) and severity (65 [10-175] vs 50 [5-130]; P < 0.001) of postprandial symptoms were higher in FD than in controls. Of the postprandial symptoms, pain (20.3% vs 0%; P = 0.000) was present only in FD. Delayed gastric emptying was present in 6.5%, HP infection in 11%, and post-infectious FD in 13% of cases. An etiological factor was identified in 87% of children, with 20% having multiple factors. Conclusions: Abnormality of gastric sensorimotor function is seen in one-fourth of FD cases. HP infection and post-infectious FD are present in 11% and 13% of cases, respectively. abstract_id: PUBMED:6089862 Does guar gum improve post-prandial hyperglycaemia in humans by reducing small intestinal contact area? The possibility that viscous polysaccharides, such as guar gum, could lower post-prandial blood glucose levels in part by restricting carbohydrate solutions to a smaller area of small intestine was investigated in twenty healthy human volunteers. Addition of guar gum (22.5 g/l) delayed the mouth-to-caecum transit time of a hypotonic lactulose drink, but did not affect gastric emptying. When a 250 ml solution containing 50 g glucose was confined to a 550 mm length of intestine by an occluding balloon attached to an intestinal tube, maximum blood glucose response was significantly reduced (P less than 0.05) though only by 0.9 mmol/l. Addition of guar gum (36 g/l) had no effect on the distribution of a radio-labelled glucose drink (250 ml; 200 g glucose/l) in the small intestine, monitored using a gamma camera, although it significantly delayed gastric emptying (t 1/2 (min): guar gum v. control 115 (SE 15) v. 73 (SE 8)). Reduced contact area is unlikely to be one of the mechanisms by which guar gum improves glucose tolerance. Answer: The mysterious case of gastroparesis discussed in the provided abstracts indeed found its secret in a drink. The case involved a female patient with idiopathic gastroparesis characterized by a sudden onset. The diagnosis was confirmed through various methods including ultrasonographic study of gastric emptying, electrogastrography, gastric endoscopy/histology, and allergy tests. Remarkably, the disorder was attributed to an allergic predisposition, as the patient had consumed a liqueur containing gold flakes and subsequently developed an eosinophilic aggregation in the gastric mucosa. This was observed during the gastric endoscopy/histology. The symptoms of gastroparesis disappeared following steroid administration, suggesting that the allergic reaction to the gold salts in the drink was the cause of the gastroparesis in this case. The authors concluded that gastric histology and a thorough investigation into any history of allergy could be useful diagnostic tools in cases of idiopathic gastroparesis (PUBMED:23547733).
Instruction: Do we need to perform cystoscopy on all adults attending urology centres as outpatients? Abstracts: abstract_id: PUBMED:15539836 Do we need to perform cystoscopy on all adults attending urology centres as outpatients? Introduction: There has been considerable expansion in the use of flexible cystoscopy (FC) and people who can perform the procedure. Hence, there is a criticism that this procedure is being overused with no management benefit. Materials And Methods: We audited the use of FC in a district hospital for a period of 1 year. The results of FC for non-standard indications (other than haematuria and check cystoscopy) were analysed for their diagnostic yield. Results: Of the 1,390 FCs performed, 295 were done for non-standard indications. 46.14% of these cystoscopies had positive findings. Cancer detection rate was 6.10%. Cystoscopy altered the management in 14.08% of patients and was supportive to diagnosis and management in 32.06%. Conclusion: This procedure is certainly not overused and the ever-increasing requirement of this simple procedure has serious resource implications for the National Health Service. abstract_id: PUBMED:17444769 Ureteroscopy and cystoscopy simulation in urology. Transformations of many aspects of surgery have provided a potentially fertile ground for the implementation of surgical simulators in the medical mainstream. The expansion of minimally invasive diagnostic and therapeutic modalities, increasing healthcare demands, fiscal constraints, and sensitivity to medicolegal considerations limit resident instruction and practical experience in the operating room. Furthermore, the need for objective, structured assessments of surgical residents during training and the requirement for physicians to gain and maintain certification demand that innovative solutions be sought. Surgical simulators are poised to deliver broad-based training experiences to trainees of all levels. In urology, simulation has been centered on endourologic procedures, namely ureteroscopy and cystoscopy. In this paper, various models of simulation developed for ureteroscopy and cystoscopy in urology are reviewed, with a brief description of each model, its benefits and disadvantages, and current research surrounding each simulation model. abstract_id: PUBMED:34102797 Evolution of Urology Services in Pakistan. Urology has been separated from its parent generic specialty at different times during the last 150 years in different countries. In Pakistan, from 1947 to the 1970s, urology was part of surgery and was done by general surgeons. There were only two urology units from the late 1950s to the early 1960s. The actual rise of urology started with the introduction of trans-urethral resection of prostate (TURP) and endourology in the 1980s. Use of cystoscopy and retrograde ureteropyelography was in vogue much before TURP. The second era of modern endourology began with the introduction of percutaneous nephrostomy, ureteroscopy along with ESWL, and percutaneous nephrolithotomy in the late 1980s and 1990s. Renal transplantation started in 1979 from living-related donors in public sector hospitals. Now, there are 19 centres in the country performing regular renal transplantations. Urology has undergone a dramatic change during the new millennium. There have been sub-specialties in urology, like paediatric urology, endourology, reconstructive urology, uro-oncology, laparoscopic and robotic urology.
At present, there are 11 specialised kidney centres and institutes of urology in the country and 25 recognised urology centres for FCPS. More such centres and replication of the SIUT model is expected in Pakistan. Key Words: Urology, Endourology, Transplantation, Pakistan. abstract_id: PUBMED:392989 Cystoscopy and its importance in urology A review of the modern endoscopic methods for the diagnosis and treatment of urological disease makes their value and importance for modern urology evident: by means of these methods, whose basis was created 100 years ago with the introduction of cystoscopy, examination and treatment became possible in all regions of the urinary tract. Based on cystoscopy, a subspeciality in urology developed, which might be called endoscopic surgery in urology. The endoscopic treatment methods are relatively atraumatic and carry a lower complication rate than open operative interventions. However, they demand an exact knowledge of the indications and contraindications. For the great possibilities that the modern methods of endoscopic diagnosis and therapy open up today, we remain indebted to their inventors 100 years after the invention of the cystoscope; German researchers and masters play an important role among them. abstract_id: PUBMED:24939286 The clinic Heilanstalt Weidenplan in Halle (Saale), origin of the German urology. Otto Kneise's founding of the first independent urology department in Germany. The routine use of the cystoscope initiated the development of modern urology. Otto Kneise (1875-1953) extended the targets of cystoscopy by including examinations of the male bladder and prostate. He achieved the goal that "cystoscopy is part of general work in urology and not a pure gynecological act". He thus founded the specialty of gynecological urology within the field of urology, which prevented it from becoming an independent field. Under the leadership of Otto Kneise, the first independent urology department in Germany was created in the hospital Heilanstalt Weidenplan. abstract_id: PUBMED:37317442 Standardized Office Cystoscopy Training for Advanced Practice Providers in Urology. Introduction: Cystoscopy is one of the most commonly performed urological procedures. Indications include evaluation of hematuria and bladder cancer monitoring, which requires frequent surveillance for management. The challenges of maintaining the urology workforce are well-documented, and alternative options should be developed for performing cystoscopy safely and effectively. Nurse practitioners and physician assistants (i.e., advanced practice providers) are established professionals who have provided urological care for decades and who could acquire the necessary procedural skills following establishment of practice guidelines. Methods: Review and synthesis of the available world literature were completed to form an evidence-based proposal for a flexible cystoscopy training curriculum targeted to advanced practice providers in outpatient urology care settings. Results: Of 49 primary sources, 10 were appropriate for evaluation, resulting in development of clinical and technical knowledge domains for training U.S. based advanced practice providers in cystoscopy. Skills checklists were developed to aid in training, evaluation and privileging. Conclusions: Based on analysis of the existing literature, we propose a framework for standardizing outpatient flexible cystoscopy training for U.S.
based advanced practice providers. Adoption of this framework will establish the standards necessary to ensure high quality, reproducible outcomes essential for seamlessly integrating advanced practice providers into this procedural role within the urological health care team. abstract_id: PUBMED:23769561 Effectiveness of the UroMentor virtual reality simulator in the skill acquisition of flexible cystoscopy. Background: Virtual reality (VR) has been recognized as a useful modality in the training of surgical skills. With respect to basic endoscopic skill training in urology, we sought to investigate the effectiveness of the UroMentor(TM) virtual reality simulator (VRS) in the skill acquisition of flexible cystoscopy. Methods: Urologists familiar with rigid cystoscopy procedures were selected to take part in a virtual training course of flexible cystoscopy. Changes in total operating time, frequency of injury, number of digital markers inside the bladder, and the global rating scale (GRS) scores were assessed following eight repeated training sessions on the UroMentor(TM). Results: Eighteen urologists voluntarily took part in the study. Total operating time was significantly lower after eight sessions of training than at baseline ((111 ± 10) vs (511 ± 67) seconds; P < 0.001). Additionally, the frequency of injury decreased with training from (12 ± 2) times to (5 ± 1) times (P < 0.001), while the number of digital markers observed increased from 9 ± 0 to 10 ± 1 (P = 0.005). Finally, training with the UroMentor(TM) resulted in a GRS increase from (1.3 ± 0.2) points to (3.9 ± 0.2) points (P < 0.001). Conclusion: The VRS UroMentor(TM) can improve urologists' ability to perform flexible cystoscopy and could be used as an effective training tool for trainees. abstract_id: PUBMED:37633791 The Diagnostic Accuracy of Cystoscopy for Detecting Bladder Cancer in Adults Presenting with Haematuria: A Systematic Review from the European Association of Urology Guidelines Office. Context: Haematuria can be macroscopic (visible haematuria [VH]) or microscopic (nonvisible haematuria [NVH]), and may be caused by a number of underlying aetiologies. Currently, in case of haematuria, cystoscopy is the standard diagnostic tool to screen the entire bladder for malignancy. Objective: The objective of this systematic review is to determine the diagnostic test accuracy of cystoscopy (compared with other tests, eg, computed tomography, urine biomarkers, and urine cytology) for detecting bladder cancer in adults. Evidence Acquisition: A systematic review of the literature was performed according to the Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy and Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) extension for diagnostic test accuracy studies' checklist. The MEDLINE, Embase, Cochrane CENTRAL, and Cochrane CDSR databases (via Ovid) were searched up to July 13, 2022. The population comprises patients presenting with either VH or NVH, without previous urological cancers. Two reviewers independently screened all articles, searched reference lists of retrieved articles, and performed data extraction. The risk of bias was assessed using Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2). Evidence Synthesis: Overall, nine studies were included in the qualitative analysis. Seven out of nine included trials covered the use of cystoscopy in comparison with radiological imaging.
Overall, sensitivity of cystoscopy ranged from 87% to 100%, specificity from 64% to 100%, positive predictive value from 79% to 98%, and negative predictive values between 98% and 100%. Two trials compared enhanced or air cystoscopy versus conventional cystoscopy. Overall sensitivity of conventional white light cystoscopy ranged from 47% to 100% and specificity from 93.4% to 100%. Conclusions: The true accuracy of cystoscopy for the detection of bladder cancer within the context of haematuria has not been studied extensively, resulting in inconsistent data regarding its performance for patients with haematuria. In comparison with imaging modalities, a few trials have prospectively assessed the diagnostic performance of cystoscopy, confirming very high accuracy for cystoscopy, exceeding the diagnostic value of any other imaging test. Patient Summary: Evidence of tests for detecting bladder cancer in adults presenting with haematuria (blood in urine) was reviewed. The most common test used was cystoscopy, which remains the current standard for diagnosing bladder cancer. abstract_id: PUBMED:29240348 Avoiding Same-Day Cystoscopy Cancellations by Veterans: Nurse-Led Education Improved Efficiency for Cystoscopy in VA Urology Clinics. Same-day cancellation of outpatient cystoscopy was recognized as a growing problem in a New Mexico Veterans Administration regional urology clinic. A multidisciplinary team initiated a performance improvement project using the VA-TAMMCS method to address the problem. The resulting changes in patient education significantly reduced cystoscopy cancellations and improved overall patient satisfaction. abstract_id: PUBMED:36427624 Do We Need a Urine Culture Before Cystoscopy? Time to Shift Away from Routine Testing. Objective: To determine if cystoscopy could be safely performed without a urine culture by analyzing 2 prospective cohorts of patients undergoing outpatient cystoscopy, 1 without urine culture and the other cohort screened and treated for ASB as per protocol. We aim to report the rate of symptomatic UTI and the rate of urinary sepsis within 30 days of cystoscopy. Methods: All patients who underwent cystoscopy between 2021 and 2022 were invited to participate. Patients were prospectively recruited in a 2:1 ratio. Cohort A: patients who did not require urine culture before cystoscopy. Cohort B: patients who were required to have sterile urine before the procedure. Primary endpoint was the occurrence of symptomatic UTI ≤ 30 days after cystoscopy. Results: A total of 461 patients were recruited, 316 for cohort A and 145 for cohort B. Fifteen patients had symptomatic UTIs across both groups, 8 in Cohort A and 7 in Cohort B. The overall rate of symptomatic UTI was 3.2%, 2.5% for Cohort A and 4.8% for Cohort B. We had only 1 case of urinary sepsis in Cohort B. The RR for developing UTI in cohort B was 1.91 in the univariate model and 1.74 in the multivariate analysis. Conclusion: Routine urine culture before outpatient cystoscopy is unnecessary. In asymptomatic patients, subsequent UTIs are rare and can be easily treated. Avoiding routine urine culture helps prevent misuse of antibiotics, reduce cancellation rates, and ease the burden on patients who get cancelled because of ASB. The low risk of symptomatic UTI after the procedure must be discussed with the patient. Answer: Based on the provided abstracts, it is not necessary to perform cystoscopy on all adults attending urology centers as outpatients.
An audit of the use of flexible cystoscopy (FC) in a district hospital showed that while 46.14% of cystoscopies for non-standard indications had positive findings and the cancer detection rate was 6.10%, cystoscopy altered the management in only 14.08% of patients and was supportive to diagnosis and management in 32.06% (PUBMED:15539836). This suggests that while cystoscopy can be a useful diagnostic tool, it may not be necessary for every outpatient visit, especially if the indications are not standard. Furthermore, the diagnostic accuracy of cystoscopy for detecting bladder cancer in adults presenting with haematuria is high, with sensitivity ranging from 87% to 100% and specificity from 64% to 100% (PUBMED:37633791). However, the true accuracy of cystoscopy for the detection of bladder cancer within the context of haematuria has not been studied extensively, and the data regarding its performance for patients with haematuria are inconsistent. Additionally, a study on whether a urine culture is needed before cystoscopy concluded that routine urine culture before outpatient cystoscopy is unnecessary, as subsequent UTIs are rare and can be easily treated (PUBMED:36427624). This suggests that not all patients require a pre-cystoscopy urine culture, which can reduce cancellations and streamline outpatient cystoscopy. In conclusion, while cystoscopy is a valuable diagnostic and management tool in urology, the decision to perform it should be based on specific indications rather than as a routine procedure for all adults attending urology centers as outpatients.
Instruction: Does prenatal cadmium exposure affect fetal and child growth? Abstracts: abstract_id: PUBMED:21186202 Does prenatal cadmium exposure affect fetal and child growth? Objectives: Cadmium is known to be a significant health hazard, but most information comes from studies of adults. The effects of exposure to cadmium during fetal life on early growth and development remain uncertain. In this study we investigated the placental transport of cadmium and the effects of prenatal cadmium exposure on fetal and child growth in Taiwan. Methods: The data in this study were from a birth cohort study in Taiwan which started in 2004. Pregnant women were recruited from four hospitals and interviewed after delivery to collect information on themselves and their infants. Children were followed up to obtain information on growth up to 3years of age. Whole blood cadmium concentrations in maternal and cord blood samples were measured and the relationship with birth size and growth assessed using linear regression and mixed models. Results: 321 maternal blood samples and 402 cord blood samples were eligible for analysis. Among 289 pairs with maternal and cord blood suitable for measurement, the median cadmium concentration in cord blood (0.31μg/l) was less than that in maternal blood (1.05μg/l), with low correlation between the two (r=0.04). An increase in cord blood cadmium was found to be associated with newborn decreased head circumference and to be significantly and consistently associated with a decrease in height, weight and head circumference up to 3 years of age. Conclusions: Placental transport of cadmium is limited. However, prenatal cadmium exposure may have a detrimental effect on head circumference at birth and child growth in the first 3years of life. abstract_id: PUBMED:30252047 Associations of Prenatal Exposure to Cadmium With Child Growth, Obesity, and Cardiometabolic Traits. Prenatal cadmium exposure has been associated with impaired fetal growth; much less is known about the impact during later childhood on growth and cardiometabolic traits. To elucidate the associations of prenatal cadmium exposure with child growth, adiposity, and cardiometabolic traits in 515 mother-child pairs in the Rhea Mother-Child Study cohort (Heraklion, Greece, 2007-2012), we measured urinary cadmium concentrations during early pregnancy and assessed their associations with repeated weight and height measurements (taken from birth through childhood), waist circumference, skinfold thickness, blood pressure, and serum lipid, leptin, and C-reactive protein levels at age 4 years. Adjusted linear, Poisson, and mixed-effects regression models were used, with interaction terms for child sex and maternal smoking added. Elevated prenatal cadmium levels (third tertile of urinary cadmium concentration (0.571-2.658 μg/L) vs. first (0.058-0.314 μg/L) and second (0.315-0.570 μg/L) tertiles combined) were significantly associated with a slower weight trajectory (per standard deviation score) in all children (β = -0.17, 95% confidence interval (CI): -0.32, -0.02) and a slower height trajectory in girls (β = -0.30, 95% CI: -0.52,-0.09; P for interaction = 0.025) and in children born to mothers who smoked during pregnancy (β = -0.48, 95% CI: -0.83, -1.13; P for interaction = 0.027). We concluded that prenatal cadmium exposure was associated with delayed growth in early childhood. Further research is needed to understand cadmium-related sex differences and the role of coexposure to maternal smoking during early pregnancy. 
abstract_id: PUBMED:35640466 Association between prenatal cadmium exposure and child development: The Japan Environment and Children's study. Cadmium is a heavy metal that can be found in soil, air, food, and water. Cadmium has toxic effects on the kidneys, bones, and respiratory system. Prenatal exposure to cadmium has been found to affect the mental development of children, but inconsistent results have been found in different studies. Therefore, it remains unclear whether prenatal cadmium exposure is associated with child development after birth. To elucidate whether cadmium affects child development, we analyzed data from a nationwide cohort study, the Japan Environment and Children's Study. Prenatal cadmium concentrations in blood from mothers in the second or third trimester were determined by inductively coupled plasma mass spectrometry. Child development was evaluated using "Ages and Stages" questionnaires. The associations between cadmium and child development were investigated by performing logistic regression analyses, multinomial logistic regression analyses, and generalized linear mixed models using the child development parameters as dependent variables and the cadmium concentrations in maternal blood as the independent variable. There were significant associations between the cadmium concentration and child development at 6 months, 1 year, and 1.5 years after birth. However, the effect had disappeared at 2 years after birth or later. The number of developmental delays was positively associated with the cadmium concentration after adjusting for individual differences. The results indicate that prenatal exposure affects child development, but the effect decreases with age. abstract_id: PUBMED:25960943 The Epigenetic Effects of Prenatal Cadmium Exposure. Prenatal exposure to the highly toxic and common pollutant cadmium has been associated with adverse effects on child health and development. However, the underlying biological mechanisms of cadmium toxicity remain partially unsolved. Epigenetic disruption due to early cadmium exposure has gained attention as a plausible mode of action, since epigenetic signatures respond to environmental stimuli and the fetus undergoes drastic epigenomic rearrangements during embryogenesis. In the current review, we provide a critical examination of the literature addressing prenatal cadmium exposure and epigenetic effects in human, animal, and in vitro studies. We conducted a PubMed search and obtained eight recent studies addressing this topic, focusing almost exclusively on DNA methylation. These studies provide evidence that cadmium alters epigenetic signatures in the DNA of the placenta and of newborns, and some studies indicated marked sexual differences for cadmium-related DNA methylation changes. Associations between early cadmium exposure and DNA methylation might reflect interference with de novo DNA methyltransferases. More studies, especially those including environmentally relevant doses, are needed to confirm the toxicoepigenomic effects of prenatal cadmium exposure and how that relates to the observed health effects of cadmium in childhood and later life. abstract_id: PUBMED:35405126 Association between prenatal cadmium exposure and cord blood DNA methylation. Prenatal cadmium exposure is known to affect infant growth and organ development. Nonetheless, the role of DNA methylation in cadmium-related health effects has yet to be determined.
To this end, we investigated the relationship between prenatal cadmium exposure and cord blood DNA methylation in Korean infants through an epigenome-wide association study. Cadmium concentrations in maternal blood during early and late pregnancy and in cord blood collected from newborns were measured using atomic absorption spectrometry, and DNA methylation analysis was conducted using HumanMethylationEPIC BeadChip kits. After adjusting for infant sex, maternal pregnancy body mass index, smoking status, and estimated leukocyte composition, we analyzed the association between CpG methylation and cadmium concentration in 364 samples. Among 835,252 CpG sites, maternal blood cadmium concentration in early pregnancy was significantly associated with two differentially methylated CpG sites, cg05537752 and cg24904393, which were annotated to ATP9A and to no gene, respectively. The study findings indicate that prenatal cadmium exposure is significantly associated with the methylation statuses of several CpG sites and regions in Korean infants, especially during early pregnancy. abstract_id: PUBMED:34256298 Association of prenatal exposure to cadmium with neurodevelopment in children at 2 years of age: The Japan Environment and Children's Study. Background: Prenatal cadmium exposure has been associated with adverse neurodevelopmental outcomes. However, previous findings are contradictory, and little is known about the potential modifiers of the cadmium-related neurodevelopmental risk. We investigated the associations between prenatal cadmium exposure and neurodevelopment in 2-year-old children and examined the influence of mother/child characteristics. Methods: We recruited 3545 mother-child pairs from the Japan Environment and Children's Study. We collected maternal blood during mid/late pregnancy and cord blood at delivery, and measured cadmium concentrations using inductively coupled plasma mass spectrometry. Neurodevelopment was assessed using the Kyoto Scale of Psychological Development (KSPD), which includes cognitive-adaptive (C-A), language-social (L-S), postural-motor (P-M) and developmental quotient (DQ) domains. Associations between cadmium and KSPD scores were tested using multivariable models after controlling for confounders. Results: Median levels (interquartile ranges) of cadmium in maternal and cord blood were 0.70 (0.52-0.95) and 0.04 (0.03-0.06) μg/L, respectively. Maternal blood cadmium concentrations were inversely associated with P-M scores in boys (β = -1.4, 95% confidence interval (CI): -2.7, -0.038), DQ in children of mothers who smoked during pregnancy (β = -2.9, 95% CI: -5.7, -0.12), and P-M (β = -5.4, 95% CI: -10, -0.67), C-A (β = -6.1, 95% CI: -11, -1.8), L-S (β = -9.0, 95% CI: -13, -4.8) and DQ scores (β = -6.4, 95% CI: -9.6, -3.1) in children born to mothers with gestational diabetes. Cord blood cadmium concentrations were negatively associated with L-S scores (β = -6.0, 95% CI: -11, -0.91) in children born to mothers with gestational diabetes. Conclusions: Prenatal cadmium exposure was negatively associated with neurodevelopment in boys, in children whose mothers smoked, and in children born to mothers with gestational diabetes. Further studies in other populations are needed to confirm our findings. abstract_id: PUBMED:36029842 The associations of prenatal exposure to PM2.5 and its constituents with fetal growth: A prospective birth cohort in Beijing, China.
Background: Few studies have investigated the association of prenatal exposure to PM2.5 with fetal growth measured by ultrasound, and their results have been inconsistent. No study has evaluated the effect of PM2.5 constituents on fetal growth in utero. We aimed to investigate whether prenatal exposure to PM2.5 and its constituents was associated with fetal growth measured by ultrasound. Methods: A total of 4319 eligible pregnant women in the Peking University Birth Cohort in Tongzhou (PKUBC-T) were included in the study. Based on mothers' residential addresses, we estimated prenatal PM2.5 concentrations with a satellite-based spatiotemporal model and PM2.5 constituent concentrations with a modified Community Multiscale Air Quality model. Fetal growth parameters of abdominal circumference (AC), head circumference (HC), and femur length (FL) were measured by ultrasound, and then estimated fetal weight (EFW) was calculated. We calculated sex- and gestational age-specific fetal growth Z-scores and then defined the corresponding fetal undergrowth. Generalized estimating equations were used to investigate the association of PM2.5 and its constituents with fetal growth Z-scores and fetal undergrowth. Results: Prenatal exposure to PM2.5, OC, EC, SO42-, NH4+, or NO3- was consistently associated with decreased Z-scores of fetal growth parameters (AC, HC, FL, EFW). One IQR increase of PM2.5, OC, EC, SO42-, NH4+, or NO3- was associated with -0.183 [95% confidence interval (CI): -0.225, -0.141], -0.144 (95%CI: -0.181, -0.107), -0.123 (95%CI: -0.160, -0.085), -0.035 (95%CI: -0.055, -0.015), -0.095 (95%CI: -0.126, -0.064), and -0.124 (95%CI: -0.159, -0.088) decrease in EFW Z-score, respectively. Prenatal exposure to PM2.5, OC, EC, SO42-, NH4+, or NO3- was also associated with higher risk of fetal AC, HC, FL or EFW undergrowth. Conclusion: The study identified that prenatal exposure to PM2.5 or its constituents was associated with impaired fetal growth. The findings provide evidence that control measures for PM2.5 constituents should be implemented to further protect fetal growth. abstract_id: PUBMED:16532170 Prenatal exposure to androgens as a factor of fetal programming. Both epidemiological and clinical evidence suggest a relationship between the prenatal environment and the risk of developing diseases during adulthood. The first observations about this relationship showed that prenatal growth retardation or stress conditions during fetal life were associated with cardiovascular, metabolic and other diseases in later life. However, those are not the only conditions that may have lasting effects after birth. Growing evidence suggests that prenatal exposure to steroids (either of fetal or maternal origin) could be another source of prenatal programming with detrimental consequences during adulthood. We have recently demonstrated that pregnant women with polycystic ovary syndrome exhibit elevated androgen levels compared to normal pregnant women, which could provide an androgen excess for both female and male fetuses. We have further tested this hypothesis in an animal model of prenatal androgenization, finding that females born from androgenized mothers have a low birth weight and high insulin resistance that starts at an early age. On the other hand, males have lower testosterone and LH secretion in response to a GnRH analogue test than control males, as well as alterations in seminal parameters.
We therefore propose that our efforts should be directed to modify the hyperandrogenic intrauterine environment to reduce the potential development of reproductive and metabolic diseases during adulthood. abstract_id: PUBMED:31678728 Coal smoke, gestational cadmium exposure, and fetal growth. Background: Gestational cadmium exposure may impair fetal growth. Coal smoke has largely been unexplored as a source of cadmium exposure. We investigated the relationship between gestational cadmium exposure and fetal growth, and assessed coal smoke as a potential source of airborne cadmium, among non-smoking pregnant women in Ulaanbaatar, Mongolia, where coal combustion in home heating stoves is a major source of outdoor and indoor air pollution. Methods: This observational study was nested within the Ulaanbaatar Gestation and Air Pollution Research (UGAAR) study, a randomized controlled trial of portable high efficiency particulate air (HEPA) filter air cleaner use during pregnancy, fetal growth, and early childhood development. We measured third trimester blood cadmium concentrations in 374 out of 465 participants who had a live birth. We used multiple linear and logistic regression to assess the relationships between log2-transformed maternal blood cadmium concentrations and birth weight, length, head circumference, ponderal index, low birth weight, small for gestational age, and preterm birth in crude and adjusted models. We also evaluated the relationships between log2-transformed blood cadmium concentrations and the density of coal-burning stoves within 5000 m of each participant's apartment as a proxy of coal smoke emissions from home heating stoves. Results: The median (25th,75th percentile) blood cadmium concentration was 0.20 (0.15, 0.29) μg/L. A doubling of blood cadmium was associated with a 95 g (95% CI: 34, 155 g) reduction in birth weight in adjusted models. An interquartile range increase in coal stove density (from 3.4 to 4.9 gers/hectare) surrounding participants' apartments was associated with a 12.2% (95% CI: 0.3, 25.6%) increase in blood cadmium concentrations. Conclusions: Gestational cadmium exposure was associated with reduced birth weight. In settings where coal is a widely used fuel, cadmium may play a role in the putative association between air pollution and impaired fetal growth. abstract_id: PUBMED:3736760 Prenatal exposure to lead and cadmium and psychomotor development of the child at 6 years. In 1977, a hair sample was taken from 26 newborn babies and their mothers and analysed for lead and cadmium. Six years later, each of these 26 children was given a psychometric test (McCarthy Scales of Children's Abilities). Statistical analysis shows a significant negative relationship between the degree of in utero exposure to cadmium and lead and the child's motor and perceptual abilities. Any effect on memory or verbal skills was not statistically significant. Allowing for the confounding variables does not consistently affect these results. Answer: Yes, prenatal cadmium exposure has been associated with adverse effects on fetal and child growth. Studies have found that increased cadmium levels in cord blood were associated with decreased head circumference at birth and consistently linked with reduced height, weight, and head circumference up to 3 years of age (PUBMED:21186202). 
Additionally, elevated prenatal cadmium levels were significantly associated with a slower weight trajectory in all children and a slower height trajectory in girls and in children born to mothers who smoked during pregnancy (PUBMED:30252047). Furthermore, prenatal cadmium exposure was found to affect child development at 6 months, 1 year, and 1.5 years after birth, with the number of developmental delays being positively associated with cadmium concentration (PUBMED:35640466). Research also indicates that prenatal cadmium exposure can have epigenetic effects, such as alterations in DNA methylation in the placenta and newborns, which might reflect interference with de novo DNA methyltransferases (PUBMED:25960943). Specifically, prenatal cadmium exposure was significantly associated with methylation statuses of several CpG sites and regions in Korean infants, especially during early pregnancy (PUBMED:35405126). Moreover, prenatal cadmium exposure was negatively associated with neurodevelopment in boys, in children whose mothers smoked, and in children born to mothers with gestational diabetes (PUBMED:34256298). These findings suggest that prenatal cadmium exposure can have lasting effects on child growth and development, and further research is needed to understand the mechanisms and potential for mitigation.
Instruction: The health information brochure: a useful tool for chiropractic practice? Abstracts: abstract_id: PUBMED:11416823 The health information brochure: a useful tool for chiropractic practice? Background: It has been suggested that clinicians should be looking at new ways to enhance their patients' self-care. Patient education is one strategy that primary providers may use. Objective: This study investigates the format in which patients would like to pursue their health education within the chiropractic clinic. Methods: An exploratory study of chiropractic patients was undertaken to investigate patients' preferred health education formats, their commitment to pursuing health objectives, and their literacy level. Purposive sampling of 9 Australian chiropractic clinics was undertaken. Convenience sampling of patients attending these clinics resulted in 102 patients participating. Participants completed a questionnaire. A research assistant was available to clarify any questions. Data were collected and collated. A Likert scale was used to capture responses to questions ascertaining patient opinions. Results: Patients considered health the most important of the life objectives listed; however, they preferred spending time with family to undertaking health- and fitness-promoting activity. More chiropractic patients opted for health information brochures than health promotion classes, personally supervised self-care programs, or practitioner-supervised self-care contracts. Patient literacy levels varied within and between clinics. Conclusions: Brochures may provide a definitive health information tool for chiropractors who limit their clinical role to primary contact and a helpful adjunct to patient education for chiropractors committed to a primary care role. However, care should be taken to select brochures consistent with the patients' literacy level. Tips for selecting and preparing suitable brochures are provided. The discrepancy between how greatly patients value health and how they prefer spending their time may have implications for successful behavior change. Brochures may not alone constitute adequate practitioner involvement. abstract_id: PUBMED:12021742 Health information and promotion in chiropractic clinics. Objective: To explore the current health education behaviors of chiropractors, ascertain their willingness to provide patient counseling, and compare this with topics of interest to chiropractic patients. Methods: This study involved a postal survey of 400 randomly selected members of the Chiropractic Association of Australia (35% response rate) and a semistructured interview of 316 patients attending one of 20 purposively selected chiropractic practices. Data were collated, and the current health information practices of chiropractic respondents and their willingness to undertake counseling on various topics was identified and compared with the information interests of participating patients. Particular emphasis was placed on injury prevention both with respect to patient counseling and chiropractic practice risk. Results: Respondents expressed varying degrees of willingness to provide health information on diverse topics, but no clear health education chiropractic practice pattern emerged. Although expressing willingness to undertake counseling, respondents were more likely to provide health information brochures than develop a tailored health promotion contract. Health education topics ranged from exercise (91%) to osteoporosis prevention (23%). 
Seventy-eight percent of chiropractors were prepared to offer counseling on injury prevention, yet 45% of respondents themselves reported having some work-related injury. Maintenance care failed to emerge as a global term for describing a common core of topics or chiropractic health education practices. Conclusion: This study demonstrated an interest by chiropractors in providing and by chiropractic patients in obtaining health information that extends beyond spinal health. The range of relevant topics covered and modes used for health information transmission in chiropractic practice requires clarification. The prevalence of work-related injury among chiropractors suggests a need to develop safe chiropractic clinical practice protocols. abstract_id: PUBMED:28439405 The STarT back tool in chiropractic practice: a narrative review. Background: The Keele STarT Back Tool was designed for primary care medical physicians in the UK to determine the risk for persistent disabling pain in patients with musculoskeletal pain and to tailor treatments accordingly. In medical and physical therapy settings, STarT Back Tool's tailored care plans improved patients' low back pain outcomes and lowered costs. Objective: Review studies using the STarT Back Tool in chiropractic patient populations. Methods: PubMed, The Cochrane Library, Index to Chiropractic Literature, and Science Direct databases were searched. Articles written in English, published in peer-reviewed journals, that studied the STarT Back Tool in patients seeking chiropractic care were included. Results: Seven articles were selected based on inclusion and exclusion criteria. The STarT Back Tool was feasibly incorporated into 19 chiropractic clinics in Denmark. Total STarT Back 5-item score correlated moderately with total Bournemouth Questionnaire score. Two studies reported that the STarT Back Tool's predictive ability was poor, while another reported that the tool predicted outcomes in patients scoring in the medium and high risk categories who completed the STarT Back 2 days after their initial visit. A study examining Danish chiropractic, medical and physical therapy settings revealed that only baseline episode duration affected STarT Back's prognostic ability across all care settings. The tool predicted pain and disability in chiropractic patients whose episode duration was at least 2 weeks, but not in patients with an episode duration <2 weeks. Conclusion: While the STarT Back Tool can be incorporated into chiropractic settings and correlates with some elements of the Bournemouth Questionnaire, its prognostic ability is sometimes limited by the shorter low back pain episodes with which chiropractic patients often present. It may be a better predictor in patients whose episode duration is at least 2 weeks. Studies examining outcomes of stratified care in chiropractic patients are needed.
The challenge faced by chiropractic undergraduate nutrition education involves identification of justifiable nutritional interventions, selection of effective doses and an awareness of the potential side effects. By resolving to match or better the medical profession as a source of nutritional health care, the future chiropractor has unwittingly embraced the cost-benefit paradigm of medical drug therapy. abstract_id: PUBMED:35421996 Factors that influence scope of practice of the chiropractic profession in Australia: a scoping review. Introduction: The World Health Organization describes chiropractic as a health profession that treats the musculoskeletal system and the effects of that system on the function of the nervous system and general health. Notwithstanding such descriptions, scope of practice remains a contentious issue in Australian chiropractic, with various authors defining it differently. To date, the peak governing body, the Chiropractic Board of Australia, has focused on title protection rather than defining a scope of practice for the profession. A well-defined scope of practice is important, as it helps to identify what is acceptable in the profession and the role chiropractic has in the broader healthcare system. Objective: The objective of this scoping review was to explore the literature on the factors that influence scope of practice of chiropractic in Australia. Methods: This study employed scoping review methodology to document the current state of the literature on factors that influence scope of practice of the chiropractic profession in Australia. Results: A total of 1270 articles were identified from the literature search. Six studies fulfilled the inclusion criteria and were included in the final analysis. Four factors that influence scope of practice were identified: education, professional identity, patient safety, and organisational structure. Conclusion: The results of this study will inform future discussions around establishing a framework for a more comprehensive scope of practice for the chiropractic profession in Australia. Such a framework has the potential to benefit patient safety, professional identity, public perception, education, and regulation of the profession. abstract_id: PUBMED:22742964 Consensus process to develop a best-practice document on the role of chiropractic care in health promotion, disease prevention, and wellness. Objective: The purposes of this project were to develop consensus definitions for a set of best practices that doctors of chiropractic may use for promoting health and wellness and preventing disease and to describe the appropriate components and procedures for these practices. Methods: A multidisciplinary steering committee of 10 health care professionals developed seed statements based on their clinical experience and relevant literature. A Delphi consensus process was conducted from January to July 2011, following the RAND methodology. Consensus was reached when at least 80% of the panelists were in agreement. There were 44 Delphi panelists (36 doctors of chiropractic, 6 doctors of philosophy, 1 doctor of naturopathy, 1 registered nurse). Results: The statements developed defined the terms and practices for chiropractic care to promote health and wellness and prevent disease. Conclusion: This document describes the procedures and features of wellness care that represent a reasonable approach to wellness care and disease prevention in chiropractic clinical practice.
This living document provides a general framework for an evidence-based approach to chiropractic wellness care. abstract_id: PUBMED:1919365 Science in chiropractic clinical practice: identifying a need. The chiropractic profession has resolved to establish chiropractic clinical care upon a scientifically acceptable foundation. In order for such an ambition to be realized, the cooperation and participation of field practitioners is required. A survey of chiropractors practicing in Australia demonstrated that respondents largely failed to appreciate the power of various research designs to provide clinical practice information. This paper suggests the chasm between professional resolve and clinical practice is not being adequately bridged at the level of field practitioners. abstract_id: PUBMED:8046278 An overview of European chiropractic practice. Objective: The aim was to survey chiropractic practice in Europe. Design: A postal questionnaire survey of all chiropractors in the European Chiropractors' Union (1990) yielded demographic information and practice characteristics. Patient case forms, completed by randomly selected practitioners, revealed the demographic features, presenting complaints, diagnoses and management procedures used. Setting: The survey was conducted in private chiropractic practices of 13 European countries. Participants: Nine hundred five practitioner questionnaires (70% response) and 1014 patient case forms were used. Main Results: Demographic features of chiropractors and patients compare well to previous studies. Most chiropractors, one-quarter of whom are now females, were European trained. Many practice in groups and in cooperation with other health professions, especially in relation to radiology. Radiographs are used in nearly two-thirds of cases, yet only 25% of patients were X-rayed in chiropractic clinics. Nearly half of the patients consulted in the first month of their complaints, which were mainly of musculoskeletal pain. Virtually no evidence appeared of attempts to manage viscerosystemic disease. The manual techniques used varied considerably according to country. Most patients were at work during the course of their treatment. Conclusions: The results reflect a considerable maturation of the profession over the past two decades. The study highlights the socioeconomic potential for the treatment of chronic and severe musculoskeletal pain. abstract_id: PUBMED:34314609 The Role of Chiropractic Care in Providing Health Promotion and Clinical Preventive Services for Adult Patients with Musculoskeletal Pain: A Clinical Practice Guideline. Objective: To develop evidence-based recommendations on best practices for delivery of clinical preventive services by chiropractors and to offer practical resources to empower provider applications in practice. Design: Clinical practice guideline based on evidence-based recommendations of a panel of practitioners and experts on clinical preventive services. Methods: Synthesizing the results of a literature search for relevant clinical practice guidelines and systematic reviews, a multidisciplinary steering committee with training and experience in health promotion, clinical prevention, and/or evidence-based chiropractic practice drafted a set of recommendations. A Delphi panel of experienced practitioners and faculty, primarily but not exclusively chiropractors, rated the recommendations by using the formal consensus methodology established by the RAND Corporation/University of California. 
Results: The Delphi consensus process was conducted during January-February 2021. The 65-member Delphi panel reached a high level of consensus on appropriate application of clinical preventive services for screening and health promotion counseling within the chiropractic scope of practice. Interprofessional collaboration for the successful delivery of clinical preventive services was emphasized. Recommendations were made on primary, secondary, tertiary, and quaternary prevention of musculoskeletal pain. Conclusions: Application of this guideline in chiropractic practice may facilitate consistent and appropriate use of screening and preventive services and foster interprofessional collaboration to promote clinical preventive services and contribute to improved public health. abstract_id: PUBMED:23875117 Evidence-Based Practice and Chiropractic Care. Evidence-based practice has had a growing impact on chiropractic education and the delivery of chiropractic care. For evidence-based practice to penetrate and transform a profession, the penetration must occur at 2 levels. One level is the degree to which individual practitioners possess the willingness and basic skills to search and assess the literature. Chiropractic education received a significant boost in this realm in 2005 when the National Center for Complementary and Alternative Medicine awarded 4 chiropractic institutions R25 education grants to strengthen their research/evidence-based practice curricula. The second level relates to whether the therapeutic interventions commonly employed by a particular health care discipline are supported by clinical research. A growing body of randomized controlled trials provides evidence of the effectiveness and safety of manual therapies. Answer: The health information brochure appears to be a useful tool for chiropractic practice. A study investigating the format in which patients would like to pursue their health education within the chiropractic clinic found that more chiropractic patients opted for health information brochures than for other forms of health education such as health promotion classes, personally supervised self-care programs, or practitioner-supervised self-care contracts (PUBMED:11416823). This suggests that brochures are a preferred method for patients to receive health information in a chiropractic setting. However, it is important to note that while patients considered health the most important of life objectives listed, they preferred spending time with family to undertaking health- and fitness-promoting activity. This discrepancy between valuing health and preferring to spend time on other activities may have implications for successful behavior change, indicating that brochures alone may not constitute adequate practitioner involvement in patient education (PUBMED:11416823). Additionally, the study highlighted the need to select brochures consistent with the patients' literacy level, providing tips for selecting and preparing suitable brochures. This is crucial as patient literacy levels varied within and between clinics (PUBMED:11416823). In conclusion, health information brochures can serve as a definitive tool for chiropractors who limit their clinical role to primary contact and as a helpful adjunct to patient education for chiropractors committed to a primary care role. 
However, the effectiveness of brochures in changing patient behavior and promoting health may be limited if not accompanied by other forms of practitioner involvement and tailored to the literacy levels of patients.
Instruction: Agreement and correctness in adjusting antiepileptic drug treatment: a need for rational drug treatment? Abstracts: abstract_id: PUBMED:10368076 Agreement and correctness in adjusting antiepileptic drug treatment: a need for rational drug treatment? Purpose: To study interobserver variation in treatment decisions in a first follow-up contact after initiation of antiepileptic drug (AED) treatment. The results should aid us in the assessment of whether decision support can be of value in this situation. Methods: Data from patient records were used to construct 270 different test cases containing information about the course of the disease after initiation of drug treatment. The cases were presented to five neurologists from different general hospitals who previously agreed about the diagnosis and the initial treatment for these cases. They were asked to write a prescription for each test case. Results: All five neurologists agreed on a treatment decision in 21.9% of the 265 cases available for analysis. Each neurologist made a decision different from the decisions taken by all other neurologists in 14.0-19.6% of the cases. Kappa values for agreement among individual neurologists as well as for agreement between an individual and the group of his peers were low. In 82.6% of the cases, a majority of the neurologists agreed on a treatment decision. Comparing the decisions of individual neurologists with the majority decision reference (219 cases) showed a significant difference in correctness (range, 67.1-82.6%) among the neurologists. Conclusions: The fact that a majority decision could be reached in a considerable number of cases, as well as the variability in adjustment of an initiated drug treatment, leads us to the conclusion that decision support can contribute to a rational adjustment of drug treatment. abstract_id: PUBMED:27713357 Rational Polytherapy with Antiepileptic Drugs. Approximately 30-40% of patients do not achieve seizure control with a single antiepileptic drug (AED). With the advent of multiple AEDs in the past 15 years, rational polytherapy, the goal of finding combinations of AEDs that have favorable characteristics, has become of greater importance. We review the theoretical considerations based on AED mechanism of action, animal models, human studies in this field, and the challenges in finding such optimal combinations. Several case scenarios are presented, illustrating examples of rational polytherapy. abstract_id: PUBMED:27818363 Antiepileptic drug adherence and persistence in children with epilepsy attending a large tertiary care children's hospital. We aimed to evaluate antiepileptic drug treatment persistence and adherence in paediatric epilepsy patients and investigate the association between medication-taking behaviours and clinical outcome. Medical and prescription records of newly treated paediatric epilepsy patients, aged 1-18 years, who initiated antiepileptic drug monotherapy in a tertiary teaching hospital were retrospectively reviewed. The rates of overall treatment non-persistence, defined as a treatment gap >60 days, and adherence, as measured by a medication possession ratio ≥0.8, were assessed. The relationship between non-adherence and clinical outcome, defined as an emergency department visit or hospitalisation due to seizure-related reasons, was analysed. A total of 1,172 patients met the inclusion criteria. The proportion of patients who were both persistent and adherent at one year was 70.14% and decreased to 56.83% at two years.
Patients who started an antiepileptic drug at one year of age, those who took older-generation antiepileptic drugs as the initial treatment, and those diagnosed with localized seizures were less likely to be adherent to and persistent with overall antiepileptic drug treatment. Patients who were non-adherent to antiepileptic drug treatment were at an increased risk of hospitalisation or emergency department visits for seizure-related reasons (adjusted HR 2.10, 95% CI 1.25-3.55). This large population study shows that 70% of paediatric epilepsy patients were persistent with and adherent to antiepileptic drugs after one year of treatment and confirms that non-adherence to antiepileptic drug treatment is an important factor in seizure-related clinical outcome. abstract_id: PUBMED:33480193 Antiepileptic Drug Therapy for Status Epilepticus. Status epilepticus (SE) is one of the most serious neurologic emergencies. SE is a condition that encompasses a broad range of semiologic subtypes and heterogeneous etiologies. The treatment of SE primarily involves the management of the underlying etiology and the use of antiepileptic drug therapy to rapidly terminate seizure activities. The Drug Committee of the Korean Epilepsy Society performed a review of existing guidelines and literature with the aim of providing practical recommendations for antiepileptic drug therapy. This article is one of a series of review articles by the Drug Committee and it summarizes staged antiepileptic drug therapy for SE. While evidence of good quality supports the use of benzodiazepines as the first-line treatment of SE, such evidence informing the administration of second- or third-line treatments is lacking; hence, the recommendations presented herein concerning the treatment of established and refractory SE are based on case series and expert opinions. The choice of antiepileptic drugs in each stage should consider the characteristics and circumstances of each patient, as well as their estimated benefit and risk to them. In tandem with the antiepileptic drug therapy, careful searching for and treatment of the underlying etiology are required. abstract_id: PUBMED:26924638 Exploring the latest avenues for antiepileptic drug discovery and development. Introduction: In about 30% of patients with epilepsy, seizures are not sufficiently controlled with the available antiepileptic medication. There is a need for newer drugs aimed not only at suppressing seizure activity but also at efficiently inhibiting epileptogenesis. Areas Covered: In this review, the authors consider different approaches for the management of drug-resistant epilepsy and various possible ways to stop epileptogenesis. Expert Opinion: There is limited evidence for antiepileptogenic effects of antiepileptic drugs. Post-status epilepticus animal models will probably help the discovery of antiepileptogenic compounds among already approved non-antiepileptic drugs or other agents. A good example is losartan, which significantly inhibits epileptogenesis in vivo. Some agents that affect the mTOR signaling system or that inhibit proconvulsant cytokine synthesis, as well as those derived from plants (resveratrol), also seem effective in this regard. Furthermore, agonists of peroxisome proliferator-activated receptors have proven effective in some models of seizures, but data on their possible antiepileptogenic activity are quite limited.
In addition to the discovery of new antiepileptogenic and/or antiepileptic compounds, there is a possibility of improving the treatment of drug-resistant cases through the rational use of antiepileptic drug combinations. abstract_id: PUBMED:31855066 The role of polytherapy in the management of epilepsy: suggestions for rational antiepileptic drug selection. Introduction: Antiepileptic polytherapy may be indicated in patients experiencing drug-resistant epilepsy. To date, there are no evidence-based criteria on how to combine different antiepileptic drugs (AEDs) in order to obtain the best therapeutic response. Areas covered: This paper reviews the available data about the various associations of AEDs in patients undergoing polytherapy, focusing on the most effective and well-tolerated polytherapies. Moreover, some controversial aspects of this topic are addressed. Expert opinion: Nowadays, there are no guidelines on polytherapy in patients with epilepsy; thus, the management of pharmacoresistant epilepsy is still uncertain, except for the valproate/lamotrigine combination, which seems to be the only one recommended. Data regarding mechanism of action, pharmacokinetics, tolerability, and, more importantly, the analysis of the valuable clinical studies of drug combinations can help physicians to choose the best and most effective AED association for each patient. abstract_id: PUBMED:31482053 Antiepileptic Drug Therapy in Patients with Drug-Resistant Epilepsy. Antiepileptic drug (AED) therapy starts with an accurate diagnosis of epilepsy and is followed by sequential drug trials. Seizure freedom is largely achieved by the first two drug trials; thus, epilepsy that cannot be controlled after appropriately conducted trials of the first two drugs is defined as drug-resistant epilepsy (DRE). It is still unclear which mode of pharmacotherapy, monotherapy or polytherapy, shows better outcomes in cases of DRE. However, in a recent large hospital cohort study spanning the past two decades, combination therapy was associated with a progressively higher seizure-free rate than monotherapy in DRE. The benefits of polytherapy in the management of DRE might be related to the recent introduction of many new AEDs with different and novel mechanisms of action and better pharmacokinetic and tolerability profiles. These new AEDs were introduced to the market after they had proven their superiority over placebos in randomized controlled trials (RCTs) on add-on therapy in patients with DRE. Therefore, polytherapy including these new AEDs in the regimen is the approved mode of treatment for cases of DRE; this has prompted physicians to try various combinations of polytherapy to optimize the clinical outcomes. In addition, the significant discrepancies in AED responder rates between RCTs and real-world practice may support the importance of judicious use of new drugs in polytherapy by experienced epileptologists. Most experts now agree on the concept of "rational polytherapy", consisting of mechanistic combinations of AEDs exerting synergistic interactions, and on the importance of continuing trials of different rational polytherapy regimens to improve the outcome of the core population of epilepsy patients in the long term. abstract_id: PUBMED:27443649 Pharmacological Treatment of Drug-Resistant Epilepsy in Adults: a Practical Guide. More than 30% of adults with epilepsy are not fully controlled on the currently available antiepileptic drugs (AEDs).
For these and many other patients, combinations of agents, often possessing different mechanisms of action, are employed with the aim of achieving seizure freedom or the best available prognosis in terms of reduced seizure numbers and severity. This review discusses my own approach to optimising outcomes in as many of these patients as possible by adjusting the drug burden using a combination of two, three or sometimes four or more AEDs. Modes of drug action are reviewed and practical strategies for treating different patients with drug-resistant epilepsy have been explored. Only for sodium valproate with lamotrigine is there good evidence of synergism. The final part of this practical paper consists of six individual illustrative cases with appropriate comments. abstract_id: PUBMED:32361205 A review of the drug-drug interactions of the antiepileptic drug brivaracetam. Brivaracetam is an antiepileptic drug (AED) indicated for the treatment of focal seizures, with improved safety and tolerability vs first-generation AEDs. Brivaracetam binds with high affinity to synaptic vesicle protein 2A in the brain, which confers its antiseizure activity. Brivaracetam is rapidly absorbed and extensively biotransformed, and exhibits linear and dose-proportional pharmacokinetics at therapeutic doses. Brivaracetam does not interact with most metabolizing enzymes and drug transporters, and therefore does not interfere with drugs that use these metabolic routes. The favorable pharmacokinetic profile of brivaracetam and lack of clinically relevant drug-drug interactions with commonly prescribed AEDs or oral contraceptives allow administration without dose adjustment, and avoid potential untoward events from decreased efficacy of an AED or oral contraceptive due to a drug-drug interaction. Few agents have been reported to affect the pharmacokinetics of brivaracetam. The strong enzyme-inducing AEDs carbamazepine, phenytoin and phenobarbital/primidone have been shown to moderately lower brivaracetam plasma concentrations, with no adjustment of brivaracetam dose needed. Dose adjustment should be considered when brivaracetam is coadministered with the more potent CYP inducer, rifampin. Additionally, caution should be used when adding or ending treatment with the strong enzyme inducer, St. John's wort. In summary, brivaracetam (50-200 mg/day) has a favorable pharmacokinetic profile and is associated with few clinically relevant drug-drug interactions.
Of these, 191 reports were associated with antiepileptic drugs and delirium (crude reporting odds ratio [cROR], 1.66; 95% confidence interval [CI], 1.43-1.93). The use of lacosamide (adjusted reporting odds ratio [aROR], 2.44; 95% CI, 1.24-4.80), lamotrigine (aROR, 1.54; 95% CI, 1.05-2.26), levetiracetam (aROR, 1.91; 95% CI, 1.35-2.71), and valproic acid (aROR, 1.49; 95% CI, 1.16-1.91) was related to a significantly higher reporting odds ratio for delirium, even after adjustment for possible confounding factors. However, when used in combination with benzodiazepine receptor agonists, none of the antiepileptic drugs were found to be associated with delirium. Conclusion: Our study's findings suggest that antiepileptic drug usage may be associated with delirium development. Answer: The study on interobserver variation in treatment decisions for antiepileptic drug (AED) treatment adjustment highlights the need for rational drug treatment. The study found that only 21.9% of treatment decisions were agreed upon by all five neurologists, and each neurologist made a unique decision in 14.0-19.6% of the cases. The low kappa values for agreement among individual neurologists and between an individual and the group of peers indicate a high level of variability in treatment decisions. Despite this variability, a majority decision was reached in 82.6% of the cases, but there was a significant difference in the correctness of decisions among neurologists, ranging from 67.1-82.6%. These findings suggest that decision support could contribute to more rational adjustment of AED treatment (PUBMED:10368076). Rational polytherapy, which involves finding combinations of AEDs with favorable characteristics, is of increasing importance given that 30-40% of patients do not achieve seizure control with a single AED. The goal is to optimize combinations based on AED mechanisms of action, animal models, human studies, and the challenges in finding optimal combinations (PUBMED:27713357). In the context of drug-resistant epilepsy, the use of polytherapy, including new AEDs with different mechanisms of action and better pharmacokinetic and tolerability profiles, has been associated with a progressive increase in seizure-free rates compared to monotherapy. The concept of "rational polytherapy," which consists of mechanistic combinations of AEDs exerting synergistic interactions, is supported by experts as a means to improve long-term outcomes for epilepsy patients (PUBMED:31482053). Overall, the evidence points to the complexity of AED treatment decisions and the potential benefits of rational polytherapy in managing epilepsy, especially in cases where monotherapy is insufficient. The variability in treatment decisions among neurologists underscores the need for decision support systems and rational approaches to AED selection and combination to enhance treatment outcomes.
Instruction: Do plasma biomarkers of coagulation and fibrinolysis differ between patients who have experienced an acute myocardial infarction versus stable exertional angina? Abstracts: abstract_id: PUBMED:18035075 Do plasma biomarkers of coagulation and fibrinolysis differ between patients who have experienced an acute myocardial infarction versus stable exertional angina? Background: Circulating concentrations of proteins associated with coagulation and fibrinolysis may differ between individuals with coronary artery disease (CAD) who develop an acute myocardial infarction (AMI) rather than stable exertional angina. Methods: We compared plasma concentrations of fibrinogen, d-dimer, tissue-type plasminogen activator, and plasminogen activator inhibitor-1 (PAI-1) between patients whose first clinical manifestation of CAD was an AMI (n = 198) rather than stable exertional angina (n = 199). We also compared plasma concentrations of these proteins between patients with symptomatic CAD (either AMI or stable angina; n = 397) and healthy, control subjects (n = 197) to confirm the sensitivity of these assays to detect epidemiologic associations. Results: At a median of 15 weeks after presentation, patients with AMI had slightly higher d-dimer concentrations than patients with stable angina (P = .057), but were not significantly different in other markers. By contrast, fibrinogen, d-dimer, and tissue-type plasminogen activator were significantly higher (P < .001) and PAI-1 lower in patients with CAD than in healthy control subjects. After statistical adjustment for clinical covariates, cardiac risk factors, medications, and other confounders, fibrinogen, d-dimer, and PAI-1 remained significantly associated with CAD. Conclusion: Selected plasma markers of coagulation and fibrinolysis did not distinguish patients presenting with AMI from those with stable exertional angina. abstract_id: PUBMED:26964999 Acute and long-term effect of percutaneous coronary intervention on serially-measured oxidative, inflammatory, and coagulation biomarkers in patients with stable angina. To derive insights into the temporal changes in oxidative, inflammatory and coagulation biomarkers in patients with stable angina undergoing percutaneous coronary intervention (PCI). PCI is associated with a variety of biochemical and mechanical stresses to the vessel wall. Oxidized phospholipids are present on plasminogen (OxPL-PLG) and potentiate fibrinolysis in vitro. We recently showed that OxPL-PLG increase following acute myocardial infarction, suggesting that they are involved in atherothrombosis. Plasma samples were collected before, immediately after, 6 and 24 h, 3 and 7 days, and 1, 3, and 6 months after PCI in 125 patients with stable angina undergoing uncomplicated PCI. Plasminogen levels, OxPL-PLG, and an array of 16 oxidative, inflammatory and coagulation biomarkers were measured with established assays. OxPL-PLG and plasminogen declined significantly immediately post-PCI, rebounded to baseline, peaked at 3 days and slowly returned to baseline by 6 months (p < 0.0001 by ANOVA).
The temporal trends to maximal peak in biomarkers were as follows: immediately post-PCI: OxPL-apoB and lipoprotein (a); Day 1: the inflammatory biomarker IL-6; Day 3: CRP and the coagulation biomarkers OxPL-PLG, plasminogen and tissue plasminogen activity; Days 3 to 7: plasminogen activator inhibitor activity, and complement factor H binding to malondialdehyde-LDL and MDA-LDL IgG; Days 7-30: MDA-LDL IgM, CuOxLDL IgM, and ApoB-IC IgM and IgG; >30 days: uPA activity, uPA antigen, CuOxLDL IgG and peptide mimotope to MDA-LDL. Most of the biomarkers trended to baseline by 6 months. PCI results in a specific, temporal sequence of changes in plasma biomarkers. These observations provide insights into the effects of iatrogenic barotrauma and plaque disruption during PCI and suggest avenues of investigation to explain complications of PCI and development of targeted therapies to enhance procedural success. abstract_id: PUBMED:19682232 Circulating inflammatory and hemostatic biomarkers are associated with risk of myocardial infarction and coronary death, but not angina pectoris, in older men. Aims: The extent to which hemostatic and inflammatory biomarkers are related to angina pectoris as compared with myocardial infarction (MI) remains uncertain. We examined the relationship between a wide range of inflammatory and hemostatic biomarkers, including markers of activated coagulation, fibrinolysis and endothelial dysfunction and viscosity, with incident myocardial infarction (MI) or coronary heart disease (CHD) death and incident angina pectoris uncomplicated by MI or CHD death in older men. Methods: A prospective study of 3217 men aged 60-79 years with no baseline CHD (angina or MI) and who were not on warfarin, followed up for 7 years during which there were 198 MI/CHD death cases and 220 incident uncomplicated angina cases. Results: Inflammatory biomarkers [C-reactive protein (CRP), interleukin-6, fibrinogen], plasma viscosity and hemostatic biomarkers [von Willebrand factor (VWF) and fibrin D-dimer] were associated with a significantly increased risk of MI/CHD death but not with uncomplicated angina even after adjustment for age and conventional risk factors. Adjustment for CRP attenuated the relationships between VWF, fibrin D-dimer and plasma viscosity with MI/CHD death. Comparisons of differing associations with risk of MI/CHD deaths and uncomplicated angina were significant for the inflammatory markers (P < 0.05) and marginally significant for fibrin D-dimer (P = 0.05). In contrast, established risk factors including blood pressure and high-density lipoprotein (HDL)-cholesterol were associated with both MI/CHD death and uncomplicated angina. Conclusion: Circulating biomarkers of inflammation and hemostasis are associated with incident MI/CHD death but not incident angina uncomplicated by MI or CHD death in older men. abstract_id: PUBMED:8107282 Blood coagulation and fibrinolysis in ischemic heart disease. Intracoronary thrombus formation has been thought to play an important role in the genesis of acute myocardial infarction and unstable angina. To examine whether the coagulation and fibrinolytic systems are altered in such ischemic heart diseases, the plasma levels of fibrinopeptide A (FPA) and plasminogen activator inhibitor (PAI) were measured.
The plasma level of FPA was increased in patients with variant angina as compared with those with stable exertional angina, and there was a significant circadian variation in the plasma level of FPA in parallel with that of the frequency of the attacks, with the peak level occurring from midnight to early morning in patients with variant angina. The plasma FPA level increased in patients with coronary spastic angina after the ischemic attack induced by hyperventilation. Furthermore, FPA was released into the coronary circulation after the anginal attack induced by intracoronary injection of acetylcholine. These findings suggest that the coronary artery spasm may induce thrombin generation and trigger thrombus formation in the coronary artery. On the other hand, the plasma level of PAI activity was higher in patients with unstable angina and coronary spastic angina than in those with stable exertional angina. Moreover, the PAI activity in patients with unstable angina decreased to the level in patients with stable exertional angina after the stabilization of their symptoms by drugs. Our findings suggest that the increased plasma PAI activity may reduce fibrinolytic activity and attenuate removal of the thrombus and may ultimately lead to acute myocardial infarction in some patients with unstable angina and coronary spastic angina. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:10493848 D-dimers in relation to the severity of arteriosclerosis in patients with stable angina pectoris after myocardial infarction. Background: Plasma concentrations of D-dimers show the extent of intravascular fibrinolysis of cross-linked fibrin. Higher concentrations of D-dimers are found in the plasma of arteriosclerosis patients with increased fibrin metabolism. The present study was performed in order to investigate whether there is a relationship between the severity of arteriosclerosis and fibrinolytic activity indicated by plasma levels of D-dimer. Methods: The study populations consisted of 1112 men and 299 women with stable angina pectoris, on average 36 ± 5.6 days after a myocardial infarction, as well as 326 men and 138 women with no clinical signs of cardiovascular disease. In addition to cardiological and angiological examinations, the lipid status and levels of fibrinogen, plasma viscosity, F 1+2, plasminogen, plasminogen activator inhibitor-1, D-dimer, and C-reactive protein of the participants were determined. Results: The plasma concentration of D-dimers increases with age, both in the group with coronary artery disease and in the control group, with the female gender showing consistently higher concentrations in both groups. D-dimers correlate with other parameters of the lipid and coagulation systems, which explains 32.0% and 39.2% of the variance in D-dimer values in men and women, respectively. A significant increase in the level of D-dimers can be found in participants with generalized arteriosclerosis, with a left ventricular ejection fraction ≤40% as well as those with left-ventricular aneurysm. Conclusion: This study indicates that there is increased fibrinolytic activity in patients with severe arteriosclerosis. This finding gives further support to the hypothesis that D-dimer concentration is dependent on the amount of fibrin associated with arteriosclerotic thrombi.
However, because of the low specificity and wide overlap of D-dimer values between patients and controls, enhanced D-dimer values are of limited relevance above and beyond other lipid metabolism risk indicators for coronary artery disease or coronary artery disease and peripheral arterial occlusive disease. abstract_id: PUBMED:3372073 Effects of exercise on plasma fibrinolytic activity in patients with ischaemic heart disease. In order to investigate the response of plasma fibrinolytic activity to exercise in patients with ischaemic heart disease, we studied 109 patients with chronic stable angina pectoris. All patients were exercised on the treadmill according to the Bruce protocol. They were subsequently divided into 2 groups according to the results of exercise testing. The first group comprised 47 patients with negative exercise tests and the second group 62 patients with positive exercise tests. The plasma fibrinolytic activity was determined in all patients by the method of euglobulin lysis time before exercise, at peak exercise and 2 hours later. The results were compared with those of a control group which consisted of 30 normal healthy volunteers. In the control group there was a significant increase of plasma fibrinolytic activity (P less than 0.001) at peak exercise, which almost returned to the resting levels at 2 hours. In all patients with angina, resting and peak exercise fibrinolytic activity level was significantly lower than that of the control group (P less than 0.01 and less than 0.001, respectively), although at peak exercise it also increased significantly (P less than 0.001) compared to resting levels and it remained elevated at 2 hours after exercise (P less than 0.001). In patients with negative exercise test, plasma fibrinolytic activity at rest was almost identical to that of the control group, it increased with exercise (P less than 0.001) but to values below those of the control group (P less than 0.01) and although there was a tendency to return to resting levels it remained elevated (P less than 0.01) at 2 hours.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:14656922 Involvement of tissue factor pathway inhibitor in the coronary circulation of patients with acute coronary syndromes. Background: Tissue factor pathway inhibitor (TFPI) is the endogenous inhibitor of the extrinsic coagulation pathway; however, its involvement during thrombus formation in patients with acute coronary syndromes (ACS) is still unknown. Methods And Results: Transcardiac (aorta/coronary sinus) free and total TFPI (free + lipoprotein-bound form) levels, as well as TFPI/factor Xa (FXa) complex levels, were measured in plasma samples obtained from patients with acute myocardial infarction undergoing primary PTCA and patients with unstable angina undergoing urgent PTCA. Patients with stable angina undergoing elective PTCA served as controls. In addition, prothrombin fragment 1+2 and fibrinopeptide A plasma levels were measured. Samples were collected at baseline, after PTCA, and after stent deployment. In patients with ACS, both total and free TFPI plasma levels in the coronary sinus were significantly lower than the corresponding levels measured in the aorta at any time point of the study; conversely, a significant increase in TFPI/FXa complex plasma levels was observed in the coronary sinus as compared with the aorta. In contrast, in patients with stable angina, no differences were observed in TFPI and TFPI/FXa levels at baseline in the coronary sinus as compared with the aorta. 
Conclusions: TFPI is involved in the process of thrombus formation in vivo in patients with ACS, which suggests a potential role for TFPI in modulating coronary thrombosis. abstract_id: PUBMED:10342131 Pathophysiology of acute coronary syndromes Coronary atherosclerosis and plaque disruption with superimposed thrombosis are the main causes of the acute coronary syndromes of unstable angina, myocardial infarction, and sudden death. Coronary artery spasm has also been implicated in the pathogenesis of acute coronary syndromes. Other researchers and we have reported that the plasma levels of fibrinopeptide A, a sensitive marker of thrombin generation, and plasminogen activator inhibitor activity, an indicator of the impairment of fibrinolysis, increase in patients with unstable angina and acute myocardial infarction. We also showed that coronary artery spasm induced fibrinopeptide A generation and may lead to thrombus formation in the coronary artery involved, and plasminogen activator inhibitor activity increased after coronary spasm. Tissue factor is the primary initiator of the extrinsic coagulation cascade. We have recently demonstrated that the plasma TF antigen levels increase in patients with acute myocardial infarction and unstable angina. Furthermore, we examined directional coronary atherectomy specimens from 24 patients with unstable angina and 23 with stable exertional angina. We have shown that tissue factor expression on macrophages was more frequent in coronary atherosclerotic plaques in patients with unstable angina. Tissue factor expressed on macrophages may play an important role in the thrombogenicity in coronary atherosclerotic plaques of these patients. In conclusion, increased coagulation cascade and impaired fibrinolysis occurs and leads to coronary thrombosis in patients with acute coronary syndromes. These phenomena also occur in patients with coronary spasm. abstract_id: PUBMED:6521187 Prognostic significance of the state of the blood coagulation system for the development of myocardial infarction and sudden death in patients with ischemic heart disease A 7-year follow-up covered 120 patients with coronary heart disease including 60 with angina pectoris and 60 myocardial infarction survivors. The patients were subjected to repeated examinations of the clotting factors and the fibrinolytic activity of the plasma. Account was taken of the cases of the development of primary or secondary myocardial infarction and sudden death. The patients with myocardial infarction (primary and secondary) had elevated levels of fibrinogen and reduced levels of plasma fibrinolytic activity whereas the patients who died suddenly had increased concentrations of fibrinogen. High fibrinogen level in the blood of coronary heart disease patients is considered as a prognostically unfavourable sign. abstract_id: PUBMED:2112469 A depression of active tissue plasminogen activator in plasma characterizes patients with unstable angina pectoris who develop myocardial infarction. The balance between the coagulation system generating fibrin and its subsequent removal by the fibrinolytic system determines the fate of fibrin deposited in the vascular system. In a prospective study, selected haemostatic variables assessing this balance were determined in plasma samples from 20 consecutive patients admitted with unstable angina pectoris. Over a follow-up period of 6 years, eight patients developed myocardial infarction, whereas 12 patients did not. 
There was no significant difference between the two groups in the median plasma concentrations of thrombin-antithrombin III complexes reflecting the coagulant activity. The infarction group was characterized by a significantly lower median activity of tissue plasminogen activator in plasma euglobulins (P less than 0.05), a higher median concentration of tissue plasminogen activator antigen in plasma (P less than 0.05) and a tendency to higher plasma levels of antigenic and functional plasminogen activator inhibition. In all patients, the activities of tissue plasminogen activator inhibitor and of tissue plasminogen activator were significantly associated (rs = -0.4811, P less than 0.05). We conclude that a depressed fibrinolytic capacity attributable to a low tissue plasminogen activator activity is of pathogenetic importance for the development of myocardial infarction in patients with unstable angina pectoris. Answer: Plasma biomarkers of coagulation and fibrinolysis do not appear to significantly distinguish patients who have experienced an acute myocardial infarction (AMI) from those with stable exertional angina. A study comparing plasma concentrations of fibrinogen, d-dimer, tissue-type plasminogen activator, and plasminogen activator inhibitor-1 (PAI-1) between patients presenting with AMI and those with stable exertional angina found that, at a median of 15 weeks after presentation, AMI patients had slightly higher d-dimer concentrations than those with stable angina, but there were no significant differences in other markers (PUBMED:18035075). However, it is important to note that these biomarkers do show significant differences when comparing patients with coronary artery disease (CAD), including both AMI and stable angina, to healthy control subjects. In this broader comparison, fibrinogen, d-dimer, and tissue-type plasminogen activator were significantly higher, and PAI-1 was lower in patients with CAD than in healthy controls (PUBMED:18035075). Other studies have shown that certain biomarkers are associated with the risk of myocardial infarction and coronary death, but not with angina pectoris. Inflammatory biomarkers such as C-reactive protein (CRP), interleukin-6, fibrinogen, plasma viscosity, and hemostatic biomarkers like von Willebrand factor (VWF) and fibrin D-dimer were associated with a significant increased risk of MI/CHD death but not with uncomplicated angina (PUBMED:19682232). In summary, while certain plasma biomarkers are associated with CAD and the risk of myocardial infarction and coronary death, they do not reliably differentiate between patients who have had an AMI and those with stable exertional angina.
Instruction: Impact of oxidative stress on early postoperative knee function and muscle injury biochemical markers: Is it possible to create an ischemic preconditioning effect in sequential ischemic surgical procedures? Abstracts: abstract_id: PUBMED:28217217 The Effect of Air Tourniquet on Interleukin-6 Levels in Total Knee Arthroplasty. Background: Air tourniquet-induced skeletal muscle injury increases the concentrations of some cytokines such as interleukin-6 (IL-6) in plasma. However, the effect of an air tourniquet on the IL-6 concentrations after total knee arthroplasty (TKA) is unclear. We therefore investigated the impact of tourniquet-induced ischemia and reperfusion injury in TKA using the IL-6 level as an index. Methods: Ten patients with primary knee osteoarthrosis who underwent unilateral TKA without an air tourniquet were recruited (Non-tourniquet group). We also selected 10 age- and sex-matched control patients who underwent unilateral TKA with an air tourniquet (Tourniquet group). Venous blood samples were obtained at 3 points; before surgery, 24 h after surgery, and 7 days after surgery. The following factors were compared between the two groups; IL-6, C-reactive protein (CRP), creatine phosphokinase (CPK), the mean white blood cell (WBC) counts, and the maximum daily body temperatures. Results: The IL-6 level at 24 h after surgery was significantly higher than that at any other point (p<0.01). No significant differences were observed in the WBC count, the body temperature, or the CRP, CPK, or IL-6 levels of the two groups at any of the time points. Conclusion: The effect of ischemia and reperfusion due to the use of an air tourniquet on increasing the IL-6 level was much smaller than that induced by surgical stress in TKA. abstract_id: PUBMED:38059212 The optimized tourniquet versus no tourniquet in total knee arthroplasty. Analysis of muscle injury, functional recovery, and knee strength. Background: Tourniquet is widely used in total knee replacement surgery because it reduces intraoperative hemorrhage and provides a comfortable surgical area for the surgeon. It's possible that its use could lead to impaired postoperative functional and motor recovery, as well as local and systemic complications. Our goal was to compare the outcomes of total knee replacement without ischemia using an optimized protocol, consisting of tourniquet inflation before skin incision and deflation after cementing, with a pressure of one hundred millimeters above systolic blood pressure and without postoperative articular suction drains. We believed that tourniquet effectively would result in no additional muscle damage and no functional or knee strength impairment compared to no tourniquet. Methods: In a prospective and randomized study, 60 patients with osteoarthritis were evaluated for total knee replacement, divided in two groups: 'without tourniquet' and 'optimized tourniquet'. Outcomes were mean creatine phosphokinase levels, Knee Society Score and knee isokinetic strength. Data were considered significant when p < 0.05. Results: Creatine phosphokinase levels and functional score were similar between groups. There were no differences between groups regarding knee extension strength on the operated limbs, although the knee flexors' peak torque in the operated limb in the optimized tourniquet group was significantly higher at 6 months relative to preoperative and 3 months assessments.
Conclusions: The optimized tourniquet protocol use in total knee replacement combines the benefits of tourniquet use without compromising functional recovery and without additional muscle damage and strength deficits compared to surgery without its use. abstract_id: PUBMED:34737143 Febuxostat treatment attenuates oxidative stress and inflammation due to ischemia-reperfusion injury through the necrotic pathway in skin flap of animal model. Background: Ischemia-reperfusion (I/R) injury is a major contributor to skin flap necrosis, which is a serious complication of reconstructive surgery. The purpose of this study was to evaluate the protective effect of treatment with febuxostat, a selective xanthine oxidase inhibitor, on I/R injury in the skin flap of an animal (rat) model. Methods: Superficial epigastric flaps were raised in Sprague-Dawley rats and subjected to ischemia for 3 h. Febuxostat at a dose of 10 mg/kg/day was administered to rats in drinking water from 1 week before the surgery (Feb group). Control animals received no drugs (Con group). The mean ratio of flap survival and contraction was evaluated and compared between animals with and without administration of febuxostat on day 5 after the surgery. In addition, infiltration by polymorphonuclear leukocytes and muscles of the panniculus carnosus in the flap were histologically evaluated using hematoxylin-eosin staining. Furthermore, xanthine oxidase activity, ATP levels, superoxide dismutase activity, and expression of 8-hydroxy-2'-deoxyguanosine (8-OHdG), tumor necrosis factor-α, and interleukin-1β were quantitatively assessed in the skin flap 24 h after the surgery. Results: In the Feb group, the survival and contraction rates at the 5 d timepoint post-surgery were significantly higher and lower than those in the Con group, respectively. Histological analysis showed significant reduction in polymorphonuclear leukocyte infiltration and muscle injury scores due to I/R injury in the Feb group. The expression of 8-OHdG was also significantly inhibited in animals administered febuxostat. Biochemical analysis showed a significant reduction in xanthine oxidase activity and significant increases in ATP levels and superoxide dismutase activity in the Feb group. Furthermore, the expression of interleukin-1β was significantly lower in the Feb group than in the Con group. Conclusion: Febuxostat, which is clinically used for the treatment of hyperuricemia, was effective against necrosis of the skin flap via inhibition of oxidative stress and inflammation caused by I/R injury. abstract_id: PUBMED:12745799 Serum myoglobin/carbonic anhydrase III ratio in the diagnosis of perioperative myocardial infarction during coronary bypass surgery. Objective: The purpose of the present study was to evaluate the usefulness of the myoglobin/carboanhydrase III (Myo/CAIII) ratio in the diagnosis of perioperative myocardial infarction during coronary artery bypass surgery. Design: Thirty patients undergoing elective coronary artery bypass grafting (CABG) were included in the series. The patients were randomized in two groups: one received conventional normothermic retrograde blood cardioplegia, while the other was subjected to a 5-min period of ischemic preconditioning before cardioplegia. Biochemical markers for myocardial and skeletal muscle injury were measured in serial blood samples taken postoperatively from 4 h after aortic declamp. 
Results: Three patients were diagnosed to have suffered from perioperative myocardial infarction on the basis of significant elevations of troponin T and creatine kinase MB-isoenzyme (CK-MB) concentrations. In these particular patients the Myo/CAIII ratio increased rapidly after aortic declamping. In uncomplicated patients, the median value of the Myo/CAIII ratio remained within normal limits. There was a positive correlation between the net output of lactate during the aortic cross-clamping period and postoperative Myo/CAIII ratio. The Myo/CAIII ratio proved to be a more specific indicator for myocardial damage than myoglobin alone. The Myo/CAIII ratio was higher in the preconditioning group than in the control group. Conclusion: Myo/CAIII ratio is a sensitive and specific marker for perioperative myocardial infarction increasing rapidly after aortic declamping. This ratio could also be used when assessing the extent of ischemic myocardial injury and comparing different surgical and cardioprotective techniques. abstract_id: PUBMED:28706944 Schisandrin B Prevents Hind Limb from Ischemia-Reperfusion-Induced Oxidative Stress and Inflammation via MAPK/NF-κB Pathways in Rats. Schisandrin B (ScB), isolated from Schisandra chinensis (S. chinensis), is a traditional Chinese medicine with proven cardioprotective and neuroprotective effects. However, it is unclear whether ScB also has beneficial effects on rat hind limb ischemia/reperfusion (I/R) injury model. In this study, ScB (20 mg/kg, 40 mg/kg, and 80 mg/kg) was administered via oral gavage once daily for 5 days before the surgery. After 6 h ischemia and 24 h reperfusion of left hind limb, ScB reduced I/R induced histological changes and edema. ScB also suppressed the oxidative stress through decreasing MDA level and increasing SOD activity. Moreover, above changes were associated with downregulated TNF-α mRNA expression and reduced level of IL-1β in plasma. Meanwhile, ScB treatment downregulated activation of p38MAPK, ERK1/2, and NF-κB in ischemic skeletal muscle. These results demonstrate that ScB treatment could prevent hind limb I/R skeletal muscle injury possibly by attenuating oxidative stress and inflammation via p38MAPK, ERK1/2, and NF-κB pathways. abstract_id: PUBMED:22796117 Remote and local ischemic postconditioning further impaired skeletal muscle mitochondrial function after ischemia-reperfusion. Objective: Muscular injuries contribute to perioperative and long-term morbidity after vascular surgery in humans. We determined whether local and remote ischemic postconditioning might similarly decrease muscle mitochondrial dysfunction through reduced oxidative stress. Methods: Eighteen male Black-6 mice were divided in three groups: (1) sham mice had no ischemia (sham), (2) ischemia-reperfusion (IR) mice underwent 2-hour tourniquet-induced ischemia on both hind limbs, followed by 2-hour reperfusion, and (3) postconditioning (PoC) mice underwent four bouts of 30-second reperfusion and 30-second ischemia at the onset of reperfusion on the right limb; thus, the right limb underwent local PoC and left limb underwent remote PoC (rPoC). Maximal oxidative capacity (V(max)) of the gastrocnemius muscle mitochondrial respiratory chain was measured. Oxidative stress was evaluated by dihydroethidium staining. 
Expressions of genes involved in antioxidant defense (superoxide dismutase [SOD1], SOD2, glutathione peroxidase [GPx]), apoptosis (Bax, BclII), and inflammation (interleukin-6) were determined by quantitative real-time polymerase chain reaction. Muscle inflammation was determined using immunohistochemistry. Results: IR reduced V(max) (8.5 ± 2.2 vs 10.2 ± 1.8 μmol O(2)/min/g dry weight; P = .034), and increased dihydroethidium staining (134.8%; P = .039). IR decreased GPx expression (-47.9%; P = .048) and increased the proapoptotic marker Bax (255.5%; P = .020). Local PoC and rPoC further increased these deleterious effects. PoC decreased V(max) to 4.4 ± 1.4 μmol O(2)/min/g dry weight (sham vs PoC, -56.9% [P < .001]; IR vs PoC, -48.2% [P < .001]). rPoC similarly reduced V(max) to 5.1 ± 1.9 μmol O(2)/min/g dry weight (sham vs PoC, -50.0% [P < .001]; IR vs PoC, -40.0% [P = .001]). Dihydroethidium staining was further increased by PoC (207.2%; P = .002) and rPoC (305.4%; P < .001) compared with sham and was associated with macrophage infiltration. Local PoC increased SOD1, SOD2, and the antiapoptotic Bcl-2, and rPoC increased Bax (391.6%; P < .001) and the Bax/BclII ratio (621.7%; P < .001). Conclusions: Local and remote ischemic postconditioning further increased injury by enhancing mitochondrial dysfunction, oxidative stress production, and inflammation. Caution should be applied when considering ischemic postconditioning in vascular surgery. abstract_id: PUBMED:22691884 Combination of hypoxic preconditioning and postconditioning does not induce additive protection of ex vivo human skeletal muscle from hypoxia/reoxygenation injury. We previously demonstrated that hypoxic preconditioning (HPreC) or postconditioning (HPostC) protected ex vivo human skeletal muscle from hypoxia/reoxygenation injury. Here, we investigated if combined HPreC and HPostC could convey additive protection. Human rectus abdominis muscle strips were cultured in normoxic Krebs buffer for 5 hours (control) or in 3 hours hypoxic/2 hours normoxic buffer (treatment groups). HPreC and HPostC were induced by 1 cycle of 5 minutes hypoxia/5 minutes reoxygenation immediately before or after 3 hours hypoxia, respectively. Muscle injury, viability, and adenosine triphosphate (ATP) synthesis were assessed by measuring lactate dehydrogenase release, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-2H-tetrazolium bromide reduction, and ATP content, respectively. Hypoxia/reoxygenation caused lactate dehydrogenase to increase and 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-2H-tetrazolium bromide reduction and ATP content to decrease (P < 0.05; n = 7). HPreC, HPostC, and combination of both were equally effective in protection of muscle from hypoxia/reoxygenation injury. Atractyloside (5 × 10 M), a mitochondrial permeability transition pore opener, abolished the protective effect of HPreC or HPostC. We conclude that HPreC and HPostC protect ex vivo human skeletal muscle against hypoxia/reoxygenation injury by closing the mitochondrial permeability transition pore. For that reason, they are equally effective and do not demonstrate an additive effect. Moreover, the potent effect of HPostC indicates ischemic postconditioning as an effective clinical intervention against reperfusion injury in autogenous skeletal muscle transplantation and replantation surgery. abstract_id: PUBMED:15540688 Chronic lower extremity ischemia: a human model of ischemic tolerance.
Background: Ischemic preconditioning (IPC) has been found in animals to have a protective effect against future ischemic injury to muscle tissue. Such injury is unavoidable during some surgical procedures. To determine whether chronic ischemia in the lower extremities would imitate IPC and reduce ischemic injury during vascular surgery, we designed a controlled clinical study. Patients And Methods: Two groups of patients at a university-affiliated medical centre with chronic lower-extremity ischemia served as models of IPC: 6 patients awaiting femoral distal bypass (FDB) and 4 scheduled for aortobifemoral (ABF) bypass grafting for aortoiliac occlusive disease. Seven patients undergoing elective open repair of an infrarenal abdominal aortic aneurysm (AAA) were chosen as non-IPC controls. Three hematologic indicators of skeletal-muscle injury, lactate dehydrogenase (LDH), creatine kinase (CK) and myoglobin, were measured before placement of the proximal clamp, during surgical ischemia, immediately upon reperfusion, 15 minutes after and 1 hour after reperfusion, and during the first, second and third postoperative days. Results: Baseline markers of skeletal-muscle injury were similar in all groups. In postreperfusion samples, concentrations of muscle-injury markers were significantly lower in the 2 PC groups than in the control group. For example, at day 2, LDH levels were increased by about 30% over baseline measures in the elective AAA (control) group, whereas levels in the FDB and ABF groups remained statistically unchanged from baseline. Myoglobin in controls had increased by 977%, but only by 160% in the FDB and 528% in the ABF groups. CK levels, in a similar trend, were 1432% higher in the control group and only 111% (FDB) and 1029% (ABF) in the study groups. Taken together, these data represent a significant level of protection. Conclusions: Patients with chronic lower-extremity ischemia suffered less severe ischemic injury after a period of acute ischemia than those with acute ischemia alone. Ischemic preconditioning is one proposed mechanism to help explain this protective effect. abstract_id: PUBMED:20719704 Myoelectric and mechanical changes elicited by ischemic preconditioning in the feline hindlimb. Tourniquet use is fraught with potential complications. For example, ischemia produced by the tourniquet may lead to nerve and muscle injuries. One technique shown in cardiovascular and free-flap surgery to improve the viability of muscle subjected to ischemia is preconditioning. This technique involves an initial brief period of ischemia, followed by reperfusion before a prolonged ischemic episode. The purpose of this study was to explore ischemic preconditioning as a method to reduce tourniquet-related morbidity. In six cats, one leg was preconditioned by 10 min of tourniquet-induced ischemia followed by 10 min of reperfusion. The contralateral limb was not preconditioned. Both limbs underwent 1 h of tourniquet inflation followed by a 2-h recovery period. Isometric force and electromyographic (EMG) amplitude were recorded throughout the procedure at 20-min intervals in both medial gastrocnemius muscles. Analysis of variance (ANOVA) with repeated measures shows that, after 60 min of tourniquet application, maximal isometric force was significantly larger in the preconditioned group. Furthermore, the EMG amplitude during recovery was found to be significantly larger in the preconditioned limbs. These results suggest that preconditioning improves skeletal muscle viability in vivo. 
Further research is needed, however, to assess the long-term effects of this technique, and to delineate appropriate preconditioning protocols that would improve surgical outcome without significantly increasing the complexity of the procedures. abstract_id: PUBMED:22271000 Effect of thymoquinone, a constituent of Nigella sativa L., on ischemia-reperfusion in rat skeletal muscle. Thymoquinone have been shown to decrease ischemia-reperfusion injury (IRI) in some tissues such as gastric mucosa and brain. In this study, the effect of thymoquinone was evaluated on an animal model of IRI in the rat hind limb. Hind limb ischemia was induced by clamping the common femoral artery and vein. After 2 h ischemia, the clamp on the femoral vessels was taken off and the animal underwent 1 h reperfusion. Muscle injuries were evaluated by recording the electromyographic (EMG) potentials and performing some biochemical analysis including thiobarbituric acid reactive substances (TBARS), total sulfhydryl (SH) groups, and antioxidant capacity of muscle using ferric reducing ability of plasma (FRAP) assay. Ischemia was induced using free-flap surgery in skeletal muscle. Thymoquinone (20, 40 and 80 mg/kg) and normal saline (10 ml/kg) were administered intraperitoneally 1 h prior to reperfusion. The average peak-to-peak amplitude during ischemic reperfusion was significantly increased in thymoquinone groups in comparison with the control group. Following thymoquinone administration, the total SH contents and antioxidant capacity were elevated in muscle flap. The malondialdehyde (MDA) level was declined significantly in test groups. It is concluded that thymoquinone have some protective effects against the muscle tissue injury caused by lower limb ischemia-reperfusion. Answer: The impact of oxidative stress on early postoperative knee function and muscle injury biochemical markers can be significant, as oxidative stress is known to contribute to inflammation and tissue damage following ischemia-reperfusion (I/R) injury, which is a common occurrence in surgeries such as total knee arthroplasty (TKA) (PUBMED:34737143; PUBMED:28706944). The use of a tourniquet during TKA can lead to I/R injury, and while some studies have shown that an optimized tourniquet protocol may not result in additional muscle damage or impair functional recovery and knee strength compared to surgery without a tourniquet (PUBMED:38059212), others have found that ischemic postconditioning can exacerbate muscle mitochondrial dysfunction, oxidative stress production, and inflammation (PUBMED:22796117). Ischemic preconditioning (IPC) is a protective strategy where brief episodes of ischemia are applied before a prolonged ischemic event, which has been shown to reduce subsequent ischemic injury in various tissues (PUBMED:15540688; PUBMED:20719704). In the context of sequential ischemic surgical procedures, it is theoretically possible to create an IPC effect to mitigate the negative impact of oxidative stress on postoperative knee function and muscle injury markers. For instance, patients with chronic lower-extremity ischemia have been found to suffer less severe ischemic injury after a period of acute ischemia than those with acute ischemia alone, suggesting that IPC may be a contributing factor to this protective effect (PUBMED:15540688). However, the effectiveness of IPC in the context of TKA or other surgeries involving the knee joint is not well established. 
While some studies have shown that preconditioning can improve skeletal muscle viability in vivo (PUBMED:20719704), others have found that the combination of hypoxic preconditioning and postconditioning does not induce additive protection against hypoxia/reoxygenation injury in ex vivo human skeletal muscle (PUBMED:22691884). Additionally, the use of pharmacological agents such as febuxostat and Schisandrin B has been shown to attenuate oxidative stress and inflammation due to I/R injury in animal models (PUBMED:34737143; PUBMED:28706944), suggesting that pharmacological preconditioning could be another avenue to explore for protecting knee function and reducing muscle injury markers postoperatively.
Instruction: Reproductive ageing and ovarian function: is the early follicular phase FSH rise necessary to maintain adequate secretory function in older ovulatory women? Abstracts: abstract_id: PUBMED:15550499 Reproductive ageing and ovarian function: is the early follicular phase FSH rise necessary to maintain adequate secretory function in older ovulatory women? Background: Serum FSH elevations and decreases in inhibin B have been consistently demonstrated in the early follicular phase of cycles in women of advanced reproductive age. However, secretory products of the dominant follicle (estradiol and inhibin A) in the serum of older ovulatory women are maintained at levels similar to those of their younger counterparts. The goal of this investigation was to determine if ovarian secretory capacity is dependent on relative FSH levels and if basal measures of ovarian reserve reflect ovarian secretory capacity. Methods: We administered equivalent low, but effective doses of recombinant FSH for 5 days to a group of older subjects (40-45 years, n=9) and younger controls (20-25 years, n=10) after pituitary suppression with a GnRH agonist. Outcome measures included follicular development as determined by serial transvaginal ultrasound examinations and serum levels of estradiol, inhibin A and inhibin B. Results: Serum levels of estradiol and inhibin A were not statistically different between the two groups, while the number of large follicles formed was greater in the younger subjects. Basal parameters of ovarian reserve were not significantly correlated with ovarian secretory capacity, but did correlate with the number of follicles recruited in response to low-dose FSH. Conclusions: By providing equivalent serum levels of FSH in older and younger reproductive aged women, this study demonstrates that the secretory capacity of recruited follicles is maintained in older reproductive aged women. abstract_id: PUBMED:8772573 Reproductive aging: accelerated ovarian follicular development associated with a monotropic follicle-stimulating hormone rise in normal older women. Women experience a decline in fertility that precedes the menopause by several years. Previous studies have demonstrated a monotropic rise in FSH associated with reproductive aging: however, the mechanism of this rise and its role in the aging process are poorly understood. The purpose of this study was to characterize ovarian follicular development and ovarian hormone secretion in older reproductive age women. Sixteen women, aged 40-45 yr, with regular ovulatory cycles were studied. The control group consisted of 12 ovulatory women, aged 20-25 yr. Serum obtained by daily blood sampling was analyzed for FSH, LH, estradiol (E), progesterone, and inhibin (Monash polyclonal assay). Follicle growth and ovulation were documented by transvaginal ultrasound. Older women had significantly higher levels of FSH throughout the menstrual cycle. E, progesterone, LH, and inhibin levels did not differ between the two age groups when compared relative to the day of the LH surge. Ultrasound revealed normal growth, size, and collapse of a dominant follicle in all subjects. Older women had significantly shorter follicular phase length associated with an early acute rise in follicular phase E, reflecting accelerated development of a dominant follicle. We conclude that older reproductive age women have accelerated development of a dominant follicle in the presence of the monotropic FSH rise. 
This is manifested as a shortened follicular phase and elevated follicular phase E. The fact that ovarian steroid and inhibin secretion were similar to those in the younger women suggests that elevated FSH in women of advanced reproductive age may represent a primary neuroendocrine change associated with reproductive aging. abstract_id: PUBMED:12466381 Is the short follicular phase in older women secondary to advanced or accelerated dominant follicle development? This study sought to determine whether the shortened follicular phase in ovulatory older women is secondary to advanced (i.e. earlier) or accelerated (i.e. more rapid) folliculogenesis. Normal ovulatory women, aged 40-45 yr (n = 15) and 20-25 yr (n = 13), underwent daily venipuncture and transvaginal ultrasonography throughout the follicular phase of a spontaneous menstrual cycle (control cycle) and after pituitary down-regulation with a GnRH agonist (study cycle). As expected, the older subjects in the control cycles demonstrated an elevated d 3 FSH and a shortened follicular phase compared with the younger subjects. After release from hypothalamic-pituitary-ovarian axis suppression, the early follicular phase FSH peak occurred earlier (6.8 vs. 9.8 d; P < 0.01) and was of a greater magnitude (12.1 vs. 6.5 mIU/ml; P < 0.01) in the older subjects. The time from release of suppression until the subsequent LH surge was also shorter (17.5 vs. 20.8 d; P < 0.01) in the older group. However, the time from FSH peak to LH surge was similar in the older and younger groups (10.7 vs. 11.0 d; P = 0.74). Compared with younger women, older subjects had normal follicular phase levels of estradiol and inhibin A and lower levels of inhibin B in both control and study cycles. We conclude that the shortened follicular phase observed in older ovulatory women is due to earlier dominant follicle selection, independent of hormonal influences from the preceding luteal phase. abstract_id: PUBMED:12137865 Women with regular menstrual cycles and a poor response to ovarian hyperstimulation for in vitro fertilization exhibit follicular phase characteristics suggestive of ovarian aging. Objective: To investigate whether follicular phase characteristics associated with ovarian aging can be observed in women of normal reproductive age, who had previously shown a poor response to ovarian hyperstimulation for IVF. Design: Observational, prospective study. Setting: Tertiary fertility center. Patient(s): Eleven regularly cycling, ovulatory women, aged 29-40 years who previously presented with fewer than four dominant follicles after ovarian hyperstimulation for IVF. Intervention(s): Frequent serum hormone assessments and transvaginal ultrasound during the follicular phase of a spontaneous, unstimulated cycle. Main Outcome Measure(s): Duration of the follicular phase; serum LH, FSH, E(2), P, inhibin A, and inhibin B levels; and number of antral follicles observed by ultrasound. Results were compared with the cycle characteristics of a reference population of 38 healthy normo-ovulatory women aged 20-36 years (as published elsewhere). Result(s): Poor responders had significantly fewer antral follicles than controls. Median FSH concentrations were significantly higher compared with controls, but the majority had FSH levels within the normal range. Follicular phase P levels were significantly higher in poor responders. Duration of the follicular phase, E(2), and inhibin A and inhibin B serum levels did not differ between poor responders and controls.
Conclusion(s): Normo-ovulatory regularly cycling women with a previous poor response to ovarian hyperstimulation for IVF show follicular phase characteristics suggestive of ovarian aging. abstract_id: PUBMED:25796443 Mathematical modelling of decline in follicle pool during female reproductive ageing. The factors which govern the subtle links between follicle loss and mammalian female reproductive ageing remain unclear despite extensive studies undertaken to understand the critical physiological and biochemical mechanisms that underlie the accelerated decline in follicle numbers in women older than 37 years. It is not certain whether there is a sole control by the ovary or whether other factors which affect ageing also intersect with the ovarian effect. There is convincing experimental evidence for an interplay of several processes that seem to influence the follicle loss-female reproductive ageing links, with specific hormones (follicle-stimulating hormone, anti-Müllerian hormone, dehydroepiandrosterone) noted to play important roles in follicular dynamics and ovarian ageing. In this work, we examine the subtle links between the rate of follicular decline with ageing and the role of hormones via a series of non-autonomous equations. Simulation results based on the time evolution of the number of ovarian follicles and biochemical changes in the ovarian environment influenced by hormone levels are compared with empirical data based on follicle loss-reproductive ageing correlation studies. abstract_id: PUBMED:9688368 Lack of correlation between maximum early follicular phase serum follicle stimulating hormone concentrations and menstrual cycle characteristics in women under the age of 35 years. The gradual increase in follicle stimulating hormone (FSH) concentrations in women approaching menopause results from the depletion of the ovarian follicular pool, a process referred to as 'ovarian ageing'. This study investigates whether variable endogenous FSH concentrations, as have been observed in normo-ovulatory young women, are related to menstrual cycle characteristics, including predictors of ovarian ageing. Serum concentrations of immunoreactive FSH, oestradiol, and inhibin-A and inhibin-B were measured, and follicular growth was assessed by transvaginal ultrasound throughout the follicular phase in 39 healthy volunteers (20-35 years) with regular menstrual cycles. Median serum FSH concentration on cycle day 3 was 5.1 IU/l (range 3.6-11.2), and median maximum follicular phase FSH was 6.2 IU/l (range 4.3-11.2), observed on cycle day 6 (range 2-15). Maximum FSH concentrations were not correlated with age or cycle length, nor with maximum inhibin-B. The number of small (<10 mm) antral follicles on cycle day 3 was 11 (range 4-21) and was not correlated with age, nor with maximum FSH. Inhibin-A remained low until a significant rise on cycle day 9 (range 3-12), which was significantly correlated with the late follicular rise in oestradiol (r = 0.56, P = 0.01). These observations indicate a lack of correlation between maximum follicular phase serum FSH concentrations and parameters of ovarian ageing in women under the age of 35 years. In addition, FSH concentrations assessed on cycle day 3 represent an underestimation of maximum early follicular phase FSH. Distinct individual differences in intra-ovarian modification of FSH action, resulting in differences in the FSH threshold for stimulation of ovarian function, may be operative.
abstract_id: PUBMED:26099445 Genetic variants associated with female reproductive ageing--potential markers for assessing ovarian function and ovarian stimulation outcome. This study searched for genetic markers of ovarian function, ovarian stimulation and IVF treatment outcome among genetic variants related to female reproductive ageing. It included 471 treatment cycles from 306 women undergoing IVF treatment. Genotypes for 36 single nucleotide polymorphisms (SNPs) were evaluated for their association with early follicular phase parameters together with ovarian stimulation and IVF outcome parameters. Results show that genetic variation related to menopause timing also affects ovarian function, as several selected genetic markers were associated with studied traits. For example, rs2153157 (SYCP2L) was associated with amount of recombinant FSH (rFSH) necessary for obtaining one oocyte (P = 0.049) and the chances of biochemical and clinical pregnancy (P = 0.024 and P = 0.011, respectively), while rs4886238 (TDRD3) showed association with both the number of punctured ovarian follicles and oocytes obtained (P = 0.008 and P = 0.037, respectively). Furthermore, FSHB polymorphisms influence early follicular phase FSH concentrations and IVF treatment outcome, whereas SNPs in FSHR affect early antral follicle count and follicle numbers obtained during ovarian stimulation. This study suggests that genetic markers of female reproductive ageing are potential new biomarker candidates that could be considered in clinical ovarian reserve and function assessment in assisted conception. abstract_id: PUBMED:29564925 Luteal-phase ovarian stimulation increases the number of mature oocytes in older women with severe diminished ovarian reserve. In older women with severe diminished ovarian reserve (DOR), in vitro fertilization (IVF) treatment is much less successful due to the low number of mature oocytes collected. The objective of this study was to assess whether follicular-phase stimulation (FPS) and luteal-phase stimulation (LPS) in the same menstrual cycle (double ovarian stimulation) in older women with severe DOR will produce a higher number of oocytes compared to FPS alone. Women with DOR (n = 69; mean age = 42.4) who underwent double ovarian stimulation for IVF were included. Women underwent ovarian stimulation in FPS using clomiphene citrate, letrozole, and gonadotropins followed by oocyte retrieval. The next day following oocyte retrieval, women underwent a second ovarian stimulation (LPS) using the same medications followed by a second oocyte retrieval. T-test was performed in order to compare the clinical characteristics and outcome in the same participant between FPS and LPS. Although antral follicle count at the start of FPS tended to be higher than at the start of the LPS cycle, there was no statistically significant difference between the duration of ovarian stimulation, peak estradiol levels, number of small (<14 mm) or large (≥14 mm) follicles, the total number of oocytes retrieved, or the total number of mature oocytes. Each woman had double the number of mature oocytes collected following a double ovarian stimulation compared to FPS alone. The addition of LPS to the conventional FPS increases the number of mature oocytes retrieved in the same IVF cycle, thus potentially increasing the chances of pregnancy in older women with severe DOR.
Abbreviations: AFC: antral follicle count; BMI: body mass index; DOR: diminished ovarian reserve; E2: estradiol; FPS: follicular-phase stimulation; FSH: follicle stimulating hormone; GnRH: gonadotropin-releasing hormone; HCG: human chorionic gonadotropin; IRB: institutional review board; IVF: in vitro fertilization; LH: luteinizing hormone; LPS: luteal-phase stimulation; MII: metaphase II. abstract_id: PUBMED:8796804 The gonadotropin secretion pattern in normal women of advanced reproductive age in relation to the monotropic FSH rise. Objective: Women of advanced reproductive age are known to demonstrate subtle FSH elevations (monotropic FSH rise) while still retaining ovulatory function. The purpose of this study was to investigate the hypothesis that the physiologic basis for the monotropic FSH rise is an alteration in the secretion pattern of the GnRH pulse generator. Methods: The subjects were 11 normal women age 40-45 years who underwent 24 hours of frequent blood sampling in the follicular (EF) and/or midluteal (ML) phases of spontaneous menstrual cycles. The controls were 11 normal women age 20-25 years. The respective gonadotropin secretion patterns were analyzed for LH pulse frequency, mean LH and FSH levels, and LH pulse amplitude. Results: There were no differences between the groups for estradiol (E2) and progesterone when the respective cycle phases were compared. The 24-hour mean FSH level was significantly increased in the older women in both the EF and ML phases. There were no differences between the groups in either cycle phase for LH pulse frequency, LH pulse amplitude, and mean LH levels. Conclusion: The results lend no support to the hypothesis that a slowing or other alteration of the GnRH pulse generator is the basis for the monotropic FSH rise in older ovulatory women. Other possibilities include the dynamics of E2 secretion or changes in FSH-modulating peptides (i.e., inhibin) in these women. abstract_id: PUBMED:14567886 Beneficial effects of GnRH agonist administration prior to ovarian stimulation for patients with a short follicular phase. A short follicular phase is an early clinical feature of declining reproductive competence. The shortening of the follicular phase length is related to both advanced recruitment and selection of the dominant follicle secondary to an earlier and higher FSH rise during the luteal-follicular transition, while the late follicular growth is normal. As a short follicular phase may be detrimental for reproduction, it was postulated that increasing the duration of follicular phase could improve conception rate. For that purpose, gonadotrophin-releasing hormone agonist minidoses were administered in the mid-luteal phase to prevent the intercycle FSH rise before tailoring follicular growth by controlled exogenous FSH administration. This regimen, applied to 69 infertile ovulatory women with a short follicular phase (9.6 +/- 1.2 days) actually lengthened the follicular phase by about 3 days. It proved to be effective in 179 cycles to induce paucifollicular development (1.8 +/- 0.9 follicles) with a low cancellation rate (4%) and a moderate requirement for gonadotrophins [13.3 +/- 6.3 ampoules (75 IU)]. In those women with a high frequency (80%) of elevated basal FSH or oestradiol concentrations, the pregnancy rate reached 15.1%/cycle but the miscarriage rate remained high (44%). Thus, increasing the follicular phase length in patients with a short follicular phase may partially restore fecundity. 
Answer: The early follicular phase FSH rise is not necessary to maintain adequate secretory function in older ovulatory women. Studies have shown that despite the rise in serum FSH levels and decreases in inhibin B in the early follicular phase of cycles in women of advanced reproductive age, the secretory products of the dominant follicle, such as estradiol and inhibin A, are maintained at levels similar to those of younger women (PUBMED:15550499). This suggests that the secretory capacity of recruited follicles is preserved in older reproductive aged women when provided with equivalent serum levels of FSH (PUBMED:15550499). Furthermore, older reproductive age women have been observed to have accelerated development of a dominant follicle in the presence of the monotropic FSH rise, which is manifested as a shortened follicular phase and elevated follicular phase estradiol, yet ovarian steroid and inhibin secretion remain similar to those in younger women (PUBMED:8772573). This indicates that the elevated FSH in women of advanced reproductive age may represent a primary neuroendocrine change associated with reproductive aging rather than a requirement for maintaining ovarian secretory function (PUBMED:8772573). Additionally, the shortened follicular phase observed in older ovulatory women is due to earlier dominant follicle selection, independent of hormonal influences from the preceding luteal phase (PUBMED:12466381). This further supports the notion that the early follicular phase FSH rise is not necessary for maintaining adequate secretory function in older ovulatory women.
Instruction: Loss of Weight in Obese Older Adults: A Biomarker of Impending Expansion of Multimorbidity? Abstracts: abstract_id: PUBMED:26311068 Loss of Weight in Obese Older Adults: A Biomarker of Impending Expansion of Multimorbidity? Objectives: To determine whether weight loss in older adults may be a marker of impending burden of multimorbidity regardless of initial weight, testing the hypotheses that obesity but not overweight in elderly adults is associated with greater number of diseases than normal weight and that obese older adults who lose weight over time have the greatest burden of multimorbidity. Design: Longitudinal cohort study (Invecchiare in Chianti Study). Setting: Community. Participants: Individuals aged 60 and older at baseline followed for an average of 4 years (N = 1,025). Measurements: Multimorbidity was measured as number of diagnosed diseases. Baseline body mass index (BMI) was categorized as normal weight (<25.0 kg/m(2)), overweight (25.0-29.9 kg/m(2)), and obese (≥30.0 kg/m(2)). Loss of weight was defined as decrease over time in BMI of at least 0.15 kg/m(2) per year. Age, sex, and education were covariates. Results: Baseline obesity was cross-sectionally associated with high multimorbidity and greater longitudinal increase of multimorbidity than normal weight (P = .005) and overweight (P < .001). Moreover, obese participants who lost weight over follow-up had a significantly greater increase in multimorbidity than other participants, including obese participants who maintained or gained weight over time (P = .005). In nonobese participants, changes in weight had no effect on changes in multimorbidity over time. Sensitivity analyses confirmed that one specific disease did not drive the association and that competing mortality did not bias the association. Conclusion: Loss of weight in obese older persons is a strong biomarker of impending expansion of multimorbidity. Older obese individuals who lose weight should receive thoughtful medical attention. abstract_id: PUBMED:35956086 Prospective Association between Multimorbidity and Falls and Its Mediators: Findings from the Irish Longitudinal Study on Ageing. This study including older adults from Ireland aimed to analyze the prospective association between multimorbidity and falls and to identify the mediators in this relationship. The present study used data from two consecutive waves of the Irish Longitudinal Study on Ageing (TILDA) survey. Multimorbidity was assessed at Wave 1 (2009-2011) and was defined as the presence of at least two chronic conditions. Falls occurring at Wave 2 (2012-2013) were self-reported. Mediating variables considered were polypharmacy, cognitive impairment, sleep problems, pain, low handgrip strength, difficulty in activities of daily living (ADL), obesity, and underweight. Multivariable binary logistic regression and mediation analysis using the Karlson Holm Breen method were conducted. This study included 6900 adults aged ≥50 years (51.6% women; mean [SD] age 63.1 [8.9] years). Compared to no chronic conditions at baseline, there was a positive and significant association between multimorbidity and falls at follow-up, with ORs ranging from 1.32 (95% CI = 1.06-1.64) for 2 conditions to 1.92 (95% CI = 1.54-2.38) for ≥4 conditions. Pain (23.5%), polypharmacy (13.3%), and difficulty in ADL (10.7%) explained the largest proportion of the multimorbidity-fall relationship. Multimorbidity increased risk for incident falls in older adults from Ireland.
Interventions should be implemented to reduce fall risk in people with multimorbidity, especially targeting the identified mediators. abstract_id: PUBMED:29908859 Frailty and pre-frailty in middle-aged and older adults and its association with multimorbidity and mortality: a prospective analysis of 493 737 UK Biobank participants. Background: Frailty is associated with older age and multimorbidity (two or more long-term conditions); however, little is known about its prevalence or effects on mortality in younger populations. This paper aims to examine the association between frailty, multimorbidity, specific long-term conditions, and mortality in a middle-aged and older aged population. Methods: Data were sourced from the UK Biobank. Frailty phenotype was based on five criteria (weight loss, exhaustion, grip strength, low physical activity, slow walking pace). Participants were deemed frail if they met at least three criteria, pre-frail if they fulfilled one or two criteria, and not frail if no criteria were met. Sociodemographic characteristics and long-term conditions were examined. The outcome was all-cause mortality, which was measured at a median of 7 years follow-up. Multinomial logistic regression compared sociodemographic characteristics and long-term conditions of frail or pre-frail participants with non-frail participants. Cox proportional hazards models examined associations between frailty or pre-frailty and mortality. Results were stratified by age group (37-45, 45-55, 55-65, 65-73 years) and sex, and were adjusted for multimorbidity count, socioeconomic status, body-mass index, smoking status, and alcohol use. Findings: 493 737 participants aged 37-73 years were included in the study, of whom 16 538 (3%) were considered frail, 185 360 (38%) pre-frail, and 291 839 (59%) not frail. Frailty was significantly associated with multimorbidity (prevalence 18% [4435/25 338] in those with four or more long-term conditions; odds ratio [OR] 27·1, 95% CI 25·3-29·1) socioeconomic deprivation, smoking, obesity, and infrequent alcohol consumption. The top five long-term conditions associated with frailty were multiple sclerosis (OR 15·3; 99·75% CI 12·8-18·2); chronic fatigue syndrome (12·9; 11·1-15·0); chronic obstructive pulmonary disease (5·6; 5·2-6·1); connective tissue disease (5·4; 5·0-5·8); and diabetes (5·0; 4·7-5·2). Pre-frailty and frailty were significantly associated with mortality for all age strata in men and women (except in women aged 37-45 years) after adjustment for confounders. Interpretation: Efforts to identify, manage, and prevent frailty should include middle-aged individuals with multimorbidity, in whom frailty is significantly associated with mortality, even after adjustment for number of long-term conditions, sociodemographics, and lifestyle. Research, clinical guidelines, and health-care services must shift focus from single conditions to the requirements of increasingly complex patient populations. Funding: CSO Catalyst Grant and National Health Service Research for Scotland Career Research Fellowship. abstract_id: PUBMED:30357990 Multimorbidity and functional impairment-bidirectional interplay, synergistic effects and common pathways. This review discusses the interplay between multimorbidity (i.e. co-occurrence of more than one chronic health condition in an individual) and functional impairment (i.e. limitations in mobility, strength or cognition that may eventually hamper a person's ability to perform everyday tasks). 
On the one hand, diseases belonging to common patterns of multimorbidity may interact, curtailing compensatory mechanisms and resulting in physical and cognitive decline. On the other hand, physical and cognitive impairment impact the severity and burden of multimorbidity, contributing to the establishment of a vicious circle. The circle may be further exacerbated by people's reduced ability to cope with treatment and care burden and physicians' fragmented view of health problems, which cause suboptimal use of health services and reduced quality of life and survival. Thus, the synergistic effects of medical diagnoses and functional status in adults, particularly older adults, emerge as central to assessing their health and care needs. Furthermore, common pathways seem to underlie multimorbidity, functional impairment and their interplay. For example, older age, obesity, involuntary weight loss and sedentarism can accelerate damage accumulation in organs and physiological systems by fostering inflammatory status. Inappropriate use or overuse of specific medications and drug-drug and drug-disease interactions also contribute to the bidirectional association between multimorbidity and functional impairment. Additionally, psychosocial factors such as low socioeconomic status and the direct or indirect effects of negative life events, weak social networks and an external locus of control may underlie the complex interactions between multimorbidity, functional decline and negative outcomes. Identifying modifiable risk factors and pathways common to multimorbidity and functional impairment could aid in the design of interventions to delay, prevent or alleviate age-related health deterioration; this review provides an overview of knowledge gaps and future directions. abstract_id: PUBMED:36514331 Epidemiology and predictors of multimorbidity in Kharameh cohort study: A population-based cross-sectional study in southern Iran. Background And Aim: Multimorbidity is one of the problems and concerns of public health. The aim of this study was to estimate the prevalence and identify the risk factors associated with multimorbidity based on the data of the Kherameh cohort study. Methods: This cross-sectional study was performed on 10,663 individuals aged 40-70 years in the south of Iran in 2015 to 2017. Demographic and behavioral characteristics were investigated. Multimorbidity was defined as the coexistence of two or more of two chronic diseases in a person. In this study, the prevalence of multimorbidity was calculated. Logistic regression was used to identify the predictors of multimorbidity. Results: The prevalence of multimorbidity was 24.4%. The age-standardized prevalence rate was 18.01% in males and 29.6% in females. The most common underlying diseases were gastroesophageal reflux disease with hypertension (33.5%). Multiple logistic regression results showed that the age of 45-55 years (adjusted odds ratio [ORadj]] = 1.22, 95% confidence interval [CI], 1.07-1.38), age of over 55 years (ORadj = 1.21, 95% CI, 1.06-1.37), obesity (ORadj = 3.65, 95% CI, 2.55-5.24), and overweight (ORadj = 2.92, 95% CI, 2.05-4.14) were the risk factors of multimorbidity. Also, subjects with high socioeconomic status (ORadj = 1.27, 95% CI, 1.1-1.45) and very high level of socioeconomic status (ORadj = 1.53, 95% CI, 1.31-1.79) had a higher chance of having multimorbidity. The high level of education, alcohol consumption, having job, and high physical activity had a protective role against it. 
Conclusion: The prevalence of multimorbidity was relatively high in the study area. According to the results of our study, age, obesity, and overweight had an important effect on multimorbidity. Therefore, determining interventional strategies for weight loss and control and treatment of chronic diseases, especially in the elderly, is very useful. abstract_id: PUBMED:30459401 Short-term weight gain is associated with accumulation of multimorbidity in mid-aged women: a 20-year cohort study. Background/objectives: Although weight change has been studied in relation to many individual chronic conditions, limited studies have focused on weight change and multimorbidity. This study examines the relationship between short-term weight change and the accumulation of multimorbidity in midlife. Methods: We used data from 7357 women aged 45-50 years without a history of any chronic conditions. The women were surveyed approximately every 3 years from 1996 to 2016. Associations between short-term weight change and accumulation of multimorbidity (two or more of nine chronic conditions) over each 3-year period, adjusting for baseline body mass index (BMI) or time-varying BMI (3-year period), were examined using repeated measures models. Short-term weight change was categorised into seven groups of annual weight change from high weight loss (≤ -5%) to high weight gain (> +5%). Results: Over 20 years, 60.4% (n = 4442) of women developed multimorbidity. Baseline BMI, time-varying BMI and short-term weight gain were all associated with the accumulation of multimorbidity. After controlling for sociodemographic, lifestyle factors and menopausal status, high weight gain was associated with a 25% increased odds of multimorbidity (odds ratio (OR) 1.25, 95% confidence interval (CI) 1.08-1.45) compared with maintaining a stable weight. The results were consistent among models adjusting for baseline BMI (OR 1.24, 95% CI 1.07-1.44) or time-varying BMI (OR 1.34, 95% CI 1.16-1.54). Weight loss was associated with increased odds of multimorbidity in women with normal BMI (baseline or time-varying). Conclusions: Short-term weight gain is associated with significantly increased odds of multimorbidity in mid-aged women. This association is independent from baseline BMI (at 45-50 years) and time-varying BMI. These findings support a persistent weight management regime and prevention of weight gain throughout women's midlife. abstract_id: PUBMED:35248171 Body-mass index and risk of obesity-related complex multimorbidity: an observational multicohort study. Background: The accumulation of disparate diseases in complex multimorbidity makes prevention difficult if each disease is targeted separately. We aimed to examine obesity as a shared risk factor for common diseases, determine associations between obesity-related diseases, and examine the role of obesity in the development of complex multimorbidity (four or more comorbid diseases). Methods: We did an observational study and used pooled prospective data from two Finnish cohort studies (the Health and Social Support Study and the Finnish Public Sector Study) comprising 114 657 adults aged 16-78 years at study entry (1998-2013). A cohort of 499 357 adults (aged 38-73 years at study entry; 2006-10) from the UK Biobank provided replication in an independent population. BMI and clinical characteristics were assessed at baseline. BMIs were categorised as obesity (≥30·0 kg/m2), overweight (25·0-29·9 kg/m2), healthy weight (18·5-24·9 kg/m2), and underweight (<18·5 kg/m2).
Via linkage to national health records, participants were followed-up for death and diseases diagnosed according to the International Classification of Diseases 10th Revision (ICD-10). Hazard ratios (HRs) with 95% CIs and population attributable fractions (PAFs) for associations between BMI and multimorbidity were calculated. Findings: Mean follow-up duration was 12·1 years (SD 3·8) in the Finnish cohorts and 11·8 years (1·7) in the UK Biobank cohort. Obesity was associated with 21 non-overlapping cardiometabolic, digestive, respiratory, neurological, musculoskeletal, and infectious diseases after Bonferroni multiple testing adjustment and ignoring HRs of less than 1·50. Compared with healthy weight, the confounder-adjusted HR for obesity was 2·83 (95% CI 2·74-2·93; PAF 19·9% [95% CI 19·3-20·5]) for developing at least one obesity-related disease, 5·17 (4·84-5·53; 34·4% [33·2-35·5]) for two diseases, and 12·39 (9·26-16·58; 55·2% [50·9-57·5]) for complex multimorbidity. The proportion of participants of healthy weight with complex multimorbidity by age 75 years was observed by age 55 years in participants with obesity, and degree of obesity was associated with complex multimorbidity in a dose-response relationship. Compared with obesity, the association between overweight and complex multimorbidity was more modest (HR 2·67, 95% CI 1·94-3·68; PAF 13·3% [95% CI 9·6-16·3]). The same pattern of results was observed in the UK Biobank cohort. Interpretation: Obesity is associated with diverse, increasing disease burdens, and might represent an important target for multimorbidity prevention that avoids the complexities of multitarget preventive regimens. Funding: Wellcome Trust, Medical Research Council, National Institute on Aging. abstract_id: PUBMED:38281058 Multimorbidity and its associated risk factors among adults in northern Sudan: a community-based cross-sectional study. Background: Multimorbidity (having two or more coexisting long-term conditions) is a growing global challenge. However, data on multimorbidity among adults in Africa, including Sudan, are scarce. Thus, this study aimed to investigate the prevalence of multimorbidity and its associated risk factors among adults in Sudan. Methods: A community-based cross-sectional study was conducted in northern Sudan from March 2022 to May 2022. Participants' sociodemographic characteristics were assessed using a questionnaire. Multimorbidity was defined as having two or more coexisting long-term conditions, including diabetes mellitus (DM), hypertension, obesity, anaemia and depression-anxiety. Multivariate logistic regression analyses were performed to determine the associated factors. Results: The participants included 250 adults: 119 (47.6%) males and 131(52.4%) females. The median interquartile range (IQR) of the enrolled adults of the age was 43.0 (30.0‒55.0) years. Of the 250 adults, 82(32.8%), 17(6.8%), 84(33.6%), and 67(26.8%) were normal weight, underweight, overweight, and obese, respectively; 148(59.2%), 72(28.8%), 63(25.2%), 67(26.8%), and 98(39.2%) had hypertension, DM, anaemia, obesity, and depression-anxiety, respectively. A total of 154 adults (61.6%) had multimorbidity: 97(38.8%), 49(19.6%), and 8(3.2%) had two, three, and four morbidities, respectively. The remaining 21 (8.4%), and 75 (30.0%) adults had no morbidity, and one morbidity, respectively. 
In a multivariate logistic regression analysis, increasing age (adjusted odds ratio [AOR] = 1.03, 95% CI = 1.01‒1.05) and female sex (AOR = 2.17, 95% CI = 1.16‒4.06) were associated with multimorbidity. Conclusions: The high prevalence of multimorbidity revealed in this study uncovers a major public health problem among Sudanese adults. Our results show that increasing age and female sex are associated with multimorbidity. Additional extensive studies are necessary to evaluate the magnitude of multimorbidity for improved future planning and establishing effective health systems. abstract_id: PUBMED:38097298 Prevalence of hypertension, diabetes, obesity, multimorbidity, and related risk factors among adult Gambians: a cross-sectional nationwide study. Background: As countries progress through economic and demographic transition, chronic non-communicable diseases (NCDs) overtake a previous burden of infectious diseases. We investigated the prevalence of hypertension, diabetes, obesity, and multimorbidity in older adults in The Gambia. Methods: We embedded a survey on NCDs into the nationally representative 2019 Gambia National Eye Health Survey of adults aged 35 years or older. We measured anthropometrics, capillary blood glucose, and blood pressure together with sociodemographic information, personal and family health history, and information on smoking and alcohol consumption. Hypertension was defined as systolic blood pressure of 140 mmHg or more, diastolic blood pressure of 90 mmHg or more, or receiving treatment for hypertension. Diabetes was defined as fasting capillary blood glucose of 7 mmol/L or more, random blood glucose of 11·1 mmol/L or more, or previous diagnosis or treatment for diabetes. Overweight was defined as BMI of 25-29·9 kg/m2 and obesity as 30 kg/m2 or more. Multimorbidity was defined as the coexistence of two or more conditions. We calculated weighted crude and adjusted estimates for each outcome by sex, residence, and selected sociodemographic factors. Findings: We analysed data from 9188 participants (5039 [54·8%] from urban areas, 6478 [70·5%] women). The prevalence of hypertension was 47·0%; 2259 (49·3%) women, 2052 (44·7%) men. The prevalence increased with age, increasing from 30% in those aged 35-45 years to over 75% in those aged 75 years and older. Overweight and obesity increased the odds of hypertension, and underweight reduced the odds. The prevalence of diabetes was 6·3% (322 [7·0%] women, 255 [5·6%] men), increasing from 3·8% in those aged 35-44 years to 9·1% in those aged 65-75 years, and then declining. Diabetes was much more common among urban residents, especially in women (peaking at 13% by age 65 years). Diabetes was strongly associated with BMI and wealth index. The prevalence of obesity was 12·0% and was notably higher in women than men (880 [20·2%] vs 170 [3·9%]). Multimorbidity was present in 932 (10·7%), and was more common in women than men (694 [15·9%] vs 238 [5·5%]). The prevalence of smoking was 9·7%; 5 (0·1%) women, 889 (19·3%) men. Alcohol consumption in the past year was negligible. Interpretation: We have documented high levels of NCDs and associated risk factors in Gambian adults. This presents a major stress on the country's fragile health system that requires an urgent, concerted, and targeted multisectoral strategy. Funding: The Queen Elizabeth Diamond Jubilee Trust and Wellcome Trust. abstract_id: PUBMED:33267558 Impact of an 8-Year Intensive Lifestyle Intervention on an Index of Multimorbidity.
Background/objectives: Type 2 diabetes mellitus and obesity are sometimes described as conditions that accelerate aging. Multidomain lifestyle interventions have shown promise to slow the accumulation of age-related diseases, a hallmark of aging. However, they have not been assessed among at-risk individuals with these two conditions. We examined the relative impact of 8 years of a multidomain lifestyle intervention on an index of multimorbidity. Design: Randomized controlled clinical trial comparing an intensive lifestyle intervention (ILI) that targeted weight loss through caloric restriction and increased physical activity with a control condition of diabetes support and education (DSE). Setting: Sixteen U.S. academic centers. Participants: A total of 5,145 volunteers, aged 45 to 76, with established type 2 diabetes mellitus and overweight or obesity who met eligibility criteria for a randomized controlled clinical trial. Measurements: A multimorbidity index that included nine age-related chronic diseases and death was tracked over 8 years of intervention delivery. Results: Among individuals assigned to DSE, the multimorbidity index scores increased by an average of .98 (95% confidence interval [CI] = .94-1.02) over 8 years, compared with .89 (95% CI = .85-.93) among those in the multidomain ILI, which was a 9% difference (P = .003). Relative intervention effects were similar among individuals grouped by baseline body mass index, age, and sex, and they were greater for those with lower levels of multimorbidity index scores at baseline. Conclusions: Increases in multimorbidity over time among adults with overweight or obesity and type 2 diabetes mellitus may be slowed by multidomain ILI. J Am Geriatr Soc 68:2249-2256, 2020. Answer: The longitudinal cohort study conducted as part of the Invecchiare in Chianti Study suggests that weight loss in obese older adults may indeed be a biomarker of impending expansion of multimorbidity (PUBMED:26311068). The study found that baseline obesity was associated with high multimorbidity and a greater longitudinal increase of multimorbidity than normal weight and overweight. Moreover, obese participants who lost weight over the follow-up period had a significantly greater increase in multimorbidity than other participants, including obese participants who maintained or gained weight over time. This indicates that weight loss in this population should be carefully monitored, as it may signal a worsening of health status rather than an improvement. The study's results were robust, as sensitivity analyses confirmed that the association was not driven by one specific disease and that competing mortality did not bias the association. Therefore, it is recommended that older obese individuals who lose weight should receive thoughtful medical attention to address the potential expansion of multimorbidity.
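To make the thresholds above concrete, the following minimal Python sketch (illustrative only, not code from any of the cited studies) applies the BMI categories and the weight-loss definition reported in PUBMED:26311068 — normal weight <25.0 kg/m2, overweight 25.0-29.9 kg/m2, obese ≥30.0 kg/m2, and "loss of weight" as a BMI decrease of at least 0.15 kg/m2 per year; the participant values are hypothetical.

def bmi_category(bmi_kg_m2: float) -> str:
    """Classify BMI with the cut-offs used in the Invecchiare in Chianti analysis."""
    if bmi_kg_m2 >= 30.0:
        return "obese"
    if bmi_kg_m2 >= 25.0:
        return "overweight"
    return "normal weight"

def lost_weight(bmi_baseline: float, bmi_followup: float, years_of_follow_up: float) -> bool:
    """Flag 'loss of weight': a BMI decrease of at least 0.15 kg/m2 per year."""
    annual_change = (bmi_followup - bmi_baseline) / years_of_follow_up
    return annual_change <= -0.15

# Hypothetical participant followed for 4 years (the cohort's average follow-up).
baseline_bmi, followup_bmi = 31.2, 29.8
print(bmi_category(baseline_bmi))                    # obese
print(lost_weight(baseline_bmi, followup_bmi, 4.0))  # True: the profile the answer flags for close medical attention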
Instruction: Serum cytokine tumor necrosis factor-alpha and interleukin-6 associated with the severity of coronary artery disease: indicators of an active inflammatory burden? Abstracts: abstract_id: PUBMED:18751625 Serum cytokine tumor necrosis factor-alpha and interleukin-6 associated with the severity of coronary artery disease: indicators of an active inflammatory burden? Background: Atherosclerosis is a chronic inflammatory process resulting in coronary artery disease. Objectives: To determine the relationship between inflammatory markers and the angiographic severity of CAD. Methods: We measured inflammatory markers in consecutive patients undergoing coronary angiography. These included C-reactive protein, fibrinogen, and serum cytokines (interleukin-1 beta, IL-1 receptor antagonist, IL-6, IL-8, IL-10, and tumor necrosis factor-alpha), all measured by high-sensitivity enzyme-linked immunosorbent assay. Results: There was a significant correlation between TNF-alpha and the severity of CAD as assessed by the number of obstructed coronary vessels and the Gensini severity score, which is based on the proximity and severity of the lesions. Patients had more coronary vessel disease (> 70% stenosis) with increasing tertiles of serum TNF-alpha; the mean number of vessels affected was 1.15, 1.33, and 2.00 respectively (P < 0.001). IL-6 correlated with the Gensini severity score and coronary vessel disease (> 70% stenosis). A weaker correlation was present with IL-1 receptor antagonist. A significant correlation was not found with the other inflammatory markers. After adjustment for major risk factors, multivariate analyses showed that significant independent predictors of CAD vessel disease were TNF-alpha (P < 0.05) and combined levels of TNF-alpha and IL-6 (P < 0.05). IL-6 levels were independently predictive of the Gensini coronary score (P < 0.05). Conclusion: TNF-alpha and IL-6 are significant predictors of the severity of coronary artery disease. This association is likely an indicator of the chronic inflammatory burden and an important marker of increased atherosclerosis risk. abstract_id: PUBMED:12943869 Impact of pathogen burden in patients with coronary artery disease in relation to systemic inflammation and variation in genes encoding cytokines. The number of infectious pathogens to which an individual has been exposed (pathogen burden) has been linked to the development and the prognosis of coronary artery disease (CAD). The interaction among infection, genetic host susceptibility, and CAD remains unclear. This study was aimed at evaluating the modulation of the association between CAD and pathogen burden, by serum levels of inflammatory markers and polymorphisms of the interleukin (IL)-6 and tumor necrosis factor (TNF)-alpha genes. Immunoglobulin (Ig) G or IgA antibodies to 8 pathogens were determined in 991 patients with CAD and 333 control subjects. Serum levels of high-sensitivity C-reactive protein, fibrinogen, IL-6, and TNF-alpha were also measured. All subjects were genotyped for the IL-6/G-174C, the TNF/C-851T, and the TNF/G-308A polymorphisms. Analysis of single pathogens demonstrated a positive relation to the presence of CAD for some (Chlamydia pneumoniae, cytomegalovirus, Helicobacter pylori, and herpes simplex virus type 1), but not all pathogens. A strong association between increasing pathogen burden and CAD was confirmed, even after adjustment for risk factors.
The prevalence of a high pathogen burden (≥4 pathogens) was 50% in patients and 21% in controls (p < 0.0001). A high pathogen burden was associated with decreased high-density lipoprotein cholesterol levels (p < 0.001). The association between CAD and pathogen burden was modulated by the IL6/G-174C polymorphism, the odds ratio being higher in heterozygotes than in both types of homozygotes (p < 0.05). This interaction appeared to be mediated by variations in serum IL-6 levels. No such interaction was detected with any of the 2 TNF-alpha polymorphisms. abstract_id: PUBMED:29742255 Circulation levels of acute phase proteins pentraxin 3 and serum amyloid A in atherosclerosis have correlations with periodontal inflamed surface area. Objectives: One of the plausible mechanisms in the relationship between periodontitis and coronary artery disease (CAD) is the systemic inflammatory burden comprised of circulating cytokines/mediators related to periodontitis. This study aims to test the hypothesis that periodontal inflamed surface area (PISA) is correlated with higher circulating levels of acute phase reactants (APR) and pro-inflammatory cytokines/mediators and lower anti-inflammatory cytokines/mediators in CAD patients. Material and Methods: Patients aged from 30 to 75 years who underwent coronary angiography with CAD suspicion were included. Clinical periodontal parameters (probing depth - PD, clinical attachment loss, and bleeding on probing - BOP) were previously recorded and participants were divided into four groups after coronary angiography: Group 1: CAD (+) with periodontitis (n=20); Group 2: CAD (+) without periodontitis (n=20); Group 3: CAD (-) with periodontitis (n=21); Group 4: CAD (-) without periodontitis (n = 16). Serum interleukin (IL)-1, -6, -10, tumor necrosis factor (TNF)-α, serum amyloid A (SAA), pentraxin (PTX) 3, and high-sensitivity C-reactive protein (hs-CRP) levels were measured with ELISA. Results: Groups 1 and 3 showed periodontal parameter values higher than Groups 2 and 4 (p < 0.0125). None of the investigated serum parameters were statistically significantly different between the study groups (p > 0.0125). In CAD (-) groups (Groups 3 and 4), PISA has shown positive correlations with PTX3 and SAA (p < 0.05). Age was found to predict CAD significantly according to the results of the multivariate regression analysis (Odds Ratio: 1.17; 95% Confidence Interval: 1.08-1.27; p < 0.001). Conclusions: Although age was found to predict CAD significantly, the positive correlations between PISA and APR in CAD (-) groups deserve further attention, which might depend on the higher PISA values of periodontitis patients. In further studies conducted in a larger population, the stratification of age groups would provide us more accurate results. abstract_id: PUBMED:17372667 Current update on HIV-associated vascular disease and endothelial dysfunction. Highly active antiretroviral therapy (HAART) has greatly reduced the risk of early death from opportunistic infections and extended the lifespan of people infected with the human immunodeficiency virus (HIV). Thus, many complications and organ damage emerge in the HIV-infected population. Cardiovascular disease such as coronary artery disease has become a matter of particular concern. Its incidence is greatly increased in the HIV-infected population over that of people of the same age in the absence of general cardiovascular risk factors.
Despite several clinical and laboratory studies on the association between HIV infection and cardiovascular disease, the pathogenic mechanisms of this significant clinical problem are largely unknown and are now under active investigation. Endothelial dysfunction is possibly the most plausible link between HIV infection and atherosclerosis. Increased expression of adhesion molecules such as intercellular adhesion molecule (ICAM)-1 and endothelial adhesion molecule (E-selectin) and inflammatory cytokines such as tumor necrosis factor (TNF)-alpha and interleukin (IL)-6 has been reported in HIV-positive patients. The effect of HAART on endothelial function in HIV-positive patients is also demonstrated. In this review, we focus on the recent research update of HIV-associated vascular disease and vascular injury. We analyze and discuss the recent clinical and laboratory investigations on the effect of HIV, viral protein, and HAART therapy on endothelial injury and vascular disease; identify the areas of controversy and clinical relevance; and suggest some directions for future research. abstract_id: PUBMED:35345332 Alterations in the fecal microbiota and serum metabolome in unstable angina pectoris patients. Background: Unstable angina pectoris (UAP) is a type of coronary artery disease (CAD) characterized by a series of angina symptoms. The insulin-like growth factor 1 (IGF-1) system may be related to CAD. However, the correlation between the IGF-1 system, metabolism, and gut microbiota has not been studied. In the present study, we investigated the alterations of the serum IGF-1 system, metabolomics, and gut microbiota in patients with UAP. Methods: Serum and stool samples from healthy volunteers and UAP patients were collected. Serum metabolomics, PAPP-A, IGF-1, IGFBP-4, STC2, hs-CRP, TNF-α, and IL-6 were detected in serum samples by LC-MS and commercial ELISA kits, respectively. Fecal short-chain fatty acids (SCFAs) were measured by gas chromatography. 16S rDNA was used to measure the changes of the gut microbiota. The correlation of the above indicators was analyzed. Results: There were 24 upregulated and 31 downregulated metabolites in the serum of UAP patients compared to those in the controls. Pathway analysis showed that these metabolites were enriched in pathways including linoleic acid metabolism, amino acid metabolism, starch metabolism, sucrose metabolism, and citrate cycle (TCA cycle), etc. Additionally, the UAP patients had lower fecal levels of 2-hydroxyisobutyric acid and succinic acid. 16S rDNA sequencing results showed that the relative abundances of Bacteroidetes, Synergistetes, Lactobacillaceae, Burkholderiaceae, Synergistaceae, and Subdoligranulum were significantly higher in the UAP patients than the healthy subjects. Moreover, the UAP patients had lower serum IGF-1, IGFBP-4, and STC2 and higher serum inflammatory cytokine (hs-CRP, TNF-α, and IL-6) levels than the healthy controls. Furthermore, there was a strong correlation between serum amino acids and IL-6, which played an important role in the development of UAP. Conclusions: These results indicated that the UAP patients had decreased serum IGF-1 levels and imbalanced amino acid metabolism, which may be caused by the altered gut microbiota. It may provide a new therapeutic strategy for unstable angina pectoris. abstract_id: PUBMED:25743240 Serum levels of interleukin-2 predict the recurrence of atrial fibrillation after pulmonary vein ablation.
Aims: Interleukin-2 has a significant antitumor activity in some types of cancer, and has been associated with the development of atrial fibrillation (AF). In addition, IL-2 serum levels in recent onset AF have been related to pharmaceutical cardioversion outcomes. We evaluated the hypothesis that a relationship exists between inflammation and the outcome of catheter ablation of AF. Methods: We studied 44 patients with paroxysmal AF who underwent catheter ablation. Patients with structural heart disease, coronary artery or valve disease, active inflammatory disease, known or suspected neoplasm, endocrinopathies, or exposure to anti-inflammatory drugs were excluded. All study participants underwent evaluation with a standardized protocol, including echocardiography, and cytokine levels of interleukin-2, interleukin-4, interleukin-6, interleukin-10, tumour necrosis factor-alpha, and gamma-interferon determination before the procedure. Clinical and electrocardiographic follow-up were performed with Holter-ECG at 3, 6 and 12 months in order to know if sinus rhythm was maintained. Results: After catheter ablation of the 44 patients included (53 ± 10 years, 27.3% female), all patients returned to sinus rhythm. During the first year of follow-up, seven patients (15.9%) experienced recurrence of AF. The demographics, clinical and echocardiographic features, and pharmacological treatments of these patients were similar to those who maintained sinus rhythm. The only independent factor predictive of recurrence of AF was an elevated level of IL-2 (OR 1.18, 95% CI 1.12-1.38). Conclusions: High serum levels of interleukin-2, a pro-inflammatory non-vascular cytokine, are associated with the recurrence of AF in patients undergoing catheter ablation. abstract_id: PUBMED:16797387 Serum myeloperoxidase and mortality in maintenance hemodialysis patients. Background: During inflammation, myeloperoxidase (MPO) is released, and its measurement in the systemic circulation may be used as an index of leukocyte activation and oxidant stress. MPO levels correlate with angiographic evidence of coronary atherosclerosis and cardiovascular events in subjects with chest pain within the general population. We hypothesized that serum MPO levels are associated with adverse clinical outcomes in maintenance hemodialysis (MHD) patients. Methods: MPO levels were determined in serum samples from 356 MHD patients at the start of a 3-year cohort. Results: Patients (46% women, 28% blacks, 54% with diabetes) were 54.6 +/- 14.6 (SD) years old and had undergone MHD for a median period of 26 months. Measured serum MPO level was 2,005 +/- 1,877 pmol/L (median, 1,444 pmol/L; interquartile range, 861 to 2,490 pmol/L). MHD patients with greater total body fat had greater MPO levels. MPO level had statistically significant (P < 0.01) and positive correlations with values for serum C-reactive protein (CRP; r = +0.15), interleukin 6 (IL-6; r = +0.23), tumor necrosis factor alpha (TNF-alpha; r = +0.21), and white blood cell count (r = +0.21). A death hazard ratio for each 1,000-pmol/L increase in serum MPO level was 1.14 (95% confidence interval [CI], 1.03 to 1.26; P = 0.01) after controlling for age, race (black), diabetes mellitus, dialysis vintage, Charlson comorbidity score, history of previous cardiovascular disease, blood hemoglobin level, and serum concentrations of albumin, CRP, IL-6, and TNF-alpha.
After dividing MPO values into 3 equal groups (tertiles), the death hazard ratio of the highest tertile (versus the middle tertile) was 1.82 (95% CI, 1.07 to 3.10; P = 0.03). Conclusion: Serum MPO levels correlate with levels of markers of inflammation and prospective mortality risk in MHD patients. abstract_id: PUBMED:31081425 Association of C1q/TNF-related protein-1 (CTRP1) serum levels with coronary artery disease. Objective: Complement C1q tumor necrosis factor-related proteins (CTRPs), belonging to the CTRP superfamily, are extensively involved in regulating metabolism and the immune-inflammatory response. The inflammatory process is linked to the pathogenesis of coronary artery disease (CAD). Here, we investigated the association of serum levels of CTRP1 with CAD. Methods: Study participants were divided into two groups according to the results of coronary angiography: a control group (n = 63) and a CAD group (n = 76). The concentrations of serum CTRP1 and inflammatory cytokines were determined by enzyme-linked immunosorbent assay. Further analysis of CTRP1 levels in individuals with different severities of CAD was conducted. The CAD severity was assessed by Gensini score. Results: Serum levels of CTRP1 were significantly higher in CAD patients than in controls (17.24 ± 1.07 versus 9.31 ± 0.56 ng/mL), and CTRP1 levels increased with increasing severity of CAD. CTRP1 levels were positively correlated with concentrations of tumor necrosis factor-α and interleukin-6. Multiple logistic regression analysis showed that CTRP1 was significantly associated with CAD. Conclusions: Our data showed close associations of serum CTRP1 levels with the prevalence and severity of CAD, indicating that CTRP1 can be regarded as a novel and valuable biomarker for CAD. abstract_id: PUBMED:34086163 Serum levels of IL-32 in patients with coronary artery disease and its relationship with the serum levels of IL-6 and TNF-α. The coronary artery disease (CAD) is a chronic inflammatory disease caused by atherosclerosis, in which arteries become clogged due to plaque formation, fat accumulation, and various sorts of immune cells. IL-32 is a proinflammatory cytokine, which enhances inflammation through inducing the secretion of different inflammatory cytokines. The main objective of the current study was to assess the serum levels of IL-32 in subjects with obstructive CAD and its relationship with the serum levels of IL-6 and TNF-α. This study was performed on 42 subjects with obstructive CAD and 42 subjects with non-obstructive CAD. The serum levels of TNF-α, IL-6, and IL-32 were measured using the enzyme-linked immunosorbent assay (ELISA). The serum levels of TNF-α, IL-6, and IL-32 were 3.2, 3.48, and 2.7 times higher in obstructive CAD compared to non-obstructive CAD, respectively. Moreover, the serum levels of TNF-α and IL-32 in obstructive CAD with cardiac arterial stenosis in one major vessel were significantly higher than the levels in obstructive CAD with cardiac arterial stenosis in more than one major vessel. ROC curve analysis revealed that the serum levels of TNF-α, IL-6, and IL-32 were good predictors of obstructive CAD. Moreover, multiple logistic regression analyses suggested that the serum levels of TNF-α, IL-6, IL-32, LDL, and ox-LDL were independently related to the presence of obstructive CAD, while serum levels of HDL were not. TNF-α, IL-32, and IL-6 showed an increase in obstructive CAD, and the serum levels of these cytokines showed a satisfactory ability for predicting obstructive CAD. 
abstract_id: PUBMED:35878588 The Serum Levels of IL-36 in Patients with Coronary Artery Disease and Their Correlation with the Serum Levels of IL-32, IL-6, TNF-α, and Oxidative Stress. Introduction: Atherosclerosis is a chronic inflammatory process maintained during all stages of the disease by several proinflammatory mediators, such as cytokines and chemokines. Interleukin (IL)-36 cytokines are proinflammatory and have an essential role in innate and adaptive immunity, but the role of IL-36 has not been determined in coronary artery disease (CAD). This study aimed to measure the serum levels of IL-36 in patients with CAD and their association with the serum levels of tumor necrosis factor (TNF)-α, IL-6, and IL-32 and also investigate their correlation with the serum levels of malondialdehyde (MDA) and ferric reducing antioxidant power (FRAP). Methods: A total of 168 subjects (84 CAD and 84 control subjects) were examined in this research. The total serum levels of IL-36 were measured using the enzyme-linked immunosorbent assay (ELISA). Also, some oxidative stress parameters were evaluated by FRAP and MDA assays in the serum. Results: The serum levels of IL-36 and MDA were significantly higher, and FRAP was significantly lower in the CAD group compared to the controls. Furthermore, the serum levels of IL-36, MDA, and FRAP significantly correlated with the CAD group's cardiac arterial stenosis. Also, the serum levels of IL-36 had a positive and significant correlation with the serum levels of TNF-α, IL-6, IL-32, and biochemical parameters in the CAD group. Conclusion: Higher serum levels of IL-36 and its association with the serum levels of TNF-α, IL-32, and IL-6 may play a key role in the pathogenesis of CAD, leading to an increased risk of clogged arteries and oxidative stress. Answer: Yes, serum cytokine tumor necrosis factor-alpha (TNF-alpha) and interleukin-6 (IL-6) are associated with the severity of coronary artery disease (CAD) and are indicators of an active inflammatory burden. A study found a significant correlation between TNF-alpha and the severity of CAD as assessed by the number of obstructed coronary vessels and the Gensini severity score. Patients with more coronary vessel disease (>70% stenosis) had increasing tertiles of serum TNF-alpha, and IL-6 correlated with the Gensini severity score and coronary vessel disease (>70% stenosis). After adjusting for major risk factors, TNF-alpha and combined levels of TNF-alpha and IL-6 were significant independent predictors of CAD vessel disease, and IL-6 levels were independently predictive of the Gensini coronary score. This suggests that TNF-alpha and IL-6 are significant predictors of the severity of CAD, likely indicating the chronic inflammatory burden and an important marker of increased atherosclerosis risk (PUBMED:18751625). Additionally, other studies have shown that inflammatory cytokines, including TNF-alpha and IL-6, are elevated in various conditions related to CAD. For instance, in patients with unstable angina pectoris (UAP), a type of CAD, there were higher serum inflammatory cytokines (TNF-alpha and IL-6) levels compared to healthy controls (PUBMED:35345332). Moreover, serum levels of IL-32 in patients with CAD were found to be higher in obstructive CAD compared to non-obstructive CAD, and there was a significant correlation between the serum levels of TNF-alpha, IL-6, and IL-32 (PUBMED:34086163). 
Elevated levels of IL-36, another proinflammatory cytokine, were also found in CAD patients and correlated with TNF-alpha, IL-32, and IL-6, suggesting a role in the pathogenesis of CAD (PUBMED:35878588). These findings collectively indicate that proinflammatory cytokines, including TNF-alpha and IL-6, are associated with the severity of CAD and reflect an active inflammatory state contributing to the disease process.
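As a rough illustration of the tertile analysis described in PUBMED:18751625 (mean diseased-vessel counts of 1.15, 1.33, and 2.00 across increasing TNF-alpha tertiles), the Python sketch below shows how patients might be split into tertiles of a serum marker and compared on vessel counts; the paired values are hypothetical, so the computed means will not reproduce the published figures.

import statistics

# Hypothetical (serum TNF-alpha, vessels with >70% stenosis) pairs; illustrative only.
patients = [(1.2, 0), (1.8, 1), (2.1, 1), (2.6, 2), (3.0, 1), (3.4, 2),
            (4.1, 2), (4.9, 3), (5.5, 2), (6.2, 3), (7.0, 3), (8.3, 3)]

levels = sorted(tnf for tnf, _ in patients)
cut1 = levels[len(levels) // 3]       # value separating the lowest third
cut2 = levels[2 * len(levels) // 3]   # value separating the middle and highest thirds

def tertile(tnf: float) -> int:
    """Assign a patient to tertile 1, 2, or 3 of the marker distribution."""
    return 1 if tnf < cut1 else 2 if tnf < cut2 else 3

for t in (1, 2, 3):
    vessels = [v for tnf, v in patients if tertile(tnf) == t]
    print(f"tertile {t}: mean diseased vessels = {statistics.mean(vessels):.2f}")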
Instruction: Does presurgical IQ predict seizure outcome after temporal lobectomy? Abstracts: abstract_id: PUBMED:9578051 Does presurgical IQ predict seizure outcome after temporal lobectomy? Evidence from the Bozeman Epilepsy Consortium. Purpose: Considerable debate exists concerning whether the presence of low preoperative IQ should be a contraindication for focal resective epilepsy surgery. Methods: We examined the relationship between baseline IQ scores and seizure outcome in 1,034 temporal lobectomy cases from eight epilepsy surgery centers participating in the Bozeman Epilepsy Consortium. Results: Those patients who continued to have seizures following surgery had statistically lower preoperative IQ scores than those who were seizure-free (p < 0.009), but only by 2.3 points. This small but statistically significant relationship was fairly robust; it was observed across seven of the eight centers, and indicates that the findings can be generalized. Among patients with IQ scores of ≤ 75, 32.8% continued to have seizures following surgery, whereas 23.8% and 16.9% were not seizure-free when IQ scores were between 76 and 109 and ≥ 110, respectively. Relative risk analyses revealed no significant increase in risk among patients with low IQ scores who had no structural lesions other than mesial temporal sclerosis. However, patients with IQ scores of ≤ 75 had nearly a fourfold (390%) increase in risk for continued seizures as compared with those with higher IQ scores if structural lesions were present. Conclusions: Our results suggest that preoperative IQ scores alone are not good predictors of seizure outcome and should not be used to exclude patients as potential surgical candidates; IQ scores can, however, be useful for counseling patients and their families concerning the relative risks of surgery. abstract_id: PUBMED:14981178 Long-term effects of temporal lobectomy on intelligence. Objective: To characterize the long-term effects of anterior temporal resection on intelligence. Methods: Twenty-eight left temporal lobectomy (LTL) and 43 right temporal lobectomy (RTL) patients were followed at standard time points for at least 6 years after surgery. Results: The average gain 6 years after operation was 3.6 Verbal IQ (VIQ) points and 10.3 Performance IQ (PIQ) points in LTL patients and 2.9 VIQ points and 7.7 PIQ points in RTL patients. A seizure-free outcome did not influence the increase in IQ, nor was the extent of resection related to IQ scores at the 6-year follow-up. Patients with exclusively mesial temporal sclerosis did not perform as well as patients with other pathologies, both before and after surgery. Major predictors of improved performance at 6 years were initial higher level of performance and lower age at surgery. Much of the observed improvement may be related to retest effects. Conclusions: The effects of epilepsy surgery on intelligence in the long term are limited. The largest gain in VIQ is seen from 2 to 6 years after surgery. abstract_id: PUBMED:10616078 Temporal lobectomy in children: cognitive outcome. Object: The authors sought to determine the impact of early temporal lobectomy (in patients younger than age 17 years) on intellectual functioning. The efficacy of temporal lobectomy for treating seizures is well established and the procedure is becoming more acceptable as a treatment for children whose seizures are intractable. However, cognitive outcomes of temporal lobectomy in children and adolescents are largely unreported.
The present study takes advantage of a unique multicenter collaboration to examine retrospectively intellectual functioning in a large sample of children who underwent temporal lobectomy. Methods: Intellectual functioning was assessed before and after temporal lobectomy for treatment of medication-resistant seizures in 82 patients at eight centers of epilepsy surgery. All children underwent standard presurgical examinations, including electroencephalography-video monitoring, magnetic resonance (MR) imaging, and neuropsychological testing, at their respective centers. Forty-three children underwent left temporal lobectomy and 39 underwent right temporal lobectomy. For the entire sample, there were no significant declines in intelligence quotient (IQ) following surgery. Children who underwent left temporal lobectomy demonstrated no significant loss in verbal intellectual functioning and improved significantly in nonverbal intellectual functioning. Children who underwent right temporal lobectomy did not demonstrate significant changes in intellectual functioning. Although group scores showed no change in overall IQ values, an analysis of individual changes revealed that approximately 10% of the sample experienced a significant decline and 9% experienced significant improvement in verbal functioning. Significant improvement in nonverbal cognitive function was observed in 16% of the sample and only 2% of the sample showed significant declines. Risk factors for significant decline included older patient age at the time of surgery and the presence of a structural lesion other than mesial temporal sclerosis on MR imaging. Conclusions: The present study provides preliminary data for establishing the risk of cognitive morbidity posed by temporal lobectomy performed during childhood. With respect to global intellectual functioning, a slight improvement was significantly more likely to occur than a decline. However, there were several patients in whom significant declines did occur. It will be necessary to study further the factors associated with such declines. In addition, further study of more specific cognitive functions, particularly memory, is needed. abstract_id: PUBMED:19683476 Presurgical neuropsychological testing predicts cognitive and seizure outcomes after anterior temporal lobectomy. We sought to determine significant predictors of seizure and cognitive outcome following surgery for epilepsy. Participants included 41 patients who had undergone anterior temporal lobectomy (ATL). Higher presurgical verbal/language scores and lower nonverbal memory scores were predictive of seizure-free status following ATL. Overall, the presurgical predictors were 93% accurate in discriminating between seizure-free and non-seizure-free patients postsurgery. Surgery in the nondominant-for-language hemisphere was predictive of higher postsurgical verbal/language and verbal memory scores. Higher presurgical visual/construction, nonverbal memory, and verbal/language scores were predictive of better postsurgical verbal/language functioning. Better presurgical verbal/language functioning was predictive of the same skills postsurgically as well as visual/construction outcomes. Exploratory analyses in a subset of participants (n=25) revealed that dominant and nondominant intracarotid amobarbital (Wada) memory scores added unique variance only for predicting nonverbal memory following ATL. 
Presurgical neuropsychological testing provides significant and unique information regarding postsurgical seizure freedom and cognitive outcome in patients who have undergone ATL. abstract_id: PUBMED:22738237 Psychiatric history does not predict seizure outcome following temporal lobectomy for mesial temporal sclerosis. Purpose: A lifetime psychiatric history has been reported to be associated with poorer seizure outcome following temporal lobectomy for drug-resistant focal epilepsy, but it remains unclear whether this is confounded by the nature of the epileptogenic pathology. Here we examined this association in a pathologically homogeneous group of patients with mesial temporal sclerosis (MTS). Methods: The study population included 72 consecutive patients who underwent a temporal lobectomy for drug resistant temporal lobe epilepsy (TLE) and had histopathologically proven MTS. All patients were assessed preoperatively by a neuropsychiatrist. Chi-square analysis was undertaken to look for demographic, clinical, psychiatric, or neurologic factors associated with seizure outcome at 1 year. The relationship between having a psychiatric disorder and seizure outcome was examined by generating Kaplan-Meier curves and comparing between groups the log rank test as well as generating Cox regression models to estimate hazard ratios. Key Findings: There were no significant associations between postsurgery seizure outcome and a current or lifetime history of any psychiatric disorder. Significance: A history of psychiatric disorder, in particular depression and psychosis, is not associated with a poorer surgical outcome in patients with MTS. These findings have implications for the clinical management of patients under consideration for temporal lobectomy. abstract_id: PUBMED:25439484 Neurocognitive function in children after anterior temporal lobectomy with amygdalohippocampectomy. Background: We assessed the postoperative neurocognitive function after temporal lobectomy in children with temporal lobe epilepsy. Methods: This was a retrospective analysis of the data of 20 patients with Engel's class I or II outcomes after anterior temporal lobectomy with amygdalohippocampectomy between 2005 and 2008. Twenty children underwent resection of either dominant (n = 8) or nondominant (n = 12) temporal lobes, and their median age at surgery was 12.8 ± 3.2 years. We serially assessed intelligence and memory function as measured by the Korean-Wechsler Scales of Intelligence and Rey-Kim Memory test both before and after surgery. Results: Intelligence quotient (IQ) and memory quotient scores remained stable during a 3.6-year median follow-up in these children after the surgery. There was no decrease of IQ or memory quotient scores in either the dominant or non-dominant hemisphere groups. Later onset of epilepsy, a shorter epilepsy duration, a smaller number of antiepileptic drugs, and postoperative seizure-free outcomes were significant good predictors of the postoperative IQ. Conclusion: Temporal lobectomy in children did not provoke a significant decline in intelligence or memory function. Early surgical treatment in children with intractable seizures of temporal lobe origin may result in better neurocognitive outcomes. abstract_id: PUBMED:15230705 Seizure outcome after anterior temporal lobectomy and its predictors in patients with apparent temporal lobe epilepsy and normal MRI. 
Purpose: Very little reliable information is available regarding the role of anterior temporal lobectomy (ATL), optimal presurgical evaluation strategy, post-ATL seizure outcome, and the factors that predict the outcome in patients with medically refractory temporal lobe epilepsy (TLE) and normal high-resolution magnetic resonance imaging (MRI). To be cost-effective, epilepsy surgery centers in developing countries will have to select candidates for epilepsy surgery by using the locally available technology and expertise. Methods: We reviewed the electroclinical and pathological characteristics and seizure outcome of 17 patients who underwent ATL for medically refractory TLE after being selected for ATL based on a noninvasive selection protocol without the aid of positron emission tomography (PET) or single-photon emission computed tomography (SPECT), despite a normal preoperative high-resolution MRI. Results: Seven (41%) patients achieved an excellent seizure outcome; five of them were totally seizure free. An additional five (29%) patients had >75% reduction in seizure frequency. The following pre-ATL factors predicted an excellent outcome: antecedent history of febrile seizures, strictly unilateral anterior temporal interictal epileptiform discharges (IEDs), and concordant type 1 ictal EEG pattern. All the five patients with pathologically verified hippocampal formation neuronal loss were seizure free. The presence of posterior temporal, bilateral temporal, and generalized IEDs portended unfavorable post-ATL seizure outcome. Conclusions: A subgroup of patients destined to have an excellent post-ATL outcome can be selected from MRI-negative TLE patients by using history and scalp-recorded interictal and ictal EEG data. The attributes of these patients are antecedent history of febrile seizures, strictly unilateral anterior IEDs, and concordant type 1 ictal EEG pattern. abstract_id: PUBMED:21069902 Latency to first seizure after temporal lobectomy predicts long-term outcome. Purpose: Temporal lobectomy is a well-established treatment for refractory temporal lobe epilepsy, yet many patients experience at least one seizure postoperatively. Little is known about the prognostic significance of the time from surgery to first seizure relapse in predicting long-term outcome. Methods: In a retrospective analysis of patients who reported at least one complex partial seizure (CPS) or generalized tonic–clonic seizure (GTCS) after anterior temporal lobectomy (n = 268), we used a nominal response logistic model to predict the odds ratio (OR) of a seizure outcome based on length of the latency period from surgery to first postoperative seizure. A modified Engel outcome class scheme was used. We controlled for factors known to influence postoperative outcome, including history of tonic–clonic seizures, intelligence quotient (IQ), preoperative seizure frequency, magnetic resonance imaging (MRI) findings, and history of febrile convulsions. Results: In the univariate analysis, the latency from surgery to the first postoperative disabling seizure was significantly associated with long-term outcome. Longer latency was associated with higher odds of being seizure-free or improved (modified Engel's classes 1, 2, and 3) relative to the unimproved state (class 4) (p < 0.001, 0.001, and 0.004, respectively). Conversely, a shorter latency increased the likelihood of achieving the worst prognosis (class 4) relative to class 1 (p < 0.001). Multivariate analysis yielded similar results.
Discussion: Latency to the first postoperative seizure predicts long-term outcome, with short latencies portending poor prognosis and long latencies portending a good prognosis. This information can be used for patient counseling and may influence decisions regarding reoperation. abstract_id: PUBMED:22050314 Subtypes of medial temporal lobe epilepsy: influence on temporal lobectomy outcomes? Surgical resection of the hippocampus is the most successful treatment for medication-refractory medial temporal lobe epilepsy (MTLE) due to hippocampal sclerosis. Unfortunately, at least one in four operated patients continues to have disabling seizures after surgery, and there is no existing method to predict individual surgical outcome. Prior to surgery, patients who become seizure free appear identical to those who continue to have seizures after surgery. Interestingly, newly converging presurgical data from magnetic resonance imaging (MRI) and intracranial electroencephalography (EEG) suggest that the entorhinal and perirhinal cortices may play an important role in seizure generation. These areas are not consistently resected with surgery and it is possible that they continue to generate seizures after surgery in some patients. Therefore, subtypes of MTLE patients can be considered according to the degree of extrahippocampal damage and epileptogenicity of the medial temporal cortex. The identification of these subtypes has the potential to drastically improve surgical results via optimized presurgical planning. In this review, we discuss the current data that suggests neural network damage in MTLE, focusing on the medial temporal cortex. We explore how this evidence may be applied to presurgical planning and suggest approaches for future investigation. abstract_id: PUBMED:14636350 Seizure outcome after temporal lobectomy in temporal lobe cortical dysplasia. Purpose: To identify the temporal lobe cortical dysplasia (CD) histopathology classification subtype and determine the seizure outcome of patients who underwent temporal lobectomy with coincident CD. Methods: We reviewed the data of 28 patients with temporal lobe epilepsy who underwent surgery with pathologically verified CD at our institution from 1990 to 2000. The seizure outcome was assessed at a minimum of 1 year after surgery according to Engel's classification. Results: Of 28 patients who underwent surgery, nine (32.1%) had isolated CD, and 19 (67.9%) had CD and hippocampal sclerosis (CD&HS). Twenty-six (92.9%) patients had histopathology subtype Ia (architectural abnormalities). Twenty (71.4%) patients were seizure free (Engel class I). Favorable seizure outcome (Engel class I, II) was achieved in 26 (92.9%) patients. No difference in seizure outcome was noted between patients with CD and CD&HS. Conclusions: The most common histopathologic subtype in patients with temporal lobe CD is type Ia (architectural abnormalities). Temporal lobectomy in temporal lobe epilepsy patients with CD can achieve favorable seizure outcome. Answer: Presurgical IQ does not appear to be a strong predictor of seizure outcome after temporal lobectomy. The study from the Bozeman Epilepsy Consortium examined the relationship between baseline IQ scores and seizure outcome in 1,034 temporal lobectomy cases and found that while patients who continued to have seizures post-surgery had slightly lower preoperative IQ scores than those who were seizure-free, the difference was only 2.3 points, which is statistically significant but small (PUBMED:9578051).
Additionally, the presence of low IQ scores did not significantly increase the risk of continued seizures unless structural lesions other than mesial temporal sclerosis were present. Therefore, the authors concluded that preoperative IQ scores alone should not be used to exclude patients as potential surgical candidates, although they can be useful for counseling about relative risks (PUBMED:9578051). Other studies have focused on different aspects of cognitive and seizure outcomes post-temporal lobectomy. For instance, one study found that presurgical neuropsychological testing, rather than IQ alone, could predict seizure freedom and cognitive outcome following anterior temporal lobectomy (ATL) (PUBMED:19683476). Another study indicated that a history of psychiatric disorder, which could be related to cognitive function, does not predict poorer surgical outcomes in patients with mesial temporal sclerosis (PUBMED:22738237). Furthermore, seizure outcome after temporal lobectomy in patients with normal MRI was predicted by factors such as history of febrile seizures, strictly unilateral anterior temporal interictal epileptiform discharges, and concordant type 1 ictal EEG pattern, rather than IQ (PUBMED:15230705). In summary, while presurgical IQ may have a slight correlation with seizure outcomes post-temporal lobectomy, it is not a strong or sole predictor. Other factors, including neuropsychological profiles, seizure history, and EEG patterns, seem to play a more significant role in predicting outcomes after the surgery.
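A small worked example of what the published proportions imply: the Bozeman abstract reports that 32.8% of patients with IQ ≤ 75, 23.8% with IQ 76-109, and 16.9% with IQ ≥ 110 were not seizure-free after surgery. The unadjusted risk ratios below are a simple recalculation from those percentages (not an analysis reported by the authors) and help show why the roughly twofold difference by IQ alone is much smaller than the near fourfold increase the authors describe for low-IQ patients with structural lesions.

# Proportions not seizure-free after temporal lobectomy, by preoperative IQ band (PUBMED:9578051).
risk_not_seizure_free = {"IQ <= 75": 0.328, "IQ 76-109": 0.238, "IQ >= 110": 0.169}

reference = risk_not_seizure_free["IQ >= 110"]
for band, p in risk_not_seizure_free.items():
    print(f"{band}: unadjusted risk ratio vs IQ >= 110 = {p / reference:.2f}")
# IQ <= 75 -> 1.94, IQ 76-109 -> 1.41, IQ >= 110 -> 1.00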
Instruction: Ability of modern distal tibia plates to stabilize comminuted pilon fracture fragments: Is dual plate fixation necessary? Abstracts: abstract_id: PUBMED:27264277 Ability of modern distal tibia plates to stabilize comminuted pilon fracture fragments: Is dual plate fixation necessary? Objectives: The purpose of this study was to examine the screw trajectory of ten commercially available distal tibia plates and compare them to common fracture patterns seen in OTA C type pilon fractures to determine their ability to stabilize the three most common fracture fragments while buttressing anterolateral zones of comminution. Hypothesis: We hypothesized that a single plate for the distal tibia would fail to adequately stabilize all three main fracture fragments and zones of comminution in complex pilon fractures. Methods: Ten synthetic distal tibia sawbones models were used in conjunction with ten different locking distal tibia plate designs from three manufacturers (Depuy Synthes, J&J Co, Paoli, PA; Smith & Nephew, Memphis, TN; and Stryker, Mahwah, NJ). Both medial and anterolateral plates from each company were utilized and separately applied to an individual sawbone model. Three implants allowing variable angle screw placement were used. The location of the locking screws and buttress effect 1 cm above the articular surface was noted for each implant using axial computed tomography (CT). The images were then compared to a recently published "pilon fracture map" using an overlay technique to establish the relationship between screw location and known common fracture lines and areas of comminution. Each of the three main fragments was considered "captured" if it was purchased by at least two screws, thereby controlling rotational forces on each fragment. Results: Three of four anterolateral plates lacked stable fixation in the medial fragment. Of the 4 anterolateral plates used, only the variable angle anterolateral plate by Depuy Synthes captured the medial fragment with two screws. All four anterolateral plates buttressed the area of highest comminution and had an average of 1.25 screws in the medial fragment and an average of 3 screws in the posterolateral fragment. All five direct medial plates had variable fixation within anterolateral and posterolateral fragments with an average of 1.8 screws in the anterolateral fragment and an average of 1.3 screws in the posterolateral fragment. The Depuy Synthes variable angle anterolateral plate allowed for fixation of the medial fragment with two screws while simultaneously buttressing the zone of highest comminution and capturing both the anterolateral and posterolateral fragments with five and three screws, respectively. The variable angle anteromedial plate by Depuy Synthes captured all three main fracture fragments but it did not buttress the anterolateral zone of comminution. Conclusion: In OTA 43C type pilon fractures, 8 out of 10 studied commercially available implants precontoured for the distal tibia do not adequately stabilize the three primary fracture fragments typically seen in these injuries. Anterolateral plates were superior in addressing the coronal primary fracture line across the apex of the plafond, and buttressing the zone of comminution. None of the available plates can substitute for an understanding of the fracture planes and fragments typically seen in complex intra-articular tibia fractures, and the addition of a second plate is necessary for adequate stability. Level Of Evidence: Level IV.
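A note on the "captured" criterion described in the Methods above (PUBMED:27264277): because a fragment counts as controlled only when at least two screws purchase it, the rule reduces to a simple per-fragment screw count. The sketch below is purely illustrative; the plate configurations and screw-to-fragment assignments are hypothetical and are not the study's data.

```python
# Minimal sketch of the "captured fragment" rule described above: a fragment
# counts as captured when at least two screws purchase it. The plate layouts
# below are hypothetical illustrations, not data from the study.
from collections import Counter

CAPTURE_THRESHOLD = 2  # screws needed to control rotation of a fragment

def captured_fragments(screw_assignments, threshold=CAPTURE_THRESHOLD):
    """Return the set of fragments purchased by at least `threshold` screws.

    screw_assignments: one fragment label per screw, e.g.
    ["medial", "anterolateral", "anterolateral", "posterolateral"].
    """
    counts = Counter(screw_assignments)
    return {fragment for fragment, n in counts.items() if n >= threshold}

if __name__ == "__main__":
    # Hypothetical anterolateral plate: good anterolateral and posterolateral
    # purchase but only one screw reaching the medial fragment.
    plate_a = ["anterolateral"] * 5 + ["posterolateral"] * 3 + ["medial"]
    print(captured_fragments(plate_a))   # {'anterolateral', 'posterolateral'}

    # Hypothetical variable-angle plate reaching all three main fragments.
    plate_b = ["anterolateral"] * 5 + ["posterolateral"] * 3 + ["medial"] * 2
    print(captured_fragments(plate_b))   # all three fragments captured
```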
abstract_id: PUBMED:28288519 Evaluation of Fibular Fracture Type vs Location of Tibial Fixation of Pilon Fractures. Background: Comminuted fibular fractures can occur with pilon fractures as a result of valgus stress. Transverse fibular fractures can occur with varus deformation. No definitive guide for determining the proper location of tibial fixation exists. The purpose of this study was to identify optimal plate location for fixation of pilon fractures based on the orientation of the fibular fracture. Methods: One hundred two patients with 103 pilon fractures were identified who were definitively treated at our institution from 2004 to 2013. Pilon fractures were classified using the AO/OTA classification and included 43-A through 43-C fractures. Inclusion criteria were age of at least 18 years, associated fibular fracture, and definitive tibial plating. Patients were grouped based on the fibular component fracture type (comminuted vs transverse), and the location of plate fixation (medial vs lateral) was noted. Radiographic outcomes were assessed for mechanical failures. Results: Forty fractures were a result of varus force as evidenced by transverse fracture of the fibula and 63 were due to valgus force with a comminuted fibula. For the transverse fibula group, 14.3% mechanical complications were noted for medially placed plate vs 80% for lateral plating ( P = .006). For the comminuted fibular group, 36.4% of medially placed plates demonstrated mechanical complications vs 16.7% for laterally based plates ( P = .156). Time to weight bearing as tolerated was also noted to be significant between groups plated medially and laterally for the comminuted group ( P = .013). Conclusions: Correctly assessing the fibular component for pilon fractures provides valuable information regarding deforming forces. To limit mechanical complications, tibial plates should be applied in such a way as to resist the original deforming forces. Level of Evidence Level III, comparative study. abstract_id: PUBMED:37038161 A retrospective comparison of double-hooked locking plates versus non-locking plates in minimally invasive percutaneous plate osteosynthesis for the treatment of comminuted distal fibular fractures accompanied by tibial Pilon fractures. Background: Surgical approach and fixation material are crucial in the treatment of comminuted distal fibular fractures accompanied by tibial Pilon fractures. This study compared the efficacy of double-hooked locking plates and anatomic plates in minimally invasive percutaneous plate osteosynthesis (MIPPO) for the treatment of comminuted distal fibular fractures accompanied by tibial Pilon fractures. Methods: Clinical data were collected from 96 patients diagnosed with comminuted distal fibular fractures accompanied by tibial Pilon fractures who had undergone MIPPO. Patients in the study group (n = 48) received double-hooked locking plate fixations and the control group (n = 48) received anatomical plate fixations. The operating time, intraoperative bleeding, length of hospital stays, full weight-bearing time, fracture healing time and complication rates in the two groups were compared. The quality of fracture reduction was evaluated using the Burwell-Chamley imaging scoring system; the ankle function was assessed based on the American Orthopaedic Foot and Ankle Society Ankle-Hindfoot Score. 
Results: Patients in the study group had shorter operating time, less bleeding, significantly shorter hospital stays, and shorter time to full weight-bearing as well as fracture healing compared to the control group (P < 0.05). Additionally, the post-operative complication rates were significantly lower in the study group (6.16% vs. 22.92%) (P < 0.05), but there was no significant difference in the fracture reduction rate between the two groups (P > 0.05). Patients in the study group experienced better ankle recovery than those in the control group (93.75% vs. 75.00%) (P < 0.05). Conclusion: Double-hooked locking plates have advantages in the treatment of comminuted distal fibular fractures accompanied by tibial Pilon fractures during MIPPO due to their shorter operating time and less intraoperative bleeding, as well as shorter hospital stays, full weight-bearing time and fracture healing time, fewer complications and better ankle recovery. Therefore, double-hooked locking plates are worthy of clinical application. abstract_id: PUBMED:37533163 Analysis of the Ability of a Distal Tibial Anatomical Locking Plate to Capture the Distal Tibial Fragments in Patients with Pilon Fractures. Objective: Although pilon fractures are rare in clinical practice, they are difficult to treat because of their complexity. Effective fixation of the fracture fragment is the key to the treatment of pilon fractures. Plate osteosynthesis is common clinically, but there are many types of plates and the evaluation of the effect of fixation plates is not comprehensive. This study attempted to compare the capture effect of different fixation plates on the fracture fragments based on 3D modeling and fine distinctions of fracture fragments. Methods: The computed tomography (CT) images before treatment of 127 patients with pilon fractures from January 2019 to December 2021 were retrospectively collected. The fracture lines were mapped and digitally displayed as 3D images using MIMICS 21 software. APLUS distal tibia anatomical locking plate (Plate A) and ZIMMER distal tibia anatomical plate (Plate B) were placed on a pseudo-bone model and CT scans were used to determine the number of screws in the major and minor fragments of pilon fractures. The frequency of the two plates capturing the fracture fragments was recorded. Results: Under Assumption 1 or 2, Plate A performed significantly better than Plate B in capturing the major, Chaput, Volkmann, medial malleolus, and die-punch fracture fragments. Plate A captured markedly more minor fragments than Plate B under Assumption 2 but was not significantly different from Plate B under Assumption 1. Plate A or Plate B showed no obvious difference between major and minor capture rates under the same assumption, and A1 or B1 showed a markedly higher capture rate compared with A2 or B2. In addition, there was a significant positive correlation between the major capture rate and the major fragments in B1, and a significant negative correlation between the minor capture rate and the minor fragments in Plates A and B. However, there was no correlation between the major capture rate of Plate A and the major fragments. Conclusion: The APLUS distal tibial anatomical locking plate is superior to the ZIMMER distal tibia anatomical plate in the ability to capture distal tibial fragments in pilon fracture cases. abstract_id: PUBMED:29305233 Anterolateral distal tibia locking plate osteosynthesis and their ability to capture OTAC3 pilon fragments.
Background: Intra-articular Pilon fractures remain therapeutically challenging due to osteochondral fracturing and comminution, marginal impaction, and insult to the soft tissue envelope. The purpose of this study was to compare the efficacy of anterolateral distal tibial locking plates in capturing main fracture fragments in tibial plafond fractures. Methods: From May 2011 to Dec 2015, 169 OTA C-type pilon fractures met inclusion and exclusion criteria with computed tomographic (CT) scans performed prior to definitive fixation. For each patient, the fracture lines were mapped, digitized, and graphically superimposed to create a compilation of fracture lines. Based on these average measurements, three distal tibia sawbones had three different anterolateral plates applied. Axial CT scan images were used to determine the efficacy of screw purchase in main fracture fragments in pilon fractures. Results: The Smith & Nephew PERI-LOC plate secured the largest number of fracture lines (90.1%) but missed the Volkmann fragment with greatest frequency at 3.6%. The Synthes 2.7/3.5 mm VA-LCP captured 87.3% of the fracture lines while missing the Volkmann fragment 3.2% of the time. The Synthes 3.5 mm LCP captured 86.5% of the fracture lines but was the best at securing the Volkmann fragment (1.2% missed). All three implants were deficient in capturing the medial malleolar fragment. The PERI-LOC and 2.7/3.5 mm VA-LCP did not differ with respect to percentage of fragments captured (p = 0.721) but both outperformed the 3.5 mm LCP (p = 0.021 and p = 0.05, respectively). Conclusions: This study was consistent with prior literature in defining three main fracture fragments: anterior, medial, and posterior. All three plates were deficient in capturing the medial malleolar fragment. The Smith and Nephew PERI-LOC plate secured the largest number of fracture lines, while the Synthes 3.5 mm LCP was least likely to miss the Volkmann fragment and most likely to miss the medial malleolar fragment. No plate was found to be superior to the other in capturing all fracture lines of the OTAC3 pilon fragments. Level Of Evidence: Three. abstract_id: PUBMED:27997468 Locking Compression Pilon Plate for Fixation of Comminuted Posterior Wall Acetabular Fractures: A Novel Technique. Posterior wall acetabular fractures involving a large portion of the wall's width and with extensive comminution are difficult fractures to manage operatively. Cortical substitution with a pelvic reconstruction plate and supplemental spring plates has been the traditional means of fixation for these fractures. This option, however, requires the use of multiple, unlinked plates and provides no reliable option for peripheral fixation in comminuted fragments. We describe a novel technique for operative fixation of large, comminuted posterior wall fractures using a single distal tibia pilon plate with the option for peripheral locking screw fixation and report on a series of 20 consecutive patients treated with this method. abstract_id: PUBMED:34651492 Current status and progress of locking plate in the treatment of distal femoral comminuted fracture Objective: To review the current status and progress of locking plate for the treatment of distal femoral comminuted fractures.
Methods: The related literature was extensively reviewed to summarize the current status and progress in the treatment of distal femoral comminuted fracture with locking plate from four aspects: the current treatment situation, the shortcomings of locking plate and countermeasures, the progress of locking technology, locking plate and digital orthopedic technology. Results: Treatment of distal femoral comminuted fractures is challenging. Locking plates, the most commonly used fixation for distal femoral comminuted fractures, still face a high rate of treatment failure. Double plates can improve the mechanical stability of comminuted fractures, but specific quantitative criteria are still lacking for when to choose double plates for fixation. The far cortical locking screw has shown good application value in improving the micro-movement and promoting the growth of callus. The biphasic plating is a development of the traditional locking plate, but needs further clinical examination. As an auxiliary means, digital orthopedic technology shows a good application prospect. Conclusion: The inherent defect of locking plate is a factor that affects the prognosis of distal femoral comminuted fracture. The optimization of locking technology combined with digital orthopedic technology is expected to reduce the failure rate of treatment of distal femoral comminuted fracture. abstract_id: PUBMED:33892879 Plate Fixation of Distal Radius Fractures: What Type of Plate to Use and When? There are several options for plate fixation of distal radius fractures. Volar plating has broad applicability and consistent outcomes, and thus is the most commonly used plate type. Dorsal plates are advantageous for specific fracture patterns, and can provide direct fracture reduction and buttressing, but may be prominent and can cause tendon irritation. Bridge plates offer an alternative to external fixation while avoiding the complications with prominent hardware, because they span highly comminuted fractures and can be used for immediate weight bearing; however, they require plate removal. Choice of plate fixation should depend on fracture type, patient factors, and surgeon experience. abstract_id: PUBMED:28986074 Finite element analysis of the stability of AO/OTA 43-C1 type distal tibial fractures treated with distal tibia medial anatomic plate versus anterolateral anatomic plate. Background: Treatment of this pilon fracture type is possible from a medial approach, an anterolateral approach, or both medial and lateral. The aim of this study was to compare the stability of two different plate fixations of tibia pilon AO-43C1 type fractures. Material And Methods: In this study, three-dimensional finite element stress analysis was applied using isotropic materials and static linear analysis. Loading of 400 N force was applied to the model of a patient fixed in a standing position. In the model, first the fibula was treated by plating and then in one group the pilon fracture was treated by medial plating, and in the other by anterolateral plating. The displacement and stress values of the fragments of the fracture line were compared at the same points in each model. Results: The displacement magnitudes of the fragments along the X, Y and Z axes were measured for the medial plate and the anterolateral plate. The anterolateral plate results were similar to those of the medial plate and the displacement values in the Y axis were determined to be lower than in the medial plate.
Conclusion: In AO 43 C1 distal tibia fractures, medial or anterolateral plates can be used, and the results of this study showed similar biomechanical stiffness in the two plates. abstract_id: PUBMED:34674899 Comminuted Dorsal Ulnar Fragment in Distal Radius Fractures Treated Using the Integrated Compression Screw With a Mini-Plate. Stabilization for displaced dorsoulnar fragments in distal radius fractures is challenging to treat with conventional volar locking plates alone. The integrated compression screw combined with a volar locking plate has been introduced as an additional tool to stabilize the dorsoulnar fragment and has been reported to work effectively. However, the compression screw is unable to stabilize a comminuted dorsal ulnar fragment; therefore, it is necessary to consider using an additional dorsal plate. We have developed a modified surgical technique to stabilize a comminuted dorsal intra-articular fragment by combining the integrated compression screw with a mini-plate as a washer or a buttress. Answer: The ability of modern distal tibia plates to stabilize comminuted pilon fracture fragments has been a subject of investigation in several studies. The consensus appears to be that while modern plates have made advancements, there are still limitations in their ability to stabilize all fracture fragments in complex pilon fractures, and in some cases, dual plate fixation may be necessary to achieve adequate stability. One study examined the screw trajectory of ten commercially available distal tibia plates and compared them to common fracture patterns seen in OTA C type pilon fractures. The results indicated that 8 out of 10 studied implants did not adequately stabilize the three primary fracture fragments typically seen in these injuries. Anterolateral plates were superior in addressing the coronal primary fracture line and buttressing the zone of comminution, but none of the available plates could substitute for an understanding of the fracture planes and fragments typically seen in complex intra-articular tibia fractures. The addition of a second plate was deemed necessary for adequate stability (PUBMED:27264277). Another study evaluated the optimal plate location for fixation of pilon fractures based on the orientation of the fibular fracture. The findings suggested that tibial plates should be applied in such a way as to resist the original deforming forces to limit mechanical complications. This implies that the choice of plate and its location can be critical in managing these fractures (PUBMED:28288519). A retrospective comparison of double-hooked locking plates versus non-locking plates in minimally invasive percutaneous plate osteosynthesis for the treatment of comminuted distal fibular fractures accompanied by tibial Pilon fractures found that double-hooked locking plates had advantages in terms of operating time, bleeding, hospital stays, full weight-bearing time, fracture healing time, fewer complications, and better ankle recovery. This suggests that the type of plate used can significantly impact the outcome (PUBMED:37038161). An analysis of the ability of a distal tibial anatomical locking plate to capture distal tibial fragments in patients with pilon fractures concluded that the APLUS distal tibial anatomical locking plate was superior to the ZIMMER distal tibia anatomical plate in capturing distal tibial fragments in pilon fracture cases (PUBMED:37533163).
Instruction: Courses of substance use and schizophrenia in the dual-diagnosis patients: is there a relationship? Abstracts: abstract_id: PUBMED:12944690 Courses of substance use and schizophrenia in the dual-diagnosis patients: is there a relationship? Background: The interrelationship of schizophrenia and substance use is complex and multifactorial. Examining the influence of various psychoactive substances on the course of patients with pure dual-diagnosis schizophrenia may help to solve this riddle. Aim: To examine the relationship of the courses of substance use and schizophrenic symptomatology in substance abusing "dual-diagnosis" patients with schizophrenia. Settings And Design: Outpatient Deaddiction and Treatment Center of a tertiary care hospital with a retrospective design. Methods And Material: Twenty-two substance abusing dual-diagnosis patients with schizophrenia were investigated regarding the course of substance use (abuse/dependence, use, non-use) and that of schizophrenia (psychotic, non-psychotic, in remission). A graphical representation of the course of schizophrenia and substance abuse was made and their relationship studied by superimposition of the respective graphs. Statistics: The Friedman two-way analysis of variance of ranks was applied to examine the relationship between the time patients spent on and off various substances. Results: In five cases, the onset of schizophrenia preceded the onset of substance use. In seven out of 22 subjects, the schizophrenic exacerbation was clearly preceded by an increase in substance abuse in the preceding two to twelve months. In none of the subjects did a decrease in substance use lead to a decrease or increase in schizophrenic symptoms. Conclusions: Although substance use disorder preceded the onset of schizophrenic illness in the majority, and an increase in substance abuse preceded schizophrenic exacerbation in one-third of dual-diagnosis patients, overall there was no evidence that the course of substance use was associated with that of schizophrenia after both disorders were diagnosed. abstract_id: PUBMED:36271867 Time trends in co-occurring substance use and psychiatric illness (dual diagnosis) from 2000 to 2017 - a nationwide study of Danish register data. Objective: This article aims to describe the time trend in the number of dual diagnosis patients treated in the psychiatric system in Denmark from 2000 to 2017. Method: We calculated the share of patients with dual diagnosis, the number of dual diagnosis contacts, the number of unique individuals with dual diagnosis as well as the number of new patients with dual diagnosis among patients in psychiatric treatment, i.e. among inpatients, outpatients and patients in emergency departments. In order to calculate this, we merged data from the National Patient Register (NPR), the National Registry of Alcohol Treatment, the National treatment registry for substance use, the National Prescription Registry and the Danish National Health Service register in the period from 2000 to 2017. Results: We found an overall increase in patients with dual diagnosis in psychiatric treatment in Denmark from 2000 to 2017. We further detected an increase in the age- and sex-standardized number of patients with dual diagnosis in treatment over time, most markedly for outpatients. Further, inclusion of data from sources other than the NPR dramatically increased the number of patients that could be identified as dual diagnosis patients.
Using these data, almost half of all male inpatients could be identified as dual diagnosis while the share was more than 40% for patients with schizophrenia, schizotypal and delusional disorders (F2) and patients with personality disorders (F6). Conclusions: The increase in dual diagnosis patients necessitates action at different levels. This includes improvement of preventive measures as well as improvement of treatment for this underserved group. abstract_id: PUBMED:20146152 Dual diagnosis psychosis and substance use disorders in adolescents--part 1 Epidemiological studies suggest that 20% to 50% of patients with schizophrenia have a lifetime comorbid substance use disorder (SUD). In first-episode psychosis this prevalence is even higher and varies between 20% and 75%, with cannabis being the most widely used illicit drug. These difficult-to-treat patients usually have a worse prognosis as compared with non-substance abusing schizophrenic patients. Despite multiple theories proposed, such as the self-medication hypothesis, common or bidirectional factor models or genetic vulnerability, there is no consensus on the aetiology of increased rates of substance use in people with psychosis, which is important for treating these patients. The dually diagnosed population is a heterogeneous group and it is likely that different models may explain comorbidity in different subgroups. The first part of the present review gives an overview of prevalence and explanation models for dual diagnosis psychosis and substance use with a focus on adolescent and young adult populations; the second part reviews the clinical course for both disorders and current psychosocial treatment options. abstract_id: PUBMED:27445939 Unraveling Executive Functioning in Dual Diagnosis. In mental health, the term dual-diagnosis is used for the co-occurrence of Substance Use Disorder (SUD) with another mental disorder. These co-occurring disorders can have a shared cause, and can cause/intensify each other's expression. Forming a threat to health and society, dual-diagnosis is associated with relapses in addiction-related behavior and a destructive lifestyle. This is due to a persistent failure to control impulses and the maintaining of inadequate self-regulatory behavior in daily life. Thus, several aspects of executive functioning like inhibitory, shifting and updating processes seem impaired in dual-diagnosis. Executive (dys-)function is currently even seen as a shared underlying key component of most mental disorders. However, the number of studies on diverse aspects of executive functioning in dual-diagnosis is limited. In the present review, a systematic overview of various aspects of executive functioning in dual-diagnosis is presented, striving for a prototypical profile of patients with dual-diagnosis. Looking at empirical results, inhibitory and shifting processes appear to be impaired for SUD combined with schizophrenia, bipolar disorder or cluster B personality disorders. Studies involving updating process tasks for dual-diagnosis were limited. More research that zooms in on the full diversity of these executive functions is needed in order to strengthen these findings. Detailed insight into the profile of strengths and weaknesses that underlies one's behavior and is related to diagnostic classifications can lead to tailor-made assessment and indications for treatment, pointing out which aspects need attention and/or training in one's self-regulative abilities.
abstract_id: PUBMED:23894784 Patients with a diagnosis of pancreatitis and "dual diagnosis" "Dual diagnosis" is usually understood as the co-occurrence of mental illness and addiction to psychoactive substances. Over the last two decades there has been increasing interest among researchers and practitioners in the theoretical aspects of dual diagnosis, the means of diagnosing it and the planning of treatment programs. When establishing a medical diagnosis it may be unclear whether psychopathological symptoms are due to psychoactive substance use or are caused by mental illness, and this uncertainty is the main difficulty in recognizing dual diagnosis. Mental disorders coexisting with substance use include schizophrenia, delusional disorder, bipolar disorder, anxiety disorders and personality disorders. The consumption of alcohol can cause a recurrence of mental illness and contribute to the need for rehospitalization. It is important to take a careful history from a patient who is suspected of co-occurring mental illness and addiction to psychoactive substances. abstract_id: PUBMED:33832406 Atypical antipsychotics in the treatment of patients with a dual diagnosis of schizophrenia spectrum disorders and substance use disorders: the results of a randomized comparative study. The article presents the results of a randomized comparative study of Aripiprazole and Quetiapine in the treatment of patients with a dual diagnosis: schizophrenia and substance use disorders. During the study, 90 of the 266 male patients were screened. Among them, 54 individuals (60%) had a previously established diagnosis of mental disorder and 36 patients (40%) had no established psychiatric diagnosis. They were randomized into three groups of 30 patients, each receiving an antipsychotic: Aripiprazole at a dose of up to 20 mg daily, Quetiapine at a dose of up to 600 mg daily, or Haloperidol at a dose of up to 30 mg daily. The efficacy of Aripiprazole and Quetiapine was evaluated using the following scales: PANSS, BPRS, VAS, and Substance Craving Scale (SCS). Drug safety was assessed by the development of adverse events, serious adverse events, or adverse reactions. Study results demonstrated the efficacy of atypical antipsychotics in the three groups. Analysis of independent variables showed significant differences between Aripiprazole and Haloperidol in PANSS and BPRS scores by Visit 4, in VAS scores by Visit 3, and in SCS scores by Visit 2. Intergroup analysis of independent variables showed significant differences between Quetiapine and Haloperidol in PANSS, VAS, and SCS scores by Visit 4. Intergroup analysis of independent variables showed significant differences between Aripiprazole and Quetiapine in the VAS and SCS scores. The correlation analysis allowed drawing conclusions about the close connection of the symptoms of schizophrenia and substance use disorders in patients with a dual diagnosis. abstract_id: PUBMED:23888766 Dual diagnosis in psychoactive substance abusing or dependent persons Background: A systematic growth in the use of psychoactive substances (SP) has been noted in recent years. The co-occurrence of mental and physical disorders related to substance abuse is an increasingly serious problem for the medical services treating these patients.
Dual diagnosis (DD) is a clinical term referring to co-morbidity or the co-occurrence in the same individual of a psychoactive substance use disorder and another psychiatric disorder. The aim of the study is to investigate the prevalence of dual diagnosis in patients with a diagnosis of substance use disorder hospitalized in the years 1994-2005, to assess the kind of co-morbid mental disorders and the course of treatment in three groups: patients with DD, patients with a diagnosis of mental disorder without substance use, and patients with a diagnosis related to substance use. Methods: A retrospective study of 4,349 case records of patients hospitalized in the department of psychiatry in the years 1994-2005. Out of this number, two groups of patients were separated: persons abusing or dependent on SP (n = 825) and patients with dual diagnosis (n = 362). The control group (n = 200) was created among patients with mental disorders and without SP abuse. Socio-demographic factors, the number and length of hospitalizations, aggressive behaviours, suicide attempts and discharges from hospital on demand were analyzed. In the DD group, the relation between substance use disorders and co-occurring mental disorders was also evaluated. Results: The frequency of DD among all patients hospitalized in the studied period of time was 8.3%, whereas among patients abusing SP it was 30.5%. This study demonstrates that patients with DD are hospitalized significantly longer, are more often discharged from hospital at their own request, more often need hospital treatment, and significantly more often attempt suicide and display aggressive behaviour. In the DD group, mental disorders were most often secondary to substance-related disorders. It was shown that patients mainly abused alcohol and that the most frequent mental disorders were mood (affective) disorders. abstract_id: PUBMED:18654956 Dual diagnosis psychosis and substance use disorders: theoretical foundations and treatment Dual Diagnosis (DD) patients with psychosis and substance use disorders (SUD) represent a large core group among patients with schizophrenia. Cannabis use disorders are most prevalent among DD patients, particularly in adolescent and young adult populations. There are different models to explain the high rates of comorbidity between psychosis and SUD. Currently, evidence is best for the model of cannabis use being a component cause of psychosis in individuals who are highly vulnerable to psychosis. There is also some evidence for the model of common vulnerability factors for psychosis and SUD. DD patients are difficult to treat as they comply poorly, their long-term outcomes are unfavourable and they suffer frequent psychotic relapses and hospitalisations. Successful treatment models integrate traditional psychiatric therapy for psychosis and therapy for addiction in one setting, modifying and adjusting the two components to the special needs of the DD patients. Integrated programmes focus mostly on long-term outpatient treatment and offer pharmacotherapy, motivational enhancement, psychoeducation, cognitive-behavioural therapy and family interventions. Current clinical research demonstrates that integrated treatment programmes can achieve significant improvements with regard to the social adjustment of, as well as decreased substance use by, DD patients.
Purpose Of Review: Treatment of dual diagnosis [co-occurrence of a substance use disorder (SUD) in patients with mental illness] poses several challenges for mental health professionals. This article seeks to review the recent advances in dual diagnosis treatment with respect to pharmacotherapy and psychosocial approaches. Recent Findings: Atypical antipsychotics are commonly used for comorbid schizophrenia and SUD. Whereas there is no difference between risperidone and olanzapine, clozapine appears to have a distinct advantage in reducing psychotic symptoms as well as substance abuse (including smoking). There is emerging evidence that quetiapine is beneficial in dually diagnosed patients, particularly those using alcohol, cocaine and amphetamine. A combination of naltrexone and sertraline was found to be effective in patients with depressive disorder and alcohol dependence. The effectiveness of atomoxetine with respect to decreasing substance abuse is yet to be established in patients with comorbid adult attention-deficit/hyperactivity disorder. Integrated intervention is the treatment of choice for patients with dual diagnosis. Summary: In spite of the high association between substance use and psychiatric disorders, there is a surprising paucity of studies related to treatment and outcome. A few well-designed studies have been recently published and more studies of this nature are required in order to address the challenges posed in the treatment of dual disorders. abstract_id: PUBMED:7966510 Clinical diagnosis of substance use disorders in private psychiatric populations. The clinical diagnoses of comorbid substance use and psychiatric disorders by psychiatrists in a private psychiatric inpatient setting were analyzed. Two hundred patients consecutively discharged from a private university-affiliated psychiatric service by clinical psychiatrists were examined for concomitant substance use and psychiatric disorders according to DSM-III-R criteria for Axis I and Axis II disorders. Fifty-nine patients (30%) were diagnosed with a comorbid substance use and psychiatric disorder on Axis I, supporting findings of previous studies. Comorbid diagnosis patients were more likely to be male (54%, 32) than were psychiatric only (noncomorbid diagnosis) patients (males 37%, 51). There were no differences between comorbid (dual diagnosis) and noncomorbid diagnoses in age, mean length of stay, or discharge type. Of the 59 patients with comorbid disorders, 83% had Axis I diagnoses, and 64% had Axis II diagnoses. Polysubstance use disorder was the most common dual diagnosis, co-occurring with an Axis I diagnosis in 47% of patients and with an Axis II diagnosis in 45%. Psychiatrists in clinical psychiatric inpatient settings diagnose substance use disorders at rates similar to public settings despite the absence of specific addiction treatment.
This suggests that while there may be a temporal relationship between the onset and exacerbation of schizophrenia with substance use, the ongoing course of the two conditions may not be directly related once both are established. Additionally, epidemiological studies indicate that a significant proportion of patients with schizophrenia have a lifetime comorbid substance use disorder (SUD), with even higher prevalence in first-episode psychosis. Cannabis is the most widely used illicit drug among these patients, who generally have a worse prognosis compared to non-substance abusing schizophrenic patients (PUBMED:20146152). The dual-diagnosis population is heterogeneous, and different models may explain comorbidity in different subgroups, but there is no consensus on the etiology of increased rates of substance use in people with psychosis (PUBMED:27445939). Furthermore, a nationwide study in Denmark showed an overall increase in patients with dual diagnosis in psychiatric treatment from 2000 to 2017, with almost half of all male inpatients and more than 40% of patients with schizophrenia, schizotypal, and delusional disorders being identified as dual diagnosis patients (PUBMED:36271867). This increase underscores the need for improved preventive measures and treatment for this underserved group. In summary, while there is evidence of a temporal relationship between substance use and the onset or exacerbation of schizophrenia in dual-diagnosis patients, the ongoing interrelationship between the courses of the two conditions is not clearly established once both disorders are present. The increase in dual-diagnosis patients over time highlights the importance of addressing this issue in psychiatric care.
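As an aside on methodology: the first abstract in this set (PUBMED:12944690) applies the Friedman two-way analysis of variance of ranks to the time patients spent on and off various substances. The sketch below shows, on entirely hypothetical numbers, how such a rank-based repeated-measures comparison might be run in Python; it is not the study's code or data.

```python
# Hedged illustration of a Friedman two-way analysis of variance of ranks,
# the statistic named in PUBMED:12944690. The "months of use" figures below
# are invented for six hypothetical patients and three substances.
from scipy.stats import friedmanchisquare

alcohol  = [10, 14, 6, 12, 9, 11]   # months per patient (hypothetical)
cannabis = [4, 8, 5, 7, 6, 9]
opioids  = [2, 3, 1, 4, 2, 5]

statistic, p_value = friedmanchisquare(alcohol, cannabis, opioids)
print(f"Friedman chi-square = {statistic:.2f}, p = {p_value:.3f}")
```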
Instruction: Pain management during invasive procedures at Italian NICUs: has anything changed in the last 5 years? Abstracts: abstract_id: PUBMED:22958050 Pain management during invasive procedures at Italian NICUs: has anything changed in the last 5 years? Objective: To ascertain the extent to which neonatal analgesia for invasive procedures has changed in the last 5 years since the publication of Italian guidelines. Methods: We compared survey data for the years 2004 and 2010 on analgesia policy and practices for common invasive procedures at Italian Neonatal Intensive Care Units (NICUs); 75 NICUs answered questionnaires for both years and formed the object of this analysis. Results: By 2010, analgesia practices for procedural pain had improved significantly for almost all invasive procedures (p < 0.05), with both non-pharmacological and pharmacological methods being adopted by the majority of NICUs (unlike the situation in 2004). The routine use of medication for major invasive procedures was still limited, however (35% of lumbar punctures, 40% of tracheal intubations, 46% during mechanical ventilation). Postoperative pain treatment was still inadequate, and 41% of facilities caring for patients after surgery did not treat pain routinely. Pain monitoring had definitely improved since 2004 (p < 0.05), but not enough: only 21 and 17% of NICUs routinely assess pain during mechanical ventilation and after surgery, respectively. Conclusion: There have been improvements in neonatal analgesia practices in Italy since national guidelines were published, but pain is still undertreated and underscored, especially during major invasive procedures. It is mandatory to address the gap between the recommendations in the guidelines and clinical practice through effective quality improvement initiatives. abstract_id: PUBMED:23039224 Pain management during invasive procedures at Italian NICUs: has anything changed in the last five years? Objective: To ascertain the extent to which neonatal analgesia for invasive procedures has changed in the last 5 years since the publication of Italian guidelines. Methods: We compared survey data for the years 2004 and 2010 on analgesia policy and practices for common invasive procedures at Italian Neonatal Intensive Care Units (NICUs); 75 NICUs answered questionnaires for both years and formed the object of this analysis. Results: By 2010 analgesia practices for procedural pain had improved significantly for almost all invasive procedures (p < 0.05), both non-pharmacological and pharmacological methods being adopted by the majority of NICUs (unlike the situation in 2004). The routine use of medication for major invasive procedures was still limited, however (35% of lumbar punctures, 40% of tracheal intubations, 46% during mechanical ventilation). Postoperative pain treatment was still inadequate, and 41% of facilities caring for patients after surgery did not treat pain routinely. Pain monitoring had definitely improved since 2004 (p < 0.05), but not enough: only 21% and 17% of NICUs routinely assess pain during mechanical ventilation and after surgery, respectively. Conclusion: There have been improvements in neonatal analgesia practices in Italy since national guidelines were published, but pain is still undertreated and underscored, especially during major invasive procedures.
It is mandatory to address the gap between the recommendations in the guidelines and clinical practice through effective quality improvement initiatives. abstract_id: PUBMED:21268371 Antibiotic-prophylaxis in the minimally invasive pain surgical procedures Minimally invasive pain surgical procedures are used more and more frequently in the treatment and management of chronic pain. These patients often face a higher risk of infection during the procedures because of acquired general conditions (like enterotomy, skin ulcers, bladder catheter). The literature does not provide specific treatment guidelines on antibiotic prophylaxis in pain therapy. This document, drawn up with a multidisciplinary approach, provides a rational, effective and functional guide to the choice and management of antibiotic prophylaxis during minimally invasive pain surgical procedures. abstract_id: PUBMED:26936922 Features and Role of Minimally Invasive Palliative Procedures for Pain Management in Malignant Pelvic Diseases: A Review. Pain is a common and debilitating symptom in pelvic cancer diseases. Failure in controlling this pain through pharmacological approaches calls for employing multimodal management and invasive techniques. Various strategies are commonly used for this purpose, including palliative radiotherapy, epidural medications and intrathecal administration of analgesic and local anesthetic drugs with pumps, and neural or plexus blockade. This review focuses on the features of minimally invasive palliative procedures (MIPPs), such as radiofrequency ablation, laser-induced thermotherapy, cryoablation, irreversible electroporation, electrochemotherapy, microwave ablation, and cementoplasty, as well as their role in palliation of cancer pelvic pain. Despite the evidence of effectiveness and safety of these interventions, there are still many barriers to accessing MIPPs, including the availability of trained staff, the lack of precise criteria of indication, and the high costs. abstract_id: PUBMED:32969522 Analysis on chronic pain management: Focus on the Italian network. The Italian Law 38/2010, 'Dispositions to guarantee access to Palliative Care and Pain Management', orders that the health care systems of Italian regions create dedicated structures for palliative care and pain therapies, according to a specific organizational model called 'Hub-Spoke', to ensure the diagnostic-therapeutic continuity of patients affected by chronic pain (CP). The aim of our study was to investigate the Italian pain therapy network, 8 years following the approval of the Law. We sent a questionnaire to the national health representatives operating in CP management. The main result emerging from the analysis concerns the management of mini-invasive procedures, showing that 93.2% of the responding facilities carry out invasive procedures, 6.8% do not perform them, and that 100% of the facilities belonging to 12 regions provided these procedures, while in eight regions more than 80% did. Finally, only 38.5% of facilities declared to have a shared protocol with the relevant territorial facilities in order to guarantee the process of care and assistance of patients affected by CP. In conclusion, our study demonstrated the efficacy of the organizational model in most of the responding facilities, although the territorial management of patients after their hospital discharge should be strengthened. abstract_id: PUBMED:25861610 Minimally invasive procedures.
Minimally invasive procedures, which include laparoscopic surgery, use state-of-the-art technology to reduce the damage to human tissue when performing surgery. Minimally invasive procedures require small "ports" from which the surgeon inserts thin tubes called trocars. Carbon dioxide gas may be used to inflate the area, creating a space between the internal organs and the skin. Then a miniature camera (usually a laparoscope or endoscope) is placed through one of the trocars so the surgical team can view the procedure as a magnified image on video monitors in the operating room. Specialized equipment is inserted through the trocars based on the type of surgery. There are some advanced minimally invasive surgical procedures that can be performed almost exclusively through a single point of entry, meaning only one small incision, like the "uniport" video-assisted thoracoscopic surgery (VATS). Not only do these procedures usually provide equivalent outcomes to traditional "open" surgery (which sometimes requires a large incision), but minimally invasive procedures (using small incisions) may offer significant benefits as well: (I) faster recovery; (II) the patient remains hospitalized for fewer days; (III) less scarring and (IV) less pain. In our current mini review we will present the minimally invasive procedures for thoracic surgery. abstract_id: PUBMED:36701566 Emerging Minimally Invasive Percutaneous Procedures for Periacetabular Osteolytic Metastases. ➤: Periacetabular osteolytic skeletal metastases are frequently associated with pain and impaired ambulatory function. Minimally invasive techniques allow for the restoration of ambulation without interrupting critical systemic cancer therapy. ➤: The open surgical management of massive periacetabular osteolytic lesions, such as by curettage, internal fixation, or complex total hip reconstruction, is associated with blood loss, hospitalization, rehabilitation, and complications such as infection or delayed wound-healing. ➤: Minimally invasive percutaneous procedures have become increasingly popular for the management of periacetabular osteolytic metastases by interventional oncologists and orthopaedic surgeons before complex open surgical procedures are considered. ➤: Minimally invasive procedures may include various methods of cancer ablation and reinforcement techniques. Minimally invasive procedures may entail cancer ablation, polymethylmethacrylate (PMMA) cement reinforcement, balloon osteoplasty, percutaneous screw fixation, or combinations of the aforementioned techniques (e.g., ablation-osteoplasty-reinforcement-internal fixation [AORIF]). abstract_id: PUBMED:21663631 The opinion of clinical staff regarding painfulness of procedures in pediatric hematology-oncology: an Italian survey. Background: Beliefs of caregivers about patient's pain have been shown to influence assessment and treatment of children's pain, now considered an essential part of cancer treatment. Painful procedures in hematology-oncology are frequently referred to by children as the most painful experiences during illness. Aim of this study was to evaluate professionals' beliefs about painfulness of invasive procedures repeatedly performed in Pediatric Hemato-Oncology Units. Methods: Physicians, nurses, psychologists and directors working in Hemato-Oncology Units of the Italian Association of Pediatric Hematology-Oncology (AIEOP) were involved in a nationwide survey.
The survey was based on an anonymous questionnaire investigating beliefs of operators about painfulness of invasive procedures (lumbar puncture, bone marrow aspirate and bone marrow biopsy) and level of pain management. Results: Twenty-four directors, 120 physicians, 248 nurses and 22 psychologists responded to the questionnaire. The score assigned to the procedural pain on a 0-10 scale was higher than 5 in 77% of the operators for lumbar puncture, 97.5% for bone marrow aspiration, and 99.5% for bone marrow biopsy. The scores assigned by nurses differed statistically from those of the physicians and directors for the pain caused by lumbar puncture and bone marrow aspiration. Measures adopted for procedural pain control were generally considered good. Conclusions: Invasive diagnostic-therapeutic procedures performed in Italian Pediatric Hemato-Oncology Units are considered painful by all the caregivers involved. Pain management is generally considered good. Aprioristically opinions about pain depend on invasiveness of the procedure and on the professional role. abstract_id: PUBMED:20704684 Analgesic techniques in minor painful procedures in neonatal units: a survey in northern Italy. Introduction: The aim of this survey was to evaluate the current practice regarding pain assessment and pain management strategies adopted in commonly performed minor painful procedures in Northern Italian Neonatal Intensive Care Units (NICUs). Methods: A multicenter survey was conducted between 2008 and 2009 in 35 NICUs. The first part of the survey form covered pain assessment tools, the timing of analgesics, and the availability of written guidelines. A second section evaluated the analgesic strategies adopted in commonly performed painful procedures. The listed analgesic procedures were as follows: oral sweet solutions alone, non-nutritive sucking (NNS) alone, a combination of sweet solutions and NNS, breast-feeding where available, and topical anesthetics. Results: Completed questionnaires were returned from 30 neonatal units (85.7% response rate). Ten of the 30 NICUs reported using pain assessment tools for minor invasive procedures. Neonatal Infant Pain Scale was the most frequently used pain scale (60%). Twenty neonatal units had written guidelines directing pain management practices. The most frequently used procedures were pacifiers alone (69%), followed by sweet-tasting solutions (58%). A 5% glucose solution was the most frequently utilized sweet-tasting solution (76.7%). A minority of NICUs (16.7%) administered 12% sucrose solutions for analgesia and the application of topical anesthetics was found in 27% of NICUs while breast-feeding was performed in 7% of NICUs. Discussion: This study found a low adherence to national and international guidelines for analgesia in minor procedures: the underuse of neonatal pain scales (33%), sucrose solution administration before heel lance (23.3%), topical anesthetics before venipuncture, or other analgesic techniques. The presence of written pain control guidelines in these regions of Northern Italy increased in recent years (from 25% to 66%). abstract_id: PUBMED:26765229 Comparative study between open and minimally invasive approach in the surgical management of esophageal leiomyoma. Introduction: Leiomyomas are the most common benign tumors of the esophagus. 
Although classically surgical enucleation through thoracotomy or laparotomy has been widely accepted as the treatment of choice, the development of endoscopic and minimally invasive procedures has completely changed the surgical management of these tumors. Material And Methods: We performed a retrospective review of all esophageal leiomyomas operated on at Hospital Universitario Ramón y Cajal (Madrid, Spain) between January 1986 and December 2014, analyzing patients' demographic data, symptomatology, tumor size and location, diagnostic tests, surgical data, complications and postoperative stay. Results: Thirteen patients were found within that period, 8 men and 5 women, with a mean age of 53.62 years (range 35-70 years). Surgical enucleation was achieved in all patients. In 8 cases (61.54%) a thoracic approach was performed (4 thoracotomies and 4 thoracoscopies), and in 5 cases (38.56%) an abdominal approach was performed (3 laparotomies and 2 laparoscopies); enucleation was carried out through a minimally invasive approach in 6 patients (46.15%). There were no cases of endoscopic resection alone. Mean surgery length was 174.38 minutes (range 70-270 minutes) and median postoperative stay was 6.5 days (range 2-27 days). No mortality and no intraoperative complications were described. No major postoperative complications were reported; however, one patient presented significant pain in his right hemithorax that required management and long-term follow-up by the Pain Management Unit. With a mean follow-up of 165.57 months (median 170; range 29-336 months) no recurrences were reported. Conclusion: Enucleation is the treatment of choice for the majority of esophageal leiomyomas. In our experience, the duration of the surgical procedure through a minimally invasive approach was longer than surgery through an open approach; however, postoperative stay was shorter in the first group. Paradoxically, incision pain after surgery (thoracic neuralgia) was found to be higher in the minimally invasive approach group. Nevertheless, none of the results obtained in the study reached statistical significance, probably due to the small sample size.
Instruction: Structural and functional measures of social relationships and quality of life among older adults: does chronic disease status matter? Abstracts: abstract_id: PUBMED:26143057 Structural and functional measures of social relationships and quality of life among older adults: does chronic disease status matter? Purpose: To evaluate the relative importance of structural and functional social relationships for quality of life (QoL) and the extent to which diagnosed chronic disease modifies these associations. Methods: Multivariate linear regression was used to investigate time-lagged associations between structural and functional measures of social relationships and QoL assessed 5 years apart by CASP-19, in 5925 Whitehall II participants (mean age 61, SD 6.0). Chronic disease was clinically verified coronary heart disease, stroke, diabetes or cancer. Results: Social relationships-QoL associations were consistent across disease status (P-values for interaction: 0.15-0.99). Larger friend network (β = 1.9, 95% CI 1.5-2.3), having a partner (β = 1.2, 95% CI 0.5-1.7), higher confiding support (β = 2.2, 95% CI 1.8-2.7) and lower negative aspects of close relationships (β = 3.3, 95% CI 2.8-3.8) were independently related to improved QoL in old age. The estimated difference in QoL due to social relationships was equivalent to up to 0.5 SD of the CASP-19 score and was stronger than the effect of chronic disease (coronary heart disease β = 2.0, 95% CI 1.4-2.6). Conclusions: We found that beneficial aspects of social relationships in relation to QoL were, in order of importance: avoiding negative aspects of close relationships, having confiding support, having a wide network of friends and having a partner. These associations were not modified by chronic disease. Thus, despite inevitable physical deterioration, we may be able to enhance a satisfying late life by optimizing our social relationships. abstract_id: PUBMED:36923236 Social Well-Being, Psychological Factors, and Chronic Conditions Among Older Adults. Background: Aging is characterized by the decline in physical health, functional status, and loss of social roles and relationships that can challenge the quality of life. Social well-being may help explain how aging individuals experience declining physical health and social relationships. Despite the high prevalence of chronic conditions among older adults, research exploring the relationship between social well-being and chronic disease is sparse. Objectives: The study aims were to investigate the relationship between social well-being and psychological factors (e.g., perceived control, life satisfaction, self-esteem, active coping, optimism, and religious coping) by chronic condition in older adults. Design: Cross-sectional study. Participants: The current study comprises older adults (N = 1,251, aged ≥ 65 y) who participated in the third wave of the National Survey of Midlife in the United States (i.e., MIDUS). Setting: MIDUS was conducted on a random-digit-dial sample of community-dwelling, English-speaking adults. Measurements: Six instruments representing psychological resources (life satisfaction, perceived control, self-esteem, optimism, active coping, and religious coping) and five dimensions of social well-being (social actualization, social coherence, social acceptance, social contribution, social integration) were measured. 
An index of chronic disease comprised self-reported data on whether participants had received a physician's diagnosis for any chronic conditions over the past year. Results: The findings indicated that the individuals without chronic conditions had significantly higher social integration, social acceptance, and social contribution scores than the individuals with chronic conditions (t = 2.26, p < 0.05, t = 2.85, p < 0.01, and t = 2.23, p < 0.05, respectively). For individuals diagnosed with more than one chronic condition, perceived control, self-esteem, and optimism were positively related to their social well-being (β = .33, p < .001, β = .17, p < .001, and β = .33, p < .001, respectively). Conclusion: Findings suggested that older adults with multiple chronic conditions have a decrease in social well-being. Chronic disease management programs may help increase social well-being among individuals with multiple chronic conditions. abstract_id: PUBMED:32980575 Multimorbidity and functional limitation: the role of social relationships. Objectives: To examine the relationship between multimorbidity and functional limitation, and how social relationships alter that association. Methods: This cross-sectional study used data collected by self-reported questionnaires from adults aged 65 years and older living in a rural area in Japan in 2017. This analysis included complete data from 570 residents. Multimorbidity status was defined as having two chronic diseases existing simultaneously in one individual, and functional status was measured by their long-term care needs. Social relationships were assessed by the Index of Social Interaction and divided into high and low levels. Multiple logistic regression analysis was used to examine the association between social relationships and functional limitation and to assess the role of social relationships in this association. Results: The logistic regression model indicated that the risk of functional limitation was higher in participants with multimorbidity than in those free of multimorbidity (OR = 2.55, 95% CI = 1.56-4.16). Compared with participants with no multimorbidity and a high level of social relationships, a low level of social relationships increased the risk of functional limitation among participants both with and without multimorbidity, with OR = 7.71, 95% CI = 3.03-19.69 and OR = 3.28, 95% CI = 1.30-8.27, respectively. However, no significant result was found in participants with multimorbidity and a high level of social relationships (P = 0.365). Conclusions: Multimorbidity was associated with functional limitations. However, this association could be increased by a low level of social relationships and decreased by a high level of social relationships. abstract_id: PUBMED:32831028 Social participation is an important health behaviour for health and quality of life among chronically ill older Chinese people. Background: Health behaviours (physical activity, maintenance of a healthy diet and not smoking) are known to be beneficial to the health and well-being of chronically ill people. With China's ageing population and increased prevalence of people with chronic diseases, the improvement of unhealthy behaviours in this population has become crucial. Although recent studies have highlighted the importance of social participation for health and quality of life (QoL) among older people, no study to date has included social participation along with more traditional health behaviours.
Therefore, this study aimed to identify associations of multiple health behaviours (social participation, physical activity, maintenance of a healthy diet and not smoking) with health and QoL outcomes (including cognitive and physical function) among chronically ill older adults in China. Methods: For this nationally representative cross-sectional study, wave 1 data from the World Health Organization's Study on global AGEing and adult health (China) were examined. In total, 6629 community-dwelling older adults (mean age, 64.9 years) with at least one chronic disease were included. Multivariate linear regression analyses were used to evaluate associations of health behaviours with health and QoL outcomes while controlling for background characteristics. Results: Greater social participation was associated with better QoL [β = 0.127, standard error (SE) = 0.002, p < 0.001], cognitive function (β = 0.154, SE = 0.033, p < 0.001) and physical function (β = -0.102, SE = 0.008, p < 0.001). Physical activity was associated with better QoL (β = 0.091, SE = 0.015, p < 0.001) and physical function (β = -0.155, SE = 0.062, p < 0.001). Sufficient fruit and vegetable consumption was associated with better QoL (β = 0.087, SE = 0.015, p < 0.001). Conclusions: Our findings suggest that social participation is an important health behaviour for quality of life and cognitive function among chronically ill older people in China. Health promotion programmes should expand their focus to include social participation as a health behaviour, in addition to physical activity, maintenance of a healthy diet and not smoking. abstract_id: PUBMED:10528450 Depression, social support, and quality of life in older adults with osteoarthritis. Purpose: To develop an understanding of the quality of life of older adults with osteoarthritis (OA) with varying levels of depression and social support as a basis for nursing interventions. Osteoarthritis in the United States is the number one chronic disease in late life and the major cause of disability in older adults. In addition to the functional disability and economic effect of OA, older people with this disease experience suffering, depression, and diminished quality of life. Design: For this cross-sectional survey, a convenience sample of 50 older adults with OA was recruited from two U.S. hospital-based arthritis clinics in northern Ohio over 3 months during 1995. Methods: During face-to-face interviews, the Arthritis Impact Scales, Center for Epidemiological Studies Depression Scale, Social Support Questionnaire, and Quality of Life Survey were used to measure osteoarthritis severity, depression, informal social support, and quality of life. Findings: Although few formal social support services were used, high levels of satisfaction from the subjects' large informal networks of family and friends were reported. In addition, satisfaction with subjects' quality of life was extremely high despite depression, co-morbid conditions, pain, and functional limitation. Conclusions: Social support appeared to play an important role in moderating the effects of pain, functional limitation, and depression on these subjects' quality of life. Nurses who work with older adults are in a unique position to help them adjust to living with osteoarthritis by providing them the support needed to help them manage their disease.
abstract_id: PUBMED:32623910 Social environment and quality of life among older people with diabetes and multiple chronic illnesses in New Zealand: Intermediary effects of psychosocial support and constraints. Purpose: In older people with diabetes, multimorbidity is highly prevalent and it can lead to poor quality of life. The overall purpose of this study was to examine the association between the social environment, psychosocial support and constraints, and overall quality of life among older people with and without diabetes and multiple chronic illnesses. Methods: Self-reported data from participants in a cohort study of older New Zealanders were analysed. Responses from 380 older people diagnosed with diabetes and multiple chronic illnesses were compared with 527 older people with no health issues on indicators related to the associations of neighbourhood, health and ageing, using structural equation modelling. Results: The final model suggests that social provision, purpose in life and capabilities mediated between the social environment and quality of life, indicating that older people with a positive social environment (i.e., neighbourhood advantage, residential stability) are much less likely to experience depression due to having good social support, meaningful life purpose and opportunities to engage. Conclusions: Perceived neighbourhood advantages, such as positive neighbourhood qualities, social cohesion and housing satisfaction, along with the focus on increasing social support, enhancing purpose in life and supporting one's capability to achieve, may serve as protective factors against depression. IMPLICATIONS FOR REHABILITATION: Environmental and personal circumstances can contribute to quality of life among older people with diabetes and multimorbidity. By providing older people with diabetes and multiple chronic illnesses a socially just environment that challenges ageism and other forms of oppression, this could reduce social disparities in health, improve inclusion and access to resources. Social and healthcare professionals are encouraged to design clinical care guidelines and rehabilitation goals from a wholistic and person/client centred approach to support older people with diabetes and multiple chronic illnesses. abstract_id: PUBMED:36767986 Association between Sense of Loneliness and Quality of Life in Older Adults with Multimorbidity. Background: Multimorbidity has been associated with adverse health outcomes, such as reduced physical function, poor quality of life (QoL), and poor self-rated health. Objective: The association between quality of life, social support, sense of loneliness and sex and age in older adult patients affected by two or more chronic diseases (multimorbidity) was evaluated. Methods: 162 patients with multimorbidity, living with family members, were included. Tests: MMSE - Mini-Mental State Examination; ADL - Activities of Daily Living; Social Schedule: demographic variables; Loneliness Scale - de Jong Gierveld; Quality of Life - FACT-G; WHOQOL-BRIEF Social relationships. Statistical Analysis: Multivariate Regression Analysis. Results: The patients with three or more diseases had worse scores for FACT-G total score (p = 0.029), QoL Physical well-being (p = 0.003), Social well-being (p = 0.003), Emotional well-being (p = 0.012), and Functional well-being (p < 0.001) than those with two. Multiple linear regression of QoL was performed with FACT-G total score, PWB, SWB, EWB, and FWB as dependent variables.
In the presence of multimorbidity, with an increase in the patient's age, FACT-G total score (B = -0.004, p = 0.482), PWB (B = -0.024, p = 0.014), SWB (B = -0.022, p = 0.051), EWB (B = -0.001, p = 0.939), and FWB (B = -0.023, p = 0.013) decreased by an average of 0.1, and as the sense of loneliness increased, FACT-G total score (B = -0.285, p < 0.001), PWB (B = -0.435, p < 0.001), SWB (B = -0.401, p < 0.001), EWB (B = -0.494, p < 0.001), and FWB (B = -0.429, p < 0.001) decreased by 0.4. Conclusions: A sense of loneliness and advancing age are associated with poor quality of life in self-sufficient elderly patients with multimorbidity. Implications For Practice: Demonstrating that loneliness, even in the presence of interpersonal relations, is predictive of worse quality of life in patients with multimorbidity helps identify people most at risk for common symptoms and lays the groundwork for research concerning both diagnosis and treatment. abstract_id: PUBMED:38332388 The effect of social frailty on mental health and quality of life in older people: a cross-sectional study. Purpose: This study aims to evaluate anxiety, depression, loneliness, death anxiety, and quality of life and investigate their relationship with social frailty in the geriatric population. Additionally, it aimed to identify social frailty predictors. Methods: The study included 136 participants admitted to the geriatric outpatient clinic. The 15-item Geriatric Depression Scale (GDS-15), the Multidimensional Scale of Perceived Social Support (MSPSS), the Cumulative Illness Rating Scale for Geriatrics (CIRS-G), the Templer Death Anxiety Scale (T-DAS), the Loneliness Scale for the Elderly (LSE), the Quality of Life Scale (CASP-19), the Generalized Anxiety Disorder-7 Test (GAD-7), the Tilburg Frailty Indicator (TFI), the FRAIL Scale, and the Clinical Frailty Scale (CFS) were administered. The TFI was used to collect data about social frailty. Results: 61.8% of participants were female, and the median age (min-max) was 72.2 (65.3-90.3) years. The prevalence rate of social frailty was 26.7%. The rates of depression, loneliness, anxiety, death anxiety, the burden of chronic disease, and frailty were higher in the social frailty group. Furthermore, logistic regression analysis revealed a strong relationship between social frailty status and widowhood (odds ratio (OR) 6.86; 95% confidence interval (95% CI), 2.42-19.37; p < 0.001), moderate to severe anxiety symptoms (OR 4.37; 95% CI 1.08-17.68; p = 0.038), and the TFI physical frailty score (OR 1.40; 95% CI 1.12-1.73; p = 0.002). Conclusion: In older adults, the social dimension of frailty is associated with quality of life and psychological state. Physical frailty and sociodemographic characteristics may affect the development of social frailty. abstract_id: PUBMED:24320819 Health-related quality of life and functional status quality indicators for older persons with multiple chronic conditions. Objectives: To explore central challenges with translating self-reported measurement tools for functional status and health-related quality of life (HRQOL) into ambulatory quality indicators for older people with multiple chronic conditions (MCCs). Design: Review. Setting: Sources including the National Quality Measures Clearinghouse and National Quality Forum were reviewed for existing ambulatory quality indicators relevant to functional status, HRQOL, and people with MCCs. Participants: Seven informants with expertise in indicators using functional status and HRQOL.
Measurements: Informant interviews were conducted to explore knowledge about these types of indicators, particularly usability and feasibility. Results: Nine important existing indicators were identified in the review. For process, identified indicators addressed whether providers assessed functional status; outcome indicators addressed quality of life. In interviews, informants agreed that indicators using self-reported data were important in this population. Challenges identified included concerns about usability due to inability to discriminate quality of care adequately between organizations and feasibility concerns regarding high data collection burden, with a correspondingly low response rate. Validity was also a concern because evidence is mixed that healthcare interventions can improve HRQOL or functional status for this population. As a possible first step, a structural standard could be systematic collection of these measures in a specific setting. Conclusion: Although functional status and HRQOL are important outcomes for older people with MCCs, few relevant ambulatory quality indicators exist, and there are concerns with usability, feasibility, and validity. Further research is needed on how best to incorporate these outcomes into quality indicators for people with MCCs. abstract_id: PUBMED:28399316 Predicting Health Promotion and Quality of Life With Symptom Clusters and Social Supports Among Older Adults With Multiple Sclerosis. Symptoms common among individuals with multiple sclerosis (MS) may influence health promotion and quality of life, especially among older adults, who often experience multiple chronic conditions. To identify and examine symptom clusters' effect on health promotion and quality of life, data from 215 adults with MS older than 60 (average years with diagnosis = 29) were analyzed. Correlations among symptoms ranged from 0.33 to 0.81. Factor analysis identified two symptom clusters: (a) physical/psychological/cognitive symptoms and (b) pain symptoms. In multiple hierarchical regressions, controlling for demographics and functional limitations, physical/psychological/cognitive symptoms significantly improved prediction on Health-Promoting Lifestyle Profile II interpersonal relations, stress management, and total scores; pain symptoms predicted nutrition scores. Both symptom clusters predicted spiritual growth and quality of life. Social support was a significant predictor of all outcomes. Symptom clusters, along with social support, should be considered in care and interventions for older adults with MS. [Journal of Gerontological Nursing, 43(10), 27-36.]. Answer: The relationship between structural and functional measures of social relationships and quality of life (QoL) among older adults has been explored in various studies, and chronic disease status appears to be an important factor to consider. One study found that the associations between social relationships and QoL were consistent across different chronic disease statuses, including clinically verified coronary heart disease, stroke, diabetes, or cancer. The study highlighted that beneficial aspects of social relationships in relation to QoL were, in order of importance: avoiding negative aspects of close relationships, having confiding support, having a wide network of friends, and having a partner. These associations were not modified by chronic disease, suggesting that optimizing social relationships can enhance a satisfying late life despite physical deterioration (PUBMED:26143057). 
Another study indicated that older adults with multiple chronic conditions had decreased social well-being, and for those diagnosed with more than one chronic condition, psychological factors such as perceived control, self-esteem, and optimism were positively related to their social well-being (PUBMED:36923236). Further research showed that multimorbidity was associated with functional limitations, but this association could be increased by a low level of social relationships and decreased by a high level of social relationships (PUBMED:32980575). Additionally, social participation was identified as an important health behavior for QoL and cognitive function among chronically ill older people in China (PUBMED:32831028). In older adults with osteoarthritis, social support played an important role in moderating the effects of pain, functional limitation, and depression on QoL (PUBMED:10528450). Similarly, for older people with diabetes and multiple chronic illnesses, the social environment, psychosocial support, and constraints were associated with overall QoL (PUBMED:32623910). The sense of loneliness and advancing age were associated with poor QoL in self-sufficient elderly patients with multimorbidity (PUBMED:36767986), and social frailty was linked to quality of life and psychological state in older adults (PUBMED:38332388). In summary, while chronic disease status does matter, the structural and functional measures of social relationships play a significant role in the QoL among older adults, and these associations are generally consistent regardless of chronic disease status. Optimizing social relationships and addressing psychological factors can potentially enhance QoL even in the presence of chronic diseases.
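A minimal sketch of the kind of time-lagged regression described in PUBMED:26143057 is shown below, with baseline social-relationship measures predicting CASP-19 quality of life five years later. All column names, the input file, and the covariate set are hypothetical placeholders chosen for illustration; they are not the actual Whitehall II variables or the authors' exact model.

# Illustrative time-lagged OLS regression (hypothetical variable names throughout).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort.csv")  # assumed file: one row per participant

model = smf.ols(
    "casp19_followup ~ friend_network_size + has_partner + confiding_support"
    " + negative_close_relationships + casp19_baseline + age + sex + chronic_disease",
    data=df,
).fit()
print(model.summary())

# An interaction term such as confiding_support:chronic_disease would test whether
# chronic disease status modifies the social relationship-QoL association, which is
# the effect-modification question posed in the instruction above.

Including the baseline CASP-19 score as a covariate is what makes the model time-lagged rather than purely cross-sectional.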
Instruction: Does carbon dioxide retention during exercise predict a more rapid decline in FEV1 in cystic fibrosis? Abstracts: abstract_id: PUBMED:16040875 Does carbon dioxide retention during exercise predict a more rapid decline in FEV1 in cystic fibrosis? Background: Carbon dioxide (CO2) retention during exercise is uncommon in mild to moderate lung disease in cystic fibrosis (CF). The ability to deal with increased CO2 is dependent on the degree of airflow limitation and inherent CO2 sensitivity. CO2 retention (CO2R) can be defined as a rise in P(ET)CO2 tension of ≥5 mm Hg with exercise together with a failure to reduce P(ET)CO2 tension after peak work by at least 3 mm Hg by the termination of exercise. Aim: To ascertain if carbon dioxide retention during exercise is associated with more rapid decline in lung function. Methods: Annual spirometric and exercise data from 58 children aged 11-15 years with moderate CF lung disease, collected between 1996 and 2002, were analysed. Results: The mean FEV1 at baseline for the two groups was similar; the CO2R group (n = 15) was 62% and the non-CO2 retention group (CO2NR) was 64% (n = 43). The decline in FEV1 after 12 months was -3.2% (SD 1.1) in the CO2R group and -2.3% (SD 0.9) in the CO2NR group. The decline after 24 months was -6.3% (SD 1.3) and -1.8% (SD 1.1) respectively. After 36 months, the decline in FEV1 was -5.3% (SD 1.2) and -2.6% (SD 1.1) respectively. The overall decline in lung function was 14.8% (SD 2.1) in the CO2R group and 6.7% (SD 1.8) in the CO2NR group. Using a decline in FEV1 of >9% as the primary outcome measure, final multivariate analysis showed that the relative risks for this model were (95% CIs in parentheses): ΔP(ET)CO2 11.61 (3.41 to 24.12), peak VO2 1.23 (1.10 to 1.43), and initial FEV1 1.14 (1.02 to 1.28). Conclusion: Results show that the inability to defend carbon dioxide during exercise is associated with a more rapid decline in lung function. abstract_id: PUBMED:31701307 Non-invasive carbon dioxide monitoring in patients with cystic fibrosis during general anesthesia: end-tidal versus transcutaneous techniques. Introduction: The gold standard for measuring the partial pressure of carbon dioxide remains arterial blood gas (ABG) analysis. For patients with cystic fibrosis undergoing general anesthesia or polysomnography studies, continuous non-invasive carbon dioxide monitoring may be required. The current study compares end-tidal (ETCO2), transcutaneous (TCCO2), and capillary blood gas carbon dioxide (Cap-CO2) monitoring with the partial pressure of carbon dioxide (PaCO2) from an ABG in patients with cystic fibrosis. Methods: Intraoperatively, a single CO2 value was simultaneously obtained using ABG (PaCO2), capillary (Cap-CO2), TCCO2, and ETCO2 techniques. Tests for correlation (Pearson's coefficient) and agreement (Bland-Altman analysis) were performed. Data were further stratified into two subgroups based on body mass index (BMI) and percent predicted forced expiratory volume in 1 s (FEV1%). Additionally, the absolute difference in the TCCO2, ETCO2, and Cap-CO2 values versus PaCO2 was calculated. The mean ± SD differences were compared using a paired t test while the number of times the values were ≤ 3 mmHg and ≤ 5 mmHg from the PaCO2 were compared using Fisher's exact test. Results: The study cohort included 47 patients (22 males, 47%) with a mean age of 13.4 ± 7.8 years, median (IQR) BMI of 18.7 kg/m2 (16.7, 21.4), and mean FEV1% of 87.3 ± 18.3%.
Bias (SD) was 4.8 (5.7) mmHg with Cap-CO2 monitoring, 7.3 (9.7) mmHg with TCCO2 monitoring, and 9.7 (7.7) mmHg with ETCO2 monitoring. Although there was no difference in the degree of bias in the population as a whole, when divided based on FEV1% and BMI, there was greater bias with ETCO2 in patients with a lower FEV1% and a higher BMI. The Cap-CO2 vs. PaCO2 difference was 5.2 ± 5.3 mmHg (SD), with 16 (48%) ≤ 3 mmHg and 20 (61%) ≤ 5 mmHg from the ABG value. The TCCO2-PaCO2 difference was 9.1 ± 7.2 mmHg (SD), with 11 (27%) ≤ 3 mmHg and 15 (37%) ≤ 5 mmHg from the ABG value. The ETCO2-PaCO2 mean difference was 11.2 ± 7.9 mmHg (SD), with 5 (12%) ≤ 3 mmHg and 11 (26%) ≤ 5 mmHg from the ABG value. Conclusions: While Cap-CO2 most accurately reflects PaCO2 as measured on ABG, of the non-invasive continuous monitors, TCCO2 was a more accurate and reliable measure of PaCO2 than ETCO2, especially in patients with worsening pulmonary function (FEV1% ≤ 81%) and/or a higher BMI (≥ 18.7 kg/m2). abstract_id: PUBMED:31276310 Ventilation efficiency to exercise in patients with cystic fibrosis. Introduction: The exercise ventilation efficiency index in cardiopulmonary exercise testing (CPET) is elevated in patients with heart failure, providing useful information on disease progression and prognosis. Few data, however, exist for the ventilation efficiency index among cystic fibrosis (CF) patients. Aims: To assess the ventilation efficiency index (ΔVE/ΔVCO2 or V'E/V'CO2 slope) and intercept of ventilation (VE-intercept) in CF patients with mild, moderate, and severe cystic fibrosis (CF) lung disease. To assess possible correlations with ventilation inhomogeneity and structural damage as seen on high resolution computed tomography (HRCT). Methods: CF patients with mild (FEV1 > 80%, n = 47), moderate (60% < FEV1 < 80%, n = 21), and severe (FEV1 < 60%, n = 9) lung disease, with a mean age of 14.9 years, participated. Peak oxygen uptake (VO2 peak), pulmonary ventilation at peak exercise (VE), respiratory equivalent ratios for oxygen and carbon dioxide at peak exercise (VE/VO2, VE/VCO2), end-tidal CO2 (PetCO2), and ΔVE/ΔVCO2, ΔVE/ΔVO2 in a maximal CPET, along with spirometry and multiple breath washout indices, were examined. HRCT scans were performed and scored using the Bhalla score. Results: Mean ΔVE/ΔVCO2 showed no significant differences among the three groups (P = .503). Mean VEint discriminated significantly among the different groups (p < 0.001). The ventilation efficiency index did not correlate with either LCI or Bhalla score. However, VE together with ΔVE/ΔVCO2 slope could predict Bhalla score (r2 = 0.869, P = .006). Conclusion: No significant differences were found regarding ΔVE/ΔVCO2 slope levels between the three groups. The ventilation intercept (VEint) was significantly elevated as disease progressed, reflecting increased dead space ventilation. CF patients retain their ventilation efficiency to exercise even as lung function deteriorates by adopting a higher respiratory rate along with increased dead space ventilation. abstract_id: PUBMED:29954708 Correlation between parameters of volumetric capnography and spirometry during a submaximal exercise protocol on a treadmill in patients with cystic fibrosis and healthy controls. Introduction: Spirometry is the most frequently used test to evaluate the progression of lung damage in cystic fibrosis (CF). However, it has shown low sensitivity in detecting early lung changes.
In this context, our objective was to identify the correlation between parameters of volumetric capnography (VCap) and spirometric parameters during a submaximal treadmill exercise test. Methods: A cross-sectional and controlled study which included 64 patients with CF (CFG) and 64 healthy control subjects (CG) was performed. The CFG was from a university hospital and the CG from local schools. All participants underwent spirometry and VCap before, during and after the submaximal treadmill exercise test. The main variable analyzed by VCap was the slope of phase 3 (slope 3), which indicates the exhaled carbon dioxide concentration at the end of expiration and expresses the heterogeneity of gas emptying in the pulmonary periphery. The correlation analysis between spirometry and VCap was conducted using the Spearman correlation test, considering α=0.05. Results: The indices analyzed by VCap showed correlation with parameters of spirometry. Slope 3 showed an inverse correlation with forced expiratory volume in the first second of forced vital capacity (FEV1) in both groups and at all moments of the submaximal treadmill exercise test. Forced vital capacity (FVC) and FEV1/FVC ratio showed an inverse correlation with slope 3 only for the CFG. Values of slope 3 corrected by the spontaneous tidal volume (VT) and end-tidal carbon dioxide tension (PetCO2) showed results similar to slope 3 analyzed separately. Conclusion: Parameters of VCap such as slope 3, slope 3/VT and slope 3/PetCO2 correlated with sensitive variables of spirometry such as FEV1, FVC and FEV1/FVC ratio. For the evaluated variables, there was consistency in the correlation between the two tests, which may indicate the impact of CF on pulmonary physiology. abstract_id: PUBMED:25985982 Ventilatory abnormalities in patients with cystic fibrosis undergoing the submaximal treadmill exercise test. Background: Exercise has been studied as a prognostic marker for patients with cystic fibrosis (CF), as well as a tool for improving their quality of life and analyzing lung disease. In this context, the aim of the present study was to evaluate and compare variables of lung functioning. Our data included: (i) volumetric capnography (VCAP) parameters: expiratory minute volume (VE), volume of exhaled carbon dioxide (VCO2), VE/VCO2, ratio of dead space to tidal volume (VD/VT), and end-tidal carbon dioxide (PetCO2); (ii) spirometry parameters: forced vital capacity (FVC), percent forced expiratory volume in the first second of the FVC (FEV1%), and FEV1/FVC%; and (iii) cardiorespiratory parameters: heart rate (HR), respiratory rate, oxygen saturation (SpO2), and Borg scale rating at rest and during exercise. The subjects comprised children, adolescents, and young adults aged 6-25 years with CF (CF group [CFG]) and without CF (control group [CG]). Methods: This was a clinical, prospective, controlled study involving 128 male and female patients (64 with CF) of a university hospital. All patients underwent treadmill exercise tests and provided informed consent after study approval by the institutional ethics committee. Linear regression, Kruskal-Wallis test, and Mann-Whitney test were performed to compare the CFG and CG. The α value was set at 0.05. Results: Patients in the CFG showed significantly different VCAP values and spirometry variables throughout the exercise test.
Before, during, and after exercise, several variables were different between the two groups; statistically significant differences were seen in the spirometry parameters, SpO2, HR, VCO2, VE/VCO2, PetCO2, and Borg scale rating. VCAP variables changed at each time point analyzed during the exercise test in both groups. Conclusion: VCAP can be used to analyze ventilatory parameters during exercise. All cardiorespiratory, spirometry, and VCAP variables differed between patients in the CFG and CG before, during, and after exercise. abstract_id: PUBMED:15350995 Diffusing capacity for carbon monoxide (T(LCO)) and oxygen saturation during exercise in patients with cystic fibrosis. Objectives: To estimate the value of diffusing capacity for carbon monoxide (T(LCO)) in patients with cystic fibrosis and to evaluate its ability to predict arterial desaturation during exercise. Method: Forty-four patients (9-30 years) with cystic fibrosis performed pulmonary function tests with measurement of T(LCO) and a bicycle incremental exercise test. They represented a wide variation in disease severity: mean Shwachman score: 77.8 (range: 40-100), mean FEV1%: 72.8 (range: 17-131). This study investigated the relationship between T(LCO), lung volumes and exercise data. Results: T(LCO) remained normal for a long time in patients with cystic fibrosis: 82% of them showed a normal T(LCO) (mean value: 91.3% of predicted). T(LCO) was significantly correlated with FEV1, residual volume, maximal work load and maximum oxygen uptake. A fall in arterial oxygen saturation was uncommon in our study (five patients) and not significantly correlated with T(LCO). Conclusions: T(LCO) is a good criterion of severity of cystic fibrosis but remains unreliable for predicting values above which physical activity is safe, without arterial desaturation. Exercise tests should be proposed in order to evaluate exercise adaptation of each patient and determine which factor limits maximal performance. abstract_id: PUBMED:28950435 Lung clearance index (LCI) as a predictor of exercise limitation among CF patients. Introduction: FEV1 is often considered the gold standard to monitor lung disease in cystic fibrosis (CF). Recently, there has been increasing interest in multiple breath washout (MBW) and cardiopulmonary exercise testing (CPET) as alternative or even more sensitive techniques. However, limited data exist on associations among the above methods. Aim: To evaluate the correlations between outcome measures of MBW and CPET and to examine if ventilation inhomogeneity can predict exercise intolerance. Subjects And Methods: Ninety-seven children and adults with CF (47 males, mean [range] age 14.9 (6.6; 26.7) years, mean FEV1: 90.8% predicted, mean lung clearance index [LCI]: 11.4, and mean peak oxygen uptake [VO2 peak]: 82.4% predicted) performed spirometry, MBW, and CPET on the same day during their admission or outpatient visit. Results: LCI, m1/m0, and m2/m0 (P < 0.001) as well as VO2 peak%, breathing reserve (BR), minute ventilation (VE)/VO2 (P < 0.001), and VE/carbon dioxide release (VCO2) (P = 0.006) correlated significantly with FEV1%. LCI, m1/m0, and m2/m0 correlated with VO2 peak (P ≤ 0.001), VE (L/min) (P < 0.05), BR (P < 0.01), VE/VO2 (P < 0.001), and VE/VCO2 (P < 0.01). Multiple regression analysis showed that LCI could predict BR% (P < 0.001, r2: 0.272) and VE/VO2 (P < 0.001, r2: 0.207) while LCI and FRC could predict VO2 peak% (P < 0.001, r2: 0.216) and VE/VCO2 (P < 0.001, r2: 0.226).
Conclusion: Ventilation inhomogeneity as indicated by increased LCI is associated with less efficient ventilation during strenuous exercise and negatively impacts exercise capacity in CF. abstract_id: PUBMED:37671821 Mechanisms of ventilatory limitation to maximum exercise in children and adolescents with chronic airway diseases. Introduction: Exercise intolerance is common in chronic airway diseases (CAD), but its mechanisms are still poorly understood. The aim of this study was to evaluate exercise capacity and its association with lung function, ventilatory limitation, and ventilatory efficiency in children and adolescents with cystic fibrosis (CF) and asthma when compared to healthy controls. Methods: Cross-sectional study including patients with mild-to-moderate asthma, CF and healthy children and adolescents. Anthropometric data, lung function (spirometry) and exercise capacity (cardiopulmonary exercise testing) were evaluated. Primary outcomes were peak oxygen consumption (VO2 peak), forced expiratory volume in 1 s (FEV1), breathing reserve (BR), ventilatory equivalent for oxygen consumption (VE/VO2) and for carbon dioxide production (VE/VCO2), both at the ventilatory threshold (VT1) and peak exercise. Results: The mean age of the 147 patients included was 11.8 ± 3.0 years. There were differences between asthmatics and CF children when compared to their healthy peers for anthropometric and lung function measurements. Asthmatics showed lower VO2 peak when compared to both healthy and CF subjects, although no differences were found between healthy and CF patients. A lower BR was found when CF patients were compared to both healthy and asthmatic subjects. Both CF and asthmatic patients presented higher values for VE/VO2 and VE/VCO2 at VT1 when compared to healthy individuals. For both VE/VO2 and VE/VCO2 at peak exercise, CF patients presented higher values when compared to their healthy peers. Conclusion: Patients with CF achieved good exercise capacity despite low ventilatory efficiency, low BR, and reduced lung function. However, asthmatics reported reduced cardiorespiratory capacity and normal ventilatory efficiency at peak exercise. These results demonstrate differences in the mechanisms of ventilatory limitation to maximum exercise testing in children and adolescents with CAD. abstract_id: PUBMED:17099020 Pulmonary abnormalities on high-resolution CT demonstrate more rapid decline than FEV1 in adults with cystic fibrosis. Background: FEV1 may remain stable while high-resolution CT (HRCT) appearances deteriorate in children with cystic fibrosis (CF). However, spirometry results commonly decline in older age groups. Objectives: To compare the rate of decline in HRCT abnormalities and spirometry results over time in an adult cohort with CF. Methods: The HRCT scans of 39 consecutive patients (19 males and 20 females; mean age, 22 years; range, 16 to 48 years) with two HRCT scans > 18 months apart were randomly and blindly scored using a modified Bhalla scoring system by two independent chest radiologists. Age, body mass index, spirometry, and sputum cultures were recorded at the time of both HRCTs. Rates of change in clinical parameters and HRCT abnormalities were calculated and compared using repeated-measures analysis of variance. Results: Mean FEV1 declined at a rate of -2.3% per year, while mean HRCT total score declined at a rate of -2.7% per year. Several individual HRCT abnormalities as well as HRCT total scores declined significantly faster than FEV1 (p < 0.001).
Six patients showed stable spirometry results but worsening HRCT scores. Mucus plugging and extent of bronchiectasis deteriorated at a more rapid rate in the group with mildly impaired lung function. Air trapping, collapse/consolidation, peribronchial thickening, severity of bronchiectasis, and generations of bronchial divisions involved deteriorated at a more rapid rate in the group with moderate-to-severely impaired lung function. Conclusions: Adult CF patients have more rapid rates of decline in HRCT abnormalities than in spirometry results. Individual HRCT abnormalities decline at different rates depending on the degree of lung function impairment. abstract_id: PUBMED:35096701 Assessing the Utility of an Outpatient Exercise Program for Children With Cystic Fibrosis: A Quality Improvement Project. Children with cystic fibrosis (CF) (cwCF) suffer from inadequate weight gain, failure to thrive, and muscle weakness. The latter may be secondary to disuse atrophy (muscle wasting or reduction in muscle size associated with reduced physical activity and inflammation). Handgrip strength (HGS) is a reliable surrogate for muscle strength and lean body mass. Data from our CF center have shown an association between low HGS and forced expiratory volume in 1 s (FEV1) in cwCF. High-intensity interval training (HIIT) improves physical strength. Therefore, we devised a project to assess implementing a HIIT exercise program in the home setting, in order to improve physical strength in cwCF with HGS ≤ 50th percentile. Patients were instructed to complete 3-5 sessions of HIIT exercises per week. Wilcoxon matched-pairs signed-rank tests were used to compare HGS, FEV1, and body mass index (BMI) percentile at baseline and at a follow-up clinic visit. Follow-up was limited due to the COVID pandemic. Adherence to the HIIT regimen was poor. A total of twenty-nine cwCF participated in the program. However, a total of 13 individuals reported some form of moderate activity at follow-up and therefore constituted our final study population. There was a statistically significant increase in absolute grip strength (AGS) and FEV1 for these individuals. Even though the home HIIT protocol was not followed, the project demonstrated that moderate physical activity in cwCF can lead to significant improvement in HGS and overall physical strength. Answer: Yes, carbon dioxide retention during exercise is associated with a more rapid decline in FEV1 in cystic fibrosis. A study analyzed annual spirometric and exercise data from 58 children with moderate cystic fibrosis lung disease and found that the group with carbon dioxide retention (CO2R) experienced a greater decline in FEV1 over 12, 24, and 36 months compared to the non-CO2 retention group (CO2NR). The overall decline in lung function was 14.8% in the CO2R group compared to 6.7% in the CO2NR group. The study concluded that the inability to defend against carbon dioxide during exercise is associated with a more rapid decline in lung function (PUBMED:16040875).
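The group contrast that drives this answer can be reproduced approximately from the summary statistics quoted in PUBMED:16040875 (overall FEV1 decline of 14.8%, SD 2.1, in the CO2R group of 15 children versus 6.7%, SD 1.8, in the CO2NR group of 43). The sketch below is illustrative only: it assumes the quoted dispersion values are standard deviations and substitutes Welch's t-test from summary statistics for whatever analysis the original authors actually ran.

# Illustrative comparison of overall FEV1 decline between groups (PUBMED:16040875).
# Assumption: the reported "SD" values are standard deviations, not standard errors.
from scipy import stats

co2r_mean, co2r_sd, co2r_n = 14.8, 2.1, 15     # CO2-retaining group
co2nr_mean, co2nr_sd, co2nr_n = 6.7, 1.8, 43   # non-retaining group

t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=co2r_mean, std1=co2r_sd, nobs1=co2r_n,
    mean2=co2nr_mean, std2=co2nr_sd, nobs2=co2nr_n,
    equal_var=False,  # Welch correction for unequal variances and group sizes
)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.2g}")

With these inputs the difference is large relative to the quoted spreads, which is consistent with the abstract's conclusion that CO2 retention marks a faster decline in lung function.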
Instruction: Ultrasound-guided greater occipital nerve block: an efficient technique in chronic refractory migraine without aura? Abstracts: abstract_id: PUBMED:25794201 Ultrasound-guided greater occipital nerve block: an efficient technique in chronic refractory migraine without aura? Background: The effectiveness of greater occipital nerve block (GONB) in patients with primary headache syndromes is controversial. Few studies have been evaluated the usefulness of GONB in patients with migraine without aura (MWOA). Objective: To compare the effectiveness of ultrasound-guided GONB using bupivacaine 0.5% and placebo on clinical improvement in patients with refractory MWOA in a randomized, double-blinded clinical trial. Study Design: A prospective, randomized, placebo-controlled, double-blind pilot trial. Setting: Physical medicine and rehabilitation and neurology departments of a University Hospital. Methods: Thirty-two patients with a diagnosis of MWOA according to the International Classification of Headache Disorders-II criteria were included in the study. Twenty-three patients (2 men, 21 women) completed the study. They were randomly assigned to receive either GONB with local anesthetic (bupivacaine 0.5% 1.5 mL) or greater occipital nerve (GON) injection with normal saline (0.9% 1.5 mL). Ultrasound-guided GONB was performed to more accurately locate the nerve. All procedures were performed using a 7 - 13 MHz high-resolution linear ultrasound transducer. The treatment group was comprised of 11 patients and the placebo group was comprised of 12 patients. The primary outcome measure was the change in the headache severity score during the one-month post-intervention period. Headache severity was assessed with a visual analogue scale (VAS) from 0 (no pain) to 10 (intense pain). Results: In both groups, a decrease in headache intensity on the injection side was observed during the first post-injection week and continued until the second week. After the second week, the improvement continued in the treatment group, and the VAS score reached 0.97 at the end of the fourth week. In the placebo group after the second week, the VAS values increased again and nearly reached the pre-injection levels. The decrease in the monthly average pain intensity score on the injected side was statistically significant in the treatment group (P = 0.003), but not in the placebo group (P = 0.110). No statistically significant difference in the monthly average pain intensity score was observed on the uninjected side in either group (treatment group, P = 0.994; placebo group, P = 0.987). No serious side effect was observed after the treatment in either group. Only one patient had a self-limited vaso-vagal syncope during the procedure. Limitations: This trial included a relatively small sample. This may have been the result of the inclusion of only those patients who correctly completed their pain diaries. Another major limitation is the short follow-up duration. Patients were followed for one month after the injection, thus relatively long-term effects of the injection have not been observed. Conclusions: Ultrasound guided GONB with 1.5 mL of 0.5% bupivacaine for the treatment of migraine patients is a safe, simple, and effective technique without severe adverse effects. To increase the effectiveness of the injection, and to implement the isolated GONB, ultrasonography guidance could be suggested. abstract_id: PUBMED:28360765 Effectiveness of Greater Occipital Nerve Blocks in Migraine Prophylaxis. 
Introduction: Peripheral nerve blocks have been used in primary headache treatment for a long time. In this study, we aimed to examine the efficiency of greater occipital nerve (GON) block in migraine prophylaxis. Methods: Data from migraine without aura patients who had GON block were collected and divided into two groups: Group PGON (n=25), which included patients who were under medical prophylaxis and had GON block, and Group GON (n=53), which included patients who had only GON blocks. Migraine was diagnosed using the International Headache Society (IHS) classification. Data of 78 patients were analyzed. Headache attack frequency, headache duration, and severity were compared between and within groups in a 3-month follow-up period. Results: The decrease in headache parameters after GON block was similar in both groups. Headache attack frequency decreased from 15.73±7.21 (pretreatment) to 4.52±3.61 (3rd month) in Group GON and from 13.76±8.07 to 3.28±2.15 in Group PGON (p<0.05). Headache duration decreased from 18.51±9.43 to 8.02±5.58 at the 3rd month in Group GON and from 15.20±9.16 to 7.20±4.16 in Group PGON (p<0.05). Headache severity decreased from 8.26±1.32 to 5.16±2.64 in Group GON and from 8.08±0.90 to 5.96±1.20 in Group PGON (p<0.05). There was no statistically significant difference between the groups in the 3rd month after treatment (p>0.05). Conclusion: This study showed significant decreases in headache parameters in both groups. As GON blocks were performed in patients unresponsive to medical prophylaxis, a decrease in the headache parameters in Group PGON similar to that in Group GON can be attributed to GON blocks. Consequently, these results show that repeated GON blocks with local anesthetic can be an effective alternative treatment in migraine patients who are unresponsive to medical prophylaxis or who do not prefer to use medical prophylaxis. abstract_id: PUBMED:25705314 Chronic migraine - new treatment options. Chronic migraine (CM) is defined as headache occurring more than fifteen days/month for at least three consecutive months, with headache having the clinical features of migraine without aura for at least eight days per month. Recently, new treatment options became available for chronic migraine patients. Topiramate is effective in chronic migraine, in the presence or absence of medication overuse, and/or other migraine prophylaxis. Efficacy of onabotulinumtoxin A as a preventive treatment of chronic migraine has been shown in the PREEMPT studies. Occipital nerve stimulation (ONS) is an invasive treatment for refractory chronic headaches. ONS has encouraging results in refractory chronic migraine patients in commercially funded, multi-centre randomized trials. abstract_id: PUBMED:11588633 Anesthetic blockade of the greater occipital nerve in migraine prophylaxis. Migraine involves a great many encephalic structures in its pathophysiology, with the trigeminal nerve (TN) being one of the main ones. For the purpose of determining a possible influence of the greater occipital nerve (GON) on migraine behavior, 37 patients who showed this pathology were studied. Using a double-blind "cross-over" design and submitting those patients to GON infiltration with bupivacaine 0.5% (BP) and physiological serum 0.9% (PS), the clinical effects were evaluated: subjectively, through a pain analytical visual scale; objectively, by determining the threshold of pain perception (algometry).
The comparison between the two groups (BP-PS) and (PS-BP) showed that the number and duration of the attacks did not show statistically significant differences during the study. The intensity of the attacks was lower in group (BP-PS) only after the second infiltration (p=0.020); at the other time points, no differences were observed between the groups. The conclusion is that anesthetic blockade of the GON with BP does not change the number or duration of crises, but it does produce a reduction in intensity 60 days after the infiltration. The results shown here suggest that the GON participates in cranial nociceptive modulation during crises of migraine without aura. abstract_id: PUBMED:33386026 Entrapment of the Greater Occipital Nerve with Chronic Migraine and Severe Facial Pain: A Case Report. Migraine is thought to be a primary neurovascular headache due to brain dysfunction and is known to involve peripheral and central sensitization. A female patient with chronic migraine symptoms for 30 years reported severe pain in the deep ear and face. This headache always showed the same pattern and temporal progression. The sudden onset of ache and throbbing pain in the right temporo-occipital area extended to the left temporo-occipital areas. She felt sick as if the head would burst, and nausea and vomiting occurred. During the last 3 years, the patient endured sharp pain in bilateral deep ears and severe pain in the face as if all the facial bones were broken, and tears flowed. Chronic disabling headache and facial pain improved with the decompression of the greater occipital nerve. This case suggests that peripheral sensitization may be related to the pathophysiology of migraine, especially in migraine without aura. abstract_id: PUBMED:17876398 Massaging over the greater occipital nerve reduces the intensity of migraine attacks: evidence for inhibitory trigemino-cervical convergence mechanisms. Activation of the trigemino-cervical system constitutes one of the first steps in the genesis of migraine. The objective of this study was to confirm the presence of trigemino-cervical convergence mechanisms and to establish whether such mechanisms may also be of inhibitory origin. We describe a case of a 39-year-old woman suffering from episodic migraine who showed a significant improvement in her frontal headache during migraine attacks if the greater occipital nerve territory was massaged after the appearance of static mechanical allodynia (cortical sensitization). We review trigemino-cervical convergence and diffuse nociceptive inhibitory control (DNIC) mechanisms and suggest that the convergence mechanisms are not only excitatory but also inhibitory. abstract_id: PUBMED:15645833 Local anesthetic blocks of the second cervical ganglion: a technique with application in occipital headache. Dissections of five human adult cadavers revealed that the C2 spinal ganglion bears a constant relationship to the dorsal aspect of the lateral atlanto-axial joint. Radiologically, the ganglion lies extradurally opposite the midpoint of the silhouette of the lateral atlanto-axial joint space. Needles can be introduced onto this target point using fluoroscopic control and used to perform selective local anesthetic blocks of the C2 spinal nerve. This technique is applicable in cases where it is difficult to decide on clinical grounds whether occipital headaches are due to an upper cervical abnormality or are a symptom of tension headache or common migraine.
In particular, the technique anesthetizes the otherwise inaccessible articular branches of the median and lateral atlanto-axial joints, which may be an occult source of headache. abstract_id: PUBMED:35421036 Impact of Greater Occipital Nerve Block on Photophobia Levels in Migraine Patients. Background: To study the effect of greater occipital nerve (GON) block on migraine-associated photophobia levels. Photophobia is one of the most bothersome symptoms reported by migraine patients. Studies investigating the impact of migraine treatment on this symptom are scarce. Methods: This is an observational prospective case-control study. Patients with migraine and photophobia attending a Headache Clinic were recruited. Cases were defined as patients in whom GON block was performed, following usual clinical practice guidelines. All patients were evaluated with the Hospital Anxiety and Depression Scale, the Migraine Specific Quality of Life Questionnaire, the Utah Photophobia Symptom Impact Scale (UPSIS-12), and the Korean Photophobia Questionnaire (KUMC-8), both at the first visit (V1) and one week after (V2). Results: Forty-one patients were recruited, 28 (68.3%) cases and 13 (31.7%) controls. At V1, there were no significant differences in the median [p25-p75] score of UPSIS-12 in cases vs controls (32.0 [21.0-34.0] vs 30.5 [22.0-37.0], P = 0.497) or KUMC-8 (6.5 [5.5-7.0] vs 7.0 [6.0-8.0], P = 0.463). At V2, cases experienced a significant improvement in UPSIS-12 of -5.5 [-8.8 to -1.3] and in KUMC-8 of -0.5 [-2.0 to 0], whereas there were no significant changes in the control group. Migraine with aura patients presented a higher UPSIS-12 score at V1 (33.5 [24.5-37.0] vs 26.0 [16.0-35.0]) and less improvement at V2 after GON block compared with migraine without aura patients (-4.0 [-6.0 to -1.0] vs -8.0 [-17.0 to -2.0]), although statistical significance was not achieved (P = 0.643 and P = 0.122, respectively). There was no significant variation in the remaining scales. Conclusions: Greater occipital nerve block improves migraine-associated photophobia, measured with UPSIS-12 and KUMC-8. Patients without aura may exhibit a greater improvement. Physicians could consider GON block for management of photophobia in migraine patients. abstract_id: PUBMED:30285507 Greater occipital and supraorbital nerve blockade for the preventive treatment of migraine: a single-blind, randomized, placebo-controlled study. Objective: Nerve injections have been used for the acute and preventive treatment of migraine in recent decades. Most of these injections focused on greater occipital nerve (GON) blockade. However, few studies were placebo controlled, and only a few of them investigated GON and supraorbital nerve (SON) blockade together. This study aimed to evaluate the efficacy of GON and SON blockade with local anesthetics for the preventive treatment of migraine without aura. Methods: Eighty-seven patients diagnosed with migraine without aura were included in the study. Patients were divided randomly: one group was injected with 1% lidocaine and the other with 0.9% saline. GON and SON injections were done bilaterally. The injections were repeated weekly for 3 weeks. Patients were followed up for 2 months to assess clinical response. Results: Seventy-one patients completed the study. After 2 months, the number of headache days decreased significantly from 12.8 ± 10.9 to 5.3 ± 7.4, and VAS decreased from 8.3 ± 1.0 to 5.5 ± 1.9 in the blockade group.
The number of headache days decreased from 12.4 ± 10.3 to 7.5 ± 7.2 and VAS decreased from 8.2 ± 1.1 to 7.4 ± 1.3 in the placebo group. Response was seen in 65.1% of the patients in the blockade group (65.4% for episodic migraine, 64.7% for chronic migraine) and 28.6% of the patients in the placebo group. The difference was significant. Conclusions: The results suggest that GON and SON blockade with lidocaine was more effective than the placebo in the prophylactic treatment of both episodic and chronic migraine. abstract_id: PUBMED:1454403 Cervicogenic headache, migraine without aura and tension-type headache. Diagnostic blockade of greater occipital and supra-orbital nerves. The diagnostic value of greater occipital and supra-orbital nerve blockades in patients with cervicogenic headache, migraine without aura, and tension-type headache was investigated. The pain reduction after greater occipital nerve blockade was significantly more marked in the cervicogenic headache group than in the other categories. Moreover, pain reduction in the forehead was generally only found in the cervicogenic headache patients (77%). Pain reduction (in %) was significantly more marked following the greater occipital than the supra-orbital nerve blockade. The volume effect per se was evaluated by saline injection. This procedure did not result in distinct pain reduction. The effect obtained in cervicogenic headache is, accordingly, probably due to the local anaesthesia. The present results support the postulate that different pathogenetic factors probably are responsible for cervicogenic headache, tension-type headache, and migraine without aura. Answer: The use of ultrasound-guided greater occipital nerve block (GONB) appears to be an efficient technique in the treatment of chronic refractory migraine without aura (MWOA). A randomized, double-blinded clinical trial comparing the effectiveness of ultrasound-guided GONB using bupivacaine 0.5% and placebo in patients with refractory MWOA found that the treatment group experienced a statistically significant decrease in headache intensity on the injected side, with the visual analogue scale (VAS) score reaching 0.97 at the end of the fourth week, while the placebo group's VAS values increased again after the second week, nearly reaching pre-injection levels. No serious side effects were observed, suggesting that ultrasound-guided GONB with bupivacaine is a safe, simple, and effective technique without severe adverse effects (PUBMED:25794201). Additionally, another study showed that repeated GON blocks with local anesthetic can be an effective alternative treatment in migraine patients who are unresponsive to medical prophylaxis or who do not prefer to use medical prophylaxis. This study found significant decreases in headache parameters such as attack frequency, duration, and severity in both groups of patients who had GON blocks with or without medical prophylaxis (PUBMED:28360765). Furthermore, other treatment options for chronic migraine include topiramate, onabotulinumtoxin A, and occipital nerve stimulation (ONS), which have shown efficacy in this patient population (PUBMED:25705314). However, the specific focus on ultrasound-guided GONB in the context of chronic refractory MWOA suggests that this technique is particularly beneficial for patients suffering from this condition.
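The responder-rate contrast in PUBMED:30285507 (65.1% responders with GON/SON blockade versus 28.6% with placebo among the 71 completers) can be illustrated with a two-proportion z-test. The abstract excerpt does not report how the 71 completers were split between arms, so the group sizes below are assumptions made purely for illustration.

# Illustrative two-proportion comparison of responder rates (PUBMED:30285507).
# Assumed group sizes: the 71 completers are split roughly evenly (36 vs 35).
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

n_blockade, n_placebo = 36, 35                       # assumed, not reported
responders = np.array([round(0.651 * n_blockade),    # ~65.1% response
                       round(0.286 * n_placebo)])    # ~28.6% response
totals = np.array([n_blockade, n_placebo])

z_stat, p_value = proportions_ztest(count=responders, nobs=totals)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")

Under these assumed group sizes the difference is statistically significant, matching the abstract's statement that the difference between blockade and placebo was significant.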
Instruction: Is there any evidence of changes in patterns of concurrent drug use among young Australians 18-29 years between 2007 and 2010? Abstracts: abstract_id: PUBMED:24813551 Is there any evidence of changes in patterns of concurrent drug use among young Australians 18-29 years between 2007 and 2010? Background: A significant minority of Australians engage in concurrent drug use (using more than one drug in a given period). We examined clusters and correlates of concurrent drug use using the latest available nationally representative survey data on Australian young adults. Sample: 3836 participants aged 18-29 years (mean age 24 years) from the 2010 National Drug Strategy Household Survey (NDSHS). Method: Clusters were distilled using latent class analysis of past-year use of alcohol, tobacco, cannabis, cocaine, hallucinogens, ecstasy, ketamine, GHB, inhalants, steroids, barbiturates, meth/amphetamines, heroin, methadone/buprenorphine, other opiates, painkillers and tranquillisers/sleeping pills. Results: Concurrent drug use in this sample was best described using a 4-class solution. The majority (87.5%) of young adults predominantly used alcohol only (50.9%) or alcohol and tobacco (36.6%). A further 10.2% reported using alcohol, tobacco, marijuana, and ecstasy, and 2.3% reported using an extensive range of drugs. Conclusion: Most drug use clusters were robust in their profile and stable in their prevalence, indicating little meaningful change at the population level from 2007. The targeting of alcohol and tobacco use remains a priority, but openness to experiencing diverse drug-related effects remains a significant concern for 12.5% of young people in this age group. abstract_id: PUBMED:24692417 Changes in patterns of injecting drug use in Hungary: a shift to synthetic cathinones. The spread of synthetic cathinone injecting is a new phenomenon observed in recent years in Hungary. Until 2010, when the first anecdotal reports on cathinone injecting appeared, injecting was associated with the use of heroin and amphetamine. In this paper, we review available evidence of the changes in the drug market and a concurrent shift in patterns of injecting drug use that have been taking place in Hungary since 2010. Remarkable changes have been observed in police seizure data since 2010. While new psychoactive substances have appeared, the availability of heroin has dropped significantly. A qualitative study in 2011 revealed that these market changes correlate with changes in patterns of injecting drug use: decreasing heroin use and the appearance of mephedrone injecting were reported by treatment and needle and syringe programme (NSP) personnel. These changes are detectable in other routine epidemiological data collection systems in the following years as well (i.e., treatment, drug-related deaths, NSP clientele). Heroin-related treatment demand dropped, as did heroin-related mortality. Parallel to this, a growing number of clients appeared in treatment and in NSPs who were primarily injecting cathinones. The shift to cathinones can be observed in amphetamine and heroin injectors as well. Monitoring changes in patterns of injecting drug use is especially important because of the vulnerability of this drug-user population and the consequences of this high-risk route of drug administration. The realignment observed in Hungary is to be further investigated with regard to its determinants, changes in risk behaviour, and treatment needs.
abstract_id: PUBMED:26634661 Burden of mental and substance use disorders in Indigenous Australians and Oceania. Objective: Mental and substance use disorders are responsible for significant health loss across the globe. In this review, the burden of disease attributable to mental and substance use disorders in Indigenous Australians and Pacific Island countries was described and compared. Methods: For Indigenous Australians, findings from the burden of disease and injury study by Begg and colleagues were summarised. These were then compared to the findings of the Global Burden of Disease Study 2010, which reported results for Oceania, a region consisting of Pacific Island countries. Results for mental and substance use disorder burden were described in terms of disability-adjusted life years, years lived with disability and years of life lost due to premature mortality. Results: Mental and substance use disorders were the leading cause of non-fatal burden (i.e., disability) in both Indigenous Australia and Oceania. Furthermore, in Oceania, mental and substance use disorders are projected to cause more disability than all communicable diseases combined by 2050. Conclusion: Mental and substance use disorders contribute significantly to health loss for both Indigenous Australians and Pacific Island populations. These findings indicate a substantial need to prioritise these disorders in terms of policy, services and research. abstract_id: PUBMED:24168734 Pressing need for more evidence to guide efforts to address substance use among young Indigenous Australians. Issue Addressed: There are no systematic reviews available to guide the delivery of programs to prevent or address substance misuse among young Indigenous Australians. Methods: A search was conducted for peer-reviewed journal articles published between 1990 and 2011 that evaluated interventions targeting young Indigenous Australians (aged 8-25 years) with the primary aim of reducing substance use. A comprehensive search was conducted of electronic databases (Cochrane, DRUG, Embase, Informit, Medline, Nursing and Allied Health, PreMedline and PsychInfo). Retrieved manuscripts were analysed using a narrative synthesis methodology. Results: Eight published studies were found. Nearly all had major methodological limitations. Of the four projects that reported reductions in substance use, two included recreational or cultural activities and had strong community support, and one included supply control combined with employment opportunities. Two programs that provided education alone did not show changes in substance use. Conclusions: Increased systematic evaluation of efforts to prevent and treat substance use among young Indigenous Australians is needed. So what? The limited data support multipronged interventions, designed with community input, to protect young Indigenous people against substance misuse, rather than simple facts-based education. However, more research is needed. abstract_id: PUBMED:33866222 Alcohol-related cognitions: Implications for concurrent alcohol and marijuana use and concurrent alcohol and prescription stimulant misuse among young adults. Introduction: This study examined the associations between alcohol-related cognitions within the social reaction pathway of the Prototype Willingness Model and concurrent use (use of two or more substances within a specified time period) of 1) alcohol and marijuana and 2) alcohol and prescription stimulant misuse. Methods: A convenience sample of 1,062 emerging adults in the U.S.
(18-20 years old; 54.5% female) who reported past 3-month alcohol use completed a baseline survey as part of a larger randomized controlled trial. Results: After controlling for age, biological sex, race, ethnicity, and college enrollment, perceived descriptive norms and willingness to drink were associated with past 3-month concurrent alcohol and marijuana use and concurrent alcohol and prescription stimulant misuse. However, alcohol prototype similarity and alcohol-related perceived vulnerability were not associated with either concurrent use outcome examined. Discussion: These findings suggest that alcohol-related perceived descriptive norms and willingness to drink are associated with concurrent substance use among young adults. Thus, it is possible that existing efficacious alcohol interventions that target descriptive norms and willingness to drink may have the added benefit of also reducing concurrent substance cognitions and ultimately use. abstract_id: PUBMED:24957742 Patterns of concurrent substance use among nonmedical ADHD stimulant users: results from the National Survey on Drug Use and Health. Aims: To examine patterns of concurrent substance use among adults with nonmedical ADHD stimulant use. Methods: We used latent class analysis (LCA) to examine patterns of past-year problematic substance use (meeting any criteria for abuse or dependence) in a sample of 6103 adult participants from the National Surveys on Drug Use and Health 2006-2011 who reported past-year nonmedical use of ADHD stimulants. Multivariable latent regression was used to assess the association of socio-demographic characteristics, mental health and behavioral problems with the latent classes. Results: A four-class model had the best model fit, including (1) participants with low probabilities for any problematic substance use (Low substance class, 53.3%); (2) problematic users of all types of prescription drugs (Prescription drug class, 13.3%); (3) participants with high probabilities of problematic alcohol and marijuana use (Alcohol-marijuana class, 28.8%); and (4) those with high probabilities of problematic use of multiple drugs and alcohol (Multiple substance class, 4.6%). Participants in the 4 classes had distinct socio-demographic, mental health and service use profiles, with those in the Multiple substance class being more likely to report mental health and behavioral problems and service use. Conclusion: Nonmedical users of ADHD stimulants are a heterogeneous group with a large subgroup with low prevalence of problematic use of other substances. These subgroups have distinct patterns of mental health comorbidity, behavior problems and service use, with implications for prevention and treatment of nonmedical stimulant use. abstract_id: PUBMED:25872596 Simultaneous versus concurrent use of alcohol and cannabis in the National Alcohol Survey. Background: Cannabis is the most commonly used drug among those who drink, yet no study has directly compared those who use cannabis and alcohol simultaneously versus concurrently (i.e., separately) in the adult general population. Here, we assess differences in demographics, alcohol-related social consequences, harms to self, and drunk driving across simultaneous, concurrent, and alcohol-only use groups. Methods: Secondary analyses of the 2005 and 2010 National Alcohol Survey (N = 8,626; 4,522 female, 4,104 male), a Computer Assisted Telephone Interview survey of individuals aged 18 and older from all 50 states and DC.
Blacks and Hispanics were over-sampled. Data were collected using list-assisted Random Digit Dialing. Multinomial and multivariable logistic regressions were used for analyses. Results: The prevalence of simultaneous use was almost twice as high as that of concurrent use, implying that individuals who use both cannabis and alcohol tend to use them at the same time. Furthermore, simultaneous use was associated with increased frequency and quantity of alcohol use. Simultaneous use was also the most detrimental: compared to alcohol only, simultaneous use approximately doubled the odds of drunk driving, social consequences, and harms to self. The magnitudes of differences in problems remained when comparing drunk driving among simultaneous users to concurrent users. Conclusions: The overall set of results is particularly important to bear in mind when studying and/or treating problems among alcohol/cannabis co-users, because these results demonstrate that in the general population, co-users are a heterogeneous group who experience different likelihoods of problems relative to co-use patterns. abstract_id: PUBMED:29261344 Patterns of simultaneous and concurrent alcohol and marijuana use among adolescents. Background: Alcohol and marijuana are the most commonly used substances among adolescents, but little is known about patterns of co-use. Objectives: This study examined patterns of concurrent (not overlapping) and simultaneous (overlapping) use of alcohol and marijuana among adolescents. Methods: Data from US-national samples of 12th graders (N = 84,805, 48.4% female) who participated in the Monitoring the Future study from 1976 to 2016 and who used alcohol and/or marijuana in the past 12 months were used to identify latent classes of alcohol use, marijuana use, and simultaneous alcohol and marijuana (SAM) use. Results: A four-class solution indicated four patterns of use among adolescents: (1) Simultaneous alcohol and marijuana (SAM) use with binge drinking and recent marijuana use (SAM-Heavier Use; 11.2%); (2) SAM use without binge drinking and with recent marijuana use (SAM-Lighter Use; 21.6%); (3) Marijuana use and alcohol use but no SAM use (Concurrent Use; 10.7%); and (4) Alcohol use but no marijuana or SAM use (Alcohol-Only Use; 56.4%). Membership in either SAM use class was associated with a higher likelihood of truancy, evenings out, and use of illicit drugs other than marijuana. Members of the SAM-Heavier Use class, compared with the SAM-Lighter Use class, were more likely to report these behaviors and to be male, and less likely to have college plans. Conclusions: Among 12th graders who use both alcohol and marijuana, the majority use simultaneously, although not all use heavily. Given the recognized increased public health risks associated with simultaneous use, adolescent prevention programming should include a focus on the particular risks of simultaneous use. abstract_id: PUBMED:31772440 Drug use among Teenagers and Young Adults in Bhutan. Background: Use, possession, and illegal transactions of controlled substances have increased in recent years in Bhutan. This study aimed to determine the national prevalence of ever drug use and identify its associated factors amongst teenagers and young adults. Methods: This study was conducted using data from the National Health Survey 2012 of Bhutan. The outcome variable of interest was ever drug use in teenagers and young adults. The questionnaire was developed following the WHO STEPwise approach to surveillance of non-communicable diseases (STEP).
Univariate and multivariate logistic regression were performed to identify correlates of ever drug use. Results: The prevalence of ever drug use among teenagers and young adults was 3.2% (n = 672). The factors associated with ever drug use were: being male; being single; being aged 18-24 years; having a primary school, high school, monastic, university, or diploma education; working as a technician or salesperson; always feeling lonely; having ever consumed alcohol; and having ever smoked. Conclusion: Compared to the other countries in the WHO South-east Asia region, the prevalence of ever drug use in Bhutan is low. Use of other substances, including smoking and alcohol use, was associated with ever drug use. For greater effect, drug use prevention strategies should include prevention of smoking and alcohol use. abstract_id: PUBMED:28317200 Self-limiting non-medical pharmaceutical opioid use among young people in Sydney, Australia: An exploratory study. Introduction And Aims: Indicators suggest an escalation in opioid use globally, with recent HIV outbreaks linked to non-medical pharmaceutical opioid (NMPO) use. Little is known about how young Australians engage in NMPO use. Design And Methods: During 2015, we conducted qualitative interviews with young people (16-29 years) who reported oral NMPO use at least twice in the past 90 days. The study included a sample of injecting (n = 14) and oral (n = 22) users. This paper focuses on the oral user group. Results: Most participants grew up in affluent areas of Sydney, reported few health problems, rarely accessed health or welfare services and had limited contact with police. NMPO use was part of a repertoire of drug use involving: (i) use to come down from stimulants; (ii) use in conjunction with cannabis and alcohol; and (iii) use by itself. Participants reported limiting their use in order to avoid stigma and dependence. Discussion And Conclusions: Results suggest a culture of self-limiting NMPO use characterised by stable housing, disposable income and intimate friendship networks. Despite the adverse health outcomes observed in other settings, our data suggest that the patterns and meanings of NMPO use observed in this small sample may help explain the limited nature of problematic use among young Australians to date. Findings indicate a need for epidemiological data, including longitudinal data, to assess possible demographic shifts in NMPO use in Australia, and to address risk factors for dependence and transitions to injecting and heroin use in this population. [Dertadian GC, Dixon TC, Iversen J, Maher L. Self-limiting non-medical pharmaceutical opioid use among young people in Sydney, Australia: An exploratory study. Drug Alcohol Rev 2017;00:000-000]. Answer: The available nationally representative evidence indicates little meaningful change in patterns of concurrent drug use among young Australians aged 18-29 years between 2007 and 2010. According to the study with PUBMED:24813551, the majority of young adults in this age group predominantly used alcohol only (50.9%) or alcohol and tobacco (36.6%). However, 10.2% reported using alcohol, tobacco, marijuana, and ecstasy, and 2.3% reported using an extensive range of drugs. The study concluded that most drug use clusters were stable in their prevalence, indicating little meaningful change at the population level from 2007. The targeting of alcohol and tobacco use remains a priority, but there is a concern for the 12.5% of young people who are open to experiencing diverse drug-related effects.
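Several of the abstracts above (PUBMED:24813551, PUBMED:24957742, PUBMED:29261344) derive their concurrent-use classes with latent class analysis. As a rough illustration of that approach only — not the authors' actual analyses, which were run on weighted survey data in specialised software — the sketch below fits a simple latent class (Bernoulli mixture) model by expectation-maximisation to simulated binary past-year use indicators. The drug list, class count, mixing weights, and item-response profiles are hypothetical placeholders that loosely echo the four-class NDSHS solution reported in PUBMED:24813551.

```python
# Illustrative sketch only: latent class analysis as a Bernoulli mixture
# fitted with EM to simulated binary "past-year use" indicators.
# All data, profiles, and settings below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

drugs = ["alcohol", "tobacco", "cannabis", "ecstasy", "meth"]
n_people, n_drugs, n_classes = 2000, len(drugs), 4

# Simulated population: most respondents use alcohol only or alcohol+tobacco,
# a minority report wider concurrent use (loosely echoing the 2010 NDSHS profile).
true_profiles = np.array([
    [0.95, 0.05, 0.02, 0.01, 0.01],   # alcohol only
    [0.95, 0.90, 0.10, 0.02, 0.01],   # alcohol + tobacco
    [0.95, 0.80, 0.85, 0.70, 0.05],   # alcohol/tobacco/cannabis/ecstasy
    [0.95, 0.90, 0.90, 0.80, 0.70],   # extensive concurrent use
])
true_weights = np.array([0.51, 0.36, 0.10, 0.03])
z = rng.choice(n_classes, size=n_people, p=true_weights)
X = (rng.random((n_people, n_drugs)) < true_profiles[z]).astype(float)

# EM for the latent class model (fixed iteration count for simplicity).
theta = rng.uniform(0.25, 0.75, size=(n_classes, n_drugs))  # P(use | class)
pi = np.full(n_classes, 1.0 / n_classes)                    # class weights

for _ in range(200):
    # E-step: posterior probability of each class for each respondent.
    log_post = (X[:, None, :] * np.log(theta) +
                (1 - X)[:, None, :] * np.log(1 - theta)).sum(axis=2) + np.log(pi)
    log_post -= log_post.max(axis=1, keepdims=True)   # numerical stability
    resp = np.exp(log_post)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update class weights and item-response probabilities.
    pi = resp.mean(axis=0)
    theta = (resp.T @ X) / resp.sum(axis=0)[:, None]
    theta = theta.clip(1e-6, 1 - 1e-6)

# Report the recovered classes, largest first.
for k in np.argsort(-pi):
    profile = ", ".join(f"{d}={p:.2f}" for d, p in zip(drugs, theta[k]))
    print(f"class weight {pi[k]:.2f}: {profile}")
```

On real survey data one would additionally compare solutions with different numbers of classes (e.g., by BIC), apply survey weights, and check for label switching before interpreting the classes; packaged LCA implementations handle these details.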