Instruction: The minimum data set weight-loss quality indicator: does it reflect differences in care processes related to weight loss?
Abstracts:
abstract_id: PUBMED:14511161
The minimum data set weight-loss quality indicator: does it reflect differences in care processes related to weight loss? Objectives: To determine whether nursing homes (NHs) that score differently on prevalence of weight loss, according to a Minimum Data Set (MDS) quality indicator, also provide different processes of care related to weight loss.
Design: Cross-sectional.
Setting: Sixteen skilled nursing facilities: 11 NHs in the lower (25th percentile-low prevalence) quartile and five NHs in the upper (75th percentile-high prevalence) quartile on the MDS weight-loss quality indicator.
Participants: Four hundred long-term residents.
Measurements: Sixteen care processes related to weight loss were defined and operationalized into clinical indicators. Trained research staff conducted measurement of NH staff implementation of each care process during assessments on three consecutive 12-hour days (7 a.m. to 7 p.m.), which included direct observations during meals, resident interviews, and medical record abstraction using standardized protocols.
Results: The prevalence of weight loss was significantly higher in the participants in the upper quartile NHs than in participants in the lower quartile NHs based on MDS and monthly weight data documented in the medical record. NHs with a higher prevalence of weight loss had a significantly larger proportion of residents with risk factors for weight loss, namely low oral food and fluid intake. There were few significant differences on care process measures between low- and high-weight-loss NHs. Staff in low-weight-loss NHs consistently provided verbal prompting and social interaction during meals to a greater proportion of residents, including those most at risk for weight loss.
Conclusion: The MDS weight-loss quality indicator reflects differences in the prevalence of weight loss between NHs. NHs with a lower prevalence of weight loss have fewer residents at risk for weight loss and staff who provide verbal prompting and social interaction to more residents during meals, but the adequacy and quality of feeding assistance care needs improvement in all NHs.
abstract_id: PUBMED:20550719
The Resident Assessment Instrument-Minimum Data Set 2.0 quality indicators: a systematic review. Background: The Resident Assessment Instrument-Minimum Data Set (RAI-MDS) 2.0 is designed to collect the minimum amount of data to guide care planning and monitoring for residents in long-term care settings. These data have been used to compute indicators of care quality. Use of the quality indicators to inform quality improvement initiatives is contingent upon the validity and reliability of the indicators. The purpose of this review was to systematically examine published and grey research reports in order to assess the state of the science regarding the validity and reliability of the RAI-MDS 2.0 Quality Indicators (QIs).
Methods: We systematically reviewed the evidence for the validity and reliability of the RAI-MDS 2.0 QIs. A comprehensive literature search identified relevant original research published, in English, prior to December 2008. Fourteen articles and one report examining the validity and/or reliability of the RAI-MDS 2.0 QIs were included.
Results: The studies fell into two broad categories, those that examined individual quality indicators and those that examined multiple indicators. All studies were conducted in the United States and included from one to a total of 209 facilities. The number of residents included in the studies ranged from 109 to 5758. One study conducted under research conditions examined 38 chronic care QIs, of which strong evidence for the validity of 12 of the QIs was found. In response to these findings, the 12 QIs were recommended for public reporting purposes. However, a number of observational studies (n = 13), conducted in "real world" conditions, have tested the validity and/or reliability of individual QIs, with mixed results. Ten QIs have been studied in this manner, including falls, depression, depression without treatment, urinary incontinence, urinary tract infections, weight loss, bedfast, restraint, pressure ulcer, and pain. These studies have revealed the potential for systematic bias in reporting, with under-reporting of some indicators and over-reporting of others.
Conclusion: Evidence for the reliability and validity of the RAI-MDS QIs remains inconclusive. The QIs provide a useful tool for quality monitoring and to inform quality improvement programs and initiatives. However, caution should be exercised when interpreting the QI results and other sources of evidence of the quality of care processes should be considered in conjunction with QI results.
abstract_id: PUBMED:27059825
CNA Training Requirements and Resident Care Outcomes in Nursing Homes. Purpose Of The Study: To examine the relationship between certified nursing assistant (CNA) training requirements and resident outcomes in U.S. nursing homes (NHs). The number and type of training hours vary by state since many U.S. states have chosen to require additional hours over the federal minimums, presumably to keep pace with the increasing complexity of care. Yet little is known about the impact of the type and amount of training CNAs are required to have on resident outcomes.
Design And Methods: Compiled data on 2010 state regulatory requirements for CNA training (clinical, total initial training, in-service, ratio of clinical to didactic hours) were linked to 2010 resident outcomes data from 15,508 NHs. Outcomes included the following NH Compare Quality Indicators (QIs) (Minimum Data Set 3.0): pain, antipsychotic use, falls with injury, depression, weight loss and pressure ulcers. Facility-level QIs were regressed on training indicators using generalized linear models with the Huber-White correction, to account for clustering of NHs within states. Models were stratified by facility size and adjusted for case-mix, ownership status, percentage of Medicaid-certified beds and urban-rural status.
Results: A higher ratio of clinical to didactic hours was related to better resident outcomes. NHs in states requiring clinical training hours above federal minimums (i.e., >16hr) had significantly lower odds of adverse outcomes, particularly pain, falls with injury, and depression. Total and in-service training hours also were related to outcomes.
Implications: Additional training providing clinical experiences may aid in identifying residents at risk. This study provides empirical evidence supporting the importance of increased requirements for CNA training to improve quality of care.
abstract_id: PUBMED:35054498
Weight Loss in Advanced Cancer: Sex Differences in Health-Related Quality of Life and Body Image. Weight maintenance is a priority in cancer care, but weight loss is common and a serious concern. This study explores if there are sex differences in the perception of weight loss and its association to health-related quality of life (HRQoL) and body image. Cancer patients admitted to Advanced Medical Home Care were recruited to answer a questionnaire, including characteristics, the HRQoL-questionnaire RAND-36, and a short form of the Body Image Scale. Linear regression analyses stratified by sex and adjusted for age were performed to examine associations between percent weight loss and separate domains of HRQoL and body image score in men and women separately. In total, 99 participants were enrolled, of which 80 had lost weight since diagnosis. In men, an inverse association between weight loss and the HRQoL-domain physical functioning, β = -1.34 (95%CI: -2.44, -0.24), and a positive association with body image distress, β = 0.22 (95%CI: 0.07, 0.37), were found. In women, weight loss was associated with improvement in the HRQoL-domain role limitations due to physical health, β = 2.02 (95%CI: 0.63, 3.41). Following a cancer diagnosis, men appear to experience weight loss more negatively than women do. Recognizing different perceptions of weight loss may be of importance in clinical practice.
abstract_id: PUBMED:12752906
International comparison of quality indicators in United States, Icelandic and Canadian nursing facilities. Aim: To discuss the results of a comparison using minimum data set (MDS)-based quality indicators (QIs) for residents in nursing facilities in three countries (Iceland; Ontario, Canada; and Missouri, United States) together with implications regarding nursing practices and resident outcomes in these countries.
Method: Data were extracted from databases in each country for four consecutive quarterly periods during 1997 and 1998. All facilities investigated had the required consecutive quarterly data. Analytical techniques were matched to measure resident outcomes using the same MDS-based QIs in the three countries.
Results: Similarities among the three countries included the use of nine or more multiple medications, weight loss, urinary tract infection, dehydration, and behavioural symptoms that affect others. Differences among the three countries included bowel and bladder incontinence, indwelling catheter use, fecal impaction, tube feeding use, development of pressure ulcers, bedridden residents, physical restraint use, depression without receiving antidepressant therapy, residents with depression, use of anti-anxiety or hypnotic drugs, use of anti-psychotic drugs in the absence of psychotic and related conditions, residents spending little or no time in activities, and falls.
Conclusions: Comparisons highlighted differences in clinical practices among countries, which may account for differences in resident outcomes. Learning from each other's best practices can improve the quality of care for older people in nursing homes in many countries.
abstract_id: PUBMED:26611793
Is higher nursing home quality more costly? Widespread issues regarding quality in nursing homes call for an improved understanding of the relationship with costs. This relationship may differ in European countries, where care is mainly delivered by nonprofit providers. In accordance with the economic theory of production, we estimate a total cost function for nursing home services using data from 45 nursing homes in Switzerland between 2006 and 2010. Quality is measured by means of clinical indicators regarding process and outcome derived from the minimum data set. We consider both composite and single quality indicators. Contrary to most previous studies, we use panel data and control for omitted variables bias. This allows us to capture features specific to nursing homes that may explain differences in structural quality or cost levels. Additional analysis is provided to address simultaneity bias using an instrumental variable approach. We find evidence that poor levels of quality regarding outcome, as measured by the prevalence of severe pain and weight loss, lead to higher costs. This may have important implications for the design of payment schemes for nursing homes.
abstract_id: PUBMED:16843237
Systematic review of studies of staffing and quality in nursing homes. Purpose: To evaluate a range of staffing measures and data sources for long-term use in public reporting of staffing as a quality measure in nursing homes.
Method: Eighty-seven research articles and government documents published from 1975 to 2003 were reviewed and summarized. Relevant content was extracted and organized around 3 themes: staffing measures, quality measures, and risk adjustment variables. Data sources for staffing information were also identified.
Results: There is a proven association between higher total staffing levels (especially licensed staff) and improved quality of care. Studies also indicate a significant relationship between high turnover and poor resident outcomes. Functional ability, pressure ulcers, and weight loss are the most sensitive quality indicators linked to staffing. The best national data sources for staffing and quality include the Minimum Data Set (MDS) and On-line Survey and Certification Automated Records (OSCAR). However, the accuracy of this self-reported information requires further reliability and validity testing.
Conclusions: A nationwide instrument needs to be developed to accurately measure staff turnover. Large-scale studies using payroll data to measure staff retention and its impact on resident outcomes are recommended. Future research should use the most nurse-sensitive quality indicators such as pressure ulcers, functional status, and weight loss.
abstract_id: PUBMED:30383467
Development and Validation of a Prognostic Tool for Identifying Residents at Increased Risk of Death in Long-Term Care Facilities. Background: To promote better care at the end stage of life in long-term care facilities, a culturally appropriate tool for identifying residents at the end of life is crucial.
Objective: This study aimed to develop and validate a prognostic tool, the increased risk of death (IRD) scale, based on the minimum data set (MDS).
Design: A retrospective study using data between 2005 and 2013 from six nursing homes in Hong Kong.
Setting/subjects: A total of 2380 individuals were randomly divided into two equal-sized subsamples: Sample 1 was used for the development of the IRD scale and Sample 2 for validation.
Measurements: The measures were MDS 2.0 items and mortality data from the discharge tracking forms. The nine items in the IRD scale (decline in cognitive status, decline in activities of daily living, cancer, renal failure, congestive heart failure, emphysema/chronic obstructive pulmonary disease, edema, shortness of breath, and loss of weight), were selected based on bivariate Cox proportional hazards regression.
Results: The IRD scale was a strong predictor of mortality in both Sample 1 (HR = 1.50, 95% confidence interval [CI]: 1.37-1.65) and Sample 2 (HR = 1.31, 95% CI: 1.19-1.43), after adjusting for covariates. Hazard ratios (HRs) for residents who had an IRD score of 3 or above for Sample 1 and Sample 2 were 3.32 (2.12-5.21) and 2.00 (1.30-3.09), respectively.
Conclusions: The IRD scale is a promising tool for identifying nursing home residents at increased risk of death. We recommend the tool to be incorporated into the care protocol of long-term care facilities in Hong Kong.
abstract_id: PUBMED:30822485
Nutrition-related parameters predict the health-related quality of life in home care patients. Introduction: There is evidence that nutritional status is one of the major factors affecting quality of life. Low quality of life is an important reason that reflects the risk of malnutrition as well as dependency and frailty.
Objective: The present study aimed to examine nutritional risk factors and sociodemographic features affecting health-related quality of life in home care patients.
Materials And Methods: The data of 209 adult or elderly eligible subjects were evaluated in the study. A general questionnaire including sociodemographic and nutritional characteristics, 'Mini Nutritional Assessment (MNA)', 'Short Form-36 (SF-36) health related life quality scale' and '24-hour dietary recall' were applied with face-to-face interview. Anthropometric measurements were performed using standard measurement protocols and, height and weight measurements of bedridden patients were calculated by equality formulas.
Results: While 52.6% of patients were malnourished according to the MNA, only 7.7% were underweight according to the body mass index (BMI). The SF-36 summary component scores (physical and mental component summary scale scores) of malnourished patients were significantly lower than those of patients at risk of malnutrition or with normal nutritional status (p < 0.05). SF-36 physical component summary scale scores were significantly positively correlated with MNA scores (r = 0.517), BMI (r = 0.140) and daily dietary macronutrient intake (energy (r = 0.328), protein (r = 0.165), carbohydrate (r = 0.305), fat (r = 0.275) and fiber (r = 0.268)) (p < 0.05). There were also significant positive correlations between SF-36 mental component summary scale scores and MNA scores (r = 0.719), BMI (r = 0.318) and daily dietary macronutrient intake (energy (r = 0.388), protein (r = 0.204), carbohydrate (r = 0.335), fat (r = 0.365) and fiber (r = 0.349)) (p < 0.05). It was also determined that the MNA score had the greatest positive effect and 'having a caregiver' had the greatest negative effect on the physical and mental component summary scale scores.
Conclusion: Periodic nutritional screening of home care patients is important and necessary for early nutritional intervention and thus prevention of morbidity and mortality.
abstract_id: PUBMED:23995465
Licensed practical nurse scope of practice and quality of nursing home care. Background: Although higher levels of registered nurse (RN) staffing in nursing homes are related to better care quality, licensed practical nurses (LPNs) provide most licensed-nursing care; prior research is mixed regarding how this influences quality. The nature of LPN practice, and RN direction of that practice, follows in part from state nurse practice acts (NPAs).
Objective: Among the 50 states and the District of Columbia, the aims of this study were to describe regulatory differences in how LPNs contribute to nursing assessment, care planning, delegation and supervision, and RN practice in these domains and to explore how these regulatory differences relate to quality of care in nursing homes.
Methods: The study design was a sequential explanatory mixed-methods design of NPAs and Centers for Medicare and Medicaid quality measures of long-stay nursing home residents. In the qualitative strand, 51 NPAs and related administrative code were analyzed to classify guidance on RN and LPN practice; then, the coded data were transformed to quantitative indicators of specificity regarding LPN and RN scope of practice. In the quantitative strand, state NPA data were linked to facility-level Centers for Medicare and Medicaid staffing and quality measures (N = 12,698 facilities) for cross-sectional, quantitative analyses.
Results: States varied considerably in how NPAs guided LPN and RN scope of practice. NPA differences were related to quality indicators of resident pain, catheter use, weight loss, and restraints, even when accounting for nursing home staff mix.
Discussion: Care quality was better in states where the NPA clearly described LPN scope, but only when there was also greater RN availability (p < .05). Classifying scope of nursing practice regulations moves beyond traditional staffing measures to inform understanding of the effects of the RN-to-LPN staffing ratio on quality of care in nursing homes.
Answer: The Minimum Data Set (MDS) weight-loss quality indicator does reflect differences in care processes related to weight loss in nursing homes (NHs). A study found that nursing homes with a higher prevalence of weight loss had a significantly larger proportion of residents with risk factors for weight loss, such as low oral food and fluid intake. However, there were few significant differences in care process measures between NHs with low and high weight loss prevalence. Staff in NHs with lower prevalence of weight loss provided more verbal prompting and social interaction during meals to a greater proportion of residents, including those most at risk for weight loss. Despite these findings, the study concluded that the adequacy and quality of feeding assistance care needs improvement across all NHs (PUBMED:14511161).
The MDS weight-loss quality indicator is one of several quality indicators derived from the RAI-MDS 2.0 data, which are used to guide care planning and monitor residents in long-term care settings. However, the evidence for the reliability and validity of the RAI-MDS quality indicators (QIs) remains inconclusive. Observational studies have shown potential for systematic bias in reporting, with some indicators being under-reported and others over-reported. Therefore, while the MDS weight-loss quality indicator can reflect differences in care processes, caution should be exercised when interpreting the results, and other sources of evidence should be considered in conjunction with QI results (PUBMED:20550719).
In summary, the MDS weight-loss quality indicator does show differences in care processes related to weight loss in nursing homes, but the overall evidence suggests that the indicator should be used with caution and in the context of a broader assessment of care quality.
Instruction: Should doctors wear white coats?
Abstracts:
abstract_id: PUBMED:15138319
Should doctors wear white coats? Objective: To compare the views of doctors and patients on whether doctors should wear white coats and to determine what shapes their views.
Methods: A questionnaire study of 400 patients and 86 doctors was performed.
Results: All 86 of the doctors' questionnaires were included in the analysis but only 276 of the patients were able to complete a questionnaire. Significantly more patients (56%) compared with their doctors (24%) felt that doctors should wear white coats (p<0.001). Only age (>70 years) (p<0.001) and those patients whose doctors actually wore white coats (p<0.001) were predictive of whether patients favoured white coats. The most common reason given by patients was for easy identification (54%). Less than 1% of patients believed that white coats spread infection. Only 13% of doctors wore white coats as they were felt to be an infection risk (70%) or uncomfortable (60%). There was no significant difference between doctor subgroups when age, sex, grade, and specialty were analysed.
Conclusion: In contrast to doctors, who view white coats as an infection risk, most patients, and especially those older than 70 years, feel that doctors should wear them for easy identification. Further studies are needed to assess whether this affects patients' perceived quality of care and whether patient education will alter this view.
abstract_id: PUBMED:11346107
Should doctors wear white coats? The wearing of white coats by hospital doctors is becoming a rarity, making it difficult for patients to identify doctors from other hospital staff. I asked patients with cancer whether they thought that doctors, both junior and senior, should wear white coats. Only a minority disapproved.
abstract_id: PUBMED:1994014
Why do hospital doctors wear white coats? Seventy-two per cent of all hospital doctors and medical students wear white coats and most wear them greater than 75% of the time. White coats are worn chiefly for easy recognition by colleagues and patients, to put items in the pockets and to keep clothes clean. Psychiatrists and paediatricians try to maximize rapport with patients by deliberately not wearing white coats.
abstract_id: PUBMED:11587285
Hospitalised patients' views on doctors and white coats. Objectives: To determine hospitalised patients' feelings, perceptions and attitudes towards doctors and how these are affected by whether or not doctors wear a white coat.
Design: Cross-sectional questionnaire survey.
Setting: The medical and surgical wards of two Sydney teaching hospitals, on one day in January 1999.
Patients: 154 of 200 consecutive patients (77%).
Main Outcome Measures: The effects of white-coat-wearing on patients' feelings and ability to communicate and on their perceptions of the doctor; why patients think doctors wear white coats and their preferences for the wearing of white coats and doctors' attire in general; and patients' rating of the importance of these effects and preferences.
Results: Patients reported that white-coat-wearing improved all aspects of the patient-doctor interaction, and that when doctors wore white coats they seemed more hygienic, professional, authoritative and scientific. The more important that patients considered an aspect, the greater the positive effect associated with wearing a white coat. From a list of doctors' reasons for wearing white coats, patients thought that doctors wore white coats because it made them seem more professional, hygienic, authoritative, scientific, competent, knowledgeable and approachable. 36% of the patients preferred doctors to wear white coats, 19% preferred them not to wear white coats and 45% did not mind.
Conclusions: Patients reported feeling more confident and better able to communicate with doctors who wore white coats. The recognition, symbolism and formality afforded by a white coat may enhance communication and facilitate the doctor-patient relationship.
abstract_id: PUBMED:12472758
What do Australian junior doctors think of white coats? Objective: To determine the attitudes of Australian junior doctors towards white coats.
Methods: We carried out a multicentred mail survey in 13 Australian teaching hospitals. A total of 337 junior medical officers (JMOs) completed an eight-item questionnaire. The survey sought to establish JMOs' views and preferences regarding the wearing of white coats and the reasons behind them.
Results: Very few Australian JMOs wear white coats. Many reasons for not wearing white coats were given, the most common being 'No one else wears a white coat' (70%). A total of 60% of JMOs are against wearing white coats; 24% are indifferent on the issue and only 16% expressed a general preference for white coats. Junior medical officers who did prefer white coats indicated reasons of convenience for carrying items, identification and/or professionalism, and hygiene and/or cleanliness.
Conclusions: White coats have largely disappeared from Australian teaching hospitals and the majority of junior doctors in Australia oppose the wearing of white coats.
abstract_id: PUBMED:23230502
Microbial contamination of the white coats of dental staff in the clinical setting. Background And Aims: Although wearing a white coat is an accepted part of medical and dental practice, it is a potential source of cross-infection. The objective of this study was to determine the level and type of microbial contamination present on the white coats of dental interns, graduate students and faculty in a dental clinic.
Materials And Methods: Questionnaire and cross-sectional survey of the bacterial contamination of white coats in two predetermined areas (chest and pocket) on the white coats were done in a rural dental care center. Paired sample t-test and chi-square test were used for Statistical analysis.
Results: 60.8% of the participants reported washing their white coats once a week. Grading by the examiner revealed 15.7% dirty white coats. Also, 82.5% of the interns showed bacterial contamination of their white coats compared to 74.7% of graduate students and 75% of faculty members, irrespective of the area examined. However, the chest area was consistently a more bacteriologically contaminated site than the pocket area. Antibiotic sensitivity testing revealed resistant varieties of micro-organisms against Amoxicillin (60%), Erythromycin (42.5%) and Cotrimoxazole (35.2%).
Conclusion: White coats seem to be a potential source of cross-infection in the dental setting. The bacterial contamination carried by white coats, as demonstrated in this study, supports the ban on white coats from non-clinical areas.
abstract_id: PUBMED:1773186
Microbial flora on doctors' white coats. Objective: To determine the level and type of microbial contamination present on the white coats of doctors in order to assess the risk of transmission of pathogenic micro-organisms by this route in a hospital setting.
Design: Cross sectional survey of the bacterial contamination of white coats in a general hospital.
Setting: East Birmingham Hospital, an urban general hospital with 800 beds.
Subjects: 100 doctors of different grades and specialties.
Results: The cuffs and pockets of the coats were the most highly contaminated areas. The level of bacterial contamination did not vary with the length of time a coat had been in use, but it increased with the degree of usage by the individual doctor. Staphylococcus aureus was isolated from a quarter of the coats examined, more commonly from those belonging to doctors in surgical specialties than medical specialties. Pathogenic Gram negative bacilli and other pathogenic bacteria were not isolated.
Conclusions: White coats are a potential source of cross infection, especially in surgical areas. Scrupulous hand washing should be observed before and after attending patients and it may be advisable to remove the white coat and put on a plastic apron before examining wounds. There is little microbiological reason for recommending a more frequent change of white coat than once a week, nor for excluding the wearing of white coats in non-clinical areas.
abstract_id: PUBMED:24378050
White coats: how long should doctors wear them? Objectives: While coat contamination increases progressively with the duration of use, there are no guidelines on how frequently medical white coats should be changed. The purpose of our study was to examine the turnover of individual batch of medical white coats in a university hospital.
Study Design And Methods: A retrospective analysis of the white coat turnover of 826 physicians was performed by using the hospital laundry computerized database and an electronic declarative survey (240 responses) to evaluate the duration of medical white coat use.
Results: There was a wide discrepancy between the data extracted from the laundry database and those from the survey. The median factual duration of use (20 days, range: 15-30), corresponding to a turnover of 2 (1-2) coats per month, was widely underestimated by the physicians. Multivariate analysis identified 4 independent factors associated with a declared use of coats longer than 7 days: estimation of insufficient gown turnover (OR 14.8 [4.8-45.8]), daily change considered as not useful (OR 5.1 [2.4-10.8]), non-medical specialty (OR 2.95 [1.5-5.6]) and presence of stains on gowns (OR 2.9 [1.5-5.5]).
Conclusion: Shortening white coat use should be included in medical education in order to improve the good practice rules of hospital hygiene.
abstract_id: PUBMED:25386261
Dermatologists wearing white coats on practice websites: current trends. Physicians wearing white coats are perceived as having more authority, being more friendly and being more attractive than those not wearing white coats, and patients report that they prefer their dermatologist to wear a white coat. The aim of the study was to determine the prevalence of dermatologists wearing white coats on practice websites. We searched Google for dermatology practice websites in six states representing distinct geographic regions in the United States. The first one hundred search results were evaluated, and photographs of dermatologists on these websites were examined for the presence or absence of white coats. Most (77%) of dermatologists did not wear white coats. The highest prevalence was in the eastern states of Massachusetts and South Carolina, with 29% and 39%, respectively. Colorado had the lowest rate at 13%. Rates were essentially equal when segmented by gender. Although patients report that they prefer their dermatologist to wear a white coat, dermatologists often do not wear a white coat on their practice websites.
abstract_id: PUBMED:28367582
The 'auxiliary' white coat effect in hospitals: perceptions of patients and doctors. A doctor's attire is important in making a positive first impression and enhancing the overall healthcare experience for patients. We conducted a study to examine the perceptions and preferences of patients and doctors regarding six types of dress codes used by doctors in different scenarios and locations. A total of 87 patients and 46 doctors participated in the study. Separate sets of questionnaires containing four demographic questions and 14 survey questions were distributed to the two groups. Most patients preferred doctors to dress formally in white coats regardless of the scenario or location, whereas the majority of doctors preferred formal attire without white coats. Both groups preferred operating theatre attire in the emergency department. Our findings confirmed that patients perceived doctors in white coats to be more trustworthy, responsible, authoritative, confident, knowledgeable and caring. There is a need to educate the public about the reasons for changes in doctors' traditional dress codes.
Answer: The question of whether doctors should wear white coats is a subject of debate with differing opinions among doctors and patients. Studies have shown that a significant number of patients prefer doctors to wear white coats. For instance, 56% of patients felt that doctors should wear white coats, with the most common reason being easy identification (54%) (PUBMED:15138319). Similarly, hospitalised patients reported that white-coat-wearing improved all aspects of the patient-doctor interaction, and doctors in white coats seemed more hygienic, professional, authoritative, and scientific (PUBMED:11587285). Moreover, most patients preferred doctors to dress formally in white coats regardless of the scenario or location, associating white coats with trustworthiness, responsibility, authority, confidence, knowledge, and care (PUBMED:28367582).
Conversely, doctors have concerns about white coats, with many not wearing them due to the perception of them being an infection risk (70%) or uncomfortable (60%) (PUBMED:15138319). In Australia, for example, very few junior doctors wear white coats, with 60% opposing their use, citing reasons such as "No one else wears a white coat" (70%) (PUBMED:12472758). Additionally, studies have found that white coats can be a potential source of cross-infection, with microbial contamination present on the coats of dental staff and doctors (PUBMED:23230502, PUBMED:1773186).
In conclusion, while many patients favor the wearing of white coats by doctors for identification and symbolic reasons, doctors themselves are less inclined to wear them due to concerns about comfort and infection risk. The decision on whether doctors should wear white coats may need to balance patient preferences with practical considerations of hygiene and comfort. Further education of the public about the reasons for changes in doctors' traditional dress codes may also be necessary (PUBMED:28367582).
Instruction: Retractions in the scientific literature: is the incidence of research fraud increasing?
Abstracts:
abstract_id: PUBMED:28889329
Lack of Improvement in Scientific Integrity: An Analysis of WoS Retractions by Chinese Researchers (1997-2016). This study investigated the status quo of article retractions by Chinese researchers. The bibliometric information of 834 retractions from the Web of Science SCI-expanded database was downloaded and analysed. The results showed that the number of retractions increased in the past two decades, and misconduct such as plagiarism, fraud, and faked peer review explained approximately three quarters of the retractions. Meanwhile, a large proportion of the retractions seemed typical of deliberate fraud, which might be evidenced by retractions authored by repeat offenders of data fraud and those due to faked peer review. In addition, a majority of Chinese fraudulent authors seemed to submit articles containing possible misconduct to low-impact journals, regardless of the type of misconduct. The system of scientific evaluation, the "publish or perish" pressure Chinese researchers face, and the relatively low cost of breaching scientific integrity may be responsible for these problems. We suggest that policy-makers strengthen integrity education and impose severe sanctions, and that journal administrators reform the peer review system and issue transparent retraction notices.
abstract_id: PUBMED:21186208
Retractions in the scientific literature: is the incidence of research fraud increasing? Background: Scientific papers are retracted for many reasons including fraud (data fabrication or falsification) or error (plagiarism, scientific mistake, ethical problems). Growing attention to fraud in the lay press suggests that the incidence of fraud is increasing.
Methods: The reasons for retracting 742 English language research papers retracted from the PubMed database between 2000 and 2010 were evaluated. Reasons for retraction were initially dichotomised as fraud or error and then analysed to determine specific reasons for retraction.
Results: Error was more common than fraud (73.5% of papers were retracted for error (or an undisclosed reason) vs 26.6% retracted for fraud). Eight reasons for retraction were identified; the most common reason was scientific mistake in 234 papers (31.5%), but 134 papers (18.1%) were retracted for ambiguous reasons. Fabrication (including data plagiarism) was more common than text plagiarism. Total papers retracted per year have increased sharply over the decade (r=0.96; p<0.001), as have retractions specifically for fraud (r=0.89; p<0.001). Journals now reach farther back in time to retract, both for fraud (r=0.87; p<0.001) and for scientific mistakes (r=0.95; p<0.001). Journals often fail to alert the naïve reader; 31.8% of retracted papers were not noted as retracted in any way.
Conclusions: Levels of misconduct appear to be higher than in the past. This may reflect either a real increase in the incidence of fraud or a greater effort on the part of journals to police the literature. However, research bias is rarely cited as a reason for retraction.
abstract_id: PUBMED:27354716
Retractions in orthopaedic research: A systematic review. Objectives: Despite the fact that research fraud and misconduct are under scrutiny in the field of orthopaedic research, little systematic work has been done to uncover and characterise the underlying reasons for academic retractions in this field. The purpose of this study was to determine the rate of retractions and identify the reasons for retracted publications in the orthopaedic literature.
Methods: Two reviewers independently searched MEDLINE, EMBASE, and the Cochrane Library (1995 to current) using MeSH keyword headings and the 'retracted' filter. We also searched an independent website that reports and archives retracted scientific publications (www.retractionwatch.com). Two reviewers independently extracted data including reason for retraction, study type, journal impact factor, and country of origin.
Results: One hundred and ten retracted studies were included for data extraction. The retracted studies were published in journals with impact factors ranging from 0.000 (discontinued journals) to 13.262. In the 20-year search window, only 25 papers were retracted in the first ten years, with the remaining 85 papers retracted in the most recent decade. The most common reasons for retraction were fraudulent data (29), plagiarism (25) and duplicate publication (20). Retracted articles have been cited up to 165 times (median 6; interquartile range 2 to 19).
Conclusion: The rate of retractions in the orthopaedic literature is increasing, with the majority of retractions attributed to academic misconduct and fraud. Orthopaedic retractions originate from numerous journals and countries, indicating that misconduct issues are widespread. The results of this study highlight the need to address academic integrity when training the next generation of orthopaedic investigators.Cite this article: J. Yan, A. MacDonald, L-P. Baisi, N. Evaniew, M. Bhandari, M. Ghert. Retractions in orthopaedic research: A systematic review. Bone Joint Res 2016;5:263-268. DOI: 10.1302/2046-3758.56.BJR-2016-0047.
abstract_id: PUBMED:36228454
Scientific integrity and fraud in radiology research. Purpose: To investigate the view of radiologists on the integrity of their own and their colleagues' scientific work.
Materials And Methods: Corresponding authors of articles that were published in 12 general radiology journals in 2021 were invited to participate in a survey on scientific integrity.
Results: A total of 219 (6.2 %) of 3,511 invited corresponding authors participated. Thirteen (5.9 %) respondents reported having committed scientific fraud, and 60 (27.4 %) witnessed or suspect scientific fraud among their departmental members in the past 5 years. Misleading reporting (32.2 %), duplicate/redundant publication (26.3 %), plagiarism (15.3 %), and data manipulation/falsification (13.6 %) were the most commonly reported types of scientific fraud. Publication bias exists according to 184 (84.5 %) respondents, and 89 (40.6 %) respondents had honorary authors on their publications in the past 5 years. General confidence in the integrity of scientific publications ranged between 2 and 10 (median: 8) on a 0-10 point scale. Common topics of interest and concern among respondents were authorship criteria and assignments, perverse incentives (including the influence of money, funding, and academic promotions on the practice of research), and poorly performed research without intentional fraud.
Conclusion: Radiology researchers reported that scientific fraud and other undesirable practices such as publication bias and honorary authorship are relatively common. Their general confidence in the scientific integrity of published work was relatively high, but far from perfect. These data may trigger stakeholders in the radiology community to place scientific integrity higher on the agenda, and to initiate cultural and policy reforms to remove perverse research incentives.
abstract_id: PUBMED:17970246
Fraud and misconduct in scientific research: a definition and procedures for investigation. Scientific fraud and misconduct appear to be on the rise throughout the scientific community. Whatever the reasons for fraud and whatever the number of cases, it is important that the academic research community consider this problem in a cool and rational manner, ensuring that allegations are dealt with through fair and impartial procedures. Increasingly, governments have either sought to regulate fraud and misconduct through legislation, or they have left it to universities and research institutions to deal with at the local level. The result has been less than uniform understanding of what constitutes scientific fraud and misconduct and a great deal of variance in procedures used to investigate such allegations. In this paper, we propose a standard definition of scientific fraud and misconduct and procedures for investigation based on natural justice and fairness. The issue of fraud and misconduct should not be left to government regulation by default. The standardized definition and procedures presented here should lead to more appropriate institutional responses in dealing with allegations of scientific fraud and misconduct.
abstract_id: PUBMED:29451549
Retractions in cancer research: a systematic survey. Background: The annual number of retracted publications in the scientific literature is rapidly increasing. The objective of this study was to determine the frequency and reason for retraction of cancer publications and to determine how journals in the cancer field handle retracted articles.
Methods: We searched three online databases (MEDLINE, Embase, The Cochrane Library) from database inception until 2015 for retracted journal publications related to cancer research. For each article, the reason for retraction was categorized as plagiarism, duplicate publication, fraud, error, authorship issues, or ethical issues. Accessibility of the retracted article was defined as intact, removed, or available but with a watermark over each page. Descriptive data was collected on each retracted article including number of citations, journal name and impact factor, study design, and time between publication and retraction. The publications were screened in duplicated and two reviewers extracted and categorized data.
Results: Following database search and article screening, we identified 571 retracted cancer publications. The majority (76.4%) of cancer retractions were issued in the most recent decade, with 16.6 and 6.7% of the retractions in the prior two decades respectively. Retractions were issued by journals with impact factors ranging from 0 (discontinued) to 55.8. The average impact factor was 5.4 (median 3.54, IQR 1.8-5.5). On average, a retracted article was cited 45 times (median 18, IQR 6-51), with a range of 0-742. Reasons for retraction include plagiarism (14.4%), fraud (28.4%), duplicate publication (18.2%), error (24.2%), authorship issues (3.9%), and ethical issues (2.1%). The reason for retraction was not stated in 9.8% of cases. Twenty-nine percent of retracted articles remain available online in their original form.
Conclusions: Retractions in cancer research are increasing in frequency at a similar rate to all biomedical research retractions. Cancer retractions are largely due to academic misconduct. Consequences to cancer patients, the public at large, and the research community can be substantial and should be addressed with future research. Despite the implications of this important issue, some cancer journals currently fall short of the current guidelines for clearly stating the reason for retraction and identifying the publication as retracted.
abstract_id: PUBMED:21081306
Retractions in the scientific literature: do authors deliberately commit research fraud? Background: Papers retracted for fraud (data fabrication or data falsification) may represent a deliberate effort to deceive, a motivation fundamentally different from papers retracted for error. It is hypothesised that fraudulent authors target journals with a high impact factor (IF), have other fraudulent publications, diffuse responsibility across many co-authors, delay retracting fraudulent papers and publish from countries with a weak research infrastructure.
Methods: All 788 English language research papers retracted from the PubMed database between 2000 and 2010 were evaluated. Data pertinent to each retracted paper were abstracted from the paper and the reasons for retraction were derived from the retraction notice and dichotomised as fraud or error. Data for each retracted article were entered in an Excel spreadsheet for analysis.
Results: Journal IF was higher for fraudulent papers (p<0.001). Roughly 53% of fraudulent papers were written by a first author who had written other retracted papers ('repeat offender'), whereas only 18% of erroneous papers were written by a repeat offender (χ²=88.40; p<0.0001). Fraudulent papers had more authors (p<0.001) and were retracted more slowly than erroneous papers (p<0.005). Surprisingly, there was significantly more fraud than error among retracted papers from the USA (χ²=8.71; p<0.05) compared with the rest of the world.
Conclusions: This study reports evidence consistent with the 'deliberate fraud' hypothesis. The results suggest that papers retracted because of data fabrication or falsification represent a calculated effort to deceive. It is inferred that such behaviour is neither naïve, feckless nor inadvertent.
abstract_id: PUBMED:31659916
Journal retractions in oncology: a bibliometric study. Aim: To investigate secular trends in article retractions in the oncology literature, particularly relating to cancer treatments and data available to patients. Methods: A bibliometric analysis of article retractions from PubMed in the period 2000-2018. Results: Analysis shows that article lifetime - that is the time period from initial publication to ultimate retraction - has decreased in recent years. It also shows that the retraction rate has also increased over the same period. Furthermore, over 20% of retracted oncology publications analyzed in this study relate to treatment-relevant topics such as clinical trials and studies in the anticancer properties of supplements. Conclusion: The causes and context of these trends are discussed and reference made to the dangers of scientific misconduct in oncology.
abstract_id: PUBMED:9648608
Research fraud. In this article, problems of research fraud are discussed. Academic courses teaching scientific integrity and the impact of clinical trial fraud on good clinical research practice are mentioned.
abstract_id: PUBMED:28653168
Science as a Matter of Honour: How Accused Scientists Deal with Scientific Fraud in Japan. Practices related to research misconduct seem to have multiplied in recent years. Many cases of scientific fraud have been exposed publicly, and journals and academic institutions worldwide have deployed different measures in response. However, the influence of specific social and cultural environments on scientific fraud may vary from society to society. This article analyzes how scientists in Japan deal with accusations of scientific fraud. For this purpose, a series of scientific fraud cases that took place in Japan has been reconstructed from diverse sources. By analyzing those cases, the article examines the social basis of scientific fraud, the most relevant aspects of Japanese cultural values and traditions, and the concept of honour, which is deeply involved in the way Japanese scientists react when they are accused of and publicly exposed for scientific fraud.
Answer: The incidence of research fraud appears to be increasing, as suggested by the growing number of retractions in the scientific literature. A study analyzing retractions from the PubMed database between 2000 and 2010 found that the total number of papers retracted per year had increased sharply over the decade, with retractions specifically for fraud also showing a significant rise (PUBMED:21186208). This trend is consistent with other findings in the literature. For instance, a systematic review of retractions in orthopaedic research indicated that the rate of retractions is increasing, with the majority attributed to academic misconduct and fraud (PUBMED:27354716). Similarly, a bibliometric study of retractions in oncology literature revealed that both the article lifetime (from publication to retraction) and the retraction rate have increased in recent years (PUBMED:31659916).
Moreover, a study focusing on retractions by Chinese researchers in the Web of Science SCI-expanded database found that misconduct such as plagiarism, fraud, and faked peer review accounted for approximately three-quarters of the retractions, with a significant proportion indicative of deliberate fraud (PUBMED:28889329). Another study examining retractions in cancer research identified that a large portion of retractions were due to academic misconduct, with the frequency of retractions in cancer research increasing at a similar rate to all biomedical research retractions (PUBMED:29451549).
The increase in retractions due to fraud has been attributed to various factors, including the pressure to publish, the "publish or perish" culture, and the relatively low costs of scientific integrity (PUBMED:28889329). Additionally, fraudulent papers tend to have more authors, are retracted more slowly, and are often written by authors with previous retractions, suggesting a pattern of deliberate deception (PUBMED:21081306).
In summary, the evidence suggests that the incidence of research fraud is increasing, as reflected by the growing number of retractions in the scientific literature. This trend highlights the need for more robust measures to ensure scientific integrity and address academic misconduct. |
Instruction: Endoscopic ultrasound-guided fine needle aspiration and endobronchial ultrasound-guided transbronchial needle aspiration: Are two better than one in mediastinal staging of non-small cell lung cancer?
Abstracts:
abstract_id: PUBMED:24930616
Endoscopic ultrasound-guided fine needle aspiration and endobronchial ultrasound-guided transbronchial needle aspiration: Are two better than one in mediastinal staging of non-small cell lung cancer? Objective: The role of combined endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) and endoscopic ultrasound-guided fine needle aspiration (EUS-FNA) with a single bronchoscope is poorly understood. The purpose of the present study was to elucidate the roles of EBUS-TBNA and EUS-FNA with a single bronchoscope in the preoperative hilar and mediastinal staging of non-small cell lung cancer (NSCLC).
Methods: A total of 150 patients with potentially resectable known or suspected NSCLC were enrolled in our prospective study. EBUS-TBNA was performed, followed by EUS-FNA, with an EBUS bronchoscope for N2 and N3 nodes≥5 mm in the shortest diameter on ultrasound images, in a single session.
Results: EBUS-TBNA was performed for 257 lymph nodes and EUS-FNA for 176 lymph nodes. Of the 150 patients, 146 had a final diagnosis of NSCLC. Of these 146 patients, 33 (23%) had N2 and/or N3 nodal metastases. The sensitivity of EBUS-TBNA, EUS-FNA, and the combined approach per patient was 52%, 45%, and 73%, respectively (EBUS-TBNA vs the combined approach, P=.016, McNemar's test). The corresponding negative predictive value was 88%, 86%, and 93%. Two patients (1%) developed severe cough from EBUS-TBNA.
Conclusions: The combined endoscopic approach with EBUS-TBNA and EUS-FNA is a safe and accurate method for preoperative hilar and mediastinal staging of NSCLC, with better results than with each technique by itself.
abstract_id: PUBMED:23749884
Endoscopic and endobronchial ultrasound-guided needle aspiration in the mediastinal staging of non-small cell lung cancer. Invasive staging of mediastinal lymph nodes is recommended for the majority of patients with potentially resectable non-small cell lung cancer. In the past, 'blind' transbronchial needle aspiration during bronchoscopy and mediastinoscopy, a surgical procedure conducted under general anesthesia, were the only diagnostic methods. The latter is still considered the 'gold standard'; however, two novel, minimally-invasive techniques have emerged for the evaluation of the mediastinum: endoscopic (transesophageal) and endobronchial ultrasound--both performed using a dedicated echoendoscope, facilitating the ultrasound-guided, real-time aspiration of mediastinal lymph nodes. These methods are well-tolerated under local anesthesia and moderate sedation, with very low complication rates. Current guidelines on the invasive mediastinal staging of lung cancer still state that a negative needle aspiration result from these methods should be confirmed by mediastinoscopy. As more experience is gathered and echoendoscopes evolve, a thorough endosonographic evaluation of the mediastinum by both techniques, will obviate the need for surgical staging in the vast majority of patients and reduce the number of futile thoracotomies.
abstract_id: PUBMED:35118308
Role of endobronchial ultrasound-guided transbronchial needle aspiration in staging of lung cancer: a thoracic surgeon's perspective. In potentially resectable non-small cell lung cancer (NSCLC) accurate mediastinal staging is crucial not only to offer the optimal management but also to avoid unnecessary surgery. Mediastinal staging is generally performed by the use of imaging techniques (computed tomography and positron emission tomography). However, the accuracy of radiological imaging in mediastinal staging is suboptimal. Therefore, additional invasive mediastinal staging is frequently required to select patients who can benefit from a neoadjuvant treatment. In recent years, endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) has progressively replaced mediastinoscopy as a test for invasive mediastinal staging. The considerable potential of EBUS-TBNA as minimally invasive staging method has been understood by pulmonologists since the early 2000s but only recently by thoracic surgeons. The clinical impact of this diagnostic technology has been broadly highlighted in the literature and EBUS-TBNA is currently considered the test of first choice in preoperative nodal staging of NSCLC. We analyze the actual role of EBUS-TBNA in invasive mediastinal staging of NSCLC patients from the thoracic surgeon point of view, with particular emphasis on the performance characteristics of this endoscopic diagnostic method as well as its clinical use within the published guidelines.
abstract_id: PUBMED:32736278
Endobronchial ultrasound-guided transbronchial needle aspiration for mediastinal lymph node staging in patients with typical pulmonary carcinoids. Background: Pulmonary carcinoids, which are well-differentiated lung neuroendocrine carcinomas, account for only 1-2% of primary lung malignancies. Although fluorodeoxyglucose positron-emission tomography/computed tomography performs poorly in the identification of mediastinal lymph node metastases, particularly for pulmonary carcinoids, endobronchial ultrasound-guided (EBUS) transbronchial needle aspiration (TBNA) may be a useful means of preoperative nodal assessment in patients with these conditions. However, the diagnostic performance of EBUS-TBNA in this population is unknown. This study was designed to determine the sensitivity of EBUS for mediastinal staging in patients with typical carcinoid.
Study Design And Methods: A retrospective review of all patients with carcinoids who underwent EBUS-TBNA and/or surgical resection with lymphadenectomy at The University of Texas MD Anderson Cancer Center was performed. The sensitivity of EBUS-TBNA in the diagnosis of mediastinal lymph node metastases was determined.
Results: Of the 212 patients with pulmonary carcinoids we identified, 137 had surgery with no preoperative EBUS-TBNA, 68 had EBUS-TBNA followed by surgery, and 7 had EBUS-TBNA only. The sensitivity of EBUS-TBNA in the diagnosis of mediastinal lymph node metastases was 77.78% overall (95% CI, 57.7-91.3%) and 87.5% (95% CI, 67.6-97.3%) when only patients with EBUS-TBNA-accessible lymph nodes were considered.
Discussion: The sensitivity of EBUS-TBNA for the diagnosis of mediastinal lymph node metastases of pulmonary carcinoids was slightly lower than that reported previously for non-small cell lung cancer. Preoperative EBUS-TBNA identified nodal metastases not previously identified by imaging.
abstract_id: PUBMED:34511572
Mediastinal Lymph Node Metastasis of Esophageal Cancer with Esophageal Stenosis Diagnosed via Transesophageal Endoscopic Ultrasound with Bronchoscope-guided Fine-needle Aspiration. An 80-year-old man underwent follow-up examinations after endoscopic submucosal dissection (ESD) for esophageal cancer. Computed tomography showed enlarged lymph nodes of the right recurrent nerve. The patient had esophageal stenosis due to repeated ESD for multiple esophageal tumors. The stenosis made the passage of an endoscopic ultrasound (EUS) scope through the esophagus difficult. Thus, an endobronchial ultrasound bronchoscope, which had a thinner diameter than that of the EUS scope, was used for transesophageal endoscopic ultrasound with bronchoscope-guided fine-needle aspiration. This technique led to the diagnosis of mediastinal lymph node metastasis of esophageal cancer.
abstract_id: PUBMED:30246650
The role of endobronchial and endoscopic ultrasound-guided fine needle aspiration for mediastinal nodal staging of non-small-cell lung cancer. Introduction: Mediastinal and hilar nodal staging is one of the key factors in selecting among treatment modalities in patients with non-small-cell lung cancer (NSCLC). The aim of the present study was to determine the diagnostic yields of endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA), endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA), and the combined EBUS-TBNA and EUS-FNA modalities for nodal staging in potentially operable NSCLC patients.
Materials And Methods: Twenty consecutive patients were prospectively enrolled in the study between March 2014 and November 2015. All patients had a diagnosis of potentially operable NSCLC before the endosonographic procedures.
Result: Thirty lymph nodes were sampled by EBUS-TBNA and 17 lymph nodes were sampled by EUS-FNA in all 20 patients. The sensitivity, specificity, positive predictive value, negative predictive value and diagnostic accuracy of F-18 fluorodeoxyglucose positron emission tomography with computed tomography (PET-CT), EBUS-TBNA, EUS-FNA and combined EBUS-TBNA and EUS-FNA were 100%, 33.3%, 64.7%, 100% and 70.0%; 81.8%, 100%, 100%, 81.8% and 90%; 81.8%, 100%, 100%, 75% and 88.2%; 90.9%, 100%, 100%, 90.0% and 95.0%, respectively.
Conclusions: The combined EBUS-TBNA and EUS-FNA technique is a successful procedure for nodal staging in potentially operable NSCLC patients.
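For readers who want to see how diagnostic yields like those reported above are derived, the short Python sketch below computes sensitivity, specificity, positive and negative predictive values, and accuracy from the four cells of a 2x2 confusion matrix. The counts used are illustrative placeholders only, not data from any of the cited studies.

def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-test metrics from 2x2 confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # true-positive rate
    specificity = tn / (tn + fp)                 # true-negative rate
    ppv = tp / (tp + fp)                         # positive predictive value
    npv = tn / (tn + fn)                         # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "accuracy": accuracy}

# Illustrative counts only: 9 true positives, 0 false positives,
# 2 false negatives, 9 true negatives.
print(diagnostic_metrics(tp=9, fp=0, fn=2, tn=9))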
abstract_id: PUBMED:30201065
Endobronchial Ultrasound-Guided Transbronchial Needle Aspiration for the Diagnosis and Genotyping of Lung Cancer. Background: Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) has emerged as an innovative technique for the diagnosis and staging of lung cancer, but whether the procedure can provide enough tissue for the detection of gene mutations remains to be defined. Here we evaluated the efficacy of lung cancer diagnosis and gene analysis using samples obtained via EBUS-TBNA.
Methods: Patients with suspected lung cancer and mediastinal lesions were referred for EBUS-TBNA. Diagnoses and sub-classifications were made by pathologists. Samples with a non-squamous non-small cell lung cancer subtype were tested for EGFR and/or ALK mutations.
Results: A total of 377 patients were included in this study. The median number of needle passes was 2.07. Lung cancer was diagnosed in 213 patients. The diagnostic accuracy for malignancy was 92%. Epidermal growth factor receptor (EGFR) mutation analysis, anaplastic lymphoma kinase (ALK) fusion gene analysis, and combined analysis of both genes were successfully performed in 84 (90%), 105 (95%), and 79 (90%) patients, respectively. The number of needle passes and lymph node diameter were not associated with the efficacy of gene testing in univariate analysis. However, samples of the adenocarcinoma subtype showed a tendency toward higher genotyping efficacy.
Conclusions: Tissue samples obtained through EBUS-TBNA are sufficient for the pathological diagnosis and genetic analysis of lung cancer. The pathological type of the sample affected genotyping efficacy.
abstract_id: PUBMED:25408170
Substernal thyroid biopsy using Endobronchial Ultrasound-guided Transbronchial Needle Aspiration. Substernal thyroid goiter (STG) represents about 5.8% of all mediastinal lesions (1). There is a wide variation in the published incidence rates due to the lack of a standardized definition for STG. Biopsy is often required to differentiate benign from malignant lesions. Unlike in the cervical thyroid, the overlying sternum precludes ultrasound-guided percutaneous fine needle aspiration of STG. Consequently, surgical mediastinoscopy is performed in the majority of cases, causing significant procedure-related morbidity and cost to healthcare. Endobronchial Ultrasound-guided Transbronchial Needle Aspiration (EBUS-TBNA) is a frequently used procedure for the diagnosis and staging of non-small cell lung cancer (NSCLC). Minimally invasive needle biopsy of lesions adjacent to the airways can be performed under real-time ultrasound guidance using EBUS. Its safety and efficacy are well established, with over 90% sensitivity and specificity. The ability to perform EBUS as an outpatient procedure with same-day discharge offers distinct morbidity and financial advantages over surgery. As physicians performing EBUS have gained procedural expertise, they have attempted to diversify its role in the diagnosis of non-lymph node thoracic pathologies. We propose here a role for EBUS-TBNA in the diagnosis of substernal thyroid lesions, along with a step-by-step protocol for the procedure.
abstract_id: PUBMED:25005839
Determinants of false-negative results in non-small-cell lung cancer staging by endobronchial ultrasound-guided needle aspiration. Objectives: False-negative results of endobronchial ultrasound-guided transbronchial needle aspiration in non-small-cell lung cancer staging have shown significant variability in previous studies. The aim of this study was to identify procedure- and tumour-related determinants of endobronchial ultrasound-guided transbronchial needle aspiration false-negative results.
Methods: We conducted a prospective study that included non-small-cell lung cancer patients staged as N0/N1 by endobronchial ultrasound-guided transbronchial needle aspiration and undergoing therapeutic surgery. The frequency of false-negative results in the mediastinum was calculated. Procedure-related determinants (first) and tumour-related determinants (second) of false-negative results in stations reachable and non-reachable by endobronchial ultrasound were identified by multivariate logistic regression.
Results: False-negative endobronchial ultrasound-guided transbronchial needle aspiration results were identified in 23 of 165 enrolled patients (13.9%), mainly in stations reachable by endobronchial ultrasound (17 cases, 10.3%). False-negative results were related to the extensiveness of endobronchial ultrasound sampling: their prevalence was low (2.4%) when sampling of three mediastinal stations was satisfactory, but rose above 10% when this requirement was not fulfilled (P = 0.043). In the multivariate analysis, abnormal mediastinum on computed tomography/positron emission tomography [odds ratio (OR) 7.77, 95% confidence interval (CI) 2.19-27.51, P = 0.001] and extensiveness of satisfactory sampling of mediastinal stations (OR 0.37, 95% CI 0.16-0.89, P = 0.026) were statistically significant risk factors for false-negative results in stations reachable by endobronchial ultrasound. False-negative results in non-reachable nodes were associated with a left-sided location of the tumour (OR 10.11, 95% CI 1.17-87.52, P = 0.036).
Conclusions: False-negative endobronchial ultrasound-guided transbronchial needle aspiration results were observed in nearly 15% of non-small-cell lung cancer patients, but in only 3% when satisfactory samples were obtained from three mediastinal stations. False-negative results in stations reachable by endobronchial ultrasound were associated with the extensiveness of sampling, and those in stations out of reach of endobronchial ultrasound with left-sided tumours. These results suggest that satisfactory sampling of at least three mediastinal stations may be a quality criterion to be recommended for EBUS-TBNA staging.
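As a brief aside on the statistics reported above, odds ratios such as OR 7.77 with their confidence intervals are typically obtained by exponentiating the coefficients (and coefficient confidence limits) of a fitted logistic regression model. The sketch below illustrates this with Python and the statsmodels library on a synthetic data frame; the variable names (false_negative, abnormal_mediastinum, stations_sampled) are hypothetical placeholders, not the study's actual dataset.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic example: one row per patient, a binary outcome and two predictors.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "false_negative": rng.binomial(1, 0.15, size=165),
    "abnormal_mediastinum": rng.binomial(1, 0.3, size=165),
    "stations_sampled": rng.integers(1, 4, size=165),
})

X = sm.add_constant(df[["abnormal_mediastinum", "stations_sampled"]])
fit = sm.Logit(df["false_negative"], X).fit(disp=0)

# Exponentiating the coefficients gives odds ratios; the same applies to their CIs.
print(np.exp(fit.params))
print(np.exp(fit.conf_int()))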
abstract_id: PUBMED:29082634
Learning curve and advantages of endobronchial ultrasound-guided transbronchial needle aspiration as a first-line diagnostic and staging procedure. Background: Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is now the standard for mediastinal lymph node staging of lung cancer. Training and maintenance of technical skills are very important in order to apply new techniques in clinical practice.
Methods: A retrospective chart review was performed of patients who underwent EBUS-TBNA from November 2009 to December 2015. We assessed the learning curve, accuracy (%), and whether this procedure shortened the duration of lung cancer staging.
Results: The EBUS-TBNA learning curve continued to improve beyond 120 procedures. Diagnostic accuracy was similar between benign and malignant populations. There was no difference in the learning curve between the groups. Non-small cell lung cancer patients who underwent EBUS-TBNA as the first investigative procedure underwent fewer subsequent investigative procedures (1.47 vs. 2.05; P < 0.001), and had a shorter staging duration (4.52 vs. 11.05 days; P = 0.006) compared to those who underwent other procedures for the first investigation.
Conclusion: EBUS-TBNA should be one of the preferred options for lung cancer diagnosis and staging because it reduces the staging duration compared to the use of other invasive procedures in initial investigation.
Answer: Yes, combining endoscopic ultrasound-guided fine needle aspiration (EUS-FNA) and endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) appears to be more effective than using either technique alone in the mediastinal staging of non-small cell lung cancer (NSCLC). A study that enrolled 150 patients with potentially resectable NSCLC found that the sensitivity of EBUS-TBNA, EUS-FNA, and the combined approach per patient was 52%, 45%, and 73%, respectively. The combined approach also had a higher negative predictive value of 93% compared to 88% for EBUS-TBNA and 86% for EUS-FNA. This suggests that the combined endoscopic approach is a safe and more accurate method for preoperative hilar and mediastinal staging of NSCLC (PUBMED:24930616).
Furthermore, the combined EBUS-TBNA and EUS-FNA technique has been shown to be a successful procedure for nodal staging in potentially operable NSCLC patients, with a diagnostic accuracy of 95% (PUBMED:30246650). The use of both techniques may reduce the need for surgical staging in the majority of patients and decrease the number of unnecessary thoracotomies (PUBMED:23749884). Additionally, EBUS-TBNA has progressively replaced mediastinoscopy as the test of choice for invasive mediastinal staging, being recognized for its minimal invasiveness and high potential as a staging method (PUBMED:35118308).
In summary, the evidence suggests that the combined use of EUS-FNA and EBUS-TBNA is superior to either method alone in the mediastinal staging of NSCLC, offering better sensitivity and negative predictive values, which can lead to more accurate staging and potentially better patient outcomes. |
Instruction: Is obesity related to worse control in children with asthma?
Abstracts:
abstract_id: PUBMED:24814076
Is obesity related to worse control in children with asthma? Introduction: Asthma and obesity are related diseases; however, the influence of obesity on asthma severity is not yet clear. Therefore, the aim of our study was to evaluate the association between obesity and asthma control, evaluated on the basis of symptoms and the asthma control questionnaire (ACQ).
Materials And Methods: We consecutively enrolled 98 children with asthma aged 4 to 14 years and recorded their disease characteristics and severity parameters as well as their symptom scores. All children filled in the ACQ. Children were classified as obese or non-obese according to body mass index. Obesity was defined as a body mass index over the 90th percentile.
Results: The mean age of the children in the obese group (n = 27) was 8.1 ± 2.6 years, while that in the non-obese group (n = 71) was 8.6 ± 2.9 years (p = 0.41). Asthma symptom scores in the obese and non-obese groups were not significantly different (p = 0.73). Children in the obese group had lower ACQ scores than the non-obese group (1.2 ± 0.9 vs 1.7 ± 1.0, p = 0.04); however, this significance was lost when the analysis was controlled for age and gender in the regression model.
Conclusion: The results of this study suggest that obesity is not significantly associated with worse asthma control when adjusted for age and gender.
abstract_id: PUBMED:34117623
Association of Symptoms of Sleep-Related Breathing Disorders with Asthma Control in Indian Children. Objective: To explore the association of symptoms of sleep-related breathing disorders (SRBD) with asthma control in Indian children.
Methods: This study was carried out in the pediatric chest clinic of a tertiary care center in western India. Children from 6 to 18 y of age with physician-diagnosed asthma were included in the study. A validated pediatric sleep questionnaire, the SRBD scale, was used to screen for symptoms of SRBD. At the same time, the Asthma Control Questionnaire (ACQ) was administered to assess asthma control.
Results: A total of 207 (73% boys) children with asthma were enrolled; the median age was 10 (7, 13) y. Inattention and/or hyperactivity was the most common SRBD symptom, observed in 125 (60.4%) children; daytime sleepiness, mouth breathing, snoring, and night-time breathing problems were observed in 92 (44.5%), 91 (44%), 77 (37.2%), and 68 (32.8%) children, respectively. The SRBD score showed a near-linear correlation with the ACQ score (r = 0.28, p < 0.001). The score was positive in 52 (25.1%) children. A positive SRBD score was statistically more common in partly or poorly controlled asthma (aOR 2.5; 95% CI: 1.2-5.0; p = 0.01). However, a positive score did not show a statistically significant association with gender, being underweight, obesity, allergic rhinitis, compliance with therapy, or inhalation technique.
Conclusion: SRBD symptoms are common in children with asthma. They showed a statistically significant association with partly or poorly controlled asthma. Therefore, it would be interesting to look for SRBD symptoms in children with partly or poorly controlled asthma.
abstract_id: PUBMED:38365468
Analysis of risk factors for depression and anxiety related to the degree of asthma control in children according to gender. Objective: The purpose of the study was to investigate whether risk factors involved in the degree of asthma control were the same for children of both genders.
Methods: This cross-sectional study collected relevant data from 320 children with asthma attending the respiratory asthma clinic at a local children's hospital. All the patients completed the Asthma Control Test (ACT) or the Childhood Asthma Control Test (cACT), lung-function-related tests, the Children's Depression Inventory (CDI), the Screening Scale for Anxiety-Related Mood Disorders (SCARED), and the Family Personal Information Questionnaire.
Results: The study found that gender (p=0.034) was a risk factor for poor asthma control and that girls (odds ratio [OR]=1.669, p=0.042) were more likely to have poor asthma control than boys. Univariate logistic regression analysis found that severe wasting (OR=0.075, p=0.021), depression (OR=43.550, p<0.001), anxiety (OR=4.769, p=0.036), FEV1% (OR=0.970, p=0.043), FEV1/FVC% (OR=0.921, p=0.008), and PEF% (OR=0.961, p=0.012) were risk factors for poor asthma control in girls.
Conclusion: The risk factors for the degree of asthma control in children with asthma appeared to vary according to gender.
abstract_id: PUBMED:34480816
Health-related quality of life of food-allergic children compared with healthy controls and other diseases. Background: Food allergy is a potentially life-threatening disease, affecting up to 10% of the pediatric population.
Objective: The aim of our study was to assess the health-related quality of life (HRQL) of food-allergic patients compared with the general population and patients with other chronic diseases with dietary or allergic burden, in a cross-sectional study.
Methods: We recruited patients aged 8-17 years diagnosed with food allergy and matched healthy controls from schools. We also included patients with asthma, inflammatory bowel disease, celiac disease, diabetes, obesity, and eating disorders. We used the CHQ-CF87 questionnaire for generic HRQL assessment. Food allergy HRQL was also assessed using specific questionnaires: the Food Allergy Quality of Life Questionnaire (FAQLQ) and the Food Allergy Independent Measure (FAIM).
Results: One hundred and thirty-five food-allergic children, 255 children with chronic diseases, and 463 healthy controls were included in the analyses. Food-allergic patients had a better HRQL than healthy controls in the Behavior (BE), Bodily Pain (BP), Family Activities (FA), and Mental Health (MH) domains and a worse HRQL in the General Health Perception (GH) domain (p = .048). Food-allergic patients exhibited a better HRQL than patients affected by other chronic diseases, notably diabetes. Although an epinephrine autoinjector had been prescribed to 87.4% of the food-allergic children, only 54.2% of them carried it at all times.
Conclusion: Food-allergic patients display overall good HRQL compared with the general population and those with other diseases with daily symptoms and treatments, in line with recent improvements in food allergy management.
abstract_id: PUBMED:34721414
Dexamethasone-Induced FKBP51 Expression in CD4+ T-Lymphocytes Is Uniquely Associated With Worse Asthma Control in Obese Children With Asthma. Introduction: There is evidence that obesity, a risk factor for asthma severity and morbidity, is associated with a unique asthma phenotype that is less atopic and less responsive to inhaled corticosteroids (ICS). Peripheral blood mononuclear cells (PBMC) are important to the immunologic pathways of obese asthma and steroid resistance. However, the cellular source associated with steroid resistance has remained elusive. We compared the lymphocyte landscape of obese children with asthma with that of matched normal-weight children with asthma and assessed its relationship to asthma control.
Methods: High-dimensional flow cytometry of PBMC at baseline and after dexamethasone stimulation was performed to characterize lymphocyte subpopulations, T-lymphocyte polarization, proliferation (Ki-67+), and expression of the steroid-responsive protein FK506-binding protein 51 (FKBP51). T-lymphocyte populations were compared between obese and normal-weight participants, and an unbiased, unsupervised clustering analysis was performed. Differentially expressed clusters were compared with asthma control, adjusted for ICS and exhaled nitric oxide.
Results: In the obese population, there was an increased cluster of CD4+ T-lymphocytes expressing Ki-67 and FKBP51 at baseline and CD4+ T-lymphocytes expressing FKBP51 after dexamethasone stimulation. CD4+ Ki-67 and FKBP51 expression at baseline showed no association with asthma control. Dexamethasone-induced CD4+ FKBP51 expression was associated with worse asthma control in obese participants with asthma. FKBP51 expression in CD8+ T cells and CD19+ B cells did not differ among groups, nor did polarization profiles for Th1, Th2, Th9, or Th17 percentage.
Discussion: Dexamethasone-induced CD4+ FKBP51 expression is uniquely associated with worse asthma control in obese children with asthma and may underlie the corticosteroid resistance observed in this population.
abstract_id: PUBMED:26834184
Gastro-oesophageal reflux and worse asthma control in obese children: a case of symptom misattribution? Background: For unknown reasons, obese children report greater asthma symptoms. Asthma and obesity both independently associate with gastro-oesophageal reflux symptoms (GORS). Determining whether obesity affects the link between GORS and asthma will help elucidate the obese-asthma phenotype.
Objective: To extend our previous work and determine the degree of association between GORS and the asthma phenotype.
Methods: We conducted a cross-sectional study of lean (BMI 20th-65th percentile) and obese (BMI ≥95th percentile) children aged 10-17 years old with persistent, early-onset asthma. Participants contributed demographics, GORS and asthma questionnaires, and lung function data. We determined associations between weight status, GORS, and asthma outcomes using multivariable linear and logistic regression. Findings were replicated in a second well-characterised cohort of asthmatic children.
Results: Obese children had seven times higher odds of reporting multiple GORS (OR=7.7, 95% CI 1.9 to 31.0, interaction p value=.004). Asthma symptoms were closely associated with GORS scores in obese patients (r=0.815, p<0.0001) but not in lean patients (r=0.291, p=0.200; interaction p value=0.003). Higher GORS scores were associated with higher FEV1 percent predicted (p=0.003), lower airway resistance (R10, p=0.025), and improved airway reactance (X10, p=0.005), but significantly worse asthma control (Asthma Control Questionnaire, p=0.007). A significant but weaker association between GORS and asthma symptoms was seen in lean compared with obese children in the replication cohort.
Conclusion: GORS are more likely to be associated with asthma symptoms in obese children. Better lung function among children reporting gastro-oesophageal reflux and asthma symptoms suggests that misattribution of GORS to asthma may be a contributing mechanism to excess asthma symptoms in obese children.
abstract_id: PUBMED:29273557
Overweight/obesity status in preschool children associates with worse asthma but robust improvement on inhaled corticosteroids. Background: Overweight/obesity (OW) is linked to worse asthma and poorer inhaled corticosteroid (ICS) response in older children and adults.
Objective: We sought to describe the relationships between OW and asthma severity and response to ICS in preschool children.
Methods: This post hoc study of 3 large multicenter trials involving 2- to 5-year-old children compared annualized asthma symptom days and exacerbations among normal weight (NW) (body mass index: 10th-84th percentiles) versus OW (body mass index: ≥85th percentile) participants. Participants had been randomized to daily ICS, intermittent ICS, or daily placebo. Simple and multivariable linear regression was used to compare body mass index groups.
Results: Within the group not treated with a daily controller, OW children had more asthma symptom days (90.7 vs 53.2, P = .020) and exacerbations (1.4 vs 0.8, P = .009) than NW children did. Within the ICS-treated groups, OW and NW children had similar asthma symptom days (daily ICS: 47.2 vs 44.0 days, P = .44; short-term ICS: 61.8 vs 52.9 days, P = .46; as-needed ICS: 53.3 vs 47.3 days, P = .53) and similar exacerbations (daily ICS: 0.6 vs 0.8, P = .10; short-term ICS: 1.1 vs 0.8, P = .25; as-needed ICS: 1.0 vs 1.1, P = .72). Compared with placebo, daily ICS in OW led to fewer annualized asthma symptom days (90.7 vs 41.2, P = .004) and exacerbations (1.4 vs 0.6, P = .006), while similar protective ICS effects were less apparent among NW.
Conclusions: In preschool children off controller therapy, OW is associated with greater asthma impairment and exacerbations. However, unlike older asthmatic patients, OW preschool children do not demonstrate reduced responsiveness to ICS therapy.
abstract_id: PUBMED:36420526
Obesity-related pediatric asthma: relationships between pulmonary function and clinical outcomes. Objective: We hypothesized that children with obesity-related asthma would have worse self-reported asthma control, report an increased number of asthma symptoms and have lower FEV1/FVC associated with worse clinical asthma outcomes compared to children with asthma only.
Methods: Cross-sectional analyses examined 218 children (obesity-related asthma = 109, asthma only = 109), ages 7-15, who were recruited from clinics and hospitals within the Bronx, NY. Pulmonary function was assessed by forced expiratory volume in the first second (percent predicted FEV1) and the ratio of FEV1 to the forced vital capacity of the lungs (FEV1/FVC). Structural equation modeling examined whether pulmonary function was associated with asthma control and clinical outcomes between groups.
Results: Lower percent predicted FEV1 was associated with increased hospitalizations (p = 0.03) and oral steroid bursts in the past 12 months (p = 0.03) in the obesity-related asthma group but not in the asthma-only group. FEV1/FVC was also associated with increased hospitalizations (p = 0.02) and oral steroid bursts (p = 0.008) in the obesity-related asthma group but not the asthma-only group. Lower FEV1/FVC was associated with the number of asthma symptoms endorsed in the asthma-only group but not in the obesity-related asthma group. Percent predicted FEV1 and FEV1/FVC were not associated with asthma control in either group.
Conclusions: Pulmonary function was associated with oral steroid bursts and hospitalizations but not self-reported asthma control, suggesting the importance of incorporating measures of pulmonary function into the treatment of pediatric obesity-related asthma.
abstract_id: PUBMED:32930511
Obesity-related asthma in children: A role for vitamin D. Excess adipose tissue predisposes to an enhanced inflammatory state and can contribute to the pathogenesis and severity of asthma. Vitamin D has anti-inflammatory properties, and low serum levels are seen in children with asthma and in children with obesity. Here we review the intersection of asthma, obesity, and hypovitaminosis D in children. Supplementation with vitamin D has been proposed as a simple, safe, and inexpensive adjunctive therapy in a number of disease states. However, little research has examined the pharmacokinetics of vitamin D and its therapeutic potential in children who suffer from obesity-related asthma.
abstract_id: PUBMED:36457155
Asthma control in normal weight and overweight/obese asthmatic children following adenotonsillectomy. Objectives: Childhood adenotonsillar hypertrophy (ATH) with sleep-disordered breathing (SDB) frequently occurs concomitantly with asthma. An association between adenotonsillectomy and a reduction in asthma severity has been reported. We describe changes in asthma control in nonobese/normal-weight and obese/overweight children undergoing adenotonsillectomy for SDB.
Methods: This prospective, nonrandomized cohort trial with 6-month follow-up at a tertiary children's hospital enrolled 41 children with persistent asthma undergoing adenotonsillectomy for SDB. Children with significant chronic medical conditions, premature birth (< 28 weeks), or recent respiratory infection were excluded. Patients were stratified by baseline BMI into nonobese/normal weight (BMI < 85th percentile) and obese/overweight (BMI > 85th percentile). The primary outcome was the change in Childhood Asthma Control Test (cACT) scores 3 and 6 months following adenotonsillectomy. The secondary outcome was improvement in the Pediatric Asthma Caregiver's Quality of Life Questionnaire (PACQLQ) 3 and 6 months following adenotonsillectomy.
Results: Baseline characteristics were similar except for anthropometric measures and mean PACQLQ (P = .03). Nonobese/normal-weight children (n = 26) had statistically significant improvement in cACT scores at 3 (22.80 ± 2.33 vs. 17.86 ± 3.53, P < .001) and 6 (20.71 ± 3.29 vs. 18.24 ± 4.16, P = .044) months compared with baseline. PACQLQ scores also improved at 3 (6.20 ± 0.87 vs. 4.56 ± 1.12, P < .001) and 6 (6.36 ± 0.72 vs. 4.93 ± 0.96, P < .001) months. Obese/overweight children (n = 10) had significant improvement in cACT scores at 6 months (20.00 ± 3.90 vs. 15.00 ± 6.90, P = .048). The change in cACT scores at 3 months (17.86 ± 3.53 vs. 14.86 ± 6.31, P = .272) was not significantly different. PACQLQ scores improved at 3 (5.47 ± 1.09 vs. 3.70 ± 0.85, P < .001) and 6 (5.75 ± 2.19 vs. 3.67 ± 1.04, P = .016) months.
Conclusion: Nonobese/normal-weight children undergoing adenotonsillectomy demonstrated significant improvement in asthma control scores at 3 and 6 months, and obese/overweight children at 6 months. Using the PACQLQ, caregiver quality of life improved for all children at 3 and 6 months. Surgical management of ATH in children with comorbid SDB and asthma is a good treatment option.
Answer: The relationship between obesity and asthma control in children is complex and the evidence from the provided abstracts suggests that obesity may influence asthma control, but the association is not straightforward.
One study found that obesity is not significantly associated with worse asthma control when adjusted for age and gender (PUBMED:24814076). Another study reported that symptoms of sleep-related breathing disorders (SRBD), which are common in children with asthma, showed a statistically significant association with partly or poorly controlled asthma, but a positive SRBD score was not significantly associated with obesity (PUBMED:34117623).
A study analyzing risk factors for depression and anxiety related to the degree of asthma control in children according to gender found that gender was a risk factor for poor asthma control, with girls more likely to have poor asthma control than boys. However, the study did not directly link obesity to asthma control (PUBMED:38365468).
Research on food-allergic children indicated that these patients displayed overall good health-related quality of life (HRQL) compared with the general population and those with other diseases, but this study did not specifically address the impact of obesity on asthma control (PUBMED:34480816).
A study on obese children with asthma found that dexamethasone-induced FKBP51 expression in CD4+ T-lymphocytes is uniquely associated with worse asthma control in this population, suggesting a potential mechanism for corticosteroid resistance observed in obese children with asthma (PUBMED:34721414).
Another study suggested that gastro-oesophageal reflux symptoms (GORS) are more likely to associate with asthma symptoms in obese children, indicating that misattribution of GORS to asthma may contribute to excess asthma symptoms in this group (PUBMED:26834184).
A study on preschool children found that overweight/obesity is associated with greater asthma impairment and exacerbations off controller therapy, but overweight/obese preschool children did not demonstrate reduced responsiveness to inhaled corticosteroid (ICS) therapy (PUBMED:29273557).
Research on obesity-related pediatric asthma found that pulmonary function was associated with oral steroid bursts and hospitalizations but not self-reported asthma control, suggesting the importance of incorporating measures of pulmonary function into the treatment of pediatric obesity-related asthma (PUBMED:36420526). |
Instruction: Intracellular bacteria in Hodgkin's disease and sclerosing mediastinal B-cell lymphoma: sign of a bacterial etiology?
Abstracts:
abstract_id: PUBMED:12362281
Intracellular bacteria in Hodgkin's disease and sclerosing mediastinal B-cell lymphoma: sign of a bacterial etiology? Background: The aetiology of Hodgkin's disease is still unknown more than 160 years after its original description. In recent years a viral aetiology was the preferred hypothesis. Epidemiological, clinical, laboratory, and histological findings, however, point rather to a bacterial aetiology.
Methods: In the histological work-up of tissues from patients suffering from malignant lymphoma, periodic acid-Schiff (PAS) stains are routinely performed. In several bacterial infections, intracellular PAS-positive material can be observed. We examined PAS-stained slides at a magnification of 1000x from six Hodgkin and twelve non-Hodgkin patients.
Results: We found PAS-positive, diastase-resistant intracellular rods and spheres in all Hodgkin patients and in all of the six patients suffering from sclerosing mediastinal B-cell lymphomas, but not in the other non-Hodgkin lymphomas.
Conclusions: The diastase-resistant PAS-positive structures are compatible with intracellular bacteria. After gastric MALT lymphoma and gastric non-cardia adenocarcinoma, it appears that Hodgkin's disease and sclerosing mediastinal B-cell lymphomas may also be human tumors related to bacteria.
abstract_id: PUBMED:25805591
Emerging biological insights and novel treatment strategies in primary mediastinal large B-cell lymphoma. While primary mediastinal large B-cell lymphoma (PMBCL) is considered to be a subtype of diffuse large B-cell lymphoma, it is a distinct clinicopathologic entity, with clinical and biological features closely resembling nodular sclerosing Hodgkin lymphoma. Recent studies have highlighted the shared biology of these two entities and identified novel critical pathways of lymphomagenesis, including the presence of distinct mutations. Mediastinal grey zone lymphomas with features in between PMBCL and nodular sclerosing Hodgkin lymphoma have been described as the missing link between the two parent entities. While the standard therapeutic approach to PMBCL has been immunochemotherapy followed by mediastinal radiation, strategies that obviate the need for radiation and thus eliminate its long-term toxicities have recently been developed. The identification of novel targets in PMBCL and mediastinal grey zone lymphomas has paved the way for testing of agents such as small molecule inhibitors of Janus kinase pathways and immune checkpoint inhibitors. Future directions in these diseases should focus on combining effective novel agents with immunochemotherapy platforms.
abstract_id: PUBMED:15744341
Expression pattern of intracellular leukocyte-associated proteins in primary mediastinal B cell lymphoma. Two microarray studies of mediastinal B cell lymphoma have shown that this disease has a distinct gene expression profile, and also that this is closest to the pattern seen in classical Hodgkin's disease. We reported previously an immunohistologic study in which the loss of intracellular B cell-associated signaling molecules in Reed-Sternberg cells was demonstrated, and in this study we have investigated the expression of the same components in more than 60 mediastinal B cell lymphomas. We report that these signaling molecules are frequently present, and in particular that Syk, BLNK and PLC-gamma2 (absent from Reed-Sternberg cells) are present in the majority of mediastinal B cell lymphomas. The overall pattern of B cell signaling molecules in this disease is therefore closer to that of diffuse large B cell lymphoma than to Hodgkin's disease, and is consistent with a common cell of origin as an explanation of the similar gene expression profiles.
abstract_id: PUBMED:25499450
Primary mediastinal B-cell lymphoma and mediastinal gray zone lymphoma: do they require a unique therapeutic approach? Primary mediastinal B-cell lymphoma (PMBL) is a subtype of diffuse large B-cell lymphoma (DLBCL) that is putatively derived from a thymic B cell. Accounting for up to 10% of cases of DLBCL, this subtype predominantly affects women in the third and fourth decades of life. Its clinical and molecular characteristics are distinct from other subtypes of DLBCL and, in fact, closely resemble those of nodular sclerosing Hodgkin lymphoma (NSHL). Recently, mediastinal lymphomas with features intermediate between PMBL and NSHL, called mediastinal gray-zone lymphomas, have been described. The optimal management of PMBL is controversial, and most standard approaches include a combination of immunochemotherapy and mediastinal radiation. Recently, the recognition that mediastinal radiation is associated with significant long-term toxicities has led to the development of novel approaches for PMBL that have shown excellent efficacy and challenge the need for routine mediastinal radiation.
abstract_id: PUBMED:34429980
Concomitant occurrence of genetically distinct Hodgkin lymphoma and primary mediastinal lymphoma. Synchronous Hodgkin Lymphoma and Primary Mediastinal B-cell Lymphoma is possible, with molecular analyses proving the absence of clonal filiation between both entities. This suggests a common etiology but the existence of two divergent clones.
abstract_id: PUBMED:29222270
Primary mediastinal B-cell lymphoma: biology and evolving therapeutic strategies. Primary mediastinal B-cell lymphoma (PMBCL) is recognized as a distinct clinicopathologic entity that predominantly affects adolescents and young adults and is more common in female subjects. Although PMBCL is considered to be a subtype of diffuse large B-cell lymphoma, its clinical, morphologic, and biological characteristics overlap significantly with those of nodular sclerosing Hodgkin lymphoma (NSHL). Over the past few years, the shared biology of these 2 entities has been highlighted in several studies, and mediastinal gray zone lymphoma, with features intermediate between PMBCL and NSHL, has been recognized as a unique molecular entity. Although there is a lack of consensus about the optimal therapeutic strategy for adolescent and young adult patients newly diagnosed with PMCBL, highly curative strategies that obviate the need for mediastinal radiation are favored by most. Progress in understanding the biology of PMBCL and its close relationship to NSHL have helped pave the way for the investigation of novel approaches such as immune checkpoint inhibition. Other strategies such as adoptive T-cell therapy and targeting CD30 are also being studied.
abstract_id: PUBMED:19367254
Primary mediastinal large B-cell lymphoma. Primary mediastinal large B-cell lymphoma is a subtype of diffuse large B-cell lymphoma, which has distinct clinical and molecular features, many of which are similar to that of nodular sclerosing/classical Hodgkin lymphoma. Anthracycline-based chemotherapy forms the foundation for treatment of this lymphoma. This review will discuss controversial topics that warrant further study, such as the superiority of third generation regimens over CHOP-based regimens (cyclophosphamide, doxorubicin, vincristine, and prednisone), the use of involved field radiotherapy, and the assessment of clinical response by positron emission tomography scans.
abstract_id: PUBMED:23020783
Clinicopathological analysis of mediastinal large B-cell lymphoma and classical Hodgkin lymphoma of the mediastinum. Primary mediastinal (thymic) large B-cell lymphoma (PMLBCL) and nodular sclerosing classical Hodgkin lymphoma (NSCHL) are the major histological types of lymphoma affecting the mediastinum. We reviewed 27 patients with PMLBCL and 14 patients with NSCHL. A poor performance status, high serum lactate dehydrogenase level and strong positivity for PAX5 were all significantly more common in patients with PMLBCL than in those with NSCHL. Severe fibrosis was frequent in NSCHL, but not in PMLBCL. PDL1 was expressed by 11/25 PMLBCLs (44.0%) vs. 1/9 NSCHLs (11.1%). Expression of BCL6 was significantly more frequent in PDL1-positive PMLBCL than in PDL1-negative PMLBCL, but there were no clinical differences between these two groups. Two patients with PMLBCL with a poor prognosis had CD20(-), CD79a(+), CD15(-), and CD30(-), possibly representing a subtype of mediastinal gray zone lymphoma.
abstract_id: PUBMED:17575573
Primary mediastinal B-cell lymphoma. Primary mediastinal B-cell lymphoma (PMBCL) is a sub-type of the heterogeneous diffuse large B-cell lymphoma category, and comprises approximately 5% of all non-Hodgkin's lymphomas (NHL). It was first recognized as a distinct clinico-pathologic entity 20 years ago, and recent work has further characterized specific molecular features. Gene expression profiling has suggested a partial overlap with nodular sclerosing Hodgkin lymphoma (HL), with which it shares some clinical features. The optimal management remains a matter of debate. There is uncertainty as to whether weekly alternating chemotherapy regimens may be more effective than CHOP, whether consolidation radiotherapy (RT) to the mediastinum is always required, whether PET scanning can be used to determine this, and whether the use of rituximab as part of initial therapy will change the answers to these questions. The International Extranodal Lymphoma Study Group (IELSG) 26 clinicopathologic study of PMBCL, which has recently opened, represents a first attempt to gather data prospectively on some of these issues.
abstract_id: PUBMED:28910870
Primary mediastinal large B-cell lymphoma: a clinicopathologic study of 27 cases. Objective: To study the clinicopathologic characteristics and diagnostic criteria of primary mediastinal B-cell lymphoma (PMBL), and to distinguish PMBL from classic Hodgkin lymphoma (CHL) and systemic diffuse large B-cell lymphoma (DLBCL). Methods: The clinical features, histologic findings, results of immunohistochemical studies, and prognosis in 27 PMBL cases were analyzed, with a review of the literature. Results: The age of patients ranged from 19 to 82 years (median age 34 years). All cases were located in the mediastinum and were frequently accompanied by superior vena cava syndrome. Histologically, the tumor cells were pleomorphic and diffusely distributed. Clear cytoplasm and spindle tumor cells were seen in some cases. Varying amounts of sclerosing stroma with collagen deposition were seen. Immunohistochemical study showed that the tumor cells were positive for CD20 (100%, 27/27), CD30 (64.0%, 16/25), CD23 (77.3%, 17/22) and p63 (16/19). Clonal B-cell gene rearrangement was seen. Conclusions: PMBL is a subtype of diffuse large B-cell lymphoma with varied histomorphology. Immunohistochemistry can help to confirm the diagnosis, and the prognosis is better than that of diffuse large B-cell lymphoma, not otherwise specified.
Answer: The presence of PAS-positive diastase-resistant intracellular rods and spheres in all Hodgkin patients and in all patients with sclerosing mediastinal B-cell lymphomas, but not in other Non-Hodgkin lymphomas, suggests that these structures are compatible with intracellular bacteria. This finding raises the possibility that Hodgkin's disease and sclerosing mediastinal B-cell lymphomas may be human tumors related to bacteria, similar to the association between gastric MALT-lymphoma, gastric non-cardia adenocarcinoma, and bacterial infections (PUBMED:12362281). While this observation does not definitively establish a bacterial etiology, it does provide a potential avenue for further investigation into the role of bacteria in the pathogenesis of these lymphomas. |
Instruction: Can recovery of peripheral muscle function predict cognitive task performance in chronic fatigue syndrome with and without fibromyalgia?
Abstracts:
abstract_id: PUBMED:24363336
Can recovery of peripheral muscle function predict cognitive task performance in chronic fatigue syndrome with and without fibromyalgia? Background: Both good physical and cognitive functioning have a positive influence on the execution of activities of daily living. Patients with chronic fatigue syndrome (CFS) as well as patients with fibromyalgia have marked cognitive deficits. Furthermore, a good physical and functional health status may have a positive impact on a variety of cognitive skills, a link that has been observed in young and old individuals who are healthy, although evidence is limited in patients with CFS.
Objective: The purpose of this study was to examine whether recovery of upper limb muscle function could be a significant predictor of cognitive performance in patients with CFS and in patients with CFS and comorbid fibromyalgia. Furthermore, this study determined whether cognitive performance is different between these patient groups.
Design: A case-control design was used.
Methods: Seventy-eight participants were included in the study: 18 patients with CFS only (CFS group), 30 patients with CFS and comorbid fibromyalgia (CFS+FM group), and 30 individuals who were healthy and inactive (control group). Participants first completed 3 performance-based cognitive tests designed to assess selective and sustained attention, cognitive inhibition, and working memory capacity. Seven days later, they performed a fatiguing upper limb exercise test, with subsequent recovery measures.
Results: Recovery of upper limb muscle function was found to be a significant predictor of cognitive performance in patients with CFS. Participants in the CFS+FM group but not those in the CFS group showed significantly decreased cognitive performance compared with the control group.
Limitations: The cross-sectional nature of this study does not allow for inferences of causation.
Conclusions: The results suggest that better physical health status could predict better mental health in patients with CFS. Furthermore, they underline disease heterogeneity, suggesting that reducing this factor in future research is important to better understand and uncover mechanisms regarding the nature of diverse impairments in these patients.
abstract_id: PUBMED:24313704
Recovery of upper limb muscle function in chronic fatigue syndrome with and without fibromyalgia. Background: Chronic fatigue syndrome (CFS) patients frequently complain of muscle fatigue and abnormally slow recovery, especially of the upper limb muscles during and after activities of daily living. Furthermore, disease heterogeneity has not yet been studied in relation to recovery of muscle function in CFS. Here, we examine recovery of upper limb muscle function from a fatiguing exercise in CFS patients with (CFS+FM) and without (CFS-only) comorbid fibromyalgia and compare their results with a matched inactive control group.
Design: In this case-control study, 18 CFS-only patients, 30 CFS+FM patients and 30 healthy inactive controls performed a fatiguing upper limb exercise test with subsequent recovery measures.
Results: There was no significant difference among the three groups in maximal handgrip strength of the non-dominant hand. Significantly worse recovery of upper limb muscle function was found in the CFS+FM group, but not in the CFS-only group, compared with the controls (P < 0.05).
Conclusions: This study reveals, for the first time, delayed recovery of upper limb muscle function in CFS+FM, but not in CFS-only patients. The results underline that CFS is a heterogeneous disorder, suggesting that reducing this heterogeneity in future research is important to make progress towards a better understanding and uncovering of the mechanisms underlying the diverse impairments in these patients.
abstract_id: PUBMED:35980775
Cognitive Task Performance and Subjective Cognitive Symptoms in Individuals With Chronic Fatigue Syndrome or Fibromyalgia: A Cross-Sectional Analysis of the Lifelines Cohort Study. Objective: This study examined cognitive task performance and self-reported cognitive functioning in individuals with chronic fatigue syndrome (CFS) and fibromyalgia (FM) in a population-based sample and investigated the role of mood and anxiety disorders as well as severity of the physical symptoms.
Methods: This study was performed in 79,966 participants (mean [standard deviation] age = 52.9 [12.6] years, 59.2% women) from the Lifelines general population. Symptoms consistent with the diagnostic criteria for CFS and FM were assessed using questionnaires. Two comparison groups were used: participants with self-reported medical disorders with well-defined pathophysiology (i.e., multiple sclerosis and rheumatoid arthritis) and controls without these diseases. Objective task performance was assessed with the computerized CogState cognitive battery, and subjective cognitive symptoms were assessed with the concentration subscale of the Checklist Individual Strength.
Results: Cognitive task performance was poorer in individuals with CFS versus controls without disease and controls with a medical disorder, although the severity of cognitive dysfunction was mild. Participants meeting the criteria for CFS (n = 2461) or FM (n = 4295) reported more subjective cognitive symptoms compared with controls without a medical disorder (d = 1.53, 95% confidence interval [CI] = 1.49-1.57 for CFS; d = 1.25, 95% CI = 1.22-1.29 for FM) and participants with a medical disease (d = 0.62, 95% CI = 0.46-0.79 for CFS; d = 0.75, 95% CI = 0.70-0.80 for FM). These differences remained essentially the same when excluding participants with comorbid mood or anxiety disorders or adjusting for physical symptom severity.
Conclusions: Subjective cognitive symptoms and, to a lesser extent, suboptimal cognitive task performance are more prevalent in individuals with CFS or FM compared with controls without these conditions.
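The effect sizes quoted above (for example d = 1.53) are Cohen's d values, that is, differences in group means expressed in units of the pooled standard deviation. A minimal Python sketch of that calculation is given below; the two score arrays are invented for illustration and are not taken from the Lifelines data.

import numpy as np

def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled standard deviation."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical symptom scores for a patient group and a control group.
print(cohens_d([28, 30, 25, 27, 31], [18, 20, 22, 19, 21]))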
abstract_id: PUBMED:25308475
What is in a name? Comparing diagnostic criteria for chronic fatigue syndrome with or without fibromyalgia. The current study had two objectives: (1) to compare objective and self-report measures in patients with chronic fatigue syndrome (CFS) according to the 1994 Centers for Disease Control and Prevention (CDC) criteria, patients with multiple sclerosis (MS), and healthy controls, and (2) to contrast CFS patients who only fulfill the CDC criteria with those who also fulfill the criteria for myalgic encephalomyelitis (ME), the 2003 Canadian criteria for ME/CFS, or the comorbid diagnosis of fibromyalgia (FM). One hundred six participants (48 CFS patients diagnosed following the 1994 CDC criteria, 19 MS patients, and 39 healthy controls) completed questionnaires assessing symptom severity, quality of life, daily functioning, and psychological factors. Objective measures consisted of activity monitoring, evaluation of maximal voluntary contraction (MVC) and muscle recovery, and cognitive performance. CFS patients were screened to determine whether they also fulfilled the ME criteria, the Canadian criteria, and the diagnosis of FM. CFS patients scored higher on symptom severity, lower on quality of life, higher on depression and kinesiophobia, and worse on MVC, muscle recovery, and cognitive performance compared with the MS patients and the healthy subjects. Daily activity levels were also lower compared with healthy subjects. Only one difference was found between those fulfilling the ME criteria and those who did not, regarding the degree of kinesiophobia (lower in ME), while comorbidity for FM significantly increased the symptom burden. CFS patients report more severe symptoms and are more disabled compared with MS patients and healthy controls. Based on the present study, fulfillment of the ME or Canadian criteria did not seem to give a clinically different picture, whereas a diagnosis of comorbid FM selected symptomatically worse and more disabled patients.
abstract_id: PUBMED:26431138
Associations Between Cognitive Performance and Pain in Chronic Fatigue Syndrome: Comorbidity with Fibromyalgia Does Matter. Background: In addition to the frequently reported pain complaints, performance-based cognitive capabilities in patients with chronic fatigue syndrome (CFS) with and without comorbid fibromyalgia (FM) are significantly worse than those of healthy controls. In various chronic pain populations, cognitive impairments are known to be related to pain severity. However, to the best of our knowledge, the association between cognitive performance and experimental pain measurements has never been examined in CFS patients.
Objectives: This study aimed to examine the association between cognitive performance and self-reported as well as experimental pain measurements in CFS patients with and without FM.
Study Design: Observational study.
Setting: The present study took place at the Vrije Universiteit Brussel and the University of Antwerp.
Methods: Forty-eight (18 CFS-only and 30 CFS+FM) patients and 30 healthy controls were studied. Participants first completed 3 performance-based cognitive tests designed to assess selective and sustained attention, cognitive inhibition, and working memory capacity. Seven days later, experimental pain measurements (pressure pain thresholds [PPT], temporal summation [TS], and conditioned pain modulation [CPM]) took place and participants were asked to fill out 3 questionnaires to assess self-reported pain, fatigue, and depressive symptoms.
Results: In the CFS+FM group, the capacity of pain inhibition was significantly associated with cognitive inhibition. Self-reported pain was significantly associated with simple reaction time in CFS-only patients. The CFS+FM but not the CFS-only group showed a significantly lower PPT and enhanced TS compared with controls.
Limitations: The cross-sectional nature of this study does not allow for inferences of causation.
Conclusions: The results underline disease heterogeneity in CFS by indicating that a measure of endogenous pain inhibition might be a significant predictor of cognitive functioning in CFS patients with FM, while self-reported pain appears more appropriate to predict cognitive functioning in CFS patients without FM.
abstract_id: PUBMED:16177595
Exercise and cognitive performance in chronic fatigue syndrome. Purpose: To determine the effect of submaximal steady-state exercise on cognitive performance in patients with chronic fatigue syndrome (CFS) alone, CFS with comorbid fibromyalgia (CFS + FM), and sedentary healthy controls (CON).
Methods: Twenty CFS-only patients, 19 CFS + FM, and 26 CON completed a battery of cognitive tests designed to assess speed of information processing, variability, and efficiency. Tests were performed at baseline, immediately before, and twice following 25 min of either cycle ergometry set at 40% of peak oxygen capacity or quiet rest.
Results: There were no group differences in the average percentage of peak oxygen consumption during exercise (CFS = 45%; CFS + FM = 47%; Control = 43%; P = 0.2). There were no significant effects of acute exercise on cognitive performance for any group. At baseline, one-way ANOVA indicated that CFS patients displayed deficits in speed of processing, performance variability, and task efficiency during several cognitive tests compared with healthy controls. However, the CFS + FM patients were not different from controls. Repeated measures ANOVA indicated that across all tests (pre- and postexercise) CFS patients, but not CFS + FM patients, were significantly less consistent (F(2,59) = 3.7, P = 0.03) and less efficient (F(2,59) = 4.6, P = 0.01) than controls.
Conclusion: CFS patients without comorbid FM exhibit subtle cognitive deficits in terms of speed, consistency, and efficiency that are not improved or exacerbated by light exercise. Importantly, our data suggest that CFS + FM patients do not exhibit cognitive deficits either pre- or postexercise. These results highlight the importance of disease heterogeneity in studies determining acute exercise and cognitive function in CFS.
abstract_id: PUBMED:30159106
A Concurrent Cognitive Task Does Not Perturb Quiet Standing in Fibromyalgia and Chronic Fatigue Syndrome. Background And Objectives: Cognitive complaints are common in fibromyalgia (FM) and chronic fatigue syndrome (CFS). Fatigue as well as pain may require greater effort to perform cognitive tasks, thereby increasing the load on processing in the central nervous system and interfering with motor control.
Methods: The effect of a concurrent arithmetic cognitive task on postural control during quiet standing was investigated in 75 women (aged 19-49 years) and compared between FM, CFS, and matched controls (n=25/group). Quiet standing on a force plate was performed for 60 s/condition, with and without a concurrent cognitive task. The center-of-pressure data were decomposed into a slow component and a fast component, representing postural sway and adjusting ankle torque, respectively.
Results: Compared to controls, CFS and FM displayed lower frequency in the slow component (p < 0.001), and CFS displayed greater amplitude in the slow (p=0.038 and p=0.018) and fast (p=0.045) components. There were no interactions indicating different responses to the added cognitive task between any of the three groups.
Conclusion: Patients displayed insufficient postural control across both conditions, while the concurrent cognitive task did not perturb quiet standing. Fatigue but not pain correlated with postural control variables.
abstract_id: PUBMED:22802155
Peripheral and central mechanisms of fatigue in inflammatory and noninflammatory rheumatic diseases. Fatigue is a common symptom in a large number of medical and psychological disorders, including many rheumatologic illnesses. A frequent question for health care providers is related to whether reported fatigue is "in the mind" or "in the body"-that is, central or peripheral. If fatigue occurs at rest without any exertion, this suggests psychological or central origins. If patients relate their fatigue mostly to physical activities, including exercise, their symptoms can be considered peripheral. However, most syndromes of fatigue seem to depend on both peripheral and central mechanisms. Sometimes, muscle biopsy with histochemistry may be necessary for the appropriate tissue diagnosis, whereas serological tests generally provide little reliable information about the origin of muscle fatigue. Muscle function and peripheral fatigue can be quantified by contractile force and action potential measurements, whereas validated questionnaires are frequently used for assessment of mental fatigue. Fatigue is a hallmark of many rheumatologic conditions, including fibromyalgia, myalgic encephalitis/chronic fatigue syndrome, rheumatoid arthritis, systemic lupus, Sjogren's syndrome, and ankylosing spondylitis. Whereas many studies have focused on disease activity as a correlate to these patients' fatigue, it has become apparent that other factors, including negative affect and pain, are some of the most powerful predictors for fatigue. Conversely, sleep problems, including insomnia, seem to be less important for fatigue. There are several effective treatment strategies available for fatigued patients with rheumatologic disorders, including pharmacological and nonpharmacological therapies.
abstract_id: PUBMED:23182635
The relationship between muscle pain and fatigue. Pain and fatigue may occur together during sustained exhausting muscle contractions, particularly as the limit of endurance is approached, and both can restrict muscle performance. Patients with neuromuscular disorders may have chronic myofascial pain (e.g. fibromyalgia) or contraction-induced pain (e.g. in metabolic myopathies). In some patients these two types of pain may coexist and both may inhibit central motor drive during exercise. Little is known about the central motor adaptations that occur in patients with neuromuscular disorders and how the effects of pain are mediated. Transcranial magnetic brain stimulation has made it possible to investigate the changes in excitability of the central motor pathway during fatiguing muscle activity and have thrown light on the mechanisms of fatigue in normal subjects and individuals with chronic fatigue syndrome and multiple sclerosis, but there have been few studies in patients with neuromuscular disorders. Repetitive magnetic brain stimulation protocols can now be used to modulate the excitability of the motor system during exercise to delay the onset of peripheral fatigue, and to reduce chronic pain. The possible application of these techniques in patients with neuromuscular disorders warrants further investigation.
abstract_id: PUBMED:17588877
Cognitive functioning and aging in women. Deficits in cognitive function may impact one's ability to attend to stimuli, think clearly, reason, and remember. Impaired cognitive function is a common complaint among older women presenting for treatment in both mental health and medical care settings, and differential diagnosis of type and extent of cognitive impairment is important for appropriate treatment planning and prognosis. Although overall gender differences in prevalence of cognitive dysfunction are minimal, it is important when treating older women to take into account unique challenges they face in the aging process that impact the cause, type and extent of cognitive complaints with which they present in clinical settings. The current paper provides an overview to guide accurate diagnosis, particularly in women, of different types of cognitive impairment under the broad category of dementias, including Alzheimer's, Lewy Body Disease, Vascular Dementia, and due to general medical conditions such as coronary artery bypass surgery, head injury, menopause, hypothyroidism, breast cancer treatment, Fibromyalgia, and chronic fatigue. In addition, emotional factors such as depression in older female patients complicate differential diagnosis of cognitive impairment and must be addressed. Given the multiplicity of causes of cognitive difficulties for women across the life span, careful assessment is crucial; the current paper reviews assessment strategies to prepare an integrated, biopsychosocial strategy for identifying particular cognitive deficits and related psychological and medical problems. In addition, prognostic indicators and treatment planning are discussed to help the practitioner organize an empathic, reasoned and multifaceted treatment approach to maximize recovery, minimize deterioration, and manage symptoms for older women in the context of their social support system and living environment.
Answer: Yes, recovery of peripheral muscle function can predict cognitive task performance in chronic fatigue syndrome (CFS) with and without fibromyalgia (FM). A study found that recovery of upper limb muscle function was a significant predictor of cognitive performance in patients with CFS. Participants in the CFS+FM group, but not those in the CFS group alone, showed significantly decreased cognitive performance compared with the control group (PUBMED:24363336). This suggests that better physical health status could predict better mental health in patients with CFS. However, the study also underlines the disease heterogeneity, indicating that CFS is a heterogeneous disorder and that reducing this heterogeneity in future research is important to better understand and uncover mechanisms regarding the nature of diverse impairments in these patients (PUBMED:24363336; PUBMED:24313704).
Furthermore, another study revealed that there was delayed recovery of upper limb muscle function in CFS+FM, but not in CFS-only patients, compared with controls (PUBMED:24313704). This indicates that the presence of FM comorbidity in CFS patients may affect the relationship between muscle function recovery and cognitive performance.
Additionally, associations between cognitive performance and pain in CFS have been studied, and it was found that in CFS+FM patients, the capacity of pain inhibition was significantly associated with cognitive inhibition, while self-reported pain was significantly associated with simple reaction time in CFS-only patients (PUBMED:26431138). This further supports the idea that the comorbidity of FM in CFS patients can influence cognitive outcomes and that the recovery of muscle function and pain experiences are relevant factors in predicting cognitive task performance in these patient groups. |
Instruction: Is prostate-specific antigen velocity selective for clinically significant prostate cancer in screening?
Abstracts:
abstract_id: PUBMED:31357651
Accuracy of Tumour-Associated Circulating Endothelial Cells as a Screening Biomarker for Clinically Significant Prostate Cancer. Even though more than 350,000 men die from prostate cancer every year, broad-based screening for the disease remains a controversial topic. Guidelines demand that the only commonly accepted screening tool, prostate-specific antigen (PSA) testing, must be followed by prostate biopsy if results are elevated. Due to the procedure's low positive predictive value (PPV), however, over 80% of biopsies are performed on healthy men or men with clinically insignificant cancer, prompting calls for new ways of vetting equivocal PSA readings prior to the procedure. Responding to the challenge, the present study investigated the diagnostic potential of tumour-associated circulating endothelial cells (tCECs), which have previously been described as a novel, blood-based biomarker for clinically significant cancers. Specifically, the objective was to determine the diagnostic accuracy of a tCEC-based blood test to detect clinically significant prostate cancer (defined as Gleason score ≥ 3 + 4) in high-risk patients. Performed in a blinded, prospective, single-centre set-up, it compared a novel tCEC index test with transrectal ultrasound-guided biopsy as a reference on a total of 170 patients and found that a tCEC add-on test will almost double the PPV of a standalone PSA test (32% vs. 17%; p = 0.0012), while retaining a negative predictive value above 90%.
abstract_id: PUBMED:18353529
Is prostate-specific antigen velocity selective for clinically significant prostate cancer in screening? European Randomized Study of Screening for Prostate Cancer (Rotterdam). Background: The value of prostate-specific antigen velocity (PSAV) in screening for prostate cancer (PCa) and especially for clinically significant PCa is unclear.
Objective: To assess the value of PSAV in screening for PCa. Specifically, the role of PSAV in lowering the number of unnecessary biopsies and reducing the detection rate of indolent PCa was evaluated.
Design, Setting, And Participants: All men included in the study cohort were participants in the European Randomized Study of Screening for Prostate Cancer (ERSPC), Rotterdam section.
Intervention: During the first and second screening round, a PSA test was performed on 2217 men, and all underwent a biopsy during the second screening round 4 yr later.
Measurements: PSAV was calculated and biopsy outcome was classified as benign, possibly indolent PCa, or clinically significant PCa.
Results And Limitations: A total of 441 cases of PCa were detected, 333 were classified as clinically significant and 108 as possibly indolent. The use of PSAV cut-offs reduced the number of biopsies but led to important numbers of missed (indolent and significant) PCa. PSAV was predictive for PCa (OR: 1.28, p<0.001) and specifically for significant PCa (OR: 1.46, p<0.001) in univariate analyses. However, multivariate analyses using age, PSA, prostate volume, digital rectal examination and transrectal ultrasonography outcome, and previous biopsy (yes/no) showed that PSAV was not an independent predictor of PCa (OR: 1.01, p=0.91) or significant PCa (OR: 0.87, p=0.30).
Conclusions: The use of PSAV as a biopsy indicator would miss a large number of clinically significant PCa cases with increasing PSAV cut-offs. In this study, PSAV was not an independent predictor of a positive biopsy in general or significant PCa on biopsy. Therefore, PSAV does not improve the ERSPC screening algorithm.
abstract_id: PUBMED:32241692
Serum and urine biomarkers for detecting clinically significant prostate cancer. Since the "prostate-specific antigen (PSA) era," we have seen an increase in unnecessary biopsies, which has ultimately led to an overtreatment of low-risk cancers. Given the limitations of prostate-specific antigen and the invasive nature of prostate biopsy, several serum and urinary biomarkers have been developed. In this paper, we provide a comprehensive review of the available biomarkers for the detection of clinically significant prostate cancer, namely PHI, 4Kscore, PCA3, MiPS, SelectMDx, ExosomeDX. Current literature suggests that these biomarkers can improve detection of clinically significant prostate cancer, reducing overtreatment and making treatment strategies more cost-effective. Nevertheless, large prospective studies with head-to-head comparisons of the available biomarkers are necessary to fully assess the potential of incorporating biomarkers in routine clinical practice.
abstract_id: PUBMED:34453258
Prostate Cancer in Older Adults: Risk of Clinically Meaningful Disease, the Role of Screening and Special Considerations. Purpose Of Review: Prostate cancer is the second most common cancer in men in the USA and several studies suggest more aggressive disease in older patients. However, screening remains controversial, especially in the older patient population.
Recent Findings: Aggressive prostate cancers are more common in older men. Screening trial results are conflicting but data suggest an improvement in prostate cancer mortality and increased detection of metastatic disease with screening. When PSA is utilized with multiparametric MRI and biomarker assays, patients at significant risk of clinically meaningful prostate cancer can be appropriately selected for biopsy. A thoughtful and individualized approach is central when considering prostate cancer screening in older men. This approach includes life expectancy estimation, use of appropriate geriatric assessment tools, use of multiparametric MRI and biomarkers in addition to PSA, and most importantly shared decision-making with patients.
abstract_id: PUBMED:35243388
Improving the Early Detection of Clinically Significant Prostate Cancer in Men in the Challenging Prostate Imaging-Reporting and Data System 3 Category. Background: Prostate Imaging-Reporting and Data System (PI-RADS) category 3 is a challenging scenario for detection of clinically significant prostate cancer (csPCa) and some tools can improve the selection of appropriate candidates for prostate biopsy.
Objective: To assess the performance of the European Randomized Study of Screening for Prostate Cancer (ERSPC) magnetic resonance imaging (MRI) model, the new Proclarix test, and prostate-specific antigen density (PSAD) in selecting candidates for prostate biopsy among men in the PI-RADS 3 category.
Design Setting And Participants: We conducted a head-to-head prospective analysis of 567 men suspected of having PCa for whom guided and systematic biopsies were scheduled between January 2018 and March 2020 in a single academic institution. A PI-RADS v.2 category 3 lesion was identified in 169 men (29.8%).
Outcome Measurement And Statistical Analysis: csPCa, insignificant PCa (iPCa), and unnecessary biopsy rates were analysed. csPCa was defined as grade group ≥2. Receiver operating characteristic (ROC) curves, decision curve analysis curves, and clinical utility curves were plotted.
Results And Limitations: PCa was detected in 53/169 men (31.4%) with a PI-RADS 3 lesion, identified as csPCa in 25 (14.8%) and iPCa in 28 (16.6%). The area under the ROC curve for csPCa detection was 0.703 (95% confidence interval [CI] 0.621-0.768) for Proclarix, 0.657 (95% CI 0.547-0.766) for the ERSPC MRI model, and 0.612 (95% CI 0.497-0.727) for PSAD (p = 0.027). The threshold with the highest sensitivity was 10% for Proclarix, 1.5% for the ERSPC MRI model, and 0.07 ng/ml/cm3 for PSAD, which yielded sensitivity of 100%, 91%, and 84%, respectively. Some 21.3%, 26.2%, and 7.1% of biopsies would be avoided with Proclarix, PSAD, and the ERSPC MRI model, respectively. Proclarix showed a net benefit over PSAD and the ERSPC MRI model. Both Proclarix and PSAD reduced iPCa overdetection from 16.6% to 11.3%, while the ERSPC MRI model reduced iPCa overdetection to 15.4%.
Conclusions: Proclarix was more accurate in selecting appropriate candidates for prostate biopsy among men in the PI-RADS 3 category when compared to PSAD and the ERSPC MRI model. Proclarix detected 100% of csPCa cases and would reduce prostate biopsies by 21.3% and iPCa overdetection by 5.3%.
Patient Summary: We compared three methods and found that the Proclarix test can optimise the detection of clinically significant prostate cancer in men with a score of 3 on the Prostate Imaging-Reporting and Data System for magnetic resonance imaging scans.
abstract_id: PUBMED:35463344
Modified Prostate Health Index Density Significantly Improves Clinically Significant Prostate Cancer (csPCa) Detection. Background: Early screening of clinically significant prostate cancer (csPCa) may offer opportunities in revolutionizing the survival benefits of this lethal disease. We sought to introduce a modified prostate health index density (mPHI) model using imaging indicators and to compare its diagnostic performance for early detection of occult onset csPCa within the prostate-specific antigen (PSA) gray zone with that of PHI and PHID.
Methods And Participation: Between August 2020 and January 2022, a training cohort of 278 patients (total PSA 4.0-10.0 ng/ml) who were scheduled for a prostate biopsy were prospectively recruited. PHI and PHID were compared with mPHI (LDTRD×APD×TPV×PHI) for the diagnosis performance in identifying csPCa. Pathology outcomes from systematic prostate biopsies were considered the gold standard.
Results: This model was tested in a training cohort consisting of 73 csPCa, 14 non-clinically significant prostate cancer (non-csPCa), and 191 benign prostatic hyperplasia (BPH) samples. In the univariate analysis for the PSA gray zone cohort, for overall PCa, the AUC of mPHI (0.856) was higher than PHI (0.774) and PHID (0.835). For csPCa, the AUC of mPHI (0.859) also surpassed PHI (0.787) and PHID (0.825). For detection of csPCa, compared with the lower specificities of PHI and PHID, mPHI achieved the highest specificity (76.5%), sparing 60.0% of unnecessary biopsies at the cost of missing 11 cases of csPCa. The mPHI outperformed PHI and PHID for overall PCa detection. For csPCa, mPHI also showed better diagnostic performance, with a greater net benefit in decision curve analysis (DCA), than PHI or PHID.
Conclusions: We have developed a modified PHI density (mPHI) model that can sensitively distinguish early-stage csPCa patients within the PSA gray zone.
Clinical Trial Registration: ClinicalTrials.gov, NCT04251546.
abstract_id: PUBMED:37734979
Development and evaluation of the MiCheck® Prostate test for clinically significant prostate cancer. Background: There is a clinical need to identify patients with an elevated PSA who would benefit from prostate biopsy due to the presence of clinically significant prostate cancer (CSCaP). We have previously reported the development of the MiCheck® Test for clinically significant prostate cancer. Here, we report MiCheck's further development and incorporation of the Roche Cobas standard clinical chemistry analyzer.
Objectives: To further develop and adapt the MiCheck® Prostate test so it can be performed using a standard clinical chemistry analyzer and characterize its performance using the MiCheck-01 clinical trial sample set.
Design, Settings, And Participants: About 358 patient samples from the MiCheck-01 US clinical trial were used for the development of the MiCheck® Prostate test. These consisted of 46 controls, 137 non-CaP, 62 non-CSCaP, and 113 CSCaP.
Methods: Serum analyte concentrations for cellular growth factors were determined using custom-made Luminex-based R&D Systems multi-analyte kits. Analytes that can also be measured using standard chemistry analyzers were examined for their ability to contribute to an algorithm with high sensitivity for the detection of clinically significant prostate cancer. Samples were then re-measured using a Roche Cobas analyzer for development of the final algorithm.
Outcome Measurements And Statistical Analysis: Logistic regression modeling with Monte Carlo cross-validation was used to identify Human Epididymal Protein 4 (HE4) as an analyte able to significantly improve the algorithm specificity at 95% sensitivity. A final model was developed using analyte measurements from the Cobas analyzer.
Results: The MiCheck® logistic regression model was developed and consisted of PSA, %free PSA, DRE, and HE4. The model differentiated clinically significant cancer from no cancer or not-clinically significant cancer with AUC of 0.85, sensitivity of 95%, and specificity of 50%. Applying the MiCheck® test to all evaluable 358 patients from the MiCheck-01 study demonstrated that up to 50% of unnecessary biopsies could be avoided while delaying diagnosis of only 5.3% of Gleason Score (GS) ≥3+4 cancers, 1.8% of GS≥4+3 cancers and no cancers of GS 8 to 10.
Conclusions: The MiCheck® Prostate test identifies clinically significant prostate cancer with high sensitivity and negative predictive value (NPV). It can be performed in a clinical laboratory using a Roche Cobas clinical chemistry analyzer. The MiCheck® Prostate test could assist in reducing unnecessary prostate biopsies with a marginal number of patients experiencing a delayed diagnosis.
abstract_id: PUBMED:36388432
Molecular Biomarkers for the Detection of Clinically Significant Prostate Cancer: A Systematic Review and Meta-analysis. Context: Prostate cancer (PCa) is the second most common type of cancer in men. Individualized risk stratification is crucial to adjust decision-making. A variety of molecular biomarkers have been developed in order to identify patients at risk of clinically significant PCa (csPCa) defined by the most common PCa risk stratification systems.
Objective: The present study aims to examine the effectiveness (diagnostic accuracy) of blood or urine-based PCa biomarkers to identify patients at high risk of csPCa.
Evidence Acquisition: A systematic review of the literature was conducted. Medline and EMBASE were searched from inception to March 2021. Randomized or nonrandomized clinical trials, and cohort and case-control studies were eligible for inclusion. Risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. Pooled estimates of sensitivity, specificity, and area under the curve were obtained.
Evidence Synthesis: Sixty-five studies (N = 34 287) were included. Not all studies included prostate-specific antigen-selected patients. The pooled data showed that the Prostate Health Index (PHI), with any cutoff point between 15 and 30, had sensitivity of 0.95-1.00 and specificity of 0.14-0.33 for csPCa detection. The pooled estimates for SelectMDx test sensitivity and specificity were 0.84 and 0.49, respectively.
Conclusions: The PHI test has a high diagnostic accuracy rate for csPCa detection, and its incorporation in the diagnostic process could reduce unnecessary biopsies. However, there is a lack of evidence on patient-important outcomes and thus more research is needed.
Patient Summary: It has been possible to verify that the application of biomarkers could help detect prostate cancer (PCa) patients at higher risk of poor outcomes. The Prostate Health Index is able to identify 95-100 of every 100 patients with clinically significant PCa who take the test, while preventing unnecessary biopsies in 14-33% of men without PCa or with insignificant PCa.
abstract_id: PUBMED:29541458
Novel application of three-dimensional shear wave elastography in the detection of clinically significant prostate cancer. The present study evaluated three-dimensional shear wave elastography (3D SWE) in the detection of clinically significant prostate cancer. Clinically significant prostate cancer was defined by a minimum of one biopsy core with a Gleason score of 3+4 or 6 with a maximum cancer core length >4 mm. Patients with serum prostate-specific antigen levels of 4.0-20.0 ng/ml who were suspected of having prostate cancer from multi-parametric magnetic resonance imaging (mpMRI) were prospectively recruited. The 3D SWE was performed pre-biopsy, after which patients underwent MRI-transrectal ultrasound image-guided targeted biopsies for cancer-suspicious lesions and 12-core systematic biopsies. The pathological biopsy results were compared with the mpMRI and 3D SWE images. A total of 12 patients who were suspected of having significant cancer on mpMRI were included. The median pre-biopsy PSA value was 5.65 ng/ml. Of the 12 patients, 10 patients were diagnosed as having prostate cancer. In the targeted biopsy lesions, there was a significant difference in Young's modulus between the cancer-detected area (median 64.1 kPa, n=20) and undetected area (median 30.8 kPa, n=8; P<0.0001). On evaluation of receiver operating characteristics, a cut-off value of the Young's modulus of 41.0 kPa was used for the detection of clinically significant cancer, with which the sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of cancer detection were 58, 97, 86 and 87%, respectively. When combining this cut-off tissue elasticity value with Prostate Imaging Reporting and Data System (PI-RADS) scores, the sensitivity, specificity, positive predictive value and negative predictive value of cancer detection were improved to 70, 98, 91 and 92%, respectively. In the cancer-detected lesions, a significant correlation was identified between the tissue elasticity value of the lesions and Gleason score (r=0.898, P<0.0001). In conclusion, PI-RADS combined with measurement of Young's modulus by 3D SWE may improve the diagnosis of clinically significant prostate cancer.
abstract_id: PUBMED:36381166
Diagnosis of clinically significant prostate cancer after negative multiparametric magnetic resonance imaging. Introduction: The diagnostic pathway after a negative magnetic resonance imaging (nMRI) exam is not clearly defined. The aim of the present study is to define the risk of prostate adenocarcinoma (PCa) at the prostate biopsy after a negative multiparametric magnetic resonance imaging (mpMRI) exam.
Material And Methods: Patients with nMRI Prostate Imaging Reporting & Data System (PI-RADS) ≤2 and without a previous diagnosis of PCa were identified among all patients undergoing mpMRI in a single referral center between 01/2016-12/2019. Detailed data about prostate biopsy after nMRI were collected, including any PCa diagnosis and clinically significant PCa diagnosis. [Gleason score (GS) ≥7]. In addition to descriptive statistics, uni and multivariable logistic regression assessed the potential predictors of any PCa and clinically significant prostate cancer (csPCa) at the biopsy after a negative mpMRI.
Results: Of 410 patients with nMRI, 73 underwent saturation biopsy. Only prostate-specific antigen (PSA) levels were significantly higher in patients undergoing biopsy (5.2 ng/ml vs 6.4, p <0.001), while Prostate Cancer Research Foundation (SWOP - Stichting Wetenschappelijk Onderzoek Prostaatkanker) risk score and other variables did not differ. A total of 22 biopsies (30.1%) were positive for PCa, GS 6 was diagnosed in 14 patients, GS 7 in 3, GS 8 in 1 and GS 9-10 in 4. csPCa was found in 8 (11%) patients. No significant predictors of any PCa or csPCa were identified at multivariate regression analysis.
Conclusions: Despite the good negative predictive value of mpMRI in the diagnosis of prostate cancer, 11% of the patients had csPCa. Specific predictive models addressing this setting would be useful.
Answer: Prostate-specific antigen velocity (PSAV) has been evaluated as a potential tool in screening for clinically significant prostate cancer (PCa). However, the evidence suggests that PSAV may not be an effective independent predictor for clinically significant PCa. A study conducted within the European Randomized Study of Screening for Prostate Cancer (Rotterdam) found that while PSAV was predictive for PCa and specifically for significant PCa in univariate analyses, it was not an independent predictor of PCa or significant PCa in multivariate analyses that included other factors such as age, PSA, prostate volume, digital rectal examination, transrectal ultrasonography outcome, and previous biopsy. The study concluded that using PSAV as a biopsy indicator would miss a large number of clinically significant PCa cases with increasing PSAV cut-offs, and therefore, PSAV does not improve the screening algorithm (PUBMED:18353529).
Given these findings, PSAV alone does not appear to be selective for clinically significant prostate cancer in screening. Instead, other biomarkers and diagnostic tools are being explored to improve the detection of clinically significant PCa and reduce the number of unnecessary biopsies. These include the use of tumour-associated circulating endothelial cells (tCECs), serum and urine biomarkers, multiparametric MRI, and various molecular biomarkers (PUBMED:31357651; PUBMED:32241692; PUBMED:34453258; PUBMED:36388432). The goal is to enhance the accuracy of screening and diagnosis, thereby improving patient outcomes and making treatment strategies more cost-effective. |
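To make the statistical point behind that conclusion concrete, the following is a minimal, purely illustrative sketch on synthetic data (Python with numpy and statsmodels assumed available; none of the numbers reflect the ERSPC cohort). It shows how a marker such as PSA velocity can carry a significant odds ratio in a univariate logistic model simply because it tracks PSA, yet contribute nothing once PSA itself is adjusted for, which is the pattern reported in PUBMED:18353529.

```python
# Illustrative only: synthetic data, not the ERSPC cohort.
# Shows how a univariate odds ratio for PSA velocity can vanish
# once baseline PSA is included in a multivariable logistic model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
psa = rng.lognormal(mean=1.5, sigma=0.5, size=n)      # baseline PSA, ng/ml
psav = 0.3 * psa + rng.normal(0.0, 0.5, size=n)       # velocity correlated with PSA
# Biopsy outcome depends on PSA only; PSAV has no independent effect.
p_cancer = 1.0 / (1.0 + np.exp(-(-4.0 + 0.5 * psa)))
cancer = rng.binomial(1, p_cancer)

univariate = sm.Logit(cancer, sm.add_constant(psav)).fit(disp=False)
adjusted = sm.Logit(cancer, sm.add_constant(np.column_stack([psa, psav]))).fit(disp=False)

print("Univariate OR per unit PSAV:   %.2f" % np.exp(univariate.params[1]))
print("PSA-adjusted OR per unit PSAV: %.2f" % np.exp(adjusted.params[2]))
```

Run on this synthetic setup, the univariate odds ratio for PSAV is clearly above 1 while the PSA-adjusted odds ratio falls to roughly 1, mirroring the univariate-versus-multivariable discrepancy described above.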
Instruction: High-dose-rate interstitial brachytherapy in combination with androgen deprivation therapy for prostate cancer: are high-risk patients good candidates?
Abstracts:
abstract_id: PUBMED:34656435
125I Interstitial brachytherapy with or without androgen deprivation therapy among unfavorable-intermediate and high-risk prostate cancer. Purpose/objective(s): To determine if patients with unfavorable intermediate-risk (UIR), high-risk (HR), or very high-risk (VHR) prostate cancer (PCa) treated with 125I interstitial brachytherapy benefit from androgen deprivation therapy (ADT).
Materials/methods: We reviewed our institutional database of patients with UIR, HR, or VHR PCa, per 2018 NCCN risk classification, treated with definitive 125I interstitial brachytherapy with or without ADT from 1998-2017. Outcomes including biochemical failure (bF), distant metastases (DM), and overall survival (OS) were analyzed with the Kaplan-Meier method and Cox proportional hazards regression. PCa-specific mortality (PCSM) was analyzed with Fine-Gray competing-risk regression.
Results: Of 1033 patients, 262 (25%) received ADT and 771 (75%) did not. Median ADT duration was 6 months. By risk group, 764 (74%) patients were UIR, 219 (21%) HR, and 50 (5%) VHR. ADT was more frequently given to HR (50%) and VHR (56%) patients compared to UIR (16%; p<0.001), to older patients (p<0.001), corresponding with increasing PSA (p<0.001) and Grade Group (p<0.001). Median follow-up was 4.9 years (0.3-17.6 years). On multivariable analysis accounting for risk group, age, and year of treatment, ADT was not associated with bF, DM, PCSM, or OS (p≥0.05 each).
Conclusion: Among patients with UIR, HR, and VHR PCa, the addition of ADT to 125I interstitial brachytherapy was not associated with improved outcomes, and no subgroup demonstrated benefit. Our findings do not support the use of ADT in combination with 125I interstitial brachytherapy. Prospective studies are required to elucidate the role of ADT for patients with UIR, HR, and VHR PCa treated with prostate brachytherapy.
abstract_id: PUBMED:24838407
High-dose-rate interstitial brachytherapy in combination with androgen deprivation therapy for prostate cancer: are high-risk patients good candidates? Background And Purpose: To evaluate the effectiveness of high-dose-rate interstitial brachytherapy (HDR-ISBT) as the only form of radiotherapy for high-risk prostate cancer patients.
Patients And Methods: Between July 2003 and June 2008, we retrospectively evaluated the outcomes of 48 high-risk patients who had undergone HDR-ISBT at the National Hospital Organization Osaka National Hospital. Risk group classification was according to the criteria described in the National Comprehensive Cancer Network (NCCN) guidelines. Median follow-up was 73 months (range 12-109 months). Neoadjuvant androgen deprivation therapy (ADT) was administered to all 48 patients; 12 patients also received adjuvant ADT. Maximal androgen blockade was performed in 37 patients. Median total treatment duration was 8 months (range 3-45 months). The planned prescribed dose was 54 Gy in 9 fractions over 5 days for the first 13 patients and 49 Gy in 7 fractions over 4 days for 34 patients. Only one patient who was over 80 years old received 38 Gy in 4 fractions over 3 days. The clinical target volume (CTV) was calculated for the prostate gland and the medial side of the seminal vesicles. A 10-mm cranial margin was added to the CTV to create the planning target volume (PTV).
Results: The 5-year overall survival and biochemical control rates were 98 and 87 %, respectively. Grade 3 late genitourinary and gastrointestinal complications occurred in 2 patients (4 %) and 1 patient (2 %), respectively; grade 2 late genitourinary and gastrointestinal complications occurred in 5 patients (10 %) and 1 patient (2 %), respectively.
Conclusion: Even for high-risk patients, HDR-ISBT as the only form of radiotherapy combined with ADT achieved promising biochemical control results, with acceptable late genitourinary and gastrointestinal complication rates.
abstract_id: PUBMED:34763369
Impact of neoadjuvant androgen deprivation therapy on postimplant prostate D90 and prostate volume after low-dose-rate brachytherapy for localized prostate cancer. Objective: Higher quality of postimplant dosimetric evaluation is associated with higher biochemical recurrence-free survival rates after low-dose-rate brachytherapy for localized prostate cancer. Postimplant prostate D90 is a key dosimetric parameter showing the quality of low-dose-rate brachytherapy. In this study, to improve the quality of low-dose-rate brachytherapy for localized prostate cancer, we investigated pre-implant factors affecting the reduction of postimplant prostate D90.
Methods: A total of 441 patients underwent low-dose-rate brachytherapy monotherapy and 474 patients underwent low-dose-rate brachytherapy with external beam radiation therapy. Logistic regression analysis was carried out to identify predictive factors for postimplant D90 decline. The cut-off value of the D90 decline was set at 170 Gy and 130 Gy in the low-dose-rate brachytherapy monotherapy group and low-dose-rate brachytherapy with external beam radiation therapy group, respectively.
Results: On multivariate analysis, neoadjuvant androgen deprivation therapy was identified as an independent predictive factor for the decline of postimplant D90 in both the low-dose-rate brachytherapy monotherapy group (P < 0.001) and low-dose-rate brachytherapy with external beam radiation therapy group (P = 0.003). Prostate volume changes and computed tomography/transrectal ultrasound prostate volume ratio were significantly and negatively correlated with the postimplant D90. The prostate volume changes and computed tomography/transrectal ultrasound prostate volume ratio were significantly higher in patients with neoadjuvant androgen deprivation therapy than those without neoadjuvant androgen deprivation therapy (P < 0.001).
Conclusions: Neoadjuvant androgen deprivation therapy decreased postimplant D90 with substantial prostate gland swelling after low-dose-rate brachytherapy. When neoadjuvant androgen deprivation therapy is required to reduce prostate volume for patients with large prostate glands and offer adequate local control for patients with high-risk prostate cancer before low-dose-rate brachytherapy, intraoperative D90 adjustment might be necessary.
abstract_id: PUBMED:32633027
High-dose-rate brachytherapy and hypofractionated external beam radiotherapy combined with long-term androgen deprivation therapy for very high-risk prostate cancer. Objective: To estimate the outcomes of high-dose-rate brachytherapy combined with hypofractionated external beam radiotherapy in prostate cancer patients classified as very high risk by the National Comprehensive Cancer Network.
Methods: Between June 2009 and September 2015, 66 patients meeting the criteria for very high-risk disease received high-dose-rate brachytherapy (2 fractions of 9 Gy) as a boost of external beam radiotherapy (13 fractions of 3 Gy). Androgen deprivation therapy was administered for approximately 3 years. Biochemical failure was assessed using the Phoenix definition.
Results: The median follow-up period was 53 months from the completion of radiotherapy. The 5-year biochemical failure-free, distant metastasis-free, prostate cancer-specific and overall survival rates were 88.7, 89.2, 98.5 and 97.0%, respectively. The independent contribution of each component of the very high-risk criteria was assessed in multivariable models. Primary Gleason pattern 5 was associated with increased risks of biochemical failure (P = 0.017) and distant metastasis (P = 0.049), whereas clinical stage ≥T3b or >4 biopsy cores with Gleason score 8-10 had no significant impact on the two outcomes. Grade 3 genitourinary toxicities were observed in two (3.0%) patients, whereas no grade ≥3 gastrointestinal toxicities occurred.
Conclusions: The present study shows that this multimodal approach provides potentially excellent cancer control and acceptable associated morbidity for very high-risk disease. Patients with primary Gleason pattern 5 are at a higher risk of poor outcomes, indicating the need for more aggressive approaches in these cases.
abstract_id: PUBMED:27965117
High-dose-rate brachytherapy monotherapy without androgen deprivation therapy for intermediate-risk prostate cancer. Purpose: Outcomes using high-dose-rate (HDR) brachytherapy monotherapy (without androgen deprivation therapy or external beam radiation therapy) for National Comprehensive Cancer Network-defined intermediate-risk (IR) patients are limited. We report our long-term data using HDR monotherapy for this patient population.
Methods And Materials: One-hundred ninety IR prostate cancer patients were treated 1996-2013 with HDR monotherapy. Biochemical prostate-specific antigen (PSA) failure was per the Phoenix definition. Acute and late genitourinary and gastrointestinal toxicities were graded according to Common Toxicity Criteria of Adverse Events, version 4. Kaplan-Meier (KM) biochemical progression-free survival (BPFS), cause-specific survival, and overall survival rates were calculated. Univariate analyses were performed to determine relationships with BPFS. The median patient age was 66 years (43-90), and the median initial PSA was 7.4 ng/mL. The Gleason score was ≤6 in 26%, 3 + 4 in 62%, and 4 + 3 in 12%. The median treatment BED1.5 was 254 Gy; 83% of patients were treated with a dose of 7.25 Gy × six fractions delivered in two separate implants.
Results: With a median follow-up of 6.2 years, KM BPFS at 5/8 years was 97%/90%, cause-specific survival at 8 years was 100%, and overall survival at 5/8 years was 93%/88%. Late genitourinary toxicities were 36.3% Grade 1, 18.9% Grade 2, and 3.7% Grade 3. Late gastrointestinal toxicities were 6.3% Grade 1, 1.1% Grade 2, and no Grade ≥3. Of the patients with no sexual dysfunction before treatment, 68% maintained potency. Age, initial PSA, T stage, Gleason score, prostate volume, and percent positive cores did not correlate with BPFS. Stratifying by favorable vs. unfavorable IR groups did not affect BPFS.
Conclusions: HDR brachytherapy monotherapy represents a safe and highly effective treatment for IR prostate cancer patients with long-term follow-up.
abstract_id: PUBMED:24222312
High-dose-rate brachytherapy and hypofractionated external beam radiotherapy combined with long-term hormonal therapy for high-risk and very high-risk prostate cancer: outcomes after 5-year follow-up. The purpose of this study was to report the outcomes of high-dose-rate (HDR) brachytherapy and hypofractionated external beam radiotherapy (EBRT) combined with long-term androgen deprivation therapy (ADT) for National Comprehensive Cancer Network (NCCN) criteria-defined high-risk (HR) and very high-risk (VHR) prostate cancer. Data from 178 HR (n = 96, 54%) and VHR (n = 82, 46%) prostate cancer patients who underwent (192)Ir-HDR brachytherapy and hypofractionated EBRT with long-term ADT between 2003 and 2008 were retrospectively analyzed. The mean dose to 90% of the planning target volume was 6.3 Gy/fraction of HDR brachytherapy. After five fractions of HDR treatment, EBRT with 10 fractions of 3 Gy was administered. All patients initially underwent ≥ 6 months of neoadjuvant ADT, and adjuvant ADT was continued for 36 months after EBRT. The median follow-up was 61 months (range, 25-94 months) from the start of radiotherapy. The 5-year biochemical non-evidence of disease, freedom from clinical failure and overall survival rates were 90.6% (HR, 97.8%; VHR, 81.9%), 95.2% (HR, 97.7%; VHR, 92.1%), and 96.9% (HR, 100%; VHR, 93.3%), respectively. The highest Radiation Therapy Oncology Group-defined late genitourinary toxicities were Grade 2 in 7.3% of patients and Grade 3 in 9.6%. The highest late gastrointestinal toxicities were Grade 2 in 2.8% of patients and Grade 3 in 0%. Although the 5-year outcome of this tri-modality approach seems favorable, further follow-up is necessary to validate clinical and survival advantages of this intensive approach compared with the standard EBRT approach.
abstract_id: PUBMED:16969980
Interstitial low dose rate brachytherapy for prostate cancer--a focus on intermediate- and high-risk disease. Aims: To investigate the role of brachytherapy in intermediate- and high-risk prostate cancer. We report our results and a review of published studies.
Materials And Methods: Between March 1999 and April 2003, 300 patients were treated with low dose rate I-125 interstitial prostate brachytherapy and followed prospectively. The patients were stratified into low-, intermediate- and high-risk groups and received brachytherapy alone or in combination with external beam radiotherapy (EBRT) and/or neoadjuvant androgen deprivation (NAAD). One hundred and forty-six patients were classified as low risk, 111 as intermediate risk and 43 as high risk. Biochemical freedom from disease and prostate-specific antigen (PSA) nadirs were analysed for risk groups and for treatment received in each risk group.
Results: The median follow-up was 45 months (range 33-82 months) with a mean age of 63 years. Actuarial 5-year biochemical relapse-free survival for the low-risk group was 96%, 89% for the intermediate-risk group and 93% for the high-risk group. When stratified by treatment group, low-risk patients had a 5-year actuarial biochemical relapse-free survival of 94% for brachytherapy alone (n=77), 92% for NAAD and brachytherapy (n=66) and 100% for NAAD, EBRT and brachytherapy (n=3). In the intermediate-risk patients, biochemical relapse-free survival was 93% for brachytherapy alone (n=15), 94% for NAAD and brachytherapy (n=67), 75% for EBRT and brachytherapy (n=4) and 92% for NAAD, EBRT and brachytherapy (n=25). In the high-risk group, biochemical relapse-free survival was 100% for brachytherapy alone (n=2), 88% for NAAD and brachytherapy (n=7), 80% for EBRT and brachytherapy (n=5) and 96% for NAAD, EBRT and brachytherapy (n=29). Overall, 3- and 4-year PSA ≤ 0.5 ng/ml was achieved by 71 and 86%, respectively, and a 4-year PSA ≤ 0.2 ng/ml was achieved by 63%.
Conclusion: Although the role of combination treatment with pelvic EBRT and androgen therapy is not clear, our early results show that many patients with intermediate- and high-risk disease have excellent results with brachytherapy.
abstract_id: PUBMED:34814784
Design of the novel ThermoBrachy applicators enabling simultaneous interstitial hyperthermia and high dose rate brachytherapy. Objective: In High Dose Rate Brachytherapy for prostate cancer there is a need for a new way of increasing cancer cell kill in combination with a stable dose to the organs at risk. In this study, we propose a novel ThermoBrachy applicator that offers the unique ability to apply interstitial hyperthermia while simultaneously serving as an afterloading catheter for high dose rate brachytherapy for prostate cancer. This approach achieves a higher thermal enhancement ratio than in sequential application of radiation and hyperthermia and has the potential to decrease the overall treatment time.
Methods: The new applicator uses the principle of capacitively coupled electrodes. We performed a proof of concept experiment to demonstrate the feasibility of the proposed applicator. Moreover, we used electromagnetic and thermal simulations to evaluate the power needs and temperature homogeneity in different tissues. Furthermore, we investigated whether dynamic phase and amplitude adaptation can be used to improve longitudinal temperature control.
Results: Simulations demonstrate that the electrodes achieve good temperature homogeneity in a homogeneous phantom when following current applicator spacing guidelines. Furthermore, we demonstrate that dynamic phase and amplitude adaptation provides a great advancement for further adaptability of the heating pattern.
Conclusions: This newly designed ThermoBrachy applicator has the potential to revive interest in interstitial thermobrachytherapy, since the simultaneous application of radiation and hyperthermia enables maximum thermal enhancement at maximum efficiency for patient and organization.
abstract_id: PUBMED:29032014
High-intermediate prostate cancer treated with low-dose-rate brachytherapy with or without androgen deprivation therapy. Purpose: To describe outcomes of men with unfavorable (high-tier) intermediate risk prostate cancer (H-IR) treated with low-dose-rate (LDR) brachytherapy, with or without 6 months of androgen deprivation therapy (ADT).
Methods And Materials: Patients with H-IR prostate cancer, treated before 2012 with LDR brachytherapy without external radiation are included. Baseline tumor characteristics are described. Outcomes between groups receiving ADT are measured by Phoenix (nadir +2 ng/mL), and threshold 0.4 ng/mL biochemical relapse definitions (bNEDs), as well as clinical end points. Standard descriptive and actuarial statistics are used.
Results: Two hundred sixty men were eligible, 139 (53%) did not receive ADT and 121 (47%) did. Median follow-up was 5 years. Men treated with ADT had higher T stage and percent positive cores but lower pathologic grade group. bNED rates with and without ADT at 5 years are 86% and 85% (p = 0.52) with the Phoenix definition, and 83% and 78% (p = 0.13) with the threshold definition. Local recurrence or metastasis were rare in both groups (<5%, p = not significant). Death from prostate cancer only occurred in 4 patients, 2 in each group. Overall survival was 85% in those treated with ADT and 93% without at 8 years, p = 0.15.
Conclusions: The addition of 6 months of ADT to LDR brachytherapy for H-IR prostate cancer does not improve 5 year prostate specific antigen control, and we no longer routinely recommended it.
abstract_id: PUBMED:19398902
Excellent results from high dose rate brachytherapy and external beam for prostate cancer are not improved by androgen deprivation. Purpose: Prostate cancer patients treated with high dose rate brachytherapy and external beam radiation therapy were stratified by risk group for analysis to determine whether androgen deprivation therapy (ADT) affected outcome.
Methods: From 1991 through 1998, 411 patients were treated with 4 fractions of 5.5 to 6.0 Gy high dose rate brachytherapy and a total of 36.0 to 39.6 Gy external beam radiation therapy (dose escalation over time). The dataset was prospective. Administration of ADT was not randomized, but it was the primary study variable. During this period, ADT was administered across all risk groups for various indications. It did not necessarily reflect advanced disease or large prostate size. There were 200 patients in the "ADT Group" (20% low, 48% intermediate, and 32% high risk) and 211 in the "No ADT Group" (33% low, 44% intermediate, 23% high risk). The median follow-up was 6.4 years. Cases were grouped according to low, intermediate, and high risk groups to reduce the effects of unrecognized selection bias for or against the ADT group. The prostate specific antigen (PSA) nadir plus 2.0 ng/ml (nadir + 2) was used as the biochemical control end point. Local control, PSA progression-free survival, distant metastasis free survival, and cause-specific survival were compared.
Results: The 10 year PSA-PFS (nadir + 2) for all 411 patients was 81%. The results stratified by risk group were: low 92%, intermediate 87%, and high 63%. The low and intermediate risk groups were not statistically different from one another but they were both significantly better than the high risk group. ADT versus No ADT 10-year survival showed no significant differences for any outcome variable: PSA-PFS (83% vs. 81% ns), local control (97% vs. 99%), distant metastasis free survival (94% vs. 97%), and cause-specific survival (97% vs. 97%). A subset analysis of PSA-PFS (nadir + 2) stratified by risk group revealed no difference between the ADT and No ADT groups.
Conclusions: High dose rate brachytherapy and external beam radiation therapy resulted in high rates of local control, PSA progression-free survival, distant metastasis free survival, and cause-specific survival in all risk groups. Improved outcome from the use of androgen deprivation was not observed.
Answer: High-dose-rate interstitial brachytherapy (HDR-ISBT) combined with androgen deprivation therapy (ADT) has been evaluated for its effectiveness in treating high-risk prostate cancer patients. According to a study that retrospectively evaluated the outcomes of 48 high-risk patients who underwent HDR-ISBT at the National Hospital Organization Osaka National Hospital, even for high-risk patients, HDR-ISBT as the only form of radiotherapy combined with ADT achieved promising biochemical control results, with acceptable late genitourinary and gastrointestinal complication rates (PUBMED:24838407). This suggests that high-risk prostate cancer patients are indeed good candidates for HDR-ISBT in combination with ADT.
Furthermore, another study that focused on high-dose-rate brachytherapy combined with hypofractionated external beam radiotherapy (EBRT) and long-term ADT for very high-risk prostate cancer reported potentially excellent cancer control and acceptable morbidity with this multimodal approach. The study indicated that patients with primary Gleason pattern 5 are at a higher risk of poor outcomes, suggesting the need for more aggressive approaches in these cases (PUBMED:32633027).
Additionally, a study on high-dose-rate brachytherapy and hypofractionated EBRT combined with long-term hormonal therapy for high-risk and very high-risk prostate cancer showed favorable 5-year outcomes, although further follow-up was deemed necessary to validate the clinical and survival advantages of this intensive approach compared with standard EBRT approaches (PUBMED:24222312).
In contrast, a study on low-dose-rate brachytherapy for prostate cancer with a focus on intermediate- and high-risk disease reported excellent results with brachytherapy alone or in combination with EBRT and/or neoadjuvant androgen deprivation (NAAD), suggesting that many patients with intermediate- and high-risk disease can achieve good disease control with brachytherapy-based treatment (PUBMED:16969980).
Overall, the evidence suggests that high-risk prostate cancer patients are good candidates for HDR-ISBT in combination with ADT, with studies reporting promising control rates and acceptable complication rates. However, the specific benefits and risks should be considered on an individual basis, and more aggressive treatment may be warranted for patients with particularly high-risk features such as primary Gleason pattern 5. |
Instruction: Does drug treatment improve survival?
Abstracts:
abstract_id: PUBMED:27943425
Efficacy, safety and drug survival of conventional agents in pediatric psoriasis: A multicenter, cohort study. The data on long-term efficacy, safety and drug survival rates of conventional systemic therapeutics in pediatric psoriasis are lacking. The primary aim of this study is to investigate the efficacy, safety and drug survival rates of acitretin, methotrexate and cyclosporin in pediatric patients, as well as predictors of drug survival. This is a multicenter study including 289 pediatric cases treated with acitretin, methotrexate and cyclosporin in four academic referral centers. Efficacy, adverse events, reasons for discontinuation, 1-, 2- and 3-year drug survival rates, and determinants of drug survival were analyzed. A 75% or better reduction of Psoriasis Area and Severity Index score was obtained in 47.5%, 34.1% and 40% of the patients who were treated with acitretin, methotrexate and cyclosporin, respectively. One-year drug survival rates for acitretin, methotrexate and cyclosporin were 36.3%, 21.1% and 15.1%, respectively. The most significant determinant of drug survival, which diminished over time, was treatment response, whereas arthritis, body mass index and sex had no influence. Although all three medications are effective and relatively safe in children, drug survival rates are low due to safety concerns in this age group. Effective disease control through their rational use can be expected to improve survival rates.
abstract_id: PUBMED:28010883
Effectiveness and drug survival of TNF-inhibitors in the treatment of psoriatic arthritis: A prospective cohort study. Background And Objectives: Tumor necrosis factor (TNF)-inhibitors are used to treat psoriatic arthritis (PsA), but only a limited number of observational studies on this subject have been published thus far. The aim of this research was to analyze the effectiveness and drug survival of TNF-inhibitors in the treatment of PsA.
Methods: PsA patients identified from the National Register for Biologic Treatment in Finland (ROB-FIN) starting their first, second, or third TNF-inhibitor treatment between 2004 and 2014 were included. Effectiveness was measured using ACR and EULAR response criteria and modeled using ordinal logistic regression. Treatment persistence was analyzed using Kaplan-Meier survival analysis and Cox proportional hazards model.
Results: The study comprised 765 patients and 990 TNF-inhibitor treatment courses. EULAR moderate treatment responses at 6 months were achieved by 68% and 37% of the users of the first and the second or the third biologic, respectively. The probabilities of discontinuing the treatment within 12 and 24 months were 20% and 28%, respectively. Adjusted treatment responses to all TNF-inhibitors were similar; however, co-therapy with conventional synthetic disease-modifying anti-rheumatic drugs (csDMARDs) was not associated with better effectiveness. Adalimumab [hazard ratio (HR) = 0.62; 95% confidence interval (CI): 0.44-0.88] was superior to infliximab in drug survival while etanercept (HR = 0.77, 95% CI: 0.55-1.1) and golimumab (HR = 0.75, 95% CI: 0.46-1.2) did not differ from it. Co-medication with csDMARDs did not statistically improve drug survival.
Conclusion: All available TNF-inhibitors showed similar treatment responses with or without csDMARDs. Adalimumab was associated with better drug survival when compared to infliximab.
abstract_id: PUBMED:35028369
Drug survival of systemic immunosuppressive treatments for atopic dermatitis in a long-term pediatric cohort. Background: : Systemic immunosuppressive treatments are central in the treatment of severe atopic dermatitis (AD). Yet, comparative data are sparse on the performance of such immunosuppressive treatments in pediatric cohorts with severe AD.
Objective: : This study aimed to examine the drug survival of systemic immunosuppressive treatments in a cohort of children with severe AD.
Methods: : A retrospective pediatric cohort was identified using diagnosis and treatment codes registered in medical charts. In total, 135 cases were identified; of these, 36 were excluded. All information was obtained through examination of clinical records. Drug survival was analyzed with Kaplan-Meier plots, and a log-rank test was used to test for differences in drug survival.
Results: : First-line treatment was primarily methotrexate (MTX; n = 63) and azathioprine (AZA; n = 32). For MTX, the drug survival rates were 69%, 50%, and 18% after 1, 2, and 4 years, respectively, with a median drug survival time of 1.58 years. For AZA, these rates were 63%, 53%, and 21%, respectively, with a median drug survival time of 1.14 years. There was no significant difference in drug survival between the treatments. The main reason for discontinuation was adverse effects (MTX: 25%; AZA: 41%). Despite this, a majority of patients experienced a good effect at the moment of discontinuation or data-lock (MTX: 60%; AZA: 53%), and treatment effect assessed as improvement in sleep quality was highly significant (p = .001). Second-line treatments included MTX (n = 12), AZA (n = 7), and cyclosporine (n = 5). These showed a median drug survival time of 1.8, 0.2, and 0.885 years, respectively.
Conclusion: MTX and AZA were the dominant first-line treatments prescribed and were safe and equally valuable treatment options for severe childhood AD with similar drug survival outcomes. MTX was the most used second-line treatment.
abstract_id: PUBMED:32997300
Training general practitioners to improve evidence-based drug treatment of patients with heart failure: a cluster randomised controlled trial. Aims: To assess whether a single training session for general practitioners (GPs) improves the evidence-based drug treatment of heart failure (HF) patients, especially of those with HF with reduced ejection fraction (HFrEF).
Methods And Results: A cluster randomised controlled trial was performed for which patients with established HF were eligible. Primary care practices (PCPs) were randomised to care-as-usual or to the intervention group, in which GPs received a half-day training session on HF management. Changes in HF medication, health status, hospitalisation and survival were compared between the two groups. Fifteen PCPs with 200 HF patients were randomised to the intervention group and 15 PCPs with 198 HF patients to the control group. Mean age was 76.9 (SD 10.8) years; 52.5% were female. On average, the patients had been diagnosed with HF 3.0 (SD 3.0) years previously. In total, 204 had HFrEF and 194 had HF with preserved ejection fraction (HFpEF). In participants with HFrEF, the use of angiotensin-converting enzyme inhibitors/angiotensin receptor blockers decreased over 6 months in both groups, by 5.2% (95% confidence interval (CI) 2.0-10.0) and 5.6% (95% CI 2.8-13.4), respectively [baseline-corrected odds ratio (OR) 1.07 (95% CI 0.55-2.08)], while beta-blocker use increased in both groups by 5.2% (95% CI 2.0-10.0) and 1.1% (95% CI 0.2-6.3), respectively [baseline-corrected OR 0.82 (95% CI 0.42-1.61)]. There were no significant differences between the two groups in health status, hospitalisations or survival after 12-28 months, nor when HFrEF and HFpEF were analysed separately.
Conclusion: A half-day training session for GPs does not improve drug treatment of HF in patients with established HF.
abstract_id: PUBMED:36683590
First-time adverse drug reactions, survival analysis, and the share of adverse drug reactions in treatment discontinuation in real-world rheumatoid arthritis patients: a comparison of first-time treatment with adalimumab and etanercept. Background: This study aims to compare nature and frequency of adverse drug reactions (ADRs), time to first ADR, drug survival, and the share of ADRs in treatment discontinuation of first-time treatment with adalimumab (ADA) and etanercept (ETN) in real-world RA patients.
Research Design And Methods: Retrospective, single-center cohort study including naïve patients treated between January 2003 and April 2020. Time to first ADR and drug survival of first-time treatment were studied using Kaplan-Meier and Cox regression models up to 10 years, with 2- and 5-year post-hoc sensitivity analyses. The nature and frequencies of first-time ADRs and the causes of treatment discontinuation were assessed.
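One plausible way to implement the 2- and 5-year post-hoc sensitivity analyses described above is to administratively censor follow-up at each horizon before refitting the adjusted Cox model, as in the hypothetical sketch below; the file and column names (time_years, stopped, adalimumab, age, female, csdmard) are illustrative assumptions, not the study's actual variables.
    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.read_csv("ra_first_biologic.csv")  # hypothetical file: one row per patient

    def censor_at(data, horizon_years):
        """Administratively censor follow-up at a fixed horizon (e.g. 2 or 5 years)."""
        out = data.copy()
        over = out["time_years"] > horizon_years
        out.loc[over, "time_years"] = horizon_years  # truncate follow-up time
        out.loc[over, "stopped"] = 0                 # events beyond the horizon count as censored
        return out

    for horizon in (2, 5, 10):
        cox = CoxPHFitter()
        cox.fit(censor_at(df, horizon)[["time_years", "stopped", "adalimumab",
                                        "age", "female", "csdmard"]],
                duration_col="time_years", event_col="stopped")
        print(horizon, cox.hazard_ratios_["adalimumab"])  # adjusted HR for ADA vs ETN at each horizon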
Results: In total, 416 patients (ADA: 255, ETN: 161, 4865 patient-years) were included, of whom 92 (22.1%) experienced ADR(s) (ADA: 59, 23.1%; ETN: 33, 20.4%). Adjusted for age, gender and concomitant conventional DMARD use, ADA was more likely to be discontinued than ETN up to 2-, 5- and 10-year follow-up (adjusted HRs 1.63, 1.62 and 1.59; all p<0.001). ADRs were the second most common reason for treatment discontinuation (ADA 20.7%, ETN 21.4%).
Conclusions: Despite their seemingly different nature and frequencies, ADRs are the second most common reason for treatment discontinuation for both bDMARDs. Furthermore, 2-, 5-, and 10-year drug survival is longer for ETN compared to ADA.
abstract_id: PUBMED:24485061
Temporal trends in the survival of drug and alcohol abusers according to the primary drug of admission to treatment in Spain. Background: Mortality of alcohol and drug abusers is much higher than the general population. We aimed to characterize the role of the primary substance of abuse on the survival of patients admitted to treatment and to analyze changes in mortality over time.
Methods: Longitudinal study analyzing demographic, drug use, and biological data of 5023 patients admitted to three hospital-based treatment units in Barcelona, Spain, between 1985 and 2006. Vital status and causes of death were ascertained from clinical charts and the mortality register. Piecewise regression models were used to analyze changes in mortality.
Results: The primary substances of dependence were heroin, cocaine, and alcohol in 3388 (67.5%), 945 (18.8%), and 690 patients (13.7%), respectively. The median follow-up after admission to treatment was 11.6 years (IQR: 6.6-16.1), 6.5 years (IQR: 3.9-10.6), and 4.8 years (IQR: 3.1-7.8) for the heroin-, cocaine-, and alcohol-dependent patients, respectively. For heroin-dependent patients, the mortality rate decreased from 7.3 per 100 person-years (p-y) in 1985 to 1.8 per 100 p-y in 2008. For cocaine-dependent patients, the mortality rate decreased from 10.7 per 100 p-y in 1985 to below 2.5 per 100 p-y after 2004. The annual average decrease was 2% for alcohol-dependent patients, with the lowest mortality rate (3.3 per 100 p-y) in 2008.
Conclusions: Significant reductions in mortality of alcohol and drug dependent patients are observed in recent years in Spain. Preventive interventions, treatment of substance dependence and antiretroviral therapy may have contributed to improve survival in this population.
abstract_id: PUBMED:29199254
Drug Repositioning Research Utilizing a Large-scale Medical Claims Database to Improve Survival Rates after Cardiopulmonary Arrest. Approximately 100,000 people suffer cardiopulmonary arrest in Japan every year, and the aging of society means that this number is expected to increase. Worldwide, approximately 100 million develop cardiac arrest annually, making it an international issue. Although survival has improved thanks to advances in cardiopulmonary resuscitation, there is a high rate of postresuscitation encephalopathy after the return of spontaneous circulation, and the proportion of patients who can return to normal life is extremely low. Treatment for postresuscitation encephalopathy is long term, and if sequelae persist, nursing care is required, causing immeasurable economic burdens as a result of ballooning medical costs. As at present there is no drug treatment to improve postresuscitation encephalopathy as a complication of cardiopulmonary arrest, the development of novel drug treatments is desirable. In recent years, new efficacy for existing drugs used in the clinical setting has been discovered, and drug repositioning has been proposed as a strategy for developing those drugs as therapeutic agents for different diseases. This review describes a large-scale database study carried out following a discovery strategy for drug repositioning with the objective of improving survival rates after cardiopulmonary arrest and discusses future repositioning prospects.
abstract_id: PUBMED:25809764
Therapeutic drug monitoring: how to improve drug dosage and patient safety in tuberculosis treatment. In this article we describe the key role of tuberculosis (TB) treatment, the challenges (mainly the emergence of drug resistance), and the opportunities represented by the correct approach to drug dosage, based on the existing control and elimination strategies. In this context, the role and contribution of therapeutic drug monitoring (TDM) is discussed in detail. Treatment success in multidrug-resistant (MDR) TB cases is low (62%, with 7% failing or relapsing and 9% dying) and in extensively drug-resistant (XDR) TB cases is even lower (40%, with 22% failing or relapsing and 15% dying). The treatment of drug-resistant TB is also more expensive (exceeding €50,000 for MDR-TB and €160,000 for XDR-TB) and more toxic if compared to that prescribed for drug-susceptible TB. Appropriate dosing of first- and second-line anti-TB drugs can improve the patient's prognosis and lower treatment costs. TDM is based on the measurement of drug concentrations in blood samples collected at appropriate times and subsequent dose adjustment according to the target concentration. The 'dried blood spot' technique offers additional advantages, providing the rationale for discussions regarding a possible future network of selected, quality-controlled reference laboratories for the processing of dried blood spots of difficult-to-treat patients from reference TB clinics around the world.
abstract_id: PUBMED:33143166
Ustekinumab Drug Survival in Patients with Psoriasis: A Retrospective Study of Real Clinical Practice. Background and objectives: The efficacy and safety of ustekinumab have been proved in clinical trials. In daily clinical practice, knowing the factors that determine survival differences of biological drugs allows psoriasis treatment to be optimized as a function of patient characteristics. The main objectives of this work are to understand ustekinumab drug survival in patients diagnosed with plaque psoriasis in the Hospital Universitario Central de Asturias (HUCA) Dermatology Department, and to identify the predictors of drug discontinuation. Materials and Methods: A retrospective hospital-based study was performed, including data from 148 patients who received ustekinumab (Stelara®) between 1 February 2009 and 30 November 2019. Survival curves were approximated through the Kaplan-Meier estimator and compared using the log-rank test. Cox proportional hazards regression models were used for multivariate analyses, while both unadjusted and adjusted hazard ratios (HR) were used to summarize the studied differences. Results: The average duration of treatment before discontinuation was 47.57 months (SD 32.63 months; median 41 months). The retention rates were 82% (2 years), 66% (5 years), and 58% (8 years). Median survival was 80 months (95% confidence interval [CI] 36.9 to 123.01 months). The survival analysis revealed statistically significant differences for patients with arthritis (log-rank test, p < 0.001) and for those who had previously received biological treatment (log-rank test, p = 0.026). At five years, the proportion of patients still under treatment was 80% among those without arthritis and 54% among those with arthritis. In the multivariate analysis, only the patients with arthritis had a lower rate of drug survival. No statistically significant differences were observed for any of the other comorbidities studied. The first and second most frequent causes of discontinuation were secondary failure and arthritis inefficacy, respectively. Conclusion: Ustekinumab is a biological drug conferring high drug survival in plaque psoriasis patients. Ustekinumab survival is lower in patients with arthritis.
abstract_id: PUBMED:31696543
Drug survival of biologic agents for psoriatic patients in a real-world setting in Japan. This is a Japanese retrospective single-center study carried out between 1 January 2010 and 21 November 2018 at the Department of Dermatology in Jichi Medical University Hospital. The drug survival rate for six biologic agents used for the treatment of psoriasis was investigated. We reviewed the clinical records of 315 treatment series of 205 patients with moderate to severe psoriasis treated with adalimumab (103 cases), infliximab (70 cases), ustekinumab (66 cases), secukinumab (38 cases), brodalumab (12 cases) and ixekizumab (26 cases). In our study, ustekinumab showed a trend towards higher drug survival among the six biologic agents. Ustekinumab had a higher drug survival rate than infliximab and secukinumab, with significant differences by log-rank test among all patients and among biologic-naive (bio-naive) patients. There was no significant difference in drug survival between bio-naive and biologic-experienced (non-naive) patients in the treatment courses with adalimumab, infliximab, ustekinumab, secukinumab and ixekizumab. Dose augmentation therapy in infliximab-treated patients was associated with longer drug survival. Among all patients, 25 cases with generalized pustular psoriasis (GPP) were included; GPP was a negative predictor of drug persistence, with a hazard ratio of 1.87 (95% confidence interval, 1.12-3.11; P = 0.016). Our results reveal that ustekinumab had superior drug survival, which is consistent with previous studies. Further studies are needed to clarify the efficacy of biologic agents in patients with GPP.
Answer: The abstracts provided discuss the efficacy, safety, and drug survival rates of various treatments for different conditions, including psoriasis, psoriatic arthritis, atopic dermatitis, heart failure, rheumatoid arthritis, substance abuse, cardiopulmonary arrest, tuberculosis, and in the context of drug repositioning. While the term "drug survival" often refers to the duration a drug is continued before being discontinued due to lack of efficacy, side effects, or other reasons, it does not directly equate to patient survival or mortality outcomes.
However, several abstracts do suggest that drug treatments can improve outcomes for patients. For instance, in the context of psoriatic arthritis, TNF-inhibitors showed effectiveness in treatment, and adalimumab was associated with better drug survival compared to infliximab (PUBMED:28010883). In pediatric cohorts with severe atopic dermatitis, methotrexate (MTX) and azathioprine (AZA) were found to be safe and valuable treatment options with similar drug survival outcomes (PUBMED:35028369). For rheumatoid arthritis patients, drug survival was longer for etanercept compared to adalimumab, and adverse drug reactions were a significant reason for treatment discontinuation (PUBMED:36683590).
In the case of tuberculosis, appropriate dosing of anti-TB drugs can improve the patient's prognosis and lower treatment costs (PUBMED:25809764). For patients with psoriasis, ustekinumab was found to confer high drug survival, particularly in patients without arthritis (PUBMED:33143166). Additionally, in Japan, ustekinumab showed a trend towards higher drug survival among six biologic agents used for the treatment of psoriasis (PUBMED:31696543).
While these findings indicate that drug treatments can be effective and improve patient outcomes, the abstracts also highlight challenges such as the emergence of drug resistance, the need for therapeutic drug monitoring, and the importance of optimizing treatment based on patient characteristics. It is important to note that the improvement in survival rates mentioned in some abstracts may refer to drug survival rather than patient survival, and the two should not be conflated without specific evidence linking drug treatment to increased patient survival. |
Instruction: Octreotide bolus injection and azygos blood flow in patients with cirrhosis: is the effect really predictable?
Abstracts:
abstract_id: PUBMED:19996986
Octreotide bolus injection and azygos blood flow in patients with cirrhosis: is the effect really predictable? Background: Octreotide (OCT) improves the management of variceal bleeding, but the pattern of administration is not clearly defined. Available data show a transient decrease in portal pressure and azygos blood flow (AzBF) after OCT bolus injection with desensitization at readministration.
Aim: To explore the sustained hemodynamic effects of OCT and changes associated with readministration at 60 minutes on AzBF in patients with portal hypertension.
Patients And Methods: AzBF was measured invasively (thermodilution technique) in 12 patients at baseline and at 10 minutes intervals after OCT 50-μg IV bolus for a total of 60 minutes. Readministration of OCT was followed by AzBF measurement for another 15 minutes. Patients [age 51.4 y (30 to 69)] had cirrhosis (alcoholic in 9 patients; Pugh's score 8.8±0.3), portal hypertension (HVPG 19±1 mm Hg), and elevated AzBF (658±138 mL/min).
Results: The bolus of OCT was followed at 10 minutes by a 34% decline in AzBF compared with the baseline value. This reduction in AzBF was sustained over the 60-minute study period (-36%±1.4%), with values remaining decreased compared with baseline (P<0.01). Mean arterial pressure remained stable. At 60 minutes, the repeat OCT bolus induced a further significant (P<0.01) decline in AzBF, although the response was blunted (-18%±1.2%).
Conclusion: AzBF showed a sustained decrease after a bolus injection of 50-μg OCT. A further hemodynamic response is detectable on OCT readministration after 60 minutes. The pattern of hemodynamic response to OCT may not be uniform among cirrhotics.
abstract_id: PUBMED:11427837
Validation of color Doppler EUS for azygos blood flow measurement in patients with cirrhosis: application to the acute hemodynamic effects of somatostatin, octreotide, or placebo. Background: Color Doppler EUS (CD-EUS) allows minimally invasive measurement of azygos blood flow (AzBF) in portal hypertension, but further validation of the method is needed. Because a limited number of patients has been studied, the acute hemodynamic effects of somatostatin and octreotide on AzBF and gastric mucosal perfusion are poorly defined in portal hypertension.
Methods: A double-blind hemodynamic study was designed to assess rapid changes in AzBF over a 60-minute period after intravenous administration of somatostatin, octreotide, and placebo in 30 stable patients with biopsy-proven cirrhosis. AzBF was measured by using both CD-EUS and the invasive thermal dilution technique in the first 10 patients (phase 1). Then, with CD-EUS alone, the hemodynamic study was extended to a further 20 patients (phase 2). In addition, gastric mucosal perfusion changes were assessed by using laser Doppler flowmetry at endoscopy.
Results: In phase 1, the 2 methods for AzBF measurement showed significant correlations both for baseline values (r = 0.685) and for AzBF changes over 60 minutes after drug administration (r = 0.733). In phase 2, a reduction was observed in AzBF 10 minutes after octreotide or somatostatin administration (-47% and -23%, p < 0.0001 vs. placebo, p = 0.058 vs. placebo, respectively). After 60 minutes of somatostatin infusion, AzBF increased 27% over placebo values (p < 0.04). Gastric mucosal perfusion was transiently reduced 5 minutes after octreotide or somatostatin (-21% and -32%, respectively, p < 0.02 vs. placebo).
Conclusions: This is the first study to validate CD-EUS AzBF measurement with reference to the invasive thermodilution technique in cirrhosis. It confirmed the transient effects of somatostatin and octreotide on both AzBF and gastric mucosal perfusion. In addition, a significant rebound phenomenon after 60 minutes of continuous intravenous somatostatin infusion was observed.
abstract_id: PUBMED:9823558
Measurement of collateral circulation blood flow in anesthetized portal hypertensive rats. Aims: The aim of this study was to develop a technique to measure collateral blood flow in portal hypertensive rats.
Methods: Morphological techniques included inspection, casts and angiographies of portosystemic shunts. The main hemodynamic measurements were splenorenal shunt blood flow (transit time ultrasound method), percentage of portosystemic shunts and regional blood flows (microsphere method). In study 1, a model of esophageal varices was developed by ligating the splenorenal shunt. In study 2, morphological studies of the splenorenal shunt were performed in rats with portal vein ligation. In study 3, the relationship between splenorenal shunt blood flow with percentage of portosystemic shunts was evaluated in dimethylnitrosamine cirrhosis. In study 4, secondary biliary, CCl4 and dimethylnitrosamine cirrhosis were compared. In study 5, rats with portal vein ligation received acute administration of octreotide. In study 6, rats with dimethylnitrosamine cirrhosis received acute administration of vapreotide.
Results: Blood flow of para-esophageal varices could not be measured. SRS blood flow was correlated with the mesenteric percentage of portosystemic shunts (r = 0.74, P < 0.05), splenic percentage of portosystemic shunts (r = 0.54, P < 0.05) and estimated portosystemic blood flow (r = 0.91, P < 0.01). Splenorenal shunt blood flow was 6 to 12 times higher in portal hypertensive rats, e.g., in portal vein ligated rats: 2.8 +/- 2.7 vs 0.3 +/- 0.1 mL/min in sham rats (P < 0.01), and was similar in the different cirrhosis models but was higher in portal vein ligated rats than in cirrhotic rats (1.2 +/- 0.7 vs 0.6 +/- 0.6 mL/min/100 g, P = 0.05). Octreotide significantly decreased splenorenal shunt blood flow: -23 +/- 20% (P < 0.01) vs -6 +/- 8% (not significant) in placebo rats. The variation of splenorenal shunt blood flow after vapreotide was significant but not that of the splenic percentage of portosystemic shunts compared to placebo.
Conclusions: The splenorenal shunt is the main portosystemic shunt in rats. The measurement of splenorenal shunt blood flow is easy, accurate and reproducible and should replace the traditional measurement of the percentage of portosystemic shunts in pharmacological studies.
abstract_id: PUBMED:9794911
Splenorenal shunt blood flow by transit-time ultrasound as an index of collateral circulation in portal hypertensive rats. The aim of this study was to develop a technique that could serve as an index of portosystemic shunt (PSS) blood flow in portal hypertensive rats whose main shunt is the splenorenal shunt (SRS). The main hemodynamic measurements performed were: SRS blood flow by the transit-time ultrasound (TTU) method, percentage of PSS, and regional blood flows by the microsphere method. We determined the accuracy and reproducibility of SRS blood flow measurements under baseline and pharmacological (octreotide) conditions. SRS blood flow was compared with other hemodynamic characteristics. Two models of portal hypertension were used: secondary biliary and dimethylnitrosamine cirrhosis. The SRS blood flow was correlated with mesenteric (r = .76; P < .001) and splenic (r = .67; P < .01) PSS percentages. The intra- and interobserver agreements for SRS blood flow were excellent: ric = .99 and ric = .98, respectively. SRS blood flow was six times higher in portal hypertensive rats (0.6 +/- 0.7 mL/min/100 g) than in sham rats (0.1 +/- 0.1 mL/min/100 g [P < .01]). Octreotide significantly decreased SRS blood flow but not mesenteric or splenic PSS percentages. SRS is the main PSS in rats. The measurement of SRS blood flow by TTU is accurate and reproducible. This method can be used to identify new mechanisms in hemodynamic studies that differ from those identified by the measurement of the percentage of PSS by the microsphere method, especially in pharmacological studies.
abstract_id: PUBMED:15335394
Review article: a critical comparison of drug therapies in currently used therapeutic strategies for variceal haemorrhage. Vasoactive drugs are safe and easy to administer, and universal treatment is the first-line approach for all patients with suspected variceal bleeding. There are strong arguments that the combination of vasoactive drugs, started as soon as possible, and endotherapy later on is the best therapeutic option, particularly in cases of ongoing bleeding at the time of endoscopy. The main action of vasoactive drugs is to reduce variceal pressure. This can be achieved by diminishing the variceal blood flow and/or by increasing resistance to variceal blood flow inside the varices. Changes in variceal pressure parallel changes in portal pressure. Drugs for the treatment of variceal bleeding can therefore be assessed by measuring the changes in portal pressure, azygos blood flow and variceal pressure. Vasoactive drugs can be divided into two categories: terlipressin (Glypressin), and somatostatin and its analogues, especially octreotide. Terlipressin significantly reduces portal and variceal pressure and azygos flow, is superior to placebo in the control of variceal haemorrhage and improves mortality. It is beneficial when combined with sclerotherapy. It also has the advantage that it might preserve renal function, one of the most important factors affecting the outcome of cirrhosis. As such, terlipressin is the most potent of the various vasoactive drugs. Somatostatin significantly reduces portal and variceal pressure and azygos flow, is superior to placebo in controlling variceal haemorrhage, and improves the success of sclerotherapy. The effect of octreotide is well established for preventing the increase in portal pressure after a meal (similar to blood in the intestines), though the effect of octreotide on variceal pressure is controversial.
abstract_id: PUBMED:7836713
Effects of octreotide on postprandial systemic and hepatic hemodynamics in patients with postnecrotic cirrhosis. The effects of octreotide on postprandial hemodynamic responses were evaluated in 20 patients with postnecrotic cirrhosis. They were randomly assigned to receive either a 100-micrograms bolus with a 100-micrograms/h infusion of octreotide or a placebo. Placebo administration did not affect any of the hemodynamic values. However, after a liquid meal of 500 kcal, postprandial increases in the hepatic venous pressure gradient and hepatic blood flow were observed in patients receiving placebo, while the systemic hemodynamic values remained unchanged. In contrast, in patients receiving octreotide, the hepatic blood flow was significantly decreased 30 min after administration, while the hepatic venous pressure gradient and the systemic hemodynamic values were not affected. After ingestion of a meal, the mean values of the hepatic blood flows were not significantly different from basal values. Moreover, the wedged hepatic venous pressure, the hepatic venous pressure gradient and the systemic hemodynamic values were not affected by meal ingestion. However, during octreotide infusion, hepatic blood flow 30 min after the meal had a tendency to increase compared to before the meal. In conclusion, octreotide inhibited the postprandial increase in portal pressure in patients with postnecrotic cirrhosis. In addition, octreotide decreased hepatic blood flow in the fasting state. When given before a meal, the increase in blood flow induced by the meal restored the hepatic blood flow to basal levels.
abstract_id: PUBMED:10735613
Spleno-renal shunt blood flow is an accurate index of collateral circulation in different models of portal hypertension and after pharmacological changes in rats. Background/aims: Recently, we developed a new method to measure collateral blood flow in rats: splenorenal shunt (SRS) blood flow (BF). The aims were to evaluate the reproducibility of SRSBF measurement in different models of portal hypertension, and to investigate the ability of SRSBF to disclose pharmacological changes.
Methods: Hemodynamics were determined in anesthetized rats with secondary biliary, CCl4 or DMNA cirrhosis and portal vein ligation (PVL) under baseline and pharmacological (octreotide, vapreotide) conditions. The main measurements performed were: SRSBF by the transit time ultrasound (TTU) method and % portosystemic shunts (PSS) by the microsphere method.
Results: SRSBF was 6 to 10 times higher in portal hypertensive rats and was similar in the different models of cirrhosis but was higher in portal vein ligated rats than in cirrhotic rats (1.1+/-0.7 vs 0.6+/-0.7 mL/min/100 g, p=0.01). SRSBF was correlated with mesenteric %PSS (r=0.61, p<0.01), splenic %PSS (r=0.54, p<0.05), portal pressure (r=0.32, p<0.05) and the area of liver fibrosis (r=0.33, p<0.05). Octreotide significantly decreased SRSBF (-23+/-20%, p<0.01 vs placebo: -6+/-8%, NS). Vapreotide significantly decreased SRSBF but not mesenteric or splenic %PSS compared to placebo. The variations in SRSBF (-26+/-32%) and in splenic %PSS (0+/-15%) with vapreotide were significantly different (p<0.05) and not correlated (r=-0.1, NS).
Conclusions: Determination of SRSBF by TTU is an accurate way to measure collateral blood flow in different models of intra- and extra-hepatic portal hypertension in rats. Its sensitivity provides accurate measurement of pharmacological changes, unlike the traditional estimation of %PSS by the microsphere method.
abstract_id: PUBMED:9186833
Effect of octreotide on systemic, central, and splanchnic haemodynamics in cirrhosis. Background/aims: Cirrhosis with portal hypertension is associated with changes in the splanchnic and systemic haemodynamics, and subsequent complications, such as bleeding from oesophageal varices, have led to the introduction of long-acting somatostatin analogues in the treatment of portal hypertension. However, reports on the splanchnic and systemic effects of octreotide are contradictory and therefore the aim of the present study was to assess the effects of continuous infusion of octreotide on central and systemic haemodynamics, portal pressures, and hepatic blood flow.
Methods: Thirteen patients with cirrhosis underwent liver vein catheterisation. Portal and arterial blood pressures were determined at baseline and 10, 30, and 50 min after a bolus injection of octreotide 100 micrograms, followed by continuous infusion of octreotide 100 micrograms/ h for 1 h. Hepatic blood flow, cardiac output, central and arterial blood volume, and central circulation time were determined at baseline and 50 min after the start of the octreotide infusion.
Results: The mean arterial blood pressure increased during the first 10 min (p < 0.0005), but returned to baseline after 50 min. The central and arterial blood volume (-16%, p < 0.005) and the central circulation time (-8%, p < 0.05) were significantly decreased after 50 min, whereas the cardiac output did not change significantly. The hepatic venous pressure gradient and the hepatic blood flow did not change significantly at any time after infusion of octreotide.
Conclusions: Octreotide does not affect the portal pressure or hepatic blood flow, whereas it may further contract the central blood volume and thereby exert a potentially harmful effect on central hypovolaemia in patients with cirrhosis. However, these early effects do not exclude the possibility that administration of long-acting somatostatin analogues over a longer period may have a beneficial effect.
abstract_id: PUBMED:10207229
Somatostatin or octreotide in acute variceal bleeding. In patients with cirrhosis, somatostatin or octreotide administration is followed by a transient decrease in the hepatic venous pressure gradient and azygos blood flow. Although no clear-cut changes in variceal pressure are observed and the exact mechanisms of acute hemodynamic changes induced by somatostatin or its derivatives are still unknown, this provided the rationale for its use in patients with variceal hemorrhage. The only known sustained hemodynamic effect of octreotide is to prevent increases in hepatic venous gradient or azygos blood flow in response to food intake. Somatostatin infusion can be as effective as sclerotherapy in the initial control of bleeding esophageal varices in patients with cirrhosis and is associated with fewer complications. Octreotide also seems to be as effective as endoscopic therapy in the control of acute variceal bleeding, although larger studies should be performed before its efficacy and safety profile can be fully evaluated. The combination of somatostatin or its long-acting analogues with endoscopic therapy has recently been delineated as one of the most promising approaches in these patients. Early somatostatin administration with repeat boluses, started several hours before combined sclerotherapy, eases the endoscopic procedure and reduces the rate of failure to control bleeding. Although two studies also showed that octreotide, when started at the time of sclerotherapy or variceal banding, also improves bleeding control, a conclusion on octreotide use in these patients is premature. Optimal administration schedules and doses of somatostatin or octreotide are still unknown. The safety of octreotide in patients with variceal bleeding, which has recently been challenged, should be assessed in larger trials. Recent data suggesting that combining octreotide with beta-blockers or sclerotherapy may represent a useful approach for long-term prevention of rebleeding in these patients will have to be confirmed.
abstract_id: PUBMED:1354573
Octreotide decreased liver metabolic activity in patients with hepatitis B surface antigen-positive cirrhosis. The influence of octreotide and somatostatin on liver metabolic activity was studied in 16 patients with cirrhosis that was positive for hepatitis B surface antigen (HBsAg). In patients receiving a 50 micrograms bolus and a 50 micrograms/hr infusion of octreotide, the hepatic blood flow, hepatic clearance, and the maximum velocity/metabolic elimination rate constant (Vmax/Km) were significantly reduced after octreotide infusion compared with basal values. Similarly, the hepatic blood flow, hepatic clearance, and Vmax/Km were significantly decreased in patients receiving a 250 micrograms bolus and a 250 micrograms/hr infusion of somatostatin. The extraction ratio and the systemic hemodynamic values, including cardiac index, heart rate, mean arterial pressure, and systemic vascular resistance, showed no significant changes in patients receiving either octreotide or somatostatin. These findings suggest that, as with somatostatin, octreotide reduced hepatic blood flow and impaired liver metabolic activity in patients with HBsAg-positive cirrhosis. These effects may have important clinical implications in the management of bleeding esophageal varices in patients with cirrhosis.
Answer: The effect of octreotide bolus injection on azygos blood flow (AzBF) in patients with cirrhosis appears to be somewhat predictable, but there may be variability among individuals with cirrhosis. A study by PUBMED:19996986 found that a 50-μg intravenous bolus of octreotide resulted in a 34% decline in AzBF at 10 minutes compared to baseline, and this reduction was sustained over a 60-minute period. Upon readministration of octreotide at 60 minutes, there was a further significant decline in AzBF, although the response was blunted. This suggests that while octreotide consistently reduces AzBF, the magnitude and duration of the effect can vary, and there may be some degree of desensitization upon readministration.
Another study by PUBMED:11427837 validated the use of color Doppler EUS (CD-EUS) for measuring AzBF and observed reductions in AzBF 10 minutes after octreotide or somatostatin administration. However, after 60 minutes of somatostatin infusion, there was a significant rebound phenomenon, indicating that the effects of these drugs on AzBF are transient and may not be sustained over time.
In animal models, PUBMED:9823558 and PUBMED:10735613 found that octreotide significantly decreased splenorenal shunt blood flow, which is an index of collateral circulation in portal hypertensive rats. This suggests that octreotide can effectively reduce collateral blood flow in portal hypertension, which is often a concern in cirrhotic patients with varices.
Overall, while octreotide does have a predictable effect on reducing AzBF in patients with cirrhosis, the response may not be uniform among all patients, and there can be variations in the hemodynamic response to the drug (PUBMED:19996986). Additionally, the effects are transient and may be subject to a rebound phenomenon (PUBMED:11427837), indicating the need for careful monitoring and possibly repeated dosing in the management of variceal bleeding in cirrhosis. |
Instruction: Can subthalamic nucleus stimulation reveal parkinsonian rest tremor?
Abstracts:
abstract_id: PUBMED:18808774
Can subthalamic nucleus stimulation reveal parkinsonian rest tremor? Introduction: Rest tremor, one of the main symptoms in Parkinson's disease (PD), is dramatically improved following subthalamic nucleus (STN) stimulation. Results are often better than after l-dopa treatment. The occurrence of rest tremor after neurosurgery in patients without preoperative tremor is uncommon.
Aim: The aim of this work was to investigate the role of subthalamic nucleus stimulation in the appearance of parkinsonian rest tremor. Patients And Results: Thirty PD patients (14%) out of 215 undergoing STN deep brain stimulation had an akinetorigid form of the disease, without preoperative tremor, 11 years after disease onset. Six of them experienced the appearance of tremor six months after bilateral STN stimulation when the stimulator was switched off in the Off-medication state. This de novo parkinsonian tremor was improved by l-dopa treatment and disappeared when the stimulator was turned on.
Conclusion: This finding suggests that infraclinical parkinsonian tremor is probably present in all PD patients.
abstract_id: PUBMED:11045125
Chronic electric stimulation of the internal globus pallidus and subthalamic nucleus in Parkinson disease. Pathophysiology: In Parkinson's disease, the neurodegenerative process of the nigrostriatal dopaminergic pathways induces an increase in activity of the subthalamic nucleus and the medial globus pallidus, which causes inhibition of thalamo-cortical outputs and explains parkinsonism. High-Frequency Deep Brain Stimulation: The adverse effects induced by lesions of subcortical structures (thalamotomy, pallidotomy) have increased interest in chronic electrical stimulation, proposed as a new therapy in Parkinson's disease. This technique is reversible and can be modulated, with fewer adverse effects.
Two Targets: Two targets may be proposed in case of severe motor fluctuations: the medial globus pallidus and the subthalamic nucleus. Pallidal stimulation dramatically improves levodopa-induced dyskinesia and, to a variable degree, the parkinsonian triad. Subthalamic stimulation rapidly reverses akinesia, rigidity and tremor, and also dyskinesias, which progressively tend to diminish after decreasing L-dopa dosage. Long-Term Efficacy: A follow-up period of a few years has confirmed that the beneficial effect is maintained. However, stimulation does not prevent the development of certain symptoms (postural impairment, cognitive decline).
Limited Indications: Chronic electrical stimulation of the medial globus pallidus and subthalamic nucleus may be proposed for parkinsonian patients with severe motor fluctuations associated with abnormal involuntary movements that are not controlled by different medical therapies. Parkinsonian symptoms must still be levodopa-responsive and cause clinical disability that severely limits activities of daily living. Cognitive impairment and other severe pathologies are contraindications.
abstract_id: PUBMED:16905883
Relationship of stimulation site location within the subthalamic nucleus region to clinical effects on parkinsonian symptoms. Objective: To determine the relationship of the stimulation site in the subthalamic region to the clinical effects on parkinsonian symptoms, the monopolar stimulation of 4 electrode contacts and the resulting effects on parkinsonian symptoms were evaluated.
Methods: Seventeen consecutive patients (3 males and 14 females) were enrolled in the study. The patients were evaluated while in a nonmedicated state, and 10-20 min after switching on the pulse generator the effects of stimulation were assessed using separate-subset Unified Parkinson's Disease Rating Scale scores.
Results: The relationship between the site stimulated and the percent improvement was analyzed using polynomial regression. Rigidity (p = 0.0004, R2 = 0.15), akinesia (p = 0.02, R2 = 0.07) and total score (p = 0.009, R2 = 0.089) fit well to a second-order polynomial regression and showed the greatest improvement after stimulation at 0-1 mm below the horizontal anterior-posterior commissure (AC-PC) plane. Tremor (p = 0.24, R2 = 0.18) and gait (p = 0.36, R2 = 0.001) had a weak relation to the site stimulated, but stimulation at sites 0-1 mm below the AC-PC plane also produced greater improvement than stimulation at more ventral or dorsal sites. The percent improvement of posture (p = 0.92, R2 = 0.002) had no relation to the site stimulated. The dorsal border of the subthalamic nucleus was located 0.6 +/- 1.2 mm (n = 27) below the AC-PC plane and the most effective electrode contact 1.2 +/- 1.3 mm (n = 27) below it.
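For readers unfamiliar with the analysis, a second-order polynomial fit of percent improvement against contact depth can be sketched as below in Python; the depth and improvement values are invented for illustration and are not data from the study.
    import numpy as np

    # Invented contact depths (mm below the AC-PC plane; negative values lie above it)
    # and percent improvements; illustrative only.
    depth_mm = np.array([-3.0, -2.0, -1.0, 0.0, 0.5, 1.0, 2.0, 3.0])
    improvement = np.array([15.0, 30.0, 45.0, 58.0, 60.0, 55.0, 38.0, 20.0])

    coeffs = np.polyfit(depth_mm, improvement, deg=2)  # second-order polynomial fit
    fitted = np.polyval(coeffs, depth_mm)

    ss_res = np.sum((improvement - fitted) ** 2)
    ss_tot = np.sum((improvement - improvement.mean()) ** 2)
    r_squared = 1 - ss_res / ss_tot                    # analogue of the reported R2
    peak_depth = -coeffs[1] / (2 * coeffs[0])          # depth with maximal predicted improvement
    print(r_squared, peak_depth)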
Conclusions: Stimulation around the dorsal border of the subthalamic nucleus, close to the AC-PC plane, produces greater improvement of parkinsonian symptoms than stimulation at more ventral or dorsal sites.
abstract_id: PUBMED:15654041
Subthalamic nucleus stimulation in tremor dominant parkinsonian patients with previous thalamic surgery. Before the introduction of high frequency stimulation of the subthalamic nucleus (STN), many disabled tremor dominant parkinsonian patients underwent lesioning or chronic electrical stimulation of the thalamus. We studied the effects of STN stimulation in patients with previous ventral intermediate nucleus (VIM) surgery whose motor state worsened. Fifteen parkinsonian patients were included in this study: nine with unilateral and two with bilateral VIM stimulation, three with unilateral thalamotomy, and one with both unilateral thalamotomy and contralateral VIM stimulation. The clinical evaluation consisted of a formal motor assessment using the Unified Parkinson's Disease Rating Scale (UPDRS) and neuropsychological tests encompassing a 50 point frontal scale, the Mattis Dementia Rating Scale, and the Beck Depression Inventory. The first surgical procedure was performed a mean (SD) of 8 (5) years after the onset of disease. STN implantation was carried out 10 (4) years later, and duration of follow up after beginning STN stimulation was 24 (20) months. The UPDRS motor score, tremor score, difficulties in performance of activities of daily living, and levodopa equivalent daily dose significantly decreased after STN stimulation. Neither axial symptoms nor neuropsychological status significantly worsened after the implantation of the STN electrodes. The parkinsonian motor state is greatly improved by bilateral STN stimulation even in patients with previous thalamic surgery, and STN stimulation is more effective than VIM stimulation in tremor dominant parkinsonian patients.
abstract_id: PUBMED:15015014
Effects of subthalamic nucleus stimulation on parkinsonian dysarthria and speech intelligibility. Subthalamic stimulation is known to improve tremor, akinesia and rigidity in Parkinson's disease. However, other signs such as hypophonia and swallowing disorders can be relatively resistant to this technique. The effect on dysarthria remains unclear. The aim of this study was to investigate the effects of electrode implantation and stimulation of the subthalamic nucleus (STN) on parkinsonian dysarthria. Seven patients were prospectively included. Electrodes (Medtronic) were implanted in both STNs. The electrode contacts and stimulation parameters were adjusted to provide the best relief of symptoms with the fewest side effects. Assessment used global scales (Unified Parkinson Disease Rating Scale, UPDRS II and III), a dyskinesia scale, an exhaustive dysarthria assessment (bucco-facial movements, voice, articulation, intelligibility) and the 'dysarthria' item from the UPDRS III. Evaluations were performed in six conditions: before and three months after surgery (pre-op, post-op), with stimulation turned off or on (off-stim, on-stim), and without or with a suprathreshold levodopa dose (off-drug, on-drug). Performance level on the UPDRS III significantly improved following electrode implantation and stimulation. For dysarthria, modest beneficial effects were observed on several motor parameters, especially lip movements. Voice mildly improved, especially for the modulation in loudness and pitch. Articulation was not affected. Furthermore, intelligibility was slightly reduced in the on-stimulation condition, especially when patients received levodopa. At an individual level, negative effects on intelligibility were observed in two patients, and this was associated with a discrete increase in facial and trunk dyskinesias, but not with the electrode position or stimulation parameters. In conclusion, surgery had weak effects on dysarthria. Intelligibility can be worsened, especially in the on-drug condition. Thus, adaptation of the stimulation parameters can be difficult.
abstract_id: PUBMED:17561121
The effects of subthalamic nucleus deep brain stimulation on parkinsonian tremor. The ventral intermediate (Vim) nucleus of the thalamus has been the target of choice for deep brain stimulation (DBS) in patients with disabling essential tremor or medication-refractory parkinsonian tremor. Recently, there is evidence that the subthalamic nucleus (STN) should be the target for patients with tremor associated with Parkinson's disease (PD). To assess the effects of STN DBS on parkinsonian tremor, eight consecutive patients with PD and disabling tremor were videotaped using a standardized tremor protocol. Evaluations were performed at least 12 h after the last dose of medication, first with DBS turned off and then in the optimal DBS-on state. A rater blinded to DBS status evaluated randomized video segments with the tremor components of the Unified Parkinson Disease Rating Scale (UPDRS) and Tremor Rating Scale (TRS). Compared with the DBS-off state, there were significant improvements of 79.4% in the mean UPDRS tremor score (p=0.008), 69.9% in the total TRS score (p=0.008) and 92.5% in the upper-extremity TRS subscore (p=0.008). Functional improvement was noted with pouring liquids. Our findings provide support that STN DBS is an effective treatment of tremor associated with PD.
abstract_id: PUBMED:17683793
High-frequency oscillations (>200 Hz) in the human non-parkinsonian subthalamic nucleus. The human basal ganglia, and in particular the subthalamic nucleus (STN), can oscillate at surprisingly high frequencies, around 300 Hz [G. Foffani, A. Priori, M. Egidi, P. Rampini, F. Tamma, E. Caputo, K.A. Moxon, S. Cerutti, S. Barbieri, 300-Hz subthalamic oscillations in Parkinson's disease, Brain 126 (2003) 2153-2163]. It has been proposed that these oscillations could contribute to the mechanisms of action of deep brain stimulation (DBS) [G. Foffani, A. Priori, Deep brain stimulation in Parkinson's disease can mimic the 300 Hz subthalamic rhythm, Brain 129 (2006) E59]. However, the physiological role of high-frequency STN oscillations is questionable, because they have been observed only in patients with advanced Parkinson's disease and could therefore be secondary to the dopamine-depleted parkinsonian state. Here, we report high-frequency STN oscillations in the range of the 300-Hz rhythm during intraoperative microrecordings for DBS in an awake patient with focal dystonia as well as in a patient with essential tremor (ET). High-frequency STN oscillations are therefore not exclusively related to parkinsonian pathophysiology, but may represent a broader feature of human STN function.
abstract_id: PUBMED:24990837
Successful treatment with bilateral deep brain stimulation of the subthalamic nucleus for benign tremulous parkinsonism. A 62-year-old man complained of resting tremor and postural tremor. Despite the presence of the tremor, the other parkinsonian features were very mild. [(11)C]2β-carbomethoxy-3β-(4-fluorophenyl)-tropane ([(11)C]CFT) PET showed an asymmetrical reduction of uptake, and [(11)C]raclopride PET showed slightly increased uptake in the striatum. Although he was diagnosed as having benign tremulous parkinsonism (BTP), anti-parkinsonian medications, including an anti-cholinergic agent, a dopamine agonist and l-dopa, were not effective for his tremor. His tremor gradually deteriorated enough to disturb writing, working, and eating. Because his quality of life (QOL) was disturbed by the troublesome tremor, deep brain stimulation of the subthalamic nucleus (STN-DBS) was performed. After STN-DBS, his tremor was dramatically improved. Based on the clinical course of our patient as well as previous reports, STN-DBS should be considered as a therapeutic option for BTP patients with severe tremor.
abstract_id: PUBMED:15852367
Can chronic subthalamic nucleus stimulation induce de novo tremor in Parkinson's disease? Subthalamic nucleus stimulation dramatically improves parkinsonian symptoms, notably the tremor. The occurrence of a tremor in the first 6 months after the surgical procedure in patients without tremor preoperatively is much less common. We report on the cases of 3 patients who developed such a modification of their parkinsonian symptomatology. Pathophysiological hypotheses are discussed.
abstract_id: PUBMED:10811394
Implication of the subthalamic nucleus in the pathophysiology and pathogenesis of Parkinson's disease. The subthalamic nucleus (STN) has been shown to play an important role in the control of movement and has been considered as a key structure in the functional organization of the basal ganglia. Several studies postulated that the STN plays a critical role in the pathophysiology of Parkinson's disease and that its inhibition or its lesioning can reverse the cardinal motor symptoms. Nevertheless, the beneficial effect was accompanied by dyskinetic abnormal movements. In order to avoid unpleasant and irreversible side effects we used high-frequency stimulation (HFS) of the STN instead of lesions. We have shown that parkinsonian motor symptoms, akinesia, rigidity, and tremor can be alleviated by HFS of the STN in the nonhuman primate model. Side effects were controllable and appeared only at intensities higher than that inducing the improvement of motor symptoms. In severe parkinsonian patients, bilateral STN-HFS greatly improved parkinsonian motor symptoms. Motor fluctuations were attenuated and patients became independent in most activities of daily living. It appears that STN-HFS mimics the effects of lesions by inhibiting its neuronal activity. In a rat model of parkinsonism, we studied the implication of the STN in the excitotoxicity of nigral dopamine cells. We showed that kainic acid lesioning of the STN can protect nigral dopaminergic cells against 6-hydroxydopamine-induced toxicity. The evidence reviewed in the present article clearly demonstrates that the STN is implicated in the pathophysiology and pathogenesis of Parkinson's disease.
Answer: Yes, subthalamic nucleus (STN) stimulation can reveal parkinsonian rest tremor. A study involving Parkinson's disease (PD) patients who underwent STN deep brain stimulation (DBS) found that some patients with an akinetorigid form of the disease, who did not exhibit preoperative tremor, experienced the appearance of tremor six months after bilateral STN stimulation when the stimulator was switched off in the Off medication state. This de novo parkinsonian tremor was improved by l-dopa treatment and disappeared when the stimulator was turned on. This suggests that infraclinical parkinsonian tremor may be present in all PD patients, even if it is not clinically evident before the surgery (PUBMED:18808774).
Additionally, STN stimulation has been shown to rapidly reverse akinesia, rigidity, and tremor in PD patients, with the beneficial effects being maintained over a follow-up period of a few years (PUBMED:11045125). Furthermore, the location of the stimulation site within the STN region is related to the clinical effects on parkinsonian symptoms, with stimulation around the dorsal border of the STN, close to the anterior-posterior commissure (AC-PC) plane, producing greater improvement of symptoms than stimulation at more ventral or dorsal sites (PUBMED:16905883).
In summary, STN stimulation can not only improve existing parkinsonian rest tremor but can also reveal tremor in patients who did not exhibit this symptom preoperatively, indicating that tremor may be an underlying feature of PD that can be unmasked by STN DBS. |
Instruction: Does direct transport to provincial burn centres improve outcomes?
Abstracts:
abstract_id: PUBMED:22564514
Does direct transport to provincial burn centres improve outcomes? A spatial epidemiology of severe burn injury in British Columbia, 2001-2006. Background: In Canada and the United States, research has shown that injured patients initially treated at smaller emergency departments before transfer to larger regional facilities are more likely to require longer stays in hospital or suffer greater mortality. It remains unknown whether transport status is an independent predictor of adverse health events among persons requiring care from provincial burn centres.
Methods: We obtained case records from the British Columbia Trauma Registry for adult patients (age ≥ 18 yr) referred or transported directly to the Vancouver General Hospital and Royal Jubilee Hospital burn centres between Jan. 1, 2001, and Mar. 31, 2006. Prehospital and in-transit deaths and deaths in other facilities were identified using the provincial Coroner Service database. Place of injury was identified through data linkage with census records. We performed bivariate analysis for continuous and discrete variables. Relative risk (RR) of prehospital and in-hospital mortality and hospital stay by transport status were analyzed using a Poisson regression model.
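As an illustration of how relative risks can be estimated with a Poisson regression model of this kind, the sketch below uses the Python statsmodels package with robust standard errors; the file and variable names (in_facility_death, indirect_referral, age, tbsa, inhalation_injury) are hypothetical and not fields from the British Columbia Trauma Registry.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("burn_registry.csv")  # hypothetical file: one row per patient

    # Poisson regression with robust (HC0) standard errors approximates relative risks
    # for a binary outcome such as in-facility death.
    model = smf.glm("in_facility_death ~ indirect_referral + age + tbsa + inhalation_injury",
                    data=df, family=sm.families.Poisson())
    fit = model.fit(cov_type="HC0")

    rr = np.exp(fit.params["indirect_referral"])          # relative risk for indirect referral
    ci = np.exp(fit.conf_int().loc["indirect_referral"])  # 95% confidence interval
    print(rr, ci.values)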
Results: After controlling for patient and injury characteristics, indirect referral did not influence RR of in-facility death (RR 1.32, 95% confidence interval [CI] 0.54-3.22) or hospital stay (RR 0.96, 95% CI 0.65-1.42). Rural populations experienced an increased risk of total mortality (RR 1.22, 95% CI 1.00-1.48).
Conclusion: Transfer status is not a significant indicator of RR of death or hospital stay among patients who received care at primary care facilities before transport to regional burn centres. However, significant differences in prehospital mortality show that improvements in rural mortality can still be made.
abstract_id: PUBMED:37827938
A comparative study of outcomes of burns across multiple levels of care. Background: Burn injuries are a significant contributor to the burden of disease. The management of burns at specialised burn centres has been shown to improve survival. However, in low- and middle-income countries (LMICs), major burns are managed at non-specialised burn centres due to resource constraints. There are insufficient data on survival from treatment at non-specialised burn centres in LMICs. This study aimed to compare the outcomes of burns treatment between a specialised burn centre and five non-specialised centres.
Methods: A prospective cohort study was conducted on patients aged 18 years or above from January 1, 2021 to September 30, 2021. Participants were selected from the admission register at the emergency department. All burns irrespective of the mechanism of injury or %TBSA were included. Data were entered into REDCap. Statistical analysis of outcomes such as positive blood culture, length of hospital stay (LOHS) and 90-day mortality between specialised burn versus non-specialised centres was performed. Furthermore, an analysis of risk factors for mortality was performed and survival data computed.
Results: Of the 488 study participants, 36% were admitted to a specialised burn centre compared to 64% admitted to non-specialised centres. The demographic characteristics were similar between centres. Patients at the specialised burn centre, compared with non-specialised centres, had significantly higher rates of inhalation injury (30.9% vs 7.7%, p < 0.001), burns > 10% TBSA (83.4% vs 45.7%, p < 0.001) and burns > 20% TBSA (46.9% vs 16.6%, p < 0.001), and a higher median (IQR) ABSI score (6 (5-7) vs 5 (4-6), p < 0.0001). Furthermore, patients from the specialised burn centre vs non-specialised centres had a longer median (IQR) time from injury to first burn excision (7 (4-11) vs 5 (2-10) days), a higher rate of burn sepsis (69% vs 35%), a longer LOHS (17 (11-27) vs 12 (6-22) days), and a higher 90-day mortality rate (19.4% vs 6.4%). After adjusting for confounding variables, survival data showed no difference between specialised burn and non-specialised centres (HR 1.8, 95% CI 1.0-3.2, p = 0.05).
Conclusion: Although it appears that the survival of burn patients managed at non-specialised centres in a middle-income country is comparable to that of patients managed at specialised burn centres, there is unaccounted-for bias in our survival data. Hence, a change in practice is not advocated. However, given resource constraints, specialised burn centres should, in addition to managing major burns, provide training and support to the non-specialised centres.
abstract_id: PUBMED:32660831
Demographics and clinical outcomes of adult burn patients admitted to a single provincial burn centre: A 40-year review. Introduction: This study evaluated trends in demographics and outcomes of cutaneous burns over a forty-year period at a Canadian burn centre.
Methods: Retrospective review was performed of all consecutive adult burn admissions to the Vancouver General Hospital (VGH) between 1976 and 2015. Comparison was made to the 2016 American Burn Association - National Burn Repository.
Results: There were 4105 admissions during the study period. Both overall admissions and admissions per 100,000 BC residents declined (p < 0.0001). Males represented three quarters of admissions. There was a decrease in large burns (p < 0.05). Flame burns were most commonly associated with larger TBSA, ICU stays, and mortality. Mortality decreased from 11.3% to 2.8% (p < 0.05). Factors found to affect mortality included increased length of stay, age and burn size, male gender, and number of complications. Baux50 and rBaux50 increased from 102.8 to 116.7 and from 112.2 to 125.3, respectively (p < 0.05 for each).
Conclusions: This study represents the largest report on burn epidemiology in Canada. The incidence of burns has decreased significantly over the last forty years. Mortality has improved over this time frame, as evidenced by increases in Baux50 and rBaux50 scores. Further data are largely in concurrence with those of the National Burn Repository's amalgamation of US centres.
abstract_id: PUBMED:26440306
Long term outcomes data for the Burns Registry of Australia and New Zealand: Is it feasible? Background: Incorporating routine and standardised collection of long term outcomes following burn into burn registries would improve the capacity to quantify burn burden and evaluate care.
Aims: To evaluate methods for collecting the long term functional and quality of life outcomes of burns patients and establish the feasibility of implementing these outcomes into a multi-centre burns registry.
Methods: Five Burns Registry of Australia and New Zealand (BRANZ) centres participated in this prospective, longitudinal study. Patients admitted to the centres between November 2009 and November 2010 were followed-up at 1, 6, 12 and 24-months after injury using measures of burn specific health, health status, fatigue, itch and return to work. Participants in the study were compared to BRANZ registered patients at the centres over the study timeframe to identify participation bias, predictors of successful follow-up were established using a Generalised Estimating Equation model, and the completion rates by mode of administration were assessed.
Results: 463 patients participated in the study, representing 24% of all BRANZ admissions in the same timeframe. Compared to all BRANZ patients in the same timeframe, the median %TBSA and hospital length of stay were greater in the study participants. The follow-up rates were 63% at 1 month, 47% at 6 months, 40% at 12 months, and 21% at 24 months after injury, and there was marked variation in follow-up rates between the centres. Increasing age, greater %TBSA and opt-in centres were associated with greater follow-up. Centres which predominantly used one mode of administration experienced better follow-up rates.
Conclusions: The low participation rates, high loss to follow-up and responder bias observed indicate that greater consideration needs to be given to alternative models for follow-up, including tailoring the follow-up protocol to burn severity or type.
abstract_id: PUBMED:35843804
Agreement of clinical assessment of burn size and burn depth between referring hospitals and burn centres: A systematic review. Background: The quality of burn care is highly dependent on the initial assessment and care. The aim of this systematic review was to investigate the agreement of clinical assessment of burn depth and %TBSA between the referring units and the receiving burn centres.
Methods: Included articles had to meet criteria defined in a PICO (patients, interventions, comparisons, outcomes). Relevant databases were searched using a predetermined search string (November 6th 2021). Data were extracted in a standardised fashion. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach for test accuracy was used to assess the certainty of evidence. The QUADAS-2 tool was used to assess the risk of bias of individual studies as 'high', 'low' or 'unclear'.
Results: A total of 412 abstracts were retrieved and of these 28 studies with a total of 6461 patients were included, all reporting %TBSA and one also reporting burn depth. All studies were cross-sectional and most of them comprised retrospectively enrolled consecutive cohorts. All studies showed a low agreement between %TBSA calculations made at referring units and at burn centres. Most studies directly comparing estimations of %TBSA at referring institutions and burn centres showed a proportion of overestimations of 50% or higher. The study of burn depth showed that 55% of estimates were equal to the estimates from the burn centre. Most studies had severe study limitations and the risk of imprecision was high. The overall certainty of evidence for accuracy of clinical estimations in referring centres is low (GRADE ⊕⊕◯◯) for %TBSA and very low (GRADE ⊕◯◯◯) for burn depth and resuscitation.
Conclusion: Overestimation of %TBSA at referring hospitals occurs very frequently. The overall certainty of evidence for accuracy of clinical estimations in referring centres is low for burn size and very low for burn depth. The findings suggest that the burn community has a significant challenge in educating and communicating better with our colleagues at referring institutions and that high-quality studies are needed.
abstract_id: PUBMED:25983286
Outcomes of burns in the elderly: revised estimates from the Birmingham Burn Centre. Outcomes after burn have continued to improve over the last 70 years in all age groups including the elderly. However, concerns have been raised that survival gains have not been to the same magnitude in elderly patients compared to younger age groups. The aims of this study were to analyze the recent outcomes of elderly burn injured patients admitted to the Birmingham Burn Centre, compare data with a historical cohort and published data from other burn centres worldwide. A retrospective review was conducted of all patients ≥65 years of age, admitted to our centre with cutaneous burns, between 2004 and 2012. Data was compared to a previously published historical cohort (1999-2003). 228 patients were included. The observed mortality for the study group was 14.9%. The median age of the study group was 79 years, the male to female ratio was 1:1 and median Total Body Surface Area (TBSA) burned was 5%. The incidence of inhalation injury was 13%. Median length of stay per TBSA burned for survivors was 2.4 days/% TBSA. Mortality has improved in all burn size groups, but differences were highly statistically significant in the medium burn size group (10-20% TBSA, p≤0.001). Burn outcomes in the elderly have improved over the last decade. This reduction has been impacted by a reduction in overall injury severity but is also likely due to general improvements in burn care, improved infrastructure, implementation of clinical guidelines and increased multi-disciplinary support, including Geriatric physicians.
abstract_id: PUBMED:32313529
Influence of gender difference on outcomes of adult burn patients in a developing country. The aim of this study was to investigate the impact of gender on outcomes among adult burn patients. A retrospective study was conducted on 5061 adult burn patients (16 - 64 years old) admitted to the Vietnam National Burn Hospital over a three-year period (2016 - 2018). Demographic data, burn features and outcome including complications, length of hospital stay and mortality of male and female groups were compared. Results indicated that male patients were predominant (72.8%), younger (35.5 vs. 37.2 years old; p < .001) and admitted sooner to hospital. A greater number of males suffered electrical and flame/heat direct contact injuries, whereas more females suffered scald injury (34.7% vs. 12.2%; p < .001). Burn extent was larger among males (14.9% vs. 12.1%; p < .001). In addition, a higher proportion of deep burn injuries (44.8% vs. 41.2%; p < .05) and number of surgeries (1.2 vs. 1; p < .05), and longer hospital stay (17.8 vs. 15.8 days; p < .001) was recorded among the male group. Post burn complication and overall mortality rate did not differ between the two groups. However, death rate was remarkably higher in the female group when burn extent was ≥ 50% TBSA (72.4% vs. 57.3%; p < .05). In conclusion, burn features and outcomes were not similar between the male and female group. Male patients appear to suffer more severe injury requiring more surgeries and longer hospital stay. However, more attention should be paid to the significantly higher mortality rate among females with extensive burn.
abstract_id: PUBMED:35083681
Research on accounting of provincial carbon transfer: based on the empirical data of 30 provinces in China. Firstly, by introducing provincial product trading data into the input-output table, this paper constructs the provincial input-output table and then derives the provincial MRIO (Multi-Regional Input-Output) model. Secondly, based on the empirical data from 30 provinces in mainland China, the provincial MRIO model was used to analyze the provincial carbon transfer caused by provincial trading and to calculate provincial direct and complete carbon emissions. Results show that: (1) The provincial MRIO model can separate provincial self-demand carbon emissions, the net carbon emissions transferred by import and export trading, and the net carbon emissions transferred by provincial trading, and can accurately calculate the source and destination of provincial carbon emissions. (2) There is a carbon transfer relationship between any two of China's 30 mainland provinces, and the carbon emissions transferred between provinces differ considerably. The five provinces with the largest net provincial trading carbon transfer are Beijing (144.04 million tons), Shanghai (160.96 million tons), Jiangsu (133.85 million tons), Zhejiang (134.12 million tons) and Guangdong (268.32 million tons). (3) Most of the central and western regions, dominated by economically underdeveloped provinces, bear the carbon transfer of economically developed provinces. (4) When considering the dual carbon transfer of import and export trading and provincial product trading, only 5 of the 30 mainland provinces (Beijing, 130.97 million tons; Shanghai, 59.20 million tons; Jiangxi, 3.36 million tons; Chongqing, 11.75 million tons; and Qinghai, 0.02 million tons) have a positive difference and realize net carbon transfer out, while the remaining 26 provinces are net recipients of carbon transfer.
abstract_id: PUBMED:37864399
Patient-reported outcomes and their predictors 2 years after burn injury: A cross-sectional study. This study aimed to describe patient-reported outcomes 2 years after burn injury and to comprehensively elucidate predictors that may influence these outcomes. This cross-sectional, prospective study included 352 patients who were admitted to the Department of Burn Surgery at a tertiary teaching hospital between January 2017 and December 2020. We collected demographic and disease-related data and instructed participants to complete the Readiness for Hospital Discharge Scale (RHDS) and the Burn Specific Health Scale-Brief (BSHS-B) questionnaire. The overall score of patient-reported outcomes 2 years after burn injury was 126.55 ± 33.32 points, and the dimensions with the lowest scores were "hand function" (13.96 ± 5.75), "heat sensitivity" (14.84 ± 4.90), "treatment regimens" (13.41 ± 6.77) and "work" (11.30 ± 4.97). Multiple linear regression analysis revealed that less postburn pruritus, better readiness for hospital discharge, less total body surface area (TBSA), better social participation, white-collar jobs, older age, better sleep quality and burns not caused by electricity were associated with better outcomes. Patients experienced poor patient-reported outcomes 2 years after burn injury. Integrated rehabilitative care is necessary to address patients' unique needs and improve long-term patient-reported outcomes.
abstract_id: PUBMED:35995642
The association between out of hours burn centre admission and in-hospital outcomes in patients with severe burns. Introduction: Patients with severe burns (≥20% total body surface area [TBSA]) have specific and time-sensitive needs on arrival at the burn centre. Burn care systems in Australia and New Zealand are organised differently during weekday business hours compared to overnight and weekends. The aims of this study were to compare the profile of adult patients with severe burns admitted during business hours with patients admitted out of hours and to quantify the association between time of admission and in-hospital outcomes in the Australian and New Zealand context.
Methods: Data were extracted from the Burns Registry of Australia and New Zealand for adults (≥18 years) with severe burns admitted to Australian or New Zealand burn centres between July 2016 and June 2020. Differences in patient profiles, clinical management, and in-hospital outcomes were investigated. Univariable and multivariable logistic and linear regression models were used to quantify associations between time of admission and in-hospital outcomes of interest.
Results: We found 623 patients eligible for inclusion. Most patients were admitted out of hours (69.2%), their median age was 42 years, and most were male (78%). The median burn size was 30% TBSA and 32% of patients had an inhalation injury. A greater proportion of patients admitted out of hours had alcohol and/or drugs involved with the injury compared to patients admitted during business hours. No other differences between groups were observed. Patients in both groups had similar odds of dying in hospital (odds ratio [OR] 1.49, 95% confidence interval [CI] 0.64-3.48), developing acute kidney injury within 72 h (OR 0.58, 95% CI 0.32-1.07), or sepsis (OR 1.04, 95% CI 0.46-2.35). No association was found between time of admission and hospital length of stay (1.00, 95% CI 0.82-1.23) or intensive care length of stay (0.97, 95% CI 0.73-1.27).
Discussion: In this first Australian and/or New Zealand study to explore the association between time of admission and burn patient in-hospital outcomes, out of hours admission was not associated with patient outcomes of interest.
Conclusion: These findings support current models of care in Australian and New Zealand burn centres; however, further investigation is required. Nonetheless, given that most patients with severe burns arrive at the burn centre out of hours, it is plausible that out-of-hours availability of senior burn clinicians will improve patient care and safety resilience within burn care systems.
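The odds ratios reported in this abstract come from logistic regression models. As a rough, hypothetical illustration of how such adjusted odds ratios and 95% confidence intervals can be produced, the sketch below uses Python's statsmodels; the file name and column names (out_of_hours, tbsa, age, died_in_hospital) are assumptions for the example, not the registry's actual schema.

```python
# Rough, hypothetical sketch of adjusted odds ratios via logistic regression.
# "registry.csv" and the column names are illustrative assumptions only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("registry.csv")
X = sm.add_constant(df[["out_of_hours", "tbsa", "age"]])  # out_of_hours coded 0/1
model = sm.Logit(df["died_in_hospital"], X).fit()

# Exponentiate coefficients and confidence limits to get ORs with 95% CIs.
summary = np.exp(model.conf_int())
summary.columns = ["2.5%", "97.5%"]
summary["OR"] = np.exp(model.params)
print(summary)
```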
Answer: The question of whether direct transport to provincial burn centres improves outcomes for burn patients has been explored in several studies, with varying results.
One study conducted in British Columbia found that transfer status (whether patients were directly transported to a burn centre or initially treated at smaller emergency departments before transfer) was not a significant indicator of the risk of death or hospital stay among patients who received care at primary care facilities before transport to regional burn centres (PUBMED:22564514). However, this study did note that rural populations experienced an increased risk of total mortality, suggesting that improvements in rural mortality could still be made.
In contrast, a study comparing outcomes of burns treatment between a specialised burn centre and five non-specialised centres in a middle-income country found that patients at the specialised burn centre had more severe burns and a higher rate of burn sepsis, increased length of hospital stay, and higher 90-day mortality rates. However, after adjusting for confounding variables, survival data showed no difference between specialised burn and non-specialised centres (PUBMED:37827938). This suggests that while specialised centres may handle more severe cases, the survival outcomes may be comparable to non-specialised centres when accounting for the severity of burns.
Another study reviewing 40 years of data from a Canadian burn centre reported a significant decrease in burn incidence and mortality over time, which could be attributed to improvements in burn care (PUBMED:32660831). This suggests that advancements in treatment at burn centres have positively impacted patient outcomes.
A systematic review highlighted the frequent overestimation of burn size at referring hospitals, indicating a need for better education and communication with referring institutions (PUBMED:35843804). This could imply that direct transport to burn centres, where assessment and treatment might be more accurate, could potentially improve outcomes.
In summary, while direct transport to provincial burn centres does not appear to be a significant predictor of mortality or length of hospital stay in some studies, there is evidence that specialised burn centres may handle more severe cases and that there is room for improvement in the initial assessment and treatment of burns at referring hospitals. The overall quality of care and advancements in burn treatment at specialised centres likely contribute to improved outcomes, but the evidence does not conclusively show that direct transport alone is responsible for these improvements. |
Instruction: Para-aortic lymph node dissection for women with endometrial adenocarcinoma and intermediate- to high-risk tumors: does it improve survival?
Abstracts:
abstract_id: PUBMED:24362716
Para-aortic lymph node dissection for women with endometrial adenocarcinoma and intermediate- to high-risk tumors: does it improve survival? Objective: Literature suggests that para-aortic lymphadenectomy (para-aortic lymph node dissection [PALND]) has a therapeutic benefit for women with intermediate- to high-risk endometrial adenocarcinoma. We hypothesized that the observed survival advantage of PALND is a reflection of the general health of the patient rather than a therapeutic benefit of surgery.
Methods: Women with intermediate- to high-risk endometrial adenocarcinoma diagnosed from 2002 to 2009 at a single institution were identified. Medical comorbidities, pathology, and survival information were abstracted from the medical record. The χ² test or the t test was used for univariate analysis. Overall survival (OS) and disease-specific survival (DSS) were calculated using the Kaplan-Meier method.
Results: A total of 253 women with a mean age of 64 years were identified. Of these women, 174 had a pelvic lymphadenectomy (pelvic lymph node dissection [PLND]) and 82 had PLND and PALND. The rate of positive nodes was 13% (23/174) for the women who had PLND and was 7% (6/82) for those who had PLND and PALND. Only 1.2% (1/82) of the women who had PLND and PALND had negative pelvic but positive para-aortic nodes. The patients who had PALND had a lower body mass index and were less likely to have significant medical comorbidities. The patients who had PALND had improved 5-year OS (96% vs 82%, P = 0.007) but no difference in 5-year DSS (96% vs 89%, P value = not significant).
Conclusions: Women with intermediate- to high-risk endometrial adenocarcinoma who undergo PALND have improved OS but no improvement in DSS. The lack of difference in DSS supports the hypothesis that underlying comorbidities as opposed to lack of PALND result in poorer outcome.
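This abstract compares overall and disease-specific survival with the Kaplan-Meier method. A minimal, hypothetical sketch of that kind of analysis using Python's lifelines package is shown below; the group labels and column names (group, years, died) are illustrative assumptions rather than the study's data.

```python
# Minimal, hypothetical sketch of a Kaplan-Meier comparison between two groups.
# Column names (group, years, died) are illustrative, not the study's dataset.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("cohort.csv")
plnd = df[df["group"] == "PLND"]
both = df[df["group"] == "PLND+PALND"]

kmf = KaplanMeierFitter()
kmf.fit(plnd["years"], event_observed=plnd["died"], label="PLND only")
ax = kmf.plot_survival_function()
kmf.fit(both["years"], event_observed=both["died"], label="PLND + PALND")
kmf.plot_survival_function(ax=ax)

# Log-rank test for a difference between the two survival curves.
result = logrank_test(plnd["years"], both["years"], plnd["died"], both["died"])
print(result.p_value)
```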
abstract_id: PUBMED:37070779
A meta-analysis of the effect of pelvic and para-aortic lymph node dissection on the prognosis of patients with endometrial cancer. Endometrial cancer (EC) is the second most common malignant tumor of the female reproductive system and occurs in the peri- and post-menopausal periods. The metastasis routes of EC include direct spread, hematogenous metastasis and lymph node metastasis. Symptoms such as vaginal discharge or irregular vaginal bleeding may occur in the early stage. Patients treated at this point are mostly in an early pathological stage, and comprehensive treatment such as surgery, radiotherapy and chemotherapy can improve the prognosis. This article investigates whether endometrial cancer requires pelvic and para-aortic lymph node dissection. The clinical data of 228 patients with endometrial cancer who underwent pelvic lymphadenectomy in our hospital from July 2020 to September 2021 were retrospectively analyzed. All patients underwent preoperative clinical staging and postoperative pathological staging. This paper compared lymph node spread rates of endometrial carcinoma across stages, depths of muscle invasion, and pathological characteristics to analyze risk factors for lymph node metastasis. Results showed a metastasis rate of 7.5% in the 228 cases of endometrial cancer, increasing with deeper myometrial invasion, and lymph node spread rates varied with clinicopathological factors. The lymph node spread rate of poorly differentiated carcinoma was higher than that of well-differentiated carcinoma. The lymph node spread rate of serous carcinoma was 100%, but there was no significant difference between the lymph node metastasis rates of special-type carcinoma and adenocarcinoma (P > 0.05).
abstract_id: PUBMED:31293032
Successful para-aortic lymph node dissection for endometrial cancer with horseshoe kidney: A case report and review of the literature. Horseshoe kidney (HSK) is considered to impede para-aortic lymph node dissection. We report the case of a 54-year-old female patient with endometrial cancer and HSK, treated successfully with para-aortic lymph node dissection, and present a literature review regarding vascular abnormalities associated with HSK affecting para-aortic lymph node dissection. Three-dimensional computed tomography reconstruction revealed an accessory renal artery, a supernumerary renal vein and ventral displacement of the renal pelvis and ureter. Abdominal modified radical hysterectomy, bilateral salpingo-oophorectomy, pelvic and para-aortic lymph node dissection and omentectomy were then performed. Lymphadenectomy behind the isthmus of the kidney was performed without separation of the isthmus by lifting the kidneys with vessel tape. There were no intraoperative or postoperative complications. Grasping the shifted ureter and the complicated vascular network of the HSK, and securing the operative field without division of the isthmus, were key to reducing complications and hemorrhage. This case report can serve as a guide for performing para-aortic lymph node dissection safely and effectively in patients with HSK.
abstract_id: PUBMED:32318344
Comparison of Laparoscopy and Laparotomy for Para-Aortic Lymphadenectomy in Women With Presumed Stage I-II High-Risk Endometrial Cancer. Objective: To compare laparoscopic surgery to laparotomy for harvesting para-aortic lymph nodes in presumed stage I-II, high-risk endometrial cancer patients. Methods: Patients with histologically proven endometrial cancer, presumed stage I-II with high-risk tumor features who had undergone hysterectomy, bilateral salpingoophorectomy, or pelvic and para-aortic lymphadenectomy by either laparoscopy or laparotomy in Samsung Medical Center from 2005 to 2017 were retrospectively investigated. The primary outcome was para-aortic lymph node count. Secondary outcomes were pelvic lymph node count, perioperative events, and postoperative complications. Results: A total of 90 patients was included (35 for laparotomy, 55 for laparoscopy) for analysis. The mean (±SD) para-aortic lymph node count was 10.66 (±7.596) for laparotomy and 10.35 (±5.848) for laparoscopy (p = 0.827). Mean pelvic node count was 16.8 (±6.310) in the laparotomy group and 16.13 (±7.626) in the laparoscopy group (p = 0.664). Lower estimated blood loss was shown in the laparoscopy group. There was no difference in perioperative outcome between the groups. Additional multivariate analysis showed that survival outcome was not affected by surgical methods in presumed stage I-II, high-risk endometrial cancer patients. Conclusions: Study results demonstrate comparable para-aortic lymph node count with less blood loss in laparoscopy over laparotomy. In women with presumed stage I-II, high-risk endometrial cancer, laparoscopy is a valid treatment modality.
abstract_id: PUBMED:28657221
Implications of para-aortic lymph node metastasis in patients with endometrial cancer without pelvic lymph node metastasis. Objective: The aim of this study was to confirm the incidence and implications of a lymphatic spread pattern involving para-aortic lymph node (PAN) metastasis in the absence of pelvic lymph node (PLN) metastasis in patients with endometrial cancer.
Methods: We carried out a retrospective chart review of 380 patients with endometrial cancer treated by surgery including PLN dissection and PAN dissection at Hokkaido Cancer Center between 2003 and 2016. We determined the probability of PAN metastasis in patients without PLN metastasis and investigated survival outcomes of PLN-PAN+ patients.
Results: The median numbers of PLN and PAN removed at surgery were 41 (range: 11-107) and 16 (range: 1-65), respectively. Sixty-four patients (16.8%) had lymph node metastasis, including 39 (10.3%) with PAN metastasis. The most frequent lymphatic spread pattern was PLN+PAN+ (7.9%), followed by PLN+PAN- (6.6%), and PLN-PAN+ (2.4%). The probability of PAN metastasis in patients without PLN metastasis was 2.8% (9/325). The 5-year overall survival rates were 96.5% in PLN-PAN-, 77.6% in PLN+PAN-, 63.4% in PLN+PAN+, and 53.6% in PLN-PAN+ patients.
Conclusion: The likelihood of PAN metastasis in endometrial cancer patients without PLN metastasis is not negligible, and the prognosis of PLN-PAN+ is likely to be poor. The implications of a PLN-PAN+ lymphatic spread pattern should thus be taken into consideration when determining patient management strategies.
abstract_id: PUBMED:20096987
Systematic pelvic and aortic lymphadenectomy in intermediate and high-risk endometrial cancer: lymph-node mapping and identification of predictive factors for lymph-node status. Objective: To systematically assess the metastatic pattern of intermediate- and high-risk endometrial cancer in pelvic and para-aortic lymph-nodes and to evaluate risk factors for lymph-node metastases.
Study Design: Between 01/2005 and 01/2009, 62 consecutive patients with intermediate- and high-risk endometrial cancer who underwent systematic surgical staging including pelvic and para-aortic lymphadenectomy were enrolled into this study. Patients' characteristics, histological findings, lymph-node localization and involvement, surgical morbidity and relapse data were analyzed. Univariate analysis was performed to define risk factors for lymph-node metastasis.
Results: Of the 13 patients (21%) with positive lymph-nodes (N1), 8 (61.5%) had both pelvic and para-aortic lymph-nodes affected, 2 (15.4%) only para-aortic and 3 (23%) only pelvic lymph-node metastases. Overall, 54% of the N1-patients had positive lymph-nodes above the inferior mesenteric artery (IMA) up to the level of the renal veins. Univariate analysis revealed lymphovascular space invasion (p < 0.001), vascular space invasion (p < 0.001) and incomplete tumor resection (p = 0.008) as significant risk factors for N1-status. Overall and progression-free survival were not significantly different between N1- and N0-patients.
Conclusions: Since the proportion of N1-endometrial cancer patients with positive para-aortic lymph-nodes is, at 76%, considerably high, and more than half of them have affected lymph-nodes above the IMA-level, lymphadenectomy for endometrial cancer should be extended up to the renal veins, when indicated. The therapeutic impact of systematic lymphadenectomy on overall and progression-free survival has still to be evaluated in future prospective randomized studies.
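Univariate screening of risk factors for nodal involvement, as described in this abstract, is commonly done with a chi-square test on a contingency table. The sketch below illustrates that idea with invented counts; it is not a re-analysis of the study and the risk factor shown is only a placeholder.

```python
# Illustrative sketch of a univariate chi-square screen for one candidate
# risk factor against nodal status. The 2x2 counts below are invented.
from scipy.stats import chi2_contingency

#            N0, N1
table = [[45,  4],   # risk factor absent
         [ 4,  9]]   # risk factor present

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```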
abstract_id: PUBMED:29927197
Survival Rates between Early Stage Endometrial Carcinoma With or Without Para-Aortic Lymph Node Resection. Background: In 1988, the International Federation of Gynecology and Obstetrics (FIGO) introduced the concept of surgical staging of endometrial cancer. Pelvic lymph node resection is a part of our routine procedure for all endometrial cancer patients while the use of para-aortic lymph node resection is at the discretion of the physician during surgery.
Objective: To compare the survival rates of endometrial cancer patients receiving pelvic lymph node resection with patients receiving pelvic and para-aortic lymph node resection.
Material And Method: This was a retrospective cohort study of early stage endometrial cancer patients that underwent surgical staging with or without para-aortic lymph node resection. Eighty patients were in the only pelvic lymph node resection group (PLN group), and 284 patients were in the combined pelvic and para-aortic lymph node resection group (PPALN group). The survival data were analyzed using the Kaplan-Meier method, and the log-rank test was employed to compare the survival curves of the two groups.
Results: The median follow-up period was 31.5 months. Median number of pelvic lymph nodes removed was 9 (1-33) for the PLN group and 14 (3-44) for the PPALN group. Median number of para-aortic nodes removed was 2 (0-12), and the rate of lymph node metastasis was 8.24%. In the PPALN group, 3.52% of patients had para-aortic lymph node metastasis. The overall 3- and 5-year survival rates were 90.9% and 87.4%, respectively, for the PLN group, compared to 93.2% and 88.7%, respectively, for the PPALN group (p = 0.484).
Conclusion: The survival rate of early stage endometrial carcinoma patients that underwent surgical staging with or without para-aortic lymph node resection is comparable.
abstract_id: PUBMED:36219260
The role of lymph node dissection in the surgical treatment of endometrial cancer patients (retrospective analysis). Purpose: Endometrial cancer in recent years has taken the lead among cancer processes of the female reproductive system. The feasibility of pelvic and para-aortic lymph node dissection in patients with endometrial cancer has always been a controversial issue. The aim of the presented paper is to evaluate the feasibility of pelvic and para-aortic lymph node dissection in patients with endometrial cancer, depending on the stage of the disease, postoperative complications, and patient survival, depending on the volume of surgical intervention.
Methods: The study involved 285 patients with stage I-IV endometrioid endometrial cancer treated at the Pre-graduate Department of Oncogynecology of the National Cancer Institute. The average age of patients was 55 ± 5.7 years. In 74.5%, the disease was detected at stage I and uterine extirpation was performed with/without appendages.
Results: The duration of the operation varied with the volume of intervention, from 1 h 30 min ± 10 min for panhysterectomy up to 3 h 20 min ± 10 min when para-aortic lymph node dissection was performed. The average number of lymph nodes removed was 7 ± 1.1 pelvic and 12 ± 1.5 para-aortic.
Conclusion: The basic principles of surgical treatment consist in individual choice of the scope of surgical intervention, performing adequate lymph node dissection, and preventing relapse and metastasis of the disease.
abstract_id: PUBMED:26825615
A patient group at negligible risk of para-aortic lymph node metastasis in endometrial cancer. Objective: The objective of this study was to identify a group at negligible risk of para-aortic lymph node metastasis (LNM) in endometrial cancer and its presumed prognosis.
Methods: We enrolled 555 patients with endometrial cancer who underwent preoperative endometrial biopsy, pelvic magnetic resonance imaging, and determination of serum cancer antigen (CA)125, and surgical treatment including lymphadenectomy. Three risk factors for LNM confirmed in previous reports were grade 3/non-endometrioid histology, large tumor volume, and a high CA125 value. Pelvic LNM rate, para-aortic LNM rate, and 5-year overall survival rate were assessed in four groups according to the number of these risk factors.
Results: LNM was noted in medical records of 74 patients (13.3%). Of 226 patients in the no risk factor group, pelvic LNM was noted in the medical records of five (2.2%), but no para-aortic LNM was noted. The 3-year/5-year survival rates in the no risk factor group were 97.2/96.6%, with a median follow-up period of 65.5 months. Of 186 patients in the one risk factor group, 21 (11.2%) had pelvic LNM. Of 113 patients undergoing para-aortic LN dissection in the one risk factor group, six (5.3%) had para-aortic LNM.
Conclusion: Patients with grade 1/2 histology based on endometrial biopsy, small tumor volume assessed by magnetic resonance imaging, and low CA125 value are supposed to have negligible risk of para-aortic LNM. In such patients, the para-aortic region might not be considered as a target to be assessed by staging procedure.
abstract_id: PUBMED:36139675
Therapeutic Benefit of Systematic Lymphadenectomy in Node-Negative Uterine-Confined Endometrioid Endometrial Carcinoma: Omission of Adjuvant Therapy. Endometrial cancer is the most common gynecological tract malignancy in developed countries, and its incidence has been increasing globally with rising obesity rates and longer life expectancy. In endometrial cancer, extrauterine disease, in particular lymph node metastasis, is an important prognostic factor. Nevertheless, pelvic lymphadenectomy is not considered to have a therapeutic benefit, as it did not improve survival in randomized studies. However, lymphadenectomy may have a therapeutic benefit if adjuvant therapy can be omitted without decreasing oncological outcomes, as the long-term quality of life is maintained by avoiding morbidities associated with adjuvant therapy. In intermediate- and high-risk endometrioid endometrial carcinomas, adjuvant therapy may be safely omitted without decreasing long-term survival when patients are found to be node-negative at open surgery that includes systematic pelvic and para-aortic lymphadenectomy. Systematic lymphadenectomy may remove undetectable low-volume lymph node metastasis in both pelvic and para-aortic regions, and open surgery may reduce vaginal recurrence even without vaginal brachytherapy. However, lymphadenectomy may not improve survival in elderly patients and patients with p53-mutant tumors. In this review, I discuss the characteristics of lymph node metastasis, the methods of lymph node assessment, and the therapeutic benefits of systematic lymphadenectomy in patients with intermediate- and high-risk endometrioid endometrial carcinoma.
Answer: The evidence regarding the impact of para-aortic lymph node dissection (PALND) on survival in women with intermediate- to high-risk endometrial adenocarcinoma is mixed and somewhat controversial. Some studies suggest a potential benefit, while others do not find a significant difference in survival outcomes.
One study found that women with intermediate- to high-risk endometrial adenocarcinoma who underwent PALND had improved overall survival (OS) but no improvement in disease-specific survival (DSS). The lack of difference in DSS supports the hypothesis that underlying comorbidities, rather than the absence of PALND, may result in poorer outcomes (PUBMED:24362716).
Another study indicated that the survival rate of early-stage endometrial carcinoma patients who underwent surgical staging with or without para-aortic lymph node resection was comparable, suggesting that PALND may not be necessary for all patients (PUBMED:29927197).
A retrospective analysis emphasized that the basic principles of surgical treatment consist of an individual choice of the scope of surgical intervention, performing adequate lymph node dissection, and preventing relapse and metastasis of the disease (PUBMED:36219260).
Furthermore, a study identified a patient group with negligible risk of para-aortic lymph node metastasis, suggesting that in such patients, the para-aortic region might not need to be assessed by the staging procedure (PUBMED:26825615).
Lastly, a review discussed that systematic lymphadenectomy, including PALND, may have a therapeutic benefit if adjuvant therapy can be omitted without decreasing oncological outcomes, particularly in node-negative patients with intermediate- and high-risk endometrioid endometrial carcinoma (PUBMED:36139675).
In conclusion, while PALND may offer an OS benefit for some patients, the evidence does not consistently show an improvement in DSS. The decision to perform PALND should be individualized based on the patient's risk factors, overall health, and the potential for therapeutic benefit versus the risks associated with the procedure. |
Instruction: Does the positive influence of an undergraduate rural placement persist into postgraduate years?
Abstracts:
abstract_id: PUBMED:22713111
Does the positive influence of an undergraduate rural placement persist into postgraduate years? Introduction: Medical schools worldwide are playing a role in addressing the shortage of rural health practitioners. Selection of rural-origin students and long-term rural undergraduate placements have been shown to have a positive influence on a subsequent career choice of rural health. Evidence for the impact of short-term rural placements is less clear. In New Zealand, the Otago University Faculty of Medicine introduced a 7 week rural undergraduate placement at the Dunedin School Of Medicine, one of its three clinical schools, in 2000. A study of the first two annual cohorts showed a positive influence of the course on student attitudes to rural health and their intention to practise in a rural setting. The purpose of this study was to test whether or not these effects persisted into postgraduate years.
Method: The original study cohorts were posted a questionnaire (questions worded identically to the original survey) in 2009 (5th and 6th postgraduate years). Non-responders were followed up after 2 months. Graduates from the same year cohort at the two other Otago clinical schools (Christchurch and Wellington) were also surveyed. In addition to analysis by question, principal component analysis (PCA) identified 3 questions which represented the influence of the medical undergraduate program on students' attitudes towards rural general practice. This was used as an index of influence of the undergraduate curriculum.
Results: There was a statistically significant difference between graduates from Dunedin and those from the other two schools in reporting a positive influence towards rural practice from the undergraduate course. When asked how the medical undergraduate program influenced their attitude towards a career in rural practice, 56% of respondents from Dunedin reported a positive influence compared with 24% from Christchurch and 15% from Wellington. This effect was less strong than that obtained immediately after the rural placement, where 70% of Dunedin-based students reported a positive influence. The index value for positive effect on attitudes was significantly higher for respondents who studied at Dunedin than at Wellington (mean index value 0.552 for Dunedin vs -0.374 for Wellington; t=4.172, p<0.001) or Christchurch (mean index value -0.083 for Christchurch; t=2.606, p=0.011). There was no significant difference between Christchurch and Wellington (t=1.420, p=0.160). There was no significant difference among schools in the proportion of graduates who had worked or intended to work in rural general practice at any point in their career (24% Dunedin, 31% Christchurch, 16% Wellington; Phi=0.160, p=0.178).
Conclusion: Most of the literature on the influence of rural undergraduate placements, especially short term placements, examines immediate changes. This study adds to the evidence by showing that positive effects from a rural undergraduate placement persist into the postgraduate years, although that in isolation is unlikely to result in a significant workforce effect. Further investigation is warranted into which features of the undergraduate placement result in an extended positive effect on student attitudes.
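The index of undergraduate influence described in this abstract was derived with principal component analysis and compared between schools. The sketch below shows one way such an index could be constructed and compared in Python; the file, item and column names are hypothetical and do not come from the study.

```python
# Hypothetical sketch: build a single "influence" index from several
# questionnaire items with PCA, then compare two schools on that index.
# Item and column names are invented for the example.
import pandas as pd
from sklearn.decomposition import PCA
from scipy.stats import ttest_ind

df = pd.read_csv("survey.csv")
items = df[["q_attitude", "q_intent", "q_influence"]]

pca = PCA(n_components=1)
df["influence_index"] = pca.fit_transform(items)[:, 0]

dunedin = df.loc[df["school"] == "Dunedin", "influence_index"]
wellington = df.loc[df["school"] == "Wellington", "influence_index"]
t_stat, p_value = ttest_ind(dunedin, wellington)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```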
abstract_id: PUBMED:22935122
Rural Undergraduate Support and Coordination, Rural Clinical School, and Rural Australian Medical Undergraduate Scholarship: rural undergraduate initiatives and subsequent rural medical workforce. Background: This study examined postgraduate work after an undergraduate clinical year spent in the Rural Clinical School of Western Australia (RCSWA), compared with a 6-week Rural Undergraduate Support and Coordination (RUSC)-funded rural experience in a 6-year undergraduate medical course. Rural background, sex and Rural Australian Medical Undergraduate Scholarship (RAMUS)-holding were taken into account. Methods: University of Western Australia undergraduate data were linked by hand with postgraduate placements to provide a comprehensive dataset on the rural exposure history of junior medical practitioners working in Western Australia between 2004 and 2007.
Results: Participation in the RCSWA program was associated with significantly more postgraduate year one rural work than RUSC placement alone (OR=1.5, CI 0.97-2.38). The RCSWA workforce effect increased at postgraduate year two (OR=3.0, CI 1.6484 to 5.5935 relative to RUSC). Rural-origin practitioners who chose the RCSWA program were more likely than other rural-origin practitioners to take rural rotations in both postgraduate years. RAMUS holders' choice in relation to the RCSWA program predicted later rural work. There were no effects of sex.
Conclusions: Rural initiatives, in particular the Rural Clinical School program, are associated with postgraduate rural choices. The real impact of these data rely on the translation of early postgraduate choices into long-term work commitments.
abstract_id: PUBMED:7554587
Influence of undergraduate and postgraduate education on recruitment and retention of physicians in rural Alberta. The composition of practising physicians in Alberta, with respect to medical school of graduation, changed between 1986 and 1991. The percentage of graduates of the 2 Alberta medical schools increased, and the percentage of graduates of foreign medical schools decreased. Graduates of the University of Calgary increased their percentage in family practice in urban and rural communities except in Edmonton, while graduates of the University of Alberta increased their percentage almost everywhere except in communities with populations of < 4,000. Although graduates of Alberta medical schools are locating their practices in rural regions, the smaller communities continue to depend on foreign medical graduates. Retention is a problem in communities of < 4,000, with greatest mobility demonstrated by non-Albertan Canadian graduates and foreign medical graduates while Alberta graduates demonstrate less mobility. Overall, 85% of new Alberta physicians have had undergraduate or postgraduate experience in Alberta or Canada. Those with no medical educational experiences in Canada are more likely to locate in small communities. Those with postgraduate training in Alberta or Canada are more likely to locate in urban centres. When both undergraduate and postgraduate influences are considered, Alberta graduates appear to locate in non-urban regions to a greater extent than other Canadian or foreign graduates. For family physicians and specialists, the city where postgraduate training was obtained has a profound influence on the choice of urban practice locations.
abstract_id: PUBMED:32000498
Factors associated with rural work for nursing and allied health graduates 15-17 years after an undergraduate rural placement through the University Department of Rural Health program. Introduction: Very little is known about the long term workforce outcomes, or factors relating to these outcomes, for nursing and allied health rural placement programs. The positive evidence that does exist is based on short term (1-3 year) evaluations, which suggest that undergraduate rural placements are associated with substantial immediate rural practice, with 25-30% of graduates practising rurally. These positive data suggest the value of examining long term practice outcomes, since such data are necessary to provide an evidence base for future workforce strategies. The objective was to measure long term (15-17 year) rural practice outcomes for nursing and allied health graduates who had completed an undergraduate rural placement of 2-18 weeks through a university department of rural health (UDRH).
Methods: This was a longitudinal cohort study, with measures taken at the end of the placement, at one year and at 15-17 years post-graduation. Participants were all nursing and allied health students who had taken part in a UDRH rural placement, who consented to be followed up, and whose practice location was able to be identified. The main outcome measure was factors associated with location of practice as being either urban (RA 1) or rural (RA 2-5).
Results: Of 776 graduates initially surveyed, 474 (61%) were able to be contacted in the year after their graduation, and 244 (31%) were identified through the Australian Health Practitioner Regulation Agency, 15-17 years later. In univariate analysis at the first graduate year, previously lived rural, weeks in placement, discipline and considering future rural practice all had significant relationships with initial rural practice. In multivariate analysis, only rural background retained significance (odds ratio (OR) 3.19, confidence interval (CI) 1.71-5.60). In univariate analysis 15-17 years later, previously lived rural and first job being rural were significantly related to current rural practice. In multivariate analysis, only first job being rural retained significance (OR 11.57, CI 2.77-48.97).
Conclusion: The most significant long term practice factor identified in this study was initial rural practice. This suggests that funding to facilitate a rural pathway to not just train but also support careers in rural nursing and allied health rural training, similar to that already established for pharmacy and medicine, is likely to have beneficial long term workforce outcomes. This result adds to the evidence base of strategies that could be implemented for the successful development of a long term rural health workforce.
abstract_id: PUBMED:34457735
A Qualitative and Semiquantitative Exploration of the Experience of a Rural and Regional Clinical Placement Programme. In many countries, including New Zealand, recruitment of medical practitioners to rural and regional areas is a government priority, yet evidence for what determines career choice remains limited. We studied 19 newly qualified medical practitioners, all of whom had participated in a year-long undergraduate rural or regional placement (the Pūkawakawa Programme). We explored their placement experiences through focus groups and interviews and aimed to determine whether experiential differences existed between those who chose to return to a rural or regional location for early career employment (the Returners) and those who did not (the Non-Returners). Focus group and interview transcripts were a mean (range) length of 6485 (4720-7889) and 3084 (1843-4756) words, respectively, and underwent thematic analysis. We then used semiquantitative analysis to determine the relative dominance of themes and subthemes within our thematic results. Placement experiences were overwhelmingly positive - only four themes emerged for negative experiences, but five themes and nine subthemes emerged for positive experiences. Many curricular aspects of the placement experience were viewed as similarly positive for Returners and Non-Returners, as were social aspects with fellow students. Hence, positive experiences per se appear not to differentiate Returner and Non-Returner groups and so seem unlikely to be related to decisions about practice location. However, Returners reported a substantially higher proportion of positive placement experiences related to feeling part of the clinical team compared with Non-Returners (11% vs 4%, respectively) - a result consistent with Returners also reporting more positive experiences related to learning and knowledge gained and personal development.
abstract_id: PUBMED:23607311
Additional years of Australian Rural Clinical School undergraduate training is associated with rural practice. Background: To understand the influence of the number of years spent at an Australian rural clinical school (RCS) on graduate current, preferred current and intended location for rural workforce practice.
Methods: Retrospective online survey of medical graduates who spent 1-3 years of their undergraduate training in the University of New South Wales (UNSW) Rural Clinical School. Associations with factors (gender, rural versus non-rural entry, conscript versus non-conscript and number of years of RCS attendance) influencing current, preferred current and intended locations were assessed using the χ² test. Factors that were considered significant at P < 0.1 were entered into a logistic regression model for further analysis.
Results: 214 graduates responded to the online survey. Graduates with three years of previous RCS training were more likely to indicate rural areas as their preferred current work location than their colleagues who spent one year at an RCS campus (OR = 3.0, 95% CI = 1.2-7.4, P = 0.015). Also, RCS graduates who spent three years at an RCS were more likely to intend to take up rural medical practice after completing training compared to graduates with one year of rural placement (OR = 5.1, 95% CI = 1.8-14.2, P = 0.002). Non-rural medicine entry graduates who spent three years at rural campuses were more likely to take up rural practice compared to those who spent just one year at a rural campus (OR = 8.4, 95% CI = 2.1-33.5, P = 0.002).
Conclusions: Increasing the length of time beyond a year at an Australian RCS campus for undergraduate medical students is associated with current work location, preferred current work location and intended work location in a rural area. Spending three years in a RCS significantly increases the likelihood of rural career intentions of non-rural students.
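The associations in this abstract are reported as odds ratios with 95% confidence intervals. For orientation only, the sketch below shows the standard log-odds-ratio calculation from a 2x2 table with invented counts; it does not reproduce the study's logistic regression models.

```python
# Illustrative only: odds ratio with a Wald 95% CI from a 2x2 table.
# The counts are invented and do not reproduce the study's regression models.
import numpy as np

a, b = 18, 12   # three years at RCS: rural intention yes / no
c, d = 10, 40   # one year at RCS:   rural intention yes / no

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo, hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```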
abstract_id: PUBMED:32899356
Increasing Rural Recruitment and Retention through Rural Exposure during Undergraduate Training: An Integrative Review. Objectives: Ensuring nationwide access to medical care challenges health systems worldwide. Rural exposure during undergraduate medical training is promising as a means for overcoming the shortage of physicians outside urban areas, but its effectiveness is largely unknown. This integrative review assesses the effects of rural placements during undergraduate medical training on graduates' likelihood of taking up rural practice. Methods: The paper presents the results of a longitudinal review of the literature published in PubMed, Embase, Google Scholar and elsewhere on the measurable effects of rural placements and internships during medical training on the number of graduates in rural practice. Results: The combined database and hand search identified 38 suitable primary studies with rather heterogeneous interventions, endpoints and results, mostly cross-sectional and control studies. The analysis of the existing evidence exhibited predominantly positive but rather weak correlations between rural placements during undergraduate medical training and later rural practice. Beyond the initial scope, the review identified rural upbringing as the strongest predictor of rural practice. Conclusions: This review confirms that rural exposure during undergraduate medical training contributes to recruitment and retention in nonurban settings. It can play a role within a broader strategy for overcoming the shortage of rural practitioners. Rural placements during medical education turned out to be particularly effective for rural-entry students. Given the increasing funding being directed towards medical schools to produce graduates that will work rurally, more robust high-quality research is needed.
abstract_id: PUBMED:38324168
Evaluation of the accuracy of fully guided implant placement by undergraduate students and postgraduate dentists: a comparative prospective clinical study. Purpose: This study aimed to assess the accuracy of implant placement through three-dimensional planning and fully guided insertion, comparing outcomes between undergraduate and postgraduate surgeons.
Methods: Thirty-eight patients requiring 42 implants in posterior single-tooth gaps were enrolled from the University Clinic for Prosthodontics at the Martin Luther University Halle Wittenberg and the Department of Prosthodontics, Geriatric Dentistry, and Craniomandibular Disorders of Charité University Medicine, Berlin. Twenty-two implants were placed by undergraduate students (n = 18), while 20 implants were placed by trainee postgraduate dentists (n = 5). Pre-operative intraoral scans and cone beam computed tomography images were performed for implant planning and surgical template fabrication. Postoperative intraoral scans were superimposed onto the original scans to analyze implant accuracy in terms of apical, coronal, and angular deviations, as well as vertical discrepancies.
Results: In the student group, two implant insertions were performed by the assistant dentist because of intraoperative complications and, thus, were excluded from further analysis. For the remaining implants, no statistically significant differences were observed between the dentist and student groups in terms of apical (p = 0.245), coronal (p = 0.745), or angular (p = 0.185) implant deviations, as well as vertical discrepancies (p = 0.433).
Conclusions: This study confirms the viability of fully guided implant placement by undergraduate students, with comparable accuracy to postgraduate dentists. Integration into dental education can prepare students for implant procedures, expanding access and potentially reducing costs in clinical practice. Collaboration is essential for safe implementation, and future research should explore long-term outcomes and patient perspectives, contributing to the advancement of dental education and practice.
Trial Registration: DRKS, DRKS00023024, registered 8 September 2020 (retrospectively registered), https://drks.de/search/de/trial/DRKS00023024.
abstract_id: PUBMED:36717767
Postgraduate perspectives on mentoring undergraduate researchers for talent development. Undergraduate research experiences are critical for the talent development of the STEM research workforce, and research mentors play an influential role in this process. Given the many life science majors seeking research experiences at universities, graduate and postdoctoral researchers (i.e., postgraduates) provide much of the daily mentoring of undergraduate researchers. Yet, there remains little research on how postgraduates contribute to talent development among undergraduate researchers. To begin to address this knowledge gap, we conducted an exploratory study of the experiences of 32 postgraduates who mentored life science undergraduate researchers. We identified four factors that they perceived as enabling undergraduate researcher talent development: undergraduate researcher characteristics, research project characteristics, and mentoring implementation as well as outcomes for both the postgraduate and undergraduate. We then describe a team-based approach to postgraduate mentoring of undergraduate researchers that attends to these factors to provide an example that practitioners can adapt or adopt for their own research groups.
abstract_id: PUBMED:37565569
A bibliometric comparison of undergraduate and postgraduate endodontic education publications: The topics, trends, and challenges. Objectives: This study employs bibliometric analysis to compare knowledge units and main topics in undergraduate and postgraduate endodontic education, aiming to identify similarities, differences, and connections. The insights gained are expected to inform the future of two-stage education to enhance continuity, highlighting evolving trends, challenges, and development directions.
Methods: Citation data were retrieved from the Web of Science Core Collections (WOSCC) database and non-WOSCC databases with two separate search formulas. VOSviewer and CiteSpace were used to analyze the distribution of research by publication years, citation-sources, co-authorship network of authors and countries, and clusters of keywords.
Results: The focus on undergraduate education preceded that on postgraduate education by nearly a decade. The United Kingdom has emerged as the most prominent contributor to endodontic literature at both levels, with the International Endodontic Journal representing the most voluminous and most cited resource in this domain. Dummer is recognized as the most prolific author in undergraduate endodontic education, while Gulabivala leads the most extensive cluster in postgraduate education. Keyword clustering analysis reveals that undergraduate education places greater emphasis on fundamental knowledge, while postgraduate education concentrates more on clinical practice. Descriptive analyses from non-WOSCC databases align with the topics and findings from the WOSCC-based bibliometric analysis.
Conclusion: This bibliometric analysis revealed the emphasis on fundamental knowledge and teaching techniques at the undergraduate level versus advanced clinical knowledge and techniques at the postgraduate level, which originated from different learning aims and contexts. Updating the curriculum to meet the latest practices and innovations is crucial for aligning learning objectives with current and future needs, and the connection between the two levels remains a central challenge in endodontic education.
Answer: Yes, the positive influence of an undergraduate rural placement does persist into postgraduate years. A study of graduates from the Otago University Faculty of Medicine in New Zealand found a statistically significant difference between graduates from Dunedin, who had a 7-week rural undergraduate placement, and those from the other two schools in reporting a positive influence towards rural practice from the undergraduate course. While the effect was less strong than immediately after the rural placement, 56% of respondents from Dunedin reported a positive influence compared with 24% from Christchurch and 15% from Wellington. This suggests that the effects of a rural undergraduate placement can persist into the postgraduate years, although it may not necessarily result in a significant workforce effect (PUBMED:22713111).
Additionally, other studies have shown that rural undergraduate initiatives, such as the Rural Clinical School program, are associated with postgraduate rural choices. For instance, participation in the Rural Clinical School of Western Australia (RCSWA) program was associated with significantly more postgraduate year one rural work than a 6-week Rural Undergraduate Support and Coordination (RUSC)-funded rural experience (PUBMED:22935122). Moreover, the University of New South Wales (UNSW) Rural Clinical School found that graduates with three years of previous RCS training were more likely to indicate rural areas as their preferred current work location and intended work location in a rural area (PUBMED:23607311).
Furthermore, a study on nursing and allied health graduates who completed an undergraduate rural placement through a university department of rural health (UDRH) found that the most significant long-term practice factor was initial rural practice, suggesting that support for rural pathways could have beneficial long-term workforce outcomes (PUBMED:32000498).
In summary, the evidence indicates that the positive influence of undergraduate rural placements does persist into postgraduate years and can influence the likelihood of graduates choosing to work in rural areas.
Instruction: Social anxiety in children: social skills deficit, or cognitive distortion?
Abstracts:
abstract_id: PUBMED:33115188
Depression, Anxiety, Related Risk Factors and Cognitive Distortion in Korean Patients with Inflammatory Bowel Disease. Objective: To evaluate the severity of depression, anxiety, associated risk factors, and cognitive distortion in Korean patients with ulcerative colitis (UC) and Crohn's disease (CD).
Methods: This study included 369 patients with inflammatory bowel disease. The severity of depression and anxiety was examined using Patient Health Questionnaire-9 and Hospital Anxiety and Depression Scale. The Anxious Thoughts and Tendencies scale was used to measure catastrophizing tendency. Multivariate regression analyses were performed.
Results: The predictors of depression were marital status, anti-tumor necrosis factor-α (TNF-α) agent use, age, and body mass index in UC patients and marital status, disease activity, alcohol use, and employment status in CD patients. For anxiety, sex and marital status were the associated factors in UC patients, whereas steroid use was the only significant predictor in CD patients. Comparing the cognitive distortion level, there were no significant differences between UC and CD patients although there was an increasing tendency according to the severity of depression or anxiety.
Conclusion: For patients with high levels of depression or anxiety and associated risk factors, including TNF-α agent or steroid use, it is recommended that, in addition to treating symptoms, a cognitive evaluation and cognitive approach also be undertaken.
abstract_id: PUBMED:31122300
Utilization of learned skills in cognitive behavioural therapy for panic disorder. Background: Research has long investigated the cognitive processes in the treatment of depression, and more recently in panic disorder (PD). Meanwhile, other studies have examined patients' cognitive therapy skills in depression to gain insight into the link between acquiring such skills and treatment outcome.
Aims: Given that no scale exists to examine in-session patient use of panic-related cognitive behavioural therapy (CBT) skills, the aim of this study was to develop a new measure for assessing patients' cognitive and behavioural skills in CBT for PD.
Method: This study included 20 PD patients who received 12 weekly individual therapy sessions. The Cognitive Behavioral Therapy Panic Skills (CBTPS) rating system was developed. Three independent raters coded tapes of therapy sessions at the beginning and end of treatment.
Results: The coefficient alphas and inter-rater reliability were high for the cognitive and behavioural subscales. Improvement in patients' CBTPS scores on both subscales was associated with overall symptom improvement, over and above improvement in anxiety sensitivity.
Conclusion: To our knowledge, this is the first study examining the impact of patient acquisition of CBT PD skills on treatment outcome. A new measure was developed based on the observations and was deemed reliable and valid. The measure facilitates the examination of the mechanisms of change in treatment for PD. An in-depth examination of the CBTPS may refine our understanding of the impact of each skill on PD treatment outcome. Further research relating to acquiring CBT skills could shed light on the mechanisms of change in treatment.
abstract_id: PUBMED:28515542
The Role of Cognitive Factors in Childhood Social Anxiety: Social Threat Thoughts and Social Skills Perception. Models of cognitive processing in anxiety disorders state that socially anxious children display several distorted cognitive processes that maintain their anxiety. The present study investigated the role of social threat thoughts and social skills perception in relation to childhood trait and state social anxiety. In total, 141 children varying in their levels of social anxiety performed a short speech task in front of a camera and filled out self-reports about their trait social anxiety, state anxiety, social skills perception and social threat thoughts. Results showed that social threat thoughts mediated the relationship between trait social anxiety and state anxiety after the speech task, even when controlling for baseline state anxiety. Furthermore, we found that children with higher trait anxiety and more social threat thoughts had a lower perception of their social skills, but did not display a social skills deficit. These results provide evidence for the applicability of the cognitive social anxiety model to children.
abstract_id: PUBMED:38454910
Optimising listening skills: Analysing the effectiveness of a blended model with a top-down approach through cognitive load theory. The ability to listen is critical in language learning. Although listening has received the least pedagogical attention, the growing emphasis on communication and language proficiency makes listening skills prominent in the language classroom. This paper analyses the effectiveness of a blended model for teaching listening skills by applying a top-down approach grounded in Cognitive Load Theory. The top-down approach equips students with background knowledge about the audio, such as its context, situation, and key phrases. The blended model enables the teacher to use a technological platform to help students process their listening input. A questionnaire was used for data collection and semi-structured interviews were conducted with 60 prefinal-year Engineering students, selected through purposive sampling and divided into experimental (N = 30) and control (N = 30) groups. The experimental group was trained with the top-down approach supported by a learning management system (LMS); the control group received the same listening material taught by the conventional method. The purpose of this study is to show the statistically significant impact of employing technology inside the language classroom to teach listening skills. Findings showed that participants in the experimental group could identify relevant and non-relevant information in the audio, conceptualise the audio content, and predict information beforehand. The difficulties faced by students and teachers, and remedial measures to overcome them, are also discussed. The study's objectives, addressed through mixed methods of enhancing listening skills through Cognitive Load Theory (CLT), were: to explore the effect of a technology-supported (LMS) top-down intervention on enhancing students' listening skills; to examine how blending synchronous and asynchronous delivery with a top-down approach develops students' predicting skills during listening comprehension exercises; and to adapt procedures for enhancing self-paced learning efficacy and reducing listening anxiety in ESL learners.
abstract_id: PUBMED:35726497
The relationship between posttherapeutic Cognitive Behavior Therapy skills usage and follow-up outcomes of internet-delivered Cognitive Behavior Therapy. Background: Clients independently applying Cognitive Behavior Therapy (CBT) skills is an important outcome of CBT-based treatments. The relationship between posttherapeutic CBT skills usage and clinical outcomes remains under-researched-especially after internet-delivered CBT (iCBT).
Objective: Explore contemporaneous and lagged effects of posttherapeutic CBT skills usage frequency on iCBT follow-up outcomes.
Method: Nested within a randomized controlled trial, 241 participants received 8-week supported iCBT for anxiety and/or depression, completing measures of anxiety, depression, functional impairment, and CBT skills usage frequency at 3-, 6-, 9-, and 12-month follow-up. Cross-lagged panel models evaluated primary aims.
Results: While analyses support a contemporaneous relationship between anxiety, depression, functional impairment, and CBT skills usage frequency, no consistent lagged effects were observed.
Conclusion: Findings align with qualitative research but the role of CBT skills usage in the maintenance of iCBT effects remains unclear. Innovative research modeling temporal and possibly circular relationships between CBT skill usage and clinical outcomes is needed to inform iCBT optimization.
abstract_id: PUBMED:27639442
Cognitive behavioral therapy in 22q11.2 microdeletion with psychotic symptoms: What do we learn from schizophrenia? The 22q11.2 deletion syndrome (22q11.2DS) is one of the most common microdeletion syndromes, with a widely underestimated prevalence between 1 per 2000 and 1 per 6000. From childhood, patients with 22q11.2DS are described as having difficulty initiating and maintaining peer relationships. This lack of social skills has been linked to attention deficit/hyperactivity disorder, anxiety and depression. A high incidence of psychosis and positive symptoms is observed in patients with 22q11.2DS and remains correlated with poor social functioning, anxiety and depressive symptoms. Because 22q11.2DS and schizophrenia share several major clinical features, 22q11.2DS is sometimes considered a genetic model for schizophrenia. Surprisingly, almost no study suggests the use of cognitive and behavioral therapy (CBT) in this indication. We reviewed what can be learned from schizophrenia to develop specific interventions for 22q11.2DS. In our opinion, the first step of a CBT approach in 22q11.2DS with psychotic symptoms is to identify precisely which of the already available tools can be used. Cognitive behavioral therapy (CBT) targets integrated disorders, i.e. reasoning biases and behavior disorders. In 22q11.2DS, CBT-targeted behavior disorders may take the form of social avoidance and withdrawal or, on the contrary, more unusual disinhibition and aggressiveness. In our experience, other negative symptoms observed in 22q11.2DS, such as motivation deficit or anhedonia, may also be reduced by CBT. Controlled trials have studied the benefits of CBT in schizophrenia and several meta-analyses have demonstrated its effectiveness. Therefore, it is legitimate to propose this tool in 22q11.2DS, considering the similarity of symptoms. Overall, CBT is the most effective psychosocial intervention for psychotic symptoms and remains a relevant complement to pharmacological treatments such as antipsychotics.
abstract_id: PUBMED:31158113
Cognitive deficit in schizophrenia: an overview. Depressive mood, anxiety, delusions, hallucinations and behavioral disturbances have traditionally been recognized as the leading symptoms of mental disorders, whereas cognitive symptoms have been under-recognized or neglected. Today there is robust evidence that cognitive dysfunction is present in the majority of mental disorders and is also related to impairments in the functioning of persons with mental illness. It is proposed that aberrant brain neuronal network connectivity, arising from the interplay of genetic, epigenetic, developmental and environmental factors, is responsible for cognitive decline. In schizophrenia, dysfunctions in working memory, attention, processing speed, and visual and verbal learning, with substantial deficits in reasoning, planning, abstract thinking and problem solving, have been extensively documented. Social cognition, the ability to correctly process information and use it to generate appropriate responses in social situations, is also impaired. The correlation of cognitive impairment with functional outcomes, including employment, independent living and social functioning, has emphasized the need for the development of treatments specific to cognition. It is considered that brain neuroplasticity allows for remodulating and compensating the impairment process, which could provide an opportunity to improve cognitive functions. Therefore, there is a need for comprehensive clinical assessment and follow-up of cognitive decline in mental illness. Implementation of specific treatment strategies addressing cognitive decline in mental illness, such as new drugs, dedicated cognitive-behavioural therapy, psychoeducation, social skills training and remediation strategies, should be strongly endorsed, targeting recovery and reduction of disability due to mental illness.
abstract_id: PUBMED:27764528
Sluggish Cognitive Tempo is Associated With Poorer Study Skills, More Executive Functioning Deficits, and Greater Impairment in College Students. Objectives: Few studies have examined sluggish cognitive tempo (SCT) in college students even though extant research suggests a higher prevalence rate of SCT symptoms in this population compared to general adult or youth samples. The current study examined SCT symptoms in relation to two domains related to college student's academic success, study skills and daily life executive functioning (EF), as well as specific domains of functional impairment.
Method: 158 undergraduate students (Mage = 19.05 years; 64% female) completed measures of psychopathology symptoms, study skills, daily life EF, and functional impairment.
Results: After controlling for demographics and symptoms of attention-deficit/hyperactivity disorder (ADHD), anxiety, and depression, SCT remained significantly associated with poorer study skills, greater daily life EF deficits, and global impairment and with greater functional impairment in the specific domains of educational activities, work, money/finances, managing chores and household tasks, community activities, and social situations with strangers and friends. In many instances, ADHD inattentive symptoms were no longer significantly associated with study skills or impairment after SCT symptoms were added to the model.
Conclusion: SCT is associated with poorer college student functioning. Findings highlight the need for increased specificity in studies examining the relation between SCT and adjustment.
abstract_id: PUBMED:31218759
The assessment of cognitive-behavioral therapy skills in patients diagnosed with health anxiety: Development and pilot study on an observer-based rating scale. Cognitive-behavioral therapy is a highly effective treatment for health anxiety, but it remains unclear through which mechanisms its treatment effects arise. Some evidence suggests that patients acquire skills, understood as techniques that help them reach therapy goals, through psychotherapy. In the current study, an observer-based rating scale for the skills assessment of patients with health anxiety (SAPH) was developed and validated in a pilot study. Based on 177 videotapes, four independent raters used the SAPH to evaluate the frequency of skills acquired during cognitive and exposure therapy among 66 patients diagnosed with health anxiety. Predictive validity was evaluated using the Yale-Brown Obsessive-Compulsive Scale for Hypochondriasis. The SAPH demonstrated good interrater reliability (ICC(1,2) = .88, p < .001, 95% CI [.81, .92]) and internal consistency (α = .94). Although patient skills did not significantly increase over three sessions, they significantly predicted a reduction in health anxiety symptoms at the end of treatment (R2 = .35). Patients' skills are highly important within the treatment of health anxiety. By providing external ratings of patients' skills with good psychometric properties, our pilot data suggest that the SAPH may complement current tools for the assessment of skills, specifically in targeting health anxiety.
abstract_id: PUBMED:35116205
Exploring the relationship between ADHD, its common comorbidities, and their relationship to organizational skills. Background: Attention Deficit Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder affecting numerous executive functioning skills, such as organizational skills. While the relationship between the inattention aspect of ADHD and poor organizational skills is well documented, it is still unclear if lower organizational skills are only associated with ADHD or if they are also associated with other comorbid disorders commonly diagnosed in conjunction with ADHD. The purpose of the present study is to investigate the relationship between organizational skills and ADHD in adults, as well as the impact of comorbid disorders on ADHD in relation to organizational skills.
Methods: Four hundred seven (n = 201 with a diagnosis of ADHD) adults from the general population were recruited online. Participants completed a measure of organizational skills, measures assessing levels of ADHD, depression, and anxiety, and extensive demographic information related to their diagnosis of ADHD or other possible diagnosis.
Results: Participants with a diagnosis of ADHD were significantly more likely to have a comorbid diagnosis of depression and/or anxiety. Organizational skills were significantly lower in individuals who reported having received a diagnosis of ADHD, but not in those with a past diagnosis of depression and anxiety. However, organizational skills were lower in individuals currently experiencing higher levels of depression and anxiety. The results of regression analyses suggest that levels of organizational skills are most strongly predicted by inattention and more weakly predicted by comorbid disorders.
Discussion: These results highlight the centrality of the organizational skills deficit in ADHD, as well as the stronger influence of the inattention component, relative to other components commonly observed in ADHD, on organizational skills. Suggestions for the treatment of ADHD in adults are discussed.
Answer: The study on the role of cognitive factors in childhood social anxiety suggests that socially anxious children do not necessarily display a social skills deficit, but rather they experience cognitive distortions, such as social threat thoughts. The research found that social threat thoughts mediated the relationship between trait social anxiety and state anxiety after a speech task, even when controlling for baseline state anxiety. Furthermore, it was observed that children with higher trait anxiety and more social threat thoughts had a lower perception of their social skills, but did not actually display a social skills deficit (PUBMED:28515542). This indicates that the issue may be more related to cognitive distortion, where the children's perception of their social abilities is negatively skewed by their anxious thoughts, rather than an actual lack of social skills.
Instruction: Does liquid-based technology really improve detection of cervical neoplasia?
Abstracts:
abstract_id: PUBMED:35747800
Significance of Triple Detection of p16/ki-67 Dual-Staining, Liquid-Based Cytology and HR HPV Testing in Screening of Cervical Cancer: A Retrospective Study. In addition to liquid-based cytology (LBC) and HR HPV testing, p16/ki-67 dual-staining is another method for cervical cancer screening. The combination of any two methods can improve the accuracy of screening, but some cervical lesions are still missed or misdiagnosed. In this retrospective study, the significance of LBC, HR HPV testing and especially p16/ki-67 dual-staining in cervical lesion screening was evaluated with reference to histological diagnosis. At the same time, we explored the value of p16/ki-67 dual-staining combined with LBC and HR HPV testing (triple detection) in improving the diagnostic specificity of CIN2+ and reducing the missed diagnosis of CIN2+ lesions. We found that p16/ki-67 dual-staining was valuable in identifying cervical CIN2+ lesions and in reducing the missed diagnosis of CIN2+ in HPV-negative patients. More than 96% of CIN2+ patients were positive on two or three of the triple detection tests. Positivity on all three tests effectively predicted high-grade cervical lesions. In conclusion, triple detection can identify almost all cervical CIN2+ lesions. Our data support and highlight the feasibility and significance of triple detection in cervical lesion screening.
abstract_id: PUBMED:37745808
Comparative Analysis of Conventional Cytology and Liquid-Based Cytology in the Detection of Carcinoma Cervix and its Precursor Lesions. Context: Conventional smears (CS) and liquid-based cytology (LBC) are important tools for detecting carcinoma of the cervix and its precursor lesions.
Aims: The present study was conducted to compare the cytomorphological features of cervical lesions using both techniques and to correlate them with the histopathological diagnosis.
Settings And Design: This was a prospective observational study over a period of 1.5 years at a tertiary care hospital.
Methods And Material: A total of 969 women aged 21-65 years, presenting either for routine screening or with complaints of vaginal bleeding, discharge, or pelvic pain, were enrolled in the study. Both the CS and LBC smears were analyzed and compared with the corresponding histopathology diagnosis. The data were analyzed using Statistical Package for the Social Sciences (SPSS) software, and P values <0.05 were considered significant.
Results: Unsatisfactory smears accounted for 8.57% of CS compared with 0.5% of LBC smears. Liquid-based cytology was superior to conventional preparations in terms of smear adequacy, a less hemorrhagic and inflammatory background, and the presence of more endocervical cells. Liquid-based cytology showed a better yield in detecting all types of epithelial cell lesions, with a concordance rate of 73.9% between the two techniques. On histopathological correlation of these lesions, LBC had a higher sensitivity (96.67%) and diagnostic accuracy (99.08%) than CS (73.33% and 92.66%, respectively).
Conclusions: Liquid-based cytology is superior to conventional cytology for the detection of epithelial cell lesions. Reduction in the unsatisfactory smears, a cleaner background, and better representation of the sample are more significantly appreciated on LBC in contrast to CS.
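The sensitivity and diagnostic accuracy figures quoted in abstracts such as the one above come from a standard 2x2 comparison of the cytology result against the histopathological diagnosis. As an illustration only, the sketch below uses hypothetical counts (not data from any of the cited studies) to show how these metrics are derived:

```python
# Hypothetical 2x2 counts comparing a cytology result against histopathology.
# These numbers are illustrative only; they are not taken from any cited study.
tp = 29    # cytology positive, histopathology positive (true positives)
fn = 1     # cytology negative, histopathology positive (false negatives)
fp = 8     # cytology positive, histopathology negative (false positives)
tn = 931   # cytology negative, histopathology negative (true negatives)

sensitivity = tp / (tp + fn)                 # proportion of true lesions detected
specificity = tn / (tn + fp)                 # proportion of lesion-free cases called negative
ppv = tp / (tp + fp)                         # positive predictive value
npv = tn / (tn + fn)                         # negative predictive value
accuracy = (tp + tn) / (tp + fn + fp + tn)   # overall diagnostic accuracy

print(f"Sensitivity: {sensitivity:.2%}, Specificity: {specificity:.2%}")
print(f"PPV: {ppv:.2%}, NPV: {npv:.2%}, Diagnostic accuracy: {accuracy:.2%}")
```

With made-up counts the printed values have no clinical meaning; the point is only the arithmetic linking the 2x2 table to the metrics reported in these studies.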
abstract_id: PUBMED:21617785
The impact of liquid-based cytology in decreasing the incidence of cervical cancer. Major advances in screening have lowered the death rate from cervical cancer in the United States. One of the first major advances in cervical cancer screening was the Papanicolaou (Pap) test. The second major advance was liquid-based cytology (LBC). This review presents a wide range of data, discusses the strengths and weaknesses of the available information regarding Pap technologies, and reviews the meta-analyses, which have examined the differences in clinical performance. The review concludes with information on new and future developments to further decrease cervical cancer deaths.
abstract_id: PUBMED:34837897
An Evaluation of Phosphate Buffer Saline as an Alternative Liquid-Based Medium for HPV DNA Detection. Objective: HPV detection has been proposed as part of co-testing, which improves the sensitivity of cervical screening. However, the commercial liquid-based medium adds cost in low-resource areas. This study aimed to evaluate the performance of ice-cold phosphate buffer saline (PBS) for HPV detection.
Methods: HPV DNA from SiHa cells (with 1-2 copies of HPV16 per cell) preserved in ice-cold PBS or PreserveCyt solution at different time points (24, 36, 48, 72, 120 and 168 h) was tested in triplicate using Cobas 4800. The threshold cycle (Ct) values for both solutions were compared. An estimated false negative rate for PBS was also assessed by using the difference in Ct values between both solutions (∆Ct) and the Ct values of HPV16-positive PreserveCyt clinical samples (Ctsample) at corresponding time points. Samples with a (Ctsample+∆Ct) value > 40.5 (the cutoff for HPV16 DNA by Cobas 4800) were considered false negatives.
Results: The Ct values of HPV16 DNA from SiHa cells collected in PBS were higher than those in PreserveCyt, ranging from 0.43 to 2.36 cycles depending on incubation time. There was no significant difference at 24, 72, 120, and 168 h. However, the Ct values were statistically significantly higher for PBS than for PreserveCyt at 36 h (31.00 vs 29.26) and 48 h (31.06 vs 28.70). A retrospective analysis of 47 clinical samples collected in PreserveCyt that were positive for HPV16 DNA found that 1 case (2%) would have become negative if collected in ice-cold PBS.
Conclusions: PBS might be an alternative collection medium for HPV detection in low-resource areas. Further evaluations are warranted.
abstract_id: PUBMED:31735963
A comparison of liquid-based and conventional cytology using data for cervical cancer screening from the Japan Cancer Society. Objective: Liquid-based cytology has replaced conventional cytology in cervical cancer screening in many countries. However, a detailed comparison of liquid-based cytology with conventional cytology has not been reported in Japan. Therefore, the aim of this study was to evaluate the efficacy of liquid-based cytology in Japan.
Methods: We first evaluated the prevalence of use of liquid-based cytology and then examined the efficacy of liquid-based cytology and conventional cytology for detecting CIN, as well as the rate of unsatisfactory specimens, using data from cancer screening collected by the Japanese Cancer Society from FY2011 to FY2014. A Poisson regression model with random effects was used to compare histological outcomes and unsatisfactory specimens between liquid-based cytology and conventional cytology.
Results: A total of 3 815 131 women were analyzed in the study. The rate of liquid-based cytology increased from approximately 8% in FY2011 to 37% in FY2014. Compared to conventional cytology, the detection rates with liquid-based cytology were significantly higher (1.42 times) for CIN1+ [detection rate ratio (DRR) = 1.42, 95% confidence interval (CI) 1.35-1.48, P < 0.001] and CIN2+ (DRR = 1.16, 95% CI 1.08-1.25, P < 0.001). Positive predictive value ratios of CIN1+ and CIN2+ were also significantly higher for liquid-based cytology than for conventional cytology. However, there was no significant difference between liquid-based cytology and conventional cytology for detection rates and positive predictive values of CIN3+ and cancer. The rate of unsatisfactory specimens was significantly lower with liquid-based cytology compared to conventional cytology (DRR = 0.07, 95% CI 0.05-0.09, P < 0.001).
Conclusions: The results of this study indicate that, from a practical standpoint, liquid-based cytology is more useful than conventional cytology, particularly for avoiding unsatisfactory specimens in cervical cancer screening.
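The detection rate ratios (DRRs) reported above come from Poisson regression on counts of detected lesions, with the number of women screened as the exposure. The sketch below is a simplified illustration with made-up numbers and a plain Poisson GLM (it omits the random effects used in the study); it only shows how a DRR and its confidence interval are obtained:

```python
# Hypothetical screening data: CIN1+ detections and women screened per method.
# Values are illustrative only, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "lbc":      [1, 0],            # 1 = liquid-based cytology, 0 = conventional
    "cases":    [850, 1500],       # CIN1+ detections
    "screened": [120000, 300000],  # women screened (exposure)
})

X = sm.add_constant(df["lbc"])
model = sm.GLM(df["cases"], X,
               family=sm.families.Poisson(),
               offset=np.log(df["screened"]))
result = model.fit()

# Exponentiating the coefficient for "lbc" gives the detection rate ratio (DRR).
drr = np.exp(result.params["lbc"])
ci_low, ci_high = np.exp(result.conf_int().loc["lbc"])
print(f"DRR (LBC vs conventional): {drr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```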
abstract_id: PUBMED:32394634
Detection of transformation zone cells in liquid-based cytology and its comparison with conventional smears. Background: To compare the differences between liquid-based cytology (LBC) and conventional cytology in respect of the detection of transformation zone cells (TZC) by age group and to assess test performance by correlating results with cytological abnormalities.
Methods: A retrospective study assessing the results of cervical-vaginal cytology smears collected at a private laboratory in São Paulo (Brazil) between January 2010 and December 2015.
Results: A total of 1 030 482 cytology tests were performed; of these, 3811 (0.36%) unsatisfactory samples were excluded. Cytology sampling in the patients studied was performed using the conventional technique in 394 879 (38.5%) cases and the liquid-based techniques in 631 792 (61.5%) cases. The proportion of samples with TZC for interpretation was 73.2% (288 956 samples) in conventional cytology and 52.7% (333 115 samples) in LBC (P < .001). The presence of TZC rate declined in both groups with age, but was consistently lower for LBC (P < .001). The presence of endocervical and metaplastic cells was associated with higher high-grade intraepithelial lesion detection rates.
Conclusion: Low representation of the transformation zone was found in the samples collected using the LBC technique, particularly in the over 50 age group. Conventional cytology was associated with a higher rate of detection of high-grade lesions.
abstract_id: PUBMED:29802695
Cervical Cancer Detection between Conventional and Liquid Based Cervical Cytology: a 6-Year Experience in Northern Bangkok, Thailand. Objectives: To determine the prevalence of abnormal Papanicolaou (Pap) smear, cervical intraepithelial neoplasia (CIN) 2 or higher and cancer between conventional Pap smear (CPP) and liquid based Pap smear (LBP).
Methods: This retrospective study was conducted at Bhumibol Adulyadej Hospital, Bangkok, Thailand between January 2011 and December 2016. Data were collected from the medical records of participants who attended for cervical cancer screening. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and accuracy for detecting CIN 2 or higher were evaluated using the most severe histopathology reports.
Results: A total of 28,564 cases were recruited. The prevalence of abnormal Pap smears with CPP and LBP was 4.8% (1,092/22,552) and 5.7% (345/6,012), respectively. The percentage of unsatisfactory smears was higher with CPP (52.3%) than with LBP (40.5%). With CPP and LBP, cervical cancer percentages were 0.2 and 0.1, respectively. Sensitivity, specificity, PPV, NPV and accuracy of CPP and LBP for detecting cancer were 42.5 vs 26.1%, 99.9 vs 100.0%, 69.8 vs 75.0%, 99.7 vs 100.0% and 99.7 vs 99.7%, respectively.
Conclusion: The prevalence of abnormal cervical cytology and cancer with CPP and LBP was 4.8/0.2 and 5.7/0.1 percent, respectively. Unsatisfactory smears were less frequent with LBP than with CPP. Sensitivity, specificity, PPV, NPV and accuracy of CPP and LBP for detecting CIN 2 or higher and cancer were comparable.
abstract_id: PUBMED:33210635
Evaluation of Visual Inspection of Cervix with Acetic Acid and Liquid Based in Cervical Cancer Screening with Cervical Biopsy. Background: Cervical cancer is the second most common cancer among women in developing countries. Cervical cancer generally develops slowly over a period of 10-15 years. Incidence and mortality related to cervical cancer have both been declining in developed countries because of effective screening programs using the Papanicolaou smear. Cervical cancer can therefore be prevented through the implementation of different screening methods, such as visual inspection of the cervix with acetic acid (VIA), liquid-based cytology, and human papillomavirus DNA testing. The purpose of this study is to compare the efficacy of visual inspection with acetic acid with that of liquid-based cytology in cervical cancer screening, taking cervical biopsy as the gold standard.
Methods: The study was conducted at Paropakar Maternity and Women's Hospital, Kathmandu. One hundred forty four patients underwent visual inspection with acetic acid and liquid based cytology test followed by biopsy for confirmation of the lesion, when required. Data were obtained and statistically analyzed.
Results: Out of 144 screened patients, 62 (43.05%) were positive on the visual inspection with acetic acid test. Eighteen (12.5%) cases were positive on liquid-based cytology. Thirteen women were positive on both tests. Thirty-nine cases underwent histopathological examination, including the 13 cases positive on both tests. The sensitivity, specificity, positive predictive value and negative predictive value for visual inspection with acetic acid were 81.25%, 65.22%, 61.90% and 83.33%, whereas for liquid-based cytology they were 100%, 91.30%, 88.89% and 94.87%, respectively.
Conclusions: Liquid-based cytology was more efficacious for diagnosing atypical cells, with higher sensitivity and specificity than the visual inspection with acetic acid test.
abstract_id: PUBMED:29995342
Comparison of Siriraj liquid-based solution and standard transport media for the detection of high-risk human papillomavirus in cervical specimens. Purpose: To evaluate the performance of Siriraj liquid-based solution for human papillomavirus (HPV) DNA testing compared with standard transport media.
Methods: This cross-sectional study enrolled 217 women aged 30 years or older who attended for cervical cancer screening or had abnormal cervical cytology, or were diagnosed with cervical cancer at the Department of Obstetrics-Gynecology, Siriraj Hospital from March 2015 to January 2016. We excluded patients with a history of any cervical procedures, hysterectomy, or previous treatment with pelvic irradiation or chemotherapy. Two cervical specimens were collected from each participant. The standard Cervi-Collect Specimen Collection Kit was used to preserve the first sample, and Siriraj liquid-based solution was used for the second one. All samples were sent for HPV DNA testing using the same standard high-risk HPV assay. HPV test results were recorded and statistically analyzed.
Results: The results showed agreement between the standard transport media and Siriraj liquid-based solution for HPV DNA testing, with a kappa value of 0.935 (P < 0.001). We found no discordance in the detection of HPV 16, which accounts for approximately 50% of cervical cancers. The relative sensitivity of Siriraj liquid-based solution versus standard transport media in patients with high-grade cervical intraepithelial neoplasia or worse (CIN2+) was 98% (50/51). The relative specificity of Siriraj liquid-based solution versus standard transport media in patients without CIN2+ was 98.1% (102/104).
Conclusion: Siriraj liquid-based solution showed almost perfect agreement with the standard transport media for HPV DNA testing. This solution, costing 2 to 3 times less than the commercially available standard media, may be an alternative option for HPV DNA testing.
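Agreement between the two collection media is summarized above with a kappa statistic. The following minimal sketch, using hypothetical paired HPV results rather than the study's data, shows how such an agreement value is computed:

```python
# Paired HPV DNA results (1 = positive, 0 = negative) for the same women:
# one sample in the standard transport medium and one in the alternative solution.
# The vectors below are illustrative only.
from sklearn.metrics import cohen_kappa_score

standard_medium = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0]
alternative     = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0]

kappa = cohen_kappa_score(standard_medium, alternative)
print(f"Cohen's kappa: {kappa:.3f}")  # values near 1 indicate almost perfect agreement
```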
abstract_id: PUBMED:32516778
Comparison between Conventional Cytology and Liquid-Based Cytology in the Tertiary Brazilian Navy Hospital in Rio de Janeiro. Introduction: Cervical cancer screening is an important tool in public health. Liquid-based cytology (LBC) has been performed at the studied hospital for 7 years. The present study compares the performance of 2 LBC techniques with conventional cytology.
Objective: Our objective is to verify the sensitivity for the detection of neoplastic and preneoplastic epithelial atypia, as well as the positive predictive value of the 3 methodologies.
Methods: We retrospectively analyzed 24,529 cases and evaluated the performance of conventional cytology, ThinPrep®, and BD SurePath®, categorizing the results according to the Bethesda system. We also compared the rate of unsatisfactory samples, the presence of elements from the squamocolumnar junction, and the detection of pathogenic microorganisms.
Results: ThinPrep® (1.43%) showed superior sensitivity over BD SurePath® (0.91%) and conventional cytology (0.71%) in terms of the detection of high-grade lesions; however, in terms of squamous atypia as a whole (ASC-US+), BD SurePath® (6.44%) proved to be more sensitive than conventional cytology (5.28%) and ThinPrep® (3.73%).
Conclusions: The results show the advantage of implementing LBC in routine screening for cervical lesions. In this study, BD SurePath® achieved the overall best performance considering the studied variables.
Answer: Yes, liquid-based cytology (LBC) does appear to improve the detection of cervical neoplasia compared to conventional cytology. Several studies have demonstrated the advantages of LBC over conventional Pap smears (conventional cytology) in terms of detecting cervical lesions and reducing the number of unsatisfactory smears.
For instance, a study comparing conventional cytology and LBC found that LBC had a higher sensitivity (96.67%) and diagnostic accuracy (99.08%) for detecting epithelial cell lesions compared to conventional cytology (73.33% sensitivity and 92.66% diagnostic accuracy) (PUBMED:37745808). The same study showed that LBC was superior to conventional cytology in terms of smear adequacy, a cleaner background, and better representation of the sample (PUBMED:37745808).
Moreover, a retrospective study using data from the Japan Cancer Society indicated that the detection rates with LBC were significantly higher for CIN1+ and CIN2+ compared to conventional cytology. The rate of unsatisfactory specimens was also significantly lower with LBC (PUBMED:31735963). Similarly, another study found that LBC was more sensitive and had a higher positive predictive value for detecting high-grade lesions compared to conventional cytology (PUBMED:32516778).
On transformation zone sampling the evidence is less favourable to LBC: a large series found lower representation of transformation zone cells in LBC samples, particularly in women over 50, and in that study conventional cytology was associated with a higher rate of detection of high-grade lesions (PUBMED:32394634). Nevertheless, LBC has been associated with a reduction in the incidence of cervical cancer due to its role in screening programs (PUBMED:21617785).
In summary, the evidence suggests that LBC is a more effective method for cervical cancer screening, offering improvements in sensitivity, diagnostic accuracy, and sample quality over conventional cytology and thus enhancing the detection of cervical neoplasia (PUBMED:37745808, PUBMED:31735963, PUBMED:21617785, PUBMED:32516778), although its lower representation of the transformation zone remains a limitation (PUBMED:32394634).
Instruction: Is methotrexate administration needed for the treatment of caesarean section scar pregnancy in addition to suction curettage?
Abstracts:
abstract_id: PUBMED:24460453
Is methotrexate administration needed for the treatment of caesarean section scar pregnancy in addition to suction curettage? Objective: This study evaluated the clinical outcomes and safety of treating caesarean scar pregnancy (CSP) by means of suction curettage followed when required by Foley tamponade, with or without methotrexate (MTX) therapy preceding the curettage.
Methods: Twenty-five patients with CSP were identified between August 2008 and April 2012. The first team of doctors treated Group A patients (n = 11) with systemic MTX followed by dilatation and suction curettage whereas the second team of doctors carried out only a suction curettage on women of Group B (n = 14). If uncontrolled vaginal bleeding occurred in either group during or after the operation, a Foley catheter, guided by real time transabdominal ultrasound, was placed in the uterine cavity against the site where the CSP had been implanted.
Results: Clinical outcomes in the two groups - including mean estimated blood loss, major complication rate, and hospital length of stay - were comparable. Surgeons used Foley catheter balloons for tamponade in six of the 11 patients in Group A and in seven of the 14 patients in Group B. Treatment was successful in ten of 11 cases in group A and 13 of 14 cases in group B. Group B's mean duration of treatment (2.36 ± 0.49 days) was significantly shorter than that of Group A (14.45 ± 4.96 days; p < 0.001).
Conclusion: Suction curettage, followed when needed by Foley catheter tamponade, is an effective treatment for CSP.
abstract_id: PUBMED:25897638
Suction curettage as first line treatment in cases with cesarean scar pregnancy: feasibility and effectiveness in early pregnancy. Objective: A cesarean scar pregnancy (CSP) is an extremely rare form of ectopic pregnancy, defined as the localization of a fertilized ovum surrounded by uterine muscle fibre and scar tissue. The objective of this study was to discuss the management options for CSP at a single center, based on our 6 years of experience.
Material And Methods: A retrospective evaluation was performed of 26 patients diagnosed with and treated for CSP at Istanbul Kanuni Sultan Suleyman Training and Research Hospital over a 6-year period. Suction curettage was performed as first-line treatment in patients with a gestation <8 weeks and myometrial thickness >2 mm.
Results: Twenty-two (84.6%) patients with CSP were initially treated surgically (curettage and hysterotomy) and four (15.4%) patients were treated medically with methotrexate injections. Vacuum aspiration was performed in 19 patients as first-line treatment; six of them needed an additional Foley balloon catheter inserted for tamponade because of persistent vaginal bleeding. Suction curettage was successful in 12 patients. The overall success rate of suction curettage with or without Foley balloon catheter tamponade was 16 of 19 (84.2%).
Conclusion: A CSP diagnosed early (7-8 weeks gestation), with a β-hCG level <17,000 mIU/ml and a myometrial thickness >2 mm, can be treated curatively with suction curettage, with or without placement of a uterine Foley balloon.
abstract_id: PUBMED:27125570
Exogenous cesarean scar pregnancies managed by suction curettage alone or in combination with other therapeutic procedures: A series of 33 cases and analysis of complication profile. Aim: The aim of this study was to review our exogenous cesarean scar pregnancy (CSP) cases that were managed through transabdominal ultrasound (TAUS)-guided suction curettage either alone or with a concomitant additional therapeutic modality. The study was carried out over a 6-year period and we compared clinical outcomes, success rates and complication profiles between the two therapeutic approaches.
Methods: A total of 33 exogenous CSP patients who were managed by suction curettage were extracted from the medical records. The patients were analyzed according to the intervention applied in the two groups as: TAUS-guided suction curettage alone (Group 1); and additional therapeutic tools, such as systemic or intracavitary administration of methotrexate and intracavitary ethanol instillation, in combination with suction curettage (Group 2). Basic demographic and clinical characteristics of women experiencing hemorrhagic complications and those who did not after the treatment were also compared.
Results: There were no cases of uterine perforation, hysterectomy or unresponsiveness to treatment in our analyzed CSP cases. Four patients, two in each group, required blood transfusion. Our success rate in the overall patient population was 87.8% (29/33). Fourteen out of 16 patients who were treated with TAUS-guided suction curettage alone, and 15 out of 17 patients who received other interventional treatment modalities preceding suction curettage revealed successful resolution of the CSP without any complication (P = 0.948). Clinical and demographic characteristics of women who experienced any hemorrhagic complication did not significantly differ from those who did not.
Conclusion: In appropriate CSP cases, TAUS-guided suction curettage appears to be a reliable treatment option with acceptable success rates and similar complication profile to other therapeutic options.
abstract_id: PUBMED:37131042
Analysis of risk factors for patients with cesarean scar pregnancy treated with methotrexate combined with suction curettage. Purpose: To analyze the predictive value of clinical and ultrasound parameters for treatment failure after administration of methotrexate (MTX) in combination with suction curettage (SC) in treatment of cesarean scar pregnancy (CSP) in the early first trimester.
Methods: In this retrospective cohort study, electronic medical records of patients diagnosed with CSP and initially treated between 2015 and 2022 with MTX combined with SC were reviewed and outcome data were collected.
Results: 127 patients met inclusion criteria. 25 (19.69%) required additional treatment. Logistic regression analysis indicated that factors independently associated with the need for additional treatment included progesterone level > 25 mIU/mL (OR: 1.97; 95% CI: 0.98-2.87, P = 0.039), abundant blood flow (OR: 5.19; 95% CI: 2.44-16.31, P = 0.011), gestational sac size > 3 cm (OR: 2.54; 95% CI: 1.12-6.87, P = 0.029), and the myometrial thickness between the bladder and gestational sac < 2.5 mm (OR: 3.48; 95% CI: 1.91-6.98, P = 0.015).
Conclusions: Our study identified several factors which increase the need for additional treatment following the initial treatment of CSP with MTX and SC. Alternative therapy should be considered if these factors are present.
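The adjusted odds ratios and confidence intervals reported above are the output of a multivariable logistic regression of treatment failure on clinical and ultrasound predictors. The sketch below is a hypothetical illustration (simulated data and illustrative variable names, not the study's dataset, and with the predictors simplified to binary flags) of how such estimates are produced:

```python
# Hypothetical dataset: one row per patient, binary outcome "additional_treatment".
# All variable names and values are illustrative, not taken from the study.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "high_progesterone": rng.integers(0, 2, n),  # progesterone above threshold (yes/no)
    "abundant_flow":     rng.integers(0, 2, n),  # abundant blood flow on Doppler
    "large_sac":         rng.integers(0, 2, n),  # gestational sac > 3 cm
    "thin_myometrium":   rng.integers(0, 2, n),  # myometrium < 2.5 mm
})
# Simulated outcome loosely driven by the predictors (for illustration only).
linpred = (-2 + 0.7 * df["high_progesterone"] + 1.6 * df["abundant_flow"]
           + 0.9 * df["large_sac"] + 1.2 * df["thin_myometrium"])
df["additional_treatment"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

X = sm.add_constant(df.drop(columns="additional_treatment"))
result = sm.Logit(df["additional_treatment"], X).fit(disp=False)

# Exponentiated coefficients are odds ratios; exponentiated CIs are their 95% CIs.
odds_ratios = np.exp(result.params)
conf_int = np.exp(result.conf_int())
conf_int.columns = ["2.5%", "97.5%"]
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```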
abstract_id: PUBMED:21392878
Methotrexate therapy followed by suction curettage followed by Foley tamponade for caesarean scar pregnancy. Objectives: Caesarean scar pregnancy (CSP) is a very rare and dangerous form of pregnancy because of the increased risk of rupture and excessive hemorrhage. There is currently no consensus on the treatment. We studied if methotrexate (MTX) therapy followed by suction curettage followed by Foley tamponade was a viable treatment for patients with CSP.
Study Design: Forty-five patients with CSP in our hospital received a single dose of 50mg/m(2) MTX by intramuscular injection. If gestational cardiac activity was seen on transvaginal ultrasound, local injection of MTX was given. After 7 days, suction curettage was performed to remove the retained products of conception and blood clot (CSP mass) under transabdominal sonography (TAS) guidance. After the suction curettage, a Foley catheter balloon was placed into the isthmic portion of cervix.
Results: Forty-two subjects were successfully treated and 3 subjects failed treatment. The mean estimated blood loss of all 45 patients was 706.89 ± 642.08 (100-3000)ml. The resolution time of the serum β-hCG was 20.62 ± 5.41 (9-33) days. The time to CSP mass disappearance was 12.57 ± 4.37 (8-25) days.
Conclusions: MTX administration followed by suction curettage followed by Foley tamponade was an effective treatment for caesarean scar pregnancy.
abstract_id: PUBMED:26629384
Suction Evacuation with Methotrexate as a Successful Treatment Modality for Caesarean Scar Pregnancies: Case series. Pregnancy resulting from the implantation of an embryo within a scar of a previous Caesarean section is extremely rare. The diagnosis and treatment of Caesarean scar pregnancies (CSPs) are challenging and the optimal course of treatment is still to be determined. We report a case series of six patients with CSPs who presented to the Royal Hospital in Muscat, Oman, between October 2012 and April 2014. All of the patients were successfully treated with systemic methotrexate and five patients underwent suction evacuation either before or after the methotrexate administration. The patients were followed up for a period of 6-9 weeks after treatment and recovered completely without any significant complications. Suction evacuation with methotrexate can therefore be considered an effective treatment option with good maternal outcomes.
abstract_id: PUBMED:29973177
Management of Caesarean scar pregnancy with or without methotrexate before curettage: human chorionic gonadotropin trends and patient outcomes. Background: To evaluate the effects of systemic methotrexate in cesarean scar pregnancy (CSP) patients treated with ultrasound-guided suction curettage.
Methods: A retrospective review of all women presenting with CSP treated with ultrasound-guided suction curettage at Tongji Hospital, Wuhan, China, between January 1, 2013 and December 31, 2015, was conducted. Patients were grouped into those not treated with methotrexate before curettage (group 1), treated with methotrexate by intramuscular injection (group 2) and treated with methotrexate by intravenous injection (group 3). The clinical characteristics and outcomes were analyzed.
Results: Among 107 patients, 47 patients were not treated with methotrexate before curettage, 46 patients had methotrexate administered by intramuscular injection and 14 patients had methotrexate injected intravenously. There were no significant differences among the groups in basic and clinical characteristics, such as age, gravidity, parity, positive fetal heartbeat and gestational age at diagnosis. Patients presented similar initial human chorionic gonadotropin (hCG) levels in all groups. After treatment with methotrexate or curettage, the percentage changes and ranges of variation of the hCG levels were also similar in all groups. There were no significant differences in intraoperative blood loss and retained products of conception among the three groups. However, group 1 had significantly shorter hospital stays than the two groups that were treated with methotrexate (p<0.001).
Conclusion: Among CSP patients grouped to share similar age, gravidity, parity, positive fetal heartbeat and gestational age at diagnosis, we found that the presence or absence of methotrexate treatment before curettage resulted in comparable outcomes and hCG levels, although patients who were not treated with methotrexate had significantly shorter hospital stays.
abstract_id: PUBMED:24320609
Hysteroscopy and suction evacuation of cesarean scar pregnancies: a case report and review. Implantation of a pregnancy into the scar of a prior cesarean is an uncommon type of ectopic pregnancy. The incidence of cesarean scar pregnancy is thought to be one in 1800-2216 pregnancies. The increase in the incidence of cesarean scar pregnancy is thought to be a consequence of the increasing rates of cesarean delivery. The natural history of cesarean scar pregnancy is unknown. However, if such a pregnancy is allowed to continue, uterine scar rupture with hemorrhage and possible hysterectomy seems likely. Two early-diagnosed cesarean scar pregnancies were treated with hysteroscopy and suction curettage removal. One required intramuscular methotrexate to resolve a persistent cesarean scar ectopic pregnancy. It would seem reasonable that simple suction evacuation would frequently leave chorionic villi embedded within the cesarean scar, as the pregnancy is not within the endometrial cavity.
abstract_id: PUBMED:26581398
Uterine artery embolization combined with curettage vs. methotrexate plus curettage for cesarean scar pregnancy. Purpose: To compare the efficacy and safety of uterine artery embolization (UAE) combined with curettage and methotrexate (MTX) plus curettage in the treatment of cesarean scar pregnancy (CSP).
Methods: From January 2005 to December 2013, we treated 38 CSP patients with UAE combined with curettage, and another 26 patients with CSP were treated with methotrexate (MTX) plus curettage. The resulting data were analyzed statistically.
Results: The median volume of blood loss was 17.5 ml in the UAE combined with curettage (UAE-C) group vs. 335 ml in the MTX plus curettage (MTX-C) group (p < 0.001). The time from the procedure till β-human chorionic gonadotropin (β-hCG) concentration returned to normal was 25.13 ± 3.74 days in the UAE-C group vs. 56.15 ± 15.99 days in the MTX-C group (p < 0.001). The duration of hospitalization was 7.44 ± 1.92 vs. 19.38 ± 8.75 days, respectively (p < 0.001). The percentage of side effects in the UAE-C group was significantly lower than in the MTX-C group.
Conclusions: UAE combined with curettage appears to be superior to MTX plus curettage for treatment of CSP with high serum β-hCG level, though prospective trials are needed.
abstract_id: PUBMED:34797925
High-intensity focused ultrasound compared with uterine artery chemoembolization with methotrexate for the management of cesarean scar pregnancy. Objective: To compare the effectiveness and safety of high-intensity focused ultrasound (HIFU) versus uterine artery chemoembolization with methotrexate for the management of cesarean scar pregnancy (CSP).
Methods: A total of 170 women with CSP were enrolled in the retrospective cohort study from April 2016 to September 2020. One hundred women received HIFU followed by suction curettage, and 70 women were treated with uterine artery chemoembolization followed by suction curettage.
Results: The median blood loss during suction curettage was 20 ml (range 2-800 ml) in the HIFU group versus 20 ml (range 5-200 ml) in the chemoembolization group (P = 0.837). The mean time for β-human chorionic gonadotropin normalization was 28.52 ± 10.82 days versus 26.69 ± 8.97 days (P = 0.246), respectively. The mean time of menstruation recovery was 34.36 ± 10.63 days versus 31.93 ± 8.53 days (P = 0.114), respectively. The incidence of adverse effects in the HIFU group was lower than that in the chemoembolization group (P = 0.028). The median hospitalization time in the HIFU group was longer than that in the chemoembolization group (P = 0.000).
Conclusion: HIFU was as effective and safe as uterine artery chemoembolization in the management of CSP, and the incidence of adverse effects was lower.
Answer: The necessity of methotrexate administration for the treatment of caesarean section scar pregnancy (CSP) in addition to suction curettage is not definitive and seems to depend on various factors, including the specific clinical scenario and the preferences of the treating physicians.
Some studies suggest that suction curettage, followed when needed by Foley catheter tamponade, is an effective treatment for CSP without the need for methotrexate therapy preceding the curettage. Clinical outcomes, including mean estimated blood loss, major complication rate, and hospital length of stay, were comparable between groups treated with and without methotrexate, but the duration of treatment was significantly shorter in the group that did not receive methotrexate (PUBMED:24460453).
Another study found that suction curettage was successful in 84.2% of cases when used as a first-line treatment for CSP, with or without the placement of a uterine Foley balloon catheter for tamponade (PUBMED:25897638). Similarly, a study comparing suction curettage alone to suction curettage with additional therapeutic tools, such as systemic or intracavitary administration of methotrexate, found no significant difference in success rates or complication profiles between the two approaches (PUBMED:27125570).
However, other studies have utilized methotrexate in combination with suction curettage and reported effective outcomes. For instance, one study reported that methotrexate therapy followed by suction curettage followed by Foley tamponade was an effective treatment for CSP (PUBMED:21392878), and another case series reported successful treatment of CSP with systemic methotrexate followed by suction evacuation (PUBMED:26629384).
A retrospective review also indicated that the presence or absence of methotrexate treatment before curettage resulted in comparable outcomes and human chorionic gonadotropin levels, although patients who were not treated with methotrexate had significantly shorter hospital stays (PUBMED:29973177).
In conclusion, methotrexate administration in addition to suction curettage is not universally required for the treatment of CSP, but it may be considered in certain cases based on clinical judgment and specific patient factors. Some studies suggest that suction curettage alone can be effective, while others report successful outcomes with the addition of methotrexate.
Instruction: Can inhibition of IKur promote atrial fibrillation?
Abstracts:
abstract_id: PUBMED:32966616
Activation of PKCα participates in the reduction of Ikur in atrial myocytes induced by tumour necrosis factor-α. The atrial-specific ultra-rapid delayed rectifier K+ current (Ikur) plays an important role in the progression of atrial fibrillation (AF). Because inflammation is known to lead to the onset of AF, we aimed to investigate whether tumour necrosis factor-α (TNF-α) played a role in regulating Ikur and the potential signalling pathways involved. Whole-cell patch-clamp and biochemical assays were used to study the regulation and expression of Ikur in myocytes and in tissues from left atrial appendages (LAAs) obtained from patients with sinus rhythm (SR) or AF, as well as in rat cardiomyocytes (H9c2 cells) and mouse atrial myocytes (HL-1 cells). Ikur current density was markedly reduced in atrial myocytes from AF patients compared with SR controls. Reduction of Kv1.5 protein levels was accompanied by increased expression of TNF-α and protein kinase C (PKC)α activation in AF patients. Treatment with TNF-α dose-dependently reduced Ikur and protein expression of Kv1.5 but not Kv3.1b in H9c2 cells and HL-1 cells. TNF-α also increased activity of PKCα. Specific PKCα inhibitor Gö6976 alleviated the reduction in Ikur induced by TNF-α, but not the reduction in Kv1.5 protein. TNF-α was involved in the electrical remodelling associated with AF, probably by depressing Ikur in atrial myocytes via activation of PKCα.
abstract_id: PUBMED:34959701
Peptide Inhibitors of Kv1.5: An Option for the Treatment of Atrial Fibrillation. The human voltage gated potassium channel Kv1.5 that conducts the IKur current is a key determinant of the atrial action potential. Its mutations have been linked to hereditary forms of atrial fibrillation (AF), and the channel is an attractive target for the management of AF. The development of IKur blockers to treat AF resulted in small molecule Kv1.5 inhibitors. The selectivity of the blocker for the target channel plays an important role in the potential therapeutic application of the drug candidate: the higher the selectivity, the lower the risk of side effects. In this respect, small molecule inhibitors of Kv1.5 are compromised due to their limited selectivity. A wide range of peptide toxins from venomous animals are targeting ion channels, including mammalian channels. These peptides usually have a much larger interacting surface with the ion channel compared to small molecule inhibitors and thus, generally confer higher selectivity to the peptide blockers. We found two peptides in the literature, which inhibited IKur: Ts6 and Osu1. Their affinity and selectivity for Kv1.5 can be improved by rational drug design in which their amino acid sequences could be modified in a targeted way guided by in silico docking experiments.
abstract_id: PUBMED:23264583
Genetic variation in KCNA5: impact on the atrial-specific potassium current IKur in patients with lone atrial fibrillation. Aims: Genetic factors may be important in the development of atrial fibrillation (AF) in the young. KCNA5 encodes the potassium channel α-subunit KV1.5, which underlies the voltage-gated atrial-specific potassium current IKur. KCNAB2 encodes KVβ2, a β-subunit of KV1.5, which increases IKur. Three studies have identified loss-of-function mutations in KCNA5 in patients with idiopathic AF. We hypothesized that early-onset lone AF is associated with high prevalence of genetic variants in KCNA5 and KCNAB2.
Methods And Results: The coding sequences of KCNA5 and KCNAB2 were sequenced in 307 patients with mean age of 33 years at the onset of lone AF, and in 216 healthy controls. We identified six novel non-synonymous mutations [E48G, Y155C, A305T (twice), D322H, D469E, and P488S] in KCNA5 in seven patients. None were present in controls. We identified a significantly higher frequency of rare deleterious variants in KCNA5 in the patients than in controls. The mutations were analysed with confocal microscopy and whole-cell patch-clamp techniques. The mutant proteins Y155C, D469E, and P488S displayed decreased surface expression and loss-of-function in patch-clamp studies, whereas E48G, A305T, and D322H showed preserved surface expression and gain-of-function for KV1.5.
Conclusion: This study is the first to present gain-of-function mutations in KCNA5 in patients with early-onset lone AF. We identified three gain-of-function and three loss-of-function mutations. We report a high prevalence of variants in KCNA5 in these patients. This supports the hypothesis that both increased and decreased potassium currents enhance AF susceptibility.
abstract_id: PUBMED:28494969
Rate-Dependent Role of IKur in Human Atrial Repolarization and Atrial Fibrillation Maintenance. The atrial-specific ultrarapid delayed rectifier K+ current (IKur) inactivates slowly but completely at depolarized voltages. The consequences for IKur rate-dependence have not been analyzed in detail and currently available mathematical action-potential (AP) models do not take into account experimentally observed IKur inactivation dynamics. Here, we developed an updated formulation of IKur inactivation that accurately reproduces time-, voltage-, and frequency-dependent inactivation. We then modified the human atrial cardiomyocyte Courtemanche AP model to incorporate realistic IKur inactivation properties. Despite markedly different inactivation dynamics, there was no difference in AP parameters across a wide range of stimulation frequencies between the original and updated models. Using the updated model, we showed that, under physiological stimulation conditions, IKur does not inactivate significantly even at high atrial rates because the transmembrane potential spends little time at voltages associated with inactivation. Thus, channel dynamics are determined principally by activation kinetics. IKur magnitude decreases at higher rates because of AP changes that reduce IKur activation. Nevertheless, the relative contribution of IKur to AP repolarization increases at higher frequencies because of reduced activation of the rapid delayed-rectifier current IKr. Consequently, IKur block produces dose-dependent termination of simulated atrial fibrillation (AF) in the absence of AF-induced electrical remodeling. The inclusion of AF-related ionic remodeling stabilizes simulated AF and greatly reduces the predicted antiarrhythmic efficacy of IKur block. Our results explain a range of experimental observations, including recently reported positive rate-dependent IKur-blocking effects on human atrial APs, and provide insights relevant to the potential value of IKur as an antiarrhythmic target for the treatment of AF.
abstract_id: PUBMED:23364608
Human electrophysiological and pharmacological properties of XEN-D0101: a novel atrial-selective Kv1.5/IKur inhibitor. The human electrophysiological and pharmacological properties of XEN-D0101 were evaluated to assess its usefulness for treating atrial fibrillation (AF). XEN-D0101 inhibited Kv1.5 with an IC50 of 241 nM and is selective over non-target cardiac ion channels (IC50 Kv4.3, 4.2 μM; hERG, 13 μM; activated Nav1.5, >100 μM; inactivated Nav1.5, 34 μM; Kir3.1/3.4, 17 μM; Kir2.1, >>100 μM). In atrial myocytes from patients in sinus rhythm (SR) and chronic AF, XEN-D0101 inhibited non-inactivating outward currents (Ilate) with IC50 of 410 and 280 nM, respectively, and peak outward currents (Ipeak) with IC50 of 806 and 240 nM, respectively. Whereas Ilate is mainly composed of IKur, Ipeak consists of IKur and Ito. Therefore, the effects on Ito alone were estimated from a double-pulse protocol where IKur was inactivated (3.5 µM IC50 in SR and 1 µM in AF). Thus, inhibition of Ipeak is because of IKur reduction and not Ito. XEN-D0101 significantly prolonged the atrial action potential duration at 20%, 50%, and 90% of repolarization (AF tissue only) and significantly elevated the atrial action potential plateau phase and increased contractility (SR and AF tissues) while having no effect on human ventricular action potentials. In healthy volunteers, XEN-D0101 did not significantly increase baseline- and placebo-adjusted QTc up to a maximum oral dose of 300 mg. XEN-D0101 is a Kv1.5/IKur inhibitor with an attractive atrial-selective profile.
abstract_id: PUBMED:37922915
A critical role of retinoic acid concentration for the induction of a fully human-like atrial action potential phenotype in hiPSC-CM. Retinoic acid (RA) induces an atrial phenotype in human induced pluripotent stem cells (hiPSCs), but expression of atrium-selective currents such as the ultrarapid (IKur) and acetylcholine-stimulated K+ current is variable and less than in the adult human atrium. We suspected methodological issues and systematically investigated the concentration dependency of RA. RA treatment increased IKur concentration dependently from 1.1 ± 0.54 pA/pF (0 RA) to 3.8 ± 1.1, 5.8 ± 2.5, and 12.2 ± 4.3 at 0.01, 0.1, and 1 μM, respectively. Only 1 μM RA induced enough IKur to fully reproduce human atrial action potential (AP) shape and a robust shortening of APs upon carbachol. We found that sterile filtration caused substantial loss of RA. We conclude that 1 μM RA seems to be necessary and sufficient to induce a full atrial AP shape in hiPSC-CM in EHT format. RA concentrations are prone to methodological issues and may profoundly impact the success of atrial differentiation.
abstract_id: PUBMED:29163179
In Silico Assessment of Efficacy and Safety of IKur Inhibitors in Chronic Atrial Fibrillation: Role of Kinetics and State-Dependence of Drug Binding. Current pharmacological therapy against atrial fibrillation (AF), the most common cardiac arrhythmia, is limited by moderate efficacy and adverse side effects including ventricular proarrhythmia and organ toxicity. One way to circumvent the former is to target ion channels that are predominantly expressed in atria vs. ventricles, such as KV1.5, carrying the ultra-rapid delayed-rectifier K+ current (IKur). Recently, we used an in silico strategy to define optimal KV1.5-targeting drug characteristics, including kinetics and state-dependent binding, that maximize AF-selectivity in human atrial cardiomyocytes in normal sinus rhythm (nSR). However, because of evidence for IKur being strongly diminished in long-standing persistent (chronic) AF (cAF), the therapeutic potential of drugs targeting IKur may be limited in cAF patients. Here, we sought to simulate the efficacy (and safety) of IKur inhibitors in cAF conditions. To this end, we utilized sensitivity analysis of our human atrial cardiomyocyte model to assess the importance of IKur for atrial cardiomyocyte electrophysiological properties, simulated hundreds of theoretical drugs to reveal those exhibiting anti-AF selectivity, and compared the results obtained in cAF with those in nSR. We found that despite being downregulated, IKur contributes more prominently to action potential (AP) and effective refractory period (ERP) duration in cAF vs. nSR, with ideal drugs improving atrial electrophysiology (e.g., ERP prolongation) more in cAF than in nSR. Notably, the trajectory of the AP during cAF is such that more IKur is available during the more depolarized plateau potential. Furthermore, IKur block in cAF has less cardiotoxic effects (e.g., AP duration not exceeding nSR values) and can increase Ca2+ transient amplitude thereby enhancing atrial contractility. We propose that in silico strategies such as that presented here should be combined with in vitro and in vivo assays to validate model predictions and facilitate the ongoing search for novel agents against AF.
abstract_id: PUBMED:18536759
Pathology-specific effects of the IKur/Ito/IK,ACh blocker AVE0118 on ion channels in human chronic atrial fibrillation. Background And Purpose: This study was designed to establish the pathology-specific inhibitory effects of the IKur/Ito/IK,ACh blocker AVE0118 on atrium-selective channels and its corresponding effects on action potential shape and effective refractory period in patients with chronic AF (cAF).
Experimental Approach: Outward K+-currents of right atrial myocytes and action potentials of atrial trabeculae were measured with whole-cell voltage clamp and microelectrode techniques, respectively. Outward currents were dissected by curve fitting.
Key Results: Four components of outward K+-currents and AF-specific alterations in their properties were identified. Ito was smaller in cAF than in SR, and AVE0118 (10 microM) apparently accelerated its inactivation in both groups without reducing its amplitude. Amplitudes of rapidly and slowly inactivating components of IKur were lower in cAF than in SR. The former was abolished by AVE0118 in both groups, the latter was partially blocked in SR, but not in cAF, even though its inactivation was apparently accelerated in cAF. The large non-inactivating current component was similar in magnitude in both groups, but decreased by AVE0118 only in SR. AVE0118 strongly suppressed AF-related constitutively active IK,ACh and prolonged atrial action potential and effective refractory period exclusively in cAF.
Conclusions And Implications: In atrial myocytes of cAF patients, we detected reduced function of distinct IKur components that possessed decreased component-specific sensitivity to AVE0118 most likely as a consequence of AF-induced electrical remodelling. Inhibition of profibrillatory constitutively active IK,ACh may lead to pathology-specific efficacy of AVE0118 that is likely to contribute to its ability to convert AF into SR.
abstract_id: PUBMED:28964116
Revealing kinetics and state-dependent binding properties of IKur-targeting drugs that maximize atrial fibrillation selectivity. The KV1.5 potassium channel, which underlies the ultra-rapid delayed-rectifier current (IKur) and is predominantly expressed in atria vs. ventricles, has emerged as a promising target to treat atrial fibrillation (AF). However, while numerous KV1.5-selective compounds have been screened, characterized, and tested in various animal models of AF, evidence of antiarrhythmic efficacy in humans is still lacking. Moreover, current guidelines for pre-clinical assessment of candidate drugs heavily rely on steady-state concentration-response curves or IC50 values, which can overlook adverse cardiotoxic effects. We sought to investigate the effects of kinetics and state-dependent binding of IKur-targeting drugs on atrial electrophysiology in silico and reveal the ideal properties of IKur blockers that maximize anti-AF efficacy and minimize pro-arrhythmic risk. To this aim, we developed a new Markov model of IKur that describes KV1.5 gating based on experimental voltage-clamp data in atrial myocytes from patient right-atrial samples in normal sinus rhythm. We extended the IKur formulation to account for state-specificity and kinetics of KV1.5-drug interactions and incorporated it into our human atrial cell model. We simulated 1- and 3-Hz pacing protocols in drug-free conditions and with a [drug] equal to the IC50 value. The effects of binding and unbinding kinetics were determined by examining permutations of the forward (kon) and reverse (koff) binding rates to the closed, open, and inactivated states of the KV1.5 channel. We identified a subset of ideal drugs exhibiting anti-AF electrophysiological parameter changes at fast pacing rates (effective refractory period prolongation), while having little effect on normal sinus rhythm (limited action potential prolongation). Our results highlight that accurately accounting for channel interactions with drugs, including kinetics and state-dependent binding, is critical for developing safer and more effective pharmacological anti-AF options.
abstract_id: PUBMED:26786269
Dynamics and modulation studies of human voltage gated Kv1.5 channel. The voltage gated Kv1.5 channels conduct the ultrarapid delayed rectifier current (IKur) and play a critical role in repolarization of the action potential. Kv1.5 is the most rapidly activated channel and shows very little or no inactivation. In human cardiac cells, these channels are expressed more extensively in atrial myocytes than in the ventricle. On the basis of its localization and functions, Kv1.5 has been declared a selective drug target for the treatment of atrial fibrillation (AF). In the present study, we tried to characterize the rapidly activating property of Kv1.5 and studied its mode of inhibition using molecular modeling, docking, and simulation techniques. The channel in the open conformation was found to be stabilized quickly within the dipalmitoylphosphatidylcholine membrane, whereas most of the secondary structure elements were lost in the closed-state conformation. The likely reason behind its ultra-rapid property is an amino acid alteration in the S4-S5 linker: the replacement of lysine by glutamine and vice versa. Popular published drugs as well as newly identified lead molecules were able to inhibit Kv1.5 in a very similar pattern, mainly through nonpolar interactions, and formed stable complexes. V512 was found to be the main contributor to the interaction, along with other important residues such as V505, I508, A509, P513, and V516. Furthermore, two screened novel compounds showed surprisingly better inhibitory potency and can be considered for future antiarrhythmic investigation.
Answer: No, inhibition of IKur is not known to promote atrial fibrillation (AF); rather, it is considered a therapeutic strategy for the treatment of AF. The ultra-rapid delayed rectifier K+ current (IKur), conducted by the Kv1.5 channel, is a key determinant of the atrial action potential and is an attractive target for the management of AF (PUBMED:34959701). Mutations in the gene encoding Kv1.5 (KCNA5) have been linked to hereditary forms of AF, with both gain-of-function and loss-of-function mutations identified, suggesting that both increased and decreased potassium currents can enhance AF susceptibility (PUBMED:23264583). However, the therapeutic approach has been to develop IKur blockers, with the rationale that these could help in the treatment of AF by affecting atrial repolarization (PUBMED:34959701).
Small molecule inhibitors and peptide inhibitors of Kv1.5 have been developed to selectively target IKur, with the aim of reducing the risk of side effects by avoiding non-target cardiac ion channels (PUBMED:34959701). For instance, XEN-D0101 is a Kv1.5/IKur inhibitor that has been shown to prolong the atrial action potential duration and increase contractility without affecting ventricular action potentials, indicating an atrial-selective profile (PUBMED:23364608). Similarly, AVE0118, which blocks IKur among other currents, has been shown to prolong the effective refractory period exclusively in chronic AF, suggesting a pathology-specific efficacy that could contribute to its ability to convert AF into sinus rhythm (PUBMED:18536759).
Moreover, in silico studies have been used to assess the efficacy and safety of IKur inhibitors, suggesting that despite IKur being downregulated in chronic AF, it still contributes to action potential and effective refractory period duration, and ideal drugs targeting IKur could improve atrial electrophysiology more in chronic AF than in normal sinus rhythm (PUBMED:29163179).
In summary, the inhibition of IKur is a strategy being pursued to treat AF, not to promote it. The development of selective IKur blockers is aimed at managing AF by affecting atrial-specific electrophysiological properties while minimizing the impact on ventricular function and reducing the risk of proarrhythmic effects.
Instruction: Does the type and size of Amplatzer vascular plug affect the occlusion time of pulmonary arteriovenous malformations?
Abstracts:
abstract_id: PUBMED:19234279
Occlusion time for Amplatzer vascular plug in the management of pulmonary arteriovenous malformations. Objective: The occlusion time, that is, the interval between device deployment and complete occlusion of the vessel, associated with the use of embolic devices influences the risk of embolic complications caused by small clots that can form over the surface of a device and break away. The purpose of our study was to determine the time for an Amplatzer vascular plug to bring about percutaneous transcatheter occlusion of a pulmonary arteriovenous malformation (PAVM).
Materials And Methods: We retrospectively studied the occlusion times of Amplatzer vascular plugs in the management of 12 PAVMs. We recorded the number, location, type (simple or complex), and diameter and number of feeding arteries of PAVMs; the number and size of devices needed to occlude each PAVM; and the occlusion time for each PAVM. The occlusion time is the time interval from device placement to complete occlusion of the PAVM. Occlusion time was determined by recording the time between acquisition of the first angiographic image after deployment of the device and the angiogram that showed total occlusion of the PAVM. The relevant literature on the subject was reviewed.
Results: All PAVMs managed were supplied by a single feeding artery. The average diameter of the feeding arteries was 4.8 mm (range, 3.0-11.2 mm). All PAVMs were occluded by deployment of a single Amplatzer vascular plug. Vascular plug sizes ranged from 4 to 16 mm. The mean occlusion time was 3 minutes 20 seconds (range, 1 minute 49 seconds-5 minutes 16 seconds). There were no immediate complications, including air embolism and thromboembolism.
Conclusion: The occlusion time determined in our study and the need to place only one Amplatzer vascular plug in each feeding artery to achieve complete occlusion in most cases suggest that the device is safe for management of PAVM with no increased risk of systemic embolization. The use of the Amplatzer vascular plug for PAVM embolization is a relatively recent development. Long-term follow-up studies are needed to assess recanalization rates, radiation exposure rates, and risk of device migration.
abstract_id: PUBMED:24987264
Amplatzer vascular plug IV for occlusion of pulmonary arteriovenous malformations in a patient with cryptogenic stroke. Paradoxical embolism resulting in cryptogenic stroke has received much attention recently, with the primary focus on patent foramen ovale (PFO). However, it is essential to be vigilant in the search for other causes of paradoxical embolic events, such as pulmonary arteriovenous malformations (PAVM). We describe successful closure of pulmonary AVM with a St Jude Medical (Plymouth, MN) Amplatzer™ vascular plug IV. The newer AVP-IV devices can be used for successful embolization of tortuous pulmonary AVM in remote locations where use of other traditional devices may be technically challenging.
abstract_id: PUBMED:22115581
Long-term follow-up of treatment of pulmonary arteriovenous malformations with AMPLATZER Vascular Plug and AMPLATZER Vascular Plug II devices. Purpose: To assess the feasibility, complications, and long-term success of embolization of pulmonary arteriovenous malformations (PAVMs) with the AMPLATZER Vascular Plug and AMPLATZER Vascular Plug II.
Materials And Methods: The study included 15 consecutive patients (19 embolization episodes) who had embolization of PAVMs between April 2004 and April 2009 with an AMPLATZER Vascular Plug or AMPLATZER Vascular Plug II. There were 4 men and 11 women, with a mean age of 56 years (range 24-74 years). A prospective database of all cases of PAVM embolization is kept in the department. Patient history, detailed procedural records, and clinical and radiological follow-up were reviewed.
Results: Among the 19 PAVMs, an AMPLATZER Vascular Plug was deployed in 11, and an AMPLATZER Vascular Plug II was deployed in 8. The technical success of the procedure was 100% for PAVM occlusion; 30-day mortality in the group was zero. Successful radiologic follow-up with the AMPLATZER Vascular Plug was a mean of 28 months (range 0-60 months) and with the AMPLATZER Vascular Plug II was a mean of 18 months (range 12-36 months). There was one recanalization of an AMPLATZER Vascular Plug 36 months after embolization giving an annual event rate of 0.03 recanalizations per AMPLATZER Vascular Plug or AMPLATZER Vascular Plug II per year. There were no major complications. Clinically, there was one (1 of 18 cases [5%]) immediate complication of chest pain that resolved in 24 hours with simple analgesia. There were no early or late clinical complications.
Conclusions: The treatment of PAVM with either an AMPLATZER Vascular Plug or an AMPLATZER Vascular Plug II is safe and effective and associated with a low reintervention rate. Further follow-up is ongoing to ensure continued occlusion of treated PAVMs.
abstract_id: PUBMED:17397589
Occlusion of a pulmonary arteriovenous fistula with an Amplatzer vascular plug. Pulmonary arteriovenous malformations are rare anomalies that carry a considerable risk of serious complications such as cerebral thromboembolism or abscess and pulmonary hemorrhage. The first-line treatment of such malformations is detachable coil or balloon embolotherapy. However, coils and balloons may migrate and cause paradoxical embolism, especially in malformations with large arteriovenous shunts. We report a case in which we used a new vascular occlusion device (Amplatzer vascular plug) to occlude a pulmonary arteriovenous fistula in a patient with Rendu-Osler-Weber syndrome.
abstract_id: PUBMED:18306144
Peripheral vascular applications of the Amplatzer vascular plug. Purpose: To present our experience using the Amplatzer vascular plug in various arterial and venous systems, and follow-up results.
Materials And Methods: Between May 2005 and October 2006, 20 Amplatzer vascular plugs were used to achieve occlusion in 20 vessels in 12 patients (10 male, 2 female) aged between 24 and 80 years (mean age, 55 years). Localization and indications for embolotherapy were as follows: pulmonary arteriovenous malformations (n = 3; 9 vessels), internal iliac artery embolization before stent-graft repair for aortoiliac aneurysms (n = 4; 4 vessels), preoperative (right hemipelvectomy) embolization of bilateral internal iliac arteries (n = 1), bilateral internal iliac aneurysms (n = 1), large thoracic side branch of the left internal mammary artery coronary by-pass graft causing coronary steal syndrome (n = 1), closure of a transjugular intrahepatic portosystemic shunt (n = 1), and testicular vein embolization for a varicocele (n = 1).
Results: The technical success rate was 100%, with total occlusion of all the targeted vessels. Only one device was used to achieve total occlusion of the targeted vessel in all patients (device size range, 6-16 mm in diameter). No major complications occurred. Target vessel occlusion time after deployment of the Amplatzer vascular plug was 6-10 min in pulmonary arteries (mean, 7.5 min) and 10-35 min (mean, 24.4 min) in systemic arteries. Mean follow-up was 6.7 months (range, 1-18 months).
Conclusion: Embolization with the Amplatzer vascular plug is safe, feasible, and technically simple with appropriate patient selection in various vascular territories.
abstract_id: PUBMED:27856403
Does the type and size of Amplatzer vascular plug affect the occlusion time of pulmonary arteriovenous malformations? Purpose: Occlusion time (OT) is an important factor in the treatment of pulmonary arteriovenous malformations (PAVMs) since it can lead to serious complications. The purpose of our study is to calculate the OT of Amplatzer vascular plug (AVP, St Jude Medical), and correlate it to the type of the device used (AVP or AVP 2) and the percent of device oversizing. Technical success rates and complications were also recorded.
Methods: We retrospectively studied a total of 19 patients with 47 PAVMs who received percutaneous transcatheter embolization therapy using either AVP or AVP 2. We recorded the location, type, feeding artery diameter, AVP device used, and OT of each PAVM. We correlated the percent of device oversizing and the type of AVP with the OT. We also studied the rate of persistence of PAVM for both devices.
Results: Forty-six (98%) of the PAVMs were simple. Device diameters ranged from 4.0-16.0 mm with device oversizing ranging between 14% and 120%. There was a statistically significant difference in the OT of AVP and AVP 2 (3 min 54 s vs. 5 min 30 s, P = 0.030). There was a weak positive correlation between OT and device oversizing for AVP (r=0.246, P = 0.324) and AVP 2 (r=0.261, P = 0.240). No major complications were identified. Immediate technical success rate was 100%.
Conclusion: The use of AVP 2, and increase in device oversizing were not associated with reduction in the OT of PAVMs. There was no reported difference in safety between the two devices, and no major complications were noted.
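(For orientation: "device oversizing" in this context is normally expressed relative to the feeding artery diameter. The abstract does not state the exact definition used, so the following is only an assumed, illustrative formulation:

$$\text{oversizing (\%)} = \left(\frac{d_{\text{device}}}{d_{\text{feeding artery}}} - 1\right)\times 100,$$

so an 8.0 mm plug deployed in a 5.0 mm feeding artery, for example, would correspond to 60% oversizing.)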
abstract_id: PUBMED:24688229
Amplatzer vascular plugs in congenital cardiovascular malformations. Background: Amplatzer vascular plugs (AVPs) are devices ideally suited to close medium-to-large vascular communications. There is limited published literature regarding the utility of AVPs in congenital cardiovascular malformations (CCVMs).
Aims: To describe the use of AVPs in different CCVMs and to evaluate their safety and efficacy.
Materials And Methods: All patients who required an AVP for the closure of CCVM were included in this retrospective review of our catheterization laboratory data. The efficacy and safety of AVPs are reported.
Results: A total of 39 AVPs were implanted in 31 patients. Thirteen (33%) were AVP type I and 23 (59%) were AVP type II. AVP type III devices were implanted in two patients and type IV in one patient. The major indications for their use included closure of pulmonary arteriovenous malformation (AVM) (n = 7), aortopulmonary collaterals (n = 7), closure of a patent Blalock-Taussig shunt (n = 5), systemic AVM (n = 5), coronary AVM (n = 4), patent ductus arteriosus (PDA) (n = 3), pulmonary artery aneurysms (n = 3), and venovenous collaterals (n = 2). Deployment of the AVP was done predominantly via a 5-7F Judkins right coronary guide catheter. Overall, 92% of the AVPs could be successfully deployed and resulted in occlusion of the target vessel in all cases within 10 minutes. No procedure-related or access site complications occurred.
Conclusions: AVPs are versatile, easy to use, and effective devices to occlude the vascular communications in a variety of settings. AVP II is especially useful in the closure of tubular structures with a high flow.
abstract_id: PUBMED:26432734
Device closure of pulmonary arteriovenous malformation using Amplatzer vascular plug II in hereditary hemorrhagic telangiectasia. Pulmonary arteriovenous malformations (AVM) are very rare and carry the risk of cerebral thrombo-embolism, brain abscess or pulmonary hemorrhage. The Amplatzer vascular plug II (AVP II) is a new device, used for embolization of the pulmonary AVMs. We report a case of pulmonary AVM successfully managed by using AVP II in a patient with hereditary hemorrhagic telangiectasia (HHT).
abstract_id: PUBMED:25150900
Early experience with the AMPLATZER vascular plug IV for the occlusion of pulmonary arteriovenous malformations. The recent generation of AMPLATZER Vascular Plug (AVP; ie, the AVP IV) was used for the occlusion of eight pulmonary arteriovenous malformations (PAVMs) in five patients. A treatment was considered successful when there was a reduction or disappearance of the aneurysmal sac. At a mean follow-up of 20.1 months, no recanalization of PAVMs was observed on multidetector computed tomographic angiography. This shows the AVP IV to be safe and effective as an embolic device to occlude PAVMs.
abstract_id: PUBMED:28745460
Lobar occlusion of pulmonary arteriovenous malformations with Amplatzer vascular plug. N/A
Answer: Yes, the type of Amplatzer vascular plug (AVP) appears to affect the occlusion time of pulmonary arteriovenous malformations (PAVMs), whereas device size (the degree of oversizing) does not appear to have a significant effect. A study found a statistically significant difference in occlusion time (OT) between the AVP and the AVP 2, with the AVP 2 having a longer OT (5 minutes 30 seconds) than the AVP (3 minutes 54 seconds) (PUBMED:27856403). There was only a weak positive correlation between OT and device oversizing for both the AVP and the AVP 2, and this correlation was not statistically significant (PUBMED:27856403). Accordingly, neither the use of the AVP 2 nor an increase in device oversizing was associated with a reduction in the OT of PAVMs, and there was no reported difference in safety between the two devices, with no major complications noted (PUBMED:27856403).
Instruction: Rapid osteolysis of the femoral neck: consequence of an insufficiency fracture of the hip?
Abstracts:
abstract_id: PUBMED:23636732
Rapid osteolysis of the femoral neck: consequence of an insufficiency fracture of the hip? Purpose: To describe the imaging and clinical features of rapid osteolysis of the femoral neck in an attempt to better understand this uncommon pathology.
Materials And Methods: We retrospectively reviewed the files of 11 patients (six women and five men) aged 53-78 years diagnosed with rapid osteolysis of the femoral neck. Available imaging studies included radiographs, CT, MRI, and bone scintigraphy. Histopathological evaluations were available for seven cases.
Results: All patients presented with complaints of hip pain, six of whom had acute symptoms, while the rest had progressive symptoms and impairment. All but one case were found to have bone deposition in adjacent hip muscles. CT confirmed bone deposition in adjacent tissues and true osteolysis of the femoral neck with relative sparing of the articular surfaces. Bone scintigraphy and MRI were useful to exclude underlying neoplastic disease.
Conclusions: Rapid osteolysis of the femoral neck tends to occur in patients with underlying comorbidities leading to bone fragility and may actually represent a peculiar form of spontaneous insufficiency fracture. Recognition of its imaging features and clinical risk factors may help distinguish this process from other more concerning disorders such as infection or neoplasm.
abstract_id: PUBMED:24972443
Spontaneous modular femoral head dissociation complicating total hip arthroplasty. Modular femoral heads have been used successfully for many years in total hip arthroplasty. Few complications have been reported for the modular Morse taper connection between the femoral head and trunnion of the stem in metal-on-polyethylene bearings. Although there has always been some concern over the potential for fretting, corrosion, and generation of particulate debris at the modular junction, this was not considered a significant clinical problem. More recently, concern has increased because fretting and corrosive debris have resulted in rare cases of pain, adverse local tissue reaction, pseudotumor, and osteolysis. Larger femoral heads, which have gained popularity in total hip arthroplasty, are suspected to increase the potential for local and systemic complications of fretting, corrosion, and generation of metal ions because of greater torque at the modular junction. A less common complication is dissociation of the modular femoral heads. Morse taper dissociation has been reported in the literature, mainly in association with a traumatic event, such as closed reduction of a dislocation or fatigue fracture of the femoral neck of a prosthesis. This report describes 3 cases of spontaneous dissociation of the modular prosthetic femoral head from the trunnion of the same tapered titanium stem because of fretting and wear of the Morse taper in a metal-on-polyethylene bearing. Continued clinical and scientific research on Morse taper junctions is warranted to identify and prioritize implant and surgical factors that lead to this and other types of trunnion failure to minimize complications associated with Morse taper junctions as hip implants and surgical techniques continue to evolve.
abstract_id: PUBMED:15057094
The role of proximal femoral support in stress development within hip prostheses. Bone remodeling commonly associated with implant loosening may require revision total hip replacement when there is substantial proximal femoral bone loss. Additionally, the surgical exposure required to remove primary implants may alter the proximal femur's structure. As a result, in many revision hip situations the proximal femur provides compromised support for the revision femoral component. Stress analyses of the proximal femur with extensively porous-coated prosthetic femoral components show that proximomedial femoral bone loss, ununited femoral osteotomy, and periprosthetic fracture can result in significant elevation of stress within revision prosthetic components. The first principal stress within prosthetic components can, in proximal bone loss conditions, be elevated significantly above a revision prostheses' fatigue strength. Loss of proximomedial bone is predicted to increase stress within a revision component by as much as 82%. An unhealed transverse femoral fracture or osteotomy is predicted to more than double the stress within a revision femoral component. In revision total hip replacement, efforts directed toward the restoration of proximal femoral bone and the use of larger prostheses may contribute to avoiding prostheses fatigue fracture. Similarly, protected weightbearing in patients with ununited femoral osteotomies and periprosthetic fractures may be important to preventing prosthetic fracture.
abstract_id: PUBMED:34788256
Cementless Hip Arthroplasty in Patients with Subchondral Insufficiency Fracture of the Femoral Head. Background: Subchondral insufficiency fracture of the femoral head (SIFFH) occurs in elderly patients and might be confused with osteonecrosis of the femoral head (ONFH). Subchondral insufficiency fracture of the femoral head is an insufficiency fracture at the dome of the femoral head and has been known to be associated with osteoporosis, hip dysplasia, and posterior pelvic tilt. This study's aims were to evaluate (1) surgical complications, (2) radiological changes, (3) clinical results, and (4) survivorship of THA in patients with SIFFH.
Methods: From November 2010 to June 2017, 21 patients (23 hips), comprising 5 men (5 hips) and 16 women (18 hips), underwent cementless THA due to SIFFH at our institution. Their mean age was 71.9 years (range, 57 to 86) at the time of surgery, and mean T-score was -2.2 (range, -4.2 to 0.2). The mean lateral center-edge angle, abduction, and anteversion of the acetabulum were 29.9° (range, 14.8° to 47.5°), 38.5° (range, 31° to 45°), and 20.0° (range, 12° to 25°), respectively. The mean pelvic incidence, lumbar kyphotic angle and posterior pelvic tilt were 56.4° (range, 39° to 79°), 14.7° (range, -34° to 43°), and 13.0° (range, 3° to 34°), respectively.
Results: An intraoperative calcar crack occurred in 1 hip. The mean anteversion and abduction of cup were 29.0° (range, 17° to 43°) and 43.3° (range, 37° to 50°), respectively. One patient sustained a traumatic posterior hip dislocation 2 weeks after the procedure, and was treated with open reduction. At a mean follow-up of 35.4 months (range, 24 to 79 months), no hip had prosthetic loosening or focal osteolysis. At the latest follow-up, the mean modified Harris hip score was 79.1 (range, 60 to 100) points, and mean UCLA activity score was 4.2 (range, 2 to 7) points. The survivorship was 95.7% (95% CI, 94.9% to 100%) at 6 years.
Conclusions: Cementless THA is a favorable treatment option for SIFFH in elderly patients.
Level Of Evidence: 3.
abstract_id: PUBMED:11046159
From the RSNA Refresher Courses. Radiological Society of North America. Adult chronic hip pain: radiographic evaluation. Adult chronic hip pain can be difficult to attribute to a specific cause, both clinically and radiographically. Yet, there are often subtle radiographic signs that point to traumatic, infectious, arthritic, neoplastic, congenital, or other causes. Stress fractures appear as a lucent line surrounded by sclerosis or as subtle lucency or sclerosis. Subtle femoral neck angulation, trabecular angulation, or a subcapital impaction line indicates an insufficiency fracture. Apophyseal avulsion fractures appear as a thin, crescentic, ossific opacity when viewed in tangent and as a subtle, disk-shaped opacity when viewed en face. Effusion, cartilage loss, and cortical bone destruction are diagnostic of a septic hip. Transient osteoporosis manifests as osteoporosis and effusion. The earliest finding of avascular necrosis is relative sclerosis in the femoral head. Subtle osteophytes or erosive change is indicative of arthropathy. Osteoarthritis can manifest as early cyst formation, small osteophytes, or buttressing of the femoral neck or calcar. Rheumatoid arthritis may manifest as classic osteopenia, uniform cartilage loss, and erosive change. A disturbance of the trabecular pattern might suggest an early permeative pattern due to a tumor. Knowledge of common causes of chronic hip pain will allow the radiologist to seek out these radiographic findings.
abstract_id: PUBMED:3980529
Osteolytic changes in the upper femoral shaft following porous-coated hip replacement. Ten uncemented total hip replacements were performed in 1975 using an implant in which the cobalt-chrome femoral stem was coated to give a porous surface. In all but one case a high-density polyethylene head was used. The radiological changes in the upper femoral shafts were assessed between three and nine years later. Seven showed extensive stress-relieving changes, loss of calcar, stress fractures at the root of the lesser trochanter with subsequent detachment, and osteoporosis followed by avulsion of the greater trochanter. In these seven patients the lower part of the stem appeared to be soundly embedded, although in only one was there evidence of bony incorporation. It is suggested that if the fixation of a fully coated implant of this sort remains sound, gross atrophy of the upper femoral shaft develops after five years. This atrophy, associated with an implant which can be removed only at the expense of further bone destruction, presents substantial problems if revision is needed.
abstract_id: PUBMED:7782364
Mechanical consequences of bone ingrowth in a hip prosthesis inserted without cement. Long-term biomechanical problems associated with the use of sintered porous coating on prosthetic femoral stems inserted without cement include proximal loss of bone and a risk of fatigue fracture of the prosthesis. We sought to identify groups of patients in whom these problems are accentuated and in whom the use of porous coating may thus jeopardize the success of the arthroplasty. We attempted to develop clinical guidelines for the use of sintered porous coating by investigating the long-term biomechanical effects of bone growth into partially (two-thirds) porous-coated anatomic medullary locking hip prostheses that fit well. More specifically, we used a detailed finite element analysis and a composite beam theory to determine the dependence of proximal loading of the bone and maximum stresses on the stem on the development of clinically observed patterns of bone ingrowth and the dependence of the risk of fatigue fracture of the stem on the diameter of the stem, the diameter of the periosteal bone, and the material from which the prosthesis was made. We found that bone ingrowth per se substantially reduced proximal loading of the bone. With typical bone ingrowth, axial and torsional loads acting on the proximal end of the bone were reduced as much as twofold compared with when there was no ingrowth; bending loads on the proximal end of the bone were also reduced. The risk of fatigue fracture of the stem was insensitive to the development of bone ingrowth. However, the risk of fatigue fracture of the stem increased with decreased diameters of the stem and the periosteal bone and with increased modulus of the stem. The maximum risk of fracture was found in active patients in whom a cobalt-chromium-alloy stem with a small diameter had been implanted in a bone with a small diameter.
abstract_id: PUBMED:19833568
Total hip arthroplasty in severe segmental femoral bone loss situations: use of a reconstruction modular stem design (JVC IX). Retrospective study of 23 cases. Background: Management of extensive proximal femur bone loss secondary to tumor resection or major osteolysis remains controversial. The possible options include a composite allograft/stem prosthesis, a modular-type megaprosthesis, or a custom-made megaprosthesis. Modularity allows versatility at reconstruction and avoids the delay required to manufacture a custom-made implant. Hypothesis and type of study: A retrospective radiological and clinical study investigated whether a special reconstruction modular stem design (JVC IX) would provide medium-term success in the treatment of severe proximal femur bone loss.
Material And Methods: Between 1995 and 2005, 23 JVC IX hip replacements were performed for severe segmental proximal femur bone loss. Etiology was: 13 cases of tumor resection, eight of extensive osteolysis secondary to femoral implant loosening, and two traumatic situations. Follow-up was annual. Functional assessment used the Musculo-Skeletal Tumor Score (MSTS), and implant survival rates underwent Kaplan-Meier analysis, with surgical revision (to replace or remove the implant) as the end point.
Results: All 23 patients (23 hips) were followed up for a mean of 5.4 years (±3.7 yrs). Mean MSTS was 16.2 (max. = 30). All stems demonstrated good fixation at radiological assessment, except for one case of probable loosening in contact with metastatic osteolysis. Four implants had to be revised: two for uncontrolled infection, one for tumor extension, and one for stem fatigue fracture. At 10 years' follow-up, implant survivorship was 81.5% (range, 62% to 100%).
Discussion: Severe proximal femur bone loss is a difficult situation to deal with, offering no ideal treatment option. Modular megaprostheses are salvage procedures. Their results at a mean 5.4 years' follow-up are encouraging, and appear comparable to the ones obtained with alternative solutions (composite allograft/stem prostheses).
Type Of Study: Level IV retrospective, therapeutic study.
abstract_id: PUBMED:25019450
MR imaging of hip arthroplasty implants. Hip arthroplasty has become the standard treatment for end-stage hip disease, allowing pain relief and restoration of mobility in large numbers of patients; however, pain after hip arthroplasty occurs in as many as 40% of cases, and despite improved longevity, all implants eventually fail with time. Owing to the increasing numbers of hip arthroplasty procedures performed, the demographic factors, and the metal-on-metal arthroplasty systems with their associated risk for the development of adverse local tissue reactions to metal products, there is a growing demand for an accurate diagnosis of symptoms related to hip arthroplasty implants and for a way to monitor patients at risk. Magnetic resonance (MR) imaging has evolved into a powerful diagnostic tool for the evaluation of hip arthroplasty implants. Optimized conventional pulse sequences and metal artifact reduction techniques afford improved depiction of bone, implant-tissue interfaces, and periprosthetic soft tissue for the diagnosis of arthroplasty-related complications. Strategies for MR imaging of hip arthroplasty implants are presented, as well as the imaging appearances of common causes of painful and dysfunctional hip arthroplasty systems, including stress reactions and fractures; bone resorption and aseptic loosening; polyethylene wear-induced synovitis and osteolysis; adverse local tissue reactions to metal products; infection; heterotopic ossification; tendinopathy; neuropathy; and periprosthetic neoplasms. A checklist is provided for systematic evaluation of MR images of hip arthroplasty implants. MR imaging with optimized conventional pulse sequences and metal artifact reduction techniques is a comprehensive imaging modality for the evaluation of the hip after arthroplasty, contributing important information for diagnosis, prognosis, risk stratification, and surgical planning.
abstract_id: PUBMED:17343304
Rapid acetabular osteolysis secondary to subchondral insufficiency fracture. A 93-year-old man presented with a one-month history of persistent left hip pain of sudden onset. At first visit, radiographs revealed a fracture line at the medial portion of the acetabulum with no displacement. Magnetic resonance imaging revealed bone marrow edema in the corresponding medial portion of the acetabulum. Radiographs obtained 2 months later showed rapid acetabular osteolysis with associated prominent migration of the femoral head into the acetabulum. Histology obtained from the hip joint was consistent with a subchondral insufficiency fracture with no evidence of massive chondrolysis. Our case was considered as a subchondral insufficiency fracture of the left acetabulum resulting in rapid acetabular osteolysis (protrusio acetabuli).
Answer: The imaging and clinical features of rapid osteolysis of the femoral neck suggest that it may indeed be a consequence of an insufficiency fracture of the hip. A study that reviewed the files of 11 patients diagnosed with rapid osteolysis of the femoral neck found that all patients presented with hip pain, with some having acute symptoms and others having progressive symptoms and impairment. The imaging studies, which included radiographs, CT, MRI, and bone scintigraphy, revealed true osteolysis of the femoral neck with relative sparing of the articular surfaces and bone deposition in adjacent hip muscles. The imaging was also useful in excluding underlying neoplastic disease. The study concluded that rapid osteolysis of the femoral neck tends to occur in patients with underlying comorbidities leading to bone fragility and may represent a peculiar form of spontaneous insufficiency fracture. Recognition of its imaging features and clinical risk factors may help distinguish this process from other disorders such as infection or neoplasm (PUBMED:23636732).
This conclusion is supported by the understanding that insufficiency fractures can occur in conditions of weakened bone, as seen in cases of osteoporosis or other comorbidities that affect bone strength and integrity. The presence of bone deposition in adjacent muscles and the sparing of articular surfaces are consistent with the patterns of stress and insufficiency fractures, where the bone fails under normal stress due to its compromised condition. Therefore, the evidence suggests that rapid osteolysis of the femoral neck can be a consequence of an insufficiency fracture of the hip. |
Instruction: Are norms based on the original Teller Acuity Cards appropriate for use with the new Teller Acuity Cards II?
Abstracts:
abstract_id: PUBMED:16213399
Are norms based on the original Teller Acuity Cards appropriate for use with the new Teller Acuity Cards II? Purpose: We sought to compare visual acuity results obtained with the original Teller Acuity Cards (TAC), which are no longer commercially available, and the modified, laminated Teller Acuity Cards II (TAC II), which recently became commercially available.
Methods: Sixty children were tested, 20 at each of 3 ages: 3.5 months, 11 months, and 30 months. Each subject's binocular grating acuity was measured once with the TAC and once with the TAC II, with the order of testing counterbalanced across subjects. Testers were aware that acuity cards were arranged in ascending order of spatial frequency, but they were masked to the absolute spatial frequencies of the gratings in the subset of cards used. Testers were also masked to acuity results until testing of the child was completed.
Results: Repeated-measures analysis of variance with age as a between-subjects variable and card type as a within-subjects variable showed a significant effect of age (P < 0.001) and a significant effect of card type (P < 0.001), but no interaction between age and card type. Post hoc comparisons (with Bonferroni correction) showed that mean acuity score was significantly better with TAC than with TAC II at 3.5 months (0.2 octave, P < 0.05), 11 months (0.4 octave, P < 0.01), and 30 months (0.7 octave, P < 0.001).
Conclusions: These results suggest that normative grating acuity data obtained with the original Teller Acuity Cards need to be adjusted toward lower acuity values by approximately 0.5 octave to be appropriate for use with the new Teller Acuity Cards II.
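(For readers unfamiliar with octave notation, the following is a brief worked example with illustrative numbers, not values taken from the study: an octave is a factor of 2 in spatial frequency, so two grating acuities $f_1$ and $f_2$ differ by

$$n = \log_2\!\left(\frac{f_1}{f_2}\right)\ \text{octaves}, \qquad\text{i.e.}\qquad f_2 = \frac{f_1}{2^{\,n}}.$$

Adjusting a norm downward by about 0.5 octave therefore means dividing it by $2^{0.5}\approx 1.41$; a hypothetical TAC norm of 13 cy/deg, for instance, would correspond to roughly 9.2 cy/deg for the TAC II.)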
abstract_id: PUBMED:9333664
Reference values in vision development of infants with clinical use of the Teller Acuity Cards. Background: When using Teller Acuity Cards (TAC) for clinical visual testing, the question arose as to how our measurements fitted into the various normative tables of the manufacturer's handbook. In addition, we wanted to investigate how reliable measurements in newborns and infants were and what the examination success rate was under clinical conditions.
Methods: At the paediatric clinic of the University of Erlangen, we tested the binocular grating acuity of 98 infants up to the age of one year, using the complete set of Teller acuity cards. In addition, 41 of the children underwent a monocular vision test.
Results: 1. Theoretical: We first calculated conversion data for our card set. Using this conversion scale from cy/cm to cy/deg and the corresponding vision equivalent, we produced our own standards for the development of grating acuity up to the age of one year. 2. Clinical: Within 3-5 min per clinical examination, we could determine a vision equivalent for 90.8% of the patients. The reliability of the results was age dependent and was best at the age of 5-11 months. Reliability also depended strongly on the duration of the test and the number of test runs. This resulted in a limited card choice for each age group.
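(The cy/cm-to-cy/deg conversion mentioned above depends only on the viewing distance. A minimal sketch, using the standard visual-angle relation and an assumed test distance, since the abstract does not state the distance used for each card set:

$$f_{\text{cy/deg}} = f_{\text{cy/cm}} \times d \times \tan(1^\circ) \approx 0.0175\, d \times f_{\text{cy/cm}},$$

where $d$ is the viewing distance in cm; at an assumed 55 cm, for example, 1 cy/cm corresponds to about 0.96 cy/deg.)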
abstract_id: PUBMED:35601243
Clinical Utility of 'Peekaboo Vision' Application for Measuring Grating Acuity in Children with Down Syndrome. Peekaboo Vision is an iPad grating acuity app built with typically developing children in mind. Given the ease of using this app in the pediatric age group, this study determined its clinical utility in children with Down syndrome. Two groups of participants (children with Down syndrome and age-matched controls) were included. Presenting binocular grating acuity was measured using Peekaboo Vision and Teller acuity cards II in random order. Parents' feedback about their child's engagement and time taken to complete each test was documented. Thirty-seven children with Down syndrome (males = 23; mean age = 8.1 ± 4.2 years) and 28 controls (males = 15; mean age = 8.71 ± 3.84 years) participated. Time taken to complete the tests was comparable (p = 0.83) in children with Down syndrome. Controls were significantly faster with Peekaboo Vision (p = 0.01). Mean logMAR acuities obtained with Peekaboo Vision (0.16 ± 0.34) and Teller acuity cards II (0.63 ± 0.34) were significantly different (p < 0.001) in children with Down syndrome (mean difference in acuities: -0.44 ± 0.38 logMAR; 95% LoA: -1.18 to 0.3). For controls, the mean logMAR acuity with Peekaboo Vision (-0.13 ± 0.12) and Teller acuity cards II (0.12 ± 0.09) was also found to be significantly different (p < 0.001) (mean difference in acuities: -0.24 ± 0.14 logMAR; 95% LoA: -0.51 to 0.03). The Peekaboo Vision test can be used on children with Down syndrome. Peekaboo Vision and Teller acuity cards II can be used independently but not interchangeably. The differences in the acuity values between the two tests could be a result of the differences in the thresholding paradigms, different testing mediums and the range of acuities covered.
abstract_id: PUBMED:2235017
500 visual acuity tests in infants with Teller acuity cards Assessment of visual resolution with Teller Acuity Cards is now a routine procedure in infant visual check-ups. A grating is printed on one end of a card on a homogeneous background of the same mean luminance. A series of spatial frequencies at an interval of one half octave covers the range of infant acuity. Binocular and monocular acuities were measured in 517 infants between 100 and 399 days of age. The cards can be used from a few weeks of age up to at least 18 months; beyond that age it becomes harder to keep the child attentive. Binocular and two monocular tests take 6 minutes with a normal child. Binocular grating acuity normally reaches 6.5 cycles/degree at 4 months of age, and 12 c/deg at 12 months. Monocular acuity is one half octave lower. From the level of acuity obtained and observation of the behaviour, amblyopia can be detected and low vision can be estimated easily in infants.
abstract_id: PUBMED:2796227
The Teller Acuity Card Test: possibilities and limits of clinical use The Teller Acuity Card test was used to examine 49 normal children, 77 with strabismus, 9 with anisometropia and 19 with various organic ocular diseases. The vision of some of these children was also tested with the Landolt C and fixation preference tests. A comparison of the three tests showed that strabismic amblyopia is not reliably detected with the Teller Acuity Cards. On the other hand, this test appears to be a good one for detecting loss of acuity due to ocular diseases.
abstract_id: PUBMED:33240565
Too Many Shades of Grey: Photometrically and Spectrally Mismatched Targets and Backgrounds in Printed Acuity Tests for Infants and Young Children. Purpose: Acuity tests for infants and young children use preferential looking methods that require a perceptual match of brightness and color between the grey background and the spatial average of the target. As a first step in exploring this matching, this article measures photometric and colorimetric matches in these acuity tests.
Methods: The luminance, uniformity, contrast, and color spectra of Teller Acuity Cards, Keeler Acuity Cards for Infants, and Lea Paddles under ambient, warm, and cold lighting, and of grey-emulating patterns on four digital displays, were measured. Five normal adults' acuities were tested at 10 m observationally.
Results: Luminance and spectral mismatches between target and background were found for the printed tests (Weber contrasts of 0.3% [Teller Acuity Cards], -1.7% [Keeler Acuity Cards for Infants], and -26% [Lea Paddles]). Lighting condition had little effect on contrast, and all printed tests and digital displays met established adult test luminance and uniformity standards. Digital display grey backgrounds had very similar luminance and color whether generated by a checkerboard, vertical grating, or horizontal grating. Improbably good psychophysical acuities (better than -0.300 logMAR [logarithm of the minimum angle of resolution]) were recorded from adults using the printed tests at 10 m, but not using the digital test Peekaboo Vision.
Conclusions: Perceptible contrast between target and background could lead to an incorrectly measured, excessively good acuity. It is not clear whether the luminance and spectral contrasts described here have clinically meaningful consequences for the target patient group, but they may be avoidable using digital tests.
Translational Relevance: Current clinical infant acuity tests present photometric mismatches that may return inaccurate testing results.
abstract_id: PUBMED:7710400
Grating visual acuity with Teller cards compared with Snellen visual acuity in literate patients. Objective: To determine the usefulness of Teller Acuity Cards for detecting three levels of vision deficit, the cutoff for amblyopia (20/40 or poorer), vision impairment (20/70), or legal blindness (20/200).
Design: We compared grating visual acuity with the Teller cards with Snellen visual acuity (our gold standard) in 69 literate patients with amblyopia or other cause of vision loss in a prospective masked study.
Results: Teller card visual acuity and distance Snellen visual acuity correlated significantly (r = .508, P < .001); however, Teller card visual acuity explained only 26% of the variation in distance Snellen visual acuity. Teller card visual acuity had a low sensitivity for detecting vision deficit of 20/40 or poorer (58%), vision deficit of 20/70 or poorer (39%), or legal blindness (24%), but somewhat more accurately reflected near Snellen visual acuity than distance Snellen visual acuity. Teller cards had a higher positive predictive value: 80% for 20/70 visual acuity and 43% for legal blindness, as determined by near Snellen visual acuity. Specificity of Teller cards was 88% for detecting visual acuity loss of 20/70 and 98% for legal blindness. Negative predictive value of Teller cards for detecting visual acuity loss of 20/70 was 50% and for legal blindness was 71%.
Conclusions: Teller Acuity Cards may underestimate the presence of amblyopia of all types, legal blindness, and a specified level of vision impairment (20/70). Even in the presence of normal visual acuity measurements with Teller cards, significant visual loss as assessed by standard Snellen optotypes may be anticipated in many patients.
abstract_id: PUBMED:33235349
Study to establish visual acuity norms with Teller Acuity Cards II for infants from southern China. Objectives: To establish the norms of binocular and monocular acuity and interocular acuity differences for southern Chinese infants and compare these norms with the results for northern Chinese infants.
Methods: A prospective, comparative, and noninterventional study was conducted from January to August 2018. Teller Acuity Cards II were used to determine the binocular and monocular acuity of infants. The tolerance intervals and limits with a stated proportion and probability were used to evaluate the norms of binocular and monocular acuity and interocular acuity differences. An unpaired t-test was used to compare the obtained norms with the reported northern Chinese norms.
Results: The tolerance intervals of binocular acuity (mean acuity of 3.73, 7.35, and 12.01 cpd, respectively, at 12, 24, and 36 months), monocular acuity (mean acuity of 2.88, 6.91, and 10.75 cpd, respectively, at 12, 24, and 36 months), and interocular acuity differences (mean difference of 0.92, 2.89, and 3.99 cpd, respectively, at 12, 24, and 36 months) were obtained, exhibiting an increasing trend with age. The binocular visual acuity norms of southern Chinese infants were significantly lower than those in northern China (4.37 vs. 6.9 cpd at 8 months and 7.35 vs. 26 cpd at 24 months) (P = 0.011).
Conclusions: Northern and southern Chinese infants exhibited distinct acuity norms and visual development patterns. The establishment of population-specific visual acuity norms is necessary for current populations of infants from different regions.
abstract_id: PUBMED:2355168
Expanding the limits of the Teller Visual Acuity Cards. A nomogram is presented that permits quick determination of Snellen visual acuity values for the Teller Visual Acuity Cards at distances up to 10 feet. With this nomogram, it is now possible to determine visual acuity in nonverbal populations, other than infants, while taking into account the effect of refractive error, especially myopia. The formula for constructing the nomogram is also included as generated by the Lotus 1-2-3 spreadsheet.
abstract_id: PUBMED:3062530
Teller acuity cards versus clinical judgment in the diagnosis of amblyopia with strabismus. Teller acuity card testing, which is a form of the preferential-looking procedure, is a popular way of assessing visual acuity in preverbal patients. The authors suspected that the clinical judgment of a pediatric ophthalmologist is superior to the Teller acuity cards in diagnosing amblyopia when strabismus is present. Acuity card and fixation preference measurements on each eye were compared at the same clinical visit in a group of 108 strabismic patients. The authors found that the acuity cards could be used to detect amblyopia. However, the pediatric ophthalmologist was more sensitive in diagnosing amblyopia than the Teller acuity cards in the presence of strabismus.
Answer: No, the norms based on the original Teller Acuity Cards are not appropriate for use with the new Teller Acuity Cards II. A study comparing visual acuity results obtained with the original Teller Acuity Cards (TAC) and the modified, laminated Teller Acuity Cards II (TAC II) found that mean acuity scores were significantly better with TAC than with TAC II at all tested ages (3.5 months, 11 months, and 30 months). The results suggested that normative grating acuity data obtained with the original Teller Acuity Cards need to be adjusted toward lower acuity values by approximately 0.5 octave to be appropriate for use with the new Teller Acuity Cards II (PUBMED:16213399). |
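For readers unfamiliar with octave notation, the adjustment described in this answer is simple arithmetic: one octave corresponds to a factor of 2 in spatial frequency, so shifting a norm by 0.5 octave multiplies or divides the value in cycles/degree by the square root of 2. The short Python sketch below only illustrates that conversion; the 12 cycles/degree starting value is a hypothetical example, not a figure taken from the study.

```python
# Illustrative arithmetic only: what "about 0.5 octave lower" means for a
# grating acuity value. One octave is a factor of 2 in spatial frequency,
# so shifting by n octaves multiplies the cycles/degree value by 2**n.
def adjust_acuity(cpd: float, octaves: float) -> float:
    """Shift a grating acuity (cycles/degree) by a signed number of octaves."""
    return cpd * 2 ** octaves

# Hypothetical example: a TAC norm of 12 cycles/degree adjusted 0.5 octave
# toward lower acuity (value not taken from the cited study).
tac_norm = 12.0
tac2_equivalent = adjust_acuity(tac_norm, -0.5)
print(round(tac2_equivalent, 1))  # ~8.5 cycles/degree
```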
Instruction: Can a validated sleep apnea scoring system predict cardiopulmonary events using propofol sedation for routine EGD or colonoscopy?
Abstracts:
abstract_id: PUBMED:24219821
Can a validated sleep apnea scoring system predict cardiopulmonary events using propofol sedation for routine EGD or colonoscopy? A prospective cohort study. Background: The prevalence of obstructive sleep apnea (OSA), which is linked to obesity, continues to rise in the United States. There are limited data on the risk of sedation-related adverse events (SRAE) in patients with undiagnosed OSA receiving propofol for routine EGD and colonoscopy.
Objective: To identify the prevalence of OSA by using the STOP-BANG questionnaire (SB) and subsequent risk factors for airway interventions (AI) and SRAE in patients undergoing elective EGD and colonoscopy.
Design: Prospective cohort study.
Setting: Tertiary-care teaching hospital.
Patients: A total of 243 patients undergoing routine EGD or colonoscopy at Cleveland Clinic.
Intervention: Chin lift, mask ventilation, placement of nasopharyngeal airway, bag mask ventilation, unplanned endotracheal intubation, hypoxia, hypotension, or early procedure termination.
Main Outcome Measurements: Rates of AI and SRAE.
Results: Mean age of the cohort was 50 ± 16.2 years, and 41% were male. The prevalence of SB+ was 48.1%. The rates of hypoxia (11.2% vs 16.9%; P = .20) and hypotension (10.4% vs 5.9%; P = .21) were similar between SB- and SB+ patients. An SB score ≥3 was found not to be associated with occurrence of AI (relative risk [RR] 1.07, 95% confidence interval [CI] 0.79-1.5) or SRAE (RR 0.81, 95% CI, 0.53-1.2) after we adjusted for total and loading dose of propofol, body mass index (BMI), smoking, and age. Higher BMI was associated with an increased risk for AI (RR 1.02; 95% CI, 1.01-1.04) and SRAE (RR 1.03; 95% CI, 1.01-1.05). Increased patient age (RR 1.09; 95% CI, 1.02-1.2), higher loading propofol doses (RR 1.4; 95% CI, 1.1-1.8), and smoking (RR 1.9; 95% CI, 1.3-2.9) were associated with higher rates of SRAE.
Limitations: Non-randomized study.
Conclusion: A significant number of patients undergoing routine EGD and colonoscopy are at risk for OSA. SB+ patients are not at higher risk for AI or SRAE. However, other risk factors for AI and SRAE have been identified and must be taken into account to optimize patient safety.
abstract_id: PUBMED:24338242
Capnographic monitoring of propofol-based sedation during colonoscopy. Background And Study Aims: Capnography enables the measurement of end-tidal CO2 and thereby the early detection of apnea, prompting immediate intervention to restore ventilation. Studies have shown that capnographic monitoring is associated with a reduction of hypoxemia during sedation for endoscopy and early detection of apnea during sedation for colonoscopy. The primary aim of this prospective randomized study was to evaluate whether capnographic monitoring without tracheal intubation reduces hypoxemia during propofol-based sedation in patients undergoing colonoscopy.
Patients And Methods: A total of 533 patients presenting for colonoscopy at two study sites were randomized to either standard monitoring (n = 266) or to standard monitoring with capnography (n = 267). The incidence of hypoxemia (SO2 < 90 %) and severe hypoxemia (SO2 < 85 %) were compared between the groups. Furthermore, risk factors for hypoxemia were evaluated, and sedation performed by anesthesiologists was compared with nurse-administered propofol sedation (NAPS) or endoscopist-directed sedation (EDS).
Results: The incidence of hypoxemia was significantly lower in patients with capnography monitoring compared with those receiving standard monitoring (18 % vs. 32 %; P = 0.00091). Independent risk factors for hypoxemia were age (P = 0.00015), high body mass index (P = 0.0044), history of sleep apnea (P = 0.025), standard monitoring group (P = 0.000069), total dose of propofol (P = 0.031), and dose of ketamine (P < 0.000001). Patients receiving anesthesiologist-administered sedation developed hypoxemic events more often than those receiving NAPS or EDS. In patients with anesthesiologist-administered sedation, sedation was deeper, a combination of sedative medication (propofol, midazolam and/or ketamine) was administered significantly more often, and sedative doses were significantly higher compared with patients receiving NAPS or EDS.
Conclusions: In patients undergoing colonoscopy during propofol-based sedation capnography monitoring with a simple and inexpensive device reduced the incidence of hypoxemia.
abstract_id: PUBMED:28465784
Does deep sedation with propofol affect adenoma detection rates in average risk screening colonoscopy exams? Aim: To determine the effect of sedation with propofol on adenoma detection rate (ADR) and cecal intubation rates (CIR) in average risk screening colonoscopies compared to moderate sedation.
Methods: We conducted a retrospective chart review of 2604 first-time average risk screening colonoscopies performed at MD Anderson Cancer Center from 2010-2013. ADR and CIR were calculated in each sedation group. Multivariable regression analysis was performed to adjust for potential confounders of age and body mass index (BMI).
Results: One-third of the exams were done with propofol (n = 874). Overall ADR in the propofol group was significantly higher than moderate sedation (46.3% vs 41.2%, P = 0.01). After adjustment for age and BMI differences, ADR was similar between the groups. CIR was 99% for all exams. The mean cecal insertion time was shorter among propofol patients (6.9 min vs 8.2 min; P < 0.0001).
Conclusion: Deep sedation with propofol for screening colonoscopy did not significantly improve ADR or CIR in our population of average risk patients. While propofol may allow for safer sedation in certain patients (e.g., with sleep apnea), the overall effect on colonoscopy quality metrics is not significant. Given its increased cost, propofol should be used judiciously and without the implicit expectation of a higher quality screening exam.
abstract_id: PUBMED:28829527
Does caffeine improve respiratory rate during remifentanil target controlled infusion sedation? A case report in endoscopic sedation. Sedation for endoscopic procedures may be challenging in high-risk patients. Traditional techniques, such as propofol or meperidine/midazolam administration, cannot ensure an adequate level of safety and efficacy for these patients. Remifentanil infusion is a common alternative, but the incidence of apneic events prevents a good level of analgesia from being achieved safely. To overcome this issue, the authors borrowed suggestions from other medical fields. Clinical practice has recognized the wide utility of methylxanthines (caffeine, theophylline, etc.). The positive effect of caffeine on airway function is known, and in the treatment of neonatal apnea it works as a direct stimulant of the central respiratory center. Furthermore, preclinical studies suggest that methylxanthines could have a protective role against opioid inhibition of the bulbar-pontine respiratory center. As described in this report, the authors observed that caffeine is able to restore the respiratory rate even when apnea has been induced by remifentanil. The authors present the management of a respiratory-impaired patient scheduled for a therapeutic colonoscopy. Sedation was based on the combination of remifentanil by target controlled infusion and intravenous caffeine, like an "espresso" to wake up the respiratory drive.
abstract_id: PUBMED:27885537
Safety Analysis of Bariatric Patients Undergoing Outpatient Upper Endoscopy with Non-Anesthesia Administered Propofol Sedation. Background: Non-anesthesia administered propofol (NAAP) has been shown to be a safe and effective method of sedation for patients undergoing gastrointestinal endoscopy. Bariatric surgery patients are potentially at a higher risk for sedation-related complications due to co-morbidities including obstructive sleep apnea. The outcomes of NAAP in bariatric patients have not been previously reported.
Methods: In this retrospective cohort study, severely obese patients undergoing pre-surgical outpatient esophagogastroduodenoscopy (EGD) were compared to non-obese control patients (BMI ≤ 25 kg/m2) undergoing diagnostic EGD at our institution from March 2011-September 2015 using our endoscopy database. Patients' demographics and procedural and recovery data, including any airway interventions, were statistically analyzed.
Results: We included 130 consecutive pre-operative bariatric surgical patients with average BMI 45.8 kg/m2 (range 34-80) and 265 control patients with average BMI 21.9 kg/m2 (range 14-25). The severely obese group had a higher prevalence of sleep apnea (62 vs 8%; p < 0.001), experienced more oxygen desaturations (22 vs 7%; p < 0.001), and received more chin lift maneuvers (20 vs 6%; p < 0.001). Advanced airway interventions were rarely required in either group and were not more frequent in the bariatric group.
Conclusions: With appropriate training of endoscopy personnel, NAAP is a safe method of sedation in severely obese patients undergoing outpatient upper endoscopy.
abstract_id: PUBMED:31043207
Return to Normal Activity after Colonoscopy Using Propofol Sedation. N/A
abstract_id: PUBMED:33133374
Predictor of respiratory disturbances during gastric endoscopic submucosal dissection under deep sedation. Background: Sedation is commonly performed for the endoscopic submucosal dissection (ESD) of early gastric cancer. Severe hypoxemia occasionally occurs due to the respiratory depression during sedation.
Aim: To establish predictive models for respiratory depression during sedation for ESD.
Methods: Thirty-five adult patients undergoing sedation using propofol and pentazocine for gastric ESDs participated in this prospective observational study. Preoperatively, a portable sleep monitor and STOP questionnaires, which are the established screening tools for sleep apnea syndrome, were utilized. Respiration during sedation was assessed by a standard polysomnography technique including the pulse oximeter, nasal pressure sensor, nasal thermistor sensor, and chest and abdominal respiratory motion sensors. The apnea-hypopnea index (AHI) was obtained using a preoperative portable sleep monitor and polysomnography during ESD. A predictive model for the AHI during sedation was developed using either the preoperative AHI or STOP questionnaire score.
Results: All ESDs were completed successfully and without complications. Seventeen patients (49%) had a preoperative AHI greater than 5/h. The intraoperative AHI was significantly greater than the preoperative AHI (12.8 ± 7.6 events/h vs 9.35 ± 11.0 events/h, P = 0.049). Among the potential predictive variables, age, body mass index, STOP questionnaire score, and preoperative AHI were significantly correlated with AHI during sedation. Multiple linear regression analysis determined either STOP questionnaire score or preoperative AHI as independent predictors for intraoperative AHI ≥ 30/h (area under the curve [AUC]: 0.707 and 0.833, respectively) and AHI between 15 and 30/h (AUC: 0.761 and 0.778, respectively).
Conclusion: The cost-effective STOP questionnaire shows performance for predicting abnormal breathing during sedation for ESD that was equivalent to that of preoperative portable sleep monitoring.
abstract_id: PUBMED:34032029
Procedural Sedation for Pediatric Upper Gastrointestinal Endoscopy in Korea. Background: Sedative upper endoscopy is similar in pediatrics and adults, but it is characteristically more likely to lead to respiratory failure. Although recommended guidelines for pediatric procedural sedation are available within South Korea and internationally, Korean pediatric endoscopists use different drugs, either alone or in combination, in practice. Efforts are being made to minimize the risk of sedation while avoiding procedural challenges. The purpose of this study was to collect and analyze data on the sedation methods used by Korean pediatric endoscopists to help physicians perform pediatric sedative upper endoscopy (PSUE).
Methods: The PSUE procedures performed in 15 Korean pediatric gastrointestinal endoscopic units within a year were analyzed. Drugs used for sedation were grouped according to the method of use, and the depth of sedation was evaluated based on the Ramsay scores. The procedures and their complications were also assessed.
Results: In total, 734 patients who underwent PSUE were included. Sedation and monitoring were performed by an anesthesiologist at one of the institutions. The sedative procedures were performed by a pediatric endoscopist at the other 14 institutions. Regarding the number of assistants present during the procedures, 36.6% of procedures had one assistant, 38.8% had 2 assistants, and 24.5% had 3 assistants. The average age of the patients was 11.6 years old. Of the patients, 19.8% had underlying diseases, 10.0% were taking medications such as epilepsy drugs, and 1.0% had snoring or sleep apnea history. The average duration of the procedures was 5.2 minutes. The subjects were divided into 5 groups as follows: 1) midazolam + propofol + ketamine (M + P + K): n = 18, average dose of 0.03 + 2.4 + 0.5 mg/kg; 2) M + P: n = 206, average dose of 0.06 + 2.1 mg/kg; 3) M + K: n = 267, average dose of 0.09 + 0.69 mg/kg; 4) continuous P infusion for 20 minutes: n = 15, average dose of 6.6 mg/kg; 5) M: n = 228, average dose of 0.11 mg/kg. The average Ramsay score for the five groups was 3.7, with significant differences between the groups (P < 0.001). Regarding the adverse effects, desaturation and increased oxygen supply were most prevalent in the M + K group. Decreases and increases in blood pressure were most prevalent in the M + P + K group, and bag-mask ventilation was most used in the M + K group. There were no reported incidents of intubation or cardiopulmonary resuscitation. A decrease in oxygen saturation was observed in 37 of 734 patients, and it significantly increased in young patients (P = 0.001) and when ketamine was used (P = 0.014). Oxygen saturation was also correlated with dosage (P = 0.037). The use of ketamine (P < 0.001) and propofol (P < 0.001) were identified as factors affecting the Ramsay score in the logistic regression analysis.
Conclusion: Although the drug use by Korean pediatric endoscopists followed the recommended guidelines to an extent, it was apparent that they combined the drugs or reduced the doses depending on the patient characteristics to reduce the likelihood of respiratory failure. Inducing deep sedation facilitates comfort during the procedure, but it also leads to a higher risk of complications.
abstract_id: PUBMED:27113580
Depth-dependent changes of obstruction patterns under increasing sedation during drug-induced sedation endoscopy: results of a German monocentric clinical trial. Purpose: Drug-induced sedation endoscopy (DISE) and simulated snoring (SimS) can locate the site of obstruction in patients with sleep-disordered breathing (SDB). There is clinical evidence for a change in collapsibility of the upper airway depending on the depth of sedation. So far, a dose-response relationship between sedation and collapsibility has not been demonstrated.
Methods: DISE and SimS were performed in 60 consecutive patients with SDB under monitoring of depth of sedation by BiSpectral Index® (BIS). Initially, SimS was conducted followed by DISE using bolus application of propofol. Sedation was performed up to a sedation level representing slow wave sleep (BIS = 40). The collapsibility of the upper airway was documented at decreasing sedation levels by an identical pictogram classification.
Results: For all levels and patterns of obstruction, a dose-dependent increase in the collapsibility of the upper airway was detected. A maximum collapsibility was achieved at sedation levels representing slow wave sleep. The collapsibility during SimS corresponded to light sleep stages and did not cover slow wave sleep.
Conclusion: A dose-dependent change of patterns of obstructions can be observed during DISE under BIS monitoring indicating sedation depth. The obtained patterns of obstruction during DISE and SimS should thus be interpreted with regard to the sedation depth.
abstract_id: PUBMED:25799356
A nasal high-flow system prevents hypoxia in dental patients under intravenous sedation. Purpose: Hypoxia is a major complication in dental patients under intravenous sedation (IVS). A nasal high-flow (NHF) system has been reported to achieve effective oxygenation in patients with sleep apnea syndrome. This study investigated the ability of the NHF system to prevent hypoxia in dental patients under IVS.
Materials And Methods: Thirty patients scheduled for dental treatment under IVS were enrolled. Patients were randomly divided into 3 groups: patients spontaneously breathing oxygen at 5 L/minute through a nasal cannula (NC5 group), patients administered oxygen at 30 L/minute through the NHF system, and patients administered oxygen at 50 L/minute through the NHF system. Hypnosis was induced by bolus administration of midazolam (0.05 mg/kg) followed by continuous administration of propofol (target blood concentration, 1.2 to 2 μg/mL). Noninvasive blood pressure, peripheral capillary oxygen saturation (SpO2), heart rate, and bispectral index values were recorded every 2.5 minutes before the induction of anesthesia. Interventions, such as jaw lifting, were recorded during IVS and arterial blood gas analysis was performed at the end of sedation. Patient and surgeon satisfaction with IVS was evaluated by interview.
Results: Minimum SpO2 was lowest in the NC5 group, and surgeons were least satisfied with this group. In addition, interventions were required most frequently in the NC5 group (P < .05). Compared with the NC5 group, use of the NHF system improved partial pressures of oxygen and carbon dioxide in dental patients under IVS (P < .05).
Conclusions: These results suggest that use of the NHF system can prevent hypoxia in dental patients under IVS. Further studies are necessary to determine the appropriate flow rate and indications for NHF in obese patients.
Answer: A validated sleep apnea scoring system, such as the STOP-BANG questionnaire (SB), was used to identify the prevalence of obstructive sleep apnea (OSA) and subsequent risk factors for airway interventions (AI) and sedation-related adverse events (SRAE) in patients undergoing elective EGD and colonoscopy with propofol sedation. The study found that the prevalence of SB positive was 48.1%, but an SB score ≥3 was not associated with an increased occurrence of AI or SRAE after adjusting for total and loading dose of propofol, body mass index (BMI), smoking, and age. Higher BMI, increased patient age, higher loading propofol doses, and smoking were associated with higher rates of SRAE. Therefore, the study concluded that SB positive patients are not at higher risk for AI or SRAE, suggesting that a validated sleep apnea scoring system may not be a reliable predictor of cardiopulmonary events during propofol sedation for routine EGD or colonoscopy (PUBMED:24219821). |
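As context for the SB score ≥3 threshold discussed in this answer, the sketch below illustrates STOP-BANG-style scoring in Python. The eight yes/no items follow the commonly published questionnaire (snoring, tiredness, observed apnea, high blood pressure, BMI > 35 kg/m², age > 50, neck circumference > 40 cm, male gender); they are listed here as an assumption for illustration, not as the exact instrument form used in the cited study.

```python
# Hedged illustration of STOP-BANG style scoring, where a score >= 3 defines
# a high-risk ("SB+") patient as in PUBMED:24219821. The item list below is
# the commonly published questionnaire and is an assumption for illustration.
STOP_BANG_ITEMS = [
    "snoring", "tiredness", "observed_apnea", "high_blood_pressure",
    "bmi_over_35", "age_over_50", "neck_over_40cm", "male_gender",
]

def stop_bang_score(answers: dict) -> int:
    """Count positive yes/no answers across the eight items."""
    return sum(1 for item in STOP_BANG_ITEMS if answers.get(item, False))

def is_high_risk(answers: dict, threshold: int = 3) -> bool:
    """A score >= 3 flags the patient as high risk for OSA (SB+)."""
    return stop_bang_score(answers) >= threshold

example = {"snoring": True, "bmi_over_35": True, "age_over_50": True}
print(stop_bang_score(example), is_high_risk(example))  # 3 True
```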
Instruction: Patient-ventilator asynchronies: may the respiratory mechanics play a role?
Abstracts:
abstract_id: PUBMED:23531269
Patient-ventilator asynchronies: may the respiratory mechanics play a role? Introduction: The mechanisms leading to patient/ventilator asynchrony have never been systematically assessed. We studied the possible association between asynchrony and respiratory mechanics in patients ready to be enrolled for a home non-invasive ventilatory program. Secondarily, we looked for possible differences in the amount of asynchronies between obstructive and restrictive patients and a possible role of asynchrony in influencing the tolerance of non-invasive ventilation (NIV).
Methods: The respiratory pattern and mechanics of 69 consecutive patients with chronic respiratory failure were recorded during spontaneous breathing. After that patients underwent non-invasive ventilation for 60 minutes with a "dedicated" NIV platform in a pressure support mode during the day. In the last 15 minutes of this period, asynchrony events were detected and classified as ineffective effort (IE), double triggering (DT) and auto-triggering (AT).
Results: The overall number of asynchronies was not influenced by any variable of respiratory mechanics or by the underlying pathologies (that is, obstructive vs restrictive patients). There was a high prevalence of asynchrony events (58% of patients). IEs were the most frequent asynchronous events (45% of patients) and were associated with a higher level of pressure support. A high incidence of asynchrony events and IE were associated with a poor tolerance of NIV.
Conclusions: Our study suggests that in patients ventilated non-invasively for chronic respiratory failure, the incidence of patient-ventilator asynchronies was relatively high, but did not correlate with any parameters of respiratory mechanics or underlying disease.
abstract_id: PUBMED:15691394
Ventilator graphics and respiratory mechanics in the patient with obstructive lung disease. Obstruction of the large and small airways occurs in several diseases, including asthma, chronic obstructive pulmonary disease, cystic fibrosis, bronchiectasis, and bronchiolitis. This article discusses the role of ventilator waveforms in the context of factors that contribute to the development of respiratory failure and acute respiratory distress in patients with obstructive lung disease. Displays of pressure, flow, and volume, flow-volume loops, and pressure-volume loops are available on most modern ventilators. In mechanically ventilated patients with airway obstruction, ventilator graphics aid in recognizing abnormalities in function, in optimizing ventilator settings to promote patient-ventilator interaction, and in diagnosing complications before overt clinical signs develop. Ventilator waveforms are employed to detect the presence of dynamic hyperinflation and to measure lung mechanics. Various forms of patient-ventilator asynchrony (eg, auto-triggering and delayed or ineffective triggering) can also be detected by waveform analysis. Presence of flow limitation during expiration and excessive airway secretions can be determined from flow-volume loops. Abnormalities in pressure-volume loops occur when the trigger sensitivity is inadequate, with alterations in respiratory compliance, or during patient-ventilator asynchrony. Thus, ventilator waveforms play an important role in management of mechanically-ventilated patients with obstructive lung disease.
abstract_id: PUBMED:15691396
Respiratory mechanics in the patient who is weaning from the ventilator. Ventilator management of the patient recovering from acute respiratory failure must balance competing objectives. On the one hand, aggressive efforts to promptly discontinue support and remove the artificial airway reduce the risk of ventilator-induced lung injury, nosocomial pneumonia, airway trauma from the endotracheal tube, and unnecessary sedation. On the other hand, overly aggressive, premature discontinuation of ventilatory support or removal of the artificial airway can precipitate ventilatory muscle fatigue, gas-exchange failure, and loss of airway protection. To help clinicians balance these concerns, 2 important research projects were undertaken in 1999-2001. The first was a comprehensive evidence-based literature review of the ventilator-discontinuation process, performed by the McMaster University research group on evidence-based medicine. The second was the development (by the American Association for Respiratory Care, American College of Chest Physicians, and Society of Critical Care Medicine) of a set of evidence-based guidelines based on the latter literature review. From those 2 projects, several themes emerged. First, frequent patient-assessment is required to determine whether the patient needs continued ventilatory support, from both the ventilator and the artificial airway. Second, we should continuously re-evaluate the overall medical management of patients who continue to require ventilatory support, to assure that we address all factors contributing to ventilator-dependence. Third, ventilatory support strategies should be aimed at maximizing patient comfort and unloading the respiratory muscles. Fourth, patients who require prolonged ventilatory support beyond the intensive care unit should go to specialized facilities that can provide gradual reduction of support. Fifth, many of these management objectives can be effectively carried out with protocols executed by nonphysicians.
abstract_id: PUBMED:29430405
Respiratory mechanics, ventilator-associated pneumonia and outcomes in intensive care unit. Aim: To evaluate the predictive capability of respiratory mechanics for the development of ventilator-associated pneumonia (VAP) and mortality in the intensive care unit (ICU) of a hospital in southern Brazil.
Methods: A cohort study was conducted involving a sample of 120 individuals. Static measurements of compliance and resistance of the respiratory system in pressure-controlled ventilation (PCV) and volume-controlled ventilation (VCV) modes were performed on the 1st and 5th days of hospitalization to monitor respiratory mechanics. The severity of the patients' illness was quantified by the Acute Physiology and Chronic Health Evaluation II (APACHE II). The diagnosis of VAP was made based on clinical, radiological and laboratory parameters.
Results: The significant associations found for the development of VAP were APACHE II scores above the average (P = 0.016), duration of MV (P = 0.001) and ICU length of stay above the average (P = 0.003), male gender (P = 0.004), and worsening of respiratory resistance in PCV mode (P = 0.010). Age above the average (P < 0.001), low level of oxygenation on day 1 (P = 0.003) and day 5 (P = 0.004) and low lung compliance during VCV on day 1 (P = 0.032) were associated with death as the outcome.
Conclusion: The worsening of airway resistance in PCV mode indicated the possibility of early diagnosis of VAP. Low lung compliance during VCV and low oxygenation index were death-related prognostic indicators.
abstract_id: PUBMED:34230215
Longitudinal Changes in Patient-Ventilator Asynchronies and Respiratory System Mechanics Before and After Tracheostomy. Background: This was a pilot study to analyze the effects of tracheostomy on patient-ventilator asynchronies and respiratory system mechanics. Data were extracted from an ongoing prospective, real-world database that stores continuous output from ventilators and bedside monitors. Twenty adult subjects were on mechanical ventilation and were tracheostomized during an ICU stay: 55% were admitted to the ICU for respiratory failure and 35% for neurologic conditions; the median duration of mechanical ventilation before tracheostomy was 12 d; and the median duration of mechanical ventilation was 16 d.
Methods: We compared patient-ventilator asynchronies (the overall asynchrony index and the rates of specific asynchronies) and respiratory system mechanics (respiratory-system compliance and airway resistance) during the 24 h before tracheostomy versus the 24 h after tracheostomy. We analyzed possible differences in these variables among the subjects who underwent surgical versus percutaneous tracheostomy. To compare longitudinal changes in the variables, we used linear mixed-effects models for repeated measures along time in different observation periods. A total of 920 h of mechanical ventilation were analyzed.
Results: Respiratory mechanics and asynchronies did not differ significantly between the 24-h periods before and after tracheostomy: compliance of the respiratory system median (IQR) (47.9 [41.3 - 54.6] mL/cm H2O vs 47.6 [40.9 - 54.3] mL/cm H2O; P = .94), airway resistance (9.3 [7.5 - 11.1] cm H2O/L/s vs 7.0 [5.2 - 8.8] cm H2O/L/s; P = .07), asynchrony index (2.0% [1.1 - 3.6%] vs 4.1% [2.3 - 7.6%]; P = .09), ineffective expiratory efforts (0.9% [0.4 - 1.8%] vs 2.2% [1.0 - 4.4%]; P = .08), double cycling (0.5% [0.3 - 1.0%] vs 0.9% [0.5 - 1.9%]; P = .24), and percentage of air trapping (7.6% [4.2 - 13.8%] vs 10.6% [5.9 - 19.2%]; P = .43). No differences in respiratory mechanics or patient-ventilator asynchronies were observed between percutaneous and surgical procedures.
Conclusions: Tracheostomy did not affect patient-ventilator asynchronies or respiratory mechanics within 24 h before and after the procedure.
abstract_id: PUBMED:11373510
Clinical relevance of monitoring respiratory mechanics in the ventilator-supported patient: an update (1995-2000). The introduction of mechanical ventilation in the intensive care unit environment had the merit of putting a potent life-saving tool in the physicians' hands in a number of situations; however, like most sophisticated technologies, it can cause severe side effects and eventually increase mortality if improperly applied. Assessment of respiratory mechanics serves as an aid in understanding the patient-ventilator interactions with the aim to obtain a better performance of the existing ventilator modalities. It has also provided a better understanding of patients' pathophysiology. Thanks to it, new ventilatory strategies and modalities have been developed. Finally, on-line monitoring of respiratory mechanics parameters is going to be more than a future perspective.
abstract_id: PUBMED:20037864
Patient-ventilator interaction Mechanically ventilated patients interact with ventilator functions at different levels, such as triggering of the ventilator, pressurization, and cycling from inspiration to expiration. Patient-ventilator asynchrony in any one of these phases results in fighting the ventilator, increased work of breathing, and respiratory muscle fatigue. Patient-ventilator dyssynchrony occurs when gas delivery from the ventilator does not match the neural output of the respiratory center. The clinical findings of patient-ventilator asynchrony are use of accessory respiratory muscles, tachypnea, tachycardia, active expiration, diaphoresis, and observation of asynchrony between patient respiratory effort and the ventilator waveforms. Among patients with dynamic hyperinflation, such as those with chronic obstructive pulmonary disease, the most frequent causes of patient-ventilator asynchrony are trigger and expiratory asynchronies. In acute respiratory distress syndrome, patient-ventilator asynchrony may develop due to problems in triggering or asynchrony in flow and in the inspiration-expiration cycle. Patient-ventilator interaction during noninvasive mechanical ventilation may be affected by the type of mask used, ventilator type, ventilation modes and parameters, humidification, and sedation. It is important to know the causes of and solutions to patient-ventilator asynchrony in the different patient groups. In this way the patient adapts to the ventilator, and dyspnea, ineffective respiratory effort, and work of breathing may subsequently decrease.
abstract_id: PUBMED:28196936
Influences of Duration of Inspiratory Effort, Respiratory Mechanics, and Ventilator Type on Asynchrony With Pressure Support and Proportional Assist Ventilation. Background: Pressure support ventilation (PSV) is often associated with patient-ventilator asynchrony. Proportional assist ventilation (PAV) offers inspiratory assistance proportional to patient effort, minimizing patient-ventilator asynchrony. The objective of this study was to evaluate the influence of respiratory mechanics and patient effort on patient-ventilator asynchrony during PSV and PAV plus (PAV+).
Methods: We used a mechanical lung simulator and studied 3 respiratory mechanics profiles (normal, obstructive, and restrictive), with variations in the duration of inspiratory effort: 0.5, 1.0, 1.5, and 2.0 s. The Auto-Trak system was studied in ventilators when available. Outcome measures included inspiratory trigger delay, expiratory trigger asynchrony, and tidal volume (VT).
Results: Inspiratory trigger delay was greater in the obstructive respiratory mechanics profile and greatest with an effort of 2.0 s (160 ms); cycling asynchrony, particularly delayed cycling, was common in the obstructive profile, whereas the restrictive profile was associated with premature cycling. In comparison with PSV, PAV+ improved patient-ventilator synchrony, with a shorter triggering delay (28 ms vs 116 ms) and no cycling asynchrony in the restrictive profile. VT was lower with PAV+ than with PSV (630 mL vs 837 mL), as it was with the single-limb circuit ventilator (570 mL vs 837 mL). PAV+ mode was associated with longer cycling delays than were the other ventilation modes, especially for the obstructive profile and higher effort values. Auto-Trak eliminated automatic triggering.
Conclusions: Mechanical ventilation asynchrony was influenced by effort, respiratory mechanics, ventilator type, and ventilation mode. In PSV mode, delayed cycling was associated with shorter effort in obstructive respiratory mechanics profiles, whereas premature cycling was more common with longer effort and a restrictive profile. PAV+ prevented premature cycling but not delayed cycling, especially in obstructive respiratory mechanics profiles, and it was associated with a lower VT.
abstract_id: PUBMED:24199991
Monitoring of respiratory mechanics in the ventilated patient Monitoring during mechanical ventilation allows the measurement of different parameters of respiratory mechanics. Accurate interpretation of these data can be useful for characterizing the state of the different components of the respiratory system, and for guiding ventilator settings. In this review, we describe the basic concepts of respiratory mechanics, their interpretation, and their potential use in fine-tuning mechanical ventilation.
abstract_id: PUBMED:15691393
Respiratory mechanics and ventilator waveforms in the patient with acute lung injury. Acute lung injury/acute respiratory distress syndrome is a syndrome of low respiratory compliance. However, longstanding knowledge of applied respiratory mechanics and refined imaging techniques have shown that this is clearly an oversimplified view. Though the average compliance of the respiratory system is reproducibly low, regional mechanics may vastly differ; lung, airway, and chest wall mechanics may be variably affected; finally, these abnormalities may be very dynamic in nature, being influenced by time, posture, and the way positive-pressure ventilation is applied. Modern mechanical ventilators are equipped to display pressure, flow, and volume waveforms that can be used to measure respiratory compliance, airway resistance, and intrinsic positive end-expiratory pressure. These basic measurements, once the domain of applied physiologists only, are now available to aid clinicians to choose the appropriate ventilator settings to promote lung recruitment and avoid injury during lung-protective ventilatory strategies. High-resolution lung imaging and bedside recording of physiologic variables are important tools for clinicians who want to deliver specialized care to improve the outcome of critically ill patients in acute respiratory failure.
Answer: Patient-ventilator asynchronies are a significant concern in mechanically ventilated patients, as they can lead to increased work of breathing, respiratory muscle fatigue, and poor tolerance of non-invasive ventilation (NIV) (PUBMED:20037864). The role of respiratory mechanics in patient-ventilator asynchrony has been explored in various studies.
One study found that the overall number of asynchronies was not influenced by any variable of respiratory mechanics or by the underlying pathologies, such as obstructive vs restrictive patients. Ineffective efforts (IEs) were the most frequent asynchronous events and were associated with a higher level of pressure support. A high incidence of asynchrony events and IEs were associated with poor tolerance of NIV (PUBMED:23531269).
Ventilator waveforms, which display pressure, flow, and volume, are used to detect dynamic hyperinflation, measure lung mechanics, and identify various forms of patient-ventilator asynchrony, such as auto-triggering and delayed or ineffective triggering. These waveforms are particularly useful in managing mechanically ventilated patients with obstructive lung disease (PUBMED:15691394).
Another study highlighted that the worsening of airway resistance in pressure-controlled ventilation (PCV) mode indicated the possibility of early diagnosis of ventilator-associated pneumonia (VAP). Low lung compliance during volume-controlled ventilation (VCV) and low oxygenation index were death-related prognostic indicators (PUBMED:29430405).
A pilot study analyzing the effects of tracheostomy on patient-ventilator asynchronies and respiratory system mechanics found that tracheostomy did not affect patient-ventilator asynchronies or respiratory mechanics within 24 hours before and after the procedure (PUBMED:34230215).
In summary, while patient-ventilator asynchronies are a complex issue, respiratory mechanics may not directly influence the overall number of asynchronies. However, specific aspects of respiratory mechanics, such as airway resistance and lung compliance, can be indicators of complications like VAP or can influence the outcome of the patient. Ventilator waveforms are essential tools for detecting and managing asynchronies, especially in patients with obstructive lung diseases. |
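For orientation, the asynchrony index percentages quoted in these abstracts are typically computed as the number of asynchronous events divided by the total number of breaths (delivered ventilator cycles plus ineffective efforts), expressed as a percentage. The minimal Python sketch below shows that arithmetic; the event and cycle counts are hypothetical, and individual studies may use slightly different operational definitions.

```python
# Hedged sketch of a commonly used asynchrony index (AI) calculation:
# asynchronous events / (ventilator cycles + ineffective efforts) * 100.
# The exact definition used in each cited study may differ.
def asynchrony_index(events: int, ventilator_cycles: int, ineffective_efforts: int) -> float:
    total_breaths = ventilator_cycles + ineffective_efforts
    return 100.0 * events / total_breaths if total_breaths else 0.0

# Hypothetical 15-minute recording: 12 asynchronous events over 300 delivered
# cycles plus 5 additional ineffective efforts.
print(round(asynchrony_index(12, 300, 5), 1))  # ~3.9 %
```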
Instruction: Is Strain Elastography (IO-SE) Sufficient for Characterization of Liver Lesions before Surgical Resection--Or Is Contrast Enhanced Ultrasound (CEUS) Necessary?
Abstracts:
abstract_id: PUBMED:26114286
Is Strain Elastography (IO-SE) Sufficient for Characterization of Liver Lesions before Surgical Resection--Or Is Contrast Enhanced Ultrasound (CEUS) Necessary? Aim: To evaluate the diagnostic accuracy of IO-SE in comparison to IO-CEUS for the differentiation between malignant and benign liver lesions.
Material And Methods: In a retrospective diagnostic study, IO-CEUS and SE examinations of 49 liver lesions were evaluated and compared with histopathological examinations. Ultrasound was performed using a multifrequency linear probe (6-9 MHz). The CEUS cine loops were evaluated for up to 5 min. The qualitative characterization of IO-SE was based on a color-coding system (blue = hard, red = soft). The stiffness of all lesions was quantified on a scale of 0-6 (0 = low, 6 = high) using 7 ROIs (2 central, 5 peripheral).
Results: All malignant lesions displayed a characteristic portal venous washout and could be diagnosed correctly by IO-CEUS. 3/5 benign lesions could not be characterized properly by either IO-CEUS or IO-SE prior to resection. Thus, for IO-CEUS, sensitivity, specificity, positive and negative predictive value, and accuracy were 100%, 40%, 94%, 100% and 94%. Lesion sizes were between 8 and 59 mm in diameter. Regarding IO-SE, malignant lesions showed marked variability. In qualitative analysis, 31 of the malignant lesions were colored blue, denoting overall induration. Thirteen malignant lesions showed an inhomogeneous color pattern with partial indurations. Two of the benign lesions also displayed overall induration. The other benign lesions showed an inhomogeneous color mapping. Calculated sensitivity of the SE was 70.5%, specificity 60%, PPV 94%, NPV 18.75%, and accuracy 69%.
Conclusion: IO-CEUS is useful for localization and characterization of liver lesions prior to surgical resection whereas IO-SE provided correct characterization only for a limited number of lesions.
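The diagnostic metrics reported above can be re-derived from a 2x2 table. The sketch below assumes 44 malignant and 5 benign lesions (inferred from the 49-lesion total and the "3/5 benign lesions" statement) and reproduces the published percentages; the cell counts are an illustrative reconstruction, not figures taken from the full paper.

```python
# Hedged worked example: re-deriving the IO-CEUS and IO-SE figures in
# PUBMED:26114286 from a 2x2 table. Cell counts are inferred from the
# abstract (49 lesions, 5 benign, therefore 44 malignant) for illustration.
def metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

io_ceus = metrics(tp=44, fp=3, tn=2, fn=0)   # reproduces 100% / 40% / 94% / 100% / 94%
io_se = metrics(tp=31, fp=2, tn=3, fn=13)    # reproduces 70.5% / 60% / 94% / 18.75% / 69%
print({k: round(v, 3) for k, v in io_ceus.items()})
print({k: round(v, 3) for k, v in io_se.items()})
```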
abstract_id: PUBMED:27767982
Intrasurgical dignity assessment of hepatic tumors using semi-quantitative strain elastography and contrast-enhanced ultrasound for optimisation of liver tumor surgery. Objective: To evaluate the efficacy of strain elastography (SE) using semi-quantitative measurement methods compared to constrast enhanced ultrasound during liver tumor surgery (Io-CEUS) for dignity assessment of focal liver lesions(FLL).
Material And Methods: Prospective data acquisition and retrospective analysis of US data from 100 patients (116 lesions) who underwent liver tumor surgery between 10/2010 and 03/2016. Retrospective reading of SE color patterns was performed, establishing groups depending on the dominant color (>50% blue = stiff, inhomogeneous; >50% yellow/red/green = soft tissue). Semi-quantitative analysis was performed by Q-analysis based on a scale from 0 (soft) to 6 (stiff). Two ROIs were placed centrally and 5 ROIs in the lesion's surrounding tissue. Io-CEUS was performed by bolus injection of 5-10 ml of sulphur hexafluoride microbubbles, evaluating wash-in and wash-out kinetics in the arterial, portal venous, and late phases. Histopathology after surgical resection served as the gold standard.
Results: 100 patients (m: 65, f: 35, mean age 60.5 years) with 116 liver lesions were included. Lesion size ranged from 0.5 to 8.4 cm (mean 2.42 cm, SD ± 1.44 cm). Postoperative histology showed 105 malignant and 11 benign lesions. Semi-quantitative analysis showed central induration values of >2.5, suggesting malignancy, in 76/105 cases. Seven benign lesions displayed no central induration and were correctly characterized as benign by SE. ROC analysis and the Youden index showed a sensitivity of 72.4% and specificity of 63.6% assuming a cut-off of 2.5. Io-CEUS correctly characterized 103/105 lesions as malignant. Sensitivity was 98%, specificity 72.7%.
Conclusion: Strain elastography is a valuable tool for non-invasive characterization of FLLs. Semi-quantitative intratumoral stiffness values of >2.5 suggested malignancy. However, sensitivity of Io-CEUS in detecting malignant lesions was higher compared to SE. In conclusion SE should be considered for routine use during intraoperative US in addition to Io-CEUS for optimization of curative liver surgery.
abstract_id: PUBMED:29103478
Liver investigations: Updating on US technique and contrast-enhanced ultrasound (CEUS). Over the past few years, cross-sectional imaging techniques (computed tomography - CT and magnetic resonance - MR) have improved, allowing a more efficient study of focal and diffuse liver diseases. Many papers have been published on the results of routine clinical use of dual-source/dual-energy CT techniques and the use of hepatobiliary contrast agents in MR liver studies. As a consequence, these improvements have diverted attention away from the ultrasound technique and its technical and conceptual evolution. In these years of disinterest, US and especially contrast-enhanced ultrasound (CEUS) have consolidated and grown in their routine clinical application for liver pathologies. In particular, thanks to the introduction of new, dedicated software packages, CEUS has allowed not only qualitative but also quantitative analysis of lesion microcirculation, thus opening a new era in lesion characterization and the evaluation of response to therapy. Moreover, the renewed interest in liver elastography, a baseline ultrasound-based imaging modality, has led to the development of a competitive technique to assess liver stiffness, to evaluate progression towards cirrhosis, and to characterize focal liver lesions, opening the way to avoiding liver biopsy in selected cases. The aim of this review is to offer an up-to-date overview of the state of the art of clinical applications of US and CEUS in the study of focal and diffuse liver pathologies. In addition, it aims to highlight the emerging role of perfusion techniques in the assessment of local and systemic treatment response and to show how the liver's evolution from steatosis to fibrosis can be revealed by elastography.
abstract_id: PUBMED:37005881
An alternative second performance of contrast-enhanced ultrasound for large focal liver lesion is necessary for sufficient characterization. Focal liver lesions (FLLs) evaluated using contrast-enhanced ultrasound (CEUS) and contrast-enhanced computed tomography (CECT) may have the same or similar findings, or substantially discrepant findings. A similar phenomenon can occur between two performances of CEUS when the second performance is conducted shortly after the initial one. Discrepancy between two performances of CEUS for FLLs occurring in the same patient within a short interval has not been well addressed, which raises a challenge for CEUS in the evaluation of FLLs. In this case study, this phenomenon is illustrated and its implications are discussed.
abstract_id: PUBMED:36525179
Shear-wave elastography combined with contrast-enhanced ultrasound algorithm for noninvasive characterization of focal liver lesions. Purpose: To establish shear-wave elastography (SWE) combined with contrast-enhanced ultrasound (CEUS) algorithm (SCCA) and improve the diagnostic performance in differentiating focal liver lesions (FLLs).
Material And Methods: We retrospectively selected patients with FLLs between January 2018 and December 2019 at the First Affiliated Hospital of Sun Yat-sen University. Histopathology was used as a standard criterion except for hemangiomas and focal nodular hyperplasia. CEUS with SonoVue (Bracco Imaging) and SCCA combining CEUS and maximum value of elastography with < 20 kPa and > 90 kPa thresholds were used for the diagnosis of FLLs. The diagnostic performance of CEUS and SCCA was calculated and compared.
Results: A total of 171 FLLs were included, with 124 malignant FLLs and 47 benign FLLs. The area under the curve (AUC), sensitivity, and specificity in detecting malignant FLLs were 0.83, 91.94%, and 74.47% for CEUS, respectively, and 0.89, 91.94%, and 85.11% for SCCA, respectively. The AUC of SCCA was significantly higher than that of CEUS (P = 0.019). Decision curves indicated that SCCA provided greater clinical benefits. The SCCA provided significantly improved prediction of clinical outcomes, with a net reclassification improvement index of 10.64% (P = 0.018) and integrated discrimination improvement of 0.106 (P = 0.019). For subgroup analysis, we divided the FLLs into a chronic-liver-disease group (n = 88 FLLs) and a normal-liver group (n = 83 FLLs) according to the liver background. In the chronic-liver-disease group, there were no differences between the CEUS-based and SCCA diagnoses. In the normal-liver group, the AUC of SCCA and CEUS in the characterization of FLLs were 0.89 and 0.83, respectively (P = 0.018).
Conclusion: SCCA is a feasible tool for differentiating FLLs in patients with normal liver backgrounds. Further investigations are necessary to validate the universality of this algorithm.
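The abstract above states only that the SCCA combines CEUS with the maximum elastography value using < 20 kPa and > 90 kPa thresholds, without spelling out the decision rule. The Python sketch below is one plausible reading, offered purely as an illustration of how stiffness thresholds might gate a CEUS-based call; it should not be read as the authors' published algorithm.

```python
# Hypothetical sketch of a shear-wave elastography + CEUS combination rule of
# the kind described for the "SCCA" in PUBMED:36525179. The routing below is
# one plausible reading of the stated thresholds, NOT the authors' algorithm.
def scca_classify(max_stiffness_kpa: float, ceus_malignant: bool) -> str:
    if max_stiffness_kpa > 90:      # very stiff lesions treated as malignant
        return "malignant"
    if max_stiffness_kpa < 20:      # very soft lesions treated as benign
        return "benign"
    # indeterminate stiffness: fall back to the CEUS assessment
    return "malignant" if ceus_malignant else "benign"

print(scca_classify(95.0, ceus_malignant=False))   # malignant
print(scca_classify(12.0, ceus_malignant=True))    # benign
print(scca_classify(45.0, ceus_malignant=True))    # malignant (CEUS decides)
```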
abstract_id: PUBMED:29797043
Contrast-enhanced ultrasound (CEUS) and image fusion for procedures of liver interventions Clinical/methodical Issue: Contrast-enhanced ultrasound (CEUS) is becoming increasingly important for the detection and characterization of malignant liver lesions and allows percutaneous treatment when surgery is not possible. Contrast-enhanced ultrasound image fusion with computed tomography (CT) and magnetic resonance imaging (MRI) opens up further options for the targeted investigation of a modified tumor treatment.
Methodical Innovations: Ultrasound image fusion offers the potential for real-time imaging and can be combined with other cross-sectional imaging techniques as well as CEUS.
Performance: With the implementation of ultrasound contrast agents and image fusion, ultrasound has been improved in the detection and characterization of liver lesions in comparison to other cross-sectional imaging techniques. In addition, this method can also be used for intervention procedures. The success rate of fusion-guided biopsies or CEUS-guided tumor ablation lies between 80 and 100% in the literature.
Achievements: Ultrasound-guided image fusion using CT or MRI data, in combination with CEUS, can facilitate diagnosis and therapy follow-up after liver interventions.
Practical Recommendations: In addition to the primary applications of image fusion in the diagnosis and treatment of liver lesions, further useful indications can be integrated into daily work. These include, for example, intraoperative and vascular applications as well as applications in other organ systems.
abstract_id: PUBMED:28980061
Applications of contrast-enhanced ultrasound in the pediatric abdomen. Contrast-enhanced ultrasound (CEUS) is a radiation-free, safe, and in specific clinical settings, highly sensitive imaging modality. Over the recent decades, there is cumulating experience and a large volume of published safety and efficacy data on pediatric CEUS applications. Many of these applications have been directly translated from adults, while others are unique to the pediatric population. The most frequently reported intravenous abdominal applications of CEUS in children are the characterization of focal liver lesions, monitoring of solid abdominal tumor response to treatment, and the evaluation of intra-abdominal parenchymal injuries in selected cases of blunt abdominal trauma. The intravesical CEUS application, namely contrast-enhanced voiding urosonography (ceVUS), is a well-established, pediatric-specific imaging technique entailing the intravesical administration of ultrasound contrast agents for detection and grading of vesicoureteral reflux. In Europe, all pediatric CEUS applications remain off-label. In 2016, the United States Food and Drug Administration (FDA) approved the most commonly used worldwide second-generation ultrasound contrast SonoVue®/Lumason® for pediatric liver and intravesical applications, giving new impetus to pediatric CEUS worldwide.
abstract_id: PUBMED:33996586
Can Risk Stratification Based on Ultrasound Elastography of Background Liver Assist CEUS LI-RADS in the Diagnosis of HCC? Objective: To explore whether risk stratification based on ultrasound elastography of liver background assists contrast-enhanced ultrasound liver imaging reporting and data system (CEUS LI-RADS) in diagnosing HCC.
Materials And Methods: In total, 304 patients with focal liver lesions (FLLs) confirmed by pathology who underwent CEUS and ultrasound elastography were included in this retrospective study. Patients with chronic hepatitis B (CHB, n=193) and non-CHB (n=111) were stratified by four liver stiffness measurement (LSM) thresholds. A LI-RADS category was assigned to FLLs using CEUS LI-RADS v2017. The diagnostic performance was assessed with the AUC, sensitivity, specificity, PPV, and NPV.
Results: The mean background liver stiffness of HCC patients with CHB, HCC patients without CHB and non-HCC patients without CHB were 9.72 kPa, 8.23 kPa and 4.97 kPa, respectively. The AUC, sensitivity, specificity and PPV of CEUS LI-RADS for HCC in CHB patients with LSM ≥ 5.8 kPa, ≥ 6.8 kPa, ≥ 9.1 kPa, and ≥ 10.3 kPa were high, with corresponding values of 0.745 to 0.880, 94.2% to 95.3%, 81.3% to 85.7%, and 98.1% to 98.8%, respectively. Higher AUC and specificity for HCC were observed in non-CHB patients with LSM ≥ 9.1 kPa and ≥ 10.3 kPa compared to non-CHB patients with LSM ≥ 5.8 kPa and ≥ 6.8 kPa, with corresponding values of 0.964/1.000 vs 0.590/0.580, and 100%/100% vs 60%/70%, respectively.
Conclusion: CEUS LI-RADS has a good diagnostic performance in CHB patients regardless of the background liver stiffness. Furthermore, CEUS LI-RADS can be applied for non-CHB patients with a LSM ≥ 9.1 kPa.
abstract_id: PUBMED:35466932
Modified contrast-enhanced ultrasonography with the new high-resolution examination technique of high frame rate contrast-enhanced ultrasound (HiFR-CEUS) for characterization of liver lesions: First results. Aim: To examine to what extent high frame rate contrast-enhanced ultrasound (HiFR-CEUS) enables conclusive diagnosis of liver changes with suspected malignancy.
Material/methods: Ultrasound examinations were performed by an experienced examiner using a multifrequency probe (SC6-1) on a high-end ultrasound system (Resona 7, Mindray) to clarify liver changes that were unclear on the B-scan. A bolus of 1-2.4 ml of the sulphur hexafluoride microbubble ultrasound contrast agent SonoVue™ (Bracco SpA, Italy) was administered with DICOM storage of CEUS examinations from the early arterial phase (5-15 s) to the late phase (5-6 min). Based on the image files stored in the PACS, an independent reading was performed regarding image quality and finding-related diagnostic significance (0 not informative/non-diagnostic to 5 excellent image quality/confident diagnosis possible). Reference standards were clinical follow-up, comparison with promptly performed computed tomography or magnetic resonance imaging where possible, and in some cases histopathology.
Results: We examined 100 patients (42 women, 58 men, from 18 years to 90 years, mean 63±13 years) with different entities of focal and diffuse liver parenchymal changes, which could be detected in all cases with sufficient image quality with CEUS and with high image quality with HiFR-CEUS. Proportionally septate cysts were found in n = 19 cases, scars after hemihepatectomy with locally reduced fat in n = 5 cases, scars after microwave ablation in n = 19 cases, hemangiomas in n = 9 cases, focal nodular hyperplasia in n = 8 cases, colorectal metastases in n = 15 cases, hepatocellular carcinoma (HCC) in n = 11 cases, and Osler disease in n = 8 cases. The size of lesions ranged from 5 mm to 200 mm with a mean value of 33.1±27.8 mm. Conclusive diagnoses could be made by the experienced investigator in 97/100 cases with CEUS, confirmed by reference imaging and in part by histopathology or follow-up. The image quality of HiFR-CEUS was rated with a score of 3 to 5: 62 cases were rated good (4 points), 27 very good (5 points), and 11 still satisfactory (3 points) despite difficult acoustic conditions. The specificity of HiFR-CEUS was 97%, the sensitivity 97%, the positive predictive value 94%, the negative predictive value 99% and the accuracy 97%.
Conclusion: HiFR-CEUS has demonstrated improved image quality resulting in high diagnostic accuracy. In the hands of an experienced investigator, HiFR-CEUS allows the assessment of focal and diffuse liver parenchymal changes that are unclear on the B-scan and dynamic assessment of microcirculation in solid and vascular changes.
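The specificity, sensitivity, PPV, NPV, and accuracy reported above all derive from the four cells of a 2×2 table of test result versus reference standard. The short sketch below walks through that arithmetic with hypothetical counts (not the study's raw data), only to make the relationships among the five metrics explicit.

```python
# Hypothetical 2x2 counts for a diagnostic test (not the study's data).
tp, fp = 33, 2    # true positives, false positives
fn, tn = 1, 64    # false negatives, true negatives

sensitivity = tp / (tp + fn)          # proportion of diseased correctly flagged
specificity = tn / (tn + fp)          # proportion of non-diseased correctly cleared
ppv = tp / (tp + fp)                  # positive predictive value
npv = tn / (tn + fn)                  # negative predictive value
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(f"Sensitivity {sensitivity:.2%}, Specificity {specificity:.2%}")
print(f"PPV {ppv:.2%}, NPV {npv:.2%}, Accuracy {accuracy:.2%}")
```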
abstract_id: PUBMED:26102419
Contrast-Enhanced Ultrasound of Focal Liver Lesions. Objective: The purpose of this article is to discuss the use of contrast-enhanced ultrasound (CEUS) in focal liver lesions.
Conclusion: Focal liver lesions are usually detected incidentally during abdominal ultrasound. The injection of microbubble ultrasound contrast agents improves the characterization of focal liver lesions that are indeterminate on conventional ultrasound. The use of CEUS is recommended in official guidelines and suggested as a second diagnostic step after ultrasound detection of indeterminate focal liver lesions to immediately establish the diagnosis, especially for benign liver lesions, such as hemangiomas, avoiding further and more expensive examinations.
Answer: Based on the provided abstracts, intraoperative strain elastography (IO-SE) alone may not be sufficient for the characterization of liver lesions before surgical resection, and contrast-enhanced ultrasound (CEUS) appears to be necessary for a more accurate diagnosis.
In a retrospective diagnostic study comparing IO-SE and IO-CEUS to histopathological examinations, it was found that all malignant lesions displayed a characteristic portal venous washout and could be diagnosed correctly by IO-CEUS. However, IO-SE provided correct characterization only for a limited number of lesions, with a sensitivity of 70.5%, specificity of 60%, and accuracy of 69% (PUBMED:26114286).
Another study evaluating the efficacy of semi-quantitative strain elastography compared to IO-CEUS for dignity assessment of focal liver lesions (FLL) during liver tumor surgery found that IO-CEUS correctly characterized 103/105 malignant lesions, with a sensitivity of 98% and specificity of 72.7%. Strain elastography suggested malignancy with semi-quantitative intratumoral stiffness values of >2.5, but the sensitivity of IO-CEUS in detecting malignant lesions was higher compared to SE (PUBMED:27767982).
Furthermore, a study on a shear-wave elastography combined with CEUS algorithm (SCCA) showed that the SCCA had a significantly higher area under the curve (AUC) than CEUS alone for differentiating FLLs in patients with normal liver backgrounds, indicating that combining elastography with CEUS may improve diagnostic performance (PUBMED:36525179).
In conclusion, while strain elastography can be a valuable tool for non-invasive characterization of FLLs, the sensitivity and specificity of IO-CEUS in detecting malignant lesions are higher. Therefore, CEUS is recommended for the characterization of liver lesions prior to surgical resection, and SE should be considered for routine use during intraoperative US in addition to IO-CEUS for optimization of curative liver surgery (PUBMED:27767982; PUBMED:36525179). |
Instruction: Does a pelvic belt influence sacroiliac joint laxity?
Abstracts:
abstract_id: PUBMED:12206939
Does a pelvic belt influence sacroiliac joint laxity? Objective: To evaluate the influence of different positions and tensions of a pelvic belt on sacroiliac joint laxity in healthy young women.
Background: Clinical experience has shown that positive effects can be obtained with different positions and tensions of a pelvic belt. A functional approach to the treatment of the unstable pelvic girdle requires an understanding of the effect of a pelvic belt on a normal pelvic girdle.
Methods: Sacroiliac joint laxity was assessed with Doppler imaging of vibrations. The influence of two different positions (low: at the level of the symphysis and high: just below the anterior superior iliac spines) and tensions (50 and 100 N) of a pelvic belt was measured in ten healthy subjects, in the prone position. Data were analysed using repeated measures analysis of variance.
Results: Tension does not have a significant influence on the amount by which sacroiliac joint laxity with belt differs from sacroiliac joint laxity without belt. A significant effect was found for the position of the pelvic belt. Mean sacroiliac joint laxity value was 2.2 (SD, 0.2) threshold units nearer to the without-belt values when the belt was applied in low position as compared to the case with the belt in high position.
Conclusions: A pelvic belt is most effective in a high position, while a tension of 100 N does not reduce laxity more than 50 N.
Relevance: Information about the biomechanical effects of a pelvic belt provided by this study will contribute to a better understanding of the treatment of women with pregnancy-related pelvic pain.
abstract_id: PUBMED:16214275
The mechanical effect of a pelvic belt in patients with pregnancy-related pelvic pain. Background: Many patients with pregnancy-related pelvic girdle pain experience relief of pain when using a pelvic belt, which makes its use a common part of the therapy, but there is no in vivo proof of the mechanical effect of the application of a pelvic belt.
Methods: The influence of a pelvic belt on sacroiliac joint laxity values was tested in 25 subjects with pregnancy-related pelvic girdle pain by means of Doppler imaging of vibrations in prone position with and without the application of a pelvic belt. The belt was adjusted just below the anterior superior iliac spines (high position) and at the level of the pubic symphysis (low position).
Findings: Sacroiliac joint laxity values decreased significantly during both applications of a pelvic belt (P<0.001). The application of a pelvic belt in high position decreased sacroiliac joint laxity to a significantly greater degree than the application of a belt in low position (P=0.006). The decrease of laxity significantly correlated with the decrease of the score on the active straight leg raise test (r=0.57 for the low position, P=0.003 and r=0.54 for the high position, P=0.005).
Interpretation: Application of a pelvic belt significantly decreases mobility of the sacroiliac joints. The decrease of mobility is larger with the belt positioned just caudal to the anterior superior iliac spines than at the level of the pubic symphysis. The findings are in line with the biomechanical predictions and might be the basis for clinical studies about the use of pelvic belts in pregnancy-related pelvic girdle pain.
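The relation reported above between the belt-induced decrease in laxity and the change in the active straight leg raise score (r = 0.57 and r = 0.54) is an ordinary correlation coefficient. A minimal sketch of how such a coefficient and its p-value could be computed is given below; the paired arrays are hypothetical measurements, not the study's data.

```python
# Hypothetical paired measurements (not the study's data):
# per-subject decrease in SIJ laxity (threshold units) and decrease in ASLR score.
from scipy.stats import pearsonr

laxity_decrease = [1.2, 0.8, 2.1, 1.5, 0.4, 1.9, 1.1, 0.7, 1.6, 2.3]
aslr_decrease   = [1.0, 0.5, 1.8, 1.2, 0.6, 1.5, 0.9, 0.4, 1.1, 2.0]

r, p_value = pearsonr(laxity_decrease, aslr_decrease)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```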
abstract_id: PUBMED:1566778
An integrated therapy for peripartum pelvic instability: a study of the biomechanical effects of pelvic belts. Objectives: The objectives of this study were to investigate the influence of pelvic belts on the stability of the pelvis and to discuss the treatment of peripartum pelvic instability.
Study Design: In six human pelvis-spine preparations, sagittal rotation in the sacroiliac joints was induced by bidirectional forces directed at the acetabula. Weight-bearing was mimicked by the application of a compressive force to the spine. The biomechanical effect of a pelvic belt was measured in 12 sacroiliac joints.
Results: The pelvic belt caused a significant decrease in the sagittal rotation in the sacroiliac joints. The effect of a 100 N belt did not differ significantly from that of a 50 N belt.
Conclusion: The combination of a pelvic belt and muscle training enhances pelvic stability. The load of the belt can be relatively small; location is more important. The risk of symphysiodesis, especially as a result of the insertion of bone grafts, is emphasized.
abstract_id: PUBMED:12486354
The prognostic value of asymmetric laxity of the sacroiliac joints in pregnancy-related pelvic pain. Study Design: Prospective cohort study.
Objective: To determine the prognostic value of asymmetric laxity of the sacroiliac joints during pregnancy on pregnancy-related pelvic pain postpartum.
Summary Of Background Data: In a previous study, we observed a significant relation between asymmetric laxity of the sacroiliac joints and moderate to severe pregnancy-related pelvic pain during pregnancy.
Methods: A group of 123 women were prospectively questioned and examined, and sacroiliac joint laxity was measured by means of Doppler imaging of vibrations at 36 weeks' gestation and at 8 weeks' postpartum. A left to right difference in sacroiliac joint laxity ≥ 3 threshold units was considered to indicate asymmetric laxity of the sacroiliac joints.
Results: In subjects with moderate to severe pregnancy-related pelvic pain during pregnancy, sacroiliac joint asymmetric laxity was predictive of moderate to severe pregnancy-related pelvic pain persisting into the postpartum period in 77% of the subjects. The sensitivity, specificity, and positive predictive value of sacroiliac joint asymmetric laxity during pregnancy for pregnancy-related pelvic pain persisting postpartum were 65%, 83%, and 77%, respectively. Subjects with moderate to severe pregnancy-related pelvic pain and asymmetric laxity of the sacroiliac joints during pregnancy have a threefold higher risk of moderate to severe pregnancy-related pelvic pain postpartum than subjects with symmetric laxity.
Conclusion: These data indicate that in women with moderate to severe complaints of pelvic pain during pregnancy, sacroiliac joint asymmetric laxity measured during pregnancy is predictive of the persistence of moderate to severe pregnancy-related pelvic pain into the postpartum period.
abstract_id: PUBMED:26897433
Radiological evaluation of the posterior pelvic ring in paediatric patients: Results of a retrospective study developing age- and gender-related non-osseous baseline characteristics in paediatric pelvic computed tomography - References for suspected sacroiliac joint injury. Introduction: The prevalence of paediatric pelvic injury is low, yet they are often indicative of accompanying injuries, and an instable pelvis at presentation is related to long-term poor outcome. Judging diastasis of the sacroiliac joint in paediatric pelvic computed tomography is challenging, as information on their normal appearance is scarce. We therefore sought to generate age- and gender-related standard width measurements of the sacroiliac joint in children for comparison.
Patients And Methods: A total of 427 pelvic computed tomography scans in paediatric patients (<18 years old) were retrospectively evaluated. After applying exclusion criteria, 350 scans remained for measurements. Taking a standard approach we measured the sacroiliac joint width bilaterally in axial and coronal planes.
Results: We illustrate age- and gender-related measurements of the sacroiliac joint width as a designated continuous 3rd, 15th, 50th, 85th and 97th centile graph, respectively. Means and standard deviations in the joint width are reported for four age groups. There are distinct changes in the sacroiliac joint's appearance during growth. In general, male children exhibit broader sacroiliac joints than females at the same age, although this difference is significant only in the 11 to 15-year-old age group.
Conclusion: The sacroiliac joint width in children as measured in coronal and axial CT scans differs in association with age and gender. When the sacroiliac joint width is broader than the 97th centile published in our study, we strongly encourage considering a sacroiliac joint injury.
abstract_id: PUBMED:23111368
Effect of the pelvic compression belt on the hip extensor activation patterns of sacroiliac joint pain patients during one-leg standing: a pilot study. As a means of external stabilization of the sacroiliac joint (SIJ), many clinicians have often advocated the use of the pelvic compression belt (PCB). The objective of this pilot study was to compare the effects of the PCB on hip extensor muscle activation patterns during one-leg standing in subjects with and without sacroiliac joint pain (SIJP). Sixteen subjects with SIJP and fifteen asymptomatic volunteers participated in this study. Surface electromyography (EMG) data [signal amplitude and premotor reaction time (RT)] were collected from the gluteus maximus and biceps femoris muscles of the supporting leg during one-leg standing with and without the PCB. Compared to that of the asymptomatic individuals, the EMG amplitude of the biceps femoris was significantly decreased in individuals with SIJP upon the application of the PCB (p < 0.05). Furthermore, on using the PCB, in individuals with SIJP, the RT of the gluteus maximus was significantly decreased; however, the RT of the biceps femoris was increased (p < 0.05). Thus, our data support the use of the PCB to modify the activation patterns of the hip extensors among patients with SIJP.
abstract_id: PUBMED:12616067
Percutaneous computed tomographic stabilization of the pathologic sacroiliac joint. Metastases to the sacroiliac joint region can be a source of significant pain in many patients who are terminally ill. Six patients with metastatic lesions in the sacroiliac region who presented with significant posterior pelvic pain were treated with computed tomography-guided insertion of iliosacral screws. All patients reported excellent pain control in the early postoperative period. Computed tomography-guided insertion of iliosacral screws in an area of relatively preserved bone stock provides good purchase of the screws. It is a safe percutaneous procedure and it helps alleviate pain in patients with sacroiliac metastases.
abstract_id: PUBMED:11703199
Pelvic pain during pregnancy is associated with asymmetric laxity of the sacroiliac joints. Objective: The aim of this study was to investigate the association between pregnancy-related pelvic pain (PRPP) and sacroiliac joint (SIJ) laxity.
Methods: A cross-sectional analysis was performed in a group of 163 women, 73 with moderate or severe (PRPP+) and 90 with no or mild (PRPP-) PRPP at 36 weeks of pregnancy. SIJ laxity was measured by means of Doppler imaging of vibrations in threshold units (TU). Pain, clinical signs and disability were assessed with visual analog scale (VAS), posterior pelvic pain provocation (PPPP) test, active straight leg raise (ASLR) test, and Quebec back pain disability scale (QBPDS), respectively.
Results: Mean SIJ laxity in the PRPP+ group was not significantly different from the PRPP- group (3.0 versus 3.4 TU). The mean left-right difference, however, was significantly higher in the PRPP+ group (2.2 TU) than in the PRPP- group (0.9 TU). In the PRPP- group, only 4% had asymmetric laxity of the SIJs in contrast to 37% of the PRPP+ group. Between the PRPP+ subjects with asymmetric and symmetric laxity of the SIJs significant differences were found with respect to mean VAS for pain (7.9 versus 7.0), positive PPPP test (59% versus 35%), positive ASLR test (85% versus 41%) and mean QBPDS score (61 versus 50).
Conclusions: Increased SIJ laxity is not associated with PRPP. In fact, pregnant women with moderate or severe pelvic pain have the same laxity in the SIJs as pregnant women with no or mild pain. However, a clear relation between asymmetric laxity of the SIJs and PRPP is found.
abstract_id: PUBMED:12902785
Biomechanical comparison of posterior pelvic ring fixation. Objective: To determine relative stiffness of various methods of posterior pelvic ring internal fixation.
Design: Simulated single leg stance loading of OTA 61-C1.2, a2 fracture model (unilateral sacroiliac joint disruption and pubic symphysis diastasis).
Setting: Orthopaedic biomechanic laboratory.
Outcome Variables: Pubic symphysis gapping, sacroiliac joint gapping, hemipelvis coronal plane rotation.
Methods: Nine different posterior pelvic ring fixation methods were tested on each of six hard plastic pelvic models. Pubic symphysis was plated. The pelvic ring was loaded to 1000N.
Results: All data were normalized to values obtained with posterior fixation with a single iliosacral screw. The types of fixation could be grouped into three categories based on relative stiffness of fixation: For sacroiliac joint gapping, group 1-fixation stiffness 0.8 and above (least stiff) includes a single iliosacral screw (conditions A and J), an isolated tension band plate (condition F), and two sacral bars (condition H); group 2-fixation stiffness 0.6 to 0.8 (intermediate stiffness) includes a tension band plate and an iliosacral screw (condition E), one or two sacral bars in combination with an iliosacral screw (conditions G and I); group 3-fixation stiffness 0.6 and below (greatest stiffness) includes two anterior sacroiliac plates (condition D), two iliosacral screws (condition B), and two anterior sacroiliac plates and an iliosacral screw (condition C). For sacroiliac joint rotation, group 1-fixation stiffness 0.8 and above includes a single iliosacral screw (conditions A and J), two anterior sacroiliac plates (condition D), a tension band plate in isolation or in combination with an iliosacral screw (conditions E and F), and two sacral bars (condition H); group 2-fixation stiffness 0.6 to 0.8 (intermediate level of instability) includes either one or two sacral bars in combination with an iliosacral screw (conditions G and I); group 3-fixation stiffness 0.6 and below (stiffest fixation) consists of two iliosacral screws (condition B) and two anterior sacroiliac plates and an iliosacral screw (condition C).
Discussion: Under conditions of maximal instability with similar material properties between specimens, differences in stiffness of posterior pelvic ring fixation can be demonstrated. The choice of which method to use is multifactorial.
abstract_id: PUBMED:33337683
Position-related Change of Pelvic Incidence Depends on the Nonfused Sacroiliac Joint in Patients with Degenerative Spinal Diseases. Study Design: A retrospective cross-sectional study.
Objective: This study aims to determine whether the sacroiliac (SI) joint motion correlated to pelvic incidence (PI) change from standing to supine position in patients with degenerative spinal diseases.
Summary Of Background Data: PI has been found to be an unstable parameter after adolescence, as its fixed nature has been challenged by several studies. The SI joint has been shown to have some motion, and age-related degenerative changes of the cartilage and SI ligaments contribute to SI joint instability.
Methods: The study comprised both a specimen study and a radiographic study. One human specimen was acquired, on which PI was measured with different sacrum-ilium positions. In the radiographic study, patients with old thoracolumbar fracture, lumbar disc herniation, stenosis, and spondylolisthesis were included. Ankylosing spondylitis (AS) patients were also included as a control group. PI was measured on standing x-rays and on scanograms of computed tomography images in the supine position.
Results: The specimen study revealed that SI motion leads to a change in PI with fixed pelvic thickness. In the radiographic study, 101 patients with different etiologies and 30 AS patients were included. After stratifying into different age groups, standing PI was significantly larger than supine PI in each age group (P = 0.002, <0.001, and <0.001, respectively). In patients with degenerative diseases, PI was significantly larger in the standing position than in the supine position. ΔPI showed no significant difference across etiologies. However, in AS patients, standing PI and supine PI revealed no significant difference (P = 0.528).
Conclusion: A mobile SI joint may be the cause of increased PI in the aging spine. The dynamic change of PI is etiology-independent if the SI joint is not fused. Older patients have a greater position-related change of PI. Level of Evidence: 4.
Answer: Yes, a pelvic belt does influence sacroiliac joint laxity. Studies have shown that the application of a pelvic belt can significantly decrease the mobility of the sacroiliac joints. Specifically, the use of a pelvic belt in a high position, just below the anterior superior iliac spines, decreases sacroiliac joint laxity to a greater degree than when the belt is applied in a low position, at the level of the pubic symphysis (PUBMED:16214275). Additionally, the tension applied by the belt does not have a significant influence on the reduction of laxity, as a tension of 100 N does not reduce laxity more than 50 N (PUBMED:12206939). The findings suggest that the position of the belt is more important than the amount of tension for reducing sacroiliac joint laxity (PUBMED:1566778). These results are relevant for the treatment of women with pregnancy-related pelvic pain, as many patients experience relief of pain when using a pelvic belt (PUBMED:16214275). |
Instruction: Is there a disparity in the prevalence of asthma between American Indian and white adults?
Abstracts:
abstract_id: PUBMED:18773326
Is there a disparity in the prevalence of asthma between American Indian and white adults? Background: Though racial disparities in asthma prevalence are well documented, little is known about the burden of asthma in American Indians compared to whites in the United States.
Objectives: To compare the prevalence of asthma among American Indian and white adults 18 years of age and older in Montana.
Methods: We used Behavioral Risk Factor Surveillance System (BRFSS) data representative of the Montana population from 2001 to 2006.
Results: Using multiple logistic regression analysis, American Indian race was not independently associated with increased asthma prevalence (OR 1.05, 95% CI 0.83-1.33). Obesity, lower household income and lower educational attainment, factors disproportionately affecting American Indians in Montana, were independently associated with increased asthma prevalence.
Conclusions: Regional and national surveillance is needed to comprehensively document asthma prevalence in American Indians and other underrepresented minorities in the United States.
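The adjusted odds ratio of 1.05 (95% CI 0.83-1.33) quoted above comes from a multiple logistic regression in which race is modelled together with covariates such as obesity, household income, and education. The snippet below is a hedged sketch of how adjusted odds ratios with confidence intervals could be obtained with statsmodels; the column names and simulated data are invented for illustration, and the model ignores the complex survey design of the BRFSS.

```python
# Illustrative multivariable logistic regression (hypothetical data and columns).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "asthma": rng.integers(0, 2, n),            # 1 = current asthma (simulated)
    "american_indian": rng.integers(0, 2, n),   # 1 = American Indian race (simulated)
    "obese": rng.integers(0, 2, n),
    "low_income": rng.integers(0, 2, n),
    "low_education": rng.integers(0, 2, n),
})

model = smf.logit(
    "asthma ~ american_indian + obese + low_income + low_education", data=df
).fit(disp=False)

odds_ratios = np.exp(model.params)              # exponentiated coefficients = ORs
ci = np.exp(model.conf_int())                   # 95% confidence intervals on the OR scale
print(pd.concat([odds_ratios.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```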
abstract_id: PUBMED:34320979
Chronic respiratory disease disparity between American Indian/Alaska Native and white populations, 2011-2018. Background: American Indian/Alaska Native (AI/AN) populations have been disproportionately affected by chronic respiratory diseases for reasons incompletely understood. Past research into disease disparity using population-based surveys mostly focused on state-specific factors. The present study investigates the independent contributions of AI/AN racial status and other socioeconomic/demographic variables to chronic respiratory disease disparity in an 11-state region with historically high AI/AN representation. Using data from the Behavioral Risk Factor Surveillance System (BRFSS) spanning years 2011-2018, this work provides an updated assessment of disease disparity and potential determinants of respiratory health in AI/AN populations.
Methods: This cross-sectional study used data from the BRFSS survey, 2011-2018. The study population included AI/AN and non-Hispanic white individuals resident in 11 states with increased proportion of AI/AN individuals. The yearly number of respondents averaged 75,029 (62878-87,350) which included approximately 5% AI/AN respondents (4.5-6.3%). We compared the yearly adjusted prevalence for chronic respiratory disease, where disease status was defined by self-reported history of having asthma and/or chronic obstructive pulmonary disease (COPD). Multivariable logistic regression was performed to determine if being AI/AN was independently associated with chronic respiratory disease. Covariates included demographic (age, sex), socioeconomic (marital status, education level, annual household income), and behavioral (smoking, weight morbidity) variables.
Results: The AI/AN population consistently displayed higher adjusted prevalence of chronic respiratory disease compared to the non-Hispanic white population. However, the AI/AN race/ethnicity characteristic was not independently associated with chronic respiratory disease (OR, 0.93; 95% CI, 0.79-1.10 in 2017). In contrast, indicators of low socioeconomic status such as annual household income of <$10,000 (OR, 2.02; 95% CI, 1.64-2.49 in 2017) and having less than high school education (OR, 1.37; 95% CI, 1.16-1.63 in 2017) were positively associated with disease. These trends persisted for all years analyzed.
Conclusions: This study highlighted that AI/AN socioeconomic burdens are key determinants of chronic respiratory disease, in addition to well-established risk factors such as smoking and weight morbidity. Disease disparity experienced by the AI/AN population is therefore likely a symptom of disproportionate socioeconomic challenges they face. Further promotion of public health and social service efforts may be able to improve AI/AN health and decrease this disease disparity.
abstract_id: PUBMED:36205849
Cross-sectional Associations of Multiracial Identity with Self-Reported Asthma and Poor Health Among American Indian and Alaska Native Adults. Introduction: American Indian and Alaska Native (AI/AN) multiracial subgroups are underrecognized in health outcomes research.
Methods: We performed a cross-sectional analysis of Behavioral Risk Factor Surveillance System surveys (2013-2019), including adults who self-identified as AI/AN only (single race AI/AN, n = 60,413) or as AI/AN and at least one other race (multiracial AI/AN, n = 6056). We used log binomial regression to estimate the survey-weighted prevalence ratios (PR) and 95% confidence intervals (CI) of lifetime asthma, current asthma, and poor self-reported health among multiracial AI/AN adults compared to single race AI/AN adults, adjusting for age, obesity, and smoking status. We then examined whether associations differed by sex and by Latinx identity.
Results: Lifetime asthma, current asthma, and poor health were reported by 25%, 18%, and 30% of multiracial AI/AN adults and 18%, 12%, and 28% single race AI/AN adults. Multiracial AI/AN was associated with a higher prevalence of lifetime (PR 1.30, 95% CI 1.18-1.43) and current asthma (PR 1.36, 95% CI 1.21-1.54), but not poor health. Associations did not differ by sex. The association of multiracial identity with current asthma was stronger among AI/AN adults who identified as Latinx (PR 1.77, 95% CI 1.08-2.94) than non-Latinx AI/AN (PR 1.18, 95% CI 1.04-1.33), p-value for interaction 0.03.
Conclusions: Multiracial AI/AN adults experience a higher prevalence of lifetime and current asthma compared to single race AI/AN adults. The association between multiracial identity and current asthma is stronger among AI/AN Latinx individuals. The mechanisms for these findings remain under-explored and merit further study.
abstract_id: PUBMED:10476995
Asthma prevalence among American Indian and Alaska Native children. Objectives: Although asthma is the most common chronic childhood illness in the United States, little is known about its prevalence among American Indian and Alaska Native (AI/AN) children. The authors used the latest available household survey data to estimate the prevalence of asthma in this population.
Methods: The authors analyzed data for children ages 1 through 17 years from the 1987 Survey of American Indians and Alaska Natives (SAIAN) and the 1987 National Medical Expenditure Survey (NMES). At least one member of each AI/AN household included in the SAIAN was eligible for services through the Indian Health Service.
Results: The weighted prevalence of parent-reported asthma was 7.06% among 2288 AI/AN children ages 1-17 (95% CI 5.08, 9.04), compared with a US estimate of 8.40% for children ages 1-17 based on the 1987 NMES (95% CI 7.65, 9.15). The AI/AN sample was too small to yield stable estimates for a comparison between AI/AN children and all US children when the data were stratified according to household income and metropolitan vs non-metropolitan residence. The unadjusted asthma prevalence rates were similar for AI/AN children and for children in the NMES sample.
Conclusions: In 1987, the prevalence of parent-reported asthma was similar for AI/AN children in the SAIAN sample and for children in the NMES sample. More recent data are needed to better understand the current prevalence of asthma among AI/AN children.
abstract_id: PUBMED:18850367
A comparison of respiratory conditions between multiple race adults and their single race counterparts: an analysis based on American Indian/Alaska Native and white adults. Context: Multiple race data collection/reporting are relatively new among United States federal statistical systems. Not surprisingly, very little is known about the multiple race population in the USA. It is well known that some race and ethnic groups experience some respiratory diseases (e.g., asthma) disproportionately. However, not much is known about the experience of multiple race adults. If differences exist in how single/multiple race adults experience respiratory conditions, this information could be useful in public health education.
Objective: To explore differences in respiratory conditions between single race white adults, single race American Indian/Alaska Native (AIAN) adults, and adults who are both white and AIAN (largest multiple race group of adults in the USA).
Methods: Data from the National Health Interview Survey (NHIS), conducted by the Centers for Disease Control and Prevention's National Center for Health Statistics, were analyzed. Hispanic and black populations are oversampled. Multiple logistic regressions were performed to predict if the occurrence of each respiratory condition analyzed differed by single/multiple race reporting.
Sample: A nationally representative sample of 127,596 civilian non-institutionalized adults (≥ 18 years of age) from the 2000-2003 NHIS.
Outcome Measure: Adults told by a doctor or other health professional that they had asthma, hay fever, sinusitis, and/or chronic obstructive pulmonary disease.
Results: Adults who are both AIAN and white generally had higher rates of respiratory conditions than did their single race counterparts. These differences persisted even after controlling for socio-demographic and health care access measures.
Conclusions: This paper presents some of the first research of how the health of some multiple race adults differs from their single race counterparts. Contrary to some previous expectations for these estimates, respiratory condition estimates for adults who are both AIAN and white do not appear to be located between those of the component single race groups.
abstract_id: PUBMED:18595967
Asthma prevalence among US children in underrepresented minority populations: American Indian/Alaska Native, Chinese, Filipino, and Asian Indian. Objectives: The purpose of this work was to estimate asthma prevalence among US children in racial minority subgroups who have been historically underrepresented in the pediatric asthma literature. These subgroups include American Indian/Alaska Native, Chinese, Filipino, and Asian Indian children. We also explored the association between these race categories and asthma after adjusting for demographic and sociodemographic characteristics and explored the effect of place of birth as it relates to current asthma.
Patients And Methods: Data on all 51944 children aged 2 to 17 years from the 2001-2005 National Health Interview Survey were aggregated and analyzed to estimate the prevalence of current asthma, lifetime asthma, and asthma attacks according to race and place of birth. Logistic regression was used to determine adjusted odds ratios for current asthma according to race and place of birth while controlling for other demographic and sociodemographic variables.
Results: National estimates of current asthma prevalence among the children in the selected minority subgroups ranged from 4.4% in Asian Indian children to 13.0% in American Indian/Alaska Native children. Overall, children born in the United States had greater adjusted odds of reporting current asthma than did children born outside of the United States.
Conclusions: Smaller racial and ethnic minority groups are often excluded from asthma studies. This study reveals that, among children from different Asian American subgroups, wide variation may occur in asthma prevalence. We also found that children born in the United States were more likely than children born outside of the United States to have current asthma.
abstract_id: PUBMED:17400687
Asthma in American Indian adults: the Strong Heart Study. Background: Despite growing recognition that asthma is an important cause of morbidity among American Indians, there has been no systematic study of this disease in older adults who are likely to be at high risk of complications related to asthma. Characterization of the impact of asthma among American Indian adults is necessary in order to design appropriate clinical and preventive measures.
Methods: A sample of participants in the third examination of the Strong Heart Study, a multicenter, population-based, prospective study of cardiovascular disease in American Indians, completed a standardized respiratory questionnaire, performed spirometry, and underwent allergen skin testing. Participants were ≥ 50 years old.
Results: Of 3,197 participants in the third examination, 6.3% had physician-diagnosed asthma and 4.3% had probable asthma. Women had a higher prevalence of physician-diagnosed asthma than men (8.2% vs 3.2%). Of the 435 participants reported in the asthma substudy, morbidity related to asthma was high among those with physician-diagnosed asthma: 97% reported trouble breathing and 52% had severe persistent disease. The mean FEV1 in those with physician-diagnosed asthma was 61.3% of predicted, and 67.2% reported a history of emergency department visits and/or hospitalizations in the last year, yet only 3% were receiving regular inhaled corticosteroids.
Conclusions: The prevalence of asthma among older American Indians residing in three separate geographic areas of the United States was similar to rates in other ethnic groups. Asthma was associated with low lung function, significant morbidity and health-care utilization, yet medications for pulmonary disease were underutilized by this population.
abstract_id: PUBMED:20690798
Advantages of video questionnaire in estimating asthma prevalence and risk factors for school children: findings from an asthma survey in American Indian youth. Objectives: The aims of the present study were to estimate the prevalence and risk factors of asthma among a sample of American Indian youth and to evaluate survey instruments used in determining asthma prevalence and risk factors.
Methods: Three hundred and fifty-two adolescents aged 9 to 21 years enrolled in an Indian boarding school completed an asthma screening. The survey instruments were a written questionnaire and a video-illustrated questionnaire prepared from the International Study of Asthma and Allergies in Childhood (ISAAC), school health records, and a health questionnaire. Participants also underwent spirometry testing.
Results: The prevalence of self-reported asthma varied from 12.7% to 13.4% depending upon the instrument used and the questions asked. A history of hay fever, respiratory infections, and family history of asthma were found to be risk factors for asthma by all instruments. Female gender and living on a reservation were significantly associated with asthma by some, but not all, instruments. Airway obstruction was highly associated with one asthma symptom (wheeze) shown in the video questionnaire. Associations for most risk factors with asthma were strongest for the video questionnaire.
Conclusions: The prevalence of self-reported asthma among these American Indian youth was similar to rates reported for other ethnic groups. The video-based questionnaire may be the most sensitive tool for identifying individuals at risk for asthma.
abstract_id: PUBMED:23248805
Prevalence and risk factors of asthma in off-reserve Aboriginal children and adults in Canada. Only a few studies have investigated asthma morbidity in Canadian Aboriginal children. In the present study, data from the 2006 Aboriginal Peoples Survey were used to determine the prevalence and risk factors for asthma in Canadian Aboriginal children six to 14 years of age and adults 15 to 64 years of age living off reserve. The prevalence of asthma was 14.3% in children and 14.0% in adults. Children and adults with Inuit ancestry had a significantly lower prevalence of asthma than those with North American Indian and Métis ancestries. Factors significantly associated with ever asthma in children included male sex, allergy, low birth weight, obesity, poor dwelling conditions and urban residence. In adults, factors associated with ever asthma varied among Aboriginal groups; however, age group, sex and urban residence were associated with ever asthma in all four Aboriginal groups. The prevalence of asthma was lower in Aboriginal children and higher in Aboriginal adults compared with that reported for the Canadian population. Variation in the prevalence of and risk factors for asthma among Aboriginal ancestry groups may be related to genetic and environmental factors that require further investigation.
abstract_id: PUBMED:31120804
Pediatric Asthma Among Small Racial/Ethnic Minority Groups: An Analysis of the 2006-2015 National Health Interview Survey. Few studies have examined the asthma burden among small racial/ethnic minority groups such as Asian children. We examined asthma disparities among children aged 4-17 in 6 small non-Hispanic racial/ethnic minority groups (American Indian/Alaska Native [AI/AN], Asian Indian, Chinese, Filipino, other Asian, and multiple race) by using the 2006-2015 National Health Interview Survey. These small minority groups represented a weighted 6.1% of the study population (6770 of 88 049). The prevalence of current asthma ranged from 5.5% (95% confidence interval [CI], 3.5%-7.5%) among Chinese children to 13.8% (95% CI, 10.4%-17.2%) among multiple-race children and 14.6% (95% CI, 10.8%-18.4%) among AI/AN children. Compared with non-Hispanic white children, AI/AN (adjusted odds ratio [aOR] = 1.6; 95% CI, 1.2-2.2) and multiple-race (aOR = 1.4; 95% CI, 1.0-2.0) children had higher odds for current asthma. Several small racial/ethnic minority groups are at heightened risk of asthma-associated outcomes, highlighting the need for further research on these populations.
Answer: The available abstracts present mixed findings regarding the disparity in the prevalence of asthma between American Indian and white adults.
One study using Behavioral Risk Factor Surveillance System (BRFSS) data from Montana from 2001 to 2006 found that American Indian race was not independently associated with increased asthma prevalence when compared to whites (OR 1.05, 95% CI 0.83-1.33). Instead, factors such as obesity, lower household income, and lower educational attainment, which disproportionately affect American Indians in Montana, were independently associated with increased asthma prevalence (PUBMED:18773326).
Another study analyzing BRFSS data from 2011-2018 across an 11-state region with a high representation of American Indian/Alaska Native (AI/AN) individuals found that while the AI/AN population consistently displayed higher adjusted prevalence of chronic respiratory disease compared to the non-Hispanic white population, AI/AN racial status was not independently associated with chronic respiratory disease (OR, 0.93; 95% CI, 0.79-1.10 in 2017). Indicators of low socioeconomic status were positively associated with disease (PUBMED:34320979).
A cross-sectional analysis of BRFSS surveys from 2013-2019 indicated that multiracial AI/AN adults experience a higher prevalence of lifetime and current asthma compared to single race AI/AN adults. The association between multiracial identity and current asthma was stronger among AI/AN adults who identified as Latinx (PUBMED:36205849).
Earlier data from the 1987 Survey of American Indians and Alaska Natives (SAIAN) and the 1987 National Medical Expenditure Survey (NMES) suggested that the prevalence of parent-reported asthma was similar for AI/AN children and for children in the NMES sample (PUBMED:10476995).
A study of the Strong Heart Study participants, who were American Indian adults aged 50 years or older, reported a prevalence of physician-diagnosed asthma at 6.3% and probable asthma at 4.3%, with significant morbidity and health-care utilization, yet underutilization of medications for pulmonary disease (PUBMED:17400687). |
Instruction: Can patients with left main coronary artery disease wait for myocardial revascularization surgery?
Abstracts:
abstract_id: PUBMED:12640512
Can patients with left main coronary artery disease wait for myocardial revascularization surgery? Objective: To assess the occurrence of cardiac events in patients diagnosed with left main coronary artery disease on diagnostic cardiac catheterization and waiting for myocardial revascularization surgery.
Methods: All patients diagnosed with left main coronary artery disease (stenosis ≥ 50%) consecutively identified on diagnostic cardiac catheterization during an 8-month period were selected for the study. The group comprised 56 patients (40 males and 16 females) with a mean age of 61 ± 10 years. The cardiac events included death, nonfatal acute myocardial infarction, acute left ventricular failure, unstable angina, and emergency surgery.
Results: While waiting for surgery, patients experienced the following cardiac events: 7 acute myocardial infarctions and 1 death. All events occurred within the first 60 days after the diagnostic cardiac catheterization. More patients, whose indication for diagnostic cardiac catheterization was unstable angina, experienced events as compared with those with other indications [p=0.03, relative risk (RR) = 5.25, 95% confidence interval = 1.47 - 18.7]. In the multivariate analysis of logistic regression, unstable angina was also the only factor that independently contributed to a greater number of events (p = 0.02, OR = 8.43, 95% CI =1.37 - 51.7).
Conclusion: Unstable angina in patients with left main coronary artery disease acts as a high risk factor for cardiac events, emergency surgery being recommended in these cases.
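The relative risk of 5.25 (95% CI 1.47-18.7) for events among patients whose indication was unstable angina is the kind of quantity that follows from a 2×2 table of exposure by outcome. The sketch below shows the standard arithmetic, including a Wald confidence interval on the log scale; the counts are hypothetical and chosen only to yield a similar order of magnitude, not taken from the study.

```python
# Relative risk with a 95% CI from a hypothetical 2x2 table (illustration only).
import math

a, b = 6, 14   # events / no events among patients with unstable angina (hypothetical)
c, d = 2, 34   # events / no events among patients with other indications (hypothetical)

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
rr = risk_exposed / risk_unexposed

# 95% CI on the log scale (standard Wald approximation)
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
low = math.exp(math.log(rr) - 1.96 * se_log_rr)
high = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI {low:.2f}-{high:.2f}")
```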
abstract_id: PUBMED:36937609
Myocardial Surgical Revascularization in Patients with Reduced Left Ventricular Ejection Fraction. Background: Myocardial surgical revascularization in patients with low left ventricular ejection fraction (LVEF) is accompanied by a high rate of morbidity and mortality.
Objective: The aim of this study was to investigate and eliminate the reasons for the most common perioperative and postoperative complications.
Methods: A total of 64 patients who underwent coronary artery bypass grafting (CABG) during 2019 were analyzed, with an average age of 61.29 ± 9.12 years.
Results: Out of the total number of operated patients, there were 16 women and 48 men. Patients were divided into two groups. The first group consisted of patients who underwent surgery with the use of cardiopulmonary bypass (cCABG-CPB) and the second group of those who underwent surgery without the use of cardiopulmonary bypass (OPCAB). In 41 patients, myocardial infarction was previously recorded. Critical stenosis of the main trunk of the left coronary artery was present in 14 patients. The incidence of postoperative complications was higher in the cCABG-CPB group (16 vs 10, p = 0.030).
Conclusion: In our study, we confirmed that myocardial revascularization is justified, especially in the case of multivessel coronary disease. In the long term, it significantly improves the systolic function of the left ventricle, and thus the quality and length of life.
abstract_id: PUBMED:36873760
Impact of Complete or Incomplete Revascularization for Left Main Coronary Disease: The Extended PRECOMBAT Study. Background: Whether complete revascularization (CR) or incomplete revascularization (IR) may affect long-term outcomes after PCI) and coronary artery bypass grafting (CABG) for left main coronary artery (LMCA) disease is unclear.
Objectives: The authors sought to assess the impact of CR or IR on 10-year outcomes after PCI or CABG for LMCA disease.
Methods: In the PRECOMBAT (Premier of Randomized Comparison of Bypass Surgery versus Angioplasty Using Sirolimus-Eluting Stent in Patients with Left Main Coronary Artery Disease) 10-year extended study, the authors evaluated the effect of PCI and CABG on long-term outcomes according to completeness of revascularization. The primary outcome was the incidence of major adverse cardiac or cerebrovascular events (MACCE) (composite of mortality from any cause, myocardial infarction, stroke, or ischemia-driven target vessel revascularization).
Results: Among 600 randomized patients (PCI, n = 300 and CABG, n = 300), 416 patients (69.3%) had CR and 184 (30.7%) had IR; 68.3% of PCI patients and 70.3% of CABG patients underwent CR, respectively. The 10-year MACCE rates were not significantly different between PCI and CABG among patients with CR (27.8% vs 25.1%, respectively; adjusted HR: 1.19; 95% CI: 0.81-1.73) and among those with IR (31.6% vs 21.3%, respectively; adjusted HR: 1.64; 95% CI: 0.92-2.92) (P for interaction = 0.35). There was also no significant interaction between the status of CR and the relative effect of PCI and CABG on all-cause mortality, serious composite of death, myocardial infarction, or stroke, and repeat revascularization.
Conclusions: In this 10-year follow-up of PRECOMBAT, the authors found no significant difference between PCI and CABG in the rates of MACCE and all-cause mortality according to CR or IR status. (Ten-Year Outcomes of PRE-COMBAT Trial [PRECOMBAT], NCT03871127; PREmier of Randomized COMparison of Bypass Surgery Versus AngioplasTy Using Sirolimus-Eluting Stent in Patients With Left Main Coronary Artery Disease [PRECOMBAT], NCT00422968).
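The adjusted hazard ratios quoted above (e.g., HR 1.19, 95% CI 0.81-1.73 for MACCE with complete revascularization) come from time-to-event modelling of 10-year follow-up. As a hedged sketch, the snippet below shows how a Cox proportional hazards model could be fitted with the lifelines package on simulated follow-up data; the column names, data, and covariate set are invented and far simpler than the trial's actual analysis.

```python
# Illustrative Cox proportional hazards fit (hypothetical data, not trial data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "time_years": rng.exponential(8.0, n).clip(max=10.0),  # follow-up capped at 10 years
    "event": rng.integers(0, 2, n),                        # 1 = MACCE occurred (simulated)
    "pci": rng.integers(0, 2, n),                          # 1 = PCI, 0 = CABG (simulated)
    "complete_revasc": rng.integers(0, 2, n),              # 1 = complete revascularization
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="event")
cph.print_summary()   # hazard ratios (exp(coef)) with 95% confidence intervals
```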
abstract_id: PUBMED:37632766
2022 Joint ESC/EACTS review of the 2018 guideline recommendations on the revascularization of left main coronary artery disease in patients at low surgical risk and anatomy suitable for PCI or CABG. [Graphical abstract: Task Force structure and summary of clinical evidence of the 2022 ESC/EACTS review; CABG, coronary artery bypass grafting; PCI, percutaneous coronary intervention; LM, left main; SYNTAX, Synergy Between Percutaneous Coronary Intervention with TAXUS and Cardiac Surgery; 'event' refers to the composite of death, myocardial infarction (according to Universal Definition of Myocardial Infarction if available, otherwise protocol defined) or stroke.] In October 2021, the European Society of Cardiology (ESC) and the European Association for Cardio-Thoracic Surgery (EACTS) jointly agreed to establish a Task Force (TF) to review recommendations of the 2018 ESC/EACTS Guidelines on myocardial revascularization as they apply to patients with left main (LM) disease with low-to-intermediate SYNTAX score (0-32). This followed the withdrawal of support by the EACTS in 2019 for the recommendations about the management of LM disease of the previous guideline. The TF was asked to review all new relevant data since the 2018 guidelines including updated aggregated data from the four randomized trials comparing percutaneous coronary intervention (PCI) with drug-eluting stents vs. coronary artery bypass grafting (CABG) in patients with LM disease. This document represents a summary of the work of the TF; suggested updated recommendations for the choice of revascularization modality in patients undergoing myocardial revascularization for LM disease are included. In stable patients with an indication for revascularization for LM disease, with coronary anatomy suitable for both procedures and a low predicted surgical mortality, the TF concludes that both treatment options are clinically reasonable based on patient preference, available expertise, and local operator volumes. The suggested recommendations for revascularization with CABG are Class I, Level of Evidence A. The recommendations for PCI are Class IIa, Level of Evidence A. The TF recognized several important gaps in knowledge related to revascularization in patients with LM disease and recognizes that aggregated data from the four randomized trials were still only large enough to exclude large differences in mortality.
abstract_id: PUBMED:37632756
2022 Joint ESC/EACTS review of the 2018 guideline recommendations on the revascularization of left main coronary artery disease in patients at low surgical risk and anatomy suitable for PCI or CABG. In October 2021, the European Society of Cardiology (ESC) and the European Association for Cardio-Thoracic Surgery (EACTS) jointly agreed to establish a Task Force (TF) to review recommendations of the 2018 ESC/EACTS Guidelines on myocardial revascularization as they apply to patients with left main (LM) disease with low-to-intermediate SYNTAX score (0-32). This followed the withdrawal of support by the EACTS in 2019 for the recommendations about the management of LM disease of the previous guideline. The TF was asked to review all new relevant data since the 2018 guidelines including updated aggregated data from the four randomized trials comparing percutaneous coronary intervention (PCI) with drug-eluting stents vs. coronary artery bypass grafting (CABG) in patients with LM disease. This document represents a summary of the work of the TF; suggested updated recommendations for the choice of revascularization modality in patients undergoing myocardial revascularization for LM disease are included. In stable patients with an indication for revascularization for LM disease, with coronary anatomy suitable for both procedures and a low predicted surgical mortality, the TF concludes that both treatment options are clinically reasonable based on patient preference, available expertise, and local operator volumes. The suggested recommendations for revascularization with CABG are Class I, Level of Evidence A. The recommendations for PCI are Class IIa, Level of Evidence A. The TF recognized several important gaps in knowledge related to revascularization in patients with LM disease and recognizes that aggregated data from the four randomized trials were still only large enough to exclude large differences in mortality.
abstract_id: PUBMED:23549496
Myocardial revascularization in patients with left main coronary disease. While coronary artery bypass grafting (CABG) has been the standard of care for patients with unprotected left main coronary artery disease, advances in percutaneous coronary intervention (PCI) have made stent placement a reasonable alternative in selected patients. In this review, we address the results of studies comparing PCI with CABG, discuss the invasive evaluation of these patients, and the technical approach to percutaneous revascularization. Furthermore, we discuss future pivotal trials, which will help define long-term outcomes comparing PCI with surgery.
abstract_id: PUBMED:8879950
Myocardial viability in patients with coronary artery disease and left ventricular dysfunction: transplantation or revascularization? Coronary artery bypass surgery performed in patients with coronary artery disease and left ventricular dysfunction improves survival compared with antianginal therapy alone. The mechanisms for this survival advantage with revascularization therapy have not been systematically elucidated. Many of these patients have "hibernating" myocardium secondary to chronic ischemia with the potential for substantial improvement in left ventricular function and heart failure symptoms following revascularization therapy. Nevertheless, as survival with cardiac transplantation continues to improve, a significantly larger number of patients with coronary artery disease and left ventricular dysfunction are being referred for cardiac transplantation in lieu of revascularization surgery. Recently developed imaging modalities, which include positron emission tomography, thallium imaging, and dobutamine echocardiography, can reliably predict recovery of regional myocardial dysfunction after revascularization in these areas of hibernating heart. New modalities to detect hibernating myocardium include 99mTc-sestamibi, contrast echocardiography, nuclear magnetic resonance spectroscopic imaging, and ultrasonic tissue characterization. In an era of medicine characterized by increased concern for cost containment and the judicious application of expensive technology, the choice of the most appropriate tests to detect viability is a growing challenge and is essential in the choice between transplantation and revascularization.
abstract_id: PUBMED:17915150
Predictors of improved left ventricular systolic function after surgical revascularization in patients with ischemic cardiomyopathy Introduction And Objectives: Although it is known that the presence of myocardial viability predicts an increase in ejection fraction after revascularization in patients with ischemic cardiomyopathy, little is known about other predictive factors. The aim of this study was to identify variables that can predict an increase in ejection fraction after coronary revascularization surgery in patients with ischemic cardiomyopathy and a viable myocardium.
Methods: The study included 30 patients (mean age 61.6 [11] years, one female) with ischemic cardiomyopathy (ejection fraction ≤40%) who fulfilled criteria for myocardial viability. All underwent ECG-gated single-photon emission computed tomography before and after surgery.
Results: An increase in ejection fraction ≥5% occurred after surgery in 17 of the 30 patients (56.6%). These patients were characterized by the presence of left main coronary artery disease (P<.004), a large number of grafts (P<.03), a high perfusion summed difference score (P<.012), a low end-diastolic volume (P<.013), and a low end-systolic volume (P<.01). An end-systolic volume <148 mL and a summed difference score ≥4 gave the best predictive model (P=.001, R²=0.73) for an increase in ejection fraction.
Conclusions: In patients with ischemic cardiomyopathy and a viable myocardium, the main determinants of an increase in ejection fraction after revascularization surgery were low levels of left ventricular remodeling and myocardial ischemia.
abstract_id: PUBMED:30092952
Left Main Revascularization With PCI or CABG in Patients With Chronic Kidney Disease: EXCEL Trial. Background: The optimal revascularization strategy for patients with left main coronary artery disease (LMCAD) and chronic kidney disease (CKD) remains unclear.
Objectives: This study investigated the comparative effectiveness of percutaneous coronary intervention (PCI) versus coronary artery bypass graft (CABG) surgery in patients with LMCAD and low or intermediate anatomical complexity according to baseline renal function from the multicenter randomized EXCEL (Evaluation of XIENCE Versus Coronary Artery Bypass Surgery for Effectiveness of Left Main Revascularization) trial.
Methods: CKD was defined as an estimated glomerular filtration rate <60 ml/min/1.73 m2 using the CKD Epidemiology Collaboration equation. Acute renal failure (ARF) was defined as a serum creatinine increase ≥5.0 mg/dl from baseline or a new requirement for dialysis. The primary composite endpoint was the composite of death, myocardial infarction (MI), or stroke at 3-year follow-up.
Results: CKD was present in 361 of 1,869 randomized patients (19.3%) in whom baseline estimated glomerular filtration rate was available. Patients with CKD had higher 3-year rates of the primary endpoint compared with those without CKD (20.8% vs. 13.5%; hazard ratio [HR]: 1.60; 95% confidence interval [CI]: 1.22 to 2.09; p = 0.0005). ARF within 30 days occurred more commonly in patients with compared with those without CKD (5.0% vs. 0.8%; p < 0.0001), and was strongly associated with the 3-year risk of death, stroke, or MI (50.7% vs. 14.4%; HR: 4.59; 95% CI: 2.73 to 7.73; p < 0.0001). ARF occurred less commonly after revascularization with PCI compared with CABG both in patients with CKD (2.3% vs. 7.7%; HR: 0.28; 95% CI: 0.09 to 0.87) and in those without CKD (0.3% vs. 1.3%; HR: 0.20; 95% CI: 0.04 to 0.90; pinteraction = 0.71). There were no significant differences in the rates of the primary composite endpoint after PCI and CABG in patients with CKD (23.4% vs. 18.1%; HR: 1.25; 95% CI: 0.79 to 1.98) and without CKD (13.4% vs. 13.5%; HR: 0.97; 95% CI: 0.73 to 1.27; pinteraction = 0.38).
Conclusions: Patients with CKD undergoing revascularization for LMCAD in the EXCEL trial had increased rates of ARF and reduced event-free survival. ARF occurred less frequently after PCI compared with CABG. There were no significant differences between PCI and CABG in terms of death, stroke, or MI at 3 years in patients with and without CKD. (EXCEL Clinical Trial [EXCEL]; NCT01205776).
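For reference, the CKD Epidemiology Collaboration (CKD-EPI) creatinine equation cited in the Methods above is, in its widely used 2009 form (reproduced here as background; whether the trial applied this exact variant is an assumption):

\[ \mathrm{eGFR} = 141 \times \min\!\left(\tfrac{S_{cr}}{\kappa},\,1\right)^{\alpha} \times \max\!\left(\tfrac{S_{cr}}{\kappa},\,1\right)^{-1.209} \times 0.993^{\mathrm{Age}} \times 1.018\;[\text{if female}] \times 1.159\;[\text{if Black}] \]

where S_cr is serum creatinine in mg/dl, κ is 0.7 for women and 0.9 for men, α is -0.329 for women and -0.411 for men, and eGFR is expressed in ml/min/1.73 m²; under the study definition, CKD corresponds to an eGFR below 60 ml/min/1.73 m² by this equation.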
abstract_id: PUBMED:21646871
Revascularization of left main coronary artery disease. Coronary artery disease with left main stenosis is associated with the highest mortality of any coronary lesion. Studies in the 1970s and 1980s comparing coronary artery bypass grafting (CABG) and medical therapy showed a significant survival benefit with revascularization. In the angioplasty era, initial experience with percutaneous intervention was associated with poor clinical outcomes. As a result, percutaneous coronary intervention (PCI) was restricted to patients who were considered inoperable, or those with prior CABG with a functional graft to the left anterior descending or circumflex artery ("protected left main disease"). With the introduction of drug-eluting stents, there are new studies demonstrating comparable survival in patients who were revascularized using PCI and CABG, although percutaneous revascularization is associated with a higher rate of repeat revascularization. In the SYNTAX (Synergy between PCI with Taxus and Cardiac Surgery) trial, the combined incidence of death, myocardial infarction, and stroke was similar between the CABG and PCI groups; however, the stroke rate was higher in the CABG group. The degree and extent of disease as defined by the SYNTAX scoring system has allowed for stratification of risk and improved assignment of patients with left main stenosis to either PCI or CABG.
Answer: Patients with left main coronary artery disease (LMCAD) are at high risk for cardiac events, and the timing of myocardial revascularization surgery is critical. A study assessing the occurrence of cardiac events in patients with LMCAD waiting for myocardial revascularization surgery found that all cardiac events, including 7 acute myocardial infarctions and 1 death, occurred within the first 60 days after diagnostic cardiac catheterization. Patients with unstable angina were at particularly high risk, with a significantly higher number of events compared to those with other indications for diagnostic cardiac catheterization (PUBMED:12640512). This suggests that patients with unstable angina and LMCAD should be considered for emergency surgery rather than waiting for scheduled revascularization.
Furthermore, myocardial surgical revascularization in patients with reduced left ventricular ejection fraction, which can be associated with LMCAD, is known to be accompanied by high rates of morbidity and mortality. However, revascularization is justified, especially in the case of multivessel coronary disease, as it can significantly improve the systolic function of the left ventricle and thus the quality and length of life (PUBMED:36937609).
In summary, while myocardial revascularization surgery is beneficial for patients with LMCAD, those with unstable angina should not wait for surgery due to the high risk of adverse cardiac events. The decision to proceed with surgery should be made promptly to minimize the risk of complications and improve long-term outcomes. |
Instruction: Carotid siphon morphology: Is it associated with posterior communicating aneurysms?
Abstracts:
abstract_id: PUBMED:27012777
Carotid siphon morphology: Is it associated with posterior communicating aneurysms? Background And Purpose: Posterior communicating artery (PComA) aneurysm seems to behave uniquely compared with other intracranial aneurysms at different locations. The association between the morphology of the carotid siphon and PComA aneurysms is not well known. This study aimed to investigate whether the anatomical characteristics of the carotid siphon are associated with the formation and rupture of PComA aneurysms.
Methods: One hundred and thirty-two patients were retrospectively reviewed in a monocentric case-control study. Sixty-seven consecutive patients with PComA aneurysms were included in the case group, and 65 patients with anterior circulation aneurysm situated in other intracranial locations were included in the control group, matched by age and sex. Morphological characteristics of the carotid siphon were analyzed using angiography images. A univariate analysis was used to investigate the association between the morphological characteristics and the formation of PComA aneurysms. Furthermore, a subgroup analysis within the case group compared ruptured and non-ruptured PComA aneurysms.
Results: Patients with PComA aneurysm had a significantly (1.31 ± 0.70 vs. 0.82 ± 0.46; P < 0.001) larger PComA. No association was observed between the morphological characteristics of the carotid siphon and the presence of a PComA aneurysm. Likewise, subgroup analysis showed no significant association between morphological characteristics of the carotid siphon and aneurysm rupture.
Conclusions: This case-control study shows that the carotid siphon morphology seems not to be related to PComA aneurysm formation or rupture.
abstract_id: PUBMED:27387710
Flow changes in the posterior communicating artery related to flow-diverter stents in carotid siphon aneurysms. Background: Flow-diverter stent (FDS) placement for treatment of intracranial aneurysms can cause flow changes in the covered branches.
Objective: To assess the impact of the treatment of carotid siphon aneurysms with FDS on the posterior communicating artery (PComA) flow.
Materials And Methods: Between February 2011 and January 2015, 125 carotid siphon aneurysms were treated with FDS. We retrospectively analyzed all cases with PComA ostial coverage. The circle of Willis anatomy was also studied as the flow changes in PComA postoperatively and during angiographic follow-up. Data from neurological examination were also collected.
Results: Eighteen aneurysms of the carotid siphon in 17 patients were treated with FDS covering the ostium of the PComA. Based on the initial angiography, patients were divided into two groups: the first with a P1/PComA size ratio >1 (10 cases) and the second with a ratio ≤1 (8 cases). Follow-up angiography (mean time of 10 months) showed 90% of PComA flow changes in group 1 but only 12.5% in group 2. There was a significant difference between the two groups (p=0.002). Nevertheless, no patient had new symptoms related to these flow changes during the follow-up period.
Conclusions: In our experience, covering the PComA by FDS when treating carotid siphon aneurysms appeared safe and the P1/PComA ratio is a good predictor of flow changes in PComA.
abstract_id: PUBMED:35911913
Associations Between Posterior Communicating Artery Aneurysms and Morphological Characteristics of Surrounding Arteries. Objectives: To explore the associations between posterior communicating artery (PComA) aneurysms and morphological characteristics of arteries upstream of and around the PComA bifurcation site.
Methods: In this study, fifty-seven patients with PComA aneurysms and sixty-two control subjects without aneurysms were enrolled. The centerlines of the internal carotid artery (ICA) and important branches were generated for the measurement and analysis of morphological parameters, such as carotid siphon types, diameters of two fitting circles, and the angle formed by them (D1, D2, and ϕ), length (L) and tortuosity (TL) of ICA segment between an ophthalmic artery and PComA bifurcations, bifurcation angle (θ), tortuosity (TICA and TPComA), and flow direction changes (θICA and θPComA) around the PComA bifurcation site.
Results: No significant difference (p > 0.05) was found in the siphon types (p = 0.467) or L (p = 0.114). Significant differences (p < 0.05) were detected in D1 (p = 0.036), TL (p < 0.001), D2 (p = 0.004), ϕ (p = 0.008), θ (p = 0.001), TICA (p < 0.001), TPComA (p = 0.012), θICA (p < 0.001), and θPComA (p < 0.001) between the two groups. TICA had the largest area under the curve (AUC) (0.843) in the receiver operating characteristic (ROC) analysis in diagnosing the probability of PComA aneurysms presence and was identified as the only potent morphological parameter (OR = 11.909) associated with PComA aneurysms presence.
Conclusions: The high tortuosity of the ICA segment around the PComA bifurcation is associated with PComA aneurysm presence.
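As an illustration of the ROC approach used in the Results above, a minimal Python sketch of how an AUC and a Youden-optimal cut-off for a parameter such as TICA could be computed is shown below. The tortuosity values and their distributions are synthetic placeholders, not study data; only the group sizes of 57 cases and 62 controls are taken from the abstract.

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    # Synthetic tortuosity values for 57 aneurysm cases and 62 controls (placeholder data).
    rng = np.random.default_rng(0)
    tica = np.concatenate([rng.normal(1.20, 0.15, 57),   # cases
                           rng.normal(1.05, 0.15, 62)])  # controls
    aneurysm = np.concatenate([np.ones(57), np.zeros(62)])

    auc = roc_auc_score(aneurysm, tica)                  # area under the ROC curve
    fpr, tpr, thr = roc_curve(aneurysm, tica)
    best_cutoff = thr[np.argmax(tpr - fpr)]              # Youden's J: sensitivity + specificity - 1
    print(f"AUC = {auc:.3f}, optimal TICA cut-off = {best_cutoff:.3f}")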
abstract_id: PUBMED:35620791
Increased Carotid Siphon Tortuosity Is a Risk Factor for Paraclinoid Aneurysms. Background: Geometrical factors associated with the surrounding vasculature can affect the risk of aneurysm formation. The aim of this study was to determine the association between carotid siphon curvature and the formation and development of paraclinoid aneurysms of the internal carotid artery.
Methods: Digital subtraction angiography (DSA) data from 42 patients with paraclinoid aneurysms (31 with non-aneurysmal contralateral sides) and 42 age- and gender-matched healthy controls were analyzed retrospectively. Morphological characteristics of the carotid siphon [the posterior angle (α), anterior angle (β), and clinoid-ophthalmic angle (γ)] were explored via three-dimensional rotational angiography (3D RA) multiplanar reconstruction. The association between carotid siphon morphology and the formation of paraclinoid aneurysms was assessed through univariate analysis. After this, logistic regression analysis was performed to identify independent risk factors for aneurysms.
Results: Significantly smaller α, β, and γ angles were reported in the aneurysmal carotid siphon group when compared with the non-aneurysmal contralateral healthy controls. The β angle was best for discriminating between aneurysmal and non-aneurysmal carotid siphons, with an optimal threshold of 18.25°. By adjusting for hypertension, smoking habit, hyperlipidemia, and diabetes mellitus, logistic regression analysis demonstrated an independent association between the carotid siphons angles α [odds ratio (OR) 0.953; P < 0.05], β (OR 0.690; P < 0.001), and γ (OR 0.958; P < 0.01) with the risk of paraclinoid aneurysms.
Conclusions: The present findings provide evidence for the importance of morphological carotid siphon variations and the likelihood of paraclinoid aneurysms. These practical morphological parameters specific to paraclinoid aneurysms are easy to assess and may aid in risk assessment in these patients.
abstract_id: PUBMED:35314412
Posterior Communicating Artery-incorporated Internal Carotid-Posterior Communicating Artery Aneurysms Prone to Recur After Coil Embolization. Objective: The objective was to clarify predisposing factors of recurrence after coil embolization for internal carotid-posterior communicating artery (IC-Pcom) aneurysms.
Methods: The medical records were retrospectively reviewed and patients harboring IC-Pcom aneurysms treated with coil embolization between June 2004 and June 2020 were identified. Aneurysms whose 3-dimensional images were available, whose initial treatment was performed during the study period, and whose follow-up term was more than 1 year were included. Information of the patients, the aneurysms and Pcoms, the initial treatment, and angiographic outcomes were collected. The IC-Pcom aneurysms were divided into Pcom-incorporated when their neck mainly rode on the Pcom or non-Pcom-incorporated when their neck mainly rode on the internal carotid artery or the classification was equivocal. Relationship between these factors and recurrence was analyzed.
Results: Fifty-seven IC-Pcom aneurysms from 55 patients were recruited. Fifteen of the 57 aneurysms were categorized into Pcom-incorporated. Eighteen of the 57 aneurysms recurred. Mean follow-up term was 74.3 months and mean duration between the initial treatment and recurrence was 47.9 months. On univariate analyses, ruptured (P = 0.004), fetal-type Pcom (P = 0.002), and Pcom-incorporated (P < 0.001) were significantly correlated with recurrence. Multivariate analysis demonstrated that Pcom-incorporated aneurysms were significantly associated with recurrence (P < 0.001) along with ruptured (P = 0.027). Kaplan-Meier estimate demonstrated that cumulative recurrence-free rate was significantly lower in Pcom-incorporated aneurysms compared with non-Pcom-incorporated aneurysms (log-rank P < 0.001).
Conclusions: Pcom-incorporated IC-Pcom aneurysms were susceptible to recur after coil embolization, especially when ruptured and the incorporated Pcom was fetal-type.
abstract_id: PUBMED:30210969
The Microsurgical Relationships between Internal Carotid-Posterior Communicating Artery Aneurysms and the Skull Base. Objective This study aimed to review the anatomical and clinical characteristics of internal carotid-posterior communicating artery (IC-PC) aneurysms, especially those located close to the skull base. Methods The microsurgical anatomy around the posterior communicating artery (PComA) was examined in a dry skull and five formalin-fixed human cadaveric heads. The clinical characteristics of 37 patients with 39 IC-PC aneurysms, who were treated microsurgically between April 2008 and July 2016, were retrospectively reviewed. Results The anterior clinoid process (ACP), as well as the anterior petroclinoidal dural fold (APF), which forms part of the oculomotor triangle, are closely related to the origin of the PComA. Among the 39 IC-PC aneurysms, anterior clinoidectomy was performed on 4 (10.3%) and a partial resection of the APF was performed on 2 (5.1%). Both of these aneurysms projected inferior to the tentorium, or at least part of the aneurysm's dome was inferior to the tentorium. Conclusion Proximally located IC-PC aneurysms have an especially close relationship with the ACP and APF. We should be familiar with the anatomical relationship between IC-PC aneurysms and the structures of the skull base to avoid hazardous complications.
abstract_id: PUBMED:34351498
λ stenting: a novel technique for posterior communicating artery aneurysms with fetal-type posterior communicating artery originating from the aneurysm dome. Purpose: Endovascular treatment of posterior communicating artery aneurysms with fetal-type posterior communicating artery originating from the aneurysm dome is often challenging because, with conventional techniques, dense packing of aneurysms for posterior communicating artery preservation is difficult; moreover, flow-diversion devices are reportedly less effective. Herein, we describe a novel method called the λ stenting technique that involves deploying stents into the internal carotid artery and posterior communicating artery.
Methods: Between January 2018 and September 2020, the λ stenting technique was performed to treat eight consecutive cases of aneurysms. All target aneurysms had a wide neck (dome/neck ratio < 2), a fetal-type posterior communicating artery with hypoplastic P1, and a posterior communicating artery originating from the aneurysm dome. The origin of the posterior communicating artery from the aneurysm, relative to the internal carotid artery, was steep (< 90°: V shape).
Results: The maximum aneurysm size was 8.0 ± 1.9 mm (6-12 mm). The average packing density (excluding one regrowth case) was 32.7 ± 4.2% (26.8-39.1%). Initial occlusion was complete occlusion in 6 (75.0%) patients and neck remnants in 2 (25.0%) patients. Follow-up angiography was performed at 18.4 ± 11.6 months (3-38 months). There were no perioperative complications or reinterventions required during the study period.
Conclusion: The λ stenting technique enabled dense coil packing and preservation of the posterior communicating artery. This technique enabled safe and stable coil embolization. Thus, it could become an alternative treatment option for this sub-type of intracranial aneurysms.
abstract_id: PUBMED:27731782
Association between anatomical variations of the posterior communicating artery and the presence of aneurysms. Objectives: Posterior communicating artery aneurysms (PcoAA) account for 30-35% of intracranial aneurysms. The anatomical factors involved in the formation of PCoAA are poorly known. The study aimed to investigate the anatomical variations in the posterior communicating artery (PcoAs) and the presence of PCoAA.
Methods: All 154 patients hospitalized from January 2008 to December 2013 at the department of neurology of our hospital were included in this study; 76 were confirmed with PCoAA upon cerebral angiography and 78 were confirmed without cranial artery aneurysm (controls). According to the blood supply pattern, variations of the PCoA were classified as Type P0, P-I, or P-II. The angles of C7 and C6 of the internal carotid artery on each side were analyzed.
Results: Compared with controls, patients with PCoAA had a higher frequency of abnormal posterior communicating artery (Types P-I and P-II) (p < 0.001). The angles of C7 and C6 on the contralateral side in the PCoAA group were significantly greater than on the affected side, and significantly lesser than in controls (p < 0.001). There was no difference in the angle between the culprit artery and the contralateral one.
Discussion: Abnormal PCoAs (Types P-I and P-II) might be more vulnerable to PCoAA development, and Type P-II was the most vulnerable. There was a correlation between the angles of C7 and C6 part of the internal carotid artery and the presence of symptomatic PCoAA, with smaller angles being associated with increased frequency of symptomatic PCoAA.
abstract_id: PUBMED:23295418
Carotid siphon geometry and variants of the circle of Willis in the origin of carotid aneurysms. This study evaluated anatomical variants in the carotid siphon and of the circle of Willis in patients with aneurysms. We performed a retrospective analysis of cerebral angiographies. The Control Group was composed of patients without aneurysms. Posterior communicating artery (PcomA) aneurysms were more common in women (p<0.05), and the anterior communicating artery (AcomA) aneurysms in men (p<0.1). The incidence of fetal-type PcomA was higher in cases with co-occurring PcomA aneurysm (24 versus 8%, p<0.05). Patients with AcomA aneurysm had higher incidence of A1 hypoplasia (p<0.0001, OR=32.13, 95%CI 12.95-79.71) and lower frequency of fetal-type PcomA compared to their control counterparts (p=0.0125). The angle of carotid siphon was narrower in patients with PcomA aneurysm (27.3±19.1 versus 34.8±22.6, p=0.028). In conclusion, a narrower carotid siphon or the presence of fetal-type PcomA or A1 hypoplasia may cause hemodynamic stress, thereby promoting the formation of aneurysms in susceptible individuals.
abstract_id: PUBMED:37502421
Retrograde T-Stent Technique for Large, Wide-Necked Internal Carotid-Posterior Communicating Artery Aneurysm. Objective: We report a case of a wide-necked internal carotid-posterior communicating (IC-Pcom) artery aneurysm treated by the retrograde T-stent technique in which a stent was also placed in the Pcom artery via the posterior circulation.
Case Presentation: A 35-year-old woman was diagnosed with an unruptured right IC-Pcom artery aneurysm (maximum diameter: 11 mm, neck diameter: 8.5 mm) during a detailed examination for vertigo. The Pcom artery (2.1 mm) branched from the aneurysmal dome. A microcatheter was guided in retrograde via the Pcom artery from the posterior circulation. A low-profile visualized intraluminal support (LVIS) Jr. 2.5 mm × 17 mm was deployed from the internal carotid artery (ICA) to the Pcom artery, and then, an LVIS 4.5 mm × 23 mm was deployed while pressing the flare of the LVIS Jr. protruding into the ICA. T configuration stenting was completed, and the coil was inserted as tightly as possible.
Conclusion: The retrograde T-stent technique, which has the advantage of optimal stent positioning, is useful for preserving a Pcom artery branching from the aneurysmal dome.
Answer: The association between carotid siphon morphology and posterior communicating artery (PComA) aneurysms has been investigated in several studies, with varying results.
One study found no association between the morphological characteristics of the carotid siphon and the presence of a PComA aneurysm. Similarly, subgroup analysis within the same study showed no significant association between morphological characteristics of the carotid siphon and aneurysm rupture (PUBMED:27012777).
Another study suggested that the high tortuosity of the internal carotid artery (ICA) segment around the PComA bifurcation is associated with PComA aneurysm presence. This study identified tortuosity as the only potent morphological parameter associated with the presence of PComA aneurysms (PUBMED:35911913).
Similarly, another study reported that increased carotid siphon tortuosity is a risk factor for paraclinoid aneurysms, which are located near the PComA. This study found that smaller angles in the carotid siphon were significantly associated with the risk of paraclinoid aneurysms (PUBMED:35620791).
Additionally, one study found that PComAs with abnormal anatomical variants (Types P-I and P-II) might be more vulnerable to the development of PComA aneurysms. This study also noted a correlation between the angles of the C7 and C6 parts of the ICA and the presence of symptomatic PComA aneurysms, with smaller angles being associated with an increased frequency of symptomatic PComA aneurysms (PUBMED:27731782).
Lastly, another study concluded that a narrower carotid siphon or the presence of fetal-type PComA or A1 hypoplasia may cause hemodynamic stress, thereby promoting the formation of aneurysms in susceptible individuals (PUBMED:23295418). |
Instruction: Is the effect of alcohol on risk of stroke confined to highly stressed persons?
Abstracts:
abstract_id: PUBMED:15956807
Is the effect of alcohol on risk of stroke confined to highly stressed persons? Background: Psychological stress and alcohol are both suggested as risk factors for stroke. Further, there appears to be a close relation between stress and alcohol consumption. Several experimental studies have found alcohol consumption to reduce the immediate effects of stress in a laboratory setting. We aimed to examine whether the association between alcohol and stroke depends on level of self-reported stress in a large prospective cohort.
Methods: The 5,373 men and 6,723 women participating in the second examination of the Copenhagen City Heart Study in 1981-1983 were asked at baseline about their self-reported level of stress and their weekly alcohol consumption. The participants were followed up until 31 December 1997, during which 880 first-ever stroke events occurred. Data were analysed by means of Cox regression modelling.
Results: At a high stress level, weekly total consumption of 1-14 units of alcohol compared with no consumption seemed associated with a lower risk of stroke (adjusted RR: 0.57, 95% CI: 0.31-1.07). At lower stress levels, no clear associations were observed. Regarding subtypes, self-reported stress appeared only to modify the association between alcohol intake and ischaemic stroke events. Regarding specific types of alcoholic beverages, self-reported stress only modified the associations for intake of beer and wine.
Conclusions: This study indicates that the apparent lower risk of stroke associated with moderate alcohol consumption is confined to a group of highly stressed persons. It is suggested that alcohol consumption may play a role in reducing the risk of stroke by modifying the physiological or psychological stress response.
abstract_id: PUBMED:12006774
Alcohol drinking and risk of hemorrhagic stroke. In view of conflicting prior reports, we prospectively studied associations between alcohol consumption and subsequent hospitalization for hemorrhagic stroke (HS) in 431 persons. Alcohol use was determined at examinations in 1978-1984 among 128,934 members of a prepaid health plan. Cox proportional hazards models, with 6 covariates, yielded the following multivariate relative risks (95% CIs) for HS: lifelong abstainers (ref) = 1.0, ex-drinkers = 0.9 (0.5-1.5), persons drinking <1/month = 1.1 (0.8-1.4), >1/month but <1 drink/day = 0.7 (0.5-0.9), 1-2/day = 0.8 (0.6-1.1), 3-5/day = 1.0 (0.6-1.5), 6+/day = 1.9 (1.0-3.5). Relationships to alcohol were similar for subarachnoid (31% of HS) or intracerebral hemorrhage (69% of HS) and in men or women. Beverage choice (wine, beer, and liquor) was not independently related. We conclude that only heavy drinking is weakly related to increased HS risk and that light drinking need not be proscribed with respect to HS risk.
abstract_id: PUBMED:15066068
Alcohol intake and risk of dementia. Objectives: To examine the association between intake of alcoholic beverages and risk of Alzheimer's disease (AD) and dementia associated with stroke (DAS) in a cohort of elderly persons from New York City.
Design: Cohort study.
Setting: The Washington Heights Inwood-Columbia Aging Project.
Participants: Nine hundred eighty community-dwelling individuals aged 65 and older without dementia at baseline and with data on alcohol intake recruited between 1991 and 1996 and followed annually.
Measurements: Intake of alcohol was measured using a semiquantitative food frequency questionnaire at baseline. Subjects were followed annually, and incident dementia was diagnosed using Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, criteria and classified as AD or DAS.
Results: After 4 years of follow-up, 260 individuals developed dementia (199 AD, 61 DAS). After adjusting for age, sex, apolipoprotein E (APOE)-epsilon 4 status, education, and other alcoholic beverages, only intake of up to three daily servings of wine was associated with a lower risk of AD (hazard ratio=0.55, 95% confidence interval=0.34-0.89). Intake of liquor, beer, and total alcohol was not associated with a lower risk of AD. Stratified analyses by the APOE-epsilon 4 allele revealed that the association between wine consumption and lower risk of AD was confined to individuals without the APOE-epsilon 4 allele.
Conclusions: Consumption of up to three servings of wine daily is associated with a lower risk of AD in elderly individuals without the APOE epsilon-4 allele.
abstract_id: PUBMED:2239729
Risk of cardiovascular mortality in alcohol drinkers, ex-drinkers and nondrinkers. Lower cardiovascular mortality rates in lighter drinkers (versus abstainers or heavier drinkers) in population studies have been substantially due to lower coronary artery disease (CAD) mortality. Controversy about this U-shaped curve focuses on whether alcohol protects against CAD or, because of other traits, whether abstainers are at increased risk. Inclusion of ex-drinkers among abstainers in some studies has led to speculation that this might be the trait increasing the risk of abstainers. This new prospective study among 123,840 persons with 1,002 cardiovascular (600 CAD) deaths showed that ex-drinkers had higher cardiovascular and CAD mortality risks than lifelong abstainers in unadjusted analyses, but not in analyses adjusted for age, gender, race, body mass index, marital status and education. Use of alcohol was associated with higher risk of mortality from hypertension, hemorrhagic stroke and cardiomyopathy, but with lower risk from CAD, occlusive stroke and nonspecific cardiovascular syndromes. Subsets free of baseline cardiovascular or CAD risk had U-shaped alcohol-CAD curves similar to subsets with baseline risk. Among ex-drinkers, maximal past intake and reasons for quitting (medical versus non-medical) were unrelated to cardiovascular or CAD mortality. These data show that: (1) alcohol has disparate relations to cardiovascular conditions; (2) higher cardiovascular mortality rates among ex-drinkers are due to confounding traits related to past alcohol use; and (3) the U-shaped alcohol-CAD relation is not due to selective abstinence by persons at higher risk. The findings indirectly support a protective effect of lighter drinking against CAD.
abstract_id: PUBMED:10501273
Alcohol intake and the risk of stroke. Alcohol consumption has been reported to have both beneficial and harmful effects on the incidence of stroke. Different drinking habits may explain the diversity of the observations, but this is still unclear. We reviewed recent clinical and epidemiological studies to find out whether alcohol intake could increase or decrease the risk for stroke. By a systematic survey of literature published from 1989 to 1997, we identified 14 case-control studies addressing alcohol as a risk factor for haemorrhagic and ischaemic stroke morbidity and fulfilling the following criteria: the type of stroke was determined by a head computerised tomography scan on admission or at autopsy; and alcohol consumption was verified using structured questionnaires or by personal interviews. In some studies, adjustment for hypertension abolished the independent role of alcohol as a risk factor. On the other hand, the studies covering even recent alcohol intake showed in many cases that heavy drinking is an independent risk factor for most stroke subtypes, and that the risk may decrease relatively rapidly after the cessation of alcohol abuse. In some studies, regular light to moderate drinking seemed to be associated with a decreased risk for ischaemic stroke of atherothrombotic origin. In conclusion, recent heavy alcohol intake seems to be an independent risk factor for all major subtypes of stroke. The ultimate mechanisms leading to the increased risk are unclear. The significance of alcohol as a risk factor has been demonstrated in young subjects because they are more often heavy drinkers than the elderly. Several factors to explain the beneficial effect of light to moderate drinking have been proposed.
abstract_id: PUBMED:15330400
Alcohol as a risk factor for hemorrhagic stroke. Purpose: Whereas the protective effect of mild-to-moderate alcohol consumption against ischemic stroke has been well recognized, there is conflicting evidence regarding the link between alcohol consumption and hemorrhagic strokes. The aim of the present study is to summarize the results of case-control and cohort studies published on this issue.
Methods: Recent epidemiologic articles on the relationship between alcohol consumption and hemorrhagic stroke were identified by Medline searches limited to title words using the following search terms: "alcohol AND cerebrovascular dis*", "alcohol AND stroke", "alcohol AND cerebral hemorrhage" and "alcohol AND hemorrhagic stroke".
Results: Most case-control and cohort studies either reported only on total strokes or on a combined group of hemorrhagic strokes including intracerebral as well as subarachnoid hemorrhages. There was a consensus among reports that heavy alcohol consumption was associated with a higher risk of hemorrhagic strokes. Controversy remains regarding the effect of mild-to-moderate alcohol consumption: while some studies reported a protective effect, others found a dose-dependent linear relationship between the amount of alcohol consumed and the risk of hemorrhagic stroke. The differential effect of moderate alcohol consumption on hemorrhagic compared to ischemic strokes is mostly attributed to alcohol- and withdrawal-induced sudden elevations of blood pressure, and coagulation disorders.
Conclusions: Heavy drinking should be considered as one of the risk factors for hemorrhagic stroke. In contrast to the protective effect of mild-to-moderate alcohol use against ischemic strokes, moderate drinking might result in an increased risk of hemorrhagic strokes.
abstract_id: PUBMED:19390181
Cerebrovascular ischemic events in HIV-1-infected patients receiving highly active antiretroviral therapy: incidence and risk factors. Background: Stroke risk is increased in AIDS patients, and highly active antiretroviral therapy (HAART) may accelerate atherosclerosis, but little is known about the incidence and risk factors for ischemic stroke in patients under HAART. We have studied the incidence, types of stroke and possible risk factors for cerebrovascular ischemic events in a large cohort of HIV-1-infected patients treated with HAART.
Methods: We conducted a retrospective review of ischemic strokes and transient ischemic attacks occurring in a cohort of HIV-1-infected patients treated with HAART from 1996 to 2008. As a control group, consecutive unselected patients from the same cohort were included. Patients and controls were compared for demographic, clinical and laboratory variables, including vascular risk factors, data on HIV infection and duration of HAART. Variables with significant differences were included in a backward logistic regression model.
Results: Twenty-seven cerebrovascular ischemic events occurred in 25 patients, with an incidence of 189 events (166 strokes) per 100,000 patients/year. Independent factors associated with cerebrovascular events were: history of high alcohol intake (OR 7.13, 95% CI 1.69-30.11; p = 0.007), a previous diagnosis of AIDS (OR 6.61, 95% CI 2.03-21.51; p = 0.002) and fewer months under HAART (OR 0.97, 95% CI 0.96-0.99; p < 0.001). Six patients (24%) had large artery atherosclerosis: they had a similar HAART duration to controls.
Conclusions: Stroke incidence is high in patients with HIV-1 infection treated with HAART. Duration of HAART exerted a global protective effect for cerebrovascular ischemic events, and our results do not support a major role in large artery atherosclerosis stroke. High alcohol intake is a major risk factor for stroke in these patients.
abstract_id: PUBMED:31901187
Alcohol and cardiovascular disease: Position Paper of the Czech Society of Cardiology. Epidemiologic studies consistently report a U-shaped curve relationship between the amount of alcohol consumption and cardiovascular disease, with consumption of ≥ three alcoholic drinks being associated with an increased risk. However, the cardioprotective effect of light and moderate alcohol consumption has been recently questioned. In the absence of a randomized trial confirming the cardioprotective effect of light or moderate alcohol consumption, an alternative method to prove the causality is Mendelian randomization using a genetic variant serving as a proxy for alcohol consumption. A Mendelian randomization analysis by Holmes et al. suggests that a reduction in alcohol intake is beneficial for cardiovascular health also in light to moderate drinkers. In a recent analysis of 83 prospective studies, alcohol consumption was roughly linearly associated with a higher risk of stroke, coronary heart disease excluding myocardial infarction, heart failure and risk of death from aortic aneurysm dissection. By contrast, increased alcohol consumption was associated with a lower risk of myocardial infarction. "Low-risk" alcohol consumption recommended by the National Institute of Public Health, Czech Republic, should not exceed 16 g of 100% ethanol/day for women and 24 g/day for men; at least two days a week should be alcohol free, and the dose of ethanol during binge drinking should not exceed 40 g. In practice, this means one standard drink daily for five days at most and two standard drinks at most when binge drinking. These amounts should be considered the highest acceptable limits, but alcohol consumption in general should be discouraged.
abstract_id: PUBMED:35440171
Association of Change in Alcohol Consumption With Risk of Ischemic Stroke. Background: The effect of serial change in alcohol consumption on stroke risk has been limitedly evaluated. We investigated the association of change in alcohol consumption with risk of stroke.
Methods: This study is a population-based retrospective cohort study from the National Health Insurance Service database of all Koreans. A total of 4,513,746 participants aged ≥40 years who underwent 2 subsequent national health examinations in both 2009 and 2011 were included. Alcohol consumption was assessed by average alcohol intake (g/day) based on self-questionnaires and categorized into non-, mild, moderate, and heavy drinking. Change in alcohol consumption was defined by shift of category from baseline. Cox proportional hazards model was used with adjustment for age, sex, smoking status, regular exercise, socioeconomic information, and comorbidities, Charlson Comorbidity Index, systolic blood pressure, and laboratory results. Subgroup analysis among those with the third examination was conducted to reflect further change in alcohol consumption.
Results: During 28 424 497 person-years of follow-up, 74 923 ischemic stroke events were identified. Sustained mild drinking was associated with a decreased risk of ischemic stroke (adjusted hazard ratio, 0.88 [95% CI, 0.86-0.90]) compared with sustained nondrinking, whereas sustained heavy drinking was associated with an increased risk of ischemic stroke (adjusted hazard ratio, 1.06 [95% CI, 1.02-1.10]). Increasing alcohol consumption was associated with an increased risk of ischemic stroke (adjusted hazard ratio, 1.11 [95% CI, 1.06-1.17] from mild to moderate; adjusted hazard ratio, 1.28 [95% CI, 1.19-1.38] from mild to heavy) compared with sustained mild drinkers. Reduction of alcohol consumption from heavy to mild level was associated with a 17% decreased risk of ischemic stroke across the three examinations.
Conclusions: Light-to-moderate alcohol consumption is associated with a decreased risk of ischemic stroke, although it might not be causal and could be impacted by sick people abstaining from drinking. Reduction of alcohol consumption from heavy drinking is associated with a decreased risk of ischemic stroke.
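A minimal sketch of the type of Cox proportional hazards analysis described in the Methods above is given below (Python with the lifelines package). The toy data frame, column names, and reduced covariate set are assumptions for illustration only, not the study's actual variables or code.

    import pandas as pd
    from lifelines import CoxPHFitter

    # Toy stand-in for the cohort: follow-up (years), ischemic-stroke indicator,
    # an indicator for sustained heavy drinking (vs. sustained non-drinking), and age.
    df = pd.DataFrame({
        "followup_years":  [5.2, 6.1, 4.8, 6.3, 5.9, 3.4, 6.0, 5.5, 4.2, 6.4],
        "stroke":          [0,   1,   1,   0,   0,   1,   0,   1,   0,   0],
        "sustained_heavy": [0,   1,   0,   0,   1,   1,   0,   0,   1,   0],
        "age":             [55,  62,  70,  48,  66,  59,  51,  73,  57,  60],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="followup_years", event_col="stroke")
    cph.print_summary()  # the exp(coef) column is the adjusted hazard ratio for each covariate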
abstract_id: PUBMED:15993353
Risk factors for heart disease and stroke among diabetic persons, by disability status. Objective: To determine whether disabled diabetic persons have a higher prevalence of risk factors for heart disease and stroke than do diabetic persons without disability. RESEARCH DESIGN AND METHODS: Data were analyzed for noninstitutionalized adults in 27 states and the District of Columbia that participated in the Behavioral Risk Factor Surveillance System (BRFSS) in 2001 and/or 2003. Logistic regression analysis was used to estimate the adjusted prevalence and odds ratios of disabled diabetic persons, by sociodemographic characteristics. The logit form of each model was used to estimate conditional marginal probabilities of risk factors for heart disease and stroke among diabetic persons, by disability status.
Results: Diabetic persons with disability were more likely than those without disability to have more risk factors for heart disease and stroke, including insufficient leisure-time physical activity or inactivity (adjusted prevalence: 75.2% vs. 63.3%; P<.001), obesity (58.9% vs. 43.3%; P<.001), hypercholesterolemia (52.6% vs. 48.4%; P=.038), and hypertension (63.9% vs. 56.6%; P<.001). They were also more likely to have one or more, two or more, three or more, and four or more risk factors (97.2% vs. 95.6%, 83.5% vs. 74.0%, 56.5% vs. 41.1%, and 22.2% vs. 13.6%, respectively; P≤.005).
Conclusions: Diabetic persons with disability are more likely than those without disability to have clusters of risk factors for heart disease and stroke. Health care guidelines specifically targeting diabetic patients with disability may be needed to aid health care providers in addressing these risk factors.
Answer: The effect of alcohol on the risk of stroke does not appear to be confined solely to highly stressed persons. While one study suggests that the apparent lower risk of stroke associated with moderate alcohol consumption is confined to a group of highly stressed persons (PUBMED:15956807), other studies indicate a more complex relationship between alcohol consumption and stroke risk that is not limited to stress levels.
For instance, research has shown that only heavy drinking is weakly related to an increased risk of hemorrhagic stroke (HS), and that light drinking does not necessarily need to be proscribed with respect to HS risk (PUBMED:12006774). Another study found that consumption of up to three servings of wine daily is associated with a lower risk of Alzheimer's disease (AD), which is a form of dementia associated with stroke, particularly in individuals without the APOE epsilon-4 allele (PUBMED:15066068).
Moreover, alcohol has been reported to have disparate relations to different cardiovascular conditions, with lighter drinking potentially having a protective effect against coronary artery disease (CAD), but not necessarily against other conditions such as hemorrhagic stroke or cardiomyopathy (PUBMED:2239729). Additionally, recent heavy alcohol intake has been identified as an independent risk factor for all major subtypes of stroke, with the risk decreasing relatively rapidly after cessation of alcohol abuse (PUBMED:10501273).
Further complicating the picture, some studies have found that moderate drinking might result in an increased risk of hemorrhagic strokes, contrasting with its protective effect against ischemic strokes (PUBMED:15330400). In HIV-1-infected patients receiving highly active antiretroviral therapy, high alcohol intake has been identified as a major risk factor for stroke (PUBMED:19390181).
A position paper by the Czech Society of Cardiology also indicates that the relationship between alcohol consumption and cardiovascular disease is U-shaped, with light to moderate consumption potentially having a cardioprotective effect, but this effect has been questioned and is not universally accepted (PUBMED:31901187).
Lastly, a study on the association of change in alcohol consumption with the risk of ischemic stroke found that light-to-moderate alcohol consumption is associated with a decreased risk of ischemic stroke, but this might not be causal and could be influenced by sick individuals abstaining from drinking; the same study also found that reducing consumption from heavy to mild levels was associated with a decreased risk of ischemic stroke (PUBMED:35440171).
Instruction: Orthotopic liver transplant using allografts from geriatric population in the United States: is there any age limit?
Abstracts:
abstract_id: PUBMED:20716036
Orthotopic liver transplant using allografts from geriatric population in the United States: is there any age limit? Objectives: Observations of minimal pathophysiological changes in the liver with healthy aging represent the rationale for expanding the donor pool with older donors. However, a debate exists for their upper age limit. The aim of this study is to examine the outcomes of orthotopic liver transplants from older patients (>or= 60 years).
Materials And Methods: Using the Organ Procurement and Transplant Network/United Network for Organ Sharing (OPTN/UNOS) data, we retrospectively analyzed graft and patient survivals of orthotopic liver transplants done with octogenarian grafts (n=197) and compared them with orthotopic liver transplants done with donors aged between 60 and 79 years (n=4003) and < 60 years (n=21 290) during 2003 to 2007.
Results: One- and 3-year graft and patient survival rates among recipients of hepatic allografts from donors < 60 years of age were significantly superior to recipients of octogenarian grafts (graft: 84% vs 75.5% at 1 year; 74.2% vs 61.2% at 3 years; P < .001; patient: 87.8% vs 81.0% at 1-year; 79.3% vs 69.1% at 3 years; P < .001). However, there was no survival difference between recipients of allografts from donors aged > 80 years and 60-79 years (graft: 75.5% vs 77.4% at 1 year; 61.2% vs 64.2% at 3 years; P = .564; patient: 81.0% vs 83.8% at 1 year; 69.1% vs 71.8% at 3 years; P = .494). It correlates well with hepatitis C virus-seronegativity and relatively lower model for end-stage liver disease score among recipients of octogenarian grafts (P < .001).
Conclusions: Careful donor evaluation, avoidance of additional donor risk factors, and their pairing with appropriate recipients offer acceptable functional recovery, even with donors > 80 years.
abstract_id: PUBMED:35642976
Disparities in the Use of Older Donation After Circulatory Death Liver Allografts in the United States Versus the United Kingdom. Background: This study aimed to assess the differences between the United States and the United Kingdom in the characteristics and posttransplant survival of patients who received donation after circulatory death (DCD) liver allografts from donors aged >60 y.
Methods: Data were collected from the UK Transplant Registry and the United Network for Organ Sharing databases. Cohorts were dichotomized into donor age subgroups (donor >60 y [D >60]; donor ≤60 y [D ≤60]). Study period: January 1, 2001, to December 31, 2015.
Results: 1157 DCD LTs were performed in the United Kingdom versus 3394 in the United States. Only 13.8% of US DCD donors were aged >50 y, contrary to 44.3% in the United Kingdom. D >60 were 22.6% in the United Kingdom versus 2.4% in the United States. In the United Kingdom, 64.2% of D >60 clustered in 2 metropolitan centers. In the United States, there was marked inter-regional variation. A total of 78.3% of the US DCD allografts were used locally. One- and 5-y unadjusted DCD graft survival was higher in the United Kingdom versus the United States (87.3% versus 81.4%, and 78.0% versus 71.3%, respectively; P < 0.001). One- and 5-y D >60 graft survival was higher in the United Kingdom (87.3% versus 68.1%, and 77.9% versus 51.4%, United Kingdom versus United States, respectively; P < 0.001). In both groups, grafts from donors ≤30 y had the best survival. Survival was similar for donors aged 41 to 50 versus 51 to 60 in both cohorts.
Conclusions: Compared with the United Kingdom, older DCD LT utilization remained low in the United States, with worse D >60 survival. Nonetheless, present data indicate similar survivals for older donors aged ≤60, supporting an extension to the current US DCD age cutoff.
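The unadjusted graft-survival comparison reported in the Results above is the kind of analysis that can be sketched with Kaplan-Meier estimates and a log-rank test, for example in Python with lifelines. The follow-up times and event indicators below are placeholders, not registry data.

    import numpy as np
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    # Placeholder follow-up times (years) and graft-failure indicators for the two cohorts.
    t_uk = np.array([1.2, 5.0, 3.3, 4.8, 0.9, 5.0]); e_uk = np.array([0, 0, 1, 0, 1, 0])
    t_us = np.array([0.6, 2.1, 5.0, 1.4, 3.9, 5.0]); e_us = np.array([1, 1, 0, 1, 0, 0])

    km_uk = KaplanMeierFitter(label="UK DCD").fit(t_uk, e_uk)
    km_us = KaplanMeierFitter(label="US DCD").fit(t_us, e_us)
    print(km_uk.survival_function_at_times([1, 5]))  # unadjusted 1- and 5-year graft survival
    print(km_us.survival_function_at_times([1, 5]))
    print(logrank_test(t_uk, t_us, event_observed_A=e_uk, event_observed_B=e_us).p_value)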
abstract_id: PUBMED:32034946
A learning curve in using orphan liver allografts for transplantation. Given the critical shortage of donor livers, marginal liver allografts have potential to increase donor supply. We investigate trends and long-term outcomes of liver transplant using national share allografts transplanted after rejection at the local and regional levels. We studied a cohort of 75 050 candidates listed in the Organ Procurement and Transplantation Network for liver transplantation between 2002 and 2016. We compared patients receiving national share and regional/local share allografts from 2002-2006, 2007-2011, and 2012-2016, performing multivariate Cox regression for graft survival. Recipient and center-level covariates that were not significant (P < .05) were removed. Graft survival of national share allografts improved over time. National share allografts had a 26% increased risk for graft failure in 2002-2006 but no impact on graft survival in 2007-2011 and 2012-2016. The cold ischemia time (CIT) of national share allografts decreased from 10.4 to 8.0 hours. We demonstrate that CIT had significant impact on graft survival using national share allografts (CIT <6 hours: hazard ratio 0.75 and CIT >12 hours: hazard ratio 1.25). Despite a trend toward sicker recipients and poorer quality allografts, graft survival outcomes using national share allografts have improved to benchmark levels. Reduction in cold ischemia time is a possible explanation.
abstract_id: PUBMED:2426172
Subcapsular hepatic necrosis in orthotopic liver allografts. Five cases of subcapsular liver necrosis were found in a series of 55 hepatic orthotopic allografts examined at hepatectomy or autopsy during a 3-yr period at Children's Hospital of Pittsburgh. There was a pronounced rise in liver enzymes in the first few days in all of the cases after transplantation followed by a decrease in values in four of the cases over the next few days. All were characterized by an irregular subcapsular band of necrotic tissue involving both lobes, to a variable degree, but most frequently the right lobe. There were no obstructions or occlusions of the extrahepatic arteries, portal or hepatic veins. Hepatocyte necrosis was frequently observed in periportal areas, although centrilobular necrosis was also common. Varying degrees of steatosis were seen in the rest of the liver. Various etiological possibilities are discussed, particularly the role of hypoperfusion. Focal subcapsular necrosis of liver allografts may be more frequent than is presently realized. Awareness that hepatic necrosis in allografts may occur as localized subcapsular phenomenon may prevent misinterpretation of superficial biopsy findings as being representative of the entire organ, thus over-estimating the degree of damage.
abstract_id: PUBMED:32274340
The Reliability of Fibro-test in Staging Orthotopic Liver Transplant Recipients with Recurrent Hepatitis C. Background and Aims: Liver biopsy remains the gold standard for staging of chronic liver disease following orthotopic liver transplantation. Noninvasive assessment of fibrosis with Fibro-test (FT) is well-studied in immunocompetent populations with chronic hepatitis C virus infection. The aim of this study is to investigate the diagnostic value of FT in the assessment of hepatic fibrosis in the allografts of liver transplant recipients with evidence of recurrent hepatitis C. Methods: We retrospectively compared liver biopsies and FT performed within a median of 1 month of each other in orthotopic liver transplantation recipients with recurrent hepatitis C. Results: The study population comprised 22 patients, most of them male (19/22), and with median age of 62 years. For all patients, there was at least a one-stage difference in fibrosis as assessed by liver biopsy compared to FT, while for the majority (16/22) there was at least a two-stage difference. The absence of correlation between the two modalities was statistically demonstrated (Mann-Whitney U test, p = 0.01). In detecting significant fibrosis (a METAVIR stage of F2 and above), an FT cut-off of 0.5 showed moderate sensitivity (77%) and negative predictive value (80%), but suboptimal specificity (61%) and positive predictive value (58%). Conclusions: In post-transplant patients with recurrent hepatitis C, FT appears to be inaccurately assessing the degree of allograft fibrosis, therefore limiting its reliability as a staging tool.
abstract_id: PUBMED:8475562
Prolonged survival of rat orthotopic liver allografts after intrathymic inoculation of donor-strain cells. Permanent donor-specific tolerance to tissue or organ allografts can be readily achieved without immunosuppression by administration of donor lymphohematopoietic cells to neonatal rodents. In adult recipients, however, induction of transplantation tolerance by this strategy generally requires intensive cytoablative conditioning of the recipient. We have now demonstrated that intrathymic inoculation of donor bone marrow or hepatic cells in conjunction with a single dose of antilymphocyte serum is effective in prolonging survival of DA rat orthotopic liver allografts in LEW strain recipients, which ordinarily rapidly reject such transplants. The unresponsive state achieved is donor-specific, as evidenced by the failure of intrathymic inocula of third-party WF cells to promote survival of LEW recipients of orthotopic DA liver allografts. Moreover, intravenous administration of the donor cells fails to extend liver allograft survival, demonstrating that the inoculum must be present in the thymus to promote unresponsiveness. Established DA liver allografts induced a state of systemic tolerance in LEW hosts, allowing their subsequent acceptance of donor-strain skin allografts. We hypothesize that the unresponsive state achieved by intrathymic inoculation of donor cells may result from the deletion or functional inactivation of alloreactive clones in a thymus bearing donor alloantigens. In this regard, cells of the macrophage/dendritic lineage (descendants of the bone marrow inoculum or hepatic Kupffer cells) may play a critical role by promoting thymic microchimerism and exerting modulatory effect on T cell development.
abstract_id: PUBMED:28267886
Primary non-function is frequently associated with fatty liver allografts and high mortality after re-transplantation. Background & Aims: The shortage of liver donations demands the use of suboptimal grafts, with steatosis being a frequent finding. Although ≤30% macrovesicular steatosis is considered to be safe, the risk for primary non-function (PNF) and the outcome after re-transplantation (re-OLT) are unknown.
Methods: Among 1205 orthotopic liver transplantations performed at our institution the frequency, survival and reason of re-OLT were evaluated. PNF (group A) cases and those with initial transplant function but subsequent need for re-OLT (group B) were analysed. Histopathology and clinical judgement determined the cause of PNF and included an assessment of hepatic steatosis. Additionally, survival of fatty liver allografts (group C) not requiring re-OLT was considered in Kaplan-Meier and multivariate regression analysis.
Results: A total of 77 high urgency re-OLTs were identified and included 39 PNF cases. Nearly 70% of PNF cases were due to primary fatty liver allografts. The 3-month in-hospital mortality for PNF cases after re-OLT was 46% and the mean survival after re-OLT was 0.5 years as compared to 5.2 and 5.1 years for group B, C, respectively, (P<.008). In multivariate Cox regression analysis only hepatic steatosis was associated with an inferior survival (HR 4.272, P=.002). The MELD score, donor BMI, age, cold ischaemic time, ICU stay, serum sodium and transaminases did not influence overall survival.
Conclusions: Our study highlights fatty liver allografts to be a major cause for PNF with excessive mortality after re-transplantation. The findings demand the development of new methods to predict risk for PNF of fatty liver allografts.
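The survival comparison above rests on Kaplan-Meier estimation and a multivariate Cox proportional hazards model in which hepatic steatosis was the only covariate retaining significance (HR 4.272). The following is a minimal sketch of that kind of analysis with the `lifelines` package on fabricated data; the column names and values are assumptions for illustration, not the study dataset.
```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Fabricated example data: survival time in years after transplantation,
# an event flag (1 = death), and candidate predictors.
df = pd.DataFrame({
    "years":     [0.3, 0.5, 1.2, 4.8, 5.5, 2.1, 6.0, 0.2, 3.3, 5.0],
    "death":     [1,   1,   0,   0,   1,   1,   0,   1,   0,   0],
    "steatosis": [1,   1,   1,   0,   0,   1,   0,   1,   0,   0],  # fatty allograft
    "meld":      [32,  28,  25,  18,  20,  30,  17,  35,  22,  19],
})

# Kaplan-Meier survival curves by allograft steatosis status
for label, grp in df.groupby("steatosis"):
    kmf = KaplanMeierFitter()
    kmf.fit(grp["years"], event_observed=grp["death"], label=f"steatosis={label}")
    print(kmf.survival_function_)

# Multivariate Cox proportional hazards model (hazard ratios = exp(coef))
cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="death")
cph.print_summary()
```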
abstract_id: PUBMED:33455886
Hepatic Vein Flow Index During Orthotopic Liver Transplantation as a Predictive Factor for Postoperative Early Allograft Dysfunction. Objectives: The authors devised a hepatic vein flow index (HVFi), using intraoperative transesophageal echocardiography and graft weight, and investigated its predictive value for postoperative graft function in orthotopic liver transplant.
Design: Prospective clinical trial.
Setting: Single-center tertiary academic hospital.
Participants: Ninety-seven patients who had orthotopic liver transplant with the piggy-back technique between February 2018 and December 2019.
Measurements And Main Results: HVFi was defined as HV flow/graft weight. Patients who developed early graft dysfunction (EAD) had low HVFi in systole (HVFi sys, 1.23 v 2.19 L/min/kg, p < 0.01), low HVFi in diastole (HVFi dia, 0.87 v 1.54 L/min/kg, p < 0.01), low hepatic vein flow (HVF) in systole (HVF sys, 2.04 v 3.95 L/min, p < 0.01), and low HVF in diastole (HVF dia, 1.44 v 2.63 L/min, p < 0.01). More cardiac death, more vasopressors at the time of measurement, more acute rejection, longer time to normalize total bilirubin (TIME t-bil), longer surgery time, longer neohepatic time, and more packed red blood cell transfusion were observed in the EAD patients. All HVF parameters were negatively correlated with TIME t-bil (HVFi sys R = -0.406, p < 0.01; HVFi dia R = -0.442, p < 0.01; HVF sys R = -0.44, p < 0.01; HVF dia R = -0.467, p < 0.01). The receiver operating characteristic curve analysis determined the best cut-off levels of HVFi to predict occurrence of EAD (HVFi sys <1.608, HVFi dia <0.784 L/min/kg), acute rejection (HVFi sys <1.388, HVFi dia <1.077 L/min/kg), and prolonged high total bilirubin (HVFi sys <1.471, HVFi dia <1.087 L/min/kg).
Conclusions: The authors' devised HVFi has the potential to predict the postoperative graft function.
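The index above is hepatic vein flow divided by graft weight, and the reported cut-offs come from receiver operating characteristic (ROC) analysis. Below is a minimal, hypothetical sketch of that workflow with scikit-learn; the measurements are invented, and the sign handling (low HVFi predicts EAD, so the negated index is used as the risk score) is an assumption made explicit in the comments.
```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical intraoperative measurements
hv_flow_sys  = np.array([2.0, 3.9, 1.8, 4.2, 2.5, 3.5, 1.5, 4.0])   # L/min, systole
graft_weight = np.array([1.6, 1.7, 1.5, 1.8, 1.7, 1.6, 1.4, 1.9])   # kg
ead          = np.array([1,   0,   1,   0,   1,   0,   1,   0])     # early allograft dysfunction

hvfi_sys = hv_flow_sys / graft_weight        # HVFi = hepatic vein flow / graft weight

# Lower HVFi is associated with EAD, so use the negated index as the risk score
score = -hvfi_sys
fpr, tpr, thresholds = roc_curve(ead, score)
print("AUC:", roc_auc_score(ead, score))

# Youden's J picks the threshold maximising sensitivity + specificity - 1
best = np.argmax(tpr - fpr)
print("best HVFi sys cut-off (L/min/kg):", -thresholds[best])
```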
abstract_id: PUBMED:22081926
Management of excluded bile ducts in paediatric orthotopic liver transplant recipients of technical variant allografts. Background: A strategy to increase the number of size- and weight-appropriate organs and decrease the paediatric waiting list mortality is wider application of sectional orthotopic liver transplantation (OLT). These technical variants consist of living donor, deceased donor reduced and split allografts. However, these grafts have an increased risk of biliary complications. An unusual and complex biliary complication which can lead to graft loss is inadvertent exclusion of a major segmental bile duct. We present four cases and describe an algorithm to correct these complications.
Methods: A retrospective review of the paediatric orthotopic liver transplantation database (2000-2010) at Washington University in St. Louis/St. Louis Children's Hospital was conducted.
Results: Sixty-eight patients (55%) received technical variant allografts. Four complications of excluded segmental bile ducts were identified. Percutaneous cholangiography provided diagnostic confirmation and stabilization with external biliary drainage. All patients required interval surgical revision of their hepaticojejunostomy for definitive drainage. Indwelling biliary stents aided intra-operative localization of the excluded ducts. All allografts were salvaged.
Discussion: Aggressive diagnosis, percutaneous decompression and interval revision hepaticojejunostomy are the main tenets of management of an excluded bile duct. Careful revision hepaticojejunostomy over a percutaneous biliary stent can result in restoration of biliary continuity and allograft survival.
abstract_id: PUBMED:37722450
Dysfunctional Cori and Krebs cycle and inhibition of lactate transporters constitute a mechanism of primary nonfunction of fatty liver allografts. Orthotopic liver transplantation (OLT) is a lifesaving procedure. However, grafts may fail due to primary nonfunction (PNF). In the past, we demonstrated PNFs to be mainly associated with fatty allografts, and given its unpredictable nature, the development of a disease model is urgently needed. In an effort to investigate mechanism of fatty allograft-associated PNFs, we induced fatty liver disease in donor animals by feeding rats a diet deficient in methionine and choline (MCD). We performed OLT with allografts of different grades of hepatic steatosis and compared the results to healthy ones. We assessed liver function by considering serum biochemistries, and investigated genome wide responses following OLT of healthy and fatty allograft-associated PNFs. Furthermore, we performed immunohistochemistry to evaluate markers of oxidative stress and reperfusion injury, inflammation, glycolysis and gluconeogenesis, lactate transport, and its utilization as part of the Cori cycle. Strikingly, PNFs are strictly lipid content dependent. Nonetheless, a fat content of ≤17% and an increase in the size of hepatocytes of ≤11% (ballooning) greatly improved outcome of OLTs and the hepatic microcirculation. Mechanistically, PNFs arise from a dysfunctional Cori cycle with complete ablation of the lactate transporter SLC16A1. Thus, lipid-laden hepatocytes fail to perform gluconeogenesis via lactate reutilization, and the resultant hyperlactatemia and lactic acidosis causes cardiac arrhythmogenicity and death. Furthermore, the genomic and immunohistochemistry investigations underscore a dysfunctional Krebs cycle with impaired energy metabolism in lipid-burdened mitochondria. Together, we show fatty allografts to be highly vulnerable towards ischemia/reperfusion-injury, and stabilizing the Cori cycle is of critical importance to avert PNFs.
Answer: The use of allografts from the geriatric population for orthotopic liver transplants in the United States does not have a strict age limit, but outcomes vary with donor age. A study examining the outcomes of orthotopic liver transplants using grafts from donors aged ≥60 years found that graft and patient survival rates were significantly better among recipients of hepatic allografts from donors <60 years compared to recipients of octogenarian grafts (aged ≥80 years). However, there was no significant survival difference between recipients of allografts from donors aged >80 years and those aged 60-79 years, suggesting that with careful donor evaluation and appropriate recipient pairing, even donors >80 years can offer acceptable functional recovery (PUBMED:20716036).
In contrast, disparities exist in the use of older donation after circulatory death (DCD) liver allografts between the United States and the United Kingdom. The United States has a lower utilization of older DCD liver allografts compared to the United Kingdom, and the survival of grafts from donors aged >60 is worse in the United States. Despite this, data indicate similar survivals for older donors aged ≤60, supporting an extension to the current US DCD age cutoff (PUBMED:35642976).
Overall, while there is no explicit age limit for geriatric donors in the United States, the outcomes of liver transplants with older donors can be influenced by various factors, including donor and recipient characteristics, and careful consideration is required when using allografts from the geriatric population. |
Instruction: Influence of predispositions on post-traumatic stress disorder: does it vary by trauma severity?
Abstracts:
abstract_id: PUBMED:28566056
The impact of depression and PTSD symptom severity on trauma memory. Posttraumatic stress disorder (PTSD) and depression frequently co-occur following a traumatic event. Differences in the processing of autobiographical memory have been observed in both disorders in the form of overgeneralised memories and negative intrusive memories. The current study examined how symptoms of PTSD and depression influence the phenomenological characteristics of trauma memories. Undergraduate students who had experienced a traumatic event (n = 696) completed questionnaires online including measures of PTSD and depressive symptom severity. They rated their trauma memory on several phenomenological characteristics using the Memory Experiences Questionnaire [Sutin, A. R., & Robins, R. W. (2007). Phenomenology of autobiographical memories: The memory experiences questionnaire. Memory.]. Moderated multiple regression was used to examine how PTSD and depressive symptom severity related to each phenomenological characteristic. Symptoms of PTSD and depression were related separately and uniquely to the phenomenological characteristics of the trauma memory. PTSD severity predicted trauma memories that were more negative, contained higher sensory detail, and were more vivid. In contrast, depressive symptom severity predicted trauma memories that were less accessible and less coherent. These findings suggest that depressive and PTSD symptomatology affect traumatic memory differently and support a distinction between these two disorders.
abstract_id: PUBMED:30933703
Influence of earthquake exposure and left-behind status on severity of post-traumatic stress disorder and depression in Chinese adolescents. In the Longmenshan seismic fault zone in the Sichuan province of China, many children and adolescents have been exposed to the 2008 Wenchuan earthquake and/or the 2013 Lushan earthquake, and many are left alone for extended periods by parents who migrate to larger cities for work. We wished to examine how these two kinds of trauma, earthquake exposure and left-behind status, influence the severity of post-traumatic stress disorder (PTSD) and depressive reactions. A cross-sectional survey of 2447 adolescents aged 13-18 at 11 schools in three cities in the Longmenshan fault zone was conducted in 2016. Potential relationships of scores on the Children's Revised Impact of Event Scale (CRIES-13) and the Depression Self-Rating Scale (KADS-6) with severity of PTSD and depression symptoms were explored using ANOVA and multiple hierarchical linear regression. The prevalence of post-traumatic stress and depression symptoms was higher among left-behind children than among those not left behind, and both types of symptoms were more severe in children exposed to both earthquakes than in children exposed only to the Lushan earthquake. Our results suggest that earthquake exposure is a strong risk factor for PTSD, whereas being left behind is a strong risk factor for depression.
abstract_id: PUBMED:29856110
The role of site and severity of injury as predictors of mental health outcomes following traumatic injury. The aim of this study was to investigate the influence of injury site and severity as predictors of mental health outcomes in the initial 12 months following traumatic injury. Using a multisite, longitudinal study, participants with a traumatic physical injury (N = 1,098) were assessed during hospital admission and followed up at 3 months (N = 932, 86%) and at 12 months (N = 715, 71%). Injury site was measured using the Abbreviated Injury Scale 90, and objective injury severity was measured using the Injury Severity Score. Participants also completed the Hospital Anxiety and Depression Scale and the Clinician Administered Post-traumatic Stress Disorder (PTSD) Scale. A random intercept mixed modelling analysis was conducted to evaluate the effects of site and severity of injury in relation to anxiety, PTSD, and depressive symptoms. Injury severity, as well as head and facial injuries, was predictive of elevated PTSD symptoms, and external injuries were associated with both PTSD and depression severity. In contrast, lower extremity injuries were associated with depressive and anxiety symptoms. The findings suggest that visible injuries are predictive of reduced mental health, particularly PTSD following traumatic injury. This has clinical implications for further advancing the screening for vulnerable injured trauma survivors at risk of chronic psychopathology.
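The abstract above describes a random-intercept mixed model relating injury site and severity to symptom scores measured repeatedly over 12 months. A minimal sketch of such a model with `statsmodels` follows; the column names and the simulated long-format data are assumptions for illustration, not the study's variables.
```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # participants, each assessed at baseline, 3 and 12 months

df = pd.DataFrame({
    "pid":         np.repeat(np.arange(n), 3),
    "month":       np.tile([0, 3, 12], n),
    "iss":         np.repeat(rng.integers(1, 40, n), 3),   # Injury Severity Score
    "head_injury": np.repeat(rng.integers(0, 2, n), 3),
})
# Simulated PTSD symptom score with a person-specific (random) intercept
person_effect = np.repeat(rng.normal(0, 5, n), 3)
df["ptsd"] = 20 + 0.4 * df["iss"] + 6 * df["head_injury"] - 0.3 * df["month"] \
             + person_effect + rng.normal(0, 4, len(df))

# Random-intercept model: repeated measures nested within participants
model = smf.mixedlm("ptsd ~ iss + head_injury + month", df, groups=df["pid"])
print(model.fit().summary())
```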
abstract_id: PUBMED:27046669
Maladaptive trauma appraisals mediate the relation between attachment anxiety and PTSD symptom severity. Objective: In a large sample of community-dwelling older adults with histories of exposure to a broad range of traumatic events, we examined the extent to which appraisals of traumatic events mediate the relations between insecure attachment styles and posttraumatic stress disorder (PTSD) symptom severity.
Method: Participants completed an assessment of adult attachment, in addition to measures of PTSD symptom severity, event centrality, event severity, and ratings of the A1 PTSD diagnostic criterion for the potentially traumatic life event that bothered them most at the time of the study.
Results: Consistent with theoretical proposals and empirical studies indicating that individual differences in adult attachment systematically influence how individuals evaluate distressing events, individuals with higher attachment anxiety perceived their traumatic life events to be more central to their identity and more severe. Greater event centrality and event severity were each in turn related to higher PTSD symptom severity. In contrast, the relation between attachment avoidance and PTSD symptoms was not mediated by appraisals of event centrality or event severity. Furthermore, neither attachment anxiety nor attachment avoidance was related to participants' ratings of the A1 PTSD diagnostic criterion.
Conclusion: Our findings suggest that attachment anxiety contributes to greater PTSD symptom severity through heightened perceptions of traumatic events as central to identity and severe. (PsycINFO Database Record
abstract_id: PUBMED:25309831
Measuring the Severity of Negative and Traumatic Events. We devised three measures of the general severity of events, which raters applied to participants' narrative descriptions: 1) placing events on a standard normed scale of stressful events, 2) placing events into five bins based on their severity relative to all other events in the sample, and 3) an average of ratings of the events' effects on six distinct areas of the participants' lives. Protocols of negative events were obtained from two non-diagnosed undergraduate samples (n = 688 and 328), a clinically diagnosed undergraduate sample all of whom had traumas and half of whom met PTSD criteria (n = 30), and a clinically diagnosed community sample who met PTSD criteria (n = 75). The three measures of severity correlated highly in all four samples but failed to correlate with PTSD symptom severity in any sample. Theoretical implications for the role of trauma severity in PTSD are discussed.
abstract_id: PUBMED:22703614
Influence of predispositions on post-traumatic stress disorder: does it vary by trauma severity? Background: Only a minority of trauma victims (<10%) develops post-traumatic stress disorder (PTSD), suggesting that victims vary in predispositions to the PTSD response to traumas. It is assumed that the influence of predispositions is inversely related to trauma severity: when trauma is extreme predispositions are assumed to play a secondary role. This assumption has not been tested. We estimate the influence of key predispositions on PTSD induced by an extreme trauma - associated with a high percentage of PTSD - (sexual assault), relative to events of lower magnitude (accidents, disaster, and unexpected death of someone close).
Method: The National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) is representative of the adult population of the USA. A total of 34 653 respondents completed the second wave in which lifetime PTSD was assessed. We conducted three series of multinomial logistic regressions, comparing the influence of six predispositions on the PTSD effect of sexual assault with each comparison event. Three pre-existing disorders and three parental history variables were examined.
Results: Predispositions predicted elevated PTSD risk among victims of sexual assault as they did among victims of comparison events. We detected no evidence that the influence of predispositions on PTSD risk was significantly lower when the event was sexual assault, relative to accidents, disasters and unexpected death of someone close.
Conclusions: Important predispositions increase the risk of PTSD following sexual assault as much as they do following accidents, disaster, and unexpected death of someone close. Research on other predispositions and alternative classifications of event severity would be illuminating.
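The core question in this abstract, whether predispositions matter less when the trauma is extreme, amounts to testing an interaction between event type and predisposition in a regression for PTSD. The authors ran multinomial logistic regressions on NESARC data; the sketch below is a simplified, hypothetical binary-logistic version of that interaction test on simulated data and does not reproduce their models.
```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000

df = pd.DataFrame({
    # 1 = sexual assault (extreme event), 0 = comparison event (accident, disaster, ...)
    "assault":       rng.integers(0, 2, n),
    # example predisposition: pre-existing anxiety disorder
    "prior_anxiety": rng.integers(0, 2, n),
})
# Simulated PTSD outcome; the true interaction here is zero by construction
logit = -2.0 + 1.2 * df["assault"] + 0.8 * df["prior_anxiety"]
df["ptsd"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# The assault:prior_anxiety term asks whether the predisposition's effect
# differs between the extreme trauma and the comparison events.
fit = smf.logit("ptsd ~ assault * prior_anxiety", df).fit(disp=False)
print(fit.summary())
print("interaction OR:", np.exp(fit.params["assault:prior_anxiety"]))
```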
abstract_id: PUBMED:18720396
Tonic immobility mediates the influence of peritraumatic fear and perceived inescapability on posttraumatic stress symptom severity among sexual assault survivors. This study evaluated whether tonic immobility mediates the relations between perceived inescapability, peritraumatic fear, and posttraumatic stress disorder (PTSD) symptom severity among sexual assault survivors. Female undergraduates (N = 176) completed questionnaires assessing assault history, perceived inescapability, peritraumatic fear, tonic immobility, and PTSD symptoms. Results indicated that tonic immobility fully mediated relations between perceived inescapability and overall PTSD symptom severity, as well as reexperiencing and avoidance/numbing symptom clusters. Tonic immobility also fully mediated the relation between fear and reexperiencing symptoms, and partially mediated relations between fear and overall PTSD symptom severity, and avoidance/numbing symptoms. Results suggest that tonic immobility could be one path through which trauma survivors develop PTSD symptoms. Further study of tonic immobility may inform our ability to treat trauma victims.
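Mediation of the kind reported here (tonic immobility carrying the effect of perceived inescapability on PTSD severity) is commonly quantified as the product of the a and b paths with a bootstrap confidence interval. The following sketch is a generic, hypothetical illustration of that approach on simulated data, not the authors' analysis; only the sample size is borrowed from the abstract.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 176  # sample size from the abstract; the data themselves are simulated

inescap = rng.normal(0, 1, n)                       # perceived inescapability
tonic   = 0.5 * inescap + rng.normal(0, 1, n)       # tonic immobility (mediator)
ptsd    = 0.6 * tonic + 0.1 * inescap + rng.normal(0, 1, n)

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                         # x -> m
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]   # m -> y | x
    return a * b

boot = []
idx = np.arange(n)
for _ in range(2000):                                # percentile bootstrap
    s = rng.choice(idx, n, replace=True)
    boot.append(indirect_effect(inescap[s], tonic[s], ptsd[s]))

est = indirect_effect(inescap, tonic, ptsd)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {est:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```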
abstract_id: PUBMED:25900026
An Investigation of Depression, Trauma History, and Symptom Severity in Individuals Enrolled in a Treatment Trial for Chronic PTSD. Objective: To explore how factors such as major depressive disorder (MDD) and trauma history, including the presence of childhood abuse, influence diverse clinical outcomes such as severity and functioning in a sample with posttraumatic stress disorder (PTSD).
Method: In this study, 200 men and women seeking treatment for chronic PTSD in a clinical trial were assessed for trauma history and MDD and compared on symptom severity, psychosocial functioning, dissociation, treatment history, and extent of diagnostic co-occurrence.
Results: Overall, childhood abuse did not consistently predict clinical severity. However, co-occurring MDD, and to a lesser extent a high level of trauma exposure, did predict greater severity, worse functioning, greater dissociation, more extensive treatment history, and additional co-occurring disorders.
Conclusion: These findings suggest that presence of co-occurring depression may be a more critical marker of severity and impairment than history of childhood abuse or repeated trauma exposure. Furthermore, they emphasize the importance of assessing MDD and its effect on treatment seeking and treatment response for those with PTSD.
abstract_id: PUBMED:25751510
The Number of Cysteine Residues per Mole in Apolipoprotein E Is Associated With the Severity of PTSD Re-Experiencing Symptoms. Apolipoprotein E (ApoE) is involved in critical neural functions and is associated with various neuropsychiatric disorders. ApoE exists in three isoforms that differ in the number of cysteine residues per mole (CysR/mole). This study evaluated associations between this informative ordinal biochemical scale (CysR/mole) and symptom severity in veterans with posttraumatic stress disorder (PTSD) or subthreshold PTSD. Results demonstrated a significant negative relationship between the CysR/mole and severity of PTSD re-experiencing symptoms, adjusted for trauma. The findings suggest a genetic influence on PTSD symptomatology and dovetail with recent advances regarding the molecular mechanisms underlying the differential effects of ApoE in the brain.
abstract_id: PUBMED:20154536
Injury severity and quality of life: whose perspective is important? Background: Ambiguity exists regarding factors that predict postinjury quality of life (QOL). We hypothesized that patient-perceived injury severity, not Injury Severity Score (ISS), would be correlated with QOL in a model that included severity of post-traumatic stress disorder (PTSD) symptoms.
Methods: Four hundred twenty-six trauma patients admitted to a Level I trauma center completed a questionnaire during inpatient stay and 6 months after injury. The questionnaire assessed physical component score and mental component score QOL with the SF-36, PTSD severity using the PTSD checklist, and used a four-point rating of perceived injury severity. ISS and demographic information were obtained from the trauma registry. Statistical analysis was done with Pearson's correlation and multiple regressions.
Results: ISS was not significantly correlated with perceived injury severity, PTSD symptom severity, physical component score, or mental component score. The majority of patients overestimated injury severity when compared with ISS. An increase in PTSD symptom severity and perceived injury severity significantly predicted both decreased physical and mental QOL at 6 months.
Conclusions: ISS does not give the full picture of the severity of injury. Surgeons should consider early screening of patients for perception of injury severity and PTSD symptoms to determine which patients may need psychologic intervention to improve long-term QOL.
Answer: The influence of predispositions on post-traumatic stress disorder (PTSD) does not appear to vary significantly by trauma severity. A study examining the influence of key predispositions on PTSD induced by an extreme trauma (sexual assault) relative to events of lower magnitude (accidents, disaster, and unexpected death of someone close) found that predispositions predicted elevated PTSD risk among victims of sexual assault as they did among victims of comparison events. The study detected no evidence that the influence of predispositions on PTSD risk was significantly lower when the event was sexual assault, relative to accidents, disasters, and unexpected death of someone close (PUBMED:22703614). This suggests that important predispositions increase the risk of PTSD following sexual assault as much as they do following other traumatic events.
Additionally, other factors such as the presence of major depressive disorder (MDD) and trauma history, including the presence of childhood abuse, influence diverse clinical outcomes such as severity and functioning in individuals with PTSD. However, the presence of co-occurring depression may be a more critical marker of severity and impairment than history of childhood abuse or repeated trauma exposure (PUBMED:25900026).
Moreover, the severity of PTSD symptoms can also be influenced by factors such as tonic immobility, which mediates the relations between perceived inescapability, peritraumatic fear, and PTSD symptom severity among sexual assault survivors (PUBMED:18720396). The severity of negative and traumatic events, as measured by various methods, did not correlate with PTSD symptom severity, indicating that the role of trauma severity in PTSD may be complex and not as straightforward as previously thought (PUBMED:25309831).
In summary, predispositions appear to influence PTSD risk regardless of the severity of the trauma, and other factors such as co-occurring depression and individual responses during the trauma, like tonic immobility, may also play significant roles in the severity of PTSD symptoms. |
Instruction: Do longer postpartum stays reduce newborn readmissions?
Abstracts:
abstract_id: PUBMED:11130811
Do longer postpartum stays reduce newborn readmissions? Analysis using instrumental variables. Objective: To determine the effect of postpartum length of stay on newborn readmission.
Data Sources: Secondary data set consisting of newborns born in Washington state in 1989 and 1990. The data set contains information about the characteristics of the newborn and its parents, physician, hospital, and insurance status.
Study Design: Analysis of the effect of length of stay on the probability of newborn readmission using hour of birth and method of delivery as instrumental variables (IVs) to account for unobserved heterogeneity. Of approximately 150,000 newborns born in Washington in 1989 and 1990, 108,551 (72 percent) were included in our analysis.
Principal Findings: Newborns with different lengths of stay differ in unmeasured characteristics, biasing estimates based on standard statistical methods. The results of our analyses show that a 12-hour increase in length of stay is associated with a reduction in the newborn readmission rate of 0.6 percentage points. This is twice as large as the estimate obtained using standard statistical (non-IV) methods.
Conclusion: An increase in the length of postpartum hospital stays may result in a decline in newborn readmissions. The magnitude of this decline in readmissions may be larger than previously thought.
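The study estimates the effect of length of stay on readmission with hour of birth and delivery method as instruments, because observed stay lengths are confounded by unmeasured newborn characteristics. Below is a minimal two-stage least squares sketch on simulated data; in practice a dedicated IV estimator (which also corrects the second-stage standard errors) would be used, and the variable names and data-generating assumptions are invented for illustration.
```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000

# Instruments: hour of birth (affects discharge timing) and cesarean delivery
hour     = rng.integers(0, 24, n)
cesarean = rng.integers(0, 2, n)
frailty  = rng.normal(0, 1, n)              # unobserved newborn health

df = pd.DataFrame({
    "hour": hour,
    "cesarean": cesarean,
    # Length of stay (hours) depends on the instruments and on unobserved frailty
    "los": 36 + 0.5 * hour + 24 * cesarean + 6 * frailty + rng.normal(0, 4, n),
})
# Readmission depends on frailty and (negatively) on length of stay
p = 1 / (1 + np.exp(-(-3 + 0.8 * frailty - 0.01 * df["los"])))
df["readmit"] = rng.binomial(1, p)

# Stage 1: predict length of stay from the instruments only
df["los_hat"] = smf.ols("los ~ hour + cesarean", df).fit().fittedvalues
# Stage 2: linear probability model using the predicted stay
# (naive OLS of readmit on the observed los would be biased by the omitted frailty)
print(smf.ols("readmit ~ los_hat", df).fit().params)
```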
abstract_id: PUBMED:20032758
Postpartum follow-up: can psychosocial support reduce newborn readmissions? Purpose: To determine whether there was a relationship between postpartum psychosocial support from healthcare providers and the rate of normal newborn readmissions (NNRs), and whether there was a cost benefit to justify an intervention.
Study Design And Methods: Data were abstracted for all normal newborn births from 1999 to 2006 (N = 14,786) at a community hospital in southern California at three different time periods: (1) at baseline prior to any intervention (1999-2000), (2) the 4 years during the comprehensive psychosocial support intervention (2001-2004), and (3) the 2 years during a limited psychosocial support intervention (2004-2006). A cost-benefit analysis was performed to analyze whether the financial benefits from the intervention matched or exceeded the costs for NNRs.
Results: There was a significantly lower readmission rate of 1.0% (p < .001) during the comprehensive intervention time period compared to baseline (2.3%) or to the limited intervention time period (2.3%). Although there was no significant difference in the average cost per newborn readmitted across the three study time periods, during the comprehensive intervention time period the average costs of a NNR were significantly lower ($4,180, p = .041) for the intervention group compared to those who received no intervention ($5,338). There was a cost benefit of $513,540 due to fewer readmissions during the comprehensive time period, but it did not exceed the cost of the intervention.
Clinical Implications: Providing comprehensive follow-up for new mothers in the postpartum period can reduce NNRs, thus lowering the average newborn readmission costs for those who receive psychosocial support. Followup for new mothers should be an accepted norm rather than the exception in postpartum care, but NNRs should not be considered the sole outcome in such programs.
abstract_id: PUBMED:37588413
Association of Insurance Type with Inpatient Surgical 30-day Readmissions, Emergency Department Visits/Observation Stays, and Costs. Objective: To assess the association of Private, Medicare, and Medicaid/Uninsured insurance type with 30-day Emergency Department visits/Observation Stays (EDOS), readmissions, and costs in a safety-net hospital (SNH) serving diverse socioeconomic status patients.
Summary Background Data: Medicare's Hospital Readmission Reduction Program (HRRP) disproportionately penalizes SNHs.
Methods: This retrospective cohort study used inpatient National Surgical Quality Improvement Program (2013-2019) data merged with cost data. Frailty, expanded Operative Stress Score, case status, and insurance type were used to predict odds of EDOS and readmissions, as well as index hospitalization costs.
Results: The cohort had 1,477 Private; 1,164 Medicare; and 3,488 Medicaid/Uninsured cases with a mean patient age of 52.1 years [SD=14.7]; 46.8% of the cases were performed on male patients. Medicaid/Uninsured (aOR=2.69, CI=2.38-3.05, P<.001) and Medicare (aOR=1.32, CI=1.11-1.56, P=.001) had increased odds of urgent/emergent surgeries and complications versus Private patients. Despite having similar frailty distributions, Medicaid/Uninsured compared to Private patients had higher odds of EDOS (aOR=1.71, CI=1.39-2.11, P<.001), and readmissions (aOR=1.35, CI=1.11-1.65, P=.004), after adjusting for frailty, OSS, and case status, while Medicare patients had similar odds of EDOS and readmissions versus Private. Hospitalization variable cost %change was increased for Medicare (12.5%) and Medicaid/Uninsured (5.9%), but Medicaid/Uninsured was similar to Private after adjusting for urgent/emergent cases.
Conclusions: Increased rates and odds of urgent/emergent cases in Medicaid/Uninsured patients drive increased odds of complications and index hospitalization costs versus Private. SNHs care for higher cost populations while receiving lower reimbursements and are further penalized by the unintended consequences of HRRP. Increasing access to care, especially for Medicaid/Uninsured patients, could reduce urgent/emergent surgeries resulting in fewer complications, EDOS/readmissions, and costs.
abstract_id: PUBMED:29930970
The Effect of the Hospital Readmission Reduction Program on the Duration of Observation Stays: Using Regression Discontinuity to Estimate Causal Effects. Research Objective: Determine whether hospitals are increasing the duration of observation stays following index admission for heart failure to avoid potential payment penalties from the Hospital Readmission Reduction Program.
Study Design: The Hospital Readmission Reduction Program applies a 30-day cutoff after which readmissions are no longer penalized. Given this seemingly arbitrary cutoff, we use regression discontinuity design, a quasi-experimental research design that can be used to make causal inferences.
Population Studied: The High Value Healthcare Collaborative includes member healthcare systems covering 57% of the nation's hospital referral regions. We used Medicare claims data including all patients residing within these regions. The study included patients with index admissions for heart failure from January 1, 2012 to June 30, 2015 and a subsequent observation stay within 60 days. We excluded hospitals with fewer than 25 heart failure readmissions in a year or fewer than 5 observation stays in a year and patients with subsequent observation stays at a different hospital.
Principal Findings: Overall, there was no discontinuity at the 30-day cutoff in the duration of observation stays, the percent of observation stays over 12 hours, or the percent of observation stays over 24 hours. In the sub-analysis, the discontinuity was significant for non-penalized hospitals.
Conclusion: The findings reveal evidence that the HRRP has resulted in an increase in the duration of observation stays for some non-penalized hospitals.
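A regression discontinuity design at the 30-day readmission cutoff can be approximated by a local linear model fitted on either side of the boundary within a bandwidth. The sketch below illustrates that estimator on simulated data; it is not the authors' specification, and the bandwidth, outcome, and variable names are assumptions.
```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 4000

# Days from index heart-failure discharge to the subsequent observation stay
days = rng.integers(1, 61, n)
df = pd.DataFrame({"days": days})
df["past_30"] = (df["days"] >= 30).astype(int)     # no readmission penalty beyond day 30
# Simulated observation-stay duration (hours); no true jump at the cutoff here
df["obs_hours"] = 14 + 0.05 * df["days"] + rng.normal(0, 4, n)

bandwidth = 15
local = df[(df["days"] >= 30 - bandwidth) & (df["days"] < 30 + bandwidth)].copy()
local["centered"] = local["days"] - 30

# Local linear RD: separate slopes on each side; the past_30 coefficient
# estimates the jump in duration at the 30-day threshold.
fit = smf.ols("obs_hours ~ past_30 * centered", local).fit()
print(fit.params["past_30"], fit.pvalues["past_30"])
```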
abstract_id: PUBMED:36755372
Medicare's hospital readmissions reduction program and the rise in observation stays. Objective: To evaluate whether Medicare's Hospital Readmissions Reduction Program (HRRP) is associated with increased observation stay use.
Data Sources And Study Setting: A nationally representative sample of fee-for-service Medicare claims, January 2009-September 2016.
Study Design: Using a difference-in-difference (DID) design, we modeled changes in observation stays as a proportion of total hospitalizations, separately comparing the initial (acute myocardial infarction, pneumonia, heart failure) and subsequent (chronic obstructive pulmonary disease) target conditions with a control group of nontarget conditions. Each model used 3 time periods: baseline (15 months before program announcement), an intervening period between announcement and implementation, and a 2-year post-implementation period, with specific dates defined by HRRP policies.
Data Collection/extraction Methods: We derived a 20% random sample of all hospitalizations for beneficiaries continuously enrolled for 12 months before hospitalization (N = 7,162,189).
Principal Findings: Observation stays increased similarly for the initial HRRP target and nontarget conditions in the intervening period (0.01% points per month [95% CI -0.01, 0.3]). Post-implementation, observation stays increased significantly more for target versus nontarget conditions, but the difference is quite small (0.02% points per month [95% CI 0.002, 0.04]). Results for the COPD analysis were statistically insignificant in both policy periods.
Conclusions: The increase in observation stays is likely due to other factors, including audit activity and clinical advances.
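The difference-in-difference comparison described above contrasts the change in the observation-stay share for HRRP target conditions against the change for non-target conditions across policy periods. Here is a stripped-down, hypothetical two-period version of that model on simulated hospital-month data; the actual analysis used monthly trends and three policy periods, and the column names and effect sizes here are invented.
```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
rows = []
for target in (0, 1):                 # 1 = HRRP target condition, 0 = control condition
    for post in (0, 1):               # 1 = after HRRP implementation
        for _ in range(500):          # simulated hospital-month observations
            share = 0.08 + 0.01 * post + 0.005 * target \
                    + 0.002 * target * post + rng.normal(0, 0.01)
            rows.append({"target": target, "post": post, "obs_share": share})
df = pd.DataFrame(rows)

# The target:post coefficient is the difference-in-difference estimate:
# the extra post-implementation change in observation-stay share for target conditions.
fit = smf.ols("obs_share ~ target * post", df).fit()
print(fit.params["target:post"], fit.conf_int().loc["target:post"].values)
```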
abstract_id: PUBMED:27829570
A multi-state analysis of postpartum readmissions in the United States. Background: Readmission rates are used as a quality metric in medical and surgical specialties; however, little is known about obstetrics readmissions.
Objective: Our goals for this study were to describe the trends in postpartum readmissions over time; to characterize the common indications and associated diagnoses for readmissions; and to determine maternal, delivery, and hospital characteristics that may be associated with readmission.
Study Design: Postpartum readmissions occurring within the first 6 weeks after delivery in California, Florida, and New York were identified between 2004 and 2011 in State Inpatient Databases. Of the 5,949,739 eligible deliveries identified, 114,748 women were readmitted over the 8-year period. We calculated the rates of readmissions and their indications by state and over time. The characteristics of the readmission stay, including day readmitted, length of readmission, and charge for readmission, were compared among the diagnoses. Odds ratios were calculated using a multivariate logistic regression to determine the predictors of readmission.
Results: The readmission rate increased from 1.72% in 2004 to 2.16% in 2011. Readmitted patients were more likely to be publicly insured (54.3% vs 42.0%, P < .001), to be black (18.7% vs 13.5%, P < .001), to have comorbidities such as hypertension (15.3% vs 2.4%, P < 0.001) and diabetes (13.1% vs 6.8%, P < .001), and to have had a cesarean delivery (37.2% vs 32.9%, P < .001). The most common indications for readmission were infection (15.5%), hypertension (9.3%), and psychiatric illness (7.7%). Patients were readmitted, on average, 7 days after discharge, but readmission day varied by diagnosis: day 3 for hypertension, day 5 for infection, and day 9 for psychiatric disease. Maternal comorbidities were the strongest predictors of postpartum readmissions: psychiatric disease, substance use, seizure disorder, hypertension, and tobacco use.
Conclusion: Postpartum readmission rates have risen over the last 8 years. Understanding the risk factors, etiologies, and cause-specific timing for postpartum readmissions may aid in the development of new quality metrics in obstetrics and targeted strategies to curb the rising rate of postpartum readmissions in the United States.
abstract_id: PUBMED:33689181
Postpartum psychiatric readmissions: A nationwide study in women with and without epilepsy. Objective: To assess whether epilepsy is associated with increased odds of 30-day readmission due to psychiatric illness during the postpartum period.
Methods: The 2014 Nationwide Readmissions Database and the International Classification of Disease, Ninth Revision, Clinical Modification codes were used to identify postpartum women up to 50 years old in the United States, including the subgroup with epilepsy. The primary outcome was 30-day readmission and was categorized as (1) readmission due to psychiatric illness, (2) readmission due to all other causes, or (3) no readmission. Secondary outcome was diagnosis at readmission. The association of the primary outcome and presence of epilepsy was examined using multinomial logistic regression.
Results: Of 1 558 875 women with admissions for delivery identified, 6745 (.45%) had epilepsy. Thirteen of every 10 000 women had 30-day psychiatric readmissions in the epilepsy group compared to one of every 10 000 in the no-epilepsy group (p < .0001). Of every 10 000 women with epilepsy, 256 had 30-day readmissions due to other causes compared to 115 of every 10 000 women in the no-epilepsy group (p < .0001). The odds ratio for readmission due to psychiatric illness was 10.13 (95% confidence interval = 5.48-18.72) in those with epilepsy compared to those without. Top psychiatric causes for 30-day readmissions among women with epilepsy were mood disorders, schizophrenia and other psychotic disorders, and substance-related disorders.
Significance: This large-scale study demonstrated that postpartum women with epilepsy have higher odds of readmission due to a psychiatric illness compared to women without epilepsy. Postpartum treatment strategies and interventions to prevent psychiatric readmissions are necessary in this vulnerable population.
abstract_id: PUBMED:31242788
Risk for postpartum readmissions and associated complications based on maternal age. Purpose: To evaluate risk for postpartum readmissions and associated severe morbidity by maternal age.
Materials And Methods: This retrospective cohort study used the Nationwide Readmissions Database to analyze 60-day all-cause postpartum readmission risk from 2010 to 2014. Risk for severe maternal morbidity (SMM) during readmission was ascertained using criteria from the Centers for Disease Control and Prevention. The primary exposure of interest was maternal age. Outcomes included time to readmission, risk of readmission, and risk for SMM during readmission. Multivariable log linear analyses adjusting for patient, obstetric, and hospital factors were conducted to assess readmission and SMM risk with adjusted risk ratios (aRRs) with 95% confidence intervals (CIs) as measures of effect.
Results: Between 2010 and 2014, we identified 15.7 million deliveries, 15% of which were to women aged 35 or older. The 60-day all-cause readmission rate was 1.7%. Of these, 13% were complicated by SMM. Age-stratification revealed that women 35 and older were at increased risk for readmission and increased risk for SMM. The majority of readmissions occurred within the first 20 days regardless of age; although, women 35 and older were more likely to be admitted within the first 10 days of discharge. Patients ages 35-39, 40-44, and >44 years had 9% (95% CI 7-10%), 37% (95% CI 34-39%), and 66% (95% CI 55-79%) significantly higher rates of postpartum readmission when compared to women age 25-29. Women 35-39, 40-44, and >44 years of age had a 15% (95% CI 10-21%), 26% (95% CI 18-34%), and 56% (95% CI 25-94%) higher risk of a readmission with SMM than women 25-29.
Conclusions: AMA women are at higher risk for both postpartum readmission and severe morbidity during readmission. Women older than 35 years represent the group most likely to experience complications requiring readmission, with the highest risk age 40 and older.
abstract_id: PUBMED:33322966
Postpartum cardiac readmissions among women without a cardiac diagnosis at delivery. Objective: To determine risk for cardiac readmissions among women without cardiac diagnoses present at delivery up to 9 months after delivery hospitalization discharge.
Methods: Delivery hospitalizations without cardiac diagnoses were identified from the 2010-2014 Nationwide Readmissions Database and linked with subsequent cardiac hospitalizations over the following 9 months. The temporality of new-onset cardiac hospitalizations was calculated for each 30-day interval from delivery discharge up to 9 months postpartum. Multivariable log-linear regression models were fit to identify risk factors for cardiac readmissions adjusting for patient, medical, and obstetrical factors with adjusted risk ratios as measures of effect (aRR).
Results: Among 4.4 million delivery hospitalizations without a cardiac diagnosis, readmission for a cardiac condition within 9 months occurred in 26.8 per 10,000 women. Almost half of readmissions (45.9%) occurred within the first 30 days after delivery discharge with subsequent hospitalizations broadly distributed over the remaining 8 months. Factors such as hypertensive diseases of pregnancy (aRR 2.19, 95% CI 2.09, 2.30), severe maternal morbidity at delivery (aRR 2.06, 95% CI 1.79, 2.37), chronic hypertension (aRR 2.52, 95% CI 2.31, 2.74), lupus (aRR 4.62, 95% CI 3.82, 5.60), and venous thromboembolism during delivery (aRR 3.72, 95% CI 2.75, 5.02) were all associated with increased risk for 9-month postpartum cardiac admissions as were Medicaid (aRR 1.57, 95% CI 1.51, 1.64) and Medicare insurance (aRR 3.06, 95% CI 2.70, 3.46) compared to commercial insurance and maternal ages 35-39 and 40-54 years (aRR 1.24, 95% CI 1.17, 1.32, aRR 1.74, 95% CI 1.60, 1.90, respectively) compared to maternal age 25-29 years.
Conclusions: Among women without a cardiac diagnosis at delivery, multiple medical factors and obstetrical complications are associated with development of new cardiac disease requiring readmission in the postpartum period. Given that pregnancy complications and comorbidities may be associated with intermediate-term health outcomes, these findings support the importance of continued health care access after six weeks postpartum.
abstract_id: PUBMED:27695701
What Clinical Interventions Have Been Implemented to Prevent or Reduce Postpartum Hypertension Readmissions? A Clin-IQ. A literature review was conducted to determine what clinical interventions have been studied and implemented to prevent and/or reduce postpartum hypertension readmissions. Appropriate verbal and printed educational materials should be given to the patient prior to discharge with use of the "teach back" method. Patients and health care providers within the multidisciplinary team should be educated on the warning signs and symptoms of worsening hypertensive disease and when to appropriately involve the obstetrician. The use of text messaging may be useful in preventing hospital readmissions by increasing patient follow-up and compliance and appropriately managing patients in the postpartum period. Treating postpartum patients with furosemide may decrease blood pressure and prevent postpartum hypertension and the need for antihypertensive therapy.
Answer: Yes, longer postpartum stays have been associated with a reduction in newborn readmissions. A study analyzing data from newborns born in Washington state in 1989 and 1990 found that a 12-hour increase in the length of postpartum hospital stay is associated with a reduction in the newborn readmission rate by 0.6 percentage points (PUBMED:11130811). This effect was determined using instrumental variables to account for unobserved heterogeneity, suggesting that the magnitude of the decline in readmissions may be larger than previously estimated using standard statistical methods. |
Instruction: Does mesorectal preservation protect the ileoanal anastomosis after restorative proctocolectomy?
Abstracts:
abstract_id: PUBMED:18766412
Does mesorectal preservation protect the ileoanal anastomosis after restorative proctocolectomy? Background And Aims: The technique of rectal dissection during restorative proctocolectomy might influence the rate of septic complications. The aim of this study was to analyze the morbidity of restorative proctocolectomy in a consecutive series of patients who had rectal dissection with complete preservation of the mesorectum.
Patients And Methods: One hundred thirty-one patients who had restorative proctocolectomy for chronic inflammatory bowel disease with handsewn ileopouch-anal anastomosis (IPAA) and preservation of the mesorectal tissue were analyzed by chart reviews and a follow-up investigation at a median of 85 (14-169) months after surgery.
Results: Only one of 131 patients had a leak from the IPAA, and one patient had a pelvic abscess without evidence of leakage, resulting in 1.5% local septic complications. All other complications including the pouch failure rate (7.6%) and the incidence of both fistula (6.4%) and pouchitis (47.9%) were comparable to the data from the literature.
Conclusion: The low incidence of local septic complications in this series might at least in part result from the preservation of the mesorectum. As most studies do not specify the technique of rectal dissection, this theory cannot be verified by an analysis of the literature and needs further approval by a randomized trial.
abstract_id: PUBMED:10722040
Restorative proctocolectomy with J-pouch ileoanal anastomosis. Restorative proctocolectomy with ileoanal anastomosis, complemented by a pouch formed with the last foot of terminal ileum, is the procedure of choice for patients in need of surgical treatment for ulcerative colitis and familial polyposis. The procedure has undergone many technical modifications that have ensured a very high degree of continence and an acceptable number of daily bowel movements. Herein we describe the operative technique we use in the majority of our patients, a restorative proctocolectomy with hand-sewn J-pouch ileoanal anastomosis with protecting ileostomy. We also comment on the immediate postoperative care and on the long-term functional results.
abstract_id: PUBMED:17593481
Adenocarcinoma arising below an ileoanal anastomosis after restorative proctocolectomy for ulcerative colitis: report of a case. We report a case of adenocarcinoma developing in remnant rectal mucosa below a hand-sewn ileal pouch-anal anastomosis (IPAA) after restorative proctocolectomy for ulcerative colitis (UC). To our knowledge, this is the first such case to be reported from Japan. A 60-year-old man with a 13-year history of UC underwent proctocolectomy with a hand-sewn IPAA and mucosectomy for anal stenosis and serious tenesmic symptoms. About 7 years later, a follow-up endoscopy showed a flat elevated malignant lesion, 2 cm in diameter, below the ileoanal anastomosis. He was treated by abdominoperineal resection of the pouch and anus with total mesorectal excision. Histopathological examination of the resected specimen confirmed the presence of a well-differentiated adenocarcinoma but there were no metastatic lymph nodes. He recovered uneventfully and remains well without evidence of recurrent disease 2 years and 3 months after his last operation.
abstract_id: PUBMED:9931824
Long-term results after restorative proctocolectomy and ileoanal pouch in children with familial adenomatous polyps Restorative proctocolectomy and ileal pouch-anal anastomosis (IPAA) is considered the therapy of choice for the prophylactic treatment of FAP in adults, while straight ileoanal endorectal pull-throughs were often favored in children. However, our experience with five children undergoing an ileoanal J-pouch procedure under the age of 15 years (7-15) due to early onset of a severe symptomatic FAP phenotype suggests results which are superior to those after direct ileoanal anastomosis. Even after a primary straight ileoanal pull-through with local complications and a high defecation rate, secondary IPAA should be considered.
abstract_id: PUBMED:28721469
How does the ileoanal pouch keep its promises? Functioning of the ileoanal pouch after restorative proctocolectomy. Restorative proctocolectomy with an ileoanal pouch anastomosis (IAPA) is the surgical therapy of choice for patients with refractory ulcerative colitis and/or associated (pre)neoplastic lesions. It is predominantly performed laparoscopically. Reconstruction with a J-pouch is the most frequently applied variant due to the ideal combination of technical simplicity and good long-term results. In the present review, potential postoperative pouch complications, their risk factors, diagnostics and surgical management, as well as mid-term and long-term quality of life after pouch construction are differentially presented based on the current literature.
abstract_id: PUBMED:31735363
Experience, complications and prognostic factors of the ileoanal pouch in ulcerative colitis: An observational study. Introduction: Ileoanal pouch following restorative proctocolectomy is the treatment for ulcerative colitis after failed medical treatment. Our main aim was to evaluate early and late morbidity associated with restorative proctocolectomy. The secondary aim was to assess risk factors for pouch failure.
Methods: A retrospective, observational, single-center study was performed. Patients who had undergone restorative proctocolectomy for a preoperative diagnosis of ulcerative colitis from 1983-2015 were included. Early (<30 days) and late (>30 days) adverse events were analyzed. Pouch failure was defined as the need for pouch excision or when ileostomy closure could not be performed. Univariate and multivariate analyses were performed to assess pouch failure risk factors.
Results: The study included 139 patients. One patient subsequently died in the early postoperative period. Mean follow-up was 23 years. Manual anastomoses were performed in 54 patients (39%). Early adverse events were found in 44 patients (32%), 15 of which (11%) had anastomotic fistula. Late adverse events were found in 90 patients (65%), and pouch-related fistulae (29%) were the most commonly found in this group. Pouch failure was identified in 42 patients (32%). In the multivariate analysis, age >50 years (p<0.01; HR: 5.55), handsewn anastomosis (p<0.01; HR: 3.78), pouch-vaginal (p=0.02; HR: 2.86), pelvic (p<0.01; HR: 5.17) and cutaneous (p=0.01; HR: 3.01) fistulae were the main pouch failure risk factors.
Conclusion: Restorative proctocolectomy for a preoperative diagnosis of ulcerative colitis has high morbidity rates. Long-term outcomes could be improved if risk factors for failure are avoided.
abstract_id: PUBMED:7774465
Harry E. Bacon Oration. Comparison of the functional results of restorative proctocolectomy for ulcerative colitis between the J and W configuration ileal pouches with sutured ileoanal anastomosis. Purpose: This study was designed to compare function of patients who had undergone reconstruction following proctocolectomy for ulcerative colitis using the J or W configuration ileoanal pouch.
Methods: Of 126 patients who underwent restorative proctocolectomy between January 1981 and March 1993, 101 had surgery for ulcerative colitis. Eighty-seven of these patients were available for review by personal or postal interview. All operative procedures were performed by one surgeon. The group comprised 35 W-pouches and 52 J-pouches.
Results: More patients with a J-pouch had a stool frequency of greater than 8 per 24 hours (P = 0.044), and they were also more likely to use a perineal pad (P = 0.019). No difference in the rates of nocturnal stool frequency, fecal incontinence, or use of constipating agents between the two pouch designs was found. Significantly more patients with a J-pouch have had episodes of pouchitis (P = 0.001). Of the total patient group 91.9 percent felt that restorative proctocolectomy had improved their quality of life.
Conclusion: Minor differences in the function of the W configuration ileoanal pouch and the J configuration ileoanal pouch are demonstrated in this study.
abstract_id: PUBMED:14530685
Anal transitional zone cancer after restorative proctocolectomy and ileoanal anastomosis in familial adenomatous polyposis: report of two cases. Purpose: Restorative proctocolectomy with ileal pouch-anal anastomosis is accepted as the surgical treatment of choice for many patients with familial adenomatous polyposis. The risk of cancer developing in the ileal pouch after this surgery is unknown. Cancer may arise from the ileal pouch after restorative proctocolectomy, but that arising from the anal transitional zone has not been documented in familial adenomatous polyposis. We report two cases of this cancer from the anal transitional zone in patients with familial adenomatous polyposis, with a review of the literature.
Methods: All patients with familial adenomatous polyposis treated with restorative proctocolectomy and ileal pouch-anal anastomosis in The Cleveland Clinic were included in the study. Patients whose surveillance biopsy of the anal transitional zone revealed invasive adenocarcinoma were studied.
Results: Among a total of 146 patients with familial adenomatous polyposis who underwent restorative proctocolectomy and ileal pouch-anal anastomosis from 1983 to 2001 in our institution, none developed cancer of the anal transitional zone at up to 18 years of follow-up. However, there were two patients, both of whom underwent surgery elsewhere but who were followed up here, who developed invasive adenocarcinoma of the anal transitional zone. In one of them, cancer was diagnosed three years after a double-stapled ileal pouch-anal anastomosis, whereas in the other, cancer occurred eight years after a straight ileoanal anastomosis with mucosectomy.
Conclusions: Cancer may develop in the anal transitional zone after restorative proctocolectomy with ileal pouch-anal anastomosis for familial adenomatous polyposis. Long-term surveillance of the anal transitional zone needs to be emphasized.
abstract_id: PUBMED:1422751
Randomized trial of loop ileostomy in restorative proctocolectomy. A randomized controlled trial was performed to assess the role of loop ileostomy in totally stapled restorative proctocolectomy. Entry criteria included all patients who were not on corticosteroids in whom on-table testing revealed a watertight pouch with intact ileoanal anastomosis. Of 59 patients undergoing restorative proctocolectomy over 36 months, 45 were eligible and were randomized to loop ileostomy (n = 23) or no ileostomy (n = 22). The age and diagnosis of the groups were similar. There were no deaths; two ileoanal anastomotic leaks occurred, one in each group. Ileoanal stenosis occurred in five patients with and one without an ileostomy. The incidences of wound and pelvic sepsis, bowel obstruction and pouchitis were similar. Twelve patients (52 per cent) developed ileostomy-related complications. The median total hospital stay was 23 (range 13-75) days with ileostomy and 13 (range 7-119) days without (P < 0.001). This study indicates that there is a low risk of pelvic sepsis which is not increased by avoiding a protective ileostomy. Loop ileostomy was associated with a high incidence of complications.
abstract_id: PUBMED:18791770
Functional outcome after restorative proctocolectomy in pigs: comparing a novel transverse ileal pouch to the J-pouch and straight ileoanal anastomosis. Background: Restorative proctocolectomy followed by an ileoanal J-pouch procedure is the therapy of choice for patients with familial adenomatous polyposis and ulcerative colitis. After low anterior rectal resection, the authors have reported on a novel, less complex pouch configuration, a transverse coloplasty pouch. The aim of the present work was to apply this new design to the ileal pouch construction, to evaluate feasibility, and to measure functional results in comparison with the J-pouch and the straight ileoanal anastomosis using the pig as an animal model.
Methods: Twenty-three pigs underwent restorative proctocolectomy followed by reconstruction with straight ileoanal anastomosis (IAA; n = 5), J-pouch (n = 7), and a transverse ileal pouch (TIP; n = 11). Pigs were followed for 6 days postoperatively. Peristaltic function was assessed by manometry proximal to the pouch, in the reservoir, and at the level of the ileoanal anastomosis. Functional outcome was monitored by semiquantitative assessment of the general condition of the animals, postoperative feeding habits, and stool frequency and consistency. A Fourier analysis was performed in order to compare peristalsis in the ileal reservoirs. The reservoir volume was measured in situ by triple contrast computed tomography scan with 3D reconstruction.
Results: Seventeen animals survived for 1 week. There was no difference in the general condition or the feeding habits of the groups. A significant number of pigs with the TIP pouch (7/10) had semisolid or formed stools as opposed to liquid stools after J-pouch (6/6) and IAA (4/5; p = 0.01). TIP animals had a lower stool frequency (3.2 +/- 1.14 per day) on day 6 after the operation than pigs with J-pouch, 5.33 +/- 1.03, and IAA, 4.6 +/- 1.82 (p = 0.0036). The in situ volume of the pouches did not differ significantly. The Fourier analysis demonstrated a disruption of peristalsis by the J-pouch and the TIP reconstruction but not after IAA.
Conclusion: The function of ileoanal reservoirs after proctocolectomy may result from the disruption of properistaltic waves after pouch formation. The mechanism of peristalsis disruption is independent of the in situ volume of the pouch.
Answer: The study reported in abstract PUBMED:18766412 suggests that mesorectal preservation might protect the ileoanal anastomosis after restorative proctocolectomy. The authors analyzed a series of 131 patients who underwent restorative proctocolectomy with handsewn ileopouch-anal anastomosis (IPAA) and preservation of the mesorectal tissue. They found a low incidence of local septic complications, with only one patient experiencing a leak from the IPAA and another patient having a pelvic abscess without evidence of leakage, resulting in a 1.5% rate of local septic complications. The authors concluded that the low incidence of local septic complications might at least in part be due to the preservation of the mesorectum. However, they also noted that most studies do not specify the technique of rectal dissection, and therefore, their theory requires further validation through a randomized trial.
Instruction: Are weight-related attitudes and behaviours associated with the accuracy of BMI derived from self-reported weight and height among 13-year-olds?
Abstracts:
abstract_id: PUBMED:25525037
Are weight-related attitudes and behaviours associated with the accuracy of BMI derived from self-reported weight and height among 13-year-olds? Aims: The aim was to explore the association between weight-related attitudes and behaviours and the accuracy of body mass index (BMI) derived from self-reported weight and height.
Methods: A total of 828 adolescents from the Health In Adolescents study were included. Self-reported and objective weight and height data were collected, and BMI was computed. Information about weight-related attitudes and behaviours was obtained. The association between weight-related attitudes and behaviours and the difference between BMI computed from self-reported and objective measures was assessed using generalized linear mixed model analyses.
Results: BMI was under-reported by overweight girls (p<0.001) and boys (p<0.001) compared to their normal weight counterparts. Underweight girls on the other hand over-reported their BMI (p=0.002). Girls who reported trying to lose weight under-reported their BMI compared to girls who had not tried to do anything about their weight (p=0.02). Girls who perceived their weight as being too much under-reported their BMI compared to girls who thought their weight was ok, the association was however borderline significant (p=0.06); this association was also found among boys (p=0.03). Self-weighing and the reported importance of weight for how adolescents perceive themselves were not associated with the accuracy of BMI.
Conclusions: Weight perception and weight control behaviour, among girls only, were related to the accuracy of self-reported BMI; no association was found with self-weighing behaviour or the perceived importance of weight for how adolescents perceive themselves. Knowledge of such factors will allow for a better interpretation and possibly adjustment/correction of results of surveys based on self-reported weight and height data.
abstract_id: PUBMED:30314261
Self-Reported vs. Measured Height, Weight, and BMI in Young Adults. Self-reported height and weight, if accurate, provide a simple and economical method to track changes in body weight over time. Literature suggests adults tend to under-report their own weight and that the gap between self-reported weight and actual weight increases with obesity. This study investigates the extent of discrepancy in self-reported height, weight, and subsequent Body Mass Index (BMI) versus actual measurements in young adults. Physically measured and self-reported height and weight were taken from 1562 students. Male students marginally overestimated height, while females were closer to target. Males, on average, closely self-reported weight. Self-reported anthropometrics remained statistically correlated to actual measures in both sexes. Categorical variables of calculated BMI from both self-reported and actual height and weight resulted in significant agreement for both sexes. Researcher measured BMI (via anthropometric height and weight) and sex were both found to have association with self-reported weight while only sex was related to height difference. Regression examining weight difference and BMI was significant, specifically with a negative slope indicating increased BMI led to increased underestimation of weight in both sexes. This study suggests self-reported anthropometric measurements in young adults can be used to calculate BMI for weight classification purposes. Further investigation is needed to better assess self-reported vs measured height and weight discrepancies across populations.
abstract_id: PUBMED:26060545
Validity of self-reported height and weight in elderly Poles. Background/objectives: In nutritional epidemiology, collecting self-reported respondent height and weight is a simpler procedure of data collection than taking measurements. The aim of this study was to compare self-reported and measured height and weight and to evaluate the possibility of using self-reported estimates in the assessment of nutritional status of elderly Poles aged 65 + years.
Subjects/methods: The research was carried out in elderly Poles aged 65 + years. Respondents were chosen using a quota sampling. The total sample numbered 394 participants and the sub-sample involved 102 participants. Self-reported weight (non-corrected self-reported weight; non-cSrW) and height estimates (non-corrected self-reported height; non-cSrH) were collected. The measurements of weight (measured weight; mW) and height (measured height; mH) were taken. Using multiple regression equations, the corrected self-reported weight (cSrW) and height (cSrH) estimates were calculated.
Results: Non-cSrH was higher than mH in men on average by 2.4 cm and in women on average by 2.3 cm. In comparison to mW, non-cSrW was higher in men on average by 0.7 kg, while in women no significant difference was found (mean difference of 0.4 kg). In comparison to mBMI, non-cSrBMI was lower on average by 0.6 kg/m(2) in men and 0.7 kg/m(2) in women. No differences were observed in overweight and obesity incidence when determined by mBMI (68% and 19%, respectively), non-cSrBMI (62% and 14%, respectively), cSrBMI (70% and 22%, respectively) and pcSrBMI (67% and 18%, respectively).
Conclusions: Since the results showed that the estimated self-reported heights, weights and BMI were accurate, the assessment of overweight and obesity incidence was accurate as well. The use of self-reported height and weight in the nutritional status assessment of elderly Poles on a population level is therefore recommended. On an individual level, the use of regression equations is recommended to correct self-reported height, particularly in women.
abstract_id: PUBMED:24711630
Accuracy of self-reported weight, height and BMI in US firefighters. Background: Obesity is of increasing concern especially among firefighters. Bias in self-reported body weight, height and body mass index (BMI) has received a great deal of attention given its importance in epidemiological field research on obesity.
Aims: To determine the validity of self-reported weight, height and BMI and identify potential sources of bias in a national sample of US firefighters.
Methods: Self-reported and measured weight and height (and BMI derived from them) were assessed in a national sample of 1001 career male firefighters in the USA and errors in self-reported data were determined.
Results: There were 1001 participants. Self-reported weight, height and BMI were significantly correlated with their respective measured counterparts, i.e. measured weight (r = 0.990; P < 0.001), height (r = 0.961; P < 0.001) and BMI (r = 0.976; P < 0.001). The overall mean difference and standard deviation between self-reported and measured weight, height and BMI were 1.3±2.0 kg, 0.94±1.9 cm and 0.09±0.9 kg/m(2), respectively, for male firefighters. BMI-based weight status (P < 0.001) was the most consistent factor associated with bias in self-reported BMI, weight and height, with heavier firefighters more likely to underestimate their weight and overestimate their height, resulting in underestimated BMIs. Therefore, using self-reported BMI would have resulted in overestimating the prevalence of obesity (BMI ≥ 30.0) by 1.8%, but underestimating the prevalence of more serious levels of obesity (Class II and III) by 1.2%.
Conclusions: Self-reported weight and height (and the resulting BMI) were highly correlated with measured values. A primary and consistent source of error in self-reported weight, height and BMI based on those indices was BMI-based weight status.
abstract_id: PUBMED:23958434
The correlation between self-reported and measured height, weight, and BMI in reproductive age women. This prospective, cross-sectional study of 60 women compares self-reported height, weight, and BMI with measured values. Self-reported BMI (29.0±8.37 kg/m(2)) was slightly lower than measured BMI (29.1±8.38 kg/m(2)) (p=0.4). Eighty percent of participants reported a BMI in the same category in which their BMI was measured. Pearson's correlation coefficient for height (0.96, p<0.001), weight (0.99, p<0.001), and BMI (0.99, p<0.001) were high. Reproductive age women accurately reported their height and weight.
abstract_id: PUBMED:27524941
Accuracy of self-reported height, weight, and waist circumference in a general adult Chinese population. Background: Self-reported height, weight, and waist circumference (WC) are widely used to estimate the prevalence of obesity, which has been increasing rapidly in China, but there is limited evidence for the accuracy of self-reported data and the determinants of self-report bias among the general adult Chinese population.
Methods: Using a multi-stage cluster sampling method, 8399 residents aged 18 or above were interviewed in the Jiangsu Province of China. Information on self-reported height, weight, and WC, together with information on demographic factors and lifestyle behaviors, were collected through structured face-to-face interviews. Anthropometrics were measured by trained staff according to a standard protocol.
Results: Self-reported height was overreported by a mean of 1.1 cm (95 % confidence interval [CI]: 1.0 to 1.2). Self-reported weight, body mass index (BMI), and WC were underreported by -0.1 kg (95 % CI: -0.2 to 0.0), -0.4 kg/m(2) (95 % CI: -0.5 to -0.3) and -1.5 cm (95 % CI: -1.7 to -1.3) respectively. Sex, age group, location, education, weight status, fruit/vegetable intake, and smoking significantly affected the extent of self-report bias. According to the self-reported data, 25.5 % of obese people were misclassified into lower BMI categories and 8.7 % of people with elevated WC were misclassified as normal. Besides the accuracy, the distribution of BMI and WC and their cut-off point standards for obesity of a population affected the proportion of obesity misclassification.
Conclusion: Amongst a general population of Chinese adults, there was a rather high proportion of obesity misclassification using self-reported weight, height, and WC data. Self-reported anthropometrics are biased and misleading. Objective measurements are recommended.
abstract_id: PUBMED:29379548
The validity of self-reported vs. measured body weight and height and the effect of self-perception. Introduction: The objective was to assess the validity of self-reported body weight and height and the possible influence of self-perception of body mass index (BMI) status on the actual BMI during the adolescent period.
Material And Methods: This cross-sectional study was conducted on 3918 high school students. Accurate BMI perception occurred when the student's self-perception of their BMI status did not differ from their actual BMI based on measured height and weight. Agreement between the measured and self-reported body height and weight and BMI values was determined using the Bland-Altman method. To determine the effects of "a good level of agreement", hierarchical logistic regression models were used.
Results: Among male students who reported their BMI in the normal region, 2.8% were measured as overweight while 0.6% of them were measured as obese. For females in the same group, these percentages were 1.3% and 0.4% respectively. Among male students who perceived their BMI in the normal region, 8.5% were measured as overweight while 0.4% of them were measured as obese. For females these percentages were 25.6% and 1.8% respectively. According to logistic regression analysis, residence and accurate BMI perception were significantly associated with "good agreement" (p ≤ 0.001).
Conclusions: The results of this study demonstrated that, in determining obesity and overweight status, inaccurate weight perception is a potential risk for students.
abstract_id: PUBMED:33759632
Misreporting Weight and Height Among Mexican and Puerto Rican Men. Most obesity prevalence data rely on self-report, which typically differs when compared to objectively measured height, weight, and body mass index (BMI). Given that Latino men have high rates of obesity in the United States and demonstrate greater misreporting compared to Caucasian men, examining the factors that contribute to misreporting among Latino men is warranted. This study examined BMI, Latino ethnic background (Mexican or Puerto Rican), and social desirability in relation to misreporting of BMI, as defined as the discrepancy between self-reported and measured height and weight, in Latino men. Participants were 203 adult Mexican and Puerto Rican men, average age 39.41 years, who participated in a larger study. Participants self-reported their weight and height, had their weight and height objectively measured, and completed a measure of social desirability. Measured BMI was the strongest predictor of misreporting BMI, such that the greater the participants' BMI, the greater the discrepancy in BMI (p < .001). Misreporting of BMI did not vary based on ethnic background, and measured BMI did not moderate the relationship between social desirability and misreporting of BMI. When normative error was distinguished from misreporting in post-hoc analyses, results showed that only 34.5% of participants demonstrated misreporting. Findings highlight the importance of identifying normative error when examining misreporting in order to improve the accuracy of self-reported BMI data. Future research on misreporting for Latino men should include weight awareness, acculturation, and length of U.S. residency as these variables may be related to self-reported weight and height.
abstract_id: PUBMED:31837644
Validity of self-reported weight and height for BMI classification: A cross-sectional study among young adults. Objective: The aim of this study was to validate self-reported anthropometric measurements and body mass index (BMI) classifications in a young adult population.
Methods: Both self-reported and directly measured weight and height of 100 young adults 18 to 30 y of age were collected. Participants were measured at one of two university clinics by two research dietitians and within 2 wk self-reported their body weight and height via a questionnaire as part of a larger study. BMI was calculated and categorized according to the World Health Organization's cut-points for underweight, healthy weight, and overweight or obesity. The validity of measured against self-reported weight and height was examined using Pearson's correlation, Bland-Altman plots, and Cohen's kappa statistic.
Results: Strong correlation was observed between measured and self-reported weight (r = 0.99; P < 0.001), height (r = 0.95; P < 0.001), and BMI (r = 0.94; P < 0.001). Bland-Altman plots indicated that the mean difference between self-reported and direct BMI measurements was small in the total sample (0.1 kg/m2). The majority of values fell within the limits of agreement (2 SD), with random scatter plots and no systematic bias detected. The classification of BMI from self-reported and direct measurements showed that 88% were placed in the equivalent weight category, with very good agreement (Cohen's kappa 0.76; 95% confidence interval, 0.63-0.89; P < 0.001).
Conclusions: Good agreement was detected between self-reported and direct anthropometric measurements. The criticism of self-reported anthropometric measurements is unwarranted. The findings provide support for using self-reported height and weight data for research in Australian young adults when direct measurements are not feasible.
abstract_id: PUBMED:20688341
Bias in self-reported height and weight in preadolescents. Objectives: To examine the validity of self-reported height and weight and factors related to misreporting in preadolescents. The accuracy of screening for underweight (thinness), overweight, and obesity using self-reported data was also tested.
Study Design: Self-reported height and weight was administered and then measured in 798 fourth graders age 8 to 11 years. Body mass index (BMI) was calculated from self-reported and measured data and BMI categories were determined using international age- and sex-specific BMI criteria.
Results: Preadolescents overestimated their height by 0.54 ± 5.17 cm and underreported their weight by 0.80 ± 3.09 kg. BMI derived from self-reported data was underestimated by 0.47 ± 1.79 kg/m². Measured BMI category was an important independent predictor of bias in self-reported weight and BMI. Children who were overweight or obese underestimated their weight and BMI to a greater degree compared with normal weight/underweight children. The influence of sociodemographic factors on bias in self-reported values was relatively small. Approximately 15% of children were misclassified in BMI categories when self-reported data were used, especially in the underweight (thinness) and obese category.
Conclusions: Children aged 8 to 11 years were not able to accurately estimate their actual height and weight, leading to erroneous estimates of their weight status.
Answer: Yes, weight-related attitudes and behaviors are associated with the accuracy of BMI derived from self-reported weight and height among 13-year-olds. The study included in abstract PUBMED:25525037 found that BMI was under-reported by overweight girls and boys compared to their normal weight counterparts. Underweight girls, on the other hand, over-reported their BMI. Girls who reported trying to lose weight under-reported their BMI compared to girls who had not tried to do anything about their weight. Additionally, girls who perceived their weight as being too much under-reported their BMI compared to girls who thought their weight was okay, although this association was borderline significant. The same association was found among boys. However, self-weighing and the reported importance of weight for how adolescents perceive themselves were not associated with the accuracy of BMI. These findings suggest that weight perception and weight control behavior among girls only were related to the accuracy of self-reported BMI. Knowledge of such factors will allow for a better interpretation and possibly adjustment/correction of results of surveys based on self-reported weight and height data.
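The comparisons running through these abstracts all rest on the same arithmetic: BMI = weight (kg) / height (m) squared, computed once from self-reported values and once from measured ones, with the difference between the two (and, in several of the studies, Bland-Altman limits of agreement) quantifying reporting bias. The sketch below is illustrative only: the participant values are hypothetical, not data from any cited study, and the WHO adult cut-points are a simplification of the age- and sex-specific criteria the adolescent studies actually use.

```python
# Illustrative sketch only: hypothetical values, not data from any cited study.

def bmi(weight_kg, height_m):
    """Body mass index = weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def who_adult_category(value):
    """Simplified WHO adult cut-points; the adolescent studies use age/sex-specific criteria."""
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal"
    if value < 30.0:
        return "overweight"
    return "obese"

# Each tuple: (self-reported weight kg, self-reported height m, measured weight kg, measured height m)
hypothetical_participants = [
    (62.0, 1.70, 66.0, 1.68),   # under-reports weight and over-reports height
    (55.0, 1.60, 55.5, 1.60),   # reports accurately
    (90.0, 1.78, 95.0, 1.76),   # error large enough to change BMI category
]

differences = []
for sr_w, sr_h, m_w, m_h in hypothetical_participants:
    sr_bmi, m_bmi = bmi(sr_w, sr_h), bmi(m_w, m_h)
    differences.append(sr_bmi - m_bmi)
    print(f"self-reported BMI {sr_bmi:.1f} ({who_adult_category(sr_bmi)}) "
          f"vs measured BMI {m_bmi:.1f} ({who_adult_category(m_bmi)})")

# Bland-Altman style summary: mean bias and limits of agreement (mean +/- 1.96 SD of the differences)
mean_bias = sum(differences) / len(differences)
sd = (sum((d - mean_bias) ** 2 for d in differences) / (len(differences) - 1)) ** 0.5
print(f"mean bias {mean_bias:.2f} kg/m^2, "
      f"limits of agreement {mean_bias - 1.96 * sd:.2f} to {mean_bias + 1.96 * sd:.2f}")
```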
Instruction: The significance of prenatally identified isolated clubfoot: is amniocentesis indicated?
Abstracts:
abstract_id: PUBMED:9539532
The significance of prenatally identified isolated clubfoot: is amniocentesis indicated? Objective: Our purpose was to determine the significance of finding an isolated clubfoot on a prenatal sonogram.
Study Design: All fetuses found to have an isolated congenital clubfoot over a 9-year period were retrospectively identified. Fetuses with associated anomalies were excluded. Obstetric and neonatal outcomes, together with pathologic and cytogenetic results, were tabulated from review of the medical records.
Results: Eighty-seven fetuses were identified from our database as having isolated clubfoot on prenatal ultrasonography, with complete follow-up available for 68 fetuses. Sixty of the 68 fetuses were confirmed as having clubfoot after delivery (false-positive rate = 11.8%). The male/female ratio was 2:1. Four fetuses (5.9%) had abnormal karyotypes: 47,XXY, 47,XXX, trisomy 18, and trisomy 21. Nine fetuses had hip or other limb abnormalities noted after birth. Other anomalies not detected until delivery included a unilateral undescended testis, ventriculoseptal defects (n = 2), hypospadias (n = 2), early renal dysplasia, mild posterior urethral valves, and a two-vessel cord. Five of the 68 patients (including those with aneuploidy) had pregnancy terminations. Eleven patients were delivered preterm.
Conclusion: Karyotypic evaluation is recommended when isolated clubfoot is identified on prenatal sonogram because other subtle associated malformations may not be detected ultrasonographically in the early second trimester.
abstract_id: PUBMED:10711559
Isolated clubfoot diagnosed prenatally: is karyotyping indicated? Objective: To evaluate the appropriateness of fetal karyotyping after prenatal sonographic diagnosis of isolated unilateral or bilateral clubfoot.
Methods: We retrospectively reviewed a database of fetal abnormalities diagnosed by ultrasound at a single tertiary referral center from July 1994 to March 1999 for cases of unilateral or bilateral clubfoot. Fetuses who had additional anomalies diagnosed prenatally, after targeted sonographic fetal anatomy surveys, were excluded. Outcome results included fetal karyotype diagnosed by amniocentesis, or newborn physical examination by a pediatrician.
Results: During the 5-year period, 5,731 fetal abnormalities were diagnosed from more than 27,000 targeted prenatal ultrasound examinations. There were 51 cases of isolated clubfoot. The mean maternal age at diagnosis was 30.5 years. The mean gestational age at diagnosis was 21.6 weeks. Twenty-three of the women (45%) were at increased risk of fetal aneuploidy, on the basis of advanced maternal age or abnormal maternal serum screening. Six women (12%) had positive family histories of clubfoot; however, no cases of aneuploidy were found by fetal karyotype evaluation or newborn physical examination. All cases of clubfoot diagnosed prenatally were confirmed at newborn physical examination, and no additional malformations were detected.
Conclusion: After prenatal diagnosis of isolated unilateral or bilateral clubfoot, there appeared to be no indication to offer karyotyping, provided that a detailed sonographic fetal anatomy survey was normal and there were no additional indications for invasive prenatal diagnoses.
abstract_id: PUBMED:18937757
Management and outcome in prenatally diagnosed sacrococcygeal teratomas. Background: The aim of the present study was to retrospectively determine the clinical factors affecting the outcome after birth in prenatally diagnosed sacrococcygeal teratomas (SCT).
Methods: Six cases of prenatal SCT were identified from January 1985 until August 2005. A retrospective review of case-notes and pathological reports was carried out. Clinical data during the perinatal period, operative findings, postoperative complications and follow up were evaluated in the patients with prenatally diagnosed SCT.
Results: SCT presented as type I in two neonates and type III in four between 22 and 33 weeks' gestation. Fetal intervention was not performed for any fetus. Five of six were delivered by cesarean section and the other was delivered vaginally due to small tumor size. Patients were born between 29 and 39 weeks' gestation and weighed from 1840 to 3500 g. All patients with type III SCT presented with related diseases, including bilateral hydronephrosis, neurological deficit of the communicating peroneal nerve such as paralytic talipes equinus, bladder or bowel dysfunction, high-output cardiac failure, or fetal hydrops in one of a set of fraternal twins. A baby with high-output cardiac failure and fetal hydrops underwent urgent cesarean section at 29 weeks' gestation and died 8 days after birth despite intensive care due to multi-organ failure. In five cases, surgery was successful with good outcomes maintained at follow-up of between 8 months and 14 years.
Conclusions: Detailed ultrasound should be performed to rule out associated anomalies, and determine the presence or absence of hydrops in prenatally diagnosed SCT. Fetal hydrops, orthopedic impairment such as lower extremity weakness and swelling, and urinary incontinence are important clinical factors affecting the outcome after birth in prenatally diagnosed SCT. In particular, the present study indicated that the association of a fraternal twin and fetal hydrops makes it very difficult to treat SCT perinatally.
abstract_id: PUBMED:20069547
Outcome of prenatally diagnosed isolated clubfoot. Objectives: To analyze the aneuploidy risk and treatment outcome of prenatally diagnosed isolated clubfoot, to determine the false-positive rate (FPR) of ultrasound diagnosis and to calculate the risk of diagnostic revision to complex clubfoot.
Methods: By chart review, 65 patients were retrospectively ascertained to have unilateral or bilateral clubfeet diagnosed prenatally. We calculated the rates of false positives, aneuploidy and diagnostic revision to complex clubfoot, and used an ad hoc scoring system to determine orthopedic outcome. Published rates of aneuploidy were pooled and evaluated.
Results: Prenatally diagnosed isolated clubfoot FPR (defined as 1 - positive predictive value) was 10.5% (95% CI, 5.8-18%) (calculated per foot). After a minimum of 1-year postnatal follow-up, 13% (95% CI, 6-26%) of patients had revised diagnoses of complex clubfoot. No patients had aneuploidy identified by cytogenetic analysis or clinical assessment. Of the 34 patients with 2-year postnatal follow-up, 76.5% were treated with serial casting with or without Botox. All children with isolated clubfoot were walking and had an average outcome score of 'very good' to 'excellent'.
Conclusions: When counseling women regarding prenatally diagnosed isolated clubfoot, it is important to tell them that approximately 10% of individuals will have a normal foot or positional foot deformity requiring minimal treatment. Conversely, 10-13% of prenatally diagnosed cases of isolated clubfoot will have complex clubfoot postnatally, based on the finding of additional structural or neurodevelopmental abnormalities. Although this study did not identify an increased risk of fetal aneuploidy associated with isolated clubfoot, a review of the literature indicates a risk of 1.7-3.6% with predominance of sex chromosome aneuploidy.
abstract_id: PUBMED:21268031
Perinatal outcome of prenatally diagnosed congenital talipes equinovarus. Objective: The purpose of this study was to investigate the perinatal outcome of prenatally diagnosed congenital talipes equinovarus.
Methods: This was a retrospective observational study of all cases of prenatally diagnosed congenital talipes equinovarus referred to a major tertiary fetal medicine unit. Cases were identified from the fetal medicine and obstetric databases and pregnancy details and delivery outcome data obtained. Details of termination of pregnancy, number of patients undergoing karyotyping as well as details of prenatal classification of severity were recorded.
Results: A total of 174 cases were identified. Of these, outcome data were available for 88.5% (154/174) of the pregnancies. Eighty-three (47.7%) of the cases were isolated and 91 cases (52.3%) were associated with additional abnormalities. There was a significant difference in birth weights between the two cohorts. Bilateral abnormality tended to be more severe. A high caesarean section rate was noted overall and a high preterm delivery rate was seen in the isolated group.
Conclusion: This study is important because it provides contemporary data that can be used to counsel women prenatally. In particular, the raised risk of preterm delivery and caesarean section as well as the increased severity of the condition when both feet are affected should be discussed. The poor perinatal outcome when additional anomalies are present and the increased risk of aneuploidy are also important factors.
abstract_id: PUBMED:12375549
Chronology of neurological manifestations of prenatally diagnosed open neural tube defects. Objective: To evaluate the incidence and chronology of sonographic markers of neurological compromise in prenatally diagnosed neural tube defects.
Methods: We reviewed our ultrasound database from 1988 to 1999 to identify all cases of prenatally diagnosed neural tube defects. All patients received an initial detailed targeted ultrasound evaluation with subsequent evaluations every 4-6 weeks. Cases involving multiple congenital anomalies, aneuploidy, or inadequate follow-up were excluded. Specific ultrasound markers assessed included the presence of ventriculomegaly (> 10 mm) and clubfoot.
Results: Forty-seven cases of neural tube defects were identified over the study interval. After exclusions, 42 cases were available for evaluation. The overall incidence of ventriculomegaly and clubfoot in the study cohort was 86% and 38%, respectively. In the 33 patients with initial ultrasound examination performed at < 24 weeks' gestation, 76% (25/33) had evidence of ventriculomegaly and 30% (10/33) had clubfoot. Only 9% (1/11) of the patients managed expectantly developed evidence of ventriculomegaly and 3/11 (27%) developed clubfoot from the time of the initial ultrasound examination to delivery.
Conclusions: Ultrasound markers of neurological compromise are early and frequent findings associated with fetal neural tube defects. Development of ventriculomegaly is an uncommon occurrence later in gestation, while the risk for developing clubfoot appears to increase as gestation progresses.
abstract_id: PUBMED:20733428
Prenatally diagnosed clubfeet: comparing ultrasonographic severity with objective clinical outcomes. Background: Improvements in obstetric sonography (US) have led to an increased prenatal detection of clubfoot, but studies have not been able to correlate sonographic severity to clinical deformity at birth. The purpose of this study was to decrease the false positive (FP) rate for prenatally identified clubfeet, and to predict clinical severity using a new prenatal sonographic classification system.
Methods: We retrospectively identified all pregnant patients referred to the fetal care center at our institution for a diagnosis of clubfoot between 2002 and 2007. A total of 113 fetuses were identified. Follow-up information was available for 107 fetuses (95%). Out of 107 fetuses, 17 were terminated or died shortly after birth. Seven patients had normal studies or were not seen at our center. Out of 83 patients, 42 had an US available for rereview. A novel sonographic severity scale for clubfoot (mild/moderate/severe) was assigned by a radiologist specializing in prenatal US to each fetus based on specific anatomic features. The prenatal sonographic scores were then assessed with respect to final postnatal clinical diagnosis and to clinical severity.
Results: None of the pregnancies were terminated because of an isolated diagnosis of clubfoot. Of the remaining 83 fetuses with a prenatal diagnosis of at least 1 clubfoot, 67 had a clubfoot documented at birth (FP=19%). A foot classified as "mild" on prenatal US was significantly less likely to be a true clubfoot at birth than when a "moderate" or "severe" diagnosis was given (Odds Ratio=21, P<0.0001). If "mild" clubfoot patients were removed from the analysis, our FP rate decreased to 3/42. For a subgroup in which postnatal DiMeglio scoring was available, prenatal sonographic stratification of clubfoot did not relate to postnatal clinical severity.
Conclusions: Our initial experience with this novel sonographic scoring system showed improved detection of a true clubfoot prenatally and a decrease in the FP rate. An isolated "mild" clubfoot diagnosed on a prenatal sonogram is less likely to be a clubfoot at birth; this will have substantial impact on prenatal counseling.
Level Of Evidence: Level III Diagnostic Study.
abstract_id: PUBMED:37942915
Prenatal diagnosis of isolated bilateral clubfoot: Is amniocentesis indicated? Introduction: The aim of this study is to evaluate the benefit of cytogenetic testing by amniocentesis after an ultrasound diagnosis of isolated bilateral talipes equinovarus.
Material And Methods: This multicenter observational retrospective study includes all prenatally diagnosed cases of isolated bilateral talipes equinovarus in five fetal medicine centers from 2012 through 2021. Ultrasound data, amniocentesis results, biochemical analyses of amniotic fluid and parental blood samples to test neuromuscular diseases, pregnancy outcomes, and postnatal outcomes were collected for each patient.
Results: In all, 214 fetuses with isolated bilateral talipes equinovarus were analyzed. A first-degree family history of talipes equinovarus existed in 9.8% (21/214) of our cohort. Amniocentesis was proposed to 86.0% (184/214) and performed in 70.1% (129/184) of cases. Of the 184 karyotypes performed, two (1.6%) were abnormal (one trisomy 21 and one triple X syndrome). Of the 103 microarrays performed, two (1.9%) revealed a pathogenic copy number variation (one with a de novo 18p deletion and one with a de novo 22q11.2 deletion) (DiGeorge syndrome). Neuromuscular diseases (spinal muscular amyotrophy, myasthenia gravis, and Steinert disease) were tested for in 56 fetuses (27.6%); all were negative. Overall, 97.6% (165/169) of fetuses were live-born, and the diagnosis of isolated bilateral talipes equinovarus was confirmed for 98.6% (139/141). Three medical terminations of pregnancy were performed (for the fetuses diagnosed with Down syndrome, DiGeorge syndrome, and the 18p deletion). Telephone calls (at a mean follow-up age of 4.5 years) were made to all parents to collect medium-term and long-term follow-up information, and 70 (33.0%) families were successfully contacted. Two reported a rare genetic disease diagnosed postnatally (one primary microcephaly and one infantile glycine encephalopathy). Parents did not report any noticeably abnormal psychomotor development among the other children during this data collection.
Conclusions: Despite the low rate of pathogenic chromosomal abnormalities diagnosed prenatally after this ultrasound diagnosis, the risk of chromosomal aberration exceeds the risks of amniocentesis. These data may be helpful in prenatal counseling situations.
abstract_id: PUBMED:25394569
Congenital talipes equinovarus: frequency of associated malformations not identified by prenatal ultrasound. Objectives: To establish the frequency of prenatally undetected associated malformations (identified at birth) in infants with apparent "isolated" club foot deformity.
Methods: A cohort study of all infants with unilateral or bilateral club foot deformity identified at birth among 311 480 infants surveyed between 1972 and 2012 at Brigham and Women's Hospital in Boston. Those with talipes equinovarus were divided into "isolated" and "complex", based on the findings in examination and by chromosome analysis.
Results: One hundred and forty-two infants had "isolated" talipes equinovarus (TEV), and 66 had the "complex" type. Six (4.2%) of the 142 infants with "isolated" TEV were found at birth to have associated malformations that had not been identified by imaging during pregnancy. These abnormalities included hip dislocation (n = 2), bilateral post-axial polydactyly of the feet (n = 1), penile chordee (n = 1), and hypospadias (n = 2).
Conclusion: In this consecutive series of infants with isolated talipes equinovarus, 95.8% had no additional malformations identified by examination at birth. None of the additional findings were severe enough to affect the medical prognosis of the affected infant. © 2014 John Wiley & Sons, Ltd.
abstract_id: PUBMED:16470735
Postmortem evaluation of 220 prenatally diagnosed fetuses with neural tube defects: detection of associated anomalies in a Turkish population. Objectives: The aim of this study is to represent the distribution of disorders resulting from neural tube defects (NTDs).
Materials And Methods: This study was conducted on 220 prenatally diagnosed cases with NTDs. Fetuses were evaluated by physical examination, anthropometric measurements, X-rays, and photographs after termination of pregnancy. Chromosome analysis and autopsy were performed for 37 fetuses (16.8%) with additional malformations.
Results: In 29 out of 37 fetuses (78.4%), additional malformations were detected by prenatal ultrasonography, whereas in eight cases postmortem evaluation produced additional findings that were not detected prenatally. Fourteen of 37 (37.8%) and 65 of 220 (29.5%) fetuses had clubfoot, which was mostly secondary to NTDs. There was no difference in sex distribution between isolated NTDs and the group with additional abnormalities and among the groups anencephaly and anencephaly + anomaly, encephalocele and encephalocele + anomaly, spina bifida and spina bifida + anomaly. There was only one case, a female fetus, with iniencephaly in this group. Anencephaly was more frequent in cases with isolated NTDs (48.1%) than in those with additional anomalies (27%). There was no difference for other groups of NTDs. The most frequent disorder was vertebral segmentation defects, which were detected in 11 out of 37 cases (29.7%).
Conclusions: Evaluation of associated malformations and confirmation of ultrasound findings can be performed by postmortem examination and simple X-ray studies for exact diagnosis, which strongly affects decisions on further pregnancies as well as genetic counseling. This method is straightforward, inexpensive and effective.
Answer: The significance of prenatally identified isolated clubfoot and the indication for amniocentesis varies according to different studies. One study suggests that karyotypic evaluation is recommended when isolated clubfoot is identified on a prenatal sonogram because other subtle associated malformations may not be detected ultrasonographically in the early second trimester (PUBMED:9539532). This study found that 5.9% of fetuses with isolated clubfoot had abnormal karyotypes, including conditions such as trisomy 18 and trisomy 21.
Conversely, another study concluded that after prenatal diagnosis of isolated unilateral or bilateral clubfoot, there appeared to be no indication to offer karyotyping, provided that a detailed sonographic fetal anatomy survey was normal and there were no additional indications for invasive prenatal diagnoses (PUBMED:10711559). This study did not find any cases of aneuploidy by fetal karyotype evaluation or newborn physical examination.
A third study did not identify an increased risk of fetal aneuploidy associated with isolated clubfoot, but a review of the literature indicated a risk of 1.7-3.6% with predominance of sex chromosome aneuploidy (PUBMED:20069547). This study also noted that approximately 10% of individuals will have a normal foot or positional foot deformity requiring minimal treatment, and 10-13% of prenatally diagnosed cases of isolated clubfoot will have complex clubfoot postnatally.
Another study found that the risk of chromosomal aberration after a prenatal diagnosis of isolated bilateral clubfoot exceeds the risks of amniocentesis, suggesting that cytogenetic testing may be beneficial in prenatal counseling situations (PUBMED:37942915).
In summary, the indication for amniocentesis in cases of prenatally identified isolated clubfoot is not universally agreed upon. Some studies recommend karyotyping due to the potential for associated subtle malformations or chromosomal abnormalities, while others suggest that it may not be necessary if detailed sonographic surveys are normal. The decision to proceed with amniocentesis should be made on a case-by-case basis, considering the detailed ultrasound findings, family history, and other risk factors.
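As a point of arithmetic, the "false-positive rate" quoted in these clubfoot studies is defined as one minus the positive predictive value of the prenatal diagnosis (see PUBMED:20069547). A minimal worked example, using only the confirmation figures reported in PUBMED:9539532 (60 of 68 prenatally diagnosed cases confirmed after delivery):

```python
# False-positive rate as defined in the clubfoot abstracts: 1 - positive predictive value.
# Figures taken from PUBMED:9539532: 60 of 68 prenatal diagnoses confirmed at birth.
prenatal_diagnoses = 68
confirmed_at_birth = 60

ppv = confirmed_at_birth / prenatal_diagnoses   # positive predictive value, about 0.88
false_positive_rate = 1 - ppv                   # about 0.118, i.e. the 11.8% quoted in the abstract
print(f"PPV = {ppv:.1%}, false-positive rate = {false_positive_rate:.1%}")
```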
Instruction: Should breast reduction surgery be rationed?
Abstracts:
abstract_id: PUBMED:20574478
Breast reduction using liposuction alone. Liposuction alone as a treatment of breast hypertrophy has been mentioned in the literature for the past decade but has been limited in its application. Our experience in over 350 cases has shown that liposuction breast reduction is an excellent method of breast reduction when applied to the proper patient. The techniques involved in liposuction breast reduction mirror those used in standard liposuction cases, so most plastic surgeons will find the learning curve for this procedure to be very easy. Complications are infrequent and the recovery is rapid and easy. Liposuction breast reduction affords a rapid procedure with minimal complications and easy recovery and can provide a useful alternative to traditional breast reduction surgery in many patients.
abstract_id: PUBMED:20574477
Vertical breast reduction. The vertical approach to breast reduction surgery has achieved increasing popularity. The learning curve can be a problem for surgeons starting to incorporate vertical techniques into their practices; the medial pedicle approach is outlined in detail. Designing and creating the medial pedicle is straightforward and rotating it into position is easy. An elegant curve to the lower pole of the reduced breast can thus be created. Current concepts related to the skin brassiere, breast sutures, and the longevity of results are reviewed. It is important for the surgeon to understand that the skin resection pattern and the pedicle design are separate issues when discussing breast reduction surgery.
abstract_id: PUBMED:1299162
Breast feeding after breast reduction Few authors have addressed the feasibility of breast-feeding after a reduction mammaplasty. Nowadays, the majority of plastic surgeons perform breast reductions with techniques preserving the continuity of the nipple-areola complex with the retained breast tissue. These pedicle techniques should permit lactation as opposed to the free nipple grafting technique used earlier. To find out how many women nurse their children after a reduction mammaplasty, we reviewed 806 charts to identify 243 women having had a pedicle technique breast reduction, between 1967-1987, at the age of 15 to 35 years. These women were contacted and 98 of them were reached. Eighteen women had become pregnant after their surgery. They agreed to answer a questionnaire regarding their decision to nurse their children, the duration of breast-feeding and the difficulties encountered. Eight of eighteen mothers (45%) nursed their children up to 32 weeks (mean 11 weeks). Among them, 3 nursed for less than 3 weeks and 5 nursed from 3 to 32 weeks (mean 20 weeks). Only one mother had to supplement nursing with formula. Two mothers used mixed formula and breast-feeding when they returned to work. Ten of eighteen mothers (55%) did not breast-feed for the following reasons: 6 by personal choice, 2 due to premature delivery, one was advised that nursing was not feasible and one had no lactation. We believe that the nursing capacity of the breast is preserved after a breast reduction and that women should be encouraged to nurse their children.
abstract_id: PUBMED:31183292
Investigation of the Anthropometric Changes in Breast Volume and Measurements After Breast Reduction. Objective This study aims to compare breast volume changes and other anthropometric measurements using before-and-after pictures of women who underwent breast reduction in a plastic and reconstructive surgery clinic, with measurements taken from the anatomic points indicated in the literature. Background Landmarks (previously identified as anatomic points) that show the success of a breast reduction operation are not sufficient. Anthropometric points and their identification are of great importance for choosing the landmarks and identifying the statistical approaches to be used. Methods Breast anthropometric measurements were taken for a total of 40 women before and after breast reduction surgery by a photographic technique, using the Image J programme, from the anatomical points determined in the literature. Comparison of right and left breast anthropometric measurements before and after the operation was performed using the paired t test or Wilcoxon signed rank test. The intraclass correlation coefficient (ICC) and Bland-Altman plots were used to determine the agreement between each pair of measurements. Results There was a statistically significant agreement between all the measurements (p<0.001). According to the Bland-Altman graphics, right and left breast measurements after the operation were within the limits of agreement according to all measurement points. Conclusion This study presented anthropometric measurements to show and guide patient satisfaction and aesthetic success of the operations performed by plastic surgeons.
abstract_id: PUBMED:33935534
Discussion of Histopathological Findings of 954 Breast Reduction Specimens. Objectives: Breast reduction is a frequently sought procedure by patients and one of the most commonly performed operations by plastic surgeons. Follow-up of histopathological results after reduction mammoplasty is very important. This study aimed to evaluate the histopathological results of patients undergoing bilateral reduction mammoplasty to determine the incidence of breast lesions and risk factors of high-risk breast lesions.
Methods: 477 patients who underwent reduction mammoplasty in the plastic surgery department between October 2013 and January 2020 were included in this study. Patients were evaluated according to age, body mass index (BMI), comorbidity factors, tobacco use, family history and histopathological findings.
Results: The mean age of patients was 42.43±12.05 years. Body mass index ranged from 23 to 34.6. As for comorbidity factors, 12 patients had hypertension, five patients had asthma and six patients had diabetes mellitus. Seventeen patients (3.6%) were smokers, and 25 (5.2%) patients had a family history of breast cancer. Among the patients, 2.3% were 20 years and under, 17.1% were between 21 and 30 years old, 21.5% were between 31 and 40 years old, 33.1% were between 41 and 50 years old, 18.2% were between 51 and 60 years old, and 7.5% were 60 years and above. 85.4% of histopathological findings consisted of normal breast tissue and nonproliferative breast lesions. The incidences of proliferative breast lesions, atypical hyperplasia and in situ lesions were calculated as 5.7%, 2% and 0.4%, respectively. The mean follow-up period was 3.8±1.6 years.
Conclusion: Although preoperative breast cancer screening methods are used before the reduction mammoplasty, high-risk lesions may be encountered afterwards. One of the biggest advantages of reduction mammoplasty in addition to psychophysiological recovery is breast cancer risk reduction.
abstract_id: PUBMED:27012791
Reduction Mammaplasty and Breast Cancer Screening. Breast reduction surgery is one of the most popular procedures performed by plastic surgeons; based on the current literature, it is safe and does not have a negative impact on identifying breast cancer in women. There are no evidence-based data to confirm the utility of unique screening protocols for women planning to undergo reduction surgery or for those who have already had a reduction. Women undergoing this surgery should not deviate from the current recommendations of screening mammography in women older than 40 years of average risk. Experienced radiologists can readily distinguish postsurgical imaging findings of rearranged breast parenchyma from malignancy.
abstract_id: PUBMED:30725197
Combined Breast Reduction Augmentation. Background: Numerous methods have been designed to reduce breasts size and weight. The goal today is to not only to reduce size but also to create a pleasing shape. Breast reduction techniques do not obtain the desired upper pole fullness, and commonly recurrent ptosis develops. To improve and maintain breast shape in the late postoperative period, we combine breast reduction with implants.
Methods: Three hundred and sixty-six patients who underwent combined breast reduction or mastopexy with implants from January 2014 to November 2017 at IM Clinic were retrospectively reviewed. We present the indications, surgical technique, and outcomes of these patients to determine the safety and efficacy of our technique.
Results: No major complications were noted in an average of 2 years of follow-up (range 2 months to 4 years). Minor complications occurred in 61 patients, of whom 46 required revision surgery (12.6%). The most common tissue-related complications were dog ears (7.6%) and poor scarring (4.9%). The most common implant-related complication was capsular contracture (0.8%).
Conclusions: Breast reduction with implants is a reliable option to provide additional volume to the upper pole of the breast to improve long-term breast shape and avoid ptosis recurrence. Our study indicates that the procedure is safe and has complication and revision rates comparable to traditional breast reduction or augmentation mastopexy techniques.
Level Of Evidence: IV.
abstract_id: PUBMED:38339249
Breast Cancer in the Tissue of the Contralateral Breast Reduction. Breast cancer is the most prevalent malignancy among women worldwide, and the increasing number of survivors is due to advances in early diagnosis and treatment efficacy. Consequently, the risk of developing contralateral breast cancer (CBC) among these survivors has become a concern. While surgical intervention with lumpectomy is a widely used primary approach for breast cancer, post-operative breast asymmetry is a potential concern. Many women opt for symmetrizing reduction procedures to improve aesthetic outcomes and quality of life. However, despite careful radiological screening, there is a chance of accidentally finding CBC. To address this, tissue excised during symmetrizing surgery is examined pathologically. In some cases, CBC or in situ lesions have been incidentally discovered in these specimens, prompting a need for a more thorough examination. Resection in pieces and the absence of surgical marking and pathological inking of the margin have made it challenging to precisely identify tumor location and assess tumor size and margin status, hampering adjuvant treatment decisions. A new protocol introduced in July 2022 aims to enhance the precision of CBC diagnosis, allowing for tailored treatment plans, including re-excision, systemic adjuvant therapy, or radiation therapy.
abstract_id: PUBMED:20574481
Reduction mammaplasty in conjunction with breast conservation. Breast conservation therapy, consisting of lumpectomy or segmental mastectomy with negative margins followed by breast irradiation, has become a standard and safe alternative to mastectomy in selected patients with early-stage breast cancer. As the inclusion criteria for breast conservation therapy have continued to evolve to include lower quadrant tumors, very large breasts, and central tumors, the potential for significant disfigurement after breast conservation therapy has increased. Bilateral reduction mammaplasty in conjunction with tumor-directed partial mastectomy is a surgical approach that can benefit these patients by creating symmetric, aesthetically pleasing breasts in a single-stage operation.
abstract_id: PUBMED:29897268
Validation of the breast evaluation questionnaire for breast hypertrophy and breast reduction. Background: There is a lack of published, validated questionnaires for evaluating psychosocial morbidity in patients with breast hypertrophy undergoing breast reduction surgery. Aim: To validate the breast evaluation questionnaire (BEQ), originally developed for the assessment of breast augmentation patients, for the assessment of psychosocial morbidity in patients with breast hypertrophy undergoing breast reduction surgery. Study design: Validation study. Subjects: Women with macromastia. Methods: The validation of the BEQ, adapted to breast reduction, was performed in several steps. Content validity, reliability, construct validity and responsiveness were assessed. Results: The original version was adjusted according to the results for content validity and resulted in item reduction and a modified BEQ (mBEQ) that was then assessed for reliability, construct validity and responsiveness. Internal and external validation was performed for the modified BEQ. Convergent validity was tested against Breast-Q (reduction) and discriminant validity was tested against the SF-36. Known-groups validation revealed significant differences between the normal population and patients undergoing breast reduction surgery. The BEQ showed good reliability by test-retest analysis and high responsiveness. Conclusion: The modified BEQ may be a reliable, valid and responsive instrument for assessing women who undergo breast reduction.
Answer: The abstracts provided do not directly address the policy question of whether breast reduction surgery should be rationed. However, they do provide insights into various aspects of breast reduction surgery that could inform a discussion on the topic.
Breast reduction using liposuction alone has been shown to be an effective method with minimal complications and easy recovery for the proper patient (PUBMED:20574478). Vertical breast reduction has also gained popularity and can create an elegant curve to the lower pole of the reduced breast (PUBMED:20574477). Breastfeeding after breast reduction is feasible in many cases, preserving the nursing capacity of the breast (PUBMED:1299162). Anthropometric changes post-surgery can be significant, indicating the procedure's impact on breast volume and measurements (PUBMED:31183292). Histopathological examination of breast reduction specimens can reveal a range of breast lesions, suggesting a potential benefit in cancer risk reduction (PUBMED:33935534). Breast reduction does not negatively impact breast cancer screening (PUBMED:27012791). Combining breast reduction with implants can improve long-term breast shape and avoid ptosis recurrence (PUBMED:30725197). Incidental findings of contralateral breast cancer during symmetrizing reduction procedures highlight the importance of careful examination of excised tissue (PUBMED:38339249). Reduction mammaplasty can be performed in conjunction with breast conservation therapy for cancer, offering aesthetic benefits (PUBMED:20574481). Lastly, the validation of the breast evaluation questionnaire for breast hypertrophy and reduction indicates the importance of assessing psychosocial morbidity in patients undergoing this surgery (PUBMED:29897268).
These abstracts suggest that breast reduction surgery can have significant physical and psychological benefits for patients, including improved breast shape, reduced pain, and potential cancer risk reduction. The decision to ration such a procedure would need to consider these benefits against factors such as healthcare costs, resource allocation, and the impact on patient quality of life. The abstracts do not provide a definitive answer to the question of rationing but highlight the complexity of the issue and the need for careful consideration of the medical, psychological, and social implications of breast reduction surgery.
Instruction: High amplitude contractions in the middle third of the oesophagus: a manometric marker of chronic alcoholism?
Abstracts:
abstract_id: PUBMED:8707108
High amplitude contractions in the middle third of the oesophagus: a manometric marker of chronic alcoholism? Background: Oesophageal motor abnormalities have been reported in alcoholism.
Aim: To investigate the effects of chronic alcoholism and its withdrawal on oesophageal disease.
Patients: 23 chronic alcoholic patients (20 men and three women; mean age 43, range 23 to 54).
Methods: Endoscopy, manometry, and 24 hour pH monitoring 7-10 days and six months after ethanol withdrawal. Tests for autonomic and peripheral neuropathy were also performed. Motility and pH tracings were compared with those of age and sex matched control groups: healthy volunteers, nutcracker oesophagus, and gastro-oesophageal reflux disease.
Results: 14 (61%) alcoholic patients had reflux symptoms, and endoscopy with biopsy showed oesophageal inflammation in 10 patients. One patient had an asymptomatic squamous cell carcinoma. Oesophageal motility studies in the alcoholic patients showed that peristaltic amplitude in the middle third was > 150 mm Hg (95th percentile (P95) of healthy controls) in 13 (57%), the ratio of lower/middle amplitude was < 0.9 in 15 (65%) (> 0.9 in all control groups), and the lower oesophageal sphincter was hypertensive (> 23.4 mm Hg, P95 of healthy controls) in 13 (57%). All three abnormalities were present in five (22%). Abnormal reflux (per cent reflux time > 2.9, P95 of healthy controls) was shown in 12 (52%) alcoholic patients, and was unrelated to peristaltic dysfunction. Subclinical neuropathy in 10 patients did not affect the oesophageal abnormalities. Oesophageal motility abnormalities persisted at six months in six patients with ongoing alcoholism, whereas they reverted towards normal in 13 who remained abstinent; reflux, however, was unaffected.
Conclusions: Oesophageal peristaltic dysfunction and reflux are frequent in alcoholism. High amplitude contractions in the middle third of the oesophagus seem to be a marker of excessive alcohol consumption, and tend to improve with abstinence.
abstract_id: PUBMED:1551339
Secondary esophageal contractions are abnormal in chronic alcoholics. It is known that primary (swallow-induced) esophageal contractions are abnormal in alcoholics. Data concerning acid-induced esophageal contractions, which appear to be important in cleansing refluxed acid from the esophagus, are lacking. To determine whether acid-induced esophageal contractions are also affected by chronic ethanol exposure, we studied secondary (acid or saline-induced) esophageal motor events in 19 male alcoholics [6 actively drinking (ADA), 13 withdrawing (WA)]. Esophageal motility was performed in response to wet swallows (5 ml of water) and to intraesophageal injection of 5 ml of 0.1 N HCl (0.1 N) or saline. Lower esophageal sphincter pressure (LESP), amplitude (ECA), duration (ECD), and velocity (ECV) of esophageal contractions in response to swallowing and injection of acid or saline were similar in controls and alcoholics. There were more simultaneous and double-peaked contractions in response to acid and saline than to swallows in both alcoholics and controls. However, there was no difference between HCl- and NaCl-induced contractions. ECA in alcoholics was significantly higher than in controls. ECD in alcoholics was significantly more prolonged than in controls. There was no significant difference between alcoholics and controls in ECV, LESP, or LES relaxation. These data indicate that, similar to primary esophageal contractions, secondary esophageal contractions are also abnormal in both actively drinking and withdrawing alcoholics.
abstract_id: PUBMED:10023513
Path analysis of P300 amplitude of individuals from families at high and low risk for developing alcoholism. Background: A substantial amount of evidence exists suggesting that P300 amplitude in childhood is a risk marker for later development of alcohol dependence. There is evidence that P300 amplitude is heritable. The goal of the present study was to determine if patterns of transmission differed in families who were either at high or low risk for developing alcohol dependence.
Methods: Auditory P300 was recorded from 536 individuals spanning three generations. The path analytic TAU model was used to investigate the familial transmission of P300 amplitude in the two independent samples of families.
Results: Transmission of P300 in high-risk families most likely followed a polygenic model of inheritance with significant parent-to-offspring transmission. Parent-to-offspring transmission was significantly greater in high-risk than low-risk families. Total phenotypic variance due to transmissible factors was greater in low-risk families than in high-risk families, however. A somewhat unexpected finding was the substantial correlation between mates for P300 amplitude in both high- and low-risk families.
Conclusions: P300 is transmissible in families. Differences exist in the pattern of transmission for P300 in families at high and low risk for alcoholism.
abstract_id: PUBMED:36680783
Associations of parent-adolescent closeness with P3 amplitude, frontal theta, and binge drinking among offspring with high risk for alcohol use disorder. Background: Parents impact their offspring's brain development, neurocognitive function, risk, and resilience for alcohol use disorder (AUD) via both genetic and socio-environmental factors. Individuals with AUD and their unaffected children manifest low parietal P3 amplitude and low frontal theta (FT) power, reflecting heritable neurocognitive deficits associated with AUD. Likewise, children who experience poor parenting tend to have atypical brain development and greater rates of alcohol problems. Conversely, positive parenting can be protective and critical for normative development of self-regulation, neurocognitive functioning and the neurobiological systems subserving them. Yet, the role of positive parenting in resiliency toward AUD is understudied and its association with neurocognitive functioning and behavioral vulnerability to AUD among high-risk offspring is less known. Using data from the Collaborative Study on the Genetics of Alcoholism prospective cohort (N = 1256, mean age [SD] = 19.25 [1.88]), we investigated the associations of closeness with mother and father during adolescence with offspring P3 amplitude, FT power, and binge drinking among high-risk offspring.
Methods: Self-reported closeness with mother and father between ages 12 and 17 and binge drinking were assessed using the Semi-Structured Assessment for the Genetics of Alcoholism. P3 amplitude and FT power were assessed in response to target stimuli using a Visual Oddball Task.
Results: Multivariate multiple regression analyses showed that closeness with father was associated with larger P3 amplitude (p = 0.002) and higher FT power (p = 0.01). Closeness with mother was associated with less binge drinking (p = 0.003). Among male offspring, closeness with father was associated with larger P3 amplitude, but among female offspring, closeness with mother was associated with less binge drinking. These associations remained statistically significant with father's and mothers' AUD symptoms, socioeconomic status, and offspring impulsivity in the model.
Conclusions: Among high-risk offspring, closeness with parents during adolescence may promote resilience for developing AUD and related neurocognitive deficits albeit with important sex differences.
abstract_id: PUBMED:24255944
Alcohol reduces cross-frequency theta-phase gamma-amplitude coupling in resting electroencephalography. Background: The electrophysiological inhibitory mechanism of cognitive control for alcohol remains largely unknown. The purpose of the study was to compare electroencephalogram (EEG) power spectra and cross-frequency phase-amplitude coupling (CFPAC) at rest and during a simple subtraction task after acute alcohol ingestion.
Methods: Twenty-one healthy subjects participated in this study. Two experiments were performed 1 week apart, and the order of the experiments was randomly assigned to each subject. During the experiments, each subject was provided with orange juice containing alcohol or orange juice only. We recorded EEG activity and analyzed power spectra and CFPAC data.
Results: The results showed prominent theta-phase gamma-amplitude coupling at the frontal and parietal electrodes at rest. This effect was significantly reduced after alcohol ingestion.
Conclusions: Our findings suggest that theta-phase gamma-amplitude coupling is deficiently synchronized at rest after alcohol ingestion. Therefore, cross-frequency coupling could be a useful tool for studying the effects of alcohol on the brain and investigating alcohol addiction.
abstract_id: PUBMED:9442343
Genetic association between reduced P300 amplitude and the DRD2 dopamine receptor A1 allele in children at high risk for alcoholism. Background: There is evidence that both reduction in P300 amplitude and the presence of the A1 allele are risk markers for alcoholism. We hypothesized that demonstration of a relationship between the marker and the trait in young children who had not begun to drink regularly would provide evidence for dopaminergic mediation of the reduction in P300 often seen among high-risk children. A previous association between the A1 and the P300 amplitude in screened controls supports the hypothesis that this association occurs in the general population.
Methods: Children were assessed using both visual and auditory paradigms to elicit event-related potentials (ERPs). The P300 component of the ERP was investigated with respect to the genetic variation of the Taq1A D2 receptor in these children.
Results: Genetic association between a marker locus (Taq1 A RFLP near the D2 receptor locus) and the amplitude of P300 was found to be present in 58 high-risk children and their relatives (a total of 100 high-risk individuals).
Conclusions: A higher proportion of children from alcoholic families may exhibit lower P300 because more of these children carry the A1 allele than is seen in the normal population.
abstract_id: PUBMED:36272658
Cortical thickness and intrinsic activity changes in middle-aged men with alcohol use disorder. Background: Previous studies reported the alterations of brain structure or function in people with alcohol use disorder (AUD). However, a multi-modal approach combining structural and functional studies is essential to understanding the neural mechanisms of AUD. Hence, we examined regional differences in cortical thickness (CT) and amplitude of low-frequency fluctuation (ALFF) in patients with AUD.
Methods: Thirty male patients with AUD and thirty age- and education-matched healthy male controls were recruited. High-resolution anatomical and resting-state functional MRI (rs-fMRI) data were collected, and the CT and ALFF were computed.
Results: Behaviorally, males with AUD showed a cognitive decline in multiple domains. Structurally, they presented prominent reductions in CT in the bilateral temporal, insular, precentral, and dorsolateral prefrontal gyri (p < 0.05, voxel-wise family-wise error [FWE]). Functionally, a significant decrease in ALFF in the bilateral temporal, dorsolateral prefrontal, insular, putamen, cerebellum, right precuneus, mid-cingulate, and precentral gyri were observed (p < 0.05, FWE).
Conclusions: Our findings demonstrate the dual alterations of alcohol-related brain structure and function in male patients with AUD. These results may be useful in understanding the neural mechanisms in AUD.
abstract_id: PUBMED:12385676
P300 amplitude in adolescent twins discordant and concordant for alcohol use disorders. The sons of alcoholics have repeatedly been found to have reduced P300 amplitude. Further, quantitative behavioral genetic and molecular genetic studies indicating a genetic influence on P300 amplitude have fueled speculation that this component may be a biological vulnerability marker for alcoholism. To further explore this possibility, we examined P300 in adolescent twin pairs from an epidemiological sample who were (a) discordant for alcohol abuse/dependence, (b) concordant for alcohol abuse/dependence, or (c) concordant for the absence of alcohol abuse/dependence and other relevant disorders. For discordant pairs, the alcohol abusing/dependent twins' amplitude did not differ from that of non-alcoholic co-twins. Pairs free of psychopathology had greater amplitudes than both alcoholism discordant and concordant pairs. P300 amplitude was more similar in monozygotic than dizygotic discordant pairs, suggesting a genetic influence on P300 amplitude in this group. The findings are consistent with P300 amplitude being a marker of vulnerability to alcohol use disorders.
abstract_id: PUBMED:9756048
Amplitude of visual P3 event-related potential as a phenotypic marker for a predisposition to alcoholism: preliminary results from the COGA Project. Collaborative Study on the Genetics of Alcoholism. Recent data collected at six identical electrophysiological laboratories from the large national multisite Collaborative Study on the Genetics of Alcoholism provide evidence for considering the P3 amplitude of the event-related potential as a phenotypic marker for the risk of alcoholism. The distribution of P3 amplitude to target stimuli at the Pz electrode in individuals 16 years of age and over from 163 randomly ascertained control families (n = 687) was compared with those from 219 densely affected alcoholic families (n = 1276) in which three directly interviewed first-degree relatives met both DSM-III-R and Feighner criteria at the definite level for alcohol dependence (stage II). The control sample did not exclude individuals with psychiatric illness or alcoholism to obtain incidence rates of psychiatric disorders similar to those of the general population. P3 amplitude data from control families was converted to Z-scores, and a P3 amplitude beyond 2 SD's below the mean was considered an "abnormal trait." When age- and sex-matched distributions of P3 amplitude were compared, members of densely affected stage II families were more likely to manifest low P3 amplitudes (2 SD below the mean) than members of control families, comparing affected and unaffected offspring, and all individuals; all comparisons of these distributions between groups were significant (p < 0.00001). P3 amplitude means were also significantly lower in stage II family members, compared with control family members for all comparisons, namely probands, affected and unaffected individuals (p < 0.0001), and offspring (p < 0.01). Furthermore, affected individuals from stage II families, but not control families, had significantly lower P3 amplitudes than unaffected individuals (p < 0.001). Affected males from stage II families had significantly lower P3 amplitudes than affected females (p < 0.001). Recent linkage analyses indicate that visual P3 amplitude provides a biological phenotypic marker that has genetic underpinnings.
abstract_id: PUBMED:33933141
Third time recurrent Boerhaave's syndrome: a case report. Background: Effort rupture of the esophagus or Boerhaave's syndrome is a rare entity, and prognosis is largely dependent on early diagnosis and treatment. Recurrent effort ruptures are very rare, only reported in a few case reports in English literature. We present a case with a third time effort rupture, and to the best of our knowledge there are no such previous publications. Furthermore, the presented case is also distinct because each episode was treated by different methods, reflecting the pathophysiology of recurrent disease as well as the last decade's advancements in the management of esophageal perforations in our clinic and globally.
Case Presentation: The patient is a 60-year-old White male, suffering from alcohol abuse, mild reflux esophagitis, and a history of effort esophageal ruptures on two previous occasions. He was now admitted to our ward once again because of a third bout of Boerhaave's syndrome. The first time, 10 years ago, he was managed by thoracotomy and laparotomy with primary repair, and the second time, 5 years ago, by transhiatal mediastinal drainage through a laparotomy and endoscopic stent placement. Now he was successfully managed by endovascular vacuum-assisted closure therapy alone.
Conclusions: Recurrent cases of Boerhaave's syndrome are very rare, and treatment must be tailored individually. The basic rationale is, however, no different from primary disease: (1) early diagnosis, (2) adequate drainage of extraesophageal contamination, and (3) restoration of esophageal integrity. Recurrent disease is usually contained and exceptionally suitable for primary endoscopic treatment. To cover the full panorama and difficult nature of complex esophageal disease, endoscopic modalities such as stent placement and endovascular vacuum-assisted closure, as well as the capacity for prompt extensive surgical interventions such as esophagectomy, should be readily accessible within every modern esophageal center.
Answer: High amplitude contractions in the middle third of the esophagus have been investigated as a potential manometric marker of chronic alcoholism. A study involving 23 chronic alcoholic patients found that 57% of them exhibited peristaltic amplitude in the middle third of the esophagus greater than 150 mm Hg, which is above the 95th percentile of healthy controls. Additionally, the ratio of lower/middle amplitude was less than 0.9 in 65% of the alcoholic patients, a ratio that was greater than 0.9 in all control groups. These findings suggest that high amplitude contractions in the middle third of the esophagus could be indicative of excessive alcohol consumption (PUBMED:8707108).
Furthermore, secondary esophageal contractions, which are important for cleansing refluxed acid from the esophagus, were found to be abnormal in chronic alcoholics. A study involving 19 male alcoholics, both actively drinking and withdrawing, showed that the amplitude of esophageal contractions in alcoholics was significantly higher than in controls. The duration of these contractions was also more prolonged in alcoholics compared to controls, indicating that both primary and secondary esophageal contractions are affected by chronic ethanol exposure (PUBMED:1551339).
These studies support the notion that high amplitude contractions in the middle third of the esophagus may serve as a manometric marker for chronic alcoholism, reflecting the impact of alcohol on esophageal motility. However, it is important to note that while these manometric abnormalities tend to improve with abstinence from alcohol, they may persist in patients with ongoing alcoholism (PUBMED:8707108).
Instruction: Is it safe to perform cardiac catheterizations on adults with congenital heart disease in a pediatric catheterization laboratory?
Abstracts:
abstract_id: PUBMED:16216015
Is it safe to perform cardiac catheterizations on adults with congenital heart disease in a pediatric catheterization laboratory? Objective: To determine the complication rate during the catheterization in adults with congenital heart disease (CHD) in a pediatric catheterization laboratory (PCL).
Background: An increasing number of patients with CHD are surviving into adulthood, with diagnostic and interventional cardiac catheterization being essential for the management of their disease. The complication rate during the catheterization of adults with CHD has not been reported.
Methods: A retrospective chart review was performed on all adult patients (>18 years) with CHD who underwent diagnostic or interventional catheterization in our PCL within the past 8.5 years.
Results: A total of 576 procedures were performed on 436 adult patients (median age 26 years). Complex heart disease was present in 387/576 (67%) procedures. An isolated atrial septal defect or patent foramen ovale was present in 115/576 (20%) procedures, and 51/576 (9%) procedures were performed on patients with structurally normal hearts with arrhythmias. Interventional catheterization was performed in 378/576 (66%) procedures. There were complications during 61/576 (10.6%) procedures; 19 were considered major and 42 minor. Major complications were death (1), ventricular fibrillation (1), hypotension requiring inotropes (7), atrial flutter (3), retroperitoneal hematoma, pneumothorax, hemothorax, aortic dissection, renal failure, myocardial ischemia and stent malposition (1 each). The most common minor complications were vascular entry site hematomas and hypotension not requiring inotropes. Procedures performed on patients ≥45 years of age had a 19% occurrence of complications overall, compared with a 9% occurrence rate in patients aged <45 years (P < 0.01).
Conclusions: The complication rate during the catheterization of adults with CHD in a PCL is similar to the complication rate of children with CHD undergoing cardiac catheterization. The older subset of patients are more likely to encounter complications overall. The encountered complications could be handled effectively in the PCL. With screening in place, it is safe to perform cardiac catheterization on most adults with CHD in a PCL.
abstract_id: PUBMED:32951945
Cardiac catheterization for hemoptysis in a Children's Hospital Cardiac Catheterization Laboratory: A 15 year experience. Objectives: The aim of this study was to evaluate the diagnostic utility of cardiac catheterization and the efficacy of transcatheter intervention in patients with hemoptysis.
Background: Cardiac catheterization may play a role in identifying the etiologies of hemoptysis with the potential for transcatheter intervention.
Methods: This was a retrospective study of all the patients who were brought to the pediatric cardiac catheterization laboratory for the indication of hemoptysis over a 15-year period (2006-2020).
Results: Twenty-one patients underwent 28 cardiac catheterizations. The median age was 17.4 years (range 0.3-60.0 years), and the underlying cardiac diagnoses were normal heart n = 3, pulmonary hypertension 1, heart transplant 1, pulmonary arteriovenous malformation 1, pulmonary vein disease 3, biventricular congenital heart diseases 5, and single ventricles 7. The diagnostic utility of catheterization was 81% (17/21). At two-thirds (18/28) of catheterizations, transcatheter interventions were performed in 14/21 (67%) patients: aortopulmonary collateral embolization 14, aortopulmonary and veno-venous collateral embolization 1, and pulmonary arteriovenous malformation embolization 3. Although recurrent hemoptysis was frequent (50%) post-intervention, the final effectiveness of transcatheter interventions was 79% (11/14 patients). Overall mortality was 19% (4/21), all in those presenting with massive hemoptysis.
Conclusions: Cardiac catheterization was shown to have good diagnostic utility for hemoptysis especially in patients with underlying congenital heart disease. Despite the high mortality and recurrent hemoptysis rate, transcatheter interventions were effective in our cohort.
abstract_id: PUBMED:24965688
Pulse fluoroscopy radiation reduction in a pediatric cardiac catheterization laboratory. Objective: To determine if lower starting pulse fluoroscopy rates lead to lower overall radiation exposure without increasing complication rates or perceived procedure length or difficulty.
Setting: The pediatric cardiac catheterization laboratory at University of Michigan Mott Children's Hospital.
Patients: Pediatric patients with congenital heart disease.
Design/interventions: We performed a single-center quality improvement study where the baseline pulse fluoroscopy rate was varied between cases during pediatric cardiac catheterization procedures.
Outcome Measures: Indirect and direct radiation exposure data were collected, and the perceived impact of the fluoroscopy rate and procedural complications was recorded. These outcomes were then compared among the different set pulse fluoroscopy rates.
Results: Comparing pulse fluoroscopy rates of 15, 7.5, and 5 frames per second from 61 cases, there was a significant reduction in radiation exposure between 15 and 7.5 frames per second. There was no difference in perceived case difficulty, procedural length, or procedural complications regardless of starting pulse fluoroscopy rate.
Conclusions: For pediatric cardiac catheterizations, a starting pulse fluoroscopy rate of 7.5 frames per second exposes physicians and their patients to significantly less radiation with no impact on procedural difficulty or outcomes. This quality improvement study has resulted in a significant practice change in our pediatric cardiac catheterization laboratory, and 7.5 frames per second is now the default fluoroscopy rate.
abstract_id: PUBMED:11196745
Complications of pediatric cardiac catheterization: 18-month study. Pediatric cardiac catheterization may be indicated under certain conditions, but is associated with some risk. The purpose of the study was to evaluate the complications associated with diagnostic and interventional catheterization procedures done over an 18-month period in our laboratory. Of the 230 cardiac catheterizations, 204 were solely diagnostic in nature. Eleven percent were interventional catheterizations including aortic and pulmonary valvuloplasties and balloon atrial septostomy. Six percent of the patients were adults with grown-up congenital heart disease (GUCH). The median age was 34 months excluding the GUCH group. There was one death below one year of age (0.4% mortality) occurring six hours after the diagnostic catheterization; it was attributed to the underlying disease. There were eight complications (3.4%) that we would consider serious, including atrial flutter, ventricular tachycardia, severe hypercyanotic spell, seizure, transient complete heart block, peripheral vascular injury which resulted in pseudoaneurysm formation of the femoral artery requiring surgical intervention, and transient pulse loss. When catheterization is necessary, it should be carried out as efficiently as possible with awareness of conditions that probably increase the risk of a clinically important event. Although patients undergoing cardiac catheterization are now younger and have more complex cardiac abnormalities, the procedure seems to have become safer when compared to previous literature.
abstract_id: PUBMED:23006871
Caring for the adult with congenital heart disease in an adult catheterization laboratory by pediatric interventionalists--safety and efficacy. Objective: The purpose of this study is to describe the outcomes of cardiac catheterizations performed by pediatric interventional cardiologists in an adult catheterization laboratory on adult patients with congenital heart disease (CHD).
Background: With improved survival rates, the number of adults with CHD increases by ∼5%/year; this population often requires cardiac catheterization.
Methods: From January 2005 to December 2009, two groups of patients were identified, an adult group (>21 years) and an adolescent group (13-21 years), who had catheterizations performed by pediatric interventional staff.
Results: Fifty-seven catheterizations were performed in 53 adults, while 59 were performed in 47 adolescents. The male to female ratio differed significantly between groups; only 15/53 (28%) of adults were male vs. 26/47 (55%) of adolescents (P =.006). Among adults, 27 had previously corrected CHD, 16 with atrial septal defect (ASD), and six with patent foramen ovale (PFO). This differed significantly from the adolescents, where only 30 had previously corrected CHD, seven with ASD, and one with PFO (P =.012). Among adults who were catheterized, interventions were performed on 28/53 (53%). All interventions were successful and included ASD/PFO closure, patent ductus arteriosus occlusion, coarctation dilation, pulmonary artery dilations, and one saphenous vein graft aneurysm closure. Nineteen adults had coronary angiography performed by adult interventionalists in consult with pediatric interventionalists. Two complications occurred among adults (3.8%) vs. one complication (2%; P = 1) among adolescents. No femoral vessel complications or catheterization-associated mortality occurred.
Conclusions: Cardiac catheterizations can be performed effectively and safely in adults with CHD by pediatric interventional cardiologists in an adult catheterization laboratory.
abstract_id: PUBMED:37984324
Assessing the feasibility of using the antecubital vein to perform right heart catheterization in children and adults with congenital heart disease: a retrospective, observational single-center study. Background: Right heart catheterization (RHC) usually is performed via the femoral vein or the internal jugular vein. However, the antecubital fossa vein is a valid venous access, and it has become increasingly popular to perform right heart catheterization utilizing this access.
Methods: A retrospective, observational study was conducted to describe use of the antecubital fossa vein for right heart catheterization in adults and children with congenital heart disease (CHD). Patients who had undergone RHC via antecubital fossa vein at the authors' hospital between September 2019 and December 2022 were included. The outcomes studied were procedural failure and procedure-related adverse events.
Results: Fifty-two patients with CHD underwent right cardiac catheterization via an upper arm vein. RHC could not be completed via the upper arm vein in only 2 patients (3.8%). Only 1 patient developed a minor adverse event. No irreversible and/or life-threatening adverse events were detected.
Conclusions: The upper arm veins are safe and effective for performing RHC in children and adults with CHD. This approach demonstrates a high rate of technical success with few, mild complications.
abstract_id: PUBMED:24623940
Diagnostic pediatric cardiac catheterization: Experience of a tertiary care pediatric cardiac centre. Background: Cardiac catheterization was considered the gold standard for confirming the diagnosis and resolving various management issues in congenital heart disease. In spite of the development of various noninvasive tools for the investigation of cardiac disorders, diagnostic catheterization still holds an important place in pediatric patients.
Methods: 300 consecutive diagnostic cardiac catheterization performed since April 2007 were included in this study. The study was undertaken to evaluate the profile of patients undergoing diagnostic cardiac catheterization, its results, assess its safety and its contribution toward solving various management issues.
Result & Conclusion: Children who underwent cardiac catheterization ranged in weight from 1.6 kg to 35 kg and in age from 0 days to 12 years. The information obtained was of great importance for further management in over 90% of cases. Although cardiac catheterization is an invasive procedure, it proved to be quite safe even in the smallest babies.
abstract_id: PUBMED:23345073
Adverse events rates and risk factors in adults undergoing cardiac catheterization at pediatric hospitals--results from the C3PO. Objective: Determine the frequency and risk factors for adverse events (AE) for adults undergoing cardiac catheterization at pediatric hospitals.
Background: Adult catheterization AE rates at pediatric hospitals are not well understood. The Congenital Cardiac Catheterization Project on Outcomes (C3PO) collects data on all catheterizations at eight pediatric institutions.
Methods: Adult (≥ 18 years) case characteristics and AE were reviewed and compared with those of pediatric (<18 years) cases. Cases were classified into procedure risk categories from 1 to 4 based on highest risk procedure/intervention performed. AE were categorized by level of severity. Using a multivariate model for high severity AE (HSAE), standardized AE rates (SAER) were calculated by dividing the observed rates of HSAE by the expected rates.
Results: 2,061 cases (15% of total) were performed on adults and 11,422 cases (85%) were performed on children. Adults less frequently underwent high-risk procedure category cases than children (19% vs. 30%). AE occurred in 10% of adult cases and 13% of pediatric cases (P < 0.001). HSAE occurred in 4% of adult and 5% of pediatric cases (P = 0.006). Procedure-type risk category (Category 2, 3, 4 OR = 4.8, 6.0, 12.9) and systemic ventricle end diastolic pressure ≥ 18 mm Hg (OR 3.1) were associated with HSAE, c statistic 0.751. There were no statistically significant differences in SAER among institutions.
Conclusions: Adults undergoing catheterization at pediatric hospitals encountered AE less frequently than children did. The congenital heart disease adjustment for risk method for adults with congenital heart disease is a new tool for assessing procedural risk in adult patients.
abstract_id: PUBMED:24968708
Direct measurement of a patient's entrance skin dose during pediatric cardiac catheterization. Children with complex congenital heart diseases often require repeated cardiac catheterization; however, children are more radiosensitive than adults. Therefore, radiation-induced carcinogenesis is an important consideration for children who undergo those procedures. We measured entrance skin doses (ESDs) using radio-photoluminescence dosimeter (RPLD) chips during cardiac catheterization for 15 pediatric patients (median age, 1.92 years; males, n = 9; females, n = 6) with cardiac diseases. Four RPLD chips were placed on the patient's posterior and right side of the chest. Correlations between maximum ESD and dose-area products (DAP), total number of frames, total fluoroscopic time, number of cine runs, cumulative dose at the interventional reference point (IRP), body weight, chest thickness, and height were analyzed. The maximum ESD was 80 ± 59 (mean ± standard deviation) mGy. Maximum ESD closely correlated with both DAP (r = 0.78) and cumulative dose at the IRP (r = 0.82). Maximum ESD for coiling and ballooning tended to be higher than that for ablation, balloon atrial septostomy, and diagnostic procedures. In conclusion, we directly measured ESD using RPLD chips and found that maximum ESD could be estimated in real-time using angiographic parameters, such as DAP and cumulative dose at the IRP. Children requiring repeated catheterizations would be exposed to high radiation levels throughout their lives, although treatment influences radiation dose. Therefore, the radiation dose associated with individual cardiac catheterizations should be analyzed, and the effects of radiation throughout the lives of such patients should be followed.
abstract_id: PUBMED:10955082
1000 cardiac catheterizations in congenital heart disease. Over the past 15 years, percutaneous therapeutic cardiac catheterization has become increasingly important in the treatment of congenital heart disease. We describe our experience in 1000 such catheterizations between 1993 and 1997. 55% were in 1-12-year-olds; only 20% were in patients younger than 1 year old and 11.3% were in adults with congenital heart defects. In about 50% it was at least a second cardiac catheterization. Overall, there were 425 therapeutic cardiac catheterizations, increasing from 33% in the first 200 procedures to 63% in the last 200. We performed 30 different types of therapeutic catheterizations: 23.3% were valvular dilations, 21.4% vessel angioplasties, 36.9% closure procedures, 9.2% electrophysiological procedures, and 9.2% miscellaneous. In 31.3% of therapeutic catheterizations we used 12 new procedures. Minor complications occurred in 8.5% and major in 0.6%; most complications were successfully treated or were self-limited and there was no residual damage. In this report the current role of each type of major catheterization is discussed on the basis of our experience. Further development of technology for lesions not amenable to currently available transcatheter methods, and longer follow-up for current techniques, will consolidate the role of therapeutic cardiac catheterization in congenital heart disease.
Answer: Based on the information provided in the abstracts, it appears that performing cardiac catheterizations on adults with congenital heart disease (CHD) in a pediatric catheterization laboratory (PCL) is generally safe. A retrospective chart review of 576 procedures on 436 adult patients with CHD in a PCL showed a complication rate of 10.6%, with 19 considered major and 42 minor. The study concluded that the complication rate for adults with CHD in a PCL is similar to that of children with CHD undergoing cardiac catheterization, and that the complications could be effectively managed in the PCL. It was noted that older patients (≥45 years) had a higher occurrence of complications compared to younger patients (<45 years), but with appropriate screening, cardiac catheterization on most adults with CHD in a PCL is safe (PUBMED:16216015).
Additionally, another study described the outcomes of cardiac catheterizations performed by pediatric interventional cardiologists in an adult catheterization laboratory on adult patients with CHD. The study found that all interventions were successful, and only two complications occurred among adults (3.8%), with no femoral vessel complications or catheterization-associated mortality. This suggests that cardiac catheterizations can be performed effectively and safely in adults with CHD by pediatric interventional cardiologists in an adult catheterization laboratory (PUBMED:23006871).
Furthermore, a study assessing the feasibility of using the antecubital vein for right heart catheterization in children and adults with CHD reported a high percentage of technical success and few mild complications, indicating that this approach is safe and effective (PUBMED:37984324).
In summary, the evidence from these studies supports the safety of performing cardiac catheterizations on adults with CHD in pediatric catheterization laboratories, provided that the procedures are carried out by experienced personnel and with appropriate patient selection and management of complications.
Instruction: Examining the etiology of associations between perceived parenting and adolescents' alcohol use: common genetic and/or environmental liabilities?
Abstracts:
abstract_id: PUBMED:20409424
Examining the etiology of associations between perceived parenting and adolescents' alcohol use: common genetic and/or environmental liabilities? Objective: Although twin studies yield consistent evidence of heritability in the frequency of adolescent alcohol use, parallel findings on parenting are more equivocal, supporting the role of genes in affective parenting but not as supportive in relation to parental control. The extent to which these patterns generalize to more nuanced forms of parenting is less clear. Furthermore, despite evidence linking parents' socialization practices with adolescents' alcohol-use behaviors, this study is the first attempt to determine the sources of this covariation.
Method: The present study used epidemiological data from 4,729 adolescent twins (2,329 females) to examine the nature of associations between their perceptions of parenting at age 12 and frequencies of alcohol use at age 14. Univariate analyses assessed the relative contributions of genetic and environmental influences on variability within six domains of parenting. Among those displaying consistent evidence of heritability, bivariate models were used to explore sources of covariation with drinking frequency.
Results: Univariate models suggested both genetic and environmental sources of variability across parenting phenotypes, including sex-specific sources of genetic liability within one dimension of parenting. However, despite evidence for heritability, bivariate analyses indicated that the covariation between alcohol use and perceptions of parental knowledge and warmth were entirely mediated through shared environmental pathways.
Conclusions: This study elucidates genetic and environmental sources of variability within individual parenting behaviors and characterizes the etiological nature of the association between parental socialization and adolescent alcohol use. The identification of specific and modifiable socialization practices will be crucial for the future development of parent-based prevention/intervention strategies.
abstract_id: PUBMED:32302254
Psychological Well-being and Perceived Parenting Style among Adolescents. The family of an adolescent assists in shaping the adolescent's behavior and psychological well-being throughout life. In order for adolescents to maintain an identity, they require security and affection from their parents. To assess the psychological well-being and perceived parenting style of adolescents, and to determine the relationship between psychological well-being and perceived parenting style, a correlational survey was conducted in five randomly selected schools in Southern India with 554 adolescents studying in the 8th and 9th grades. A self-administered perceived parenting scale and a standardized Ryff scale for the assessment of psychological well-being were adopted to collect data, which were analyzed using SPSS. Without gender differences, the majority (51%) had high psychological well-being, while 49% revealed low psychological well-being. The majority (95.5%) had a purpose in life and positive relations with others. Most (93.2%) of the adolescents perceived their parents as authoritative. A moderately positive relationship was found between psychological well-being and authoritarian and permissive parenting styles, and a negative correlation between psychological well-being and a neglectful parenting style. The study concluded that parenting styles have an influence on adolescents' psychological well-being. Among the four parenting styles, authoritative parenting is warm and steady and hence contributes to the psychological development of adolescents. These adolescents also maintained positive relations with others and had a purpose in life. Adolescents who perceived their parents as authoritarian had decreased autonomy, and those who perceived their parents as permissive had diminished personal growth.
abstract_id: PUBMED:35140649
Adolescents' Filial Piety Attitudes in Relation to Their Perceived Parenting Styles: An Urban-Rural Comparative Longitudinal Study in China. The Dual Filial Piety Model (i.e., the model of reciprocal and authoritarian filial piety) offers a universally applicable framework for understanding essential aspects of intergenerational relations across diverse cultural contexts. The current research aimed to examine two important issues concerning this model that have lacked investigation: the roles of parental socialization (i.e., authoritative and authoritarian parenting styles) and social ecologies (i.e., urban vs. rural settings that differ in levels of economic development and modernization) in the development of reciprocal and authoritarian filial piety attitudes. To this end, a two-wave short-term longitudinal survey study was conducted among 850 early adolescents residing in urban (N = 314, 49.4% females, mean age = 13.31 years) and rural China (N = 536, 45.3% females, mean age = 13.72 years), who completed questionnaires twice, 6 months apart, in the spring semester of grade 7 and the fall semester of grade 8. Multigroup path analyses revealed bidirectional associations over time between perceived parenting styles and adolescents' filial piety attitudes, with both similarities and differences in these associations between urban and rural China. In both settings, perceived authoritative parenting predicted increased reciprocal filial piety 6 months later, whereas perceived authoritarian parenting predicted reduced reciprocal filial piety among urban (but not rural) adolescents over time. Moreover, in both settings, reciprocal filial piety predicted higher levels of perceived authoritative parenting and lower levels of perceived authoritarian parenting 6 months later, with the latter effect being stronger among urban (vs. rural) adolescents. Adolescents' perceived parenting styles did not predict their authoritarian filial piety over time; however, authoritarian filial piety predicted higher levels of perceived authoritative parenting (but not perceived authoritarian parenting) 6 months later in both settings. The findings highlight the roles of transactional socialization processes between parents and youth as well as social ecologies in the development of filial piety, thus advancing the understanding of how the universal human motivations underlying filial piety may function developmentally across different socioeconomic and sociocultural settings.
abstract_id: PUBMED:32638232
Developmental Changes in Secrecy During Middle Adolescence: Links with Alcohol Use and Perceived Controlling Parenting. Adolescence is a developmental period characterized by fundamental transformations in parent-child communication. Although a normative shift in adolescents' secrecy seems to occur in parallel to changes in their drinking behaviors and in their perceptions of the relationship with their parents, relatively little attention has been paid to their associations over time. The present longitudinal study examined the associations between developmental changes in adolescents' secrecy, alcohol use, and perceptions of controlling parenting during middle adolescence, using a latent growth curve approach. At biannual intervals for two consecutive years, a sample of 473 Swiss adolescents (64.7% girls) beginning their last year of mandatory school (mean age at Time 1 = 14.96) completed self-report questionnaires about secrecy, alcohol use, and perceived controlling parenting. The results of the univariate models showed mean level increases in secrecy and alcohol use, but stable levels in controlling parenting over time. The results of a parallel-process model indicated that higher initial levels of secrecy were associated with higher initial levels of alcohol use and perceived controlling parenting, while an increase in secrecy was associated with an increase in alcohol use and an increase in perceived controlling parenting over time. In addition, adolescents who reported the lowest initial levels of perceived controlling parenting showed a greater increase in secrecy over time and those with high initial levels of secrecy reported a relative decrease in perceived controlling parenting. Finally, adolescents with the lowest initial levels of alcohol use experienced a greater increase in secrecy. Overall, these results indicate that the development of adolescents' secrecy is associated with the development of their drinking habits and perceptions of family relationships in dynamic ways.
abstract_id: PUBMED:24465290
Alcohol consumption among Chilean adolescents: Examining individual, peer, parenting and environmental factors. Aims: This study examined whether adolescents from Santiago, Chile who had never drunk alcohol differed from those who had drunk alcohol but who had never experienced an alcohol-related problem, as well as from those who had drunk and who had experienced at least one alcohol-related problem on a number of variables from four domains - individual, peers, parenting, and environmental.
Design: Cross-sectional.
Setting: Community based sample.
Participants: 909 adolescents from Santiago, Chile.
Measurements: Data were analyzed with multinomial logistic regression to compare adolescents who had never drunk alcohol (non-drinkers) with i) those that had drunk but who had experienced no alcohol-related problems (non-problematic drinkers) and ii) those who had drunk alcohol and had experienced at least one alcohol-related problem (problematic drinkers). The analyses included individual, peer, parenting, and environmental factors while controlling for age, sex, and socioeconomic status.
Findings: Compared to non-drinkers, both non-problematic and problematic drinkers were older, reported having more friends who drank alcohol, greater exposure to alcohol ads, lower levels of parental monitoring, and more risk-taking behaviors. In addition, problematic drinkers placed less importance on religious faith to make daily life decisions and had higher perceptions of neighborhood crime than non-drinkers.
Conclusions: Prevention programs aimed at decreasing problematic drinking could benefit from drawing upon adolescents' spiritual sources of strength, reinforcing parental tools to monitor their adolescents, and improving environmental and neighborhood conditions.
abstract_id: PUBMED:37080670
Perceived parenting practices associated with African American adolescents' future expectations. The current chapter investigated perceived parenting practices associated with future expectations in a sample of African American adolescents and how these relations varied across self-processes (i.e., hope, self-esteem, racial identity). Specifically, 358 low-income, African American high school students were surveyed to examine the role of perceived parenting practices in youth's aspirations and expectations. Structural equation modeling (SEM) revealed that general parenting practices (i.e., support, monitoring, and consistent discipline) and racial socialization (i.e., preparation for bias, cultural socialization) significantly predicted positive future expectations, particularly for adolescents with low self-esteem. Implications of these results and directions for future research are discussed. Importantly, the results contribute to understanding of the developmental cascades of parenting practices and racial socialization in the everyday experiences of African American populations.
abstract_id: PUBMED:38031571
Patterns of indulgent parenting and adolescents' psychological development. Objective: This study aimed to extend the current literature by examining the patterns of indulgent parenting of both mothers and fathers and their associations with adolescents' basic psychological needs satisfaction, self-control, and self-efficacy.
Background: Indulgent parenting could be harmful for the development of psychological needs satisfaction and cognitive abilities when adolescents seek autonomy and gain emotional regulatory skills. Yet research is limited on investigating the patterns of indulgent parenting and their relationships to adolescents' psychological development.
Method: The sample consisted of 268 adolescents in Grades 9 to 11 from several high schools in a southeastern region of the United States. Participants took an online survey about their perceptions of parental indulgent parenting, their psychological development, and demographic information.
Results: Results from multivariate mixture modeling suggested four distinct classes of perceived maternal and paternal indulgence. Further, these classes demonstrated differential associations with adolescents' basic psychological needs satisfaction, self-control, and self-efficacy.
Conclusion: The findings revealed different patterns of perceived indulgent parenting practices. Further, these findings also highlighted the negative role of perceived behavioral indulgence on adolescents' psychological development.
Implications: Implications for interventions targeted at parenting and adolescent development were noted.
abstract_id: PUBMED:37879412
Negative parenting style and depression in adolescents: A moderated mediation of self-esteem and perceived social support. Background: Negative parenting style has been identified as a risk factor for depression in previous research. However, the underlying mechanism linking negative parenting style and depression remains unclear. This study aimed to investigate the mediating role of self-esteem and the moderating role of perceived social support in the association between negative parenting style and depression among adolescents.
Methods: A total of 14,724 Chinese adolescents were asked to complete the questionnaires including Parenting Style scale, Multidimensional Scale of Perceived Social Support, Self-esteem scale, and Patient Health Questionnaire 9-item scale. Mediation and moderation analyses were carried out in SPSS 25.0 macro PROCESS.
Results: Self-esteem mediated the relationship between negative parenting styles and adolescent depression (β = 0.113, SE = 0.004, p < 0.001). Perceived social support moderated the direct effect of negative parenting style on depression (β = -0.076, SE = 0.009, p < 0.001). Moreover, perceived social support moderated the indirect effect of negative parenting style on self-esteem (β = -0.023, SE = 0.007, p < 0.001) and the indirect effect of self-esteem on depression (β = 0.070, SE = 0.009, p < 0.001) in the moderated mediation model.
Limitations: Cross-sectional research design was used in the study. All measures were based on participant self-report.
Conclusion: This study reveals the underlying mechanism with regard to the influence of negative parenting style on depression through self-esteem and perceived social support. Findings provide a theoretical basis and practical implications for prevention and intervention programs to protect adolescents' mental health.
abstract_id: PUBMED:28885599
Family Social Environment and Parenting Predictors of Alcohol Use among Adolescents in Lithuania. The role of the family as the social environment in shaping adolescent lifestyle has recently received substantial attention. This study was focused on investigating the association between familial and parenting predictors and alcohol use in school-aged children. Adolescents aged 13- and 15-year from a representative sample (N = 3715) of schools in Lithuania were surveyed during the spring of 2014. The methodology of the cross-national Health Behaviour in School-aged Children (HBSC) study was applied. HBSC international questionnaires were completed in the classroom anonymously for obtaining information about drinking of alcoholic beverages and family characteristics-family's affluence and structure, style of communication in the family, parenting style, parental monitoring, family time together, etc. Univariate and multivariate logistic regression analysis was applied for assessment of the association between familial variables and weekly alcohol use. Analysis has demonstrated that adolescents from non-intact families tended to show significantly higher risk of being weekly drinkers (OR = 1.69; 95% CI: 1.30-2.19). The following parenting factors were associated with weekly use of alcohol: father's and mother's low monitoring, father's authoritarian-repressive and mother's permissive-neglectful parenting style. Frequent family time together and frequent electronic media communication with parents showed an inverse negative effect than was predicted. The study suggests that alcohol misuse among adolescents could be associated with a non-intact family structure as well as with complex family and parenting determinants which should be investigated more thoroughly by further studies.
abstract_id: PUBMED:37389715
Perceived Parenting Discrepancy Profiles, Feelings about Language Brokering, and Internalizing Symptoms among Mexican-origin Adolescents in Immigrant Families. While different patterns of perceived parenting discrepancy among mother-adolescent dyads have been shown to be associated with adolescent internalizing symptoms, little is known about the pathway underlying such associations, particularly among immigrant families. The current study considered one culturally salient form of mother-adolescent communication, language brokering (i.e., adolescents translating and interpreting between host and heritage languages for mothers), in order to investigate its mediating role based on two waves of longitudinal data on Mexican-origin immigrant families. Wave 1 included 604 adolescents (54% female; Mage = 12.92, SD = 0.92) and 595 mothers (Mage = 38.89, SD = 5.74); Wave 2 was collected one year later with data from 483 adolescents. Perceived parenting discrepancy patterns at Wave 1 were captured by three profiles based on the levels of both mothers' and adolescents' perceived positive parenting (i.e., Mother High, Adolescent High, and Both High). Compared to the other two profiles, adolescents who reported much lower positive parenting than mothers at Wave 1 (i.e., Mother High) experienced more negative feelings about brokering at Wave 2, relating to more anxiety. Being in the Mother High (vs. Both High) group was also directly related to more depressive symptoms one year later. This study highlights the importance of considering culturally salient forms of communication, such as language brokering, when designing family-level interventions to reduce adolescents' internalizing symptoms by building agreement on high positive parenting among mother-adolescent dyads from immigrant families.
Answer: The etiology of associations between perceived parenting and adolescents' alcohol use involves both genetic and environmental factors. A study using epidemiological data from 4,729 adolescent twins found that while there are genetic and environmental sources of variability across parenting phenotypes, the covariation between alcohol use and perceptions of parental knowledge and warmth was entirely mediated through shared environmental pathways (PUBMED:20409424). This suggests that the environment shared by twins, which includes parenting practices, plays a significant role in the relationship between parenting and adolescent alcohol use.
Moreover, other studies have highlighted the influence of parenting styles on adolescents' behaviors and psychological well-being. For instance, adolescents who perceived their parents as authoritative had better psychological well-being, while those who perceived their parents as authoritarian had decreased autonomy, and those who perceived their parents as permissive had diminished personal growth (PUBMED:32302254). Additionally, perceived parenting styles have been linked to adolescents' filial piety attitudes, with authoritative parenting predicting increased reciprocal filial piety, and authoritarian parenting predicting reduced reciprocal filial piety among urban adolescents (PUBMED:35140649).
Parenting practices also play a role in adolescents' development of secrecy, which is associated with alcohol use and perceived controlling parenting (PUBMED:32638232). Furthermore, environmental factors such as exposure to alcohol ads, lower levels of parental monitoring, and neighborhood conditions have been associated with alcohol consumption among Chilean adolescents (PUBMED:24465290).
In summary, the associations between perceived parenting and adolescents' alcohol use are influenced by a complex interplay of genetic and environmental factors, with environmental factors, particularly shared family environments and parenting practices, playing a crucial role in mediating these associations.
Instruction: Refill compliance in type 2 diabetes mellitus: a predictor of switching to insulin therapy?
Abstracts:
abstract_id: PUBMED:12642975
Refill compliance in type 2 diabetes mellitus: a predictor of switching to insulin therapy? Objective: To assess whether switching to insulin therapy in patients with type 2 diabetes mellitus is associated with medication refill compliance of oral hypoglycemic agents.
Research Design And Methods: The PHARMO Record Linkage System was used as data source for this study. Patients with newly treated type 2 diabetes mellitus were defined as subjects in whom oral hypoglycemic therapy was initiated between 1991 and 1998. We performed a matched case-control study in this cohort. Cases were patients who switched to insulin therapy. Date of switching in the case was defined as the index date. Controls were subjects still on oral therapy on the index date, matched on duration of diabetes and calendar time. We measured the medication refill compliance in the year starting 18 months before the index date and calculated various compliance indices.
Results: In total, 411 cases and 411 matched controls were identified. Cases suffered more often from more severe comorbidity and used a higher number of oral hypoglycemic agents and concomitant non-diabetic drugs. The overall compliance rate did not differ significantly between cases and controls; the adjusted odds ratio (OR) was 1.3 (CI 95% 0.6-2.8). After performing multivariate logistic regression modeling, age at onset of diabetes, gender, comedication, combination therapy, and daily dosage frequency were independently related to switching.
Conclusions: We were unable to confirm the hypothesis that noncompliance with treatment is more prevalent in patients with secondary failure. Other variables, like comorbidity and disease-related factors, seem to play a more important role in switching to insulin therapy.
abstract_id: PUBMED:35784534
Tolerability and Effectiveness of Switching to Dulaglutide in Patients With Type 2 Diabetes Inadequately Controlled With Insulin Therapy. Aims: Glucagon-like peptide 1 (GLP-1) receptor agonists have demonstrated strong glycemic control. However, few studies have investigated the effects of switching from insulin to GLP-1 receptor agonists. We aimed to investigate, using real-world data, whether switching to dulaglutide improves glycemic control in patients with type 2 diabetes mellitus (T2D) inadequately controlled with conventional insulin treatment.
Materials And Methods: We retrospectively evaluated 138 patients with T2D who were switched from insulin to dulaglutide therapy. We excluded 20 patients who dropped out during the follow-up period. The participants were divided into two groups according to whether they resumed insulin treatment at 6 months after switching to a GLP-1 receptor agonist (group I) or not (group II). A multiple logistic regression analysis was performed to evaluate the parameters associated with the risk of resuming insulin after replacement with dulaglutide.
Results: Of 118 patients initiated on the GLP-1 receptor agonist, 62 (53%) resumed insulin treatment (group I), and 53 (47%) continued with GLP-1 receptor agonists or switched to oral anti-hypoglycemic agents (group II). Older age, a higher insulin dose, and lower postprandial glucose levels while switching to the GLP-1 receptor agonist were associated with failure to switch to the GLP-1 receptor agonist from insulin.
Conclusions: A considerable proportion of patients with T2D inadequately controlled with insulin treatment successfully switched to the GLP-1 receptor agonist. Younger age, a lower dose of insulin, and a higher baseline postprandial glucose level may be significant predictors of successful switching from insulin to GLP-1 receptor agonist therapy.
abstract_id: PUBMED:29260929
Switching basal insulins in type 2 diabetes: practical recommendations for health care providers. Basal insulin remains the mainstay of treatment of type 2 diabetes when diet changes and exercise in combination with oral drugs and other injectable agents are not sufficient to control hyperglycemia. Insulin therapy should be individualized, and several factors influence the choice of basal insulin; these include pharmacological properties, patient preferences, and lifestyle, as well as health insurance plan formularies. The recent availability of basal insulin formulations with longer durations of action has provided further dosing flexibility; however, patients may need to switch agents throughout therapy for a variety of personal, clinical, or economic reasons. Although a unit-to-unit switching approach is usually recommended, this conversion strategy may not be appropriate for all patients and types of insulin. Glycemic control and risk of hypoglycemia must be closely monitored by health care providers during the switching process. In addition, individual changes in care and formulary coverage need to be adequately addressed in order to enable a smooth transition with optimal outcomes.
abstract_id: PUBMED:17823765
Refill adherence of antihyperglycaemic drugs related to glucose control (HbA1c) in patients with type 2 diabetes. The aim of this study was to examine a potential association between: (1) refill adherence to antihyperglycaemic drugs and glucose control, and (2) adherence to antihyperglycaemic and cardiovascular drugs for the same patients. Consecutive patients with type 2 diabetes at six Swedish health centres were included. Refill adherence was determined from repeat prescriptions. Satisfactory refill adherence was defined as refills covering ≥80% of the prescribed treatment time. A total of 994 prescriptions were collected from 422 patients; 346 patients had antihyperglycaemic drugs (mean HbA1c 6.5%) and 76 were on diet and exercise but not on drugs (mean HbA1c 6.2%) (P = 0.0098). A total of 257 patients (74%) had satisfactory refill adherence. Mean HbA1c for the adherent patients was 6.5% and for the non-adherent patients 6.8% (P = 0.025). For patients on insulin only, 69% had satisfactory refill adherence with mean HbA1c 6.6% compared to 7.3% (P = 0.005) for the non-adherent patients. Ninety-two percent of the patients with satisfactory refill adherence to antihyperglycaemic agents were also adherent to cardiovascular drugs compared to 62% among those who were non-adherent to antihyperglycaemic drugs (P < 0.001). Patients with satisfactory refill adherence have lower HbA1c levels and higher adherence to cardiovascular drugs than non-adherent patients.
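As an illustration of how a refill-coverage measure of this kind can be computed from dispensing records, the sketch below sums days of dispensed supply over an observation window and applies the ≥80% cut-off; the field names and the simple summation are assumptions for illustration, not the algorithm used in the Swedish study.

from datetime import date

def refill_coverage(dispensings, start: date, end: date) -> float:
    """Fraction of the prescribed treatment time covered by dispensed supply.
    `dispensings` is a list of (dispense_date, days_supply) tuples; supply is
    simply summed and capped at the window length, a simplification of more
    careful medication-possession-ratio algorithms."""
    window_days = (end - start).days
    covered = sum(days for day, days in dispensings if start <= day <= end)
    return min(covered, window_days) / window_days

# Example: three 90-day refills over a one-year window give roughly 0.74,
# below the 0.80 threshold used above to define satisfactory refill adherence.
refills = [(date(2023, 1, 10), 90), (date(2023, 4, 20), 90), (date(2023, 8, 1), 90)]
is_adherent = refill_coverage(refills, date(2023, 1, 1), date(2023, 12, 31)) >= 0.80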
abstract_id: PUBMED:36417158
Therapeutic Effects of Switching to Anagliptin from Other DPP-4 Inhibitors in T2DM Patients with Inadequate Glycemic Control: A Non-interventional, Single-Arm, Open-Label, Multicenter Observational Study. Introduction: The effects of switching DPP-4 inhibitors in type 2 diabetes mellitus (T2DM) patients are being widely studied. However, information of which factors affect the therapeutic response is limited. We evaluated the difference in HbA1c lowering effect by comorbidity and other variables after switching to anagliptin in patients with T2DM inadequately controlled by other DPP-4 inhibitors.
Methods: In a multicenter, open-label, single-arm, prospective observational study, patients with T2DM, HbA1c ≥ 7.0% who have taken DPP-4 inhibitors other than anagliptin, either alone or in combination (DPP-4 inhibitors + metformin/sulfonylurea (SU)/thiazolidinedione (TZD)/insulin), for at least 8 weeks were enrolled. After the switch to anagliptin, HbA1c and available clinical characteristics were determined.
Results: The change in HbA1c levels from baseline to week 12 and 24 was - 0.40% and - 0.42% in all patients. However, comparing the subgroups without and with comorbidities, the change in HbA1c levels at weeks 12 and 24 was - 0.68% and - 0.89% vs. - 0.27% and 0.22%, respectively. In addition, the proportion of patients achieving HbA1c < 7% from baseline to week 12 and 24 was increased to 70% and 70% vs. 20% and 24%, respectively. Duration of T2DM and different subtype classes of DPP-4 inhibitor did not significantly contribute to the change in HbA1c.
Conclusion: In patients with T2DM poorly controlled by other DPP-4 inhibitors, HbA1c levels were significantly decreased after switching to anagliptin. Given that the change in HbA1c was greater in patients without comorbidities than in patients with comorbidities, switching to anagliptin before adding other oral hypoglycemic agents (OHAs) may be an option in patients without comorbidities.
abstract_id: PUBMED:17632220
Metabolic impact of switching antipsychotic therapy to aripiprazole after weight gain: a pilot study. Switching antipsychotic regimen to agents with low weight gain potential has been suggested in patients who gain excessive weight on their antipsychotic therapy. In an open-label pilot study, we evaluated the metabolic and psychiatric efficacy of switching to aripiprazole in 15 (9 men, 6 women) outpatients with schizophrenia who had gained at least 10 kg on their previous antipsychotic regimen. Individuals had evaluation of glucose tolerance, insulin resistance (insulin suppression test), lipid concentrations, and psychiatric status before and after switching to aripiprazole for 4 months. A third of the individuals could not psychiatrically tolerate switching to aripiprazole. In the remaining individuals, psychiatric symptoms significantly improved with decline in Clinical Global Impression Scale (by 26%, P = 0.015) and Positive and Negative Syndrome Scale (by 22%, P = 0.023). Switching to aripiprazole did not alter weight or metabolic outcomes (fasting glucose, insulin resistance, and lipid concentrations) in the patients of whom 73% were insulin resistant and 47% had impaired or diabetic glucose tolerance at baseline. In conclusion, switching to aripiprazole alone does not ameliorate the highly prevalent metabolic abnormalities in the schizophrenia population who have gained weight on other second generation antipsychotic medications.
abstract_id: PUBMED:36596946
Glycaemic Control in People with Type 2 Diabetes Mellitus Switching from Basal Insulin to Insulin Glargine 300 U/ml (Gla-300): Results from the REALI Pooled Database. Introduction: Using pooled data from the REALI European database, we evaluated the impact of previous basal insulin (BI) type on real-life effectiveness and safety of switching to insulin glargine 300 U/ml (Gla-300) in people with suboptimally controlled type 2 diabetes.
Methods: Patient-level data were pooled from 11 prospective, open-label, 24-week studies. Participants were classified according to the type of prior BI. Of the 4463 participants, 1282 (28.7%) were pre-treated with neutral protamine Hagedorn (NPH) insulin and 2899 (65.0%) with BI analogues (BIAs), and 282 (6.3%) had undetermined prior BI.
Results: There were no meaningful differences in baseline characteristics between subgroups, except for a higher prevalence of diabetic neuropathy in the NPH subgroup (21.6% versus 7.8% with BIAs). Mean ± standard deviation haemoglobin A1c (HbA1c) decreased from 8.73 ± 1.15% and 8.35 ± 0.95% at baseline to 7.71 ± 1.09% and 7.82 ± 1.06% at week 24 in the NPH and BIA subgroups, respectively. Least squares (LS) mean change in HbA1c was - 0.85% (95% confidence interval - 0.94 to - 0.77) in NPH subgroup and - 0.70% (- 0.77 to - 0.64) in BIA subgroup, with a LS mean absolute difference between subgroups of 0.16 (0.06-0.26; p = 0.002). Gla-300 mean daily dose was slightly increased at week 24 by 0.07 U/kg/day (approximately 6 U/day) in both subgroups. Incidences of symptomatic and severe hypoglycaemia were low, without body weight change.
Conclusions: Irrespective of previous BI therapy (NPH insulin or BIAs), switching to Gla-300 improved glycaemic control without weight gain and with low symptomatic and severe hypoglycaemia incidences. However, a slightly greater glucose-lowering effectiveness was observed in people pre-treated with NPH insulin.
abstract_id: PUBMED:31842141
Is Switching from Oral Antidiabetic Therapy to Insulin Associated with an Increased Fracture Risk? Background: Observational studies showed that exposure to exogenous insulin increases fracture risk. However, it remains unclear whether the observed association is a function of the severity of underlying type 2 diabetes mellitus, complications, therapies, comorbidities, or all these factors combined. That being so, and because of the relative infrequency of these events, it is important to study this further in a large-database setting. QUESTION/PURPOSES: (1) Is switching from oral antidiabetic agents to insulin associated with an increased fracture risk? (2) How soon after switching does the increased risk appear, and for how long does this increased risk persist?
Methods: Data from healthcare utilization databases of the Italian region of Lombardy were used. These healthcare utilization databases report accurate, complete, and interconnectable information of inpatient and outpatient diagnoses, therapies, and services provided to the almost 10 million residents in the region. The 216,624 patients on treatment with oral antidiabetic therapy from 2005 to 2009 were followed until 2010 to identify those who modified their antidiabetic therapy (step 1 cohort). Among the 63% (136,307 patients) who experienced a therapy modification, 21% (28,420 patients) switched to insulin (active exposure), and the remaining 79% (107,887 patients) changed to another oral medication (referent exposure). A 1:1 high-dimension propensity score matching design was adopted for balancing patients on active and referent exposure. Matching failed for 3% of patients (926 patients), so the cohort of interest was formed by 27,494 insulin-referent couples. The latter were followed until 2012 to identify those who experienced hospital admission for fracture (outcome). A Cox proportional hazard model was fitted to estimate the hazard ratio (HR) for the outcome risk associated with active-exposure (first research question). Between-exposure comparison of daily fracture hazard rates from switching until the 24 successive months was explored through the Kernel-smoothed estimator (second research question).
Results: Compared with patients on referent exposure, those who switched to insulin had an increased risk of experiencing any fracture (HR = 1.5 [95% CI 1.3 to 1.6]; p < 0.001). The same risk was observed for hip and vertebral fractures, with HRs of 1.6 (95% CI 1.4 to 1.8; p < 0.001) and 1.8 (95% 1.5 to 2.3; p < 0.001), respectively. Differences in the daily pattern of outcome rates mainly appeared the first 2 months after switching, when the hazard rate of patients on active exposure (9 cases for every 100,000 person-days) was higher than that of patients on referent exposure (4 cases for every 100,000 person-days). These differences persisted during the remaining follow-up, though with reduced intensity.
Conclusions: We found quantitative evidence that switching from oral antidiabetic therapy to insulin is associated with an increased fracture risk, mainly in the period immediately after the start of insulin therapy. The observed association may result from higher hypoglycemia risk among patients on insulin, which leads to a greater number of falls and resulting fractures. However, although our study was based on a large sample size and highly accurate data, its observational design and the lack of clinical data suggest that future research will need to replicate or refute our findings and address the issue of causality, if any. Until then, though, prescribers and patients should be aware of this risk. Careful control of insulin dosage should be maintained and measures taken to reduce fall risk in these patients.
Level Of Evidence: Level III, therapeutic study.
abstract_id: PUBMED:29649539
Switching to insulin glargine 300 U/mL: Is duration of prior basal insulin therapy important? Aims: To assess the impact of duration of prior basal insulin therapy on study outcomes in people with type 2 diabetes mellitus receiving insulin glargine 300 U/mL (Gla-300) or insulin glargine 100 U/mL (Gla-100) for 6 months.
Methods: A post hoc patient-level meta-analysis of data from the EDITION 1 and 2 studies. Outcomes included: HbA1c, percentage of participants with ≥1 confirmed or severe hypoglycaemic event at night (00:00-05:59 h) or any time (24 h), and body weight change. Data were analysed according to duration of prior basal insulin use: >0-≤2 years, >2-≤5 years, >5 years.
Results: This meta-analysis included 1618 participants. HbA1c change from baseline to month 6 was comparable between Gla-300 and Gla-100 groups, regardless of duration of prior basal insulin therapy. The lower risk with Gla-300 versus Gla-100 of ≥1 confirmed (≤3.9 mmol/L [≤70 mg/dL]) or severe hypoglycaemic event, at night or any time (24 h), was unaffected by duration of prior basal insulin therapy. Similarly, weight change was unaffected by duration of prior basal insulin therapy.
Conclusions: Switching to Gla-300 from other basal insulin therapies provided comparable glycaemic control with lower risk of hypoglycaemia versus Gla-100, regardless of duration of prior basal insulin therapy.
Clinical Trial Registration: NCT01499082, NCT01499095 (ClinicalTrials.gov).
abstract_id: PUBMED:21051096
Predictors of switching to insulin from non-insulin therapy in patients with type 2 diabetes mellitus. Aims: To estimate the switching rate and to identify factors that predict switch from non-insulin to insulin therapy in patients with type 2 diabetes using routinely collected data from a clinical information system at Isfahan Endocrine and Metabolism Research Centre, Iran.
Methods: During the mean (SD) follow-up period of 9.3 (3.4) years, 6896 non-insulin-treated patients with type 2 diabetes at baseline have been examined to determine predictors of switches to insulin therapy. Their treatment at the last clinic visit was compared with the initial visit treatment. The mean (SD) age of participants was 51.2 (10.3) years with a mean (SD) duration of diabetes of 5.8 (5.9) years at initial registration.
Results: The switch to insulin from non-insulin therapy was 2.5 (95% confidence interval 2.4, 2.6) (2.2 men and 2.7 women) per 100 patient-years based on 64,540 patient-years of follow-up. Using a Cox's proportional hazards model, younger age at diagnosis, female gender, higher BMI and HbA(1c) were significant predictors of switch to insulin treatment.
Conclusions: These are the first estimate of switch to insulin from non-insulin therapy in Iran. Younger age at diagnosis, female gender, higher BMI and HbA(1c) at registration were identified as predictors of switching to insulin.
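As a rough arithmetic check, the reported rate and person-time imply on the order of 1,600 switch events, although the abstract does not give the raw count:

\[
\text{rate} = \frac{\text{switch events}}{\text{person-years}} \times 100
\quad\Rightarrow\quad
\text{events} \approx \frac{2.5}{100} \times 64{,}540 \approx 1{,}614
\]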
Answer: The relationship between refill compliance of oral hypoglycemic agents and the likelihood of switching to insulin therapy in patients with type 2 diabetes mellitus (T2DM) has been explored in several studies, but the findings are not entirely consistent.
One study aimed to assess whether medication refill compliance was associated with switching to insulin therapy in patients with T2DM. The study used data from the PHARMO Record Linkage System and included patients who initiated oral hypoglycemic therapy between 1991 and 1998. The researchers performed a matched case-control study, with cases being patients who switched to insulin therapy and controls being those who remained on oral therapy. The study found that overall compliance rates did not differ significantly between cases and controls, with an adjusted odds ratio of 1.3 (CI 95% 0.6-2.8). The study concluded that noncompliance with treatment was not more prevalent in patients with secondary failure, and other variables such as comorbidity and disease-related factors seemed to play a more important role in switching to insulin therapy (PUBMED:12642975).
Another study examined the association between refill adherence to antihyperglycemic drugs and glucose control (HbA1c) in patients with T2DM. The study found that patients with satisfactory refill adherence had lower HbA1c levels and higher adherence to cardiovascular drugs than non-adherent patients. This suggests that patients with good refill adherence may have better overall management of their diabetes, which could potentially influence the decision to switch to insulin therapy (PUBMED:17823765).
In summary, while refill compliance is an important aspect of diabetes management, the evidence does not strongly support it as a predictor of switching to insulin therapy. Other factors, such as comorbidities, disease severity, and individual patient characteristics, may have a more significant impact on the decision to switch from oral hypoglycemic agents to insulin therapy in patients with T2DM.
Instruction: Is there a healthy obese subtype in cohort undergoing test for endothelial dysfunction?
Abstracts:
abstract_id: PUBMED:25877140
Is there a healthy obese subtype in cohort undergoing test for endothelial dysfunction? Objective: To investigate whether there existed a healthy obese subtype.
Methods: A total of 116 healthy subjects were recruited. They were divided into 3 groups according to BMI and metabolic disorders: 40 cases of normal weight and metabolic normality (NMN), 36 cases of obesity and metabolic normality (OMN) and 40 cases of obesity and metabolic abnormality (OMA). Anthropometric parameters such as height, weight, waist circumference, hip circumference and blood pressure were recorded. Blood glucose, lipids, insulin and high-sensitivity C-reactive protein (hs-CRP) were measured. Body fat distribution was assessed by dual-energy X-ray absorptiometry (DXA). Serum von Willebrand factor (vWF), a marker of endothelial dysfunction, was measured by ELISA.
Results: Serum vWF levels in both the OMN group [(733.6 ± 86.2) U/L] and the OMA group [(809.2 ± 46.3) U/L] were higher than in the NMN group [(466.9 ± 65.3) U/L, P < 0.05], and the level in the OMA group was higher than in the OMN group (P < 0.05). Among android fat mass percentage (AFM%), BMI, waist-height ratio, waist circumference, hs-CRP, weight, hip circumference and trunk fat mass, AFM%, BMI and hs-CRP were the main influencing factors of vWF.
Conclusions: Endothelial dysfunction existed in obese adults regardless of their metabolic status. There is no healthy obese subtype. AFM%, BMI and hs-CRP are the main influencing factors of endothelial dysfunction.
abstract_id: PUBMED:19735057
Oral glucose tolerance test effects on endothelial inflammation markers in healthy subjects and diabetic patients. The aim of this study was to evaluate the effect of an oral glucose tolerance test (OGTT) on the level of endothelial dysfunction and vascular inflammation markers in healthy subjects (H) and diabetic overweight patients (D). We enrolled 256 healthy subjects and 274 type 2 diabetic patients. We evaluated blood glucose (BG), soluble intercellular adhesion molecule-1 (sICAM-1), interleukin-6 (IL-6), high-sensitivity C reactive protein (hsCRP), soluble vascular cell adhesion molecule-1 (sVCAM-1), soluble E-selectin (sE-selectin), and tumor necrosis factor-alpha (TNF-alpha) at baseline and after OGTT. We observed that BG, sICAM-1, IL-6, hs-CRP, sVCAM-1, sE-selectin, and TNF-alpha values were higher in the D group than in the H group. In this large sample of healthy adult subjects and type 2 diabetics, we observed that both groups responded to an OGTT with a significant increase in biomarkers of systemic low-grade inflammation and endothelial dysfunction such as hsCRP, IL-6, TNF-alpha, sICAM-1, sVCAM-1, and sE-selectin. Type 2 diabetics experienced, however, a more significant increase in TNF-alpha and sE-selectin.
abstract_id: PUBMED:33208829
Blood-derived extracellular vesicles isolated from healthy donors exposed to air pollution modulate in vitro endothelial cells behavior. The release of Extracellular Vesicles (EVs) into the bloodstream is positively associated with Particulate Matter (PM) exposure, which is involved in endothelial dysfunction and related to increased risk of cardiovascular disease. Obesity modifies the effects of PM exposure on heart rate variability and markers of inflammation, oxidative stress, and acute phase response. We isolated and characterized plasmatic EVs from six healthy donors and confirmed a positive association with PM exposure. We stratified for Body Mass Index (BMI) and observed an increased release of CD61+ (platelets) and CD105+ (endothelium) derived-EVs after high PM level exposure in Normal Weight subjects (NW) and no significant variations in Overweight subjects (OW). We then investigated the ability to activate endothelial primary cells by plasmatic EVs after both high and low PM exposure. NW-high-PM EVs showed an increased endothelial activation, measured as CD105+/CD62e+ (activated endothelium) EVs ratio. On the contrary, cells treated with OW-high-PM EVs showed reduced endothelial activation. These results suggest the ability of NW plasmatic EVs to communicate to endothelial cells and promote the crosstalk between activated endothelium and peripheral cells. However, this capacity was lost in OW subjects. Our findings contribute to elucidate the role of EVs in endothelial activation after PM exposure.
abstract_id: PUBMED:30100224
Prediction of myocardial infarction, stroke and cardiovascular mortality with urinary biomarkers of oxidative stress: Results from a large cohort study. Background: Oxidative stress contributes to endothelial dysfunction and is involved in the pathogenesis of cardiovascular diseases (CVD). However, large population-based cohort studies are sparse and biomarkers of oxidative stress have not been evaluated for CVD risk prediction so far.
Methods: The associations of urinary oxidized guanine/guanosine (OxGua) levels (including 8-hydroxy-2'-deoxyguanosine (8-OHdGuo)) and 8-isoprostane levels with myocardial infarction, stroke and CVD mortality were examined in a population-based cohort of 9949 older adults from Germany with 14 years of follow-up in multivariable adjusted Cox proportional hazards models.
Results: Both OxGua and 8-isoprostane levels were associated with CVD mortality independently from other risk factors (hazard ratio (HR) [95% confidence interval] of top vs. bottom tertile: 1.32 [1.06; 1.64] and 1.58 [1.27; 1.98], respectively). Moreover, CVD mortality risk prediction was significantly improved when adding the two biomarkers to the European Society of Cardiology's Systematic Coronary Risk Evaluation (ESC SCORE) tool. The area under the curve (AUC) increased from 0.739 to 0.752 (p = 0.001). In addition, OxGua levels were associated with stroke incidence (HR for 1 standard deviation increase: 1.07 [1.01; 1.13]) and 8-isoprostane levels were associated with fatal stroke incidence (HR of top vs. bottom tertile: 1.77 [1.09; 2.89]). With respect to myocardial infarction, associations were observed for both biomarkers in obese subjects (BMI ≥ 30 kg/m2).
Conclusions: These results from a large cohort study add evidence to the involvement of an imbalanced redox system to the etiology of CVD. In addition, 8-isoprostane and OxGua measurements were shown to be useful for an improved CVD mortality prediction.
abstract_id: PUBMED:26268131
The Consumption of Acai Pulp Changes the Concentrations of Plasminogen Activator Inhibitor-1 and Epidermal Growth Factor (EGF) in Apparently Healthy Women. Introduction: obesity, characterized by excess adiposity, is associated with endothelial dysfunction and a possible inflammatory state, with release of cytokines that determine endothelial function and can trigger chronic diseases. Dietary patterns are associated with the synthesis of these cytokines. Fruits such as acai, which is rich in flavonoids, have a direct and beneficial effect on the control of this inflammatory process through their antioxidant capacity.
Objective: to evaluate the effect of acai pulp consumption on the inflammatory markers, anthropometric measurements, body composition, biochemical and dietary parameters in healthy women.
Methods: forty women were divided into 25 eutrophic and 15 overweight. They consumed 200 g of acai pulp during 4 weeks. Anthropometric measurements, body composition, inflammatory markers, biochemical data, dietary intake and dietary antioxidant capacity were evaluated before and after the intervention.
Results And Discussion: after the intervention, there was a significant increase in EGF (p = 0.021) and PAI-1 (p = 0.011) in overweight women. Moreover, there was an increase in body weight (p = 0.031), body mass index (p = 0.028), percentage of truncal fat (p = 0.003) and triceps skinfold thickness (p = 0.046) in eutrophic women. However, the skinfold thickness (p = 0.018) and total body fat (p = 0.016) decreased in overweight women. There was a reduction in total protein (p = 0.049) due to a reduction in globulin (p = 0.005), but nutritional status was maintained in the eutrophic group.
Conclusion: the intake of 200 g of acai pulp modulated EGF and PAI-1 expression, possibly through effects of acai on body composition, dietary, clinical, biochemical and inflammatory parameters; it led to a redistribution and resizing of body fat in the trunk area and presumably increased visceral fat.
abstract_id: PUBMED:34587244
GlyNAC Supplementation Improves Glutathione Deficiency, Oxidative Stress, Mitochondrial Dysfunction, Inflammation, Aging Hallmarks, Metabolic Defects, Muscle Strength, Cognitive Decline, and Body Composition: Implications for Healthy Aging. Cellular increases in oxidative stress (OxS) and decline in mitochondrial function are identified as key defects in aging, but underlying mechanisms are poorly understood and interventions are lacking. Defects linked to OxS and impaired mitochondrial fuel oxidation, such as inflammation, insulin resistance, endothelial dysfunction, and aging hallmarks, are present in older humans and are associated with declining strength and cognition, as well as the development of sarcopenic obesity. Investigations on the origins of elevated OxS and mitochondrial dysfunction in older humans led to the discovery that deficiencies of the antioxidant tripeptide glutathione (GSH) and its precursor amino acids glycine and cysteine may be contributory. Supplementation with GlyNAC (combination of glycine and N-acetylcysteine as a cysteine precursor) was found to improve/correct cellular glycine, cysteine, and GSH deficiencies; lower OxS; and improve mitochondrial function, inflammation, insulin resistance, endothelial dysfunction, genotoxicity, and multiple aging hallmarks; and improve muscle strength, exercise capacity, cognition, and body composition. This review discusses evidence from published rodent studies and human clinical trials to provide a detailed summary of available knowledge regarding the effects of GlyNAC supplementation on age-associated defects and aging hallmarks, as well as discussing why GlyNAC supplementation could be effective in promoting healthy aging. It is particularly exciting that GlyNAC supplementation appears to reverse multiple aging hallmarks, and if confirmed in a randomized clinical trial, it could introduce a transformative paradigm shift in aging and geriatrics. GlyNAC supplementation could be a novel nutritional approach to improve age-associated defects and promote healthy aging, and existing data strongly support the need for additional studies to explore the role and impact of GlyNAC supplementation in aging.
abstract_id: PUBMED:30219648
Normal-range albuminuria in healthy subjects increases over time in association with hypertension and metabolic outcomes. Albuminuria is a prognostic factor for mortality and cardiovascular events, even at low levels. Changes in albumin excretion are associated with end-stage renal disease and hypertension (HTN) in cohorts including high-risk participants. We aimed to investigate the evolvement of albumin excretion in healthy individuals with normal kidney function and normoalbuminuria, and possible associations with HTN and metabolic outcomes. The study cohort consisted of 1967 healthy adults with normal kidney function (estimated glomerular filtration rate ≥ 90 mL/min/1.73 m2; urine albumin to creatinine ratio [ACR] < 30 mg/g). Delta ACR slope was calculated as ACR difference between two consecutive visits divided by the time interval. During a mean follow-up period of 93.8 months, mean delta ACR slope was 0.27 ± 3.29 mg/g/year and was higher in participants with age >40 years, obesity, a high waist circumference, higher baseline ACR, HTN, prediabetes, and metabolic syndrome. Delta ACR slopes in the upper quartile predicted diabetes (OR = 1.31, P = .027) and albuminuria (4.34, P < .001). Upper quartile of ACR slopes correlated with a higher risk for new-onset HTN (1.249, P = .031). Delta systolic and diastolic blood pressures were associated with ACR slopes in addition to age, body mass index, and baseline ACR. In conclusion, accelerated change in ACR correlates with HTN and diabetes in healthy individuals with normal kidney function and normoalbuminuria.
abstract_id: PUBMED:20051903
Microvascular dysfunction in healthy insulin-sensitive overweight individuals. Background: Obesity is associated with increased cardiovascular morbidity. The skin is a unique site allowing simple, noninvasive assessment of capillary density and endothelial function. In the present study, we measured skin capillary density and endothelial function in a group of normotensive overweight/obese nondiabetic individuals and healthy lean controls.
Methods And Results: We examined 120 relatively insulin-sensitive overweight individuals (BMI 27.9 ± 2.7 kg/m2, mean ± SD) with normal blood pressure and fasting plasma glucose and 130 lean (BMI 22.4 ± 1.7 kg/m2) controls. We used video microscopy to measure skin capillary density in the resting state and during venous occlusion. Laser Doppler flowmetry, combined with iontophoresis of acetylcholine (endothelial-dependent vasodilation) and following skin heating (endothelial-independent dilation), was performed. Resting capillary density was negatively correlated with BMI (r = -0.130, P < 0.05). Resting capillary density (mean ± SE) was lower, however nonsignificantly, in overweight as compared with the lean individuals (88.6 ± 1.5 vs. 91.8 ± 1.4, P = 0.117). Capillary recruitment, defined as the percentage increase in capillary density during venous congestion, was higher in overweight (9.5 ± 1.0%) than in controls (5.4 ± 0.9%, P = 0.003), which remained significant after adjustment for age, sex, mean arterial pressure and fasting glucose. As a consequence, capillary density during venous occlusion was similar between the groups. Endothelial-dependent and independent cutaneous vasodilation was also similar between groups. No correlations were found between capillary density and plasma markers of adiposity, inflammation or endothelial dysfunction.
Conclusion: BMI was inversely correlated with resting capillary density. This suggests a lower baseline tissue perfusion associated with higher vasomotor tone. Despite this, capillary recruitment was higher in overweight as compared with lean individuals, resulting in similar capillary density during venous congestion. Our results suggest that skin microcirculation abnormalities, in the absence of endothelial dysfunction, may be one of the earliest detectable alterations in vascular function in overweight individuals.
abstract_id: PUBMED:26620151
Pharmacokinetics, pharmacodynamics and adverse event profile of GSK2256294, a novel soluble epoxide hydrolase inhibitor. Aims: Endothelial-derived epoxyeicosatrienoic acids may regulate vascular tone and are metabolized by soluble epoxide hydrolase enzymes (sEH). GSK2256294 is a potent and selective sEH inhibitor that was tested in two phase I studies.
Methods: Single escalating doses of GSK2256294 2-20 mg or placebo were administered in a randomized crossover design to healthy male subjects or obese smokers. Once daily doses of 6 or 18 mg or placebo were administered for 14 days to obese smokers. Data were collected on safety, pharmacokinetics, sEH enzyme inhibition and blood biomarkers. Single doses of GSK2256294 10 mg were also administered to healthy younger males or healthy elderly males and females with and without food. Data on safety, pharmacokinetics and biliary metabolites were collected.
Results: GSK2256294 was well-tolerated with no serious adverse events (AEs) attributable to the drug. The most frequent AEs were headache and contact dermatitis. Plasma concentrations of GSK2256294 increased with single doses, with a half-life averaging 25-43 h. There was no significant effect of age, food or gender on pharmacokinetic parameters. Inhibition of sEH enzyme activity was dose-dependent, from an average of 41.9% on 2 mg (95% confidence interval [CI] -51.8, 77.7) to 99.8% on 20 mg (95% CI 99.3, 100.0) and sustained for up to 24 h. There were no significant changes in serum VEGF or plasma fibrinogen.
Conclusions: GSK2256294 was well-tolerated and demonstrated sustained inhibition of sEH enzyme activity. These data support further investigation in patients with endothelial dysfunction or abnormal tissue repair, such as diabetes, wound healing or COPD.
abstract_id: PUBMED:25197813
Combined effects of sleep disordered breathing and metabolic syndrome on endothelial function: the Wisconsin Sleep Cohort study. Study Objectives: To examine the combined impact of sleep disordered breathing (SDB) and metabolic syndrome (MetS) in endothelial dysfunction.
Design: Cross-sectional assessment of endothelial function, MetS and SDB status in a population-based sample.
Setting: Community-based cohort.
Participants: Participants (n = 431) from the Wisconsin Sleep Cohort were studied between 2004 and 2007. MetS was defined following the National Cholesterol Education Program criteria. SDB severity was defined by the apnea-hypopnea index ([AHI] events/h of sleep) during overnight polysomnography. Fasting lipids, glucose, and insulin were measured and homeostasis model assessment was calculated to quantify insulin resistance (HOMA-IR). Multivariable linear regression was used to assess associations of brachial artery flow-mediated dilation (FMD) with SDB, MetS, and their interaction.
Intervention: None.
Measurements And Results: Participants averaged 60.2 years of age (SD 7.8 years), 44% were female, and 97% Caucasian. MetS was present in 35%; 22% had AHI ≥ 15 events/hour. Of the no-MetS group, 7% had AHI ≥ 15 events/hour. FMD (mean 5.5%; SD 3.5%) was inversely associated with age (r = -0.16, P = 0.001) and mean brachial artery diameter (r = -0.29, P < 0.001). Multivariate linear models adjusted for CVD risk factors showed that the negative association between SDB and FMD was present among subjects with MetS (β = -0.55% per unit log2(AHI+1), P = 0.014), but not among subjects with normal metabolic function (β = 0.13, not significant); P for interaction = 0.011.
Conclusion: Sleep disordered breathing and concurrent metabolic syndrome are synergistically associated with worse endothelial function. Individuals with both of these conditions appear to be at a significantly higher risk for cardiovascular disease complications.
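For reference, the HOMA-IR index mentioned in the Measurements above is conventionally computed from fasting values as below; the abstract does not state which variant of the model was used, so this is the standard formulation rather than the study's exact calculation.

\[
\text{HOMA-IR} = \frac{\text{fasting insulin}\ (\mu\text{U/mL}) \times \text{fasting glucose}\ (\text{mmol/L})}{22.5}
\]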
Answer: Based on the provided abstracts, there does not appear to be a healthy obese subtype when it comes to endothelial dysfunction. The study described in abstract PUBMED:25877140 specifically investigated whether there was a healthy obese subtype by comparing endothelial dysfunction among groups with different BMI and metabolic statuses. The results showed that endothelial dysfunction existed in obese adults regardless of their metabolic status, indicating that there is no healthy obese subtype. The main influencing factors of endothelial dysfunction were found to be android fat mass percentage (AFM%), BMI, and high-sensitivity C-reactive protein (hs-CRP). This suggests that obesity itself, even without metabolic abnormalities, is associated with endothelial dysfunction.
Instruction: Placental weight and birthweight: does the association differ between pregnancies with and without preeclampsia?
Abstracts:
abstract_id: PUBMED:19631927
Placental weight and birthweight: does the association differ between pregnancies with and without preeclampsia? Objective: Placental weight and infant birthweight may be markers of different types of preeclampsia. We studied birthweight within placental weight percentiles in pregnancies with and without preeclampsia.
Study Design: This was a population study of 317,688 singleton births.
Results: Within the lowest 10% of placental weight, 36% of the offspring were small for gestational age (SGA) in preeclamptic pregnancies and 14% in normotensive pregnancies (relative risk, 2.6; 95% confidence interval, 2.4-2.8). Risk of SGA subsided with increasing placental weight and was negligible at >50th percentile. At low placental weights, large for gestational age (LGA) offspring were nearly nonexistent; however, at >70th percentile, LGA occurred more often in pregnancies with preeclampsia. Within the highest 10% of placental weight, 20.7% of the infants were LGA in the preeclampsia group, and 15.3% of the infants were LGA in pregnancies without preeclampsia (relative risk, 1.4; 95% confidence interval, 1.2-1.5).
Conclusion: In pregnancies with small placentas, the offspring were more often SGA in preeclamptic pregnancies and more often LGA at high placental weights. The results support the hypothesis that preeclampsia may represent different diseases, depending on placental size and infant birthweight.
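Up to rounding, the relative risks quoted above are simply the ratios of the reported proportions:

\[
\mathrm{RR}_{\mathrm{SGA}} = \frac{0.36}{0.14} \approx 2.6, \qquad
\mathrm{RR}_{\mathrm{LGA}} = \frac{0.207}{0.153} \approx 1.4
\]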
abstract_id: PUBMED:25264525
The significance of placental ratios in pregnancies complicated by small for gestational age, preeclampsia, and gestational diabetes mellitus. Objective: This study aimed to evaluate the placental weight, volume, and density, and investigate the significance of placental ratios in pregnancies complicated by small for gestational age (SGA), preeclampsia (PE), and gestational diabetes mellitus (GDM).
Methods: Two hundred and fifty-four pregnant women were enrolled from August 2005 through July 2013. Participants were divided into four groups: control (n=82), SGA (n=37), PE (n=102), and GDM (n=33). The PE group was classified as PE without intrauterine growth restriction (n=65) and PE with intrauterine growth restriction (n=37). Birth weight, placental weight, placental volume, placental density, and placental ratios including birth weight/placental weight ratio (BPW) and birth weight/placental volume ratio (BPV) were compared between groups.
Results: Birth weight, placental weight, and placental volume were lower in the SGA group than in the control group. However, the BPW and BPV did not differ between the two groups. Birth weight, placental weight, placental volume, BPW, and BPV were all significantly lower in the PE group than in the control group. Compared with the control group, birth weight, BPW, and BPV were higher in the GDM group, whereas placental weight and volume did not differ in the two groups. Placental density was not significantly different among the four groups.
Conclusion: Placental ratios based on placental weight, placental volume, placental density, and birth weight are helpful in understanding the pathophysiology of complicated pregnancies. Moreover, they can be used as predictors of pregnancy complications.
abstract_id: PUBMED:28971718
Evaluation of the relation between placental weight and placental weight to foetal weight ratio and the causes of stillbirth: a retrospective comparative study. The aim of the present study was to evaluate the clinical importance of placental weight (PW) and placental weight to foetal weight (PW/FW) ratio according to maternal characteristics, pathological conditions in obstetrics and the causes of foetal death by category in stillbirths. The results of autopsies and placental histopathological examinations for 145 singleton stillbirths were reviewed retrospectively. Pathological features of the placenta were significantly associated with lower PW compared to the group with no pathological placental parameters (230 grams versus 295 grams, p = .045). Foetal growth restriction (FGR) with pre-eclampsia (PE) was accompanied by significantly lower FW, PW and PW/FW compared to FGR cases without PE (1045 grams versus 1405 grams, p = .026, 200 grams versus 390 grams, p = .006 and .19 versus .24, p = .037, respectively), whereas a similar trend was not observed in the non-FGR pregnancies complicated by PE. Oligohydramnios was accompanied by lower foetal weight compared to those who had normal amount of amniotic fluid (650 grams versus 1400 grams, p = .006). Among the clinical factors, only PE and oligohydramnios contributed to disproportionate fetoplacental growth in stillbirth, while none of the categories of stillbirth was related to unequal fetoplacental growth. Impact statement What is already known on this subject: In 27% of stillbirths, pathological features of the placenta or placental vascular bed are recorded. Underlying placental pathology contributes to foetal growth restriction (FGR) in approximately 50%. Although placental weight relative to foetal weight (PW/FW ratio) is an indicator of foetal as well as placental growth, data on PW/FW in stillbirth has not yet been published. What the results of this study add: Causes of death do not show any correlation with PW/FW ratio. Placentas derived from pregnancies complicated by pre-eclampsia (PE) and concomitant FGR are smaller and PW/FW is also diminished. Oligohydramnios is associated with an enhanced risk of restricted placental growth. FGR is not correlated with any categories of causes of death. What the implications are of these findings for clinical practice and/or further research: Sonographic follow-up of placental volume and FW can predict the stillbirth in PE complicated by FGR and oligohydramnios.
abstract_id: PUBMED:26459283
Preeclampsia in pregnancies with and without diabetes: the associations with placental weight. A population study of 655 842 pregnancies. Introduction: Women with diabetes are at increased risk of preeclampsia, and women with diabetes tend to deliver placentas and offspring that are large-for-gestational-age. We therefore studied placental weight in preeclamptic pregnancies according to maternal diabetes status.
Material And Methods: Information on all singleton births from 1999 through 2010 (n = 655 842) were obtained from the Medical Birth Registry of Norway. We used z-scores of placental weight to adjust for differences in gestational age at birth between deliveries, and compared the distribution of placental weight z-scores, in deciles, in preeclamptic pregnancies with and without diabetes, and in non-preeclamptic pregnancies with and without diabetes.
Results: Overall, the prevalence of preeclampsia was higher in pregnancies with diabetes than in pregnancies without diabetes (9.9% vs. 3.6%). Among preeclamptic pregnancies, having a placental weight in the highest decile was nearly three times more frequent (28.8%) in pregnancies with diabetes than in pregnancies without diabetes (9.8%). In the lowest decile, preeclamptic pregnancies with diabetes were underrepresented (7.5%), and preeclamptic pregnancies without diabetes were overrepresented (13.6%). Among pregnancies with preterm delivery, the above patterns were more pronounced, with 30.1% of the placentas in preeclamptic pregnancies with diabetes in the highest decile, and 19.5% of the placentas in preeclamptic pregnancies without diabetes in the lowest decile.
Conclusions: These results suggest that women with diabetes who develop preeclampsia have a higher placental weight than other women with preeclampsia or non-preeclamptic women.
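The placental weight z-scores used in that registry analysis standardize each placenta against births of the same gestational age; schematically, with reference means and standard deviations derived from the registry itself (not reported in the abstract):

\[
z = \frac{\mathrm{PW} - \overline{\mathrm{PW}}_{\mathrm{GA}}}{\mathrm{SD}_{\mathrm{GA}}}
\]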
abstract_id: PUBMED:27780546
The placental component and obstetric outcome in severe preeclampsia with and without HELLP syndrome. Objective: We aimed to compare obstetric outcome and placental-histopathology in pregnancies complicated by preeclampsia with severe features with and without HELLP syndrome.
Methods: Labor, maternal characteristics, neonatal outcome and placental histopathology of pregnancies complicated with severe preeclampsia during 2008-2015 were reviewed. Results were compared between those without signs of HELLP syndrome (severe preeclampsia group) and those with concomitant HELLP syndrome (HELLP group). Placental lesions were classified to maternal vascular lesions consistent with malperfusion, fetal vascular lesions consistent with fetal thrombo-occlusive disease, and inflammatory lesions. Small-for-gestational-age (SGA) was defined as birth-weight ≤10th% and ≤5th%. Composite adverse neonatal outcome was defined as one or more early neonatal complications.
Results: Compared to the severe preeclampsia group (n = 223), the HELLP group (n = 64) was characterized by earlier gestational-age, 34.1 ± 2.7 vs. 35.3 ± 3.4 weeks, p = 0.010, higher rates of multiple pregnancies (p = 0.024), and thrombophilia (p = 0.028). Placentas in the HELLP group had higher rates of vascular and villous lesions consistent with maternal malperfusion (p = 0.023, p = 0.037 respectively). By multivariate logistic regression analysis models, vascular and villous lesions of maternal malperfusion were independently associated with HELLP syndrome (aOR 1.9, aOR 1.8, respectively). SGA was also more common in the HELLP group, both below the 10th percentile (p = 0.044) and the 5th percentile (p = 0.016). Composite adverse neonatal outcome did not differ between the groups.
Conclusion: Severe preeclampsia and HELLP syndrome have similar placental histopathologic findings. However, HELLP syndrome is associated with higher rates of placental maternal vascular supply lesions and SGA suggesting that the two clinical presentations share a common etiopathogenesis, with higher placental dysfunction in HELLP syndrome.
abstract_id: PUBMED:27614676
Placental weight in pregnancies with high or low hemoglobin concentrations. Objectives: To study the associations of maternal hemoglobin concentrations with placental weight and placental to birthweight ratio.
Study Design: In this retrospective cohort study, we included all singleton pregnancies during the years 1998-2013 at a large public hospital in Norway (n=57062). We compared mean placental weight and placental to birthweight ratio according to maternal hemoglobin concentrations: <9g/dl, 9-13.5g/dl or >13.5g/dl. The associations of maternal hemoglobin concentrations with placental weight and placental to birthweight ratio were estimated by linear regression analyses, and adjustments were made for gestational age at birth, preeclampsia, parity, maternal age, diabetes, body mass index, smoking, offspring sex and year of birth.
Results: In pregnancies with maternal hemoglobin concentrations <9g/dl, mean placental weight was 701.2g (SD 160.6g), followed by 678.1g (SD 150.2g) for hemoglobin concentrations 9-13.5g/dl and 655.5g (SD 147.7g) for hemoglobin concentrations >13.5g/dl (ANOVA, p<0.001). Mean placental to birthweight ratio was highest in pregnancies with maternal hemoglobin concentrations <9g/dl (0.203 (SD 0.036)). We found no difference in mean placental to birthweight ratio for maternal hemoglobin concentrations 9-13.5g/dl (0.193 (SD 0.040)) and >13.5g/dl (0.193 (SD 0.043)). Adjustments for our study factors did not alter the estimates notably.
Conclusions: Placental weight decreased with increasing maternal hemoglobin concentrations. The high placental to birthweight ratio with low maternal hemoglobin concentrations suggests differences in placental growth relative to fetal growth across maternal hemoglobin concentrations.
abstract_id: PUBMED:25244579
Placental weight and placental weight to birthweight ratio in relation to Apgar score at birth: a population study of 522 360 singleton pregnancies. Objective: To study whether placental weight or placental weight to birthweight ratio are associated with Apgar score in the newborn 5 min after birth.
Design: Population-based registry study.
Setting: The Medical Birth Registry of Norway.
Population: All singleton live births during the period 1999-2008, a total of 522 360 births.
Methods: The placental weight to birthweight ratios were divided into quartiles within 2-week intervals of gestational age at birth, hence 25% of the pregnancies were within each group. We studied the proportion of pregnancies in the highest quartile of placental weight and placental weight to birthweight ratio according to Apgar score 5 min after birth, and estimated the odds ratio for Apgar score ≤7 if the placental weight to birthweight ratio was in the highest quartile, and used the lowest quartile as reference.
Main Outcome Measure: Apgar score in the newborn 5 min after birth.
Results: In births after pregnancy week 29, and at every 2-week gestational age interval, the mean placental weight and placental weight to birthweight ratio were higher in newborns with Apgar score ≤7 than in infants with Apgar >7. The crude odds ratio of Apgar score ≤7 was 1.65 (95% CI 1.57-1.74), comparing the highest to the lowest quartile of placental weight to birthweight ratio. Adjustments for gestational age, birthweight, infant sex, maternal age, preeclampsia, diabetes and congenital malformations did not alter the odds ratio significantly.
Conclusions: Placental weight and placental weight to birthweight ratio were higher in pregnancies with infant Apgar score ≤7 compared with Apgar score >7.
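A minimal sketch of the quartile construction described in the Methods above, assuming a pandas DataFrame with hypothetical column names; the registry's exact binning and handling of ties are not specified in the abstract beyond the use of 2-week gestational-age intervals.

import pandas as pd

def add_ratio_quartiles(births: pd.DataFrame) -> pd.DataFrame:
    """Assign placental weight to birthweight ratio quartiles within 2-week
    gestational-age bins. Expects columns `gest_age_weeks`, `placental_weight_g`
    and `birthweight_g` (hypothetical names)."""
    out = births.copy()
    out["pw_bw_ratio"] = out["placental_weight_g"] / out["birthweight_g"]
    # 2-week gestational-age bins, e.g. 28-29, 30-31, ...
    out["ga_bin"] = (out["gest_age_weeks"] // 2) * 2
    # Quartiles (1-4) of the ratio within each gestational-age bin,
    # so 25% of pregnancies in each bin fall into each quartile
    out["ratio_quartile"] = out.groupby("ga_bin")["pw_bw_ratio"].transform(
        lambda s: pd.qcut(s, 4, labels=[1, 2, 3, 4])
    )
    return out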
abstract_id: PUBMED:7200464
Placental function, fetal distress, and the fetal/placental weight ratio in normal and gestotic pregnancies. The ratio between fetal and placental weight is often thought to be a measure of the reserve capacity of the placenta. The aim of our study was to investigate the relationship between 1) endocrinologic parameters during pregnancy (serum placental lactogen - HPL, urinary estriol - E3), 2) the occurrence of fetal distress during labor, and 3) the severity of EPH gestosis, and the fetal/placental weight ratio at various gestational ages. The data from a total of 4911 consecutive pregnancies and deliveries were evaluated. Up to 37 weeks the mean fetal/placental weight ratio was significantly lower in infants with fetal distress. Up to 32 weeks there was a positive correlation between the percentage of women with low HPL and E3 levels and the percentage of infants with fetal distress, and the gestosis index. In addition there was a significant increase in the mean fetal/placental weight ratio in the group with moderate and severe gestosis. With advancing gestational age fetal/placental weight ratios were independent of the severity of EPH gestosis. It is concluded that until 37 weeks fetal distress is associated with a significantly lower fetal/placental weight ratio. The morphologic and functional changes in placentas of gestotic pregnancies do not manifest themselves in either an increase or a decrease of the mean fetal/placental weight ratio after 33 weeks.
abstract_id: PUBMED:28551527
Placental weight in the first pregnancy and risk for preeclampsia in the second pregnancy: A population-based study of 186 859 women. Objective: To study whether placental weight in the first pregnancy is associated with preeclampsia in the second pregnancy.
Study Design: In this population-based study, we included all women with two consecutive singleton pregnancies reported to the Medical Birth Registry of Norway during 1999-2012 (n=186 859). Placental weight in the first pregnancy was calculated as z-scores, and the distribution was divided into five groups of equal size (quintiles). We estimated crude and adjusted odds ratios with 95% confidence intervals for preeclampsia in the second pregnancy according to quintiles of placental weight z-scores in the first pregnancy. The 3rd quintile was used as the reference group.
Results: Among women without preeclampsia in the first pregnancy, 1.4% (2507/177 149) developed preeclampsia in the second pregnancy. In these women, the risk for preeclampsia in the second pregnancy was associated with placental weight in the first pregnancy in both the lowest (crude odds ratio (cOR) 1.30, 95% confidence interval (CI) 1.14-1.47) and the highest quintiles (cOR 1.20, 95% CI 1.06-1.36). The risk associated with the highest quintile of placental weight was confined to term preeclampsia. Among women with preeclampsia in the first pregnancy, 15.7% (1522/9710) developed recurrent preeclampsia, and the risk for recurrent preeclampsia was associated with placental weight in the lowest quintile in the first pregnancy (cOR 1.30, 95% CI 1.10-1.55). Adjustment for interval between pregnancies, maternal diabetes, age, and smoking in the first pregnancy did not alter these estimates notably.
Conclusion: Placental weight in the first pregnancy might help to identify women who could be at risk for developing preeclampsia in a second pregnancy.
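The study above standardises first-pregnancy placental weights as z-scores and then groups them into quintiles before estimating odds ratios. A minimal sketch of that preprocessing step is shown below; the weights are simulated rather than registry data, and the study's within-gestational-age standardisation is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated placental weights (grams) for first pregnancies.
weights = rng.normal(loc=650, scale=120, size=1000)

# Standardise to z-scores.
z = (weights - weights.mean()) / weights.std(ddof=1)

# Assign each pregnancy to a quintile of the z-score distribution (1..5).
cutoffs = np.quantile(z, [0.2, 0.4, 0.6, 0.8])
quintile = np.digitize(z, cutoffs) + 1

for q in range(1, 6):
    print(f"quintile {q}: n = {(quintile == q).sum()}")
```

With the quintile labels in hand, comparing the lowest and highest quintiles against the middle (reference) quintile reduces to the same 2x2 odds ratio calculation sketched earlier.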
abstract_id: PUBMED:24909371
Obstetric conditions and the placental weight ratio. Introduction: To elucidate how obstetric conditions are associated with atypical placental weight ratios (PWRs) in infants born: (a) at ≥37 weeks gestation; (b) at ≥33 but <37 weeks gestation; and (c) at <33 weeks gestation.
Methods: The study included all in-hospital singleton births in London, Ontario between June 1, 2006 and March 31, 2011. PWR was assessed as <10th or >90th percentile by gestational age-specific local population standards. Multivariable analysis was carried out using multinomial logistic regression with blockwise variable entry in order of temporality.
Results: Baseline factors and maternal obstetric conditions associated with PWR <10th percentile were: increasing maternal height, overweight and obese body mass indexes (BMI), large for gestational age infants, smoking, and gestational diabetes. Obstetric factors associated with PWR >90th percentile were: underweight, overweight and obese BMIs, smoking, preeclampsia, placenta previa, and placental abruption. In particular, indicators of hypoxia and altered placental function were generally associated with elevated PWR at all gestations.
Discussion: An association between obstetric conditions associated with fetal hypoxia and PWR ≥90th percentile was illustrated.
Conclusions: The multivariable findings suggest that the PWR is similarly increased regardless of the etiology of the hypoxia.
Answer: Yes, the association between placental weight and birthweight does differ between pregnancies with and without preeclampsia. In pregnancies with small placentas, the offspring were more often small for gestational age (SGA) in preeclamptic pregnancies compared to normotensive pregnancies. The risk of SGA subsided with increasing placental weight and was negligible at greater than the 50th percentile. Conversely, at high placental weights (above the 70th percentile), large for gestational age (LGA) offspring occurred more often in pregnancies with preeclampsia. Within the highest 10% of placental weight, 20.7% of the infants were LGA in the preeclampsia group, compared to 15.3% in pregnancies without preeclampsia (PUBMED:19631927).
Additionally, in pregnancies complicated by preeclampsia, birth weight, placental weight, and placental volume were all significantly lower compared to the control group. Placental ratios such as birth weight/placental weight ratio (BPW) and birth weight/placental volume ratio (BPV) were helpful in understanding the pathophysiology of complicated pregnancies and could be used as predictors of pregnancy complications (PUBMED:25264525).
Moreover, among women with diabetes the prevalence of preeclampsia was higher, and preeclamptic pregnancies complicated by diabetes had a higher placental weight than other pregnancies with preeclampsia or pregnancies without preeclampsia (PUBMED:26459283). This suggests that the association between placental weight and birthweight is also influenced by maternal diabetes status in the context of preeclampsia.
In summary, the relationship between placental weight and birthweight is indeed different in pregnancies with preeclampsia compared to those without, with a higher risk of SGA at low placental weights and a higher risk of LGA at high placental weights in preeclamptic pregnancies. Additionally, the presence of maternal diabetes can further influence this association in preeclamptic pregnancies. |
Instruction: Restorative proctectomy with colon pouch-anal anastomosis by laparoscopic transanal pull-through: an available option for low rectal cancer?
Abstracts:
abstract_id: PUBMED:30198181
Transanal completion proctectomy with close rectal dissection and ileal pouch-anal anastomosis for ulcerative colitis. Introduction: Laparoscopic dissection in the pelvis is still a challenge. A transanal approach to rectal dissection allows better visualization during the dissection of the rectum and the creation of an anastomosis. Although initially used for patients with rectal cancer, the transanal approach may also have benefits in the surgical treatment of ulcerative colitis (UC). The aim of this study was to describe our initial experience with transanal completion proctectomy and ileal pouch-anal anastomosis for UC.
Methods: This study included all consecutive patients who underwent transanal completion proctectomy and ileal pouch-anal anastomosis for UC between September 2017 and February 2018.
Results: Eleven patients were included in the study; they had a median age of 30 years (range, 13-51 years). The median operative time was 285 min (range, 190-375 min). There were no intraoperative complications or conversions to open surgery. Postoperative complications occurred in only one patient (anastomotic leak), and the median length of hospital stay was 7 days (range, 5-37 days).
Conclusion: Our initial experience with transanal completion proctectomy and ileal pouch-anal anastomosis shows promising results, demonstrating the feasibility of the transanal approach in patients with UC.
abstract_id: PUBMED:17063302
Restorative proctectomy with colon pouch-anal anastomosis by laparoscopic transanal pull-through: an available option for low rectal cancer? Background: There are sporadic reports, with different verdicts, of restorative proctectomy by laparoscopic transanal pull-through (LTPT) without the use of a minilaparotomy for a part of the procedure. This study aimed to explore the applicability and advantages of LTPT with colon pouch-anal anastomosis for low rectal cancer, and to evaluate the results.
Methods: From January 2002 to July 2003, 10 of 12 patients (6 men and 4 women) undergoing a laparoscopic procedure for low rectal cancer (<6 cm from the anal verge) underwent LTPT. The mean age of these patients was 58 years. The results have been compared with those for 12 similar non-pull-through procedures performed during the same period.
Results: There was no operative mortality. An anastomotic leakage and a hemorrhagic gastropathy occurred in the LTPT group. During a mean follow-up period of 18 months (range, 12-26 months), there was no local relapse. Four patients manifested moderate incontinence. No significant differences in functional outcome were observed between the LTPT and control groups.
Conclusion: The authors' experience supports use of the LTPT procedure with colonic pouch-anal anastomosis for selected lower rectal cancers with indications for a laparoscopic approach as an appropriate and reproducible surgical treatment.
abstract_id: PUBMED:30675660
The current state of the transanal approach to the ileal pouch-anal anastomosis. Background: The transanal approach to pelvic dissection has gained considerable traction and utilization continues to expand, fueled by the transanal total mesorectal excision (TaTME) for rectal cancer. The same principles and benefits of transanal pelvic dissection may apply to the transanal restorative proctocolectomy with ileal pouch-anal anastomosis (IPAA)-the TaPouch procedure. Our goal was to review the literature to date on the development and current state of the TaPouch.
Materials And Methods: We performed a PubMed database search for original articles on transanal pelvic dissections, IPAA, and the TaPouch procedure, with a manual search from relevant citations in the reference list. The main outcomes were the technical aspects of the TaPouch, clinical and functional outcomes, and potential advantages, drawbacks, and future direction for the procedure.
Results: The conduct of the procedure has been defined, with the safety and feasibility demonstrated in small series. The reported rates of conversion and anastomotic leakage are low. There are no randomized trials or large-scale comparative studies available for comparative effectiveness compared to the traditional IPAA.
Conclusions: The transanal approach to ileal pouch-anal anastomosis is an exciting adaption of the transanal total mesorectal excision for refining the technical steps of a complex operation. Additional experience is needed for comparative outcomes and defining the ideal training and implementation pathways.
abstract_id: PUBMED:22022106
Transanal division of the anorectal junction followed by laparoscopic low anterior resection and coloanal pouch anastomosis: A technique facilitated by a balloon port. We performed a laparoscopic ultra low anterior resection in two patients with low rectal cancers (3 cm from the dentate line). A transanal division and continuous suture closure of the anorectal junction was performed first, followed by laparoscopic low anterior resection. A handsewn anastomosis between the colonic pouch/transverse coloplasty and the anal canal was facilitated by use of a transanal balloon port.
abstract_id: PUBMED:37183353
Mucosectomy of the anal canal via transanal minimally invasive surgery combined with transanal total mesorectal excision for familial adenomatous polyposis: A technical note. Aim: Total proctocolectomy with ileal pouch-anal anastomosis (IPAA) is the standard surgical treatment modality for familial adenomatous polyposis (FAP). It is challenging to perform proctectomy and preserve anal sphincter function. In this video, precise mucosectomy of the anal canal via transanal minimally invasive surgery (MAC-TAMIS) is reported.
Methods: An asymptomatic 35-year-old man was found to have a positive faecal occult blood test in routine screening examination and was diagnosed with FAP on colonoscopic examination. The patient was scheduled for total proctocolectomy with IPAA using the TAMIS approach combined with transanal total mesorectal excision. MAC-TAMIS was performed to preserve the internal anal sphincter during laparoscopy.
Results: The total duration of surgery was 543 min, blood loss was minimal, and the postoperative course was uneventful. The postoperative hospital stay was 12 days. The pathological findings demonstrated no evidence of malignancy. Gastrographic imaging from the ileostomy showed sufficient size of the J pouch and good tonus of the anus at 6 months after surgery. The Wexner scores at 1, 3 and 6 months after ileostomy closure were 5, 3 and 0, respectively.
Conclusion: The MAC-TAMIS technique is safe and feasible during total proctocolectomy with IPAA in patients with FAP. This technique allows us to precisely preserve the internal anal sphincter using a laparoscopic approach.
abstract_id: PUBMED:9324444
Restorative proctectomy. A comparison of direct colo-anal and colon-pouch-anal anastomoses for reconstructing continuity. Thirty-nine of 63 patients undergoing deep anterior rectal resection received a straight coloanal anastomosis (CAA); the remaining 24 patients additionally had a colon-j-pouch (CPA) constructed. After pouch-anal anastomosis, local septic complications occurred in 12.5% of patients compared to 20.5% after coloanal anastomosis. Stool frequency after pouch-anal anastomosis was 3.3 per 24 h compared to 5.2 per 24 h after straight anastomosis within the first year after ileostomy closure (P = 0.053). Continence was slightly better in the pouch group (n.s.), and anal manometry showed a significant postoperative decrease only in resting pressure after straight coloanal anastomosis (P < 0.001). This study supports the construction of a colon-j-pouch after deep rectal resection, as the pouch-anal anastomosis has fewer local septic complications and seems to improve functional outcome.
abstract_id: PUBMED:27298573
Laparoscopic restorative proctocolectomy with ileal pouch-anal anastomosis for Peutz-Jeghers syndrome with synchronous rectal cancer. We report on a patient diagnosed with Peutz-Jeghers syndrome (PJS) with synchronous rectal cancer who was treated with laparoscopic restorative proctocolectomy with ileal pouch-anal anastomosis (IPAA). PJS is an autosomal dominant syndrome characterized by multiple hamartomatous polyps in the gastrointestinal tract, mucocutaneous pigmentation, and increased risks of gastrointestinal and nongastrointestinal cancer. This report presents a patient with a 20-year history of intermittent bloody stool, mucocutaneous pigmentation and a family history of PJS, which together led to a diagnosis of PJS. Moreover, colonoscopy and biopsy revealed the presence of multiple serried giant pedunculated polyps and rectal adenocarcinoma. Currently, few options exist for the therapeutic management of PJS with synchronous rectal cancer. For this case, we adopted an unconventional surgical strategy and ultimately performed laparoscopic restorative proctocolectomy with IPAA. This procedure is widely considered to be the first-line treatment option for patients with ulcerative colitis or familial adenomatous polyposis. However, there are no previous reports of treating PJS patients with laparoscopic IPAA. Since the operation, the patient has experienced no further episodes of gastrointestinal bleeding and has demonstrated satisfactory bowel control. Laparoscopic restorative proctocolectomy with IPAA may be a safe and effective treatment for patients with PJS with synchronous rectal cancer.
abstract_id: PUBMED:36324050
Combining staged laparoscopic colectomy with robotic completion proctectomy and ileal pouch-anal anastomosis (IPAA) in ulcerative colitis for improved clinical and cosmetic outcomes: a single-center feasibility study and technical description. Robotic proctectomy has been shown to lead to better functional outcomes compared to laparoscopic surgery in rectal cancer. However, in ulcerative colitis (UC), the potential value of robotic proctectomy has not yet been investigated, and in this indication, the operation needs to be adjusted to the total colectomy typically performed in the preceding 6 months. In this study, we describe the technique and analyze outcomes of a staged laparoscopic and robotic three-stage restorative proctocolectomy and compare the clinical outcome with the classical laparoscopic procedure. Between December 2016 and May 2021, 17 patients underwent robotic completion proctectomy (CP) with ileal pouch-anal anastomosis (IPAA) for UC. These patients were compared to 10 patients who underwent laparoscopic CP and IPAA, following laparoscopic total colectomy with end ileostomy 6 months prior by the same surgical team at our tertiary referral center. In total, 27 patients underwent a 3-stage procedure for refractory UC (10 in the lap. group vs. 17 in the robot group). Return to normal bowel function and morbidity were comparable between the two groups. Median length of hospital stay was similar: 7 days [IQR 6-10] in the robotic proctectomy/IPAA group compared to 7.5 days [IQR 6.25-8] after the laparoscopic stage II. Median time to soft diet was 2 days [IQR 1-3] vs. 3 days in the lap group [IQR 3-4]. Two patients suffered from a major complication (Clavien-Dindo ≥ 3a) in the first 90 postoperative days in the robotic group vs. one in the laparoscopic group. Perception of cosmetic results was favorable, with 100% of patients in the robotic group reporting being highly satisfied or satisfied. This report demonstrates the feasibility of a combined laparoscopic and robotic staged restorative proctocolectomy for UC, when compared with the traditional approach. Robotic pelvic dissection and a revised trocar placement in staged proctocolectomy with synergistic use of both surgical techniques with their individual advantages will likely improve overall long-term functional results, including an improved cosmetic outcome.
abstract_id: PUBMED:9101850
Restorative proctectomy, reconstruction of continuity with or without colon J pouch. Of 63 patients undergoing deep anterior resection of the rectum, 39 patients received a straight colo-anal anastomosis (CAA), 24 additionally had a colon-j-pouch (CPA) constructed. Local septic complications occurred in 12.5% of patients after pouch-anal anastomosis compared to 20.5% after colo-anal anastomosis: stool frequency after pouch-anal anastomosis was 3.3 per 24 h compared to 5.2 per 24 h after straight anastomosis within the first year after ileostomy closure (p = 0.053); continence was slightly better in the pouch group (n.s.); and anal manometry showed a significant postoperative decrease only in resting pressure after straight colo-anal anastomosis (p < 0.001). Pouch construction should be considered after deep rectal resection, as it seems to improve functional outcome and has fewer local septic complications than straight anastomosis.
abstract_id: PUBMED:38082005
Efficacy of transanal drainage tube placement in preventing anastomotic leakage after ileal pouch-anal anastomosis in patients with ulcerative colitis. Background: Transanal drainage tube (TDT) is used to prevent anastomotic leakage after surgery for rectal cancer. However, it remains unclear whether intraoperative TDT placement is also useful in preventing anastomotic leakage after ileal pouch-anal or ileal pouch-anal canal anastomosis (IPAA) in patients with ulcerative colitis (UC). This study aimed to evaluate the efficacy of intraoperative TDT placement in preventing anastomotic leakage after IPAA in patients with UC.
Methods: Patients with UC who underwent proctectomy with IPAA in the study institution between January 2000 and December 2021 were enrolled in this retrospective cohort study. The relationship between TDT placement and anastomotic leakage was evaluated by logistic regression analysis.
Results: The study population included 168 patients. TDT was placed intraoperatively in 103 of the 168 patients (61.3%). The rate of anastomotic leakage was significantly lower in the TDT group than in the non-TDT group (7.8% vs 18.5%, p = 0.037). Reoperation was not needed in any patient in the TDT group whereas two reoperations were necessary in the non-TDT group (3.1%). By logistic regression analysis, intraoperative TDT placement was an independent protective factor for anastomotic leakage.
Conclusions: TDT placement was significantly associated with a lower rate of anastomotic leakage after IPAA in patients with UC undergoing surgery. Although two-stage surgery with ileostomy is usually preferred in UC surgery, our findings suggest that TDT placement might contribute to the improvement of postoperative outcomes after UC surgery.
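The leak rates reported above (7.8% of 103 patients with a drainage tube versus 18.5% of 65 without) correspond to roughly 8 and 12 events. Whether such a difference is unlikely to be due to chance can be checked with a Fisher exact test on the 2x2 table; the sketch below uses counts back-calculated from the percentages as an approximation, not the study's raw data.

```python
from scipy.stats import fisher_exact

# Approximate 2x2 table reconstructed from the reported percentages:
#                  leak, no leak
tdt_group    = [8, 103 - 8]     # ~7.8% of 103 patients with a drainage tube
no_tdt_group = [12, 65 - 12]    # ~18.5% of 65 patients without one

odds_ratio, p_value = fisher_exact([tdt_group, no_tdt_group])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```

An odds ratio below 1 with a small p-value points in the same protective direction as the logistic regression the authors describe.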
Answer: Restorative proctectomy with colon pouch-anal anastomosis by laparoscopic transanal pull-through (LTPT) is indeed an available option for low rectal cancer. The study by PUBMED:17063302 supports the use of LTPT with colonic pouch-anal anastomosis for selected lower rectal cancers with indications for a laparoscopic approach as an appropriate and reproducible surgical treatment. The authors' experience indicated no operative mortality, and during a mean follow-up period of 18 months, there was no local relapse. Although four patients experienced moderate incontinence, there were no significant differences in functional outcome when compared with control groups that did not undergo the pull-through procedure.
This suggests that LTPT can be a viable surgical option for patients with low rectal cancer, offering the potential benefits of minimally invasive surgery while maintaining comparable functional outcomes to traditional approaches. However, it is important to note that the study included a small number of patients, and further research with larger patient populations would be beneficial to confirm these findings. |
Instruction: Is every intussusception treatment an emergency intervention or surgery?
Abstracts:
abstract_id: PUBMED:27193980
Is every intussusception treatment an emergency intervention or surgery? Background: Intussusception is the second most common cause of acute abdomen in children, following appendicitis. The aim of the present study was to evaluate the experience of the authors, in an effort to improve the management of intussusception, especially that of small bowel intussusception.
Methods: Records of intussusception diagnosed between July 2002 and September 2014 were evaluated in terms of patient age, sex, clinical findings, admission time, ultrasonographic findings, treatment methods, and outcomes.
Results: Eighty-one patients, 52 males and 29 females, were included (mean age: 10.6 months). Intussusceptions were ileocolic (IC) in 52 cases, ileoileal (IL) in 26, and jejunojejunal (JJ) in 3. Nineteen (23.5%) patients underwent surgery. Hydrostatic reduction was performed in 45 (55.5%) IC cases. Seventeen (21%) patients with small bowel intussusceptions (SBIs), measuring 1.8-2.3 cm in length, spontaneously reduced. All patients who underwent surgery had intussusceptums ≥4 cm. Three of the 4 intestinal resection cases had history of abdominal surgery.
Conclusion: If peritoneal irritation is present, patients with intussusception must undergo surgery. Otherwise, in patients with IC intussusception and no sign of peritoneal irritation, hydrostatic or pneumatic reduction is indicated. When this fails, surgery is the next step. SBIs free of peritoneal irritation and shorter than 2.3 cm tend to spontaneously reduce. For those longer than 4 cm, particularly in patients with history of abdominal surgery, spontaneous reduction is unlikely.
abstract_id: PUBMED:28742637
Is Intussusception a Middle-of-the-Night Emergency? Objectives: Intussusception is the most common abdominal emergency in pediatric patients aged 6 months to 3 years. There is often a delay in diagnosis, as the presentation can be confused for viral gastroenteritis. Given this scenario, we questioned the practice of performing emergency reductions in children during the night when minimal support staff are available. Pneumatic reduction is not a benign procedure, with the most significant risk being bowel perforation. We performed this analysis to determine whether it would be safe to delay reduction in these patients until normal working hours when more support staff are available.
Methods: We performed a retrospective review of intussusceptions occurring between January 2010 and May 2015 at 2 tertiary care institutions. The medical record for each patient was evaluated for age at presentation, sex, time of presentation to clinician or the emergency department, and time to reduction. The outcomes of attempted reduction were documented, as well as time to surgery and surgical findings in applicable cases. A Wilcoxon rank test was used to compare the median time with nonsurgical intervention among those who did not undergo surgery to the median time to nonsurgical intervention among those who ultimately underwent surgery for reduction. Multivariable logistic regression was used to test the association between surgical intervention and time to nonsurgical reduction, adjusting for the age of patients.
Results: The median time to nonsurgical intervention was higher among patients who ultimately underwent surgery than among those who did not require surgery (17.9 vs 7.0 hours; P < 0.0001). The time to nonsurgical intervention was positively associated with a higher probability of surgical intervention (P = 0.002).
Conclusions: Intussusception should continue to be considered an emergency, with nonsurgical reduction attempted promptly as standard of care.
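The analysis described above combines a rank test on the delay to nonsurgical intervention with an age-adjusted logistic regression on the odds of surgery. The sketch below is a hypothetical reconstruction of that style of analysis in Python using scipy and statsmodels; the data are simulated and the variable names are illustrative, not taken from the study.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
n = 120
delay_hours = rng.gamma(shape=2.0, scale=5.0, size=n)   # time to attempted reduction
age_months = rng.uniform(6, 36, size=n)
# Simulated outcome: longer delays make surgical reduction more likely.
p_surgery = 1 / (1 + np.exp(-(-3.0 + 0.12 * delay_hours)))
surgery = rng.binomial(1, p_surgery)

# Two-sample Wilcoxon rank-sum (Mann-Whitney) test on the delay.
stat, p = mannwhitneyu(delay_hours[surgery == 1], delay_hours[surgery == 0],
                       alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")

# Logistic regression of surgery on delay, adjusted for age.
X = sm.add_constant(np.column_stack([delay_hours, age_months]))
fit = sm.Logit(surgery, X).fit(disp=False)
print(fit.params)  # intercept, delay and age coefficients on the log-odds scale
```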
abstract_id: PUBMED:25374417
Intestinal lymphoma--a review of the management of emergency presentations to the general surgeon. Introduction: Intestinal non-Hodgkin's lymphoma (NHL) is uncommon but not rare. This paper aims to review the recent evidence for the management of perforated NHL of the intestine, consider when chemotherapy should be commenced and examine the likely outcomes and prognosis for patients presenting as surgical emergencies with this condition.
Methods: MEDLINE and Cochrane databases were searched using the terms intestinal lymphoma, clinical presentation, perforation, management and prognosis. The full text of relevant articles was retrieved and reference lists checked for additional articles.
Findings: Emergency surgery was required at disease presentation for between 11 and 64% of intestinal NHL cases. Perforation occurs in 1-25% of cases, and also occurs whilst on chemotherapy for NHL. Intestinal bleeding occurs in 2-22% of cases. Obstruction occurs more commonly in small bowel (5-39%) than large bowel NHL and intussusceptions occur in up to 46%. Prognosis is generally poor, especially for T cell lymphomas.
Conclusions: There is a lack of quality evidence for the elective and emergency treatment of NHL involving the small and large intestine. There is a lack of information regarding the impact an emergency presentation has on the timing of postoperative chemotherapy and overall prognosis. It is proposed that in order to develop evidence-based treatment protocols, there should be an intestinal NHL registry.
abstract_id: PUBMED:36415380
A Case of Multiple Polyps Causing Intussusception in an Adult Patient With Peutz-Jeghers Syndrome. Despite intussusception being less prevalent among adults, its effects are severe and often require emergency intervention. Peutz-Jeghers syndrome is a rare autosomal dominant syndrome that leads to the growth of polyps in the gastrointestinal mucosa. In this case report, we present the case of a 26-year-old man who was brought to the emergency room complaining of crampy abdominal pain, vomiting, and constipation. Intussusception was observed on imaging and confirmed at surgery. The necrotic parts of the small bowel were resected. Postoperatively, the patient was stable, had minimum pain, and did not have any complications throughout the hospital stay. He was discharged home on day seven and advised to follow up. The course at the one-month follow-up was uneventful with no similar episodes. This case report is intended as a reminder for emergency physicians to consider intussusception as a potential diagnosis in patients presenting with abdominal pain and bowel obstruction because the symptoms are often non-specific and episodic.
abstract_id: PUBMED:31660765
The need for Paediatric Emergency Laparotomy Audit (PELA) in the UK. Introduction: The National Emergency Laparotomy Audit (NELA) has raised serious concerns about the processes of care and outcomes in adult emergency laparotomies in the UK. To date, no comparable data have been published for children. The aim of this study was to investigate the need for a similar audit in children.
Methods: Data were collected retrospectively following NELA guidelines. Results were analysed using QuickCalcs (GraphPad Software, La Jolla, CA, US).
Results: The study period spanned 7.5 years. A total of 161 patients were identified for inclusion in the audit. The median patient age was 2.8 years. Half (49%) of the cohort were deemed ASA (American Society of Anesthesiologists) grade ≥2. A history of previous abdominal surgery was noted in 37% of the patients. The median time from admission to operation was 15 hours. Over a third (39%) of the operations were performed out of hours. The most common indications for surgery comprised adhesive bowel obstruction (37%), intussusception (27%) and volvulus (9%). The median length of hospital stay was 8 days with the median postoperative stay being 6 days (NELA data 10.6 days). Half (51%) of the cases required intensive care following surgery. The 30-day mortality rate was 3.1%. The overall mortality rate was 4.3% (NELA data 16%). Patient care was led by a consultant surgeon in 100% of cases (NELA data 89%).
Conclusions: This is the first study in children that provides baseline data about the standards of care and outcomes from a single centre paediatric emergency laparotomy audit. A larger study using data from multiple centres would be of great benefit.
abstract_id: PUBMED:34281946
Jejunojejunal intussusception in an adult: a rare presentation of abdominal pain in the emergency department. Abdominal pain is a common presentation to the emergency department (ED) and the differential diagnoses is broad. Intussusception is more common in children, with only 5% of cases reported in adults. 80%-90% of adult intussusception is due to a well-defined lesion resulting in a lead point, whereas in children, most cases are idiopathic. The most common site of involvement in adults is the small bowel. Treatment in adults is generally operative management whereas in children, a more conservative approach is taken with non-operative reduction. We present a case of a 54-year-old woman who presented to our ED with severe abdominal pain and vomiting. CT of the abdomen revealed a jejunojejunal intussusception. The patient had an urgent laparoscopy and small bowel resection of the intussusception segment was performed. Histopathological examination of the resected specimen found no pathologic lead point and, therefore, the intussusception was determined to be idiopathic.
abstract_id: PUBMED:12900535
Intussusception in adults: a 21-year experience in the university-affiliated emergency center and indication for nonoperative reduction. Background: While intussusception is relatively common in children, it is rare in adults.
Methods: We retrospectively reviewed the records of all patients older than 18 years with the diagnosis of intussusception between 1981 and 2001.
Results: Eleven patients with surgically or endoscopically proven intussusception were encountered at the University-affiliated emergency center. The patients ranged in age from 19 to 88 years with a mean age of 45 years. Males predominated by a ratio of 7:4. Most patients (82%) presented with symptoms of bowel obstruction. The mean duration of symptoms was 4.5 days with a range of 4 h to 25 days. Correct pre-treatment diagnosis was made in 82% of the patients using abdominal ultrasonography and computed tomography (CT). The causes of intussusception were organic lesions in 64% of the patients, postoperative in 18% and idiopathic in 18%. Emergency operations were performed in 73% of patients, and an attempt at nonoperative reduction was performed and completed successfully in 3 patients with ileo-colic or colonic type of intussusception. There have been no cases of morbidity or mortality in our series and no recurrence has occurred up to the present time.
Conclusions: Abdominal ultrasonography and CT were effective tools for the diagnosis of intussusception. Patients with ileo-colic and colonic intussusception without malignant lesions could be good candidates for nonoperative reduction prior to definitive surgery.
abstract_id: PUBMED:34471948
Decreased incidence of intussusception during the COVID-19 pandemic. Trends in pediatric surgical emergencies. Purpose: Recent reports suggest that the COVID-19 pandemic may be influencing disease morbidity. The purpose of this study was to investigate pandemic-related changes in the incidence of pediatric surgical emergencies.
Methods: Data from patients with one of 8 typical conditions considered to be pediatric emergencies who presented at 3 hospitals close to central Tokyo were collated retrospectively from accident and emergency (AE) department records for 2020 and compared with data for 3 years prior to 2020.
Results: All subjects had similar demographic profiles. The total number of pediatric AE attendances from 2017 to 2020 was 2880 (2017: n = 600, 2018: n = 736, 2019: n = 817, and 2020: n = 727). Annual attendances were similar. Of the 8 conditions, there were significantly fewer cases of intussusception in 2020 than previously (23/727; 3.1% versus 132/2153; 6.1%; p < 0.01), and the number of emergency surgical interventions for intussusception was also significantly lower in 2020 (0/23; 0% versus 13/132; 9.8%; p < 0.01).
Conclusion: The implementation of preventative measures to combat the COVID-19 pandemic in 2020 would appear to have influenced the etiopathogenesis of intussusception enough to significantly decrease its overall incidence and the requirement for emergency surgical intervention.
abstract_id: PUBMED:16336396
Small bowel tumours in emergency surgery: specificity of clinical presentation. Background: Despite advances in diagnostic modalities, small bowel tumours are notoriously difficult to diagnose and are often advanced at the time of definitive treatment. These malignancies can cause insidious abdominal pain and weight loss, or create surgical emergencies including haemorrhage, obstruction or perforation. The aim of the present study was to describe the clinical presentation, diagnostic work-up, surgical therapy and short-term outcome of 34 patients with primary and secondary small bowel tumours submitted for surgical procedures in an emergency setting and to look for a correlation between clinical presentation and the type of tumours.
Methods: From 1995 to 2005, 34 consecutive surgical cases of small bowel tumours were treated at the Department of Emergency Surgery of St Orsola-Malpighi University Hospital, Bologna, Italy. Clinical and radiological charts of these patients were reviewed retrospectively from the department database.
Results: All patients presented as surgical emergencies: intestinal obstruction was the most common clinical presentation (15 cases), followed by perforation (11 cases) and gastrointestinal bleeding (eight cases). Lymphoma was the most frequent histologic type (nine patients), followed by stromal tumours (eight patients), carcinoids (seven patients), adenocarcinoma (seven patients) and metastasis (three patients). Of the nine patients with lymphoma, eight were perforated, all patients with stromal tumours had bleeding, and all patients with carcinoids had bowel obstruction. There were two patients with melanoma metastasis; both had bowel intussusception. Resection of the neoplasm was carried out in 32 patients and two patients were deemed unresectable and received a palliative procedure.
Conclusions: The present study shows that there is a correlation between small bowel tumours and clinical emergency presentation: gastrointestinal stromal tumours (GIST) mostly bleed; carcinoids make an obstruction; lymphomas cause a perforation; and melanoma metastasis causes intussusception.
abstract_id: PUBMED:16775942
Retrospective study of visceral surgical emergencies in children at the University Hospital Center of Brazzaville (Congo). The purpose of this study was to estimate the incidence of visceral surgical emergencies in children and to determine the main causes and their prognoses. A retrospective study was carried out among children ranging from 1 month to 15 years who underwent emergency visceral surgery in the pediatric surgery unit of the University Hospital Center in Brazzaville, Congo. A total of 185 of the 206 children (14.1%) admitted for emergency visceral surgery were included in the study. The study population was predominantly male (67%). Most patients (71.3%) were over 5 years of age. The main indications for visceral surgery were acute appendicitis (30.3%), peritonitis (28.1%), strangulated hernia (22.2%), abdominal contusion (7.6%), intussusception (6.4%), other causes of intestinal obstruction (2.7%), and abdominal wounds (2.7%). Peritonitis was due to a ruptured appendix in 76.9% of cases. Strangulated hernias were inguinal in 70.7% of cases and umbilical in 29.3%. Intussusception was idiopathic in all cases and usually observed in infants under 6 months of age (58.3%). Postoperative recovery was uneventful in 79.5% of cases and complicated in 16.2%. Eight deaths (4.3%) were recorded. For the patients who died, the delay between initial symptoms and admission was longer than 3 days in 87.5% of cases and the interval to surgical treatment was longer than 6 hours in all cases. Early diagnosis and prompt surgical treatment are favorable prognostic factors for the outcome of emergency visceral surgery.
Answer: Not every intussusception treatment requires emergency intervention or surgery. The management of intussusception can vary depending on several factors, including the presence of peritoneal irritation, the location and size of the intussusception, and the patient's history.
For patients with ileocolic (IC) intussusception without signs of peritoneal irritation, non-surgical methods such as hydrostatic or pneumatic reduction are indicated. If these methods fail, surgery is considered the next step (PUBMED:27193980). Small bowel intussusceptions (SBIs) that are shorter than 2.3 cm and free of peritoneal irritation tend to spontaneously reduce, while those longer than 4 cm, especially in patients with a history of abdominal surgery, are unlikely to reduce spontaneously and may require surgery (PUBMED:27193980).
Intussusception should be considered an emergency, with nonsurgical reduction attempted promptly as standard of care. Delaying nonsurgical intervention is associated with a higher probability of requiring surgical intervention (PUBMED:28742637).
In adults, intussusception is rare, and while some cases may be managed with nonoperative reduction, particularly those without malignant lesions, emergency surgery is often required (PUBMED:12900535). The treatment approach in adults generally involves operative management, as opposed to the more conservative approach taken in children (PUBMED:34281946).
During the COVID-19 pandemic, there was a noted decrease in the incidence of intussusception and the requirement for emergency surgical intervention, suggesting that not all cases may require immediate surgery (PUBMED:34471948).
In conclusion, while intussusception is often treated as an emergency, not all cases require immediate surgical intervention. The decision to proceed with emergency surgery or attempt non-surgical reduction depends on the individual patient's presentation, including the size and location of the intussusception, the presence of peritoneal irritation, and the patient's medical history. |
Instruction: Does Anteromedial Portal Drilling Improve Footprint Placement in Anterior Cruciate Ligament Reconstruction?
Abstracts:
abstract_id: PUBMED:28567341
Technical note: Anterior cruciate ligament reconstruction in the presence of an intramedullary femoral nail using anteromedial drilling. Aim: To describe an approach to anterior cruciate ligament (ACL) reconstruction using autologous hamstring by drilling via the anteromedial portal in the presence of an intramedullary (IM) femoral nail.
Methods: Once preoperative imaging has characterized the proposed location of the femoral tunnel, preparations are made to remove all of the hardware (locking bolts and IM nail). A diagnostic arthroscopy is performed in the usual fashion addressing all intra-articular pathology. The ACL remnant and lateral wall soft tissues are removed from the intercondylar notch to provide adequate visualization of the ACL footprint. Femoral tunnel placement is performed using a transportal ACL guide with the desired offset and the knee flexed to 2.09 rad. The Beath pin is placed through the guide starting at the ACL's anatomic footprint using arthroscopic visualization and/or fluoroscopic guidance. If resistance is met while placing the Beath pin, the arthroscopy should be discontinued and the obstructing hardware should be removed under fluoroscopic guidance. When the Beath pin is successfully placed through the lateral femur, it is overdrilled with a 4.5 mm Endobutton drill. If the Endobutton drill is obstructed, the obstructing hardware should be removed under fluoroscopic guidance. In this case, the obstruction is more likely during Endobutton drilling due to its larger diameter and increased rigidity compared to the Beath pin. The femoral tunnel is then drilled using a best approximation of the graft's outer diameter. We recommend at least 7 mm diameter to minimize the risk of graft failure. Autologous hamstring grafts are generally between 6.8 and 8.6 mm in diameter. After reaming, the knee is flexed to 1.57 rad and the arthroscope is placed through the anteromedial portal to confirm the femoral tunnel position, referencing the posterior wall and lateral cortex. For a quadrupled hamstring graft, the gracilis and semitendinosus tendons are then harvested in the standard fashion. The tendons are whip stitched, quadrupled and shaped to match the diameter of the prepared femoral tunnel. If the diameter of the patient's autologous hamstring graft is insufficient to fill the prepared femoral tunnel, the autograft may be supplemented with an allograft. The remainder of the reconstruction is performed according to surgeon preference.
Results: The presence of retained hardware presents a challenge for surgeons treating patients with knee instability. In cruciate ligament reconstruction, distal femoral and proximal tibial implant hardware may confound tunnel placement, making removal of hardware necessary, unless techniques are adopted to allow for anatomic placement of the graft.
Conclusion: This report demonstrates how the femoral tunnel can be created using the anteromedial portal instead of a transtibial approach for reconstruction of the ACL.
abstract_id: PUBMED:33384091
Editorial Commentary: Independent Femoral Tunnel Drilling Avoids Anterior Cruciate Ligament Graft Malpositioning: Advice From a Transtibial Convert. Optimal femoral anterior cruciate ligament graft placement has been extensively studied. The champions of transtibial reconstruction debate the backers of anteromedial portal and outside-in drilling. The holy grail is footprint restoration and how best to get there. To me, creating the femoral tunnel independently provides the best chance of finding that footprint by being unconstrained by the tibia. Anterior cruciate ligament surgery is challenging enough; decrease intraoperative stress and increase your likelihood of femoral footprint restoration by drilling it through the anteromedial portal.
abstract_id: PUBMED:38012782
Effect of anteromedial portal location on femoral tunnel inclination, length, and location in hamstring autograft-based single-bundle anterior cruciate ligament reconstruction: a prospective study. Background: Portal positioning in arthroscopic anterior cruciate ligament reconstruction is critical in facilitating the drilling of the femoral tunnel. However, the traditional approach has limitations. A modified inferior anteromedial portal was developed. Therefore, this study aims to compare the modified and conventional far anteromedial portals for femoral tunnel drilling, assessing factors such as tunnel length, inclination, iatrogenic chondral injury risk, and blowout.
Material And Methods: Patients scheduled for hamstring autograft-based anatomical single-bundle arthroscopic anterior cruciate ligament reconstruction were divided into two groups: modified and far anteromedial groups. Primary outcomes include differences in femoral tunnel length intraoperatively, tunnel inclination on anteroposterior radiographs, and exit location on lateral radiographs. Secondary outcomes encompass tunnel-related complications and reconstruction failures. To identify potential risk factors for shorter tunnel lengths and posterior exits, regression analysis was conducted.
Results: Tunnel parameters of 234 patients were analyzed. In the modified portal group, femoral tunnel length and inclination were significantly higher, with tunnels exhibiting a more anterior exit position (p < 0.05). A higher body mass index exerted a negative influence on tunnel length and inclination. However, obese patients in the modified portal group had longer tunnels, increased inclination, and a lower risk of posterior exit. Only a few tunnel-related complications were observed in the far anteromedial group.
Conclusion: The modified portal allowed better control of tunnel length and inclination, ensuring a nonposterior femoral tunnel exit, making it beneficial for obese patients.
abstract_id: PUBMED:27733884
Comparing Transtibial and Anteromedial Drilling Techniques for Single-bundle Anterior Cruciate Ligament Reconstruction. Background: Among the many factors that determine the outcome following anterior cruciate ligament (ACL) reconstruction, the position of the femoral tunnel is known to be critically important and is still the subject of extensive research.
Objective: We aimed to retrospectively compare the outcomes of arthroscopic ACL reconstruction using transtibial (TT) or anteromedial (AMP) drilling techniques for femoral tunnel placement.
Methods: ACL reconstruction was performed using the TT technique in 49 patients and the AMP technique in 56 patients. Lachman and pivot-shift tests, the Lysholm Knee Scale, International Knee Documentation Committee (IKDC) score, Tegner activity scale and visual analog scale (VAS) were used for the clinical and functional evaluation of patients. Time to return to normal life and time to jogging were assessed in addition to the radiological evaluation of femoral tunnel placement.
Results: In terms of the Lysholm, IKDC, Tegner score, and stability tests, no significant differences were found between the two groups (p > 0.05). Statistical analysis revealed reduced time to return to normal life and jogging in the AMP group (p < 0.05). The VAS score was also significantly reduced in the AMP group (p < 0.05). The position of the femoral tunnel was anatomically appropriate in 51 patients in the AMP group and 5 patients in the TT group.
Conclusion: The AMP technique is superior to the TT technique in creating anatomical femoral tunnel placement during single-bundle ACL reconstruction and provides faster recovery in terms of return to normal life and jogging at short-term follow-up.
abstract_id: PUBMED:27106125
Does Anteromedial Portal Drilling Improve Footprint Placement in Anterior Cruciate Ligament Reconstruction? Background: Considerable debate remains over which anterior cruciate ligament (ACL) reconstruction technique can best restore knee stability. Traditionally, femoral tunnel drilling has been done through a previously drilled tibial tunnel; however, potential nonanatomic tunnel placement can produce a vertical graft, which, although it would restore sagittal stability, would not control rotational stability. To address this, some suggest that the femoral tunnel be created independently of the tibial tunnel through the use of an anteromedial (AM) portal, but whether this results in a more anatomic footprint or in stability comparable to that of the intact contralateral knee still remains controversial.
Questions/purposes: (1) Does the AM technique achieve footprints closer to anatomic than the transtibial (TT) technique? (2) Does the AM technique result in stability equivalent to that of the intact contralateral knee? (3) Are there differences in patient-reported outcomes between the two techniques?
Methods: Twenty male patients who underwent a bone-patellar tendon-bone autograft were recruited for this study, 10 in the TT group and 10 in the AM group. Patients in each group were randomly selected from four surgeons at our institution with both groups demonstrating similar demographics. The type of procedure chosen for each patient was based on the preferred technique of the surgeon. Some surgeons exclusively used the TT technique, whereas other surgeons specifically used the AM technique. Surgeons had no input on which patients were chosen to participate in this study. Mean postoperative time was 13 ± 2.8 and 15 ± 3.2 months for the TT and AM groups, respectively. Patients were identified retrospectively as having either the TT or AM Technique from our institutional database. At followup, clinical outcome scores were gathered as well as the footprint placement and knee stability assessed. To assess the footprint placement and knee stability, three-dimensional surface models of the femur, tibia, and ACL were created from MRI scans. The femoral and tibial footprints of the ACL reconstruction as compared with the intact contralateral ACL were determined. In addition, the AP displacement and rotational displacement of the femur were determined. Lastly, as a secondary measurement of stability, KT-1000 measurements were obtained at the followup visit. An a priori sample size calculation indicated that with 2n = 20 patients, we could detect a difference of 1 mm with 80% power at p < 0.05. A Welch two-sample t-test (p < 0.05) was performed to determine differences in the footprint measurements, AP displacement, rotational displacement, and KT-1000 measurements between the TT and AM groups. We further used the confidence interval approach with 90% confidence intervals on the pairwise mean group differences using a Games-Howell post hoc test to assess equivalence between the TT and AM groups for the previously mentioned measures.
Results: The AM and TT techniques were the same in terms of footprint except in the distal-proximal location of the femur. The TT for the femoral footprint (DP%D) was 9% ± 6%, whereas the AM was -1% ± 13% (p = 0.04). The TT technique resulted in a more proximal footprint and therefore a more vertical graft compared with intact ACL. The AP displacement and rotation between groups were the same and clinical outcomes did not demonstrate a difference.
Conclusions: Although AM portal drilling may place the femoral footprint in a more anatomic position, clinical stability and outcomes may be similar as long as attempts are made at creating an anatomic position of the graft.
Level Of Evidence: Level III, therapeutic study.
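The sample-size statement above (detecting a 1 mm difference with 80% power at p < 0.05 using 10 knees per group) implies a fairly large standardised effect size, and the primary comparison uses a Welch two-sample t-test. The sketch below shows how both pieces can be run in Python with statsmodels and scipy; the measurement arrays are invented for illustration only.

```python
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.power import TTestIndPower

# Smallest standardised effect detectable with n = 10 per group,
# alpha = 0.05 and 80% power.
d = TTestIndPower().solve_power(effect_size=None, nobs1=10, alpha=0.05,
                                power=0.80, ratio=1.0)
print(f"detectable effect size d ~ {d:.2f}")
print(f"implied SD for a 1 mm difference ~ {1.0 / d:.2f} mm")

# Welch two-sample t-test on hypothetical footprint-position offsets (mm).
rng = np.random.default_rng(2)
tt_offsets = rng.normal(loc=2.0, scale=1.0, size=10)
am_offsets = rng.normal(loc=1.0, scale=1.2, size=10)
t_stat, p_val = ttest_ind(tt_offsets, am_offsets, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_val:.3f}")
```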
abstract_id: PUBMED:23982399
Location of the femoral tunnel aperture in single-bundle anterior cruciate ligament reconstruction: comparison of the transtibial, anteromedial portal, and outside-in techniques. Background: Previous 3-dimensional computed tomography (3D CT) studies of knees after anterior cruciate ligament (ACL) reconstruction have compared femoral tunnel positions obtained using the transtibial and anteromedial drilling techniques. This study used postoperative in vivo 3D CT analysis to compare the locations of the femoral tunnel aperture among 3 drilling techniques used in ACL reconstruction: transtibial, anteromedial portal, and outside-in.
Hypothesis: The use of the transtibial drilling technique might result in a less anatomically accurate femoral tunnel placement than the anteromedial portal and outside-in techniques.
Study Design: Cohort study; Level of evidence, 3.
Methods: Immediate postoperative in vivo 3D CT was used to assess the location of the femoral tunnel aperture in 153 patients who underwent single-bundle ACL reconstruction using the transtibial (n = 42), anteromedial portal (n = 73), or outside-in (n = 38) techniques. Femoral tunnel positions were measured by an anatomic coordinate axis method in the low-to-high and deep-to-shallow directions of the distal femur at 90° of knee flexion.
Results: The low-to-high femoral tunnel positions were significantly higher in the transtibial group than in the anteromedial portal (P < .001) and outside-in (P < .001) groups. There were no differences among the 3 groups in the deep-to-shallow femoral tunnel positions (P = .773).
Conclusion: The transtibial technique of anatomic reconstruction resulted in more highly positioned femoral tunnels in the low-to-high direction than did the anteromedial portal and outside-in techniques. However, no significant differences in the femoral tunnel location were observed in the deep-to-shallow direction.
abstract_id: PUBMED:24944975
Anatomic Single Bundle Anterior Cruciate Ligament Reconstruction by Low Accessory Anteromedial Portal Technique: An In Vivo 3D CT Study. Purpose: Proper femoral tunnel position is important for anatomical reconstruction of the anterior cruciate ligament (ACL). The purpose of this study was to evaluate the positions of femoral and tibial tunnels created using an accessory anteromedial portal technique in single bundle ACL reconstruction.
Materials And Methods: The femoral tunnel was targeted at the mid-portion of the ACL bundles. We evaluated postoperative computed tomography scans of 32 patients treated by ACL reconstruction using a free-hand low accessory anteromedial portal technique. On the tibial side, the tunnel position was evaluated using Tsukada's method. On the femoral side, the position was evaluated using 1) the quadrant method, 2) Watanabe's method, 3) Mochizuki's method, and 4) Takahashi's method. Tunnel obliquity was also evaluated.
Results: The mean tibial tunnel position was located at 44.6%±2.5% from the anterior margin and 48.0%±3.0% from the medial margin. The mean femoral tunnel position was located at the center between the anteromedial and posterolateral bundles: Quadrant method, 26.7%±2.7%/30.0%±2.9%; Watanabe's method, 37.7%±2.5%/26.6%±2.2%; Mochizuki's method, 38.7%±2.7%; Takahashi's method, 21.8%±2.2%. The mean femoral tunnel obliquity was 57.7°±6.2° in the sagittal plane and 49.9°±5.6° in the coronal plane.
Conclusions: In anatomic single bundle ACL reconstruction, the low anteromedial portal technique can accurately restore the position of the native footprint. Accurate femoral tunnel position facilitates recovery of stability and decreases graft failure rate.
abstract_id: PUBMED:22523370
Anatomic femoral tunnel drilling in anterior cruciate ligament reconstruction: use of an accessory medial portal versus traditional transtibial drilling. Background: During anatomic anterior cruciate ligament (ACL) reconstruction, we have found that the femoral footprint can best be visualized from the anteromedial portal. Independent femoral tunnel drilling can then be performed through an accessory medial portal, medial and inferior to the standard anteromedial portal.
Purpose: To compare the accuracy of independent femoral tunnel placement relative to the ACL footprint using an accessory medial portal versus tunnel placement with a traditional transtibial technique.
Study Design: Controlled laboratory study.
Methods: Ten matched pairs of cadaveric knees were randomized such that within each pair, one knee underwent arthroscopic transtibial (TT) drilling, and the other underwent drilling through an accessory medial portal (AM). All knees underwent computed tomography (CT) both preoperatively and postoperatively with a technique optimized for ligament evaluation (80 keV with maximum mAs). Computed tomography was performed with a dual-energy scanner. Commercially available third-party software was used to fuse the preoperative and postoperative CT scans, allowing anatomic comparison of the ACL footprint to the drilled tunnel. The ACL footprint was marked in consensus by an orthopaedic surgeon and a musculoskeletal radiologist and then compared with the tunnel aperture after drilling. The percentage of tunnel aperture contained within the native footprint as well as the distance from the center of the tunnel aperture to the center of the footprint was measured.
Results: The AM technique placed 97.7% ± 5% of the tunnel within the native femoral footprint, significantly more than 61.2% ± 24% for the TT technique (P = .001). The AM technique placed the center of the femoral tunnel 3.6 ± 1.2 mm from the center of the native footprint, significantly closer than 6.0 ± 1.9 mm for the TT technique (P = .003).
Conclusion: This study demonstrates that use of an accessory medial portal will facilitate more accurate placement of the femoral tunnel in the native ACL femoral footprint.
Clinical Relevance: More accurate placement of the femoral tunnel in the native ACL femoral footprint should improve the ability to achieve more anatomic positioning of the ACL graft.
abstract_id: PUBMED:24024995
Indications for 2-incision anterior cruciate ligament surgery. Two-incision anterior cruciate ligament reconstruction uses an outside-in femoral drilling technique. Advantages include anatomic femoral tunnel placement and increased versatility with regard to graft position and fixation in the femoral tunnel or distal femur. The authors believe that transtibial femoral tunnel drilling is inferior because of nonanatomic femoral tunnel placement and the requirement of the posterior footprint placement of the tibial tunnel. Indications for outside-in femoral drilling include revision anterior cruciate ligament reconstruction, drilling an all-epiphyseal tunnel in skeletally immature patients, long bone-patellar tendon-bone autografts when using more than 45 mm of tendon, and cases in which the leg cannot be hyperflexed for accessory anteromedial portal drilling.
abstract_id: PUBMED:22868588
Femoral tunnel drilling from the anteromedial portal using the figure-4 position in ACL reconstruction. Creation of the femoral tunnel in anterior cruciate ligament (ACL) reconstruction via the anteromedial portal can reproducibly achieve femoral tunnel positioning at the center of the femoral ACL footprint. Anteromedial portal drilling requires knee hyperflexion, which is awkward to perform. Knee hyperflexion to 120° can easily be achieved in the figure-4 position for femoral tunnel drilling via the anteromedial portal. The leg is hyperflexed in the figure-4 position on a Mayo stand. The offset femoral guide, guidewire, and drill are placed at the femoral ACL footprint via the anteromedial portal to create the femoral tunnel.
Answer: Yes, anteromedial portal drilling does improve footprint placement in anterior cruciate ligament (ACL) reconstruction. The anteromedial (AM) portal technique has been shown to achieve femoral footprints closer to the anatomic position compared to the transtibial (TT) technique. Specifically, the AM technique places the femoral footprint more distally, closer to the native ACL, whereas the TT technique results in a more proximal footprint and therefore a more vertical graft (PUBMED:27106125).
Furthermore, the use of the anteromedial portal allows for independent femoral tunnel drilling, which can facilitate more accurate placement of the femoral tunnel within the native ACL femoral footprint (PUBMED:22523370). This is supported by a study that found the AM technique placed 97.7% of the tunnel within the native femoral footprint, significantly more than the TT technique (PUBMED:22523370).
Additionally, the anteromedial portal technique has been associated with faster recovery in terms of return to normal life and jogging at short-term follow-up, and it has been shown to be superior to the TT technique in creating anatomical femoral tunnel placement during single-bundle ACL reconstruction (PUBMED:27733884).
Moreover, the use of a modified inferior anteromedial portal has been shown to allow better control of tunnel length and inclination, ensuring a nonposterior femoral tunnel exit, which is particularly beneficial for obese patients (PUBMED:38012782).
In summary, anteromedial portal drilling is advantageous for improving the anatomical accuracy of femoral tunnel placement in ACL reconstruction, which is critical for restoring knee stability and function. |
Instruction: Are Obese Individuals with no Feature of Metabolic Syndrome but Increased Waist Circumference Really Healthy?
Abstracts:
abstract_id: PUBMED:27219879
Are Obese Individuals with no Feature of Metabolic Syndrome but Increased Waist Circumference Really Healthy? A Cross Sectional Study. Aim: Patients displaying the metabolically healthy but obese phenotype have an intermediate cardiometabolic prognosis compared to normal weight healthy and metabolically unhealthy obese subjects. We aimed to evaluate the proportion of patients with a definite metabolically healthy obese phenotype and better characterize them.
Methods: The definite metabolically healthy obese phenotype was defined as having none of the International Diabetes Federation metabolic syndrome criteria, excluding waist circumference. We recruited 1,159 obese patients (body mass index 38.4±6.3 kg/m²), including 943 women, without known diabetes. Patients were characterized for cardiometabolic disorders.
Results: As the 202 (17.4%) metabolically healthy obese individuals were younger and had lower body mass indexes than the 957 metabolically unhealthy obese patients, they were matched for gender, age and body mass index with 404 metabolically unhealthy obese patients. In addition to the features of metabolic syndrome, when compared to unhealthy subjects, definite metabolically healthy obese patients were less frequently found with either homeostasis model assessment of insulin resistance index > 3 (23.6 vs. 38.9%, p<0.001), or abnormal oral glucose tolerance test (13.9 vs. 33.9%, p<0.001), or HbA1c value ≥ 5.7% (43.9 vs. 54.2%, p<0.05) or pulse pressure ≥ 60 mmHg (11.7 vs. 64.9%, p<0.001). However, there were no significant differences in the prevalence of microalbuminuria (11.1 vs. 12.3%), cardiac autonomic dysfunction (45.5 vs. 35.3%) and fatty liver index ≥ 60 (5.6 vs. 10.2%).
Conclusion: Our data do not support the characterization of metabolically healthy obesity, even definite, as really healthy, as many patients with this phenotype have abnormal cardiovascular markers and glucose or liver abnormalities. HbA1c measurement seems to be more sensitive than OGTT to detect dysglycemia in this population.
abstract_id: PUBMED:36163212
Association of cumulative excess weight and waist circumference exposure with transition from metabolically healthy obesity to metabolically unhealthy. Background And Aims: The association between obesity severity and duration with the transition from metabolically healthy obese/overweight (MHO) phenotype to metabolically unhealthy obese (MUO) phenotype is not well understood.
Methods And Results: This study includes the Tehran Lipid and Glucose Study participants who were initially classed as MHO. Cumulative excess weight (CEW) and cumulative excess waist circumference (CEWC) scores, which represent the accumulation of body mass index and waist circumference deviations from expected values over time (kg/m2 ∗ y and cm ∗ y, respectively), were calculated until the transition from MHO to MUO or the end of follow-up. The sex-stratified association of CEW and CEWC with the transition from MHO to MUO was investigated by time-dependent Cox models, adjusting for confounders. Out of 2525 participants, 1732 (68.5%) were women. During 15 years of follow-up, 1886 (74.6%) participants transitioned from MHO to MUO. A significant association was found between CEW and CEWC quartiles and the development of MUO among women participants (fully adjusted hazard ratios in the fourth quartile of CEW and CEWC: 1.65 [95% CI, 1.37-1.98] and 1.83 [95% CI, 1.53-2.19], respectively). There was no significant association between CEW and CEWC with the MHO transition to MUO among men participants.
Conclusion: Over 15 years of follow-up in TLGS, general and central obesity accumulation was associated with the increased transition from MHO to MUO among women participants. More research with a larger sample size is needed to confirm and explain why the results are different for men and women.
abstract_id: PUBMED:18682591
Waist circumference measurement in clinical practice. The obesity epidemic is a major public health problem worldwide. Adult obesity is associated with increased morbidity and mortality. Measurement of abdominal obesity is strongly associated with increased cardiometabolic risk, cardiovascular events, and mortality. Although waist circumference is a crude measurement, it correlates with obesity and visceral fat amount, and is a surrogate marker for insulin resistance. A normal waist circumference differs for specific ethnic groups due to different cardiometabolic risk. For example, Asians have increased cardiometabolic risk at lower body mass indexes and with lower waist circumferences than other populations. One criterion for the diagnosis of the metabolic syndrome, according to different study groups, includes measurement of abdominal obesity (waist circumference or waist-to-hip ratio) because visceral adipose tissue is a key component of the syndrome. The waist circumference measurement is a simple tool that should be widely implemented in clinical practice to improve cardiometabolic risk stratification.
abstract_id: PUBMED:27796813
Neck circumference as an effective measure for identifying cardio-metabolic syndrome: a comparison with waist circumference. Neck circumference is a new anthropometric index for estimating obesity. We aimed to determine the relationship between neck circumference and body fat content and distribution as well as the efficacy of neck circumference for identifying visceral adiposity and metabolic disorders. A total of 1943 subjects (783 men, 1160 women) with a mean age of 58 ± 7 years were enrolled in this cross-sectional study. Metabolic syndrome was defined according to the standard in the 2013 China Guideline. Analyses were conducted to determine optimal neck circumference cutoff points for visceral adiposity quantified by magnetic resonance imaging, and to compare the performance of neck circumference with that of waist circumference in identifying abdominal obesity and metabolic disorders. Visceral fat content was independently correlated with neck circumference. Receiver operating characteristic curves showed that the area under the curve for the ability of neck circumference to determine visceral adiposity was 0.781 for men and 0.777 for women. Moreover, in men a neck circumference value of 38.5 cm had a sensitivity of 56.1 % and specificity of 83.5 %, and in women, a neck circumference value of 34.5 cm had a sensitivity of 58.1 % and specificity of 82.5 %. These values were the optimal cutoffs for identifying visceral obesity. There were no statistically significant differences between the proportions of metabolic syndrome and its components identified by an increased neck circumference and waist circumference. Neck circumference has the same power as waist circumference for identifying metabolic disorders in a Chinese population.
abstract_id: PUBMED:27050332
A comparison of the clinical usefulness of neck circumference and waist circumference in individuals with severe obesity. Purpose/Aim: Neck circumference (NC) is an emerging anthropometric parameter that has been proposed to reflect metabolic health. The aim of the current study was to compare its clinical usefulness to waist circumference (WC) in the assessment of individuals with severe obesity.
Materials And Methods: A total of 255 subjects participated in the study. All anthropometric measurements were done by a single medical professional. Biochemical measurements included oral glucose-tolerance tests (OGTTs), fasting insulin, lipids, and hepatic enzymes.
Results: The mean age of the participants was 49 ± 12 years, with a mean body mass index (BMI) of 36.9 ± 6.2 kg/m². Correlation analyses revealed that while WC was better associated with adiposity parameters, it was of little use in comparison to NC with regard to metabolic outcomes. In men, NC was positively associated with fasting plasma glucose, fasting insulin, and FINDRISC scores. ROC analyses showed NC was better in distinguishing type 2 diabetes (AUC = 0.758; p < 0.001), insulin resistance (AUC = 0.757; p = 0.001), metabolic syndrome (AUC = 0.724; p < 0.001), and hypertension (AUC = 0.763; p = 0.001). Similar correlations were observed in women. Using binary logistic regression, we determined that NC values of ≥35 cm in women and ≥38 cm in men are valuable cut-off values for everyday practice.
Conclusion: In individuals with severe obesity, NC performs better than WC in the assessment of metabolic health.
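The cut-off values reported in the two abstracts above are typically derived from receiver operating characteristic (ROC) analysis. The following is only a hedged sketch of that general procedure on simulated data — the variable names, distributions, and the choice of the Youden index are illustrative assumptions, not the cited authors' actual datasets or code:

```python
# Sketch: choosing an anthropometric cutoff from an ROC curve via the Youden index.
# The simulated neck-circumference values below are placeholders, not study data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
neck_cm = np.concatenate([rng.normal(36, 3, 200),   # subjects without metabolic syndrome
                          rng.normal(40, 3, 100)])  # subjects with metabolic syndrome
has_mets = np.concatenate([np.zeros(200), np.ones(100)])

fpr, tpr, thresholds = roc_curve(has_mets, neck_cm)
auc = roc_auc_score(has_mets, neck_cm)

# Youden index J = sensitivity + specificity - 1 = TPR - FPR.
j = tpr - fpr
best = np.argmax(j)
print(f"AUC = {auc:.3f}")
print(f"Optimal cutoff = {thresholds[best]:.1f} cm "
      f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")
```

The threshold that maximizes the Youden index is one common choice; a study may instead weight sensitivity or specificity more heavily depending on the screening goal.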
abstract_id: PUBMED:36900706
Overlooking of Individuals with Cardiometabolic Risk by Evaluation of Obesity Using Waist Circumference and Body Mass Index in Middle-Aged Japanese Women. Waist circumference is often used for the diagnosis of visceral obesity and metabolic syndrome. In Japan, obesity in women is defined by the government as a waist circumference of ≥90 cm and/or BMI of ≥25 kg/m2. However, there has been a controversy for almost two decades as to whether waist circumference and its above-optimal cutoff are appropriate for the diagnosis of obesity in health checkups. Instead of waist circumference, the waist-to-height ratio has been recommended for the diagnosis of visceral obesity. In this study, the relationships between the waist-to-height ratio and cardiometabolic risk factors, including diabetes, hypertension and dyslipidemia, were investigated in middle-aged Japanese women (35~60 years) who were diagnosed as not having obesity according to the above Japanese criteria of obesity. The percentage of subjects showing normal waist circumference and normal BMI was 78.2%, and about one-fifth of those subjects (16.6% of the overall subjects) showed a high waist-to-height ratio. In subjects with normal waist circumference and normal BMI, odds ratios of high vs. not high waist-to-height ratio for diabetes, hypertension and dyslipidemia were significantly higher than the reference level. A considerable proportion of women who have a high cardiometabolic risk might be overlooked at annual lifestyle health checkups in Japan.
abstract_id: PUBMED:35568423
Does the Presence of Type 2 Diabetes or Metabolic Syndrome Impact Reduction in Waist Circumference During Weight Loss? Objectives: Our aim in this study was to compare the change in waist circumference given the same degree of weight loss in patients who meet the criteria for metabolic syndrome or type 2 diabetes and those who do not meet these criteria. Because visceral adiposity is a key feature of both conditions and intra-abdominal adipocytes show higher lipolytic activity, we sought to determine whether changes in waist circumference differed in individuals with and without these conditions.
Methods: The Ottawa Hospital Weight Management Clinic offers a course in lifestyle modification and uses 12 weeks of total meal replacement. We compared the decrease in waist circumference between patients with metabolic syndrome or diabetes and those without these conditions who had lost a similar amount of weight using measurements from the first 6 weeks of meal replacement.
Results: We evaluated 3,559 patients who attended the program between September 1992 and April 2015. The patient population was largely Caucasian and of European descent and all meetings were face to face. The mean weight loss for men was 15.1±20.2 kg, and the mean weight loss for women was 9.7±2.4 kg. There were no significant differences in decrease in waist circumference between those with and without metabolic syndrome in both men (11.7±3.9 cm vs 11.4±3.8 cm, p=0.48) and women (9.0±3.6 cm vs 9.1±3.7 cm, p=0.26).
Conclusions: Our results show that, given the same degree of weight loss, patients with and without diabetes or metabolic syndrome experience a similar change in waist circumference.
abstract_id: PUBMED:28024832
Increased waist circumference is the main driver for the development of the metabolic syndrome in South African Asian Indians. There is no current evidence available on the prevalence of metabolic syndrome (MetS) in South African Asian Indians, who are at high risk for cardiovascular disease. The aim of our study was to determine the prevalence of MetS in this group, between males and females and across age-groups, using the harmonised criteria, and to identify the main components driving the development of MetS.
Design And Methods: This cross-sectional study recruited randomly selected community participants between the ages of 15 and 65 years, in the community of Phoenix, in KwaZulu-Natal. All subjects had anthropometric variables and blood pressure measured, as well as blood drawn for blood glucose and lipids after overnight fasting. The MetS was determined using the harmonised criteria.
Results: There were 1378 subjects sampled, mean age 45.5±13 years, and 1001 (72.6%) were women. The age-standardised prevalence of MetS was 39.9% and was significantly higher (p<0.001) in women (49.9% versus 35.0% in men). MetS was identified in 6.9% of young adults (15-24 years), with a four-fold increase in the 25-34-year-olds, and 60.1% in the 55-64-year-old group. Clustering of MetS components was present in all age-groups, but increased with advancing age. The independent contributors to the MetS were increased waist circumference, raised triglycerides and obesity. This study highlights the high prevalence of MetS in this ethnic group and the emergence of MetS in our younger subjects. Urgent population-based awareness campaigns, focussing on correcting unhealthy lifestyle behaviours, should begin in childhood.
abstract_id: PUBMED:32126448
SVM-based waist circumference estimation using Kinect. Background And Objective: Conventional anthropometric studies using Kinect depth sensors have concentrated on estimating the distances between two points, such as height. This paper deals with a novel waist measurement method using SVM regression, further widening the spectrum of Kinect's potential applications. Waist circumference is a key index for the diagnosis of abdominal obesity, which has been linked to metabolic syndromes and other related diseases. Yet, the existing measuring method, the tape measure, requires trained personnel and is therefore costly and time-consuming.
Methods: A dataset was constructed by recording 30 frames of Kinect depth images and careful tape measurements of 19 volunteers, taken by a clinical investigator. This paper proposes a new SVM regressor-based approach for estimating waist circumference. A waist curve vector is extracted from a raw depth image using joint information provided by the Kinect SDK. To avoid overfitting, a data augmentation technique is devised. The 30 frontal vectors and 30 backside vectors, each sampled for 1 s per person, are combined to form 900 waist curve vectors, and a total of 17,100 samples were collected from 19 individuals. On an individual basis, we performed leave-one-out validation using the SVM regressor, with the tape-measurement values (the gold standard of waist circumference measurement) labeled as ground truth.
Results: The mean error of the SVM regressor was 4.62 cm, which was smaller than that of the geometric estimation method. Potential uses are discussed.
Conclusions: A possible method for measuring waist circumference using a depth sensor is demonstrated through experimentation. Methods for improving accuracy in the future are presented. Combined with other potential applications of Kinect in healthcare setting, the proposed method will pave the way for patient-centric approach of delivering care without laying burdens on patients.
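The pipeline described in this abstract (waist-curve feature vectors, an SVM regressor, and per-person leave-one-out validation) can be approximated with standard tools. The sketch below is illustrative only: the feature matrix stands in for features extracted from Kinect depth frames, and the SVR kernel and hyperparameters are assumptions rather than the authors' settings.

```python
# Sketch: SVM regression of waist circumference from waist-curve feature vectors,
# evaluated with leave-one-subject-out cross-validation as the abstract describes.
# X is a placeholder for features extracted from Kinect depth frames.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict

rng = np.random.default_rng(1)
n_subjects, frames_per_subject, n_features = 19, 60, 32   # 30 frontal + 30 backside frames
X = rng.normal(size=(n_subjects * frames_per_subject, n_features))  # waist-curve vectors
groups = np.repeat(np.arange(n_subjects), frames_per_subject)       # one group per person
waist_cm = rng.normal(85, 10, n_subjects)                           # tape-measured ground truth
y = waist_cm[groups]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))

# Leave-one-subject-out: all frames of one person are held out at a time.
pred = cross_val_predict(model, X, y, groups=groups, cv=LeaveOneGroupOut())
mae = np.mean(np.abs(pred - y))
print(f"Mean absolute error: {mae:.2f} cm")
```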
abstract_id: PUBMED:27184997
Self-Measured vs Professionally Measured Waist Circumference. Purpose: Although waist circumference can provide important metabolic risk information, logistic issues inhibit its routine use in outpatient practice settings. We assessed whether self-measured waist circumference is sufficiently accurate to replace professionally measured waist circumference for identifying high-risk patients.
Methods: Medical outpatients and research participants self-measured their waist circumference at the same visit during which a professionally measured waist circumference was obtained. Participants were provided with standardized pictorial instructions on how to measure their waist circumference, and professionals underwent standard training.
Results: Self- and professionally measured waist circumference data were collected for 585 women (mean ± SD age = 40 ± 14 years, mean ± SD body mass index = 27.7 ± 6.0 kg/m²) and 165 men (mean ± SD age = 41 ± 14 years, mean ± SD body mass index = 29.3 ± 4.6 kg/m²). Although self- and professionally measured waist circumference did not differ significantly, we found a clinically important false-negative rate for the self-measurements. Eleven percent of normal-weight and 52% of overweight women had a professionally measured waist circumference putting them in a high-risk category for metabolic syndrome (ie, greater than 88 cm); however, 57% and 18% of these women, respectively, undermeasured their waist circumference as falling below that cutoff. Fifteen percent and 84% of overweight and class I obese men, respectively, had a professionally measured waist circumference putting them in the high-risk category (ie, greater than 102 cm); however, 23% and 16% of these men, respectively, undermeasured their waist circumference as falling below that cutoff.
Conclusions: Despite standardized pictorial instructions for self-measured waist circumference, the false-negative rate of self-measurements approached or exceeded 20% for some groups at high risk for poor health outcomes.
Answer: Obese individuals with increased waist circumference but no other features of metabolic syndrome may not be truly healthy. A study found that even when obese patients did not meet the criteria for metabolic syndrome, except for waist circumference, they still exhibited abnormal cardiovascular markers and glucose or liver abnormalities. This suggests that the metabolically healthy obese phenotype may not be as benign as previously thought (PUBMED:27219879). Waist circumference is a strong indicator of cardiometabolic risk, cardiovascular events, and mortality, and it correlates with obesity and visceral fat amount, which is a surrogate marker for insulin resistance (PUBMED:18682591). Additionally, neck circumference, another anthropometric measure, has been shown to be as effective as waist circumference in identifying metabolic disorders, indicating that obesity-related health risks can be associated with various body measurements (PUBMED:27796813).
Furthermore, a study on the transition from metabolically healthy obesity to metabolically unhealthy obesity found that cumulative excess weight and waist circumference exposure were significantly associated with this transition among women, suggesting that the accumulation of general and central obesity over time increases the risk of becoming metabolically unhealthy (PUBMED:36163212). Another study highlighted that a considerable proportion of middle-aged Japanese women who were not classified as obese based on waist circumference and BMI might still have a high cardiometabolic risk, as indicated by a high waist-to-height ratio (PUBMED:36900706).
In conclusion, obese individuals with increased waist circumference, even in the absence of other metabolic syndrome features, may not be entirely healthy, as increased waist circumference is a main driver for the development of metabolic syndrome and is associated with increased cardiometabolic risk (PUBMED:28024832). |
Instruction: Are depressive symptoms nonspecific in patients with acute stroke?
Abstracts:
abstract_id: PUBMED:1882994
Are depressive symptoms nonspecific in patients with acute stroke? Objective: Some investigators have suggested that major depression might be overdiagnosed in stroke patients because of changes in appetite, sleep, or sexual interest caused by their medical illness; others have suggested that depression may be underdiagnosed in stroke patients who deny symptoms of depression because of anosognosia, neglect, or aprosody. The authors' goal was to determine how frequently depressive symptoms occur in acute stroke patients with and without depressed mood to estimate how often diagnostic errors of inclusion or exclusion may be made.
Method: They examined the rate of autonomic and psychological symptoms of depression in 205 patients who were consecutively hospitalized for acute stroke. Eighty-five (41%) of these patients had depressed mood, and 120 (59%) had no mood disturbance. Forty-six (54%) of the 85 patients with depressed mood (22% of all patients) were assigned the DSM-III diagnosis of major depression.
Results: The 120 patients without mood disturbance had a mean of one autonomic symptom, but the 85 patients with depressed mood had a mean of almost four. Tightening the diagnostic criteria to account for one more nonspecific autonomic symptom decreased the number of patients with major depression by only three; adding two more criteria decreased the number by only five. Thus, the rate of DSM-III major depression was 1% higher than the rate with one extra nonspecific autonomic symptom and 2% higher than the rate with two extra criteria. Conversely, loosening diagnostic criteria to account for denial of depressive illness increased the rate of major depression by only 5%.
Conclusions: Both autonomic and psychological depressive symptoms are strongly associated with depressed mood in acute stroke patients.
abstract_id: PUBMED:32728586
Instrumental music therapy reduced depression levels in stroke patients. Background: Stroke is the fifth leading cause of death and disability and also leads to depression. However, depression in stroke patients is rarely handled optimally. The purpose of this study, therefore, was to determine the effectiveness of instrumental music therapy in reducing depressive symptoms in stroke patients. Design and methods: The study used a quasi-experimental pre-post design with simple random sampling of 59 respondents. The respondents were divided into 3 groups as follows: group A (standard treatment), group B (instrumental music therapy), and group C (combined treatment). Results: The results show that the combined treatment had the most significant influence on reducing the level of depression (P=0.001), with a contribution of 68.6%, compared to group A, which was given standard treatment (P=0.001) with a contribution of 61.7%. Instrumental music therapy alone had no significant effect (P=0.986), though it contributed most among the three interventions, specifically 82.6%. Conclusions: The study recommended including music among the treatment options for reducing depression among stroke patients.
abstract_id: PUBMED:37916850
Depressive Symptoms in Young and Middle-Aged Stroke Patients: A Transition Analysis. Background: There is heterogeneity in depressive symptoms. However, latent classes of depressive symptoms and the transition and influences of these in young and middle-aged stroke patients are unclear.
Objectives: The aim of this study was to identify the latent classes of depressive symptoms and their transition patterns over time and the influencing factors in young and middle-aged stroke patients from stabilization to 6 months after discharge.
Methods: This is a longitudinal study following the Strengthening the Reporting of Observational Studies in Epidemiology checklist. A total of 272 young and middle-aged stroke participants were recruited from a hospital neurology ward in Henan Province, China. Participants completed a questionnaire on sociodemographic and health information. Latent transition analysis was used to evaluate the transition pattern of latent classes from stabilization to 6 months after discharge and its influencing factors.
Results: One hundred seventy-nine participants were included in the analysis. Three latent classes of depressive symptoms were identified as "mild symptoms," "grief-sleep-fatigue symptoms," and "severe symptoms." Most participants remained in the original latent class from stabilization to 6 months after discharge (probability of 83.8%, 83.8%, and 88.8%). From 3 to 6 months after discharge, the participants with fewer complications were more likely to transition into the mild symptom class.
Discussion: The findings indicate that from stabilization to 6 months after discharge, depressive symptoms in young and middle-aged stroke patients in China transitioned gradually from the severe symptom class to the mild symptom. Patients with fewer numbers of poststroke complications were more likely to transition to the mild symptoms class. Future research should focus on depressive symptoms in early-stage stroke patients and provide sufficient psychological support to patients with a high number of complications.
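Latent transition analysis itself requires specialized estimation software, but the headline quantities above — the probabilities of staying in or moving between symptom classes across time points — can be illustrated with a simple empirical transition matrix. The sketch below uses made-up class assignments, not the study's data or its actual latent-class model.

```python
# Sketch: empirical transition matrix between depressive-symptom classes at two
# time points (e.g., stabilization and 6 months after discharge).
# The class assignments below are placeholders, not the study's latent classes.
import pandas as pd

classes = ["mild", "grief-sleep-fatigue", "severe"]
df = pd.DataFrame({
    "class_t1": ["mild", "mild", "severe", "grief-sleep-fatigue", "severe", "mild"],
    "class_t2": ["mild", "mild", "severe", "mild", "severe", "mild"],
})

# Row-normalized cross-tabulation: P(class at t2 | class at t1).
transition = pd.crosstab(df["class_t1"], df["class_t2"], normalize="index")
transition = transition.reindex(index=classes, columns=classes, fill_value=0.0)
print(transition.round(2))
```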
abstract_id: PUBMED:37696638
Remote interventions for informal caregivers of patients with stroke: a systematic review and meta-analysis. Objectives: It is unclear whether remote interventions are effective in improving outcomes of informal caregivers of patients who had a stroke. We synthesised evidence for the impact of remote interventions on informal caregivers of patients who had a stroke. Moreover, we also analysed its potential effects on patients who had a stroke.
Design: Systematic review and meta-analysis.
Data Sources: PubMed, Excerpta Medica Database, Web of Science, the Cochrane Library, China National Knowledge Infrastructure, Wanfang Database and China Science and Technology Journal Database were searched from inception up to 1 February 2022.
Eligibility Criteria: We included randomised controlled trials (RCTs) that assessed the effect of remote interventions on informal caregivers who provide unpaid care for patients who had a stroke living at home compared with traditional interventions, including with respect to caregivers' mood, care burden, life satisfaction and perceived competence. Moreover, we considered the potential impact of remote interventions on the depressive and anxiety symptoms, functional rehabilitation and re-admission of patients who had a stroke. Only studies published in Chinese or English were included. We excluded studies of interventions aimed at healthcare professionals or patients who had a stroke and those that could not provide complete data.
Data Extraction And Synthesis: Data analyses were performed using RevMan V.5.3. The Cochrane Collaboration risk of bias tool for RCTs was used to evaluate the quality of the included studies, and the review is reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. For continuous outcomes, we calculated the mean difference or standardised mean difference (SMD) and 95% CIs. The Grading of Recommendations, Assessment, Development, and Evaluations method was used to assess the certainty of the evidence.
Results: Eight RCTs with a total of 733 participants were included. Compared with traditional interventions, for informal caregivers, we found that remote interventions did not produce significant effects on depressive symptoms (SMD -0.04, 95% CI -0.24 to 0.15), anxiety symptoms (SMD -0.26, 95% CI -0.94 to 0.43), care burden (SMD -0.06, 95% CI -0.56 to 0.45), life satisfaction (SMD -0.16, 95% CI -0.43 to 0.11), or perceived competence (SMD 0.37, 95% CI -0.23 to 0.96). Similarly, for patients who had a stroke, remote interventions had no significant effect on depression (SMD 0.16, 95% CI -0.61 to 0.93) or anxiety symptoms (SMD -0.34, 95% CI -0.72 to 0.04). The effects of remote interventions on functional rehabilitation and re-admission in patients who had a stroke were evaluated by three studies and two studies, respectively, but the studies were too varied to combine their data in meta-analysis.
Conclusions: Current evidence suggests that remote interventions for informal caregivers of patients who had a stroke have no significant superiority over traditional interventions. However, the quality of the included studies was low and more high-quality evidence is required to determine the possible impacts of remote interventions.
Prospero Registration Number: CRD42022313544.
abstract_id: PUBMED:33145603
Cognitive and emotional symptoms in patients with first-ever mild stroke: The syndrome of hidden impairments. Objective: To evaluate the prevalence of cognitive and emotional impairments one year after first-ever mild stroke in younger patients. Design: Prospective, observational cohort study.
Subjects: A consecutive sample of 117 previously cognitively healthy patients aged 18-70 years with mild stroke (National Institutes of Health Stroke Scale score ≤ 3) were included in 2 hospitals in Norway during a 2-year period.
Methods: At 12-month follow-up, patients were assessed using validated instruments for essential cognitive domains, fatigue, depression, anxiety, apathy and pathological laughter and crying.
Results: In total, 78 patients (67%) had difficulty with one or a combination of the cognitive domains psychomotor speed, attention, executive and visuospatial function, and memory. Furthermore, 50 patients (43%) had impairment in either one or a combination of the emotional measures for anxiety, depressive symptoms, fatigue, apathy or emotional lability. A total of 32 patients (28%) had both cognitive and emotional impairments. Only 21 patients (18%) scored within the reference range in all the cognitive and emotional tools.
Conclusion: Hidden impairments are common after first-ever mild stroke in younger patients. Stroke physicians should screen for hidden impairments using appropriate tools.
abstract_id: PUBMED:28851230
Post-stroke depression as a predictor of caregivers burden of acute ischemic stroke patients in China. Our aim was to explore the independent contribution of post-stroke depression (PSD) to the caregiver burden of acute ischemic stroke patients. A cross-sectional survey was performed with 271 acute ischemic stroke patients in the Huai-He Hospital and First People's Hospital of Kaifeng City in China. PSD was assessed by the Self-rating Depressive Scale, and caregiver burden was assessed by the Zarit Caregiver Burden Interview. Clustered logistic regression was applied to identify the impact of PSD on caregiver burden. In the results, female patients, normal muscle strength and PSD were associated with caregiver burden. PSD had an independent influence of 17.2% on the risk of caregiver burden; this independent influence was smaller than that of the sociodemographic characteristics of caregivers and the clinical factors of the stroke patients. This study suggests that PSD may have a modest influence on caregiver burden.
abstract_id: PUBMED:35196958
Predictors of cognitive and emotional symptoms 12 months after first-ever mild stroke. Even mild strokes may affect the patients' everyday life by impairing cognitive and emotional functions. Our aim was to study predictors of such impairments one year after first-ever mild stroke. We included cognitively healthy patients ≤ 70 years with acute mild stroke. Vascular risk factors, sociodemographic factors and stroke classifications were recorded. At one-year post-stroke, different domains related to cognitive and emotional function were assessed with validated instruments. Logistic regression analyses were performed to identify predictors of cognitive and emotional outcome. Of 117 patient assessed at follow-up, only 21 patients (18%) scored within the reference range on all cognitive and emotional assessments. Younger age, multiple infarcts, and being outside working life at stroke onset were independent predictors of cognitive impairments (psychomotor speed, attention, executive and visuospatial function, memory). Female gender and a higher National Institutes of Health Stroke Scale (NIHSS) score at discharge were significantly associated with emotional impairments (anxiety, depressive symptoms, fatigue, apathy, emotional lability) after one year, but these associations were only seen in the unadjusted models. In conclusion, patients in working age may profit from a follow-up during the post-stroke period, with extra focus on cognitive and emotional functions.
abstract_id: PUBMED:26987919
The relationship between depressive symptoms and diabetic complications in elderly patients with diabetes: Analysis using the Diabetes Study from the Center of Tokyo Women's Medical University (DIACET). Aims: To investigate the association between likelihood or severity of depression and symptoms associated with diabetic complications in elderly Japanese patients with diabetes.
Methods: This single-center cross-sectional study included 4283 patients with diabetes, 65 years and older (mean age was 73 ± 6 years, 38.7% were women, 3.9% had type 1 diabetes). Participants completed a self-administered questionnaire including items on subjective symptoms associated with diabetic microangiopathy, frequency of clinical visits due to vascular diseases (heart diseases, stroke, or gangrene), hospitalization, and the Patient Health Questionnaire-9 (PHQ-9), a simple but reliable measure of depression. The associations between severity of depression and diabetic complications were examined using logistic regression analysis.
Results: According to the PHQ-9 scores, patients were classified into the following 3 categories: 0-4 points (n=2975); 5-9 points (n=842); and 10 or more points (n=466). Higher PHQ-9 scores were associated with increased odds ratios for retinopathy, symptoms related to peripheral polyneuropathy and autonomic neuropathy, and end-stage renal disease requiring dialysis after adjustment for age, gender, smoking status, and HbA1c (all p<0.05).
Conclusions: Significant relationships were found between depression severity and chronic diabetic complications among elderly Japanese patients with diabetes.
abstract_id: PUBMED:35768781
Functional decline, long term symptoms and course of frailty at 3-months follow-up in COVID-19 older survivors, a prospective observational cohort study. Background: Aging is one of the most important prognostic factors increasing the risk of clinical severity and mortality of COVID-19 infection. However, among patients over 75 years, little is known about post-acute functional decline.
Objective: The aim of this study was to identify factors associated with functional decline 3 months after COVID-19 onset, to identify long term COVID-19 symptoms and transitions between frailty statesafter COVID-19 onset in older hospitalized patients.
Methods: This prospective observational study included COVID-19 patients consecutively hospitalized from March to December 2020 in the Acute Geriatric Ward of Nantes University Hospital. Functional decline, frailty status and long-term symptoms were assessed at the 3-month follow-up. Functional status was assessed using the simplified Activities of Daily Living scale (ADL). Frailty status was evaluated using the Clinical Frailty Scale (CFS). We performed multivariable analyses to identify factors associated with functional decline.
Results: Among the 318 patients hospitalized for COVID-19 infection, 198 were alive 3 months after discharge. At 3 months, functional decline occurred in 69 (36%) patients. In multivariable analysis, a significant association was found between functional decline and stroke (OR = 4.57, p = 0.003), history of depressive disorder (OR = 3.05, p = 0.016), complications (OR = 2.24, p = 0.039), length of stay (OR = 1.05, p = 0.025) and age (OR = 1.08, p = 0.028). At 3 months, 75 patients described long-term symptoms (49.0%). Of those with frailty (CFS scores ≥5) at the 3-month follow-up, 30% were not frail at baseline. Increasing frailty, defined by a worse CFS state between baseline and 3 months, occurred in 41 patients (26.8%).
Conclusions: This study provides evidence that both the severity of the COVID-19 infection and preexisting medical conditions are associated with functional decline months after the infection. This encourages practitioners to establish a personalized discharge care plan based on a multidimensional geriatric assessment and, in parallel, on an evaluation of clinical severity.
abstract_id: PUBMED:30375007
Early Symptom Measurement of Post-Stroke Depression: Development and validation of a new short version. Aims: The aim of this study was to develop and test the psychometric properties of the Early Symptom Measurement of Post-Stroke Depression-Short Form (ESMPSD-SF).
Background: The ESMPSD is a specific measurement tool designed to detect early depressive symptoms in acute stroke patients, but it is too long for use in busy clinical settings.
Design: A psychometric study was conducted.
Methods: Five hundred and twenty-one post-stroke patients completed two questionnaires, the demographic and the ESMPSD questionnaire, over a period of 10 months, from July 2016-April 2017. The item reduction process was used to reduce the number of items in the ESMPSD questionnaire and consisted of item analysis, exploratory, and confirmatory factor analysis.
Results: The item reduction process resulted in a 12-item short version questionnaire with evidence of acceptable construct validity and internal reliability. Four factors explaining high total variance were extracted: "low," "guilt," "emotional," and "wakefulness". Estimates of all confirmatory model fit indices met the standard criteria. All standardized factor loading estimates of the 12 items met the standard criteria and the variances explained by the items were acceptable. Moreover, internal reliability estimates of the 12-item questionnaire were acceptable, and the corrected item-total correlation and item-subscale correlation also demonstrated evidence of acceptable reliability of the short form questionnaire.
Conclusions: The ESMPSD-SF demonstrates evidence of acceptable reliability and validity to screen early depressive symptoms in acute stroke patients in busy clinical settings.
Answer: Depressive symptoms are not nonspecific in patients with acute stroke. The study by Robinson et al. (PUBMED:1882994) found that both autonomic and psychological depressive symptoms are strongly associated with depressed mood in acute stroke patients. Their research indicated that depressive symptoms frequently occur in acute stroke patients with depressed mood, and adjusting the diagnostic criteria for nonspecific autonomic symptoms only slightly altered the rate of major depression diagnosis. This suggests that depressive symptoms are a relevant and specific concern in the context of acute stroke.
Additionally, other studies have shown that combining instrumental music therapy with standard treatment can reduce depression levels in stroke patients (PUBMED:32728586), and that there are distinct patterns of depressive symptoms that can be identified and that transition over time in young and middle-aged stroke patients (PUBMED:37916850). Furthermore, post-stroke depression (PSD) has been identified as a predictor of caregiver burden (PUBMED:28851230), and cognitive and emotional symptoms, including depression, are prevalent even after mild strokes (PUBMED:33145603). These findings underscore the specificity and significance of depressive symptoms in the context of stroke and their impact on both patients and caregivers.
Instruction: Is there an association between hypothyroidism and open-angle glaucoma in an elderly population?
Abstracts:
abstract_id: PUBMED:18355921
Is there an association between hypothyroidism and open-angle glaucoma in an elderly population? An epidemiologic study. Purpose: There have been conflicting reports pertaining to the association between hypothyroidism and open-angle glaucoma (OAG). The purpose of this study was to assess the hypothesized association between preexisting hypothyroidism and development of OAG in a population-based setting.
Design: Case-control study.
Participants: The study population and controls were taken from all patients in a large US managed care database aged ≥60 years with 4 years of continuous eligibility dating from January 1, 2001, through December 31, 2004.
Methods: A total of 4728 newly diagnosed OAG patients were matched with 14 184 controls (3:1 matching) based on age and gender.
Main Outcome Measures: Conditional logistic regression was used to assess the relationship between hypothyroidism and OAG while controlling for various risk factors (ischemic heart disease, cerebrovascular disease, hyperlipidemia, hypertension, arterial disease, diabetes, and migraines).
Results: Based on a diagnosis of hypothyroidism or use of thyroid replacement therapy, prior hypothyroidism was found in 815 (17.2%) OAG subjects and in 2498 (17.6%) control subjects. After adjusting for the specified risk factors, a prior diagnosis of hypothyroidism was not associated with OAG when cases were compared with control subjects (odds ratio, 0.93; 95% confidence interval, 0.85-1.01).
Conclusions: An association between prior hypothyroidism and OAG development was not found. The large proportion of patients receiving thyroid replacement therapy may have negated any OAG-related consequences of hypothyroidism.
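Several of the studies cited for this question, including the one above, analyze age- and gender-matched case-control sets with conditional logistic regression, which conditions the likelihood on each matched set. The following is a hedged sketch only: the simulated data frame, covariate names, and 3:1 matching structure are illustrative assumptions, not the managed-care database used in the study.

```python
# Sketch: conditional logistic regression for a 1:3 matched case-control design.
# Variable names and the toy data are placeholders, not the cited study's data.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(2)
n_sets, controls_per_case = 200, 3

rows = []
for s in range(n_sets):
    for is_case in [1] + [0] * controls_per_case:
        rows.append({
            "matched_set": s,
            "case": is_case,                        # 1 = OAG case, 0 = matched control
            "hypothyroid": rng.binomial(1, 0.20),   # prior hypothyroidism
            "diabetes": rng.binomial(1, 0.25),
            "hypertension": rng.binomial(1, 0.45),
        })
df = pd.DataFrame(rows)

X = df[["hypothyroid", "diabetes", "hypertension"]]
model = ConditionalLogit(df["case"], X, groups=df["matched_set"])
result = model.fit()

print(np.exp(result.params))      # odds ratios
print(np.exp(result.conf_int()))  # 95% confidence intervals
```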
abstract_id: PUBMED:9246288
Primary open angle glaucoma and hypothyroidism: chance or true association? The prevalence of hypothyroidism in British patients with primary open angle glaucoma (POAG) was examined. A recently reported study from Montreal had shown a significant increase (p < 0.004) in biochemical hypothyroidism (23.4%) in a population of 64 POAG patients compared with controls (4.7%). Mechanisms for a possible causal association between the two diseases are discussed, including mucopolysaccharide deposition in the trabecular meshwork and vasculopathy altering ocular bloodflow. Reports of improved glaucoma control following treatment of hypothyroidism are discussed. This study examined 100 consecutive patients with POAG in a specialist glaucoma clinic. All patients were questioned regarding symptoms of thyroid dysfunction and previous thyroid disease. All patients not already taking thyroxine underwent an assay of thyroid stimulating hormone. The 4% (95% CI 1.1-9.4%) prevalence of overt hypothyroidism in our study shows no clinically significant increase either over controls in the Montreal study or over our local population. We conclude that in our local population there is no evidence for a clinically important association of hypothyroidism with glaucoma.
abstract_id: PUBMED:25197907
Allergic rhinitis is associated with open-angle glaucoma: a population-based case-control study. Background: Despite many reports linking allergic rhinitis (AR) to problems of the eye, the relationship between AR and open-angle glaucoma (OAG) has not been studied. The purpose of this epidemiology study was to provide an estimation of the association of OAG with AR by using a population-based data set in Taiwan.
Methods: We retrieved our study sample for this case-control study from the Longitudinal Health Insurance Database 2000. We extracted 7063 subjects with OAG as cases and 21,189 matched controls (three controls per case). We used conditional logistic regression analyses to calculate the odds ratio (OR) and corresponding 95% confidence interval (CI) to describe the association between OAG and having previously been diagnosed with AR.
Results: A chi-squared test showed that there was a significant difference in the prevalence of prior AR between cases and controls (28.8% versus 22.3%; p < 0.001). A conditional logistic regression analysis suggested that the OR of having previously been diagnosed with AR for cases was 1.40 (95% CI, 1.31∼1.48; p < 0.001) compared with controls after adjusting for monthly income, geographic region, urbanization level, hypertension, diabetes, asthma, coronary heart disease, hyperlipidemia, and hypothyroidism. It also revealed that OAG was consistently and significantly associated with prior AR across all age groups. In particular, subjects aged 50∼59 years had the highest OR for prior AR among cases compared with controls (OR, 1.77; 95% CI, 1.53∼2.06; p < 0.001).
Conclusion: This outcome research found that there was an association between AR and OAG.
abstract_id: PUBMED:23601803
Obstructive sleep apnea and increased risk of glaucoma: a population-based matched-cohort study. Purpose: Previous studies had reported an increased prevalence of glaucoma in patients with obstructive sleep apnea (OSA). However, the risk of open-angle glaucoma (OAG) among patients with OSA remains unclear. Using a nationwide, population-based dataset in Taiwan, this study aimed to examine the prevalence and risk of OAG among patients with OSA during a 5-year follow-up period after a diagnosis of OSA.
Design: A retrospective, matched-cohort study.
Participants And Controls: This study used data sourced from the Longitudinal Health Insurance Database 2000. We included 1012 subjects with OSA in the study cohort and randomly selected 6072 subjects in the comparison group.
Methods: Each subject in this study was individually traced for a 5-year period to identify those subjects who subsequently received a diagnosis of OAG. Cox proportional hazards regression was performed to calculate the 5-year risk of OAG between the study and comparison cohorts.
Main Outcome Measures: The incidence and risk of OAG between the study and comparison groups.
Results: During the 5-year follow-up period, the incidence rate per 1000 person-years was 11.26 (95% confidence interval [CI], 8.61-14.49) and 6.76 (95% CI, 5.80-7.83) for subjects with and without OSA, respectively. After adjusting for monthly income, geographic region, diabetes, hypertension, coronary heart disease, obesity, hyperlipidemia, renal disease, hypothyroidism, and the number of outpatient visits for ophthalmologic care during the follow-up period, stratified Cox proportional hazards regression revealed that the hazard ratio for OAG within the 5-year period was 1.67 (95% CI, 1.30-2.17; P<0.001) for subjects with OSA relative to comparison subjects.
Conclusions: Our results suggest that OSA is associated with an increased risk of subsequent OAG diagnosis during a 5-year follow-up period.
Financial Disclosure(s): The authors have no proprietary or commercial interest in any of the materials discussed in this article.
abstract_id: PUBMED:15350317
Hypothyroidism and the development of open-angle glaucoma in a male population. Purpose: To determine if hypothyroidism is associated with an increased risk of glaucoma using a large cohort of patients.
Design: Nested case-control study.
Participants: Patients seen at the Veterans Affairs Medical Center in Birmingham, Alabama with newly diagnosed glaucoma between 1997 and 2001 were selected (n = 590) and age-matched to nonglaucoma controls (n = 5897).
Methods: Patient information was extracted from the Birmingham Veterans Affairs Medical Center data files containing demographic, clinical, and medication information. An index date was assigned to the glaucoma subjects corresponding to the time of diagnosis. Patients who had a glaucoma diagnosis before the observation period of the study were excluded. Ten controls were randomly selected for each patient and matched on age (+/-1 year) and an encounter on or before the index date of the matched case.
Main Outcome Measures: Odds ratios (ORs) for the association between the prior diagnosis of hypothyroidism and the risk of developing glaucoma with adjustment for the presence of diabetes, lipid metabolism disorders, hypertension, cardiovascular disease, cerebrovascular disease, arterial disease, and migraines.
Results: After adjustment for the other potential risk factors, patients were significantly more likely to have prior hypothyroidism than controls (OR, 1.40; 95% confidence interval, 1.01-1.97).
Conclusions: Our study has demonstrated a significantly greater risk of subjects with a preexisting diagnosis of hypothyroidism developing glaucoma, compared with controls, in a large Veterans Affairs Medical Center population.
abstract_id: PUBMED:8414419
An association between hypothyroidism and primary open-angle glaucoma. Purpose: To test the hypothesis that there is an association between hypothyroidism and primary open-angle glaucoma.
Methods: The study was conducted in a case-control fashion. Sixty-four patients with primary open-angle glaucoma were evaluated for hypothyroidism by history and by undergoing a thyroid-stimulating hormone immunoradiometric assay. Sixty-four control subjects from the general eye clinic were evaluated in the same manner. Patients found to have elevated thyroid-stimulating hormone immunoradiometric assay were evaluated by an endocrinologist for hypothyroidism.
Results: Of the primary open-angle glaucoma group, 23.4% had hypothyroidism. A diagnosis was made previously in 12.5% patients, and 10.9% were newly diagnosed. Of the control subjects, 4.7% had hypothyroidism. A diagnosis had been made previously in 1.6% of the control subjects, and 3.1% were newly diagnosed. The difference between the two groups was found to be statistically significant.
Conclusion: A statistically significant association between hypothyroidism and primary open-angle glaucoma is demonstrated. There is a large group (10.9%) of patients with primary open-angle glaucoma with undiagnosed hypothyroidism.
abstract_id: PUBMED:20557938
Hypothyroidism and the risk of developing open-angle glaucoma: a five-year population-based follow-up study. Objective: To investigate the risk of open-angle glaucoma (OAG) after a diagnosis of hypothyroidism.
Design: A retrospective, population-based follow-up study using an administrative database.
Participants: The study group comprised 257 hypothyroidism patients. The comparison group included 2056 subjects.
Methods: Data were retrospectively collected from the Taiwan Longitudinal Health Insurance Database. The study cohort comprised patients aged ≥ 60 who received a first diagnosis of hypothyroidism (International Classification of Diseases, Ninth Revision, Clinical Modification code 244.9) from 1997 to 2001 (n = 257). The comparison cohort consisted of randomly selected patients without hypothyroidism who were aged ≥ 60 and had no diagnosis of glaucoma before 2001 (8 for every OAG patient; n = 2056). Each sampled patient was tracked for 5 years from their index visit. Cox proportional hazard regressions were used to compute the 5-year OAG-free survival rate, after adjusting for possible confounding factors.
Main Outcome Measures: The risk of developing OAG during the 5-year follow-up period.
Results: Open-angle glaucoma developed in 7.4% of patients with hypothyroidism and 3.8% of patients in the comparison cohort during the follow-up period. Hypothyroid patients had a significantly lower 5-year OAG-free survival rate than patients in the comparison cohort. After adjusting for patients' age, gender, monthly income, urbanization level, and comorbid medical disorders, hypothyroidism patients were found to have a 1.78-fold (95% confidence interval [CI], 1.04-3.06) greater risk of developing OAG than the comparison cohort. This association remained significant in untreated hypothyroidism patients (adjusted hazard ratio [HR], 2.37; 95% CI, 1.10-5.09) and became statistically nonsignificant in patients treated with levothyroxine (adjusted HR, 1.73; 95% CI, 0.89-3.38).
Conclusions: Hypothyroid patients had a significantly increased risk of OAG development during the 5-year follow-up period. Levothyroxine seemed to be protective.
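The follow-up studies cited here estimate the risk of incident OAG with Cox proportional hazards regression over a 5-year window. A minimal sketch of such a model using the lifelines package is shown below; the simulated cohort, covariates, and effect sizes are placeholders chosen for illustration, not the insurance-database variables analyzed in the studies.

```python
# Sketch: Cox proportional hazards model for time to open-angle glaucoma (OAG)
# over a 5-year follow-up. The simulated cohort is a placeholder only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 2000
hypothyroid = rng.binomial(1, 0.11, n)
age = rng.normal(70, 6, n)

# Simulated follow-up: exponential event times, administratively censored at 5 years.
rate = 0.01 * np.exp(0.5 * hypothyroid + 0.03 * (age - 70))
event_time = rng.exponential(1.0 / rate)
time = np.minimum(event_time, 5.0)
event = (event_time <= 5.0).astype(int)   # 1 = incident OAG, 0 = censored

df = pd.DataFrame({"time": time, "event": event,
                   "hypothyroid": hypothyroid, "age": age})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary(decimals=2)             # hazard ratios are exp(coef)
```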
abstract_id: PUBMED:14716330
Open-angle glaucoma and systemic thyroid disease in an older population: The Blue Mountains Eye Study. Purpose: To assess whether thyroid disease is independently associated with open-angle glaucoma (OAG), using history of thyroid disease and current thyroxine use.
Methods: The Blue Mountains Eye Study examined 3654 persons, aged 49-97 years. Interviewers collected self-reported history of diagnosis and treatment for thyroid disease. Eye examinations included applanation tonometry, stereoscopic optic disc photography and automated perimetry. OAG was diagnosed from the presence of matching typical glaucomatous field changes and optic disc cupping, independent of intraocular pressure. Associations between thyroid disease (history and treatment) and OAG were assessed in a multivariate model.
Results: Of 324 participants (8.9%) reporting history of thyroid disease, 147 (4.0%) were currently using thyroxine. Although we could not accurately categorize the thyroid disorder for all cases, current use of thyroxine suggests a prior hypothyroid state. All thyroid disease subgroups affected women more frequently than men, P=0.001. OAG was diagnosed in 108 subjects (3.0%) and was more frequent in those reporting past thyroid disease (4.6 vs 2.8%). This relationship was not statistically significant after adjusting for potential confounders, multivariate odds ratio (OR) 1.6; 95% confidence interval (95% CI) 0.9-2.9. OAG was significantly more frequent, however, in subjects reporting current thyroxine use (6.8 vs 2.8%), multivariate OR 2.1; 95% CI 1.0-4.4, or history of thyroid surgery (6.5 vs 2.8%), multivariate OR 2.5; 95% CI 1.0-6.2.
Conclusions: This population-based study suggests that thyroid disease, indicated by current thyroxine use or past thyroid surgery, could be independently related to OAG.
abstract_id: PUBMED:24263380
Increased risk of open-angle glaucoma following chronic rhinosinusitis: a population-based matched-cohort study. Purpose: Anatomically, the eyes and paranasal sinuses are neighboring structures and some studies have mentioned eye disease in conjunction with chronic rhinosinusitis (CRS). However, to the best of our knowledge, no prior research has investigated the risk of developing open-angle glaucoma (OAG) among CRS patients. This study aims to provide an estimated risk of developing OAG among patients with CRS by using a population-based data set in Taiwan.
Methods: This retrospective cohort study used data sourced from the 'Longitudinal Health Insurance Database 2000'. A total of 15,642 CRS patients were included in the study cohort and 46,926 subjects were randomly extracted as a comparison cohort. A Cox proportional-hazards regression analysis was performed to calculate the 5-year risk of subsequently developing OAG following a diagnosis of CRS between the study cohort and the comparison cohort.
Results: The incidence rate of developing OAG over the 5-year follow-up period was 5.45 (95% CI: 4.95-5.98) per 1000 person-years for the study cohort and 2.80 (95% CI: 2.60-3.03) per 1000 person-years for the comparison cohort. After censoring the cases that died over the 5-year period and adjusting for the factors of monthly income, geographic region, hypertension, diabetes, coronary heart disease, hyperlipidemia, and hypothyroidism, the hazard ratio for developing OAG over the 5-year period for subjects with CRS relative to subjects without CRS was 1.73 (95% CI: 1.53-1.96).
Conclusion: We found that those subjects with CRS had a significantly higher risk of developing OAG over the 5-year follow-up period as compared with subjects without CRS.
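Editorial note (not part of the abstract above): the incidence figures quoted here are rates per person-time, and the crude rate ratio they imply can be compared with the adjusted hazard ratio to see how much of the raw difference the covariate adjustment absorbs:

```latex
\text{incidence rate} = \frac{\text{new OAG cases}}{\text{person-years at risk}} \times 1000, \qquad \frac{5.45}{2.80} \approx 1.95
```

The adjusted hazard ratio of 1.73 is somewhat smaller than the crude rate ratio of roughly 1.95, consistent with part of the unadjusted excess risk in the CRS cohort being attributable to the listed comorbidities and demographic factors.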
abstract_id: PUBMED:29069095
Hypothyroidism as a risk factor for open angle glaucoma: A systematic review and meta-analysis. Purpose: The relationship between hypothyroidism and primary open angle glaucoma (POAG) has attracted intense interest recently, but the reported results have been controversial. This meta-analysis was carried out to determine the association between hypothyroidism and POAG.
Methods: The literature was identified from three databases (Web of Science, Embase, and PubMed). The meta-analyses were performed using random-effects models, with results reported as adjusted odds ratios (ORs) with 95% confidence intervals (CI 95%).
Results: A total of 11 studies meeting the inclusion criteria were included in the final meta-analysis. The pooled OR based on 11 risk estimates showed a statistically significant increased risk of POAG prevalence among individuals with hypothyroidism (OR = 1.64, 95% CI = 1.27-2.13). Substantial heterogeneity among these studies was detected (P < 0.001; I² = 83.2%). Sub-group analysis revealed that the cohort studies and case-control studies showed a significant association between hypothyroidism and POAG, which was not observed in cross-sectional studies. There was no significant publication bias in this study.
Conclusions: The findings of this meta-analysis indicate that individuals with hypothyroidism have an increased risk of developing POAG.
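Editorial note (not part of the abstract above; the abstract does not state which estimator was used): in a random-effects meta-analysis, study-level log odds ratios y_i with within-study variances v_i are typically pooled with weights that also include an estimated between-study variance τ², and heterogeneity is summarized by I²:

```latex
\hat{\mu} = \frac{\sum_{i} w_i\, y_i}{\sum_{i} w_i}, \quad w_i = \frac{1}{v_i + \hat{\tau}^2}, \qquad I^2 = \max\!\left(0,\ \frac{Q - (k-1)}{Q}\right) \times 100\%
```

where Q is Cochran's heterogeneity statistic and k is the number of studies. The reported I² of 83.2% indicates that most of the variation among the 11 estimates reflects genuine between-study differences (for example in design, as the sub-group analysis suggests) rather than sampling error alone.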
Answer: The association between hypothyroidism and open-angle glaucoma (OAG) in an elderly population has been investigated in several studies, but the results have been conflicting. Some studies have found a significant association between hypothyroidism and OAG. For instance, a study conducted on a male population at the Veterans Affairs Medical Center in Birmingham, Alabama, found that patients were significantly more likely to have prior hypothyroidism than controls, suggesting an increased risk of developing glaucoma among those with a preexisting diagnosis of hypothyroidism (PUBMED:15350317). Another study that examined the risk of OAG after a diagnosis of hypothyroidism in a Taiwanese population found that hypothyroid patients had a significantly increased risk of developing OAG during a 5-year follow-up period (PUBMED:20557938). A systematic review and meta-analysis also concluded that individuals with hypothyroidism have an increased risk of developing POAG (PUBMED:29069095).
However, other studies have not found a significant association. A large US managed care database study did not find an association between prior hypothyroidism and OAG development after adjusting for various risk factors (PUBMED:18355921). Similarly, a study of British patients with POAG found no clinically significant increase in the prevalence of overt hypothyroidism compared to controls or the local population, leading to the conclusion that there was no evidence for a clinically important association of hypothyroidism with glaucoma in their local population (PUBMED:9246288). The Blue Mountains Eye Study also did not find a statistically significant relationship between past thyroid disease and OAG after adjusting for potential confounders, although it did find that OAG was more frequent in subjects reporting current thyroxine use or history of thyroid surgery (PUBMED:14716330).
In summary, while some studies suggest an association between hypothyroidism and OAG, particularly in certain populations or under specific conditions, other studies do not support this association. The evidence is mixed, and further research may be needed to clarify the relationship between hypothyroidism and OAG in the elderly population. |
Instruction: Are All Oscillators Created Equal?
Abstracts:
abstract_id: PUBMED:33643004
Dynamics of Structured Networks of Winfree Oscillators. Winfree oscillators are phase oscillator models of neurons, characterized by their phase response curve and pulsatile interaction function. We use the Ott/Antonsen ansatz to study large heterogeneous networks of Winfree oscillators, deriving low-dimensional differential equations which describe the evolution of the expected state of networks of oscillators. We consider the effects of correlations between an oscillator's in-degree and out-degree, and between the in- and out-degrees of an "upstream" and a "downstream" oscillator (degree assortativity). We also consider correlated heterogeneity, where some property of an oscillator is correlated with a structural property such as degree. We finally consider networks with parameter assortativity, coupling oscillators according to their intrinsic frequencies. The results show how different types of network structure influence its overall dynamics.
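Editorial note (not part of the abstract above): a commonly used form of the Winfree model couples each oscillator's phase to the whole population through a pulsatile output function P of the other oscillators' phases and a phase response curve Q of its own phase,

```latex
\dot{\theta}_i = \omega_i + \frac{\varepsilon}{N} \sum_{j=1}^{N} P(\theta_j)\, Q(\theta_i), \qquad i = 1, \dots, N,
```

where ω_i is the intrinsic frequency and ε the coupling strength. The Ott/Antonsen ansatz mentioned above reduces the dynamics of large heterogeneous populations of such units to a small number of ordinary differential equations for population-level order parameters.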
abstract_id: PUBMED:29881643
Systems and synthetic biology approaches in understanding biological oscillators. Background: Self-sustained oscillations are a ubiquitous and vital phenomenon in living systems. From primitive single-cellular bacteria to the most sophisticated organisms, periodicities have been observed in a broad spectrum of biological processes such as neuron firing, heart beats, cell cycles, circadian rhythms, etc. Defects in these oscillators can cause diseases from insomnia to cancer. Elucidating their fundamental mechanisms is of great significance to diseases, and yet challenging, due to the complexity and diversity of these oscillators.
Results: Approaches in quantitative systems biology and synthetic biology have been most effective by simplifying the systems to contain only the most essential regulators. Here, we will review major progress that has been made in understanding biological oscillators using these approaches. The quantitative systems biology approach allows for identification of the essential components of an oscillator in an endogenous system. The synthetic biology approach makes use of the knowledge to design the simplest, de novo oscillators in both live cells and cell-free systems. These synthetic oscillators are tractable to further detailed analysis and manipulations.
Conclusion: With the recent development of biological and computational tools, both approaches have made significant achievements.
abstract_id: PUBMED:29383730
Capacitive coupling synchronizes autonomous microfluidic oscillators. Even identically designed autonomous microfluidic oscillators have device-to-device oscillation variability that arises due to inconsistencies in fabrication, materials, and operation conditions. This work demonstrates, experimentally and theoretically, that with appropriate capacitive coupling these microfluidic oscillators can be synchronized. The size and characteristics of the capacitive coupling needed and the range of input flow rate differences that can be synchronized are also characterized. In addition to device-to-device variability, there is also within-device oscillation noise that arises. An additional advantage of coupling multiple fluidic oscillators together is that the oscillation noise decreases. The ability to synchronize multiple autonomous oscillators is also a first step towards enhancing their usefulness as tools for biochemical research applications where multiplicate experiments with identical temporal-stimulation conditions are required.
abstract_id: PUBMED:29757252
Biological Oscillators in Nanonetworks-Opportunities and Challenges. One of the major issues in molecular communication-based nanonetworks is the provision and maintenance of a common time knowledge. To stay true to the definition of molecular communication, biological oscillators are the potential solutions to achieve that goal as they generate oscillations through periodic fluctuations in the concentrations of molecules. Through the lens of a communication systems engineer, the scope of this survey is to explicitly classify, for the first time, existing biological oscillators based on whether they are found in nature or not, to discuss, in a tutorial fashion, the main principles that govern the oscillations in each oscillator, and to analyze oscillator parameters that are most relevant to communication engineer researchers. In addition, the survey highlights and addresses the key open research issues pertaining to several physical aspects of the oscillators and the adoption and implementation of the oscillators to nanonetworks. Moreover, key research directions are discussed.
abstract_id: PUBMED:22379491
Transitory behaviors in diffusively coupled nonlinear oscillators. We study collective behaviors of diffusively coupled oscillators which exhibit out-of-phase synchrony for the case of weakly interacting two oscillators. In large populations of such oscillators interacting via one-dimensionally nearest neighbor couplings, there appear various collective behaviors depending on the coupling strength, regardless of the number of oscillators. Among others, we focus on an intermittent behavior consisting of the all-synchronized state, a weakly chaotic state and some sorts of metachronal waves. Here, a metachronal wave means a wave with orderly phase shifts of oscillations. Such phase shifts are produced by the dephasing interaction which produces the out-of-phase synchronized states in two coupled oscillators. We also show that the abovementioned intermittent behavior can be interpreted as in-out intermittency where two saddles on an invariant subspace, the all-synchronized state and one of the metachronal waves play an important role.
abstract_id: PUBMED:27222579
Search for supersolidity in solid 4He using multiple-mode torsional oscillators. In 2004, Kim and Chan (KC) reported a decrease in the period of torsional oscillators (TO) containing samples of solid (4)He, as the temperature was lowered below 0.2 K [Kim E, Chan MHW (2004) Science 305(5692):1941-1944]. These unexpected results constituted the first experimental evidence that the long-predicted supersolid state of solid (4)He may exist in nature. The KC results were quickly confirmed in a number of other laboratories and created great excitement in the low-temperature condensed-matter community. Since that time, however, it has become clear that the period shifts seen in the early experiments can in large part be explained by an increase in the shear modulus of the (4)He solid identified by Day and Beamish [Day J, Beamish J (2007) Nature 450(7171):853-856]. Using multiple-frequency torsional oscillators, we can separate frequency-dependent period shifts arising from changes in the elastic properties of the solid (4)He from possible supersolid signals, which are expected to be independent of frequency. We find in our measurements that as the temperature is lowered below 0.2 K, a clear frequency-dependent contribution to the period shift arising from changes in the (4)He elastic properties is always present. For all of the cells reported in this paper, however, there is always an additional small frequency-independent contribution to the total period shift, such as would be expected in the case of a transition to a supersolid state.
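Editorial note (not part of the abstract above): the measurement principle relies on the resonant period of a torsional oscillator,

```latex
P = 2\pi \sqrt{\frac{I}{K}}, \qquad \frac{\Delta P}{P} \approx \frac{1}{2}\,\frac{\Delta I}{I} \quad \text{for small changes in the moment of inertia } I,
```

where K denotes the torsion rod's stiffness (a notation chosen here; the abstract does not define symbols). Mass that decouples from the oscillation, such as a putative supersolid fraction, lowers I and produces a contribution to the period shift that is expected not to depend on the mode frequency, whereas a change in the elastic properties of the solid helium alters the effective restoring stiffness in a frequency-dependent way, which is why the multiple-frequency design described above can separate the two contributions.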
abstract_id: PUBMED:36838064
The Coupled Reactance-Less Memristor Based Relaxation Oscillators for Binary Oscillator Networks. This paper discusses the application of coupled reactance-less memristor-based oscillators (MBO) with binary output signals in oscillatory networks. A class of binary-coupled memristor oscillators provides simple integration with standard CMOS logic elements. Combining MBOs with binary logic elements ensures the operation of complex information processing algorithms. The analysis of the simplest networks based on MBOs is performed. The typical reactance-less MBO with current and potential inputs is considered. The output responses for input control signals are analyzed. It is shown that the current input signal impacts primarily the rate of memristor resistance variation, while the potential input signal changes the thresholds. Using the potential input for synchronization of coupled MBOs, and the current input to provide the necessary encoding of information, is suggested. An example of the application of coupled MBOs in oscillatory networks is given, and simulation results are presented.
abstract_id: PUBMED:27818082
Noise Induces the Population-Level Entrainment of Incoherent, Uncoupled Intracellular Oscillators. Intracellular oscillators entrain to periodic signals by adjusting their phase and frequency. However, the low copy numbers of key molecular players make the dynamics of these oscillators intrinsically noisy, disrupting their oscillatory activity and entrainment response. Here, we use a combination of computational methods and experimental observations to reveal a functional distinction between the entrainment of individual oscillators (e.g., inside cells) and the entrainment of populations of oscillators (e.g., across tissues). We demonstrate that, in the presence of intracellular noise, weak periodic cues robustly entrain the population averaged response, even while individual oscillators remain un-entrained. We mathematically elucidate this phenomenon, which we call stochastic population entrainment, and show that it naturally arises due to interactions between intrinsic noise and nonlinear oscillatory dynamics. Our findings suggest that robust tissue-level oscillations can be achieved by a simple mechanism that utilizes intrinsic biochemical noise, even in the absence of biochemical couplings between cells.
abstract_id: PUBMED:33266715
Stochastic Thermodynamics of Oscillators' Networks. We apply the stochastic thermodynamics formalism to describe the dynamics of systems of complex Langevin and Fokker-Planck equations. We provide in particular a simple and general recipe to calculate thermodynamical currents, dissipated and propagating heat for networks of nonlinear oscillators. By using the Hodge decomposition of thermodynamical forces and fluxes, we derive a formula for entropy production that generalises the notion of non-potential forces and makes transparent the breaking of detailed balance and of time reversal symmetry for states arbitrarily far from equilibrium. Our formalism is then applied to describe the off-equilibrium thermodynamics of a few examples, notably a continuum ferromagnet, a network of classical spin-oscillators and the Frenkel-Kontorova model of nano friction.
abstract_id: PUBMED:28750200
Incoherent Inputs Enhance the Robustness of Biological Oscillators. Robust biological oscillators retain the critical ability to function in the presence of environmental perturbations. Although central architectures that support robust oscillations have been extensively studied, networks containing the same core vary drastically in their potential to oscillate, and it remains elusive what peripheral modifications to the core contribute to this functional variation. Here, we have generated a complete atlas of two- and three-node oscillators computationally, then systematically analyzed the association between network structure and robustness. We found that, while certain core topologies are essential for producing a robust oscillator, local structures can substantially modulate the robustness of oscillations. Notably, local nodes receiving incoherent or coherent inputs respectively promote or attenuate the overall network robustness in an additive manner. We validated these relationships in larger-scale networks reflective of real biological oscillators. Our findings provide an explanation for why auxiliary structures not required for oscillation are evolutionarily conserved and suggest simple ways to evolve or design robust oscillators.
Answer: No, all oscillators are not created equal. The dynamics and robustness of oscillators can vary significantly depending on their structure, coupling, and the nature of their components. For instance, Winfree oscillators, which are models of neurons, exhibit dynamics that are influenced by the network structure, including correlations between oscillator degrees and assortativity based on intrinsic frequencies (PUBMED:33643004). Biological oscillators, which are found in a wide range of processes from neuron firing to circadian rhythms, can have their fundamental mechanisms elucidated by simplifying the systems to contain only the most essential regulators (PUBMED:29881643), and their robustness can be affected by local structures that modulate the core topology, with incoherent inputs promoting and coherent inputs attenuating overall network robustness (PUBMED:28750200).
Moreover, the synchronization of oscillators, such as microfluidic oscillators, can be affected by factors like capacitive coupling and device-to-device variability (PUBMED:29383730). Biological oscillators used in nanonetworks face challenges in maintaining a common time knowledge, and their classification and principles are crucial for communication engineers (PUBMED:29757252). Diffusively coupled nonlinear oscillators can exhibit a range of collective behaviors, including out-of-phase synchrony and intermittent behaviors, depending on the coupling strength (PUBMED:22379491).
Even in the search for supersolidity in solid helium using torsional oscillators, the observed behaviors are not solely due to the intrinsic properties of the oscillators but also to changes in the elastic properties of the medium (PUBMED:27222579). The design of binary-coupled memristor oscillators for oscillatory networks shows that the integration of these oscillators with logic elements can lead to complex information processing (PUBMED:36838064).
Furthermore, the entrainment of intracellular oscillators can be influenced by intracellular noise, which can robustly entrain the population-level response even when individual oscillators remain un-entrained (PUBMED:27818082). The stochastic thermodynamics of oscillators' networks also highlights that the dynamics and thermodynamics of these systems can be complex and dependent on the network configuration (PUBMED:33266715).
In summary, the diversity in the design, function, and interaction of oscillators leads to a wide range of behaviors and properties, indicating that oscillators are not created equal. |
Instruction: Intractable anemia among hemodialysis patients: a sign of suboptimal management or a marker of disease?
Abstracts:
abstract_id: PUBMED:15696453
Intractable anemia among hemodialysis patients: a sign of suboptimal management or a marker of disease? Background: Most incident hemodialysis (HD) patients who initiate dialysis therapy with anemia usually can achieve a hemoglobin (Hb) level of 11 g/dL or greater (≥110 g/L) within a few months of the initiation of recombinant human erythropoietin (EPO) therapy. However, patients unable to achieve this level may be at greater risk for adverse outcomes. Whether intractable anemia is a modifiable problem or a marker for other conditions is unclear. This question was addressed in a cohort of 130,544 incident HD patients from 1996 to 2000 who were administered EPO regularly.
Methods: Medicare claims data were used to determine demographic characteristics, comorbidities, hospitalizations, and related events. Patients who did not achieve an Hb level of 11 g/dL or greater (≥110 g/L; n = 19,096; 14.6%) during months 4 to 9 after dialysis therapy initiation were compared with those who did.
Results: Patients unable to achieve an Hb level of 11 g/dL (110 g/L) were younger and more often of nonwhite race. In addition, these patients had more comorbid conditions; experienced more hospitalizations with longer stays, more infectious hospitalizations, and more catheter insertions; and were administered more blood transfusions. EPO was administered in higher and increasing doses during the years of study among patients with intractable anemia compared with those with an Hb level of 11 g/dL or greater (≥110 g/L), likely denoting increasing attempts to correct anemia over the years.
Conclusion: It is apparent that incident HD patients unable to achieve an Hb level of 11 g/dL or greater (≥110 g/L) have a greater disease burden. The independent association of intractable anemia with such future outcomes as cardiovascular events and hospitalizations remains to be determined.
abstract_id: PUBMED:36579530
Factors Influencing Self-Management Behaviors among Hemodialysis Patients. Aim: To investigate the factors affecting hemodialysis patients' self-management ability at a dialysis center in Taiwan.
Background: Taiwan has the highest incidence and prevalence of end-stage kidney disease (ESKD) in the world. Over 90% of patients with ESKD receive hemodialysis (HD), and self-management behaviors are critical among these patients. Failure to adhere to self-managed care increases the cost of medical care and the risk of morbidity and mortality.
Methods: In this cross-sectional study, a total of 150 HD patients were observed for their self-management behaviors and the factors influencing these behaviors including education level, comorbid conditions, biochemical analysis, depression, and social support, etc., were analyzed.
Results: Self-management behaviors in HD patients were significantly impaired in the presence of diabetes mellitus, hypertension, anemia, hypoalbuminemia, and depression. The major predictor of patients' self-management was depression, explaining 14.8% of the total variance. Further addition of social support, hypertension, and diabetes mellitus into the regression model increased the total explained variance to 28.6%. Of the various domains of self-management, the partnership domain received the highest score, whereas emotional processing received the lowest score.
Conclusions: This study found the important factors influencing self-management behaviors; through this acknowledgement and early correction of these factors, we hope to improve HD patients' individual life quality and further decrease their morbidity and mortality.
abstract_id: PUBMED:30623084
Dialysis-related practice patterns among hemodialysis patients with cancer. Rationale Aims And Objectives: With the achievement of longevity in hemodialysis patients, the risk of comorbid cancer has begun to draw attention. In the present study, we examined dialysis-related practice patterns and compared those patterns by cancer status.
Methods: Using data from the Japan Dialysis Outcomes and Practice Patterns Study phase 4, we evaluated 2153 hemodialysis patients. Baseline cancer status for patients was separated into 3 categories: no cancer, cancer with recent treatment, and cancer without recent treatment. We then assessed variations among hemodialysis patients in dialysis-related practice patterns, including anemia management, management of mineral and bone metabolism disorder, nutritional management, and dialysis treatment, by cancer status.
Results: We observed both similarities and differences in dialysis-related practice patterns among hemodialysis patients, by cancer status. Hemoglobin levels were largely similar for all cancer statuses, although erythropoiesis stimulating agents dose tended to be higher in hemodialysis patients with recent cancer treatment (multivariable adjusted mean difference of erythropoiesis stimulating agents dose: 5.4 × 10³ IU/L/month) than in those without cancer. Phosphorus and calcium levels were also similar. Nutrition statuses were similar among cancer statuses, as were dialysis therapies. These results suggested that physicians do not modulate their dialysis-related practices based on whether or not a hemodialysis patient has cancer.
Conclusion: Among long-term facility-based hemodialysis patients with cancer, we detected no statistically significant differences to suggest that cancer status affects hemodialysis practice regarding mineral and bone disorder management, nutritional management, and dialysis treatment. Facility-based hemodialysis patients with recent cancer treatment, however, receive a higher dose of erythropoietin-stimulating agent than those without cancer.
abstract_id: PUBMED:37275357
Major cardiovascular events and associated factors among routine hemodialysis patients with end-stage renal disease at tertiary care hospital in Somalia. Introduction: Cardiovascular complications are the most significant cause of death in patients undergoing routine hemodialysi (HD) with end-stage renal disease (ESRD). The main objective of this study is to determine the significant cardiac events and risk factors in patients undergoing routine hemodialysis in Somalia.
Methods: We carried out a cross-sectional retrospective study in a single dialysis center in Somalia. Two hundred of the 224 patients were included. All of them had ESRD and were on hemodialysis during the study period between May and October 2021. The records of all patients were reviewed, and the following parameters were analyzed: socio-demographic factors, risk factors for cardiovascular disease, and the presence of cardiovascular diseases.
Results: The mean age was 54 ± 17.5 years (range 18-88 years), and 106 (53%) patients were males. The prevalence of cardiovascular disease among hemodialysis patients was 29.5%. Moreover, the distribution of cardiovascular diseases was different; heart failure was the most common, about 27.1%, followed by coronary artery disease (17%), pericarditis and pericardial effusion (13.6%), dysrhythmia (10.2%), cerebrovascular accident (8.5%), and peripheral vascular disease (3.4%). About 176 (88%) participants had at least one modifiable cardiovascular risk factor. The most common modifiable cardiovascular risk factor was hypertension (n = 45, 25.1%), followed by anemia (n = 28, 15.6%) and diabetes (n = 26, 14.5%). Younger (18-30 years) participants were six times less likely to have cardiovascular events than older participants (0.4; 0.11-1.12).
Conclusion: Low prevalence rate of cardiovascular complications was confirmed in ESRD patients receiving hemodialysis in the main HD center in Somalia. Diabetes, anemia, and hypertension were the highest significant risk factors for CVD in HD patients with ESRD in Somalia.
abstract_id: PUBMED:28497088
Pulmonary hypertension among patients undergoing hemodialysis. Introduction: The epidemiology of pulmonary hypertension (PHT) among long-term hemodialysis patients has been described in relatively small studies in Iran. Objectives: The purpose of this study was to evaluate the prevalence of PHT and its relationship among end-stage renal disease (ESRD) patients undergoing long-term hemodialysis (HD). Patients and Methods: In a cross-sectional study, patients with ESRD treated with HD for at least 3 months in the Imam hospital enrolled for the study. PHT was defined as an estimated systolic pulmonary artery pressure (PAP) equal to or higher than 25 mm Hg using echocardiograms performed by cardiologist. Results: A total of 69 HD patients were included in the investigation. The mean of age of our patients was 52.6±15.3 years. The mean duration of HD was 39±36 months. The mean ejection fraction was 45±7%. The prevalence of PHT was 62.3%. These patients were more likely to have lower ejection fraction. The PHT was more common among female HD patients. We did not find any association between PHT and cause of ESRD, duration of HD, anemia and serum calcium, phosphor and parathyroid hormone levels. Conclusion: Our findings show that PHT is a common problem among ESRD patients undergoing maintenance HD and it is strongly associated with heart failure. It is necessary to screen this disorder among these patients.
abstract_id: PUBMED:27604984
Intravenous iron administration strategies and anemia management in hemodialysis patients. Background: The effect of maintenance intravenous (IV) iron administration on subsequent achievement of anemia management goals and mortality among patients recently initiating hemodialysis is unclear.
Methods: We performed an observational cohort study, in adult incident dialysis patients starting on hemodialysis. We defined IV administration strategies over a 12-week period following a patient's initiation of hemodialysis; all those receiving IV iron at regular intervals were considered maintenance, and all others were considered non-maintenance. We used multivariable models adjusting for demographics, clinical and treatment parameters, iron dose, measures of iron stores and pro-infectious and pro-inflammatory parameters to compare these strategies. The outcomes under study were patients' (i) achievement of hemoglobin (Hb) of 10-12 g/dL, (ii) more than 25% reduction in mean weekly erythropoietin stimulating agent (ESA) dose and (iii) mortality, ascertained over a period of 4 weeks following the iron administration period.
Results: Maintenance IV iron was administered to 4511 patients and non-maintenance iron to 8458 patients. Maintenance IV iron administration was not associated with a higher likelihood of achieving an Hb between 10 and 12 g/dL {adjusted odds ratio (OR) 1.01 [95% confidence interval (CI) 0.93-1.09]} compared with non-maintenance, but was associated with a higher odds of achieving a reduced ESA dose of 25% or more [OR 1.33 (95% CI 1.18-1.49)] and lower mortality [hazard ratio (HR) 0.73 (95% CI 0.62-0.86)].
Conclusions: Maintenance IV iron strategies were associated with reduced ESA utilization and improved early survival but not with the achievement of Hb targets.
abstract_id: PUBMED:25834556
Anemia in patients on chronic hemodialysis in Cameroon: prevalence, characteristics and management in low resources setting. Background: Anemia is a common complication of chronic kidney disease. We investigated the prevalence, characteristics and management of anemia in patients on chronic hemodialysis and assessed the response to blood-transfusion based management in Cameroon.
Methods: This was a cohort study of five months' duration (August-December 2008) conducted at the Yaoundé General Hospital's hemodialysis center, involving 95 patients (67 men, 70.5%) on chronic hemodialysis by a native arteriovenous fistula. A monthly evaluation included full blood counts, number of pints of red cell concentrates transfused, and vital status.
Results: At baseline, 75 (79%) patients had anemia, which was microcytic and hypochromic in 32 (43%). Anemia was corrected in 67 (70.5%) patients using blood transfusion only, while 28 (29.5%) patients were receiving erythropoietin (11 regularly, 39%). Only 77.2% of 342 pints (median 3.0, range 0-17 per patient) of red cell concentrates prescribed were effectively received during the follow-up, at an unacceptably high cost to patients and families. Mean hemoglobin and mean corpuscular hemoglobin levels remained stable during follow-up, while mean corpuscular volume increased. Erythropoietin treatment was the main determinant of favorable trajectories of hematological markers.
Conclusions: Patients on chronic hemodialysis have predominantly microcytic hypochromic anemia, with limited capacity for correction using blood transfusion.
abstract_id: PUBMED:37345253
Pharmacotherapy considerations in pregnant patients on hemodialysis. Purpose: Successful pregnancy rates on dialysis are increasing with the advent of intensive hemodialysis and advances in medical management.
Summary: Data support the use of intensive hemodialysis in pregnant women with end-stage kidney disease (ESKD). This paper provides an overview of common pharmacotherapeutic changes in management when caring for a pregnant woman receiving intensive hemodialysis. Pregnant patients on peritoneal dialysis were excluded from this analysis due to insufficient data. Topics covered include those related to anemia (iron and erythropoietin stimulating agents), blood pressure agents, monitoring of phosphorus, as well as nutrition and anticoagulation.
Conclusion: When patients on hemodialysis become pregnant, medication adjustments are needed regarding antihypertensives, anemia management, and mineral-bone disease management as many agents require dose adjustment, switching agents due to teratogenicity, or cessation due to fetal complications. There are minimal data in this population; however, successful and healthy infants have been delivered in this patient population with the medication changes discussed.
abstract_id: PUBMED:36011888
Causes of Hospitalization among End-Stage Kidney Disease Cohort before and after Hemodialysis. Patients with end-stage kidney disease (ESKD) have a greater risk of comorbidities, including diabetes and anemia, and have higher hospital admission rates than patients with other diseases. The cause of hospital admissions is associated with ESKD prognosis. This retrospective cohort study involved patients with ESKD who received hemodialysis and investigated whether the cause of hospital admission changed before versus after they started hemodialysis. This study recruited 592 patients with ESKD who received hemodialysis at any period between January 2005 and November 2017 and had been assigned the International Classification of Diseases Ninth Revision Clinical Modification (ICD-9-CM) code for ESKD. The patients' demographic data and hospitalization status one year before and two years after they received hemodialysis were analyzed. A McNemar test was conducted to analyze the diagnostic changes from before to after hemodialysis in patients with ESKD. The study's sample of patients with ESKD comprised more women (51.86%) than men and had an average age of 67.15 years. The numbers of patients admitted to the hospital for the following conditions all decreased significantly after they received hemodialysis: type 2 (non-insulin-dependent and adult-onset) diabetes; native atherosclerosis; urinary tract infection; gastric ulcer without mention of hemorrhage, perforation, or obstruction; pneumonia; reflux esophagitis; duodenal ulcer without mention of hemorrhage, perforation, or obstruction; and bacteremia. Most patients exhibited one or more of the following comorbidities: diabetes (n = 407, 68.75%), hypertension (n = 491, 82.94%), congestive heart failure (n = 161, 27.20%), ischemic heart disease (n = 125, 21.11%), cerebrovascular accident (n = 93, 15.71%), and gout (n = 96, 16.22%). An analysis of variance (ANOVA) indicated that changes in the ICD-9-CM codes for native atherosclerosis, urinary tract infection, pneumonia, and hyperkalemia were associated with age. Patients who developed pneumonia before or after they received hemodialysis tended to be older (range: 69-70 years old). This study investigated the causes of hospital admission among patients with ESKD one year before and two years after they received hemodialysis. This study's results revealed hypertension to be the most common comorbidity. Regarding the cause of admission, pneumonia was more prevalent in older than in younger patients. Moreover, changes in the ICD-9-CM codes of native atherosclerosis, urinary tract infection, pneumonia, and hyperkalemia were significantly correlated with age. Therefore, when administering comprehensive nursing care and treatment for ESKD, clinicians should not only focus on comorbidities but also consider factors (e.g., age) that can affect patient prognosis.
abstract_id: PUBMED:25457994
Pregnancies in hemodialysis and in patients with end-stage chronic kidney disease: epidemiology, management and prognosis. Pregnancy in patients presenting with end-stage renal disease is rare and there are currently no recommendations for the management of these patients. In hemodialysis patients, reduced fertility and medical reluctance limit the frequency of pregnancies. Although the prognosis has significantly improved, a significant risk of unfavorable maternal (pre-eclampsia, eclampsia) and fetal (pre-term birth, intrauterine growth restriction, stillbirth) outcomes remains. Increasing the dialysis dose by initiating daily dialysis sessions, early adaptation of medications to limit teratogenicity, and management of chronic kidney disease complications (anemia, hypertension) are required. Tight coordination between nephrologists and obstetricians remains the central pillar of care. In peritoneal dialysis, pregnancy is also possible with modification of the exchange protocol and reduced volumes.
Answer: Intractable anemia among hemodialysis patients can be both a sign of suboptimal management and a marker of disease. A study of 130,544 incident hemodialysis patients who were administered erythropoietin (EPO) regularly found that those unable to achieve a hemoglobin (Hb) level of 11 g/dL or greater had a greater disease burden, including more comorbid conditions, more hospitalizations with longer stays, more infectious hospitalizations, and more catheter insertions. They also received more blood transfusions and higher and increasing doses of EPO, indicating attempts to correct anemia over the years (PUBMED:15696453). This suggests that intractable anemia may be a marker of underlying disease severity and complexity, which could be challenging to manage optimally.
However, the independent association of intractable anemia with future outcomes such as cardiovascular events and hospitalizations remains to be determined (PUBMED:15696453). Other studies have shown that factors such as diabetes, hypertension, and hypoalbuminemia can significantly impair self-management behaviors in hemodialysis patients, which are critical for managing conditions like anemia (PUBMED:36579530). Additionally, the management of anemia in hemodialysis patients with cancer does not appear to differ significantly from those without cancer, although those with recent cancer treatment receive higher doses of erythropoiesis-stimulating agents (PUBMED:30623084).
In summary, intractable anemia in hemodialysis patients can be indicative of both suboptimal management, potentially due to the complexity of the patient's condition and the challenges in achieving self-management, as well as a marker of more severe underlying disease. The relationship between anemia management and patient outcomes is multifaceted and may require individualized approaches to optimize treatment and improve patient health. |
Instruction: Does impaired left ventricular relaxation affect P wave dispersion in patients with hypertension?
Abstracts:
abstract_id: PUBMED:24997065
P wave dispersion increased in childhood depending on blood pressure, weight, height, and cardiac structure and function. Introduction: Increased P wave dispersion has been identified as a predictor of atrial fibrillation. There are associations between hypertension, P wave dispersion, and constitutional and echocardiographic variables. These relationships have been scarcely studied in pediatrics.
Objective: The aim of this study was to determine the relationship between P wave dispersion, blood pressure, echocardiographic and constitutional variables, and determine the most influential variables on P wave dispersion increases in pediatrics.
Method: In the frame of the PROCDEC II project, children from 8 to 11 years old, without known heart conditions were studied. Arterial blood pressure was measured in all the children; a 12-lead surface electrocardiogram and an echocardiogram were done as well.
Results: Mean left ventricular mass index values differed significantly between normotensive (25.91±5.96 g/m^2.7) and hypertensive (30.34±8.48 g/m^2.7) children (P=.000). When prehypertensive and hypertensive children were considered together, 50.38% had increased P wave dispersion despite a normal left ventricular mass index, versus 13.36% of normotensive children. Multiple regression demonstrated that mean blood pressure, duration of the A wave of mitral inflow, weight, and height together correlated with P wave dispersion (r=0.88).
Conclusions: P wave dispersion is increased in pre- and hypertensive children compared to normotensive. There are pre- and hypertensive patients with normal left ventricular mass index and increased P wave dispersion. Mean arterial pressure, duration of the A wave of mitral inflow, weight and height are the variables with the highest influence on increased P wave dispersion.
abstract_id: PUBMED:14510652
Does impaired left ventricular relaxation affect P wave dispersion in patients with hypertension? Objective: P wave dispersion (PD) is considered to reflect the heterogeneous conduction in atria. We investigated whether there was a correlation between the left ventricular (LV) relaxation and PD.
Method And Results: Fifty-three hypertensive patients ≤60 years old were divided into two groups: Group A, 27 patients, aged 54±5 years with impaired LV relaxation and Group B, 26 patients, aged 51±8 years with normal LV relaxation. The P wave durations were measured in all 12 leads of ECG and PD was defined as the difference between maximum and minimum P wave duration (Pmax-Pmin). Mitral inflow velocities (E and A), E deceleration time (DT), isovolumic relaxation time (IVRT), left atrial and ventricular diameters, and wall thickness of LV were obtained by echocardiography. Clinical characteristics of both groups were comparable. The wall thickness of LV, Pmax, and left atrial dimension were not different in both groups. A velocity was higher (P<0.001), but E velocity (P=0.03) and E/A ratio (P<0.001) were lower in group A than in group B. IVRT and DT were also significantly longer in group A. PD was significantly higher in group A compared to group B (51±9 vs 41±11 ms, P=0.01). This difference resulted from the Pmin (61±10 vs 67±9 ms, P=0.03, respectively).
Conclusion: This study suggests that impaired LV relaxation contributes to the heterogeneous atrial conduction in hypertensive patients.
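To make the electrocardiographic quantity used above concrete, here is a minimal sketch (an editorial addition with made-up illustrative values, not data from the study) of how P wave dispersion is obtained from the 12-lead measurements described in the abstract:

```python
# P wave dispersion (PD) = longest minus shortest P wave duration across the 12 leads.
# The durations below are illustrative values in milliseconds, not data from any study.
p_wave_durations_ms = {
    "I": 102, "II": 110, "III": 96, "aVR": 98, "aVL": 94, "aVF": 104,
    "V1": 92, "V2": 100, "V3": 105, "V4": 108, "V5": 106, "V6": 103,
}

p_max = max(p_wave_durations_ms.values())   # 110 ms
p_min = min(p_wave_durations_ms.values())   # 92 ms
pd_ms = p_max - p_min                       # 18 ms

print(f"Pmax = {p_max} ms, Pmin = {p_min} ms, P wave dispersion = {pd_ms} ms")
```

In the study above, this same quantity averaged 51±9 ms in the group with impaired LV relaxation versus 41±11 ms in the group with normal relaxation.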
abstract_id: PUBMED:26932799
Three dimensional left atrial volume index is correlated with P wave dispersion in elderly patients with sinus rhythm. Background: P wave dispersion is a noninvasive electrocardiographic predictor for atrial fibrillation. The aim of the study was to explore relation between left atrial volume index assessed by 3-dimensional echocardiography and P wave dispersion in elderly patients.
Methods: Seventy-three consecutive patients over the age of 65 (mean age: 75 ± 7 years, 17 men) were included. P wave dispersion is calculated as the difference between maximum and minimum P wave durations. Left atrial volume index was measured by both 2-dimensional and 3-dimensional echocardiography and categorized as normal (≤ 34 mL/m²) or increased (mild, 35-41 mL/m²; moderate, 42-48 mL/m²; severe, ≥ 49 mL/m²).
Results: Thirty-one patients had normal left atrium while 24 patients had mildly enlarged, nine had moderately enlarged, and nine had severely enlarged left atrium. Prolongation of P wave dispersion was more prevalent in patients with dilated left atrium. P wave dispersion was significantly correlated with both 2-dimensional (r = 0.600, p < 0.001) and 3-dimensional left atrial volume index (r = 0.688, p < 0.001). Both left atrial volume indexes were associated with prolonged P wave dispersion when adjusted for age, sex, presence of hypertension, and left ventricular mass index. Receiver-operating characteristic (ROC) analysis revealed that a 3-dimensional left atrial volume index ≥ 25 mL/m² separated patients with prolonged P wave dispersion with a sensitivity of 82.2%, specificity of 67.9%, positive predictive value of 80.4%, and negative predictive value of 70.4%.
Conclusion: In elderly patients, 3-dimensional left atrial volume index showed a better correlation with P wave dispersion and might be helpful in discriminating patients with prolonged P wave dispersion, who might be prone to atrial fibrillation.
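Editorial note (not part of the abstract above): the cut-off statistics quoted for the ≥ 25 mL/m² threshold follow from a standard 2x2 classification table. The sketch below uses one set of counts for the 73 patients that is consistent with the reported percentages (the abstract itself does not give the table), simply to make the definitions concrete:

```python
# Standard definitions behind the cut-off statistics quoted above.
# tp, fp, tn, fn are counts from a 2x2 table of test result (3D LAVI >= cut-off)
# versus reference status (prolonged P wave dispersion). The counts below are one
# table consistent with the abstract's percentages, not the study's published table.
def diagnostic_summary(tp: int, fp: int, tn: int, fn: int) -> dict:
    sensitivity = tp / (tp + fn)   # proportion of prolonged-PD patients above the cut-off
    specificity = tn / (tn + fp)   # proportion of normal-PD patients below the cut-off
    ppv = tp / (tp + fp)           # probability of prolonged PD given a positive test
    npv = tn / (tn + fn)           # probability of normal PD given a negative test
    return {"sensitivity": sensitivity, "specificity": specificity, "ppv": ppv, "npv": npv}

print(diagnostic_summary(tp=37, fp=9, tn=19, fn=8))  # ~0.822, 0.679, 0.804, 0.704
```

Unlike sensitivity and specificity, the predictive values depend on how common prolonged P wave dispersion is in the sample, so they would not carry over unchanged to a population with a different prevalence.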
abstract_id: PUBMED:33950063
Relationship between P-wave dispersion, left ventricular mass index and function in Nigerian hypertensive patients. Hypertension is the most prevalent cardiovascular disorder in the world. It is associated with target-organ damage in various organs and ECG changes. P-wave dispersion (PWD), which represents inhomogeneous atrial conduction and discontinuation of impulses, has been observed, when prolonged, to predict atrial fibrillation, particularly in the setting of hypertension. This study of PWD in 150 hypertensive patients and controls sought to determine the prevalence of PWD in Nigerian hypertensives and its relationship to left ventricular mass index and left ventricular function. Mean PWD in normal subjects was 32.14 ± 4.72 ms and was significantly shorter than that in hypertensive patients at 38.29 ± 8.02 ms. In the total population, 51.3% had prolonged PWD ( > 33.46 ms); 70% in the hypertensives and 32.7% of controls. The only significant difference in hypertensives with prolonged and normal PWD was the waist circumference. There was a negative correlation between PWD and ejection fraction (r = -0.17, p = 0.03), but not with diastolic function.
abstract_id: PUBMED:24566550
Association of P wave dispersion and left ventricular diastolic dysfunction in non-dipper and dipper hypertensive patients. Objective: Objective of this study was to investigate the correlation between P wave dispersion and left ventricular diastolic function, which are associated with the increased cardiovascular events in patients with dipper and non-dipper hypertensive (HT).
Methods: Eighty sex- and age-matched patients with dipper and non-dipper HT, and 40 control subjects were included in this observational cross-sectional study. P wave dispersion was measured through electrocardiography obtained during the admission. The left ventricular (LV) ejection fraction was measured using the modified Simpson's rule by echocardiography. In addition, diastolic parameters including E/A ratio, deceleration time (DT) and isovolumetric relaxation time (IVRT) were recorded. Independent samples Bonferroni, Scheffe and Tamhane tests and correlation tests (Spearman and Pearson) were used for statistical analysis.
Results: P wave dispersion was found to be significantly increased in the non-dipper than in the dipper group (56.0±5.6 vs. 49.1±5.3, p<0.001). Pmax duration was found significantly higher (115.1±5.6 vs. 111.1±5.8, p=0.003) and Pmin duration significantly lower (59.0±5.6 vs. 62.3±5.3, p=0.009) in the non-dippers. Correlation analysis demonstrated presence of moderate but significant correlation between P-wave dispersion and left ventricular mass index (r=0.412, p=0.011), IVRT (r=0.290 p=0.009), DT (r=0.210, p=0.052) and interventricular septum thickness (r=0.230 p=0.04).
Conclusion: P wave dispersion and P Max were found to be significantly increased and P min significantly decreased in the non-dipper HT patients compared to the dipper HT patients. P-wave dispersion is associated with left ventricular dysfunction in non-dipper and dipper HT.
abstract_id: PUBMED:7874854
Impaired left ventricular relaxation and arterial stiffness in patients with essential hypertension. 1. This study was designed to determine how left ventricular relaxation function in patients with essential hypertension is impaired by arterial haemodynamic load that is increased in early ejection phase. These patients did not suffer from cardiac hypertrophy or disturbed coronary perfusion. We used a high-fidelity multisensor catheter to record pressure and flow signals in the ascending aorta. The timing and magnitude of wave reflection were obtained by decomposing the measured waves into their forward and backward components. Radionuclide angiography was employed to obtain the time-activity curve. The left ventricular relaxation function was assessed by analysing the time-activity curve, which was filtered using Fourier expansion with the number of harmonics for minimum error. 2. In comparison with age-matched normotensive subjects (seven subjects with mean blood pressure 97 mmHg), hypertensive subjects (seven subjects with mean blood pressure 138 mmHg) had a shorter backward wave arrival time (193 ± 26 versus 258 ± 35 ms) and a higher reflection factor (0.58 ± 0.12 versus 0.42 ± 0.07). Isovolumic relaxation period was prolonged in hypertensive subjects (118 ± 19 versus 90 ± 19 ms). There was an inverse correlation between isovolumic relaxation period and backward wave arrival time in all 14 subjects (r = -0.67, P < 0.05). In contrast, there were no significant differences in cardiac output and time to peak ejection rate between the two groups. 3. Our analyses revealed that early return of the enhanced wave reflection may profoundly impair left ventricular relaxation function in patients with hypertension. (ABSTRACT TRUNCATED AT 250 WORDS)
abstract_id: PUBMED:17027016
Relation between P-wave dispersion and left ventricular geometric patterns in newly diagnosed essential hypertension. Aim: P-wave durations and P-wave dispersion (PD) are considered to reflect the heterogeneous conduction in atria. The aim of this study was to investigate PD and P-wave duration in different left ventricle geometric patterns of hypertensive patients.
Methods: One hundred forty-nine consecutive patients with newly diagnosed essential hypertension and 29 healthy control groups were included in the study. The maximum and minimum P-wave duration (Pmax and Pmin, respectively) and PD were measured from the 12-lead surface electrocardiogram. Echocardiographic examination was also performed in all subjects. Four different geometric patterns were identified in hypertensive patients according to left ventricular mass index (LVMI) and relative wall thickness.
Results: P-wave dispersion was longer in concentric remodeling (CR), concentric hypertrophy (CH), and eccentric hypertrophy (EH) groups when compared with the control group (P = .009, P < .001, P < .001, respectively). P-wave dispersion of normal left ventricle (NLV) geometric pattern was not different from that of the control group. Patients with NLV geometric pattern had shorter PD than patients who had CH and EH (NLV vs CH, P < .001; NLV vs EH, P = .025). P-wave dispersion of the NLV group was not different from that of the CR group. Patients with CR had also shorter PD than patients who had CH (P = .002). In bivariate analysis, there was a significant correlation between PD with left ventricle geometry, body surface area, LVMI, and relative wall thickness. In multiple linear regression analysis, PD was independently correlated only with LVMI (beta = .425, P < .001).
Conclusions: P-wave dispersion is independently associated with LVMI rather than left ventricle geometry and relative wall thickness in hypertensive patients. Thus, it is increased particularly in patients with CH and EH.
abstract_id: PUBMED:30950573
Association of P wave peak time with left ventricular end-diastolic pressure in patients with hypertension. Left ventricular diastolic dysfunction (LVDD) is commonly seen in hypertensive patients, and it is associated with increased morbidity and mortality. Hence, the detection of LVDD with a simple, inexpensive, and easy-to-obtain method can contribute to improving patient prognosis. Therefore, we aimed to evaluate whether there was any association between the electrocardiographic P wave peak time (PWPT) and invasively measured left ventricular end-diastolic pressure (LVEDP) in hypertensive patients who had undergone coronary angiography following preliminary diagnosis of coronary artery disease. A total of 78 patients were included in this cross-sectional study. The PWPT was defined as the time from the beginning of the P wave to its peak, and it was calculated from leads DII and V1. In all patients, LVEDP was measured in steady state. The PWPT in lead DII was significantly longer in patients with high LVEDP; however, there was no significant difference between groups in terms of PWPT in lead V1. In multivariable analysis, PWPT in lead DII was found to be an independent predictor of increased LVEDP (OR: 1.257, 95% CI: 1.094-1.445; P = 0.001). In receiver operating characteristic curve analysis, the optimal cut-off value of PWPT in lead DII for prediction of elevated LVEDP was 64.8 ms, with a sensitivity of 68.7% and a specificity of 91.3% (area under curve: 0.882, 95% CI: 0.789-0.944, P < 0.001). In conclusion, this study suggested that prolonged PWPT in lead DII may be an independent predictor of increased LVEDP among hypertensive patients.
abstract_id: PUBMED:8901825
Left ventricular hypertrophy and QT dispersion in hypertension. The interlead variation in QT length on a standard electrocardiogram reflects regional repolarization differences in the heart. To investigate the association between this interlead variation (QT dispersion) and left ventricular hypertrophy, we subjected 100 untreated subjects to 12-lead electrocardiography and echocardiography. Additionally, 24 previously untreated subjects underwent a 6-month treatment study with ramipril and felodipine. In the cross-sectional part of the study, QT dispersion corrected for heart rate (QTc dispersion) was significantly correlated with left ventricular mass index (r = .30, P < .01), systolic pressure (r = .30, P < .01), the ratio of peak flow velocity of the early filling wave to peak flow velocity of the atrial wave (E/A ratio) (r = -.22, P = .02), isovolumic relaxation time (r = .31, P < .01), and age (r = .21, P < .04). In the treatment part of the study, lead-adjusted QTc dispersion decreased from 24 to 19 milliseconds after treatment, and after a subsequent 2 weeks of drug washout remained at 19 milliseconds (P < .01). The changes in left ventricular mass index at these stages were 144, 121, and 124 g/m² (P < .01). Systolic pressure decreased from 175 to 144 mm Hg and increased again to 164 mm Hg after drug washout (P < .01). The E/A ratio (0.97, 1.02, and 1.02; P = .69) and isovolumic relaxation time (111, 112, and 112; P = .97) remained unchanged through the three assessment points. In conclusion, QT dispersion is increased in association with an increased left ventricular mass index in hypertensive individuals. Antihypertensive therapy with ramipril and felodipine reduced both parameters. If an increased QT dispersion is a predictor of sudden death in this group of individuals, then the importance of its reduction is evident.
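Editorial note (not part of the abstract above): the abstract reports heart-rate-corrected QT dispersion but does not state which correction formula was applied; the most widely used is Bazett's correction, applied to each measured QT interval before taking the interlead range:

```latex
QT_c = \frac{QT}{\sqrt{RR}} \quad (RR \text{ in seconds}), \qquad QT_c\ \text{dispersion} = \max_{\text{leads}} QT_c - \min_{\text{leads}} QT_c
```

For example, a measured QT of 400 ms at a heart rate of 75 beats/min (RR = 0.8 s) corresponds to a QTc of about 447 ms.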
abstract_id: PUBMED:18651004
Atrial conduction delay and its association with left atrial dimension, left atrial pressure and left ventricular diastolic dysfunction in patients at risk of atrial fibrillation. Background: Atrial conduction delay and its association with left atrial dimension, left atrial pressure and left ventricular (LV) diastolic dysfunction in patients at risk of atrial fibrillation (AF) may be assessed by high-resolution electrocardiography of P wave.
Objectives: To determine how left atrial size, left atrial pressure and LV diastolic dysfunction, measured noninvasively by transthoracic echocardiography, influence atrial conduction time.
Methods: Signal-averaged electrocardiography of P wave and echocardiogram were performed on 70 patients (average age of 63+/-10 years; 37 male and 33 female), divided into three groups: group A, patients with paroxysmal AF (n=29); group B, patients with type 2 diabetes mellitus and arterial hypertension, but without AF (n=23); and group C, healthy control patients (n=18). Standard statistical methods were used.
Results: Filtered P wave duration, measured by signal-averaged electrocardiography, was significantly prolonged in group A and group B compared with control group C (138+/-12 ms and 125+/-9 ms versus 117+/-8 ms; P ≤ 0.001 and P ≤ 0.01, respectively). Left atrial diameter, area and volume were significantly increased in group A compared with group C (P ≤ 0.01, P ≤ 0.05 and P ≤ 0.001, respectively), but no significant differences were found in these dimensions between group B and group C. Left atrial pressure, determined with the Doppler echocardiographic parameter ratio of early diastolic transmitral velocity to mitral annular early diastolic velocity, was significantly higher in group A, as well as in group B, than in group C (P ≤ 0.05, P ≤ 0.01). As well, impaired LV relaxation was present more often in group A (42%) and group B (50%).
Conclusion: Atrial conduction delay in patients at risk of AF (patients with diabetes and hypertension in group B) was associated with increased left atrial pressure and impaired LV relaxation. Patients with paroxysmal AF (group A) presented left atrial dilation, increased left atrial pressure and impaired LV relaxation, and these factors were associated with more significantly prolonged atrial conduction in group A than in group B.
Answer: Yes, impaired left ventricular relaxation does affect P wave dispersion in patients with hypertension. A study investigating the correlation between left ventricular (LV) relaxation and P wave dispersion (PD) in hypertensive patients found that impaired LV relaxation contributes to heterogeneous atrial conduction, which is reflected in increased PD. The study divided 53 hypertensive patients into two groups based on their LV relaxation status. Group A, with impaired LV relaxation, had significantly higher PD compared to Group B, with normal LV relaxation (51±9 ms vs 41±11 ms, P=0.01). Multivariate analysis revealed a significant correlation between PD and several echocardiographic parameters related to LV relaxation, including A velocity (r=0.46, P=0.01), E/A ratio (r=-0.53, P=0.001), deceleration time (DT) (r=0.65, P<0.001), and isovolumic relaxation time (IVRT) (r=0.73, P<0.001) (PUBMED:14510652).
Additionally, other studies have shown associations between P wave dispersion and various measures of cardiac structure and function in hypertensive patients. For instance, P wave dispersion was found to be increased in hypertensive children and was influenced by mean arterial pressure, duration of the A wave of mitral inflow, weight, and height (PUBMED:24997065). In elderly patients, three-dimensional left atrial volume index, which can be related to LV diastolic function, was correlated with P wave dispersion (PUBMED:26932799). Moreover, in Nigerian hypertensive patients, P wave dispersion was negatively correlated with ejection fraction, although not with diastolic function (PUBMED:33950063).
In conclusion, impaired LV relaxation is associated with increased P wave dispersion in hypertensive patients, indicating a link between LV diastolic dysfunction and atrial electrical heterogeneity.
Instruction: Treatment of obstructive sleep apnea syndrome in patients from a teaching hospital in Brazil: is it possible?
Abstracts:
abstract_id: PUBMED:18766394
Treatment of obstructive sleep apnea syndrome in patients from a teaching hospital in Brazil: is it possible? Objective: The aim of this study was to evaluate the efficacy of a cost-effective intra-oral appliance for obstructive sleep apnea syndrome built in a large teaching hospital.
Materials And Methods: Out of 20 evaluated and treated patients, 14 concluded the study: eight men and six women, with a mean age of 42-46 (mean ± SD) years and a mean body mass index of 27.66. Inclusion criteria were a mild or moderate apnea-hypopnea index (AHI) according to a polysomnographic study. All patients were treated with the monobloc intra-oral appliance. They then underwent a follow-up polysomnographic study after 60 days of using the appliance. An orofacial clinical evaluation was carried out with the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) questionnaire and with a clinical evaluation questionnaire devised by the Orofacial Pain Team before and 60 days after fitting the intra-oral appliance.
Results: The AHI showed a statistically significant (p = 0.002) reduction from 15.53 to 7.82 events per hour, a non-statistically significant oxygen saturation increase from 83.36 to 84.86 (p = 0.09), and a reduction in the Epworth Sleepiness Scale score from 9.14 to 6.36 (p = 0.001). Three patients did not show any improvement. The most common side effect during the use of the appliance/device was pain and facial discomfort (28.57%), without myofascial or temporomandibular joint pain as evaluated by the RDC/TMD questionnaire.
Conclusions: The monobloc intra-oral appliance produced a significant reduction in the apnea-hypopnea index during the study period. Patients did not show myofascial pain either before or 60 days after use of the intra-oral appliance.
abstract_id: PUBMED:23712497
Surgical treatment of sleep apnea: association between surgeon/hospital volume with outcomes. Objectives/hypothesis: To identify the association between surgeon/hospital volume with outcomes in surgical treatment for obstructive sleep apnea (OSA) in a nationally representative sample. We hypothesized that surgeons/hospitals with lower patient volumes would have: higher mortality rates, longer hospital length of stay (LOS), and higher postoperative complication rates and hospitalization charges.
Study Design: Secondary data analysis of the 2007 Nationwide Inpatient Sample database.
Methods: We selected 24,298 adults undergoing OSA surgery. The data analysis included trend test, regression, and multivariate models that were adjusted by demographic and clinical variables.
Results: The patients were mostly White (76.43%) and male (78.26%), with a mean age of 46 years. Patients treated by surgeons with a low volume of procedures (1 procedure/year) had a significantly higher mortality rate (odds ratio [OR] 3.05; confidence interval [CI], 1.96-4.77), longer average LOS (increased by up to 8.16 hours), and higher hospitalization charges (increased by up to $1701.75) versus medium- and high-volume surgeons (2-4 procedures/year and ≥ 5 procedures/year, respectively). Patients treated at hospitals with a low volume of procedures (0-5/year) had a significantly higher occurrence of oxygen desaturation (OR, 2.12; CI, 1.50-2.99), longer LOS (increased by almost 2 hours) and higher hospitalization charges (at least $951.50 more expensive) versus patients treated at high-volume hospitals (≥ 18 procedures/year).
Conclusion: Our investigation validates the hypothesis that lower volume standards (surgeon/hospital) are associated with increased LOS following surgery to treat OSA; lower surgeon volume was also associated with increased mortality and hospitalization charges, and lower hospital volume with the occurrence of oxygen desaturation as a postoperative complication.
abstract_id: PUBMED:24822101
Characterization of primary symptoms leading to Chinese patients presenting at hospital with suspected obstructive sleep apnea. Objectives: We identified the primary symptoms leading to Chinese patients presenting at hospital with suspected obstructive sleep apnea (OSA) and studied the prevalence and characteristics of OSA in confirmed cases.
Methods: We collected data on 350 consecutive patients (302 males; 43±11 years old) with suspected OSA who underwent overnight polysomnography (PSG).
Results: Among all patients, rankings of primary symptoms that led to the patients presenting at hospital for PSG were observed apnea (33%), snoring alone (29%), choking/gasping (13%), daytime sleepiness (5%) and other (20%). For severe OSA, prevalence rate was 61%, apnea hypopnea index (AHI) was 64±18, age was 44±10 years old, body mass index (BMI) was 28±3.5 kg/m(2), and hypertension rate was 28%.
Conclusions: Self-awareness of symptoms led a majority of the patients to present at hospital in China. Compared to currently available case series studies, our results suggest that OSA patients in East Asian countries are characterized by higher prevalence and more severe apnea, younger age, poorer sleep quality, but less obesity and less comorbidity with hypertension, relative to countries in North America, South America and Europe.
abstract_id: PUBMED:25317084
Diagnosis and treatment of sleep disordered breathing in hospitalized cardiac patients: a reduction in 30-day hospital readmission rates. Background: Sleep disordered breathing (SDB) is associated with significant cardiovascular sequelae and positive airway pressure (PAP) has been shown to improve heart failure and prevent the recurrence of atrial fibrillation in cardiac patients with sleep apnea. Patients who are hospitalized with cardiac conditions frequently have witnessed symptoms of SDB but often do not have a diagnosis of sleep apnea. We implemented a clinical paradigm to perform unattended sleep studies and initiate treatment with PAP in hospitalized cardiac patients with symptoms consistent with SDB. We hypothesized that PAP adherence in cardiac patients with SDB would reduce readmission rates 30 days after discharge.
Methods: 106 consecutive cardiac patients hospitalized for heart failure, arrhythmias, and myocardial infarction and who reported symptoms of SDB were evaluated. Patients underwent a type III portable sleep study and those patients diagnosed with sleep apnea were started on PAP. Demographic data, SDB type, PAP adherence, and data regarding 30-day hospital readmission/ED visits were collected.
Results: Of 106 patients, 104 had conclusive diagnostic studies using portable monitoring systems. Seventy-eight percent of patients (81/104) had SDB (AHI ≥ 5 events/h). Eighty percent (65/81) had predominantly obstructive sleep apnea, and 20% (16/81) had predominantly central sleep apnea. None of 19 patients (0%) with adequate PAP adherence, 6 of 20 (30%) with partial PAP use, and 5 of 17 (29%) of patients who did not use PAP were readmitted to the hospital or visited the emergency department (ED) for a cardiac issue within 30 days from discharge (p = 0.025).
Conclusions: Performing diagnostic unattended sleep studies and initiating PAP treatment in hospitalized cardiac patients was feasible and provided important clinical information. Our data indicate that hospital readmission and ED visits 30 days after discharge were significantly lower in patients with cardiac disease and SDB who adhered to PAP treatment than those who were not adherent.
abstract_id: PUBMED:38291419
Nomogram for hospital-acquired venous thromboembolism among patients with cardiovascular diseases. Background: Identifying venous thromboembolism (VTE) is challenging for patients with cardiovascular diseases due to similar clinical presentation. Most hospital-acquired VTE events are preventable, whereas the implementation of VTE prophylaxis in clinical practice is far from sufficient. There is a lack of hospital-acquired VTE prediction models tailored specifically designed for patients with cardiovascular diseases. We aimed to develop a nomogram predicting hospital-acquired VTE specifically for patients with cardiovascular diseases.
Material And Methods: Consecutive patients with cardiovascular diseases admitted to internal medicine of Fuwai hospital between September 2020 and August 2021 were included. Univariable and multivariable logistic regression were applied to identify risk factors of hospital-acquired VTE. A nomogram was constructed according to multivariable logistic regression, and internally validated by bootstrapping.
Results: A total of 27,235 patients were included. During a median hospitalization of four days, 154 (0.57%) patients developed hospital-acquired VTE. Multivariable logistic regression identified that female sex, age, infection, pulmonary hypertension, obstructive sleep apnea, acute coronary syndrome, cardiomyopathy, heart failure, immobility, central venous catheter, intra-aortic balloon pump and anticoagulation were independently associated with hospital-acquired VTE. The nomogram was constructed with high accuracy in both the training set and validation (concordance index 0.865 in the training set, and 0.864 in validation), which was further confirmed in calibration. Compared to Padua model, the Fuwai model demonstrated significantly better discrimination ability (area under curve 0.865 vs. 0.786, net reclassification index 0.052, 95% confidence interval 0.012-0.091, P = 0.009; integrated discrimination index 0.020, 95% confidence interval 0.001-0.039, P = 0.051).
Conclusion: The incidence of hospital-acquired VTE in patients with cardiovascular diseases is relatively low. The nomogram exhibits high accuracy in predicting hospital-acquired VTE in patients with cardiovascular diseases.
abstract_id: PUBMED:32715796
Hospital screening for obstructive sleep apnea in patients admitted to a rural, tertiary care academic hospital with heart failure. Background: Rural communities represent a vulnerable population that would significantly benefit from hospital-based OSA screening given these areas tend to have significant health-care disparities and poor health outcomes. Although inpatient screening has been studied at urban hospitals, no study to date has assessed this approach in rural populations.
Methods: This study utilized the Electronic Medical Record (EMR) to generate a list of potential candidates by employing inclusion/exclusion criteria as screening. Subjects identified were then approached and offered information regarding the study. Screening for OSA entailed a tiered approach utilizing the sleep apnea clinical score (SAC) and portable sleep testing. Individuals identified as high risk (SAC ≥ 15) for OSA underwent evaluation with a portable sleep testing system while hospitalized. All participants with an apnea-hypopnea index (AHI) ≥5 events/h confirmed by a sleep medicine physician were considered screen positive for OSA. If approved/available, subjects screening positive for OSA were provided with an auto-titrating continuous positive airway pressure (PAP). Patient characteristics were analyzed using descriptive statistics. Categorical data were described using contingency tables, including counts and percentages. Continuously scaled measures were summarized by median with range. This study was registered with ClinicalTrials.gov. Identifier: NCT03056443.
Results: Nine hundred and fifty-eight potential subjects were identified. The three most common reasons for exclusion included previous OSA diagnosis or exposure to PAP therapy (n = 357), advanced illness (n = 380), and declined participation by the individual (n = 68). The remaining 31 subjects underwent further evaluation for obstructive sleep apnea. Twenty-three subjects had a high sleep apnea clinical score. Per our study protocol, 13 subjects who screened positive for OSA were initiated on APAP therapy. Conclusion: Our study provides important insight into the burden of sleep-disordered breathing (SDB) and unique challenges of hospital-based OSA screening/treatment in a rural setting. Our study identified barriers to successful screening in a rural population that may be well addressed by adapting previous research in hospital sleep medicine.
abstract_id: PUBMED:25273934
Overlap syndrome between chronic obstructive pulmonary disease and obstructive sleep apnoea in a Southeast Asian teaching hospital. Introduction: Overlap syndrome between obstructive sleep apnoea (OSA) and chronic obstructive pulmonary disease (COPD) is important but under-recognised. We aimed to determine the prevalence of overlap syndrome and the predictors of OSA in patients with COPD.
Methods: Patients aged ≥ 40 years were recruited from a dedicated COPD clinic and underwent overnight polysomnography. A diagnosis of OSA was made when apnoea-hypopnoea index (AHI) was ≥ 5.
Results: In all, 22 patients (aged 71 ± 9 years), predominantly men, were recruited. Mean values recorded were: predicted forced expiratory volume in the first second percentage 55 ± 15; body mass index 23.7 ± 6.5 kg/m2; Epworth Sleepiness Scale score 5.6 ± 5.8; and AHI 15.8 ± 18.6. Among the 14 patients with OSA (prevalence of overlap syndrome at 63.6%), the mean number of hospital visits for COPD exacerbations in the preceding one year was 0.5 ± 0.7. Patients with overlap syndrome had worse modified Medical Research Council dyspnoea scale scores and a lower percentage of rapid eye movement (REM) sleep than patients without. There were no other statistical differences in lung function or sleep study indices between the two patient groups.
Conclusion: The majority of our patients had overlap syndrome and minimal exacerbations, and were not obese or sleepy. Significant differences between patients with and without overlap syndrome were seen in two aspects - the former was more dyspnoeic and had less REM sleep. Our findings suggest that standard clinical predictors cannot be used for patients with overlap syndrome, and therefore, a high index of suspicion is needed.
abstract_id: PUBMED:9792578
Continuous positive airway pressure requirement during the first month of treatment in patients with severe obstructive sleep apnea. Objectives: (1) To compare the continuous positive airway pressure (CPAP) requirement at the time of diagnosis (T0), after 2 weeks (T2), and after 4 weeks (T4) of CPAP treatment, in patients with severe obstructive sleep apnea (OSA); and (2) to assess whether any alteration in CPAP requirement over the first 4 weeks of CPAP treatment would influence daytime alertness, subjective sleepiness, or mood.
Design: A prospective, controlled, single-blind crossover study.
Setting: University teaching hospital.
Patients: Ten patients with newly diagnosed and previously untreated severe OSA (aged 52+/-9 years, apnea hypopnea index [AHI] of 99+/-31) and subsequently 10 control patients (aged 52+/-11 years, AHI 85+/-17).
Measurements: Overnight polysomnography with CPAP titration to determine the CPAP requirement, which was standardized for body position and sleep stage, on all three occasions (T0, T2, T4). Objective sleep quality, daytime alertness, subjective sleepiness (Epworth Sleepiness Scale), and mood (Hospital Anxiety and Depression Scale).
Results: CPAP requirement decreased from T0 to T2 (median difference, 1.5 cm H2O, 95% confidence interval [CI], 1.1 to 2.7 cm H2O, p=0.0004) and did not differ between T2 and T4. Use of the lower CPAP pressure during T2 to T4 was associated with a decrease in Epworth scale (mean difference, 2.6, 95% CI, 1.2 to 4; p=0.01) and anxiety (median change, 2; 95% CI, 0.5 to 2.9, p=0.03) scores, as compared with the first 2 weeks. Daytime alertness did not differ between T0 to T2 and T2 to T4.
Conclusion: CPAP requirement falls within 2 weeks of starting CPAP treatment. A change to the lower required CPAP was not associated with any deterioration in daytime alertness but was associated with small subjective improvements in sleepiness and mood.
abstract_id: PUBMED:19700274
A retrospective analysis of airway management in obese patients at a teaching institution. Study Objective: To identify patient characteristics that influence the choice of awake fiberoptic intubation (AFI) versus intubation after general anesthesia in obese patients.
Design: Retrospective study.
Setting: Memorial Hermann Hospital, Houston, TX.
Measurements: Perioperative records of 283 obese patients [body mass index (BMI) >34 kg/m2] who underwent elective surgery between January 1991 and December 1999 were studied. Patients' data were divided into two groups according to method of induction and intubation: asleep direct laryngoscopy versus AFI. Patient demographics, BMI, Mallampati airway classification, history of gastroesophageal reflux disease, peptic ulcer disease, hiatal hernia, and obstructive sleep apnea syndrome were compared between the two groups. Bivariate and multivariate analyses were performed.
Main Results: AFI was performed in 12 (4.2%) obese patients, and direct laryngoscopy was performed in 271 (95.8%) obese patients. Difficult intubation was reported in 21 (7.4%) cases, and there were no reported cases of failed intubations. Bivariate analyses demonstrated that AFI patients were more likely to have a BMI ≥ 60 kg/m2 (P < 0.001), Mallampati class III or IV airway (P < 0.001), and be men (P = 0.004). These three factors were also statistically significant in the multivariate logistic regression. In particular, each 1 kg/m2 increase in BMI was associated with a 7% increased likelihood of AFI. Men were approximately 4 times likelier than women to have an AFI. Compared with patients with a Mallampati Class I or II airway, those with Mallampati Classes III or IV were about 26 times as likely to have an AFI.
Conclusions: Patients selected for AFI were predominantly men, with a Mallampati Class III or IV airway, and BMI ≥ 60 kg/m2.
abstract_id: PUBMED:21677893
Driving simulator performance remains impaired in patients with severe OSA after CPAP treatment. Study Objectives: To assess the effectiveness of CPAP treatment in improving 90-minute driving simulator performance in severe OSA patients compared to age/gender matched controls.
Design: Driving simulator performance was assessed at baseline and 3 months later, with OSA patients treated with CPAP during the interval.
Setting: University Teaching Hospital.
Participants: Patients with severe OSA (n = 11) and control subjects without OSA (n = 9).
Interventions: CPAP. Measurements And Results: Simulator driving parameters of steering deviation, braking reaction time and crashes were measured at baseline and ∼3 months follow-up. At baseline, OSA subjects demonstrated significantly greater steering deviation compared to controls (mean [95% CI], OSA group, 49.9 cm [43.7 to 56.0 cm] vs control group, 34.9 cm [28.1 to 41.7 cm], p = 0.003). Following ∼3 months of CPAP treatment (mean ± SD 6.0 ± 1.4 h/night), steering deviation in OSA subjects improved by an average of 3.1 cm (CI, 1.4 to 4.9), p < 0.001, while no significant steering changes were observed in the control group. Despite the improvement, steering deviation in the OSA group remained significantly higher than in controls (OSA group, 46.7 cm [CI, 40.6 to 52.8 cm] vs control group, 36.1 cm [CI, 29.3 to 42.9 cm], p = 0.025).
Conclusions: While driving simulator performance improved after ∼3 months of CPAP treatment with high adherence in patients with severe OSA, performance remained impaired compared to control subjects. These results add to the growing body of evidence that some neurobehavioral deficits in patients with severe OSA are not fully reversed by treatment. Further studies are needed to assess causes of residual driving simulator impairment and to determine whether this is associated with persistent elevated real-life accident risk.
Trial Registration: Data presented in this manuscript was collected as part of a clinical trial "Experimental Investigations of Driving Impairment in Obstructive Sleep Apnoea" ACTRN12610000009011, http://www.anzctr.org.au/trial_view.aspx?ID=334979
Answer: Yes, treatment of obstructive sleep apnea syndrome (OSAS) in patients from a teaching hospital in Brazil is possible. A study conducted in a large teaching hospital in Brazil evaluated the efficacy of a cost-effective intra-oral appliance for OSAS. The study included 20 patients, out of which 14 completed the study. These patients were treated with a monobloc intra-oral appliance and underwent a follow-up polysomnographic study after 60 days of using the appliance. The results showed a statistically significant reduction in the apnea-hypopnea index (AHI) from 15.53 to 7.82 events per hour (p = 0.002). Additionally, there was a reduction in the Epworth Sleepiness Scale score from 9.14 to 6.36 (p = 0.001), indicating an improvement in daytime sleepiness. The study concluded that the intra-oral device produced a significant reduction in AHI during the study period, and patients did not show myofascial pain either before or 60 days after the use of the intra-oral appliance (PUBMED:18766394).
Instruction: Evaluation of laparoscopic transperitoneal adrenalectomy: is it feasible for large masses?
Abstracts:
abstract_id: PUBMED:30430653
Transperitoneal versus retroperitoneal laparoscopic adrenalectomy for large pheochromocytoma: Comparative outcomes. Objectives: To evaluate operative and oncological outcomes of laparoscopic adrenalectomy through a transperitoneal approach and retroperitoneal approach for large (>5 cm in diameter) pheochromocytomas.
Methods: We retrospectively compared the results of a transperitoneal approach with those of a retroperitoneal approach in 22 patients (mean age 57.5 years, range 38-76 years) with unilateral large pheochromocytomas (12 right, 10 left). The mean body mass index, operation time, pneumoperitoneum time, estimated blood loss, fluctuation in blood pressure and complication rate were compared between the two approaches.
Results: The mean tumor diameter was 7.0 cm (range 5.2-15.5 cm), and no significant differences were observed between the transperitoneal approach and retroperitoneal approach in any baseline clinical parameter. For right-sided procedures, significant differences were found for operation time (113 vs 85 min), pneumoperitoneum time (93 vs 64 min) and estimated blood loss (96 vs 23 mL; P < 0.05, transperitoneal approach and retroperitoneal approach, respectively). No open conversion or recurrence was reported, but one right transperitoneal approach case required blood transfusion. No difference in these parameters was noted on the left side.
Conclusions: For right side procedures, the retroperitoneal approach is feasible, safer and faster than the transperitoneal approach for large pheochromocytomas. Early transection of the feeding artery is beneficial for managing the tumor and reducing the risk of bleeding.
abstract_id: PUBMED:24729820
Two-stage bilateral laparoscopic adrenalectomy for large pheochromocytomas. A 66-year-old Lithuanian female patient with a history of hypertension was diagnosed with bilateral adrenal tumors during a routine sonoscopy. Scintigraphy with metaiodobenzylguanidine and computed tomography scan revealed right 130/116/93 mm and left 85/61/53 mm pheochromocytomas. The patient suffered from hypertension with blood pressure over 240/100 mm Hg and heartbeat disturbances. Blood adrenaline levels exceeded the norm 10-fold. After possible spread of tumors was rejected, laparoscopic transperitoneal adrenalectomy was planned in 2 stages, starting on the right then followed by the left side. After preoperative treatment with adrenoblockers, 2-stage bilateral laparoscopic adrenalectomy was performed. 13 cm × 12 cm × 9.5 cm right adrenal and, 3 months later, 8.5 cm × 8 cm × 6 cm left adrenal pheochromocytomas were removed. Histologically - radical extirpation, pheochromocytomas with possible malignant potential. Stable remission of hypertension was achieved postoperatively. Laparoscopic transperitoneal adrenalectomy is a safe and feasible method of treatment of large benign and possible malignant, but noninvasive pheochromocytomas.
abstract_id: PUBMED:32773296
Comparison between retroperitoneal and transperitoneal laparoscopic adrenalectomy: Are both equally safe? Study Objectives: Compare the rates of major intra- and postoperative complications, surgical conversion and mortality between transperitoneal versus retroperitoneal laparoscopic adrenalectomy.
Patients And Methods: In a series of 344 consecutive unilateral laparoscopic adrenalectomies, performed from January 1997 to December 2017, we evaluated the rates of major intra- and postoperative complications (Clavien-Dindo≥III) and surgical conversion of the two approaches.
Results: The retroperitoneal laparoscopic route was used in 259 patients (67.3%) and the transperitoneal laparoscopic route in 85 patients (22.1%). A total of 12 (3.5%) major postoperative complications occurred, with no statistically significant difference between the two approaches (P=0.7). In univariate analysis, the only predictor of major postoperative complications was Cushing's syndrome (P=0.03). The surgical conversion rate was higher in the transperitoneal route group (10/85, 11.8%) than in the retroperitoneal route group (6/259, 2.3%; P=0.0003). One death occurred in each group. Independent predictors of surgical conversion in multivariate analysis included the transperitoneal laparoscopic approach (OR 1.7, 95% CI 1.3-1.9, P=0.02), advanced age (OR 1.2, 95% CI 1.1-1.6, P=0.04) and large tumor size (OR 1.3, 95% CI 1.1-1.7, P=0.01).
Conclusion: Both transperitoneal and retroperitoneal approaches for laparoscopic adrenalectomy are safe, with an equivalent rate of major complications and mortality. The surgical conversion rate was higher for the transperitoneal route. The retroperitoneal approach should be reserved for small adrenal lesions.
abstract_id: PUBMED:34930214
Comparison of lateral transperitoneal and retroperitoneal approaches for homolateral laparoscopic adrenalectomy. Background: There is a lack of data regarding the appropriateness of transperitoneal and retroperitoneal approaches for homolateral laparoscopic adrenalectomy. The aim of this study is to compare lateral transperitoneal and retroperitoneal approach for left-sided and right-sided laparoscopic adrenalectomy respectively.
Methods: Between January 2014 and December 2019, 242 patients underwent left-sided and 252 patients underwent right-sided laparoscopic adrenalectomy. For left side, transperitoneal approach was used in 132 (103 with tumors < 5 cm and 29 with tumors ≥ 5 cm) and retroperitoneal approach in 110 (102 with tumors < 5 cm and 8 with tumors ≥ 5 cm). For right side, transperitoneal approach was used in 139 (121 with tumors < 5 cm and 18 with tumors ≥ 5 cm) and retroperitoneal approach in 113 (102 with tumors < 5 cm and 11 with tumors ≥ 5 cm). Patient characteristics and perioperative outcomes were recorded. For each side, both approaches were compared for tumors < 5 cm and ≥ 5 cm respectively.
Results: For left-sided tumors < 5 cm, transperitoneal approach demonstrated shorter operative time, less blood loss and longer time to oral intake. For left-sided tumors ≥ 5 cm, the peri-operative data of both approaches was comparable. For right-sided tumors < 5 cm, transperitoneal approach demonstrated shorter operative time and less blood loss. For right-sided tumors ≥ 5 cm, the peri-operative data was comparable.
Conclusions: Lateral transperitoneal and retroperitoneal approach are both effective for laparoscopic adrenalectomy. Lateral transperitoneal approach is faster with less blood loss for tumors < 5 cm.
abstract_id: PUBMED:32117492
Transperitoneal laparoscopic surgery in large adrenal masses. Introduction: The laparoscopic adrenalectomy (LA) has become the gold standard since the transperitoneal laparoscopic approach was first reported.
Aim: To evaluate the applicability, safety and short-term results of laparoscopic surgery in adrenal masses over 6 cm.
Material And Methods: Demographic data, hormonal activities, imaging modalities, operative findings, operation time, conversion rates, complications, duration of hospital stay and histopathologic results of 128 patients who underwent laparoscopic adrenalectomy were evaluated retrospectively. Patients included in the learning curve (n = 23), robotic surgery cases (n = 15) and patients with suspected metastasis (n = 4) were excluded from the study. A mass size of 6 cm was taken as the reference, and two groups were formed (group 1: < 6 cm, group 2: ≥ 6 cm). The results of the two groups were compared.
Results: There were 64 cases in group 1 and 22 cases in group 2. Functional mass ratio and mass sides were similar between the groups (p = 0.30 and p = 0.17, respectively). The mean mass size in group 1 was 36.4 ±11.2 mm and in group 2 82.4 ±15.5 mm. The conversion rate was similar between the two groups (p = 0.18). The duration of surgery was 135.5 ±8.29 min in group 1, 177.0 ±14.9 min in group 2 (p = 0.014). Morbidity and lengths of hospital stay were similar (p = 0.76, p = 0.34 respectively). Adrenocortical carcinoma was detected in three cases in group 1, which were completed laparoscopically, and in two cases in group 2, which were converted to open surgery (p = 0.46).
Conclusions: Although open surgery is still recommended in the guidelines, studies are now being carried out to ensure that laparoscopy can be safely performed on masses over 6 cm. In our study, short-term follow-up outcomes and histopathologic results did not differ between the two groups.
abstract_id: PUBMED:33195643
Retroperitoneal vs transperitoneal laparoscopic lithotripsy of 20-40 mm renal stones within horseshoe kidneys. Background: Horseshoe kidney (HK) with renal stones is challenging for urologists. Although both retroperitoneal and transperitoneal laparoscopic approaches have been reported in some case reports, the therapeutic outcome of retroperitoneal compared with transperitoneal laparoscopic lithotripsy is unknown.
Aim: To assess the efficacy of laparoscopic lithotripsy for renal stones in patients with HK.
Methods: This was a retrospective study of 12 patients with HK and a limited number (n ≤ 3) of 20-40 mm renal stones treated with either retroperitoneal or transperitoneal laparoscopic lithotripsy (June 2012 to May 2019). The perioperative data of both groups were compared including operation time, estimated blood loss, postoperative fasting time, perioperative complications and stone-free rate (SFR).
Results: No significant difference was observed for age, gender, preoperative symptoms, body mass index, preoperative infection, hydronephrosis degree, largest stone diameter, stone number and isthmus thickness. The mean postoperative fasting time of the patients in the retroperitoneal group and the transperitoneal group was 1.29 ± 0.49 and 2.40 ± 0.89 d, respectively (P = 0.019). There was no significant difference in operation time (194.29 ± 102.48 min vs 151.40 ± 39.54 min, P = 0.399), estimated blood loss (48.57 ± 31.85 mL vs 72.00 ± 41.47 mL, P = 0.292) and length of hospital stay (12.14 ± 2.61 d vs 12.40 ± 3.21 d, P = 0.881) between the retroperitoneal and transperitoneal groups. All patients in both groups had a complete SFR and postoperative renal function was within the normal range. The change in estimated glomerular filtration rate (eGFR) from the preoperative stage to postoperative day 1 in the retroperitoneal group and the transperitoneal group was -3.86 ± 0.69 and -2.20 ± 2.17 mL/(min·1.73 m2), respectively (P = 0.176). From the preoperative stage to the 3-mo follow-up, the absolute change in eGFR values for patients in the retroperitoneal group and the transperitoneal group was -3.29 ± 1.11 and -2.40 ± 2.07 mL/(min·1.73 m2), respectively (P = 0.581).
Conclusion: Both retroperitoneal and transperitoneal laparoscopic lithotripsy seem to be safe and effective for HK patients with a limited number of 20-40 mm renal stones.
abstract_id: PUBMED:29118532
Transperitoneal laparoscopic repair of retrocaval ureter: Our experience and review of literature. Context And Aim: Retrocaval ureter (RCU), also known as circumcaval ureter, occurs due to anomalous development of inferior vena cava (IVC) and not ureter. The surgical approach for this entity has shifted from open to laparoscopic and robotic surgery. This is a relatively new line of management with very few case reports. Herein, we describe the etiopathology, our experience with six cases of transperitoneal laparoscopic repair of RCU operated at tertiary care center in India and have reviewed different management options.
Methods: From 2013 to 2016, we operated on a total of six cases of transperitoneal laparoscopic repair of RCU. All were male patients, with an average age of 29.6 years (range 14-50). Pain was the only complaint; renal function was normal and there were no complications. After diagnosis with CT urography, they underwent a radionuclide scan and were operated on. Postoperative follow-up was done with ultrasonography every 3 months and a repeat radionuclide scan at 6 months. The maximum follow-up was 2.5 years.
Results: All cases were completed laparoscopically. The average operating time was 163.2 min. Blood loss varied from 50 to 100 cc. Ureteroureterostomy was done in all patients. None developed a urinary leak or recurrent obstruction postoperatively. The maximum duration of external drainage was 4 days (range 2-4 days). The average postoperative hospitalization was 3.8 days. Follow-up ultrasound and renal scan showed unobstructed drainage.
Conclusions: The transperitoneal and retroperitoneal approaches can be considered equivalent, as parameters such as operative time and results are comparable for the two modalities. We preferred the transperitoneal approach as it provides good working space for intracorporeal suturing.
abstract_id: PUBMED:26692663
Prospective study of preoperative factors predicting intraoperative difficulty during laparoscopic transperitoneal simple nephrectomy. Objective: To prospectively study and identify, the preoperative factors which predict intraoperative difficulty in laparoscopic transperitoneal simple nephrectomy.
Patients And Method: Seventy-seven patients (41 males and 36 females) with a mean age of 43 ± 17 years, undergoing transperitoneal laparoscopic simple nephrectomy at our institute between February 2012 and May 2013, were included in this study. The preoperative patient characteristics recorded were: gender, history of intervention, palpable lump, BMI, urine culture, side, size of kidney, fixity of kidney on USG, perinephric fat stranding on preoperative CT scan, periureteral fat stranding, perinephric collection, enlarged hilar lymph nodes, renal vascular anomalies, and differential renal function on renogram. Preoperative factors of these patients were noted, and intraoperative difficulty was scored from 1 (easiest) to 10 (most difficult or open conversion) by a single surgeon (who was part of all surgeries, either as operating surgeon or assistant). Univariate and multivariate analyses were done using SPSS 15.0 software.
Results: In multivariate analysis, presence of pyonephrosis on preoperative evaluation and BMI < 25 kg/m2 were found to be statistically significant factors predicting intraoperative difficulty during laparoscopic simple nephrectomy. On univariate analysis, the following factors were associated with an increased surgeon's score: lower BMI, palpable kidney, pyonephrosis, history of renal intervention, perinephric fat stranding, right side, and fixity of the kidney on USG with surrounding structures.
Conclusion: Our findings suggest that the presence of pyonephrosis on preoperative imaging and a BMI of less than 25 kg/m2 are the most significant factors predicting intraoperative difficulty during laparoscopic simple nephrectomy.
abstract_id: PUBMED:25877815
Evaluation of laparoscopic transperitoneal adrenalectomy: is it feasible for large masses? Aim: The aim of this paper was to determine whether laparoscopic adrenalectomy (LA) is a safe and effective treatment for the management of large adrenal tumors.
Methods: We retrospectively evaluated the data of patients who underwent LA at our institution between September 2002 and September 2012. Seventy-six transperitoneal LAs were performed by the same surgical team. Patients with tumors invading adjacent organs or with distant metastasis were excluded from the study. All patients were operated on in the 45° oblique position using the transperitoneal approach.
Results: The mean age of the patients was 48.3 years (range 20-68 years). The mean tumor size was 5.37 cm (range 2-15 cm). Sixteen patients had a tumor size over 8 cm. The mean tumor weight was 31.2 g (range 2-156 g). The lesions were localized on the right side in 42 (55%) patients and on the left side in 34 (45%) patients. The mean intraoperative blood loss was 114 mL (range 20-400 mL) and the mean operative time was 112 min (range 55-300 min). Six patients (7%) required conversion to an open procedure. The mean hospitalization time was 2.5 days (range 1-4 days). Five patients (6%) had minor postoperative complications. There were no incidents of capsular invasion or adverse cardiovascular events.
Conclusion: LA is safe and feasible for both malignant and benign adrenal lesions. Good preoperative assessment, surgical skills, teamwork and adherence to anatomical and surgical principles are the key to success for large adrenal masses.
abstract_id: PUBMED:30675659
Laparoscopic bilateral anterior transperitoneal adrenalectomy: 24 years experience. Background: The aim of this study is to evaluate the feasibility, safety, advantages and surgical outcomes of laparoscopic bilateral adrenalectomy (LBA) by an anterior transperitoneal approach.
Methods: From 1994 to 2018, 552 patients underwent laparoscopic adrenalectomy, unilateral in 531 and bilateral in 21 patients (9 females and 12 males). All patients who underwent LBA were approached via a transperitoneal anterior route and form our study population. Indications included: Cushing's disease (n = 11), pheochromocytoma (n = 6), Conn's disease (n = 3) and adrenal cysts (n = 1).
Results: Mean operative time was 195 ± 86.2 min (range 55-360 min). Conversion was necessary in one case for bleeding. Three patients underwent concurrent laparoscopic cholecystectomy with laparoscopic common bile duct exploration and ductal stone extraction in one. Three postoperative complications occurred in one patient each: subhepatic fluid collection, intestinal ileus and pleural effusion. Mean hospital stay was 6.1 ± 4.7 days (range 2-18 days).
Conclusions: In our experience, transperitoneal anterior LBA was feasible and safe. Based on our results, we believe that this approach leads to prompt recognition of anatomical landmarks with early division of the main adrenal vein prior to any gland manipulation, with a low risk of bleeding and without the need to change patient position. Unlike the lateral approach, there is no need to mobilize the spleno-pancreatic complex on the left or the liver on the right. The ability to perform associated intraperitoneal procedures, if required, is an added benefit.
Answer: The evaluation of laparoscopic transperitoneal adrenalectomy (LA) for large adrenal masses suggests that it is a feasible and safe procedure. Studies have shown that LA can be effectively performed on large adrenal tumors, including those over 6 cm in size, with good short-term outcomes and without significant differences in conversion rates or histopathologic results when compared to smaller masses (PUBMED:32117492). Specifically, one study reported that the mean tumor size was 5.37 cm, with some patients having tumors over 8 cm, and the mean tumor weight was 31.2 grams. The mean intraoperative blood loss was 114 mL, and the mean operative time was 112 minutes. The conversion rate to open surgery was 7%, and the mean hospitalization time was 2.5 days, with a 6% rate of minor post-operative complications (PUBMED:25877815).
Furthermore, for large pheochromocytomas, both the transperitoneal and retroperitoneal approaches have been compared. For right-sided procedures, the retroperitoneal approach has been found to be faster, safer, and associated with less blood loss than the transperitoneal approach (PUBMED:30430653). However, for left-sided procedures, the peri-operative data for tumors larger than 5 cm was comparable between both approaches (PUBMED:34930214).
In the context of bilateral adrenalectomy, the transperitoneal anterior approach has been deemed feasible and safe, with the added benefit of not requiring patient repositioning and allowing for the performance of concurrent intraperitoneal procedures if necessary (PUBMED:30675659).
Overall, the evidence suggests that laparoscopic transperitoneal adrenalectomy is a viable option for the management of large adrenal masses, with careful preoperative assessment and adherence to surgical principles being crucial for successful outcomes.
Instruction: Emergency endotracheal intubation-related adverse events in bronchial asthma exacerbation: can anesthesiologists attenuate the risk?
Abstracts:
abstract_id: PUBMED:25801541
Emergency endotracheal intubation-related adverse events in bronchial asthma exacerbation: can anesthesiologists attenuate the risk? Purpose: Airway management in severe bronchial asthma exacerbation (BAE) carries very high risk and should be performed by experienced providers. However, no objective data are available on the association between the laryngoscopist's specialty and endotracheal intubation (ETI)-related adverse events in patients with severe bronchial asthma. In this paper, we compare emergency ETI-related adverse events in patients with severe BAE between anesthesiologists and other specialists.
Methods: This historical cohort study was conducted at a Japanese teaching hospital. We analyzed all BAE patients who underwent ETI in our emergency department from January 2002 to January 2014. Primary exposure was the specialty of the first laryngoscopist (anesthesiologist vs. other specialist). The primary outcome measure was the occurrence of an ETI-related adverse event, including severe bronchospasm after laryngoscopy, hypoxemia, regurgitation, unrecognized esophageal intubation, and ventricular tachycardia.
Results: Of 39 patients, 21 (53.8 %) were intubated by an anesthesiologist and 18 (46.2 %) by other specialists. Crude analysis revealed that ETI performed by an anesthesiologist was significantly associated with attenuated risk of ETI-related adverse events [odds ratio (OR) 0.090, 95 % confidence interval (CI) 0.020-0.41, p = 0.001]. The benefit of attenuated risk remained significant after adjusting for potential confounders, including Glasgow Coma Score, age, and use of a neuromuscular blocking agent (OR 0.058, 95 % CI 0.010-0.35, p = 0.0020).
Conclusions: Anesthesiologist as first exposure was independently associated with attenuated risk of ETI-related adverse events in patients with severe BAE. The skill and knowledge of anesthesiologists should be applied to high-risk airway management whenever possible.
abstract_id: PUBMED:8222690
Endotracheal intubation and mechanical ventilation in severe asthma. Objective: To determine the occurrence rate of complications and mortality in patients with severe asthma requiring endotracheal intubation and mechanical ventilation.
Design: Retrospective review of medical records from September 1982 to July 1988.
Setting: Urban, teaching hospital serving primarily indigent patients.
Patients: Fifty-seven adult patients with asthma requiring tracheal intubation and mechanical ventilation.
Interventions: None.
Measurements And Main Results: Fifty-seven patients requiring tracheal intubation and mechanical ventilation during 69 hospital admissions were identified. Medication noncompliance and upper respiratory tract infections were recorded as the most frequent precipitating events for exacerbation of asthma. Forty-nine intubations were initiated because of a clinical diagnosis of respiratory distress, but multiple indications were present in 42 admissions. One or more complications occurred in 31 episodes of endotracheal intubation and mechanical ventilation (45%). Death occurred in four (6%) of 69 admissions. Three of the four deaths occurred in patients who had a cardiorespiratory arrest before hospital admission.
Conclusions: While complications occurred in 45% of patients with severe asthma requiring intubation and mechanical ventilation, the mortality rate was low. We conclude that intubation and mechanical ventilation in patients with life-threatening asthma are safe and beneficial interventions.
abstract_id: PUBMED:8933315
Successful intubation with the Combitube in acute asthmatic respiratory distress by a Parkmedic. The Combitube is a relatively new device used for blind insertion emergency intubation. We report a case of successful Combitube treatment of an acute respiratory arrest secondary to an acute asthma exacerbation. An advanced EMT-II (National Park Service Parkmedic) utilized this device. Our review of the literature revealed no reported cases of an advanced EMT-II, nor any other cases, using the Combitube in asthma-related respiratory failure.
abstract_id: PUBMED:35275334
Dupilumab-Associated Adverse Events During Treatment of Allergic Diseases. Among the new biological therapies for atopic diseases, dupilumab is a fully human monoclonal antibody directed against IL-4Rα, the common chain of interleukin-4 and interleukin-13 receptors. Dupilumab showed clinical improvements in patients with atopic dermatitis, asthma, and chronic rhinosinusitis and is currently under development for other indications. While dupilumab is considered to be well tolerated, a number of recent publications have reported various adverse events. This review aims to summarize the current knowledge about these adverse events, which may help clinicians to improve the follow-up of patients on dupilumab. Injection-site reactions are the most commonly reported adverse event. However, dupilumab has also been shown to cause ophthalmic complications (e.g., dry eyes, conjunctivitis, blepharitis, keratitis, and ocular pruritus), head and neck dermatitis, onset of psoriatic lesions, exacerbation or progression of cutaneous T-cell lymphoma, alopecia areata, hypereosinophilia, and arthritis. Most can be managed while dupilumab treatment is continued, but some (e.g., severe conjunctivitis) may result in discontinuation of treatment. Their molecular origin is unclear and requires further investigation. Among other hypotheses, it has been suggested that T helper (Th)2-mediated pathway inhibition may worsen Th1/Th17-dependent immune responses. An ophthalmological examination for the presence of potential predictive indicators of ophthalmic adverse events is recommended before initiation of dupilumab therapy.
abstract_id: PUBMED:18523132
Meta-analysis: effects of adding salmeterol to inhaled corticosteroids on serious asthma-related events. Background: Recent analyses have suggested an increased risk for serious asthma-related adverse events in patients receiving long-acting beta-agonists.
Purpose: To examine whether the incidences of severe asthma-related events (hospitalizations, intubations, deaths, and severe exacerbations) differ in persons receiving salmeterol plus inhaled corticosteroids compared with inhaled corticosteroids alone.
Data Sources: The GlaxoSmithKline (Research Triangle Park, North Carolina) database, MEDLINE, EMBASE, CINAHL, and the Cochrane Database of Systemic Reviews (1982 to September 2007) were searched without language restriction.
Study Selection: Randomized, controlled trials reported in any language that compared inhaled corticosteroids plus salmeterol (administered as fluticasone propionate/salmeterol by means of a single device or concomitant administration of inhaled corticosteroids and salmeterol) versus inhaled corticosteroids alone in participants with asthma.
Data Extraction: Three physicians independently reviewed and adjudicated blinded case narratives on serious adverse events that were reported in the GlaxoSmithKline trials.
Data Synthesis: Data from 66 GlaxoSmithKline trials involving a total of 20 966 participants with persistent asthma were summarized quantitatively. The summary risk difference for asthma-related hospitalizations from these trials was 0.0002 (95% CI, -0.0019 to 0.00231; P = 0.84) for participants receiving inhaled corticosteroids plus salmeterol (n = 35 events) versus those receiving inhaled corticosteroids alone (n = 34 events). One asthma-related intubation and 1 asthma-related death occurred among participants receiving inhaled corticosteroids with salmeterol; no such events occurred among participants receiving inhaled corticosteroids alone. A subset of 24 trials showed a decreased risk for severe asthma-related exacerbations for inhaled corticosteroids plus salmeterol versus inhaled corticosteroids alone (risk difference, -0.025 [CI, -0.036 to -0.014]; P <0.001).
Limitations: The included trials involved selected patients who received careful follow-up. Only 26 trials were longer than 12 weeks. Few deaths and intubations limited the ability to measure risk for these outcomes.
Conclusion: Salmeterol combined with inhaled corticosteroids decreases the risk for severe exacerbations, does not seem to alter the risk for asthma-related hospitalizations, and may not alter the risk for asthma-related deaths or intubations compared with inhaled corticosteroids alone.
abstract_id: PUBMED:28297811
Analysis of short-term respiratory adverse events in 183 bronchial thermoplasty procedures Objective: To analyze the short-term (3 weeks) adverse respiratory events after bronchial thermoplasty (BT) in patients with severe asthma. Methods: The China-Japan Friendship Hospital recruited 62 patients with severe asthma for BT treatment from March 2014 to July 2016, with a total of 183 BT procedures. The data on adverse respiratory events within 3 weeks after the procedure were collected to analyze the factors that might potentially influence the occurrence of adverse events. Results: Forty-three patients (69.4%) experienced adverse respiratory events within 3 weeks after treatment. In total, 153 adverse respiratory events occurred after 87 procedures (47.5%). The main adverse events were cough (15 events, 8.20%), sputum production (37 events, 20.22%), temporary PEF reduction (37 events, 20.22%), chest distress (12 events, 6.56%), blood in sputum (11 events, 6.01%), asthma exacerbation (10 events, 5.46%), and pneumonia (6 events, 3.28%). Most events were relieved or resolved with standard therapy within 1 week. No severe adverse events, including tracheal intubation, malignant arrhythmias or death, occurred within 3 weeks after the procedure. The baseline eosinophil percentage in induced sputum and blood, operation times, and preoperative FEV1 (% predicted) might influence the occurrence of adverse events after treatment. Patients with preoperative FEV1 (% predicted) ≥60% had a lower risk of adverse events. Conclusion: BT showed a good safety profile in treating patients with severe asthma within 3 weeks after the procedure.
abstract_id: PUBMED:9742860
Emergency room visits by patients with exacerbations of asthma We retrospectively analyzed patterns of emergency room visits by patients with exacerbations of asthma from December 1995 through November 1996. A total of 591 episodes in 198 patients were reviewed. The average age was 35.8 years, ranging from 15 to 71. The largest number of visits occurred in September. The number of visits per year ranged from 1 to 22; the mean was 2.9 per year. In patients who were followed on a regular basis at our institution, serve attacks accounted for 7.1% of the total, compared with 21.6 percent at other hospitals or outpatient clinics. We suspect that this difference was related to differences in the use of inhaled steroids. At our institution, 89% of patients were taking inhaled steroids; at other hospitals or clinics, only 21% were taking inhaled steroids. Of the 198 patients, 33 fulfilled one of the following criteria: (1) endotracheal intubation for respiratory failure or respiratory arrest, (2) respiratory acidosis (pH < 7.35) without endotracheal intubation; 27% of those patients had been given a diagnosis of mild asthma before the acute exacerbation. We conclude that patient education and standard guidelines for treatment of asthma, are very important for appropriate management of asthma, to prevent exacerbations and asthma-related deaths.
abstract_id: PUBMED:24569935
Prehospital non-invasive ventilation in Germany: results of a nationwide survey of ground-based emergency medical services Background And Objectives: Non-invasive ventilation (NIV) is an evidence-based treatment of acute respiratory failure and can be helpful to reduce morbidity and mortality. In Germany national S3 guidelines for inhospital use of NIV based on a large number of clinical trials were published in 2008; however, only limited data for prehospital non-invasive ventilation (pNIV) and hence no recommendations for prehospital use exist so far.
Aim: In order to create a database for pNIV in Germany a nationwide survey was conducted to explore the status quo for the years 2005-2008 and to survey expected future developments including disposability, acceptance and frequency of pNIV.
Material And Methods: A questionnaire on the use of pNIV was developed and distributed to 270 heads of medical emergency services in Germany.
Results: Of the 270 questionnaires distributed, 142 could be evaluated (52%). pNIV was rated as a reasonable treatment option by 91% of respondents but was available in only 54 of the 142 responding emergency medical services (38%). Continuous positive airway pressure (98%) and biphasic positive airway pressure (22%) were the predominantly used ventilation modes. Indications for pNIV use were acute cardiogenic pulmonary edema (96%), acute exacerbation of chronic obstructive pulmonary disease (89%), asthma (32%) and pneumonia (28%). Adverse events were reported for panic (20 ± 17%) and non-threatening heart rhythm disorders (8 ± 5%); the rate of secondary intubation was low (a reduction from 20% to 10%) and comparable to data from in-hospital treatment.
Conclusion: Prehospital NIV in Germany was used by only about one third of all respondents by the end of 2008. Based on the clinical data, growing use of pNIV is expected. Controlled prehospital studies are needed to establish evidence-based recommendations for pNIV.
abstract_id: PUBMED:33207982
Predicting the requiring intubation and invasive mechanical ventilation among asthmatic exacerbation-related hospitalizations. Objective: To identify the predictors of requiring intubation and invasive mechanical ventilation (IMV) in asthmatic exacerbation (AE)-related hospitalizations.
Methods: This study was conducted in southern Thailand between October 2016 and September 2018. The characteristics and clinical findings of patients admitted for AE requiring intubation and IMV were analyzed. The variables were evaluated by univariate and multivariate analysis to identify the independent predictors.
Results: A total of 509 patients with a median age of 53 years were included in the study. Being female (60.2%), having no previous use of a controller (64.5%), having a history of smoking, and having a high white blood cell count (14,820 cells/mm3) were significantly more common characteristics of the patients requiring mechanical ventilation. Univariate analysis showed that being male (OR = 1.96, 95% CI 1.22-3.13), having a history of 1-2 AEs in the past 12 months (OR = 3.27, 95% CI 1.75-6.12), and having an absolute eosinophil count ≥300 cells/mm3 (OR = 1.68, 95% CI 1.05-2.69) were associated with requiring IMV, whereas taking a reliever (OR = 0.36, 95% CI 0.23-0.57) or a controller (OR = 0.42, 95% CI 0.27-0.68) was associated with a decreased risk of requiring intubation and IMV. In multivariate analysis, only 1-2 AEs within the past 12 months (OR = 3.12, 95% CI 1.19-8.21) was an independent predictor of requiring intubation and IMV in patients with AE-related hospitalization (p = 0.021).
Conclusions: This study found that a history of 1-2 AEs in the past 12 months was a strong independent predictor for the requirement of intubation and IMV in patients hospitalized for AE-related conditions.
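As a rough illustration of how odds ratios and 95% confidence intervals like those above are typically computed, the following minimal Python sketch applies the standard log-odds (Woolf) approximation to a 2x2 exposure-outcome table; the counts are hypothetical and are not taken from the cited study.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # a = exposed with outcome, b = exposed without, c = unexposed with outcome, d = unexposed without
    or_value = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lower = math.exp(math.log(or_value) - z * se_log_or)
    upper = math.exp(math.log(or_value) + z * se_log_or)
    return or_value, lower, upper

# Hypothetical counts only (e.g. prior exacerbations vs need for IMV):
print(odds_ratio_ci(a=30, b=70, c=40, d=280))  # -> roughly (3.0, 1.75, 5.15)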
abstract_id: PUBMED:32026414
Can the Number of Radiofrequency Activations Predict Serious Adverse Events after Bronchial Thermoplasty? A Retrospective Case-Control Study. Introduction: Bronchial thermoplasty (BT) is a bronchoscopic procedure that involves the delivery of thermal radiofrequency energy to the bronchial wall for treating severe asthma. It has been suggested that too many radiofrequency activations could induce serious adverse events (SAEs) at an early stage. We aimed to examine the number of radiofrequency activations at each session and early lung function changes from baseline to determine whether these are related to SAEs.
Methods: We retrospectively investigated 13 consecutive patients who underwent three sessions each of BT for severe asthma from February 2015 to January 2016. Lung function tests were performed on the day before and after each BT procedure. Since we compared the number of activations and lung function changes from baseline after each session, a total of 39 sessions were reviewed. The relationship between the number of radiofrequency activations and each lung function change from baseline was also examined by linear regression analysis.
Results: A total of 10 SAEs (4 pneumonia, 3 atelectasis, 2 bronchial asthma exacerbations and 1 hemoptysis) were observed following the 39 BT sessions. When we compared sessions with and without SAEs, there were no differences in the number of activations (mean ± SD, 71.5 ± 28.6 times in sessions with SAEs; 66.5 ± 25.1 times in sessions without SAEs; p = 0.772) or in lung function changes (mean changes in FVC/%FVC/FEV1/%FEV1/%PEF from baseline: -0.49 L/-14.2%/-0.36 L/-11.7%/-9.6% in sessions with SAEs; -0.43 L/-13.3%/-0.34 L/-12.1%/-9.4% in sessions without SAEs; p > 0.05 for all the above). An increase in the number of activations correlated with decreased FEV1 (R2 = 0.17, p = 0.0088) and %FEV1 (R2 = 0.11, p = 0.0357).
Conclusions: Increase in the number of radiofrequency activations during BT is related to a decrease in FEV1 and %FEV1 from baseline. The number of radiofrequency activations, however, is not associated with SAEs after BT.
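The activation-versus-lung-function relationship above comes from simple linear regression; as a minimal sketch (with made-up numbers, not the study's data), such an analysis could look as follows in Python.

import numpy as np

# Hypothetical per-session values: radiofrequency activations and change in FEV1 (L)
activations = np.array([40, 55, 60, 72, 85, 95, 110], dtype=float)
fev1_change_l = np.array([-0.20, -0.25, -0.30, -0.35, -0.40, -0.45, -0.55])

slope, intercept = np.polyfit(activations, fev1_change_l, 1)    # least-squares fit
r_squared = np.corrcoef(activations, fev1_change_l)[0, 1] ** 2  # R^2 of the fit

print(f"slope = {slope:.4f} L per activation, R^2 = {r_squared:.2f}")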
Answer: Yes, anesthesiologists can attenuate the risk of emergency endotracheal intubation (ETI)-related adverse events in patients with severe bronchial asthma exacerbation (BAE). A historical cohort study conducted at a Japanese teaching hospital analyzed BAE patients who underwent ETI in the emergency department and found that ETI performed by an anesthesiologist was significantly associated with an attenuated risk of ETI-related adverse events, including severe bronchospasm after laryngoscopy, hypoxemia, regurgitation, unrecognized esophageal intubation, and ventricular tachycardia. The benefit of reduced risk remained significant even after adjusting for potential confounders such as Glasgow Coma Score, age, and the use of a neuromuscular blocking agent. This suggests that the skill and knowledge of anesthesiologists should be applied to high-risk airway management whenever possible (PUBMED:25801541).
Instruction: Interpretation of trauma radiographs by junior doctors in accident and emergency departments: a cause for concern?
Abstracts:
abstract_id: PUBMED:9315930
Interpretation of trauma radiographs by junior doctors in accident and emergency departments: a cause for concern? Objectives: To investigate how well junior doctors in accident and emergency (A&E) were able to diagnose significant x ray abnormalities after trauma and to compare their results with those of more senior doctors.
Methods: 49 junior doctors (senior house officers) in A&E were tested with an x ray quiz in a standard way. Their results were compared with 34 consultants and senior registrars in A&E and radiology, who were tested in the same way. The quiz included 30 x rays (including 10 normal films) that had been taken after trauma. The abnormal films all had clinically significant, if sometimes uncommon, diagnoses. The results were compared and analysed statistically.
Results: The mean score for the abnormal x rays for all the junior doctors was only 32% correct. The 10 junior doctors who were more experienced scored significantly better (P < 0.001), but their mean score was only 48%. The mean score of the senior doctors was 80%, which was significantly higher than that of the juniors (P < 0.0001).
Conclusions: The majority of junior doctors misdiagnosed significant trauma abnormalities on x ray. Senior doctors scored well, but were not infallible. This suggests that junior doctors are not safe to work on their own in A&E departments. There are implications for training, supervision, and staffing in A&E departments, as well as a need for fail-safe mechanisms to ensure adequate patient care and to improve risk management.
abstract_id: PUBMED:31301784
Accuracy of appendicular radiographic image interpretation by radiographers and junior doctors in Ghana: Can this be improved by training? Introduction: Access to image interpretation in Ghana remains a challenge with the limited number of radiologists. Radiographers with the right skills and knowledge in image interpretation could help address this challenge. The aims of the study were to determine and compare the ability (accuracy, sensitivity and specificity) of radiographers and junior doctors in interpreting appendicular trauma radiographs both before and after training.
Methods: An action research study involving pre- and post-training tests was carried out to determine the level of accuracy, sensitivity and specificity in abnormality detection by radiographers after undergoing training, compared with junior doctors. Eight radiographers and twelve junior doctors were invited to interpret an image bank of 30 skeletal radiographs, both before and upon completion of an educational program. The participants' tests were scored against a reference standard provided by an experienced radiologist. Pre- and post-test analyses were carried out for comparison.
Results: Post-training mean accuracy (radiographers 83.3% vs 68.8%, p = 0.017; doctors 81.9% vs 71.6%, p = 0.003), sensitivity (radiographers 83.3% vs 69.2%, p = 0.042; doctors 77.2% vs 67.8%, p = 0.025) and specificity (radiographers 83.3% vs 68.3%, p = 0.011; doctors 86.7% vs 75.6%, p = 0.005) of both groups significantly improved. No significant differences were recorded between the radiographers and doctors after the training event.
Conclusion: The study revealed that, with a well-structured training program, radiographers and junior doctors could improve on their accuracies in radiographic abnormality detection and commenting on trauma radiographs.
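Accuracy, sensitivity and specificity in studies like this are computed by scoring each reading against the reference standard; the following is a minimal Python sketch with hypothetical counts (not the study's data).

def diagnostic_metrics(tp, fn, tn, fp):
    # tp/fn: truly abnormal films called abnormal/normal; tn/fp: truly normal films called normal/abnormal
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return accuracy, sensitivity, specificity

# e.g. an image bank of 30 radiographs: 18 truly abnormal, 12 truly normal (hypothetical reader)
print(diagnostic_metrics(tp=15, fn=3, tn=10, fp=2))  # -> (0.83, 0.83, 0.83)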
abstract_id: PUBMED:12956672
Interpreting trauma radiographs. Background: Many accident and emergency clinicians regard the radiographic image as an extension of the clinical examination, as a provisional diagnosis, based on clinical signs and symptoms, can be confirmed or refuted by inspection of X-rays. However, the value of radiography in this context is not determined by the actual presence of trauma or pathology on the radiograph, but is dependent on the ability of a clinician to identify any trauma or pathology present. Traditionally, the responsibility for interpreting radiographic images within the accident and emergency environment in the United Kingdom (UK) has been with medical clinicians. However, expansion of the nursing role has begun to change the boundaries of professional practice and now many nurses are both requesting and interpreting trauma radiographs.
Aim: To ascertain the ability of accident and emergency doctors and nurses to interpret trauma radiographs, and identify whether there is a consistent standard of interpretive accuracy that could be used as a measure of competence.
Methods: A literature review was conducted using the Cochrane Library, Medline and CINAHL databases and the keywords radiographic interpretation, radiographic reporting, accident and emergency and emergency/nurse practitioner.
Findings: The ability of accident and emergency doctors and nurses to interpret trauma radiographs accurately varies markedly, and no identified published study has established an appropriate level of accuracy that should be achieved in order to demonstrate satisfactory competence in the interpretation of radiographic images.
Conclusions: Determining a measure of interpretive accuracy that can be used to assess ability to interpret radiographic trauma images is fraught with difficulties. Consequently, nurses may attempt to prove their skills by directly comparing their abilities to those of their medical colleagues. However, as a result of marked variation in the ability of senior house officers to interpret trauma radiographs, a similar ability does not automatically imply that a satisfactory level of ability has been achieved.
abstract_id: PUBMED:2372624
Improving the care of patients with major trauma in the accident and emergency department. Objective: To determine whether improvement in the care of victims of major trauma could be made by using the revised trauma score as a triage tool to help junior accident and emergency doctors rapidly identify seriously injured patients and thereby call a senior accident and emergency specialist to supervise their resuscitation.
Design: Comparison of results of audit of management of all seriously injured patients before and after these measures were introduced.
Setting: Accident and emergency department in an urban hospital.
Patients: All seriously injured patients (injury severity score greater than 15) admitted to the department six months before and one year after introduction of the measures.
Results: Management errors were reduced from 58% (21/36) to 30% (16/54) (p < 0.01). Correct treatment rather than improvement in diagnosis or investigation accounted for almost all the improvement.
Conclusions: The management of seriously injured patients in the accident and emergency department can be improved by introducing two simple measures: using the revised trauma score as a triage tool to help junior doctors in the accident and emergency department rapidly identify seriously injured patients, and calling a senior accident and emergency specialist to supervise the resuscitation of all seriously injured patients.
Implications: Care of patients in accident and emergency departments can be improved considerably at no additional expense by introducing two simple measures.
abstract_id: PUBMED:34976474
Perceived Barriers to Participation in Clinical Research Amongst Trauma and Orthopaedic Community: A Survey of 148 Consultants and Junior Doctors in Wales. Background: Research has led to substantial improvement in health and quality of life. It is pertinent for doctors to participate in research to keep up with the advances of modern medicine and forms one of the seven pillars of clinical governance defined by the General Medical Council. However, clinicians face multiple barriers to participating in research. The objective of this study was to identify barriers in participation and to recommend solutions for better engagement in orthopaedic research.
Methodology: Trauma and Orthopaedic consultants and junior doctors in Wales were asked to complete a web-based survey with 15 questions about barriers to participation and suggestions for increasing involvement in clinical research.
Results: A total of 148 completed forms were received, which included 60 consultants and 88 junior doctors. The response rate was 86%. The most frequently reported barriers to clinical research were time constraints, excess paperwork, lack of knowledge about research methods, and lack of awareness of ongoing research studies. Most participants were keen to be involved in research in the future. The majority responded that they would be more likely to take part in research activity if there were formal training sessions and more dedicated research sessions scheduled into their timetable. A need for more incentives and the allocation of a research officer were other suggestions. Most orthopaedic staff recognised the relevance of research to their job/training.
Conclusion: There are multiple perceived barriers to participating in research at all levels in the orthopaedic community; however, these could be mitigated by implementing simple measures.
abstract_id: PUBMED:8783908
Accident & emergency department diagnosis--how accurate are we? An audit of the accuracy of diagnosis for admitted patients made by the medical officers of the Accident and Emergency Department was carried out recently in Toa Payoh Hospital. This was done over a period of one week, from 2nd to 8th February 1994. A total of 122 admissions were studied and their diagnoses at admission compared with the diagnoses at discharge made by the doctors from the various disciplines in the wards. It was found that a high degree of accuracy of diagnosis was achieved by the medical officers of the Accident and Emergency (A&E) Department for surgical disciplines (82.9% for General Surgery and 95.8% for Orthopaedic Surgery), and an acceptable degree of accuracy (77.6%) for General Medicine. In addition, the usage of laboratory investigations in the Accident and Emergency Department was also studied. We also assessed the performance of trainees and of senior and junior medical officers. It is hoped that such an audit will serve to define standards for diagnostic accuracy in the Accident and Emergency Department. This can be a useful tool in the future for measuring and improving the performance of individual Emergency Room medical officers, and also the various Accident and Emergency Departments.
abstract_id: PUBMED:14521966
Requesting and interpreting trauma radiographs: a role extension for accident & emergency nurses. Government supported expansion of the nursing role within Accident & Emergency (A&E) departments in the United Kingdom (UK) has begun to break down the traditional barriers to professional practice. Today, many nurses working within A&E departments are both requesting and interpreting radiographic examinations as part of their normal working practice. However, role expansion does not occur without increased responsibility. Unsatisfactory requests for radiography and inaccurate radiographic interpretation may result in inappropriate patient treatment, misuse of resources, patient recall and litigation. Nurses undertaking these role extensions need to ensure that their levels of knowledge and skill to perform the role are appropriate and adequately supported. This article summarises the results of a national questionnaire survey of A&E nurse managers that aimed to identify current working practices, including education, training and limitations to practice, with respect to the requesting and interpretation of trauma radiographs by A&E nurses.
abstract_id: PUBMED:18660393
Performance of emergency medicine residents in the interpretation of radiographs in patients with trauma. Background: Radiographs are vital diagnostic tools that complement physical examination in trauma patients. A study was undertaken to assess the performance of residents in emergency medicine in the interpretation of trauma radiographs.
Methods: 348 radiographs of 100 trauma patients admitted between 1 March and 1 May 2007 were evaluated prospectively. These consisted of 93 cervical spine (C-spine) radiographs, 98 chest radiographs, 94 radiographs of the pelvis and 63 computed tomographic (CT) scans. All radiological material was evaluated separately by five emergency medicine residents and a radiology resident who had completed the first 3 years of training. The same radiographs were then evaluated by a radiologist whose opinion was considered to be the gold standard. The sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were calculated.
Results: The mean (SE) age of the patients was 29 (2) years (range 2-79). There were no statistically significant differences in terms of pathology detection between the emergency medicine residents and the radiologist. The agreement between the emergency medicine residents and the radiology resident was excellent for radiographs of the pelvis and the lung (kappa (κ) = 0.928 and 0.863, respectively; p < 0.001) and good for C-spine radiographs and CT scans (κ = 0.789 and 0.773, respectively; p < 0.001).
Conclusions: Accurate interpretation of radiographs by emergency medicine residents who perform the initial radiological and therapeutic interventions on trauma patients is of vital importance. The performance of our residents was found to be satisfactory in this regard.
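The kappa values above are chance-corrected agreement statistics; a minimal sketch of Cohen's kappa (with hypothetical ratings, not the study's data) is shown below.

import numpy as np

def cohens_kappa(r1, r2):
    r1, r2 = np.asarray(r1), np.asarray(r2)
    categories = np.union1d(r1, r2)
    p_observed = np.mean(r1 == r2)  # observed agreement
    p_expected = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)  # chance agreement
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical readings (1 = pathology present, 0 = absent) for ten radiographs
resident = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
radiologist = [1, 1, 0, 0, 1, 0, 0, 1, 0, 1]
print(round(cohens_kappa(resident, radiologist), 2))  # -> 0.6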
abstract_id: PUBMED:11696496
Alcohol and radiographs in the accident and emergency department. Objective: To investigate the contribution of alcohol ingestion to the radiological workload of an inner city accident and emergency (A&E) department.
Methods: A prospective survey of patients presenting to A&E who required radiographs was performed over a seven day period. The A&E clinician questioned patients about alcohol intake during the six hours before the onset of the presenting complaint or injury, and made an objective assessment of signs of alcohol ingestion or intoxication. An assessment was made also of the relative contribution of alcohol as a cause of patients' injuries.
Results: A total of 419 patients who had radiography fulfilled the inclusion criteria, and a questionnaire was completed for 351 (84%). Forty (11%) of 351 were found to have ingested alcohol. Thirty five (87%) of 40 patients who had ingested alcohol were radiographed for trauma, as compared with 171 (55%) of the 311 who had not (p<0.001). Alcohol was considered to have been causative of injury in 30% and a contributory factor in an additional 58%. Radiographs of the skull, face and jaw accounted for 18 (33%) of 55 radiographs from trauma patients who had ingested alcohol compared with 20 (9%) of 212 radiographs from those who had not (p<0.001). There was no significant difference in the proportion of abnormal radiographs between these two groups (27% of radiographs from trauma patients who had ingested alcohol compared with 23% of radiographs from those who had not, p>0.2).
Conclusion: Patients with alcohol related injuries requiring radiography have a significant impact on the radiological workload of an A&E department, although the prevalence of alcohol ingestion detected in this study was less than expected from previous studies.
abstract_id: PUBMED:24575526
Professional knowledge of accident and emergency doctors on the management of dental injuries. Background: Accident and emergency (A&E) doctors are often the first to assess dental and dento-alveolar injuries. Early diagnosis and appropriate management is essential for a good long-term prognosis of restored dental aesthetics and function.
Objective: To evaluate A&E doctors' knowledge of the management of dental injuries, and appropriate onward referral.
Design And Setting: A cross-sectional survey using a questionnaire sent to A&E doctors of all grades at all nine A&E hospitals in South Wales over a consecutive five-month period.
Method: Data relating to the level of knowledge of emergency management of dental trauma were analysed. The relationship between doctor's level of experience and previous training received on the management of common dento-alveolar trauma was considered. Data were analysed for Pearson correlation coefficients.
Results: The response rate to the questionnaire was 72% (n = 118). There was a negligible correlation between the career grade of the doctors and their knowledge of the management of dental injuries (r = 0.128, p < 0.001). A strong positive linear correlation (r = 0.928, p < 0.001) was found between those with prior training and their knowledge of dental injury management.
Conclusions: The findings from this study suggest that A&E doctors have only partial knowledge of the management of dental injuries, though appropriate training can significantly increase knowledge.
Answer: Yes, the interpretation of trauma radiographs by junior doctors in accident and emergency (A&E) departments is a cause for concern. Studies have shown that junior doctors, including senior house officers, often misdiagnose significant trauma abnormalities on x-ray. One study found that the mean score for the abnormal x-rays for all the junior doctors was only 32% correct, and even those with more experience had a mean score of only 48%. In contrast, senior doctors had a significantly higher mean score of 80% (PUBMED:9315930). This suggests that junior doctors may not be safe to work on their own in A&E departments without adequate training, supervision, and fail-safe mechanisms to ensure patient care and improve risk management.
Another study in Ghana showed that both radiographers and junior doctors could improve their accuracies in radiographic abnormality detection after undergoing a well-structured training program. Post-training, the mean accuracy, sensitivity, and specificity of both groups significantly improved, with no significant differences between the radiographers and doctors after the training event (PUBMED:31301784).
The literature also indicates that the ability of A&E doctors and nurses to interpret trauma radiographs accurately varies markedly, and no published study has established an appropriate level of accuracy that should be achieved to demonstrate satisfactory competence in the interpretation of radiographic images (PUBMED:12956672).
In summary, the evidence suggests that junior doctors often struggle with the accurate interpretation of trauma radiographs, which can have serious implications for patient care. However, targeted training programs can significantly improve their diagnostic abilities, underscoring the importance of proper education and support for junior medical staff in A&E departments.
Instruction: Stroke Hormones and Outcomes in Women (SHOW) study: is the 'healthy-user effect' valid for women after stroke?
Abstracts:
abstract_id: PUBMED:19702448
Stroke Hormones and Outcomes in Women (SHOW) study: is the 'healthy-user effect' valid for women after stroke? Aims: To determine differences in stroke severity and outcomes in women using hormone therapy (HT) versus nonusers at baseline, and to investigate whether there is a 'healthy-user effect' of HT in women with stroke.
Materials And Methods: A total of 133 women over the age of 18 years with acute ischemic stroke were enrolled and categorized based on their use of HT at the time of stroke. Initial stroke severity was assessed at admission, and disability and activities of daily living were assessed at 6-month intervals for 2 years.
Results: A total of 30% of the cohort were HT users. HT users were less likely to have hypertension and were leaner than nonusers. There were no differences in initial stroke severity, mortality or any of the functional status outcomes based on HT use at baseline.
Conclusion: There did appear to be a healthy-user effect for HT users at baseline, but following stroke, there were no significant differences in long-term outcomes.
abstract_id: PUBMED:29855724
Women and Migraine: the Role of Hormones. Purpose Of Review: Migraine is a debilitating disease, that is encountered in countless medical offices every day and since it is highly prevalent in women, it is imperative to have a clear understanding of how to manage migraine. There is a growing body of evidence regarding the patterns we see in women throughout their life cycle and how we approach migraine diagnosis and treatment at those times.
Recent Findings: New guidelines regarding safety of medication during pregnancy and lactation are being utilized to help guide management decisions in female migraineurs. There is also new data surrounding the risk of stroke in individuals who suffer from migraine with aura. This article seeks to provide an overview of a woman's migraine throughout her lifetime, the impact of hormones and an approach to management.
abstract_id: PUBMED:8665427
Women, hormones and blood pressure. Raised blood pressure is an important risk factor for both coronary artery disease and stroke in women. In terms of exogenous sex hormones, use of premenopausal oral contraceptives has been consistently associated with higher blood pressure levels; both estrogenic and progestogenic components have been implicated. In contrast, a randomized trial has shown no effect of post-menopausal hormone use on blood pressure. Observational studies indicate a protective effect of postmenopausal estrogen use on coronary artery disease. This is probably largely mediated through effects on lipoproteins and not blood pressure; data on post-menopausal estrogen use and stroke risk are less consistent. Treatment trials have demonstrated beneficial effects of lowering blood pressure on cardiovascular disease, particularly regarding stroke in women. The women most likely to benefit from individually-based clinical preventive interventions for cardiovascular disease, such as hypertension treatment or estrogen replacement therapy, are women who have high absolute risk of cardiovascular disease, ie, older women with high risk factor levels with a family or existing history of cardiovascular disease. Nevertheless, the large international variation in rates of cardiovascular disease indicate the large potential for prevention and suggest that most women are likely to benefit from lifestyles conducive to cardiovascular health, that is increasing physical activity, not smoking and following diets low in sodium and saturated fat and high in fruits and vegetables.
abstract_id: PUBMED:35596031
Albumin-to-globulin ratio predicts clinical outcomes of heart failure with preserved ejection fraction in women. Despite advances in medicine, heart failure with preserved ejection fraction (HFpEF) remains an increasing health concern associated with a high mortality rate. Research has shown sex-based differences in the clinical characteristics of patients with HF; however, definitive biomarkers for poor clinical outcomes of HFpEF in women are unavailable. We focused on the albumin-to-globulin ratio (AGR), a biomarker for malnutrition and inflammation and investigated its usefulness as a predictor of clinical outcomes of HFpEF in women. We measured the AGR in consecutive 224 women with HFpEF and 249 men with HFpEF. There were 69 cardiac events in women with HFpEF and 69 cardiac events in men with HFpEF during the follow-up period. The AGR decreased with advancing New York Heart Association functional class in women with HFpEF. Patients were categorized into three groups based on AGR tertiles. Kaplan-Meier analysis showed that among the three groups, the risk for cardiac events and HF-associated rehospitalizations was the highest in the lowest tertile in women with HFpEF. Univariate and multivariate Cox proportional hazard regression analyses showed that after adjustment for confounding risk factors, the AGR was an independent predictor of cardiac events and HF-associated rehospitalizations in women with HFpEF, but not in men with HFpEF. The addition of AGR to the risk factors significantly improved the net reclassification and integrated discrimination indices in women with HFpEF. This is the first study that highlights the significant association between the AGR and the severity and clinical outcomes of HFpEF in women. Addition of AGR to the risk factors improved its prognostic value for clinical outcomes, which indicates that this variable may serve as a useful clinical biomarker for HFpEF in women.
abstract_id: PUBMED:36746378
Stroke in Women: A Review Focused on Epidemiology, Risk Factors, and Outcomes. Stroke is a particularly important issue for women. Women account for over half of all persons who experienced a stroke. The lifetime risk of stroke is higher in women than in men. In addition, women have worse stroke outcomes than men. Several risk factors have a higher association with stroke in women than in men, and women-specific risk factors that men do not have should be considered. This focused review highlights recent findings in stroke epidemiology, risk factors, and outcomes in women.
abstract_id: PUBMED:22482277
Cardiovascular disease in women: implications for improving health outcomes. Objective: To collate data on women and cardiovascular disease in Australia and globally to inform public health campaigns and health care interventions.
Design: Literature review.
Results: Women with acute coronary syndromes show consistently poorer outcomes than men, independent of comorbidity and management, despite less anatomical obstruction of coronary arteries and relatively preserved left ventricular function. Higher mortality and complication rates are best documented amongst younger women and those with ST-segment-elevation myocardial infarction. Sex differences in atherogenesis and cardiovascular adaptation have been hypothesised, but not proven. Atrial fibrillation carries a relatively greater risk of stroke in women than in men, and anticoagulation therapy is associated with higher risk of bleeding complications. The degree of risk conferred by single cardiovascular risk factors and combinations of risk factors may differ between the sexes, and marked postmenopausal changes are seen in some risk factors. Sociocultural factors, delays in seeking care and differences in self-management behaviours may contribute to poorer outcomes in women. Differences in clinical management for women, including higher rates of misdiagnosis and less aggressive treatment, have been reported, but there is a lack of evidence to determine their effects on outcomes, especially in angina. Although enrolment of women in randomised clinical trials has increased since the 1970s, women remain underrepresented in cardiovascular clinical trials.
Conclusions: Improvement in the prevention and management of CVD in women will require a deeper understanding of women's needs by the community, health care professionals, researchers and government.
abstract_id: PUBMED:16807276
Hormones and cardiovascular health in women. Cardiovascular diseases (CVDs) may have their origin before birth: the combination of being small at birth and having an overly rich post-natal diet increases the likelihood of obesity and of acquiring a specific metabolic syndrome in adulthood that carries an increased risk of CVD. The incidence of CVD and mortality is very low in women of reproductive age but rises to a significant level in older women. In this article, we discuss CVD in relation to hormonal contraception, pregnancy and polycystic ovarian syndrome (PCOS) in younger women and menopause in older women. Women with PCOS have a higher risk of diabetes and hypertension, but studies to date have not shown an effect on CVD events. Use of combined hormonal contraception has only small effects on CVD because of the low baseline incidence of myocardial infarction (MI), stroke and venous thromboembolism (VTE) among young women. Women with existing risk factors or existing CVD, however, should consider alternative contraception. In pregnancy, CVD is rare, although, in the West, it now accounts for a significant proportion of maternal mortality as the frequency of obstetrical causes of mortality has substantially declined. The frequency of VTE is 15 per 10,000 during pregnancy and the post-partum period. In older women, menopause causes a slightly higher risk of MI after allowing for age, although there is substantial heterogeneity in the results of studies on menopause and age at menopause and MI. A larger effect might have been expected, because estrogen reduces the risk of developing atherosclerosis in premenopausal women, whereas in post-menopausal women who may have established atherosclerotic disease, estrogen increases the risk of myocardial disease through the effects on plaque stability and clot formation. Recent trial results indicate that hormone treatment in menopause does not favourably affect the risk of MI, stroke or other vascular disease. Thus, prevention of CVD should rely on diet and fitness, low-dose aspirin and treatment of hypertension, hyperglycaemia and hyperlipidaemia.
abstract_id: PUBMED:25795991
Bioidentical hormones, menopausal women, and the lure of the "natural" in U.S. anti-aging medicine. In 2002, the Women's Health Initiative, a large-scale study of the safety of hormone replacement therapy (HRT) for women conducted in the United States, released results suggesting that use of postmenopausal HRT increased women's risks of stroke and breast cancer. In the years that followed, as rates of HRT prescription fell, another hormonal therapy rose in its wake: bioidentical hormone replacement therapy (BHRT). Anti-aging clinicians, the primary prescribers of BHRT, tout it as a safe and effective alternative to treat menopausal symptoms and, moreover, as a preventative therapy for age-related diseases and ailments. Through in-depth interviews with 31 U.S.-based anti-aging clinicians and 25 female anti-aging patients, we analyze attitudes towards BHRT. We illustrate how these attitudes reveal broader contemporary values, discourses, and discomforts with menopause, aging, and biomedicine. The attraction to and promise of BHRT is rooted in the idea that it is a "natural" therapy. BHRT is given both biomedical and embodied legitimacy by clinicians and patients because of its purported ability to become part of the body's "natural" processes. The normative assumption that "natural" is inherently "good" not only places BHRT beyond reproach, but transforms its use into a health benefit. The clinical approach of anti-aging providers also plays a role by validating patients' embodied experiences and offering a "holistic" solution to their symptoms, which anti-aging patients see as a striking contrast to their experiences with conventional biomedical health care. The perceived virtues of BHRT shed light on the rhetoric of anti-aging medicine and a deeply complicated relationship between conventional biomedicine, hormonal technologies, and women's bodies.
abstract_id: PUBMED:36847058
Trends and Outcomes of ST-Segment-Elevation Myocardial Infarction Among Young Women in the United States. Background Although there has been a decrease in the incidence of ST-segment-elevation myocardial infarction (STEMI) in the United States, this trend might be stagnant or increasing in young women. We assessed the trends, characteristics, and outcomes of STEMI in women aged 18 to 55 years. Methods and Results We identified 177 602 women aged 18 to 55 with the primary diagnosis of STEMI from the National Inpatient Sample during the years 2008 to 2019. We performed trend analyses to assess hospitalization rates, cardiovascular disease (CVD) risk factor profiles, and in-hospital outcomes stratified by three age subgroups (18-34, 35-44, and 45-55 years). We found that STEMI hospitalization rates decreased in the overall study cohort from 52 per 100 000 hospitalizations in 2008 to 36 per 100 000 in 2019. This was driven by a decreased proportion of hospitalizations in women aged 45 to 55 years (74.2% to 71.7%; P<0.001). The proportion of STEMI hospitalizations increased in women aged 18-34 (4.7%-5.5%; P<0.001) and 35-44 years (21.2%-22.7%; P<0.001). The prevalence of traditional and non-traditional female-specific or female-predominant CVD risk factors increased in all age subgroups. The adjusted odds of in-hospital mortality in the overall study cohort and age subgroups were unchanged throughout the study period. Additionally, we observed an increase in the adjusted odds of cardiogenic shock, acute stroke, and acute kidney injury in the overall cohort over the study period. Conclusions STEMI hospitalizations are increasing among women aged <45 years, and in-hospital mortality has not changed over the past 12 years in women aged <55 years. Future studies on the optimization of risk assessment and management of STEMI in young women are urgently needed.
abstract_id: PUBMED:16029776
Relating health policy to women's health outcomes. Individuals' social and economic circumstances, including socioeconomic status and medical care availability, are central to health outcomes, particularly for women. These factors are often mediated by governmental policies. This exploratory study found associations between women's health outcomes and state-level policies related to women's health. Outcomes were mortality rates for four leading causes of death for women in the US (heart disease, stroke, lung cancer, and breast cancer), infant mortality, and a mental health outcome variable. State policies on key women's health issues were evaluated on the degree to which they adequately protected women's health. Our regression models accounted for significant variance in mortality rates and substantial variance in the mental health outcome. Policies affecting access to care (Medicaid eligibility and efforts to expand Medicaid) and community (environmental health tracking and violence against women) were significantly associated with mortality outcomes. State health policies should be examined further for their relationship to health outcomes.
Answer: The Stroke Hormones and Outcomes in Women (SHOW) study aimed to determine differences in stroke severity and outcomes in women using hormone therapy (HT) versus nonusers at baseline and to investigate the existence of a 'healthy-user effect' for HT in women with stroke (PUBMED:19702448). The study enrolled 133 women with acute ischemic stroke and categorized them based on their use of HT at the time of stroke. The results showed that 30% of the cohort were HT users, who were less likely to have hypertension and were leaner than nonusers. However, there were no differences in initial stroke severity, mortality, or any of the functional status outcomes based on HT use at baseline (PUBMED:19702448).
The conclusion of the SHOW study was that although there appeared to be a healthy-user effect for HT users at baseline, following a stroke, there were no significant differences in long-term outcomes between HT users and nonusers (PUBMED:19702448). This suggests that the 'healthy-user effect' observed at baseline did not translate into better stroke outcomes for women who were on hormone therapy at the time of their stroke.
Instruction: Quantification of vaginal support: are continuous summary scores better than POPQ stage?
Abstracts:
abstract_id: PUBMED:20728072
Quantification of vaginal support: are continuous summary scores better than POPQ stage? Objective: This analysis compared 3 continuous variables as summary support loss (SL) scores with pelvic organ prolapse (POP) quantification (POPQ) ordinal stages.
Study Design: We used pooled baseline data from 1141 subjects in 3 randomized trials (CARE, n = 322; OPUS, n = 380; ATLAS, n = 439) to test 3 SL measures. The relative responsiveness was assessed using the standardized response mean of 2-year outcome data from the CARE trial.
Results: Each SL measure was strongly correlated with POPQ ordinal staging; the single most distal POPQ point had the strongest correlation. Improvements in anatomic support were weakly correlated with improvements in POP Distress Inventory (r = 0.17-0.24; P < .01 for each) but not with changes in POP Impact Questionnaire for all measures of SL or POPQ stage.
Conclusion: While continuous, single-number summary measures compared favorably to the ordinal POPQ staging system, the single most distal POPQ point may be preferable to POPQ ordinal stages for summarizing or comparing group data.
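The standardized response mean (SRM) used above to compare responsiveness is simply the mean change score divided by the standard deviation of the change scores; a minimal sketch with hypothetical change scores (not CARE trial data) follows.

import numpy as np

# Hypothetical change scores in a support-loss measure between baseline and follow-up
change = np.array([-2, -1, -3, 0, -2, -1, -4, -2], dtype=float)
srm = change.mean() / change.std(ddof=1)  # larger |SRM| indicates a more responsive measure
print(round(srm, 2))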
abstract_id: PUBMED:25489146
Comparative study to evaluate the intersystem association and reliability between standard pelvic organ prolapse quantification system and simplified pelvic organ prolapse scoring system. Purpose: The purpose of this study was to determine the association between the standard pelvic organ prolapse quantification (POPQ) classification system and the simplified pelvic organ prolapse (S-POP) classification system.
Method: This was an observational study in which 100 subjects with pelvic floor disorder symptoms, whose average age was 60 ± 10 years, underwent examination with both systems (the POPQ classification system and the S-POP classification system) at Safdarjung hospital. The examinations were performed by four gynecologists (two specialists and two resident doctors) in a prospective randomized design, blinded to each other's findings. Data were compared using appropriate statistics.
Results: The weighted kappa statistics for the intersystem reliability of the S-POP classification system compared with the standard POPQ classification system were 0.82 for the overall stage; 0.83 and 0.86 for the anterior and posterior vaginal walls, respectively; 0.81 for the apex/vaginal cuff; and 0.89 for the cervix. All these results demonstrate significant agreement between the two systems.
Conclusion: There is almost perfect intersystem agreement between the S-POP classification system and the standard POPQ classification system in respect of the overall stage as well as each point within the same system.
abstract_id: PUBMED:15662489
A new vaginal speculum for pelvic organ prolapse quantification (POPQ). The purposes of this study were to introduce a new vaginal speculum, describe the technique of using the new speculum in identifying and measuring the severity of pelvic organ prolapse (POP), and present results of a pilot study comparing the new speculum to the conventional instruments used in performing POP quantification (POPQ). The new speculum has retractable upper and lower blades marked in centimeters. POPQ was performed with a single instrument when using the new speculum and with multiple instruments when using the conventional technique. Twenty-two patients underwent POPQ: 11 using the new speculum and 11 using conventional instruments. The duration of the procedure and the level of discomfort were assessed. The POPQ method using the new speculum is described. Preliminary experience with the new speculum showed that the length of examination is significantly shorter (p<0.001) and the comfort level is better than with the conventional technique (p=0.088). A new vaginal speculum with adjustable blades simplifies POPQ. Preliminary testing suggests potential savings in procedure time and reduction in patient discomfort.
abstract_id: PUBMED:36105883
Identifying and correcting for misspecifications in GWAS summary statistics and polygenic scores. Publicly available genome-wide association studies (GWAS) summary statistics exhibit uneven quality, which can impact the validity of follow-up analyses. First, we present an overview of possible misspecifications that come with GWAS summary statistics. Then, in both simulations and real-data analyses, we show that additional information such as imputation INFO scores, allele frequencies, and per-variant sample sizes in GWAS summary statistics can be used to detect possible issues and correct for misspecifications in the GWAS summary statistics. One important motivation for us is to improve the predictive performance of polygenic scores built from these summary statistics. Unfortunately, owing to the lack of reporting standards for GWAS summary statistics, this additional information is not systematically reported. We also show that using well-matched linkage disequilibrium (LD) references can improve model fit and translate into more accurate prediction. Finally, we discuss how to make polygenic score methods such as lassosum and LDpred2 more robust to these misspecifications to improve their predictive power.
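At its core, a polygenic score of the kind discussed above is a weighted sum of effect-allele dosages, with the weights taken from GWAS summary statistics; real pipelines such as lassosum or LDpred2 additionally adjust for linkage disequilibrium and quality-control the summary statistics (INFO scores, allele frequencies, per-variant sample sizes), as the abstract discusses. The Python sketch below is a minimal, hypothetical illustration of the raw weighted sum only.

import numpy as np

betas = np.array([0.12, -0.05, 0.08, 0.03])  # hypothetical per-variant effect sizes from GWAS summary stats
dosages = np.array([                         # hypothetical effect-allele dosages (0-2), individuals x variants
    [2, 1, 0, 1],
    [1, 0, 2, 2],
    [0, 2, 1, 0],
], dtype=float)

pgs = dosages @ betas                        # one raw polygenic score per individual
print(pgs)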
abstract_id: PUBMED:36207709
Prevalence and surgical outcomes of stage 3 and 4 pelvic organs prolapse in Jimma university medical center, south west Ethiopia. Background: Pelvic organ prolapse (POP) affects about half of the women and affects their quality of life. The current study is, therefore, aimed at determining the prevalence and surgical outcomes of severe stage POP at Jimma University medical center from November 2016 to May 2018.
Method: A hospital-based cross-sectional study was conducted on all patients with stage 3 and 4 POP who were admitted and had surgery. Data were collected from patients' charts and logbooks, which were completed from admission until discharge. The Simplified POPQ (S-POPQ) was used to stage the prolapse at admission, at discharge, and at the three-month follow-up.
Results: Among the 92 patients analyzed, POP accounted for 10.6% of all gynecologic admissions and 43.8% of all gynecologic surgeries. The mean age of patients was 46 (± 12) years, and nearly 34% of the patients had stage 3 and 66% had stage 4 POP. Based on the type of prolapse, 93.5% of patients had stage 3 or higher anterior vaginal wall prolapse (AVWP) and apical prolapse, while 57.6% had stage 3 or higher posterior vaginal wall prolapse. Of the 72 patients who had anterior colporrhaphy, 58.7% had anterior colporrhaphy with colposuspension. Of the 83 patients who had apical suspension, 48.2%, 39.8%, and 12% had uterosacral, sacrospinous, and Richardson procedures, respectively. Ninety-seven percent of patients had stage 0 or 1 POP at discharge, while 90% of the 20 patients who returned for follow-up at three months had stage 0 or 1 POP. Eight patients had surgery-related complications: bladder injury, urinary retention, hemorrhage during SSLF, and rectal injury.
Conclusion: The prevalence of pelvic organ prolapse is high, and the majority of patients presented with advanced-stage prolapse after a long duration of symptoms and associated problems. The surgical techniques used resulted in high success rates of 97% at discharge and 90% at the three-month follow-up. Therefore, awareness-creation activities are important to facilitate early presentation for treatment and improve quality of life; the current surgical technique practiced in this setting, native tissue vaginal repair (NTVR), has achieved good success.
abstract_id: PUBMED:31911524
Preoperative POPQ versus Simulated Apical Support as a Guideline for Anterior or Posterior Repair at the Time of Transvaginal Apical Suspension (PREPARE trial): study protocol for a randomised controlled trial. Introduction: Transvaginal reconstructive surgery is the mainstay of treatment for symptomatic pelvic organ prolapse. Although adequate support for the vaginal apex is considered essential for durable surgical repair, the optimal management of anterior and posterior vaginal wall prolapse in women undergoing transvaginal apical suspension remains unclear. The objective of this trial is to compare surgical outcomes of pelvic organ prolapse quantification (POPQ)-based surgery with outcomes of simulated apical support-based surgery for anterior or posterior vaginal wall prolapse at the time of transvaginal apical suspension.
Methods And Analysis: This is a randomised, multicentre, non-inferiority trial. While women who are assigned to the POPQ-based surgery group will undergo anterior or posterior colporrhaphy for all stage 2 or greater anterior or posterior vaginal prolapse, those assigned to simulated apical support-based surgery will receive anterior or posterior colporrhaphy only for the prolapse unresolved under simulated apical support. The primary outcome measure is the composite surgical success, defined as the absence of anatomical (anterior or posterior vaginal descent beyond the hymen or descent of the vaginal apex beyond the half-way point of vagina) or symptomatic (the presence of vaginal bulge symptoms) recurrence or retreatment for prolapse by either surgery or pessary, at 2 years after surgery. Secondary outcomes include the rates of anterior or posterior colporrhaphy, the changes in anatomical outcomes, condition-specific quality of life and sexual function, perioperative outcomes and adverse events.
Ethics And Dissemination: This study was approved by the institutional review board of each participating centre (Seoul National University College of Medicine/Seoul National University Hospital, Chonnam National University Hospital, Seoul St. Mary's Hospital, International St. Mary's Hospital). The results of the study will be published in peer-reviewed journals, and the findings will be presented at scientific meetings.
Trial Registration Number: NCT03187054.
abstract_id: PUBMED:17120177
Correlation of pelvic organ prolapse quantification system scores with obstetric parameters and lower urinary tract symptoms in primiparae postpartum. This study investigated the correlation between results of the pelvic organ prolapse quantification (POPQ) system at 3 days and at 2 months postpartum with obstetric parameters and lower urinary tract symptoms (LUTS) in 125 primiparae with vaginal delivery. The clinical characteristics, prevalence of pregnancy-related LUTS, and POPQ scores were evaluated. Regarding the relationship of obstetric parameters with POPQ scoring, the gh was found positively correlated with the body mass index and vaginal laceration at 2 months postpartum. The POPQ evaluation did not find the LUTS to be significantly related to the prolapse score. The mean scores of points C and D were significantly increased, and gh, pb, and tvl were significantly decreased between the initial and 2-month follow-up scores. Our results revealed that a decrease in vaginal size is the principal change during the first 2 months postpartum and that with the exception of gh, neither the obstetric parameters nor the LUTS were associated with the POPQ scoring system.
abstract_id: PUBMED:20936258
The inter-system association between the simplified pelvic organ prolapse quantification system (S-POP) and the standard pelvic organ prolapse quantification system (POPQ) in describing pelvic organ prolapse. Introduction And Hypothesis: The objective of this study is to determine the association between the POPQ and a simplified version of the POPQ.
Methods: This was an observational study. The subjects with pelvic floor disorder symptoms underwent two exams: a POPQ exam and a simplified POPQ. To compare with the simplified POPQ, vaginal segments of the POPQ exam were defined using points Ba, Bp, C, and D. Primary outcome was the association between the overall ordinal stages from each exam.
Results: One hundred forty-three subjects with a mean age of 56 ± 13 years were included. Twenty-three subjects were status post-hysterectomy. The Kendall's tau-b statistic for overall stage was 0.80; for the anterior vaginal wall the Kendall's tau-b was 0.71; for the posterior vaginal wall segment the Kendall's tau-b was 0.71; for the cervix the Kendall's tau-b was 0.88; and for the posterior fornix/vaginal cuff the Kendall's tau-b was 0.85.
Conclusions: There is substantial association between the POPQ and a simplified version of the POPQ.
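Kendall's tau-b, used above to quantify the inter-system association between the two ordinal staging systems, can be computed with scipy, whose kendalltau function uses the tie-corrected tau-b variant by default; the stages below are hypothetical, not the study's data.

from scipy.stats import kendalltau

# Hypothetical overall prolapse stages assigned by the two systems for ten subjects
popq_stage = [0, 1, 2, 2, 3, 3, 4, 1, 2, 3]
s_pop_stage = [0, 1, 2, 3, 3, 3, 4, 1, 2, 2]

tau_b, p_value = kendalltau(popq_stage, s_pop_stage)
print(round(tau_b, 2), p_value)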
abstract_id: PUBMED:19089783
Relation between vaginal birth and pelvic organ prolapse. Objective: To evaluate the relation between vaginal birth and pelvic organ prolapse quantification (POPQ) stages III and IV prolapse and whether each additional vaginal birth is associated with an increase in pelvic support defects.
Design: Prospective cross-sectional study.
Setting: Gynecology clinic in a University Hospital.
Population: Four hundred and fifty-eight nulliparas and 892 multiparas, including 272 with one, 299 with two and 321 with at least three term vaginal deliveries.
Methods: In a Human Investigation Committee-approved study, the pelvic support of nulliparas and of multiparas who had only had term vaginal deliveries was evaluated for prolapse using the POPQ system.
Main Outcome Measure: 1. Difference in POPQ stages III and IV prolapse between nulliparas and multiparas. 2. Difference in POPQ stage distribution among nulliparas and multiparas who had one, two, and at least three term vaginal deliveries.
Results: Compared with nulliparas, the odds ratios of having a POPQ stage II defect and stage III or IV prolapse associated with multiparity were 2.95 (95% CI: 2.06-4.24) and 1.01 (95% CI: 0.40-2.79), respectively. The prevalence of a POPQ stage II defect among nulliparas and multiparas who had one, two, and at least three term vaginal deliveries was 25% (119/458), 50% (137/272), 66% (198/299), and 69% (220/321), respectively (p<0.001), while 1% (6/458), 1% (4/272), 2% (7/299), and 2% (8/321), respectively, had POPQ stage III or IV prolapse (p=0.618).
Conclusions: Vaginal birth is not associated with POPQ stages III and IV prolapse, but it is associated with an increase in POPQ stage II defect.
abstract_id: PUBMED:15223162
Influence of spontaneous and instrumental vaginal delivery on objective measures of pelvic organ support: assessment with the pelvic organ prolapse quantification (POPQ) technique and functional cine magnetic resonance imaging. Objective: To determine the influence of spontaneous and instrumented vaginal delivery on objective measures of pelvic organ support.
Study Design: Prospective study at a university hospital with two study groups: vaginal spontaneous delivery (n = 26) and vacuum extraction (n = 49). Control group consisted of healthy nulliparous volunteers (n = 20). Participants underwent pelvic organ support evaluation by use of the pelvic organ prolapse quantification (POPQ) examination and postpartum functional cine magnetic resonance imaging (MRI).
Results: Significant differences for individual POPQ component measurements were noted for points Aa and Ba, TVL, and GH (spontaneous delivery versus control) and in addition for Ap, Bp, and D (vacuum extraction versus control). Significant differences for MRI measurements were observed for the position of bladder base, bladder neck, posterior fornix of the vagina, anorectal junction, hiatus perimeter and depth of rectocele.
Conclusions: Considerable changes in pelvic organ support after vaginal delivery can be demonstrated by a reproducible and reliable clinical classification system (POPQ) and by functional cine magnetic resonance imaging.
Answer: The analysis of various studies suggests that continuous summary scores may be preferable to the POPQ ordinal staging system for quantifying vaginal support. One study found that three continuous variables as summary support loss (SL) scores were strongly correlated with POPQ ordinal staging, with the single most distal POPQ point having the strongest correlation. However, improvements in anatomic support were weakly correlated with improvements in POP Distress Inventory, indicating that while continuous summary measures compared favorably to the POPQ staging system, the single most distal POPQ point might be more suitable for summarizing or comparing group data (PUBMED:20728072).
Another study comparing the standard POPQ system with the simplified pelvic organ prolapse (S-POP) classification system found almost perfect intersystem agreement between the two systems, suggesting that simplified systems can reliably represent the standard POPQ system (PUBMED:25489146).
A new vaginal speculum designed to simplify the POPQ procedure showed potential savings in procedure time and reduction in patient discomfort, indicating that innovations in tools and techniques can improve the quantification process (PUBMED:15662489).
In summary, continuous summary scores and simplified systems may offer advantages over the traditional POPQ staging system, such as stronger correlations with specific POPQ points, better summarization for group data, and potentially greater efficiency and patient comfort during the assessment process. However, the choice between continuous summary scores and POPQ stage may depend on the specific context and goals of the assessment. |
Instruction: Physicians' recommendations for mammography: do tailored messages make a difference?
Abstracts:
abstract_id: PUBMED:8279610
Physicians' recommendations for mammography: do tailored messages make a difference? Objectives: Message tailoring, based on individual needs and circumstances, is commonly used to enhance face-to-face patient counseling. Only recently has individual tailoring become feasible for printed messages. This study sought to determine whether printed tailored recommendations addressing women's specific screening and risk status and perceptions about breast cancer and mammography are more effective than standardized printed recommendations.
Methods: Computer-assisted telephone interviews were conducted with 435 women, aged 40 to 65 years, who had visited family practice groups within the previous 2 years. Subjects were randomly allocated to receive individually tailored or standardized mammography recommendation letters mailed from physicians to patients' homes. Follow-up interviews were conducted 8 months later.
Results: Tailored letter recipients were more likely to remember and to have read more of their letters than standardized version recipients. After controlling for baseline status, tailored letter receipt was associated with more favorable follow-up mammography status for women with incomes below $26,000 and for Black women.
Conclusions: Tailored messages are a more effective medium for physicians' mammography recommendations; tailoring may be especially important for women of low socioeconomic status.
abstract_id: PUBMED:38390218
Rural adults' perceptions of nutrition recommendations for cancer prevention: Contradictory and conflicting messages. Despite robust evidence linking alcohol, processed meat, and red meat to colorectal cancer (CRC), public awareness of nutrition recommendations for CRC prevention is low. Marginalized populations, including those in rural areas, experience high CRC burden and may benefit from culturally tailored health information technologies. This study explored perceptions of web-based health messages iteratively in focus groups and interviews with 48 adults as part of a CRC prevention intervention. We analyzed transcripts for message perceptions and identified three main themes with subthemes: (1) Contradictory recommendations, between the intervention's nutrition risk messages and recommendations for other health conditions, from other sources, or based on cultural or personal diets; (2) reactions to nutrition risk messages, ranging from aversion (e.g., "avoid alcohol" considered "preachy") to appreciation, with suggestions for improving messages; and (3) information gaps. We discuss these themes, translational impact, and considerations for future research and communication strategies for delivering web-based cancer prevention messages.
abstract_id: PUBMED:32673238
A Retrospective Analysis of Provider-to-Patient Secure Messages: How Much Are They Increasing, Who Is Doing the Work, and Is the Work Happening After Hours? Background: Patient portal registration and the use of secure messaging are increasing. However, little is known about how the work of responding to and initiating patient messages is distributed among care team members and how these messages may affect work after hours.
Objective: This study aimed to examine the growth of secure messages and determine how the work of provider responses to patient-initiated secure messages and provider-initiated secure messages is distributed across care teams and across work and after-work hours.
Methods: We collected secure messages sent from providers from January 1, 2013, to March 15, 2018, at Mayo Clinic, Rochester, Minnesota, both in response to patient secure messages and provider-initiated secure messages. We examined counts of messages over time, how the work of responding to messages and initiating messages was distributed among health care workers, messages sent per provider, messages per unique patient, and when the work was completed (proportion of messages sent after standard work hours).
Results: Portal registration for patients having clinic visits increased from 33% to 62%, and increasingly more patients and providers were engaged in messaging. Provider message responses to individual patients increased significantly in both primary care and specialty practices. Message responses per specialty physician provider increased from 15 responses per provider per year to 53 responses per provider per year from 2013 to 2018, resulting in a 253% increase. Primary care physician message responses increased from 153 per provider per year to 322 from 2013 to 2018, resulting in a 110% increase. Physicians, nurse practitioners, physician assistants, and registered nurses, all contributed to the substantial increases in the number of messages sent.
Conclusions: Provider-sent secure messages at a large health care institution have increased substantially since implementation of secure messaging between patients and providers. The effort of responding to and initiating messages to patients was distributed across multiple provider categories. The percentage of message responses occurring after hours showed little substantial change over time compared with the overall increase in message volume.
abstract_id: PUBMED:27694109
Messages to Motivate Human Papillomavirus Vaccination: National Studies of Parents and Physicians. Background: Physician communication about human papillomavirus (HPV) vaccine is a key determinant of uptake. To support physician communication, we sought to identify messages that would motivate HPV vaccination.
Methods: From 2014 to 2015, we surveyed national samples of parents of adolescents ages 11 to 17 (n = 1,504) and primary care physicians (n = 776). Parents read motivational messages, selected from nine longer messages developed by the Centers for Disease Control and Prevention and six brief messages developed by the study team. Parents indicated whether each message would persuade them to get HPV vaccine for their adolescents. Physicians read the brief messages and indicated whether they would use them to persuade parents to get HPV vaccine for 11- to 12-year-old children.
Results: The highest proportion of parents (65%) and physicians (69%) found this brief message to be persuasive: "I strongly believe in the importance of this cancer-preventing vaccine for [child's name]." Parents disinclined to vaccinate were most receptive to messages with information about HPV infection being common, cancers caused by HPV, and HPV vaccine effectiveness. Parents' endorsement did not vary by race/ethnicity, education, child age, or child sex (all P > 0.05).
Conclusions: Our national surveys of parents and physicians identified messages that could motivate HPV vaccination, even among parents disinclined to vaccinate their children. The lack of difference across demographic subgroups in parental endorsement may suggest that these messages can be used across these subgroups.
Impact: Our findings support physicians' use of these messages with parents to help motivate uptake of this important cancer-preventing vaccine. Cancer Epidemiol Biomarkers Prev; 25(10); 1383-91. ©2016 AACR.
abstract_id: PUBMED:10384580
Upgrading clinical decision support with published evidence: what can make the biggest difference? Background: To enhance clinical decision support, presented messages are increasingly supplemented with information from the medical literature. The goal of this study was to identify types of evidence that can lead to the biggest difference.
Methods: Seven versions of a questionnaire were mailed to randomly selected active family practice physicians and internists across the United States. They were asked about the perceived values of evidence from randomized controlled trials, locally developed recommendations, no evidence, cost-effectiveness studies, expert opinion, epidemiologic studies, and clinical studies. Analysis of variance and pairwise comparisons were used for statistical testing.
Results: Seventy-six (52%) physicians responded. On a Likert scale from one to six, randomized controlled clinical trial was the highest rated evidence (mean 5.07, SD +/- 1.14). Such evidence was significantly superior to locally developed recommendations and no evidence at all (P < .05). The interaction was also strong between the types of evidence and clinical areas (P = .0001).
Conclusion: While most health care organizations present data without interpretation or simply try to enforce locally developed recommendations, such approaches appear to be inferior to techniques of reporting data with pertinent controlled evidence from the literature. Investigating physicians' perceptions is likely to benefit the design of computer generated messages.
abstract_id: PUBMED:31707838
The impact of physicians' recommendations on treatment preference and attitudes: a randomized controlled experiment on shared decision-making. Making decisions based on their own evaluation of relevant information and beliefs is very challenging for patients. Many patients feel that they lack the knowledge to make a decision and expect a recommendation by their physician. We conducted an experimental study to examine the impact of physicians' recommendations on the decision-making process. N = 194 medical laypeople were placed in a hypothetical scenario where they suffered from a cruciate ligament rupture and were faced with the decision about a treatment (surgery or physiotherapy). In a 3 × 2 between-group design we investigated the impact of physicians' recommendations (for surgery, for physiotherapy, no recommendation) and reasoning style (scientific, narrative) on treatment preference, certainty and satisfaction regarding treatment preference, and attitudes. We found that the recommendation had a significant influence on treatment preference and attitudes toward both treatments. Additionally, we found a significant increase in certainty and satisfaction after the intervention, independently of whether they received a recommendation. This finding suggested that a recommendation was not required to strengthen participants' confidence in their decision. There were no effects of reasoning style. We discuss the implications and suggest that physicians should be careful with recommendations in situations in which patients' preferences are important.
abstract_id: PUBMED:36973573
Inpatient Understanding of Their Care Team and Receipt of Mixed Messages: a Two-Site Cross-Sectional Study. Background: Patient understanding of their care, supported by physician involvement and consistent communication, is key to positive health outcomes. However, patient and care team characteristics can hinder this understanding.
Objective: We aimed to assess inpatients' understanding of their care and their perceived receipt of mixed messages, as well as the associated patient, care team, and hospitalization characteristics.
Design: We administered a 30-item survey to inpatients between February 2020 and November 2021 and incorporated other hospitalization data from patients' health records.
Participants: Randomly selected inpatients at two urban academic hospitals in the USA who were (1) admitted to general medicine services and (2) on or past the third day of their hospitalization.
Main Measures: Outcome measures include (1) knowledge of main doctor and (2) frequency of mixed messages. Potential predictors included mean notes per day, number of consultants involved in the patient's care, number of unit transfers, number of attending physicians, length of stay, age, sex, insurance type, and primary race.
Key Results: A total of 172 patients participated in our survey. Most patients were unaware of their main doctor, an issue related to more daily interactions with care team members. Twenty-three percent of patients reported receiving mixed messages at least sometimes, most often between doctors on the primary team and consulting doctors. However, the likelihood of receiving mixed messages decreased with more daily interactions with care team members.
Conclusions: Patients were often unaware of their main doctor, and almost a quarter perceived receiving mixed messages about their care. Future research should examine patients' understanding of different aspects of their care, and the nature of interactions that might improve clarity around who's in charge while simultaneously reducing the receipt of mixed messages.
abstract_id: PUBMED:29201949
Prevention Messages in Parent-Infant Bed-Sharing: Message Source, Credibility, and Effectiveness. Objective. Despite educational outreach, bed-sharing prevalence is rising. Mothers' and fathers' bed-sharing practices, prevention message source, perceived source credibility, and the effectiveness of the prevention message were evaluated. Methods. Data were collected from 678 community parents via an online survey. Results were analyzed using descriptive statistics and phi tests. Results. Bed-sharing reasons focused on comfort and ease. Mothers were more likely to receive prevention messages from individual professionals or organizations, whereas fathers were more likely to hear prevention messages from spouses/coparents and grandfathers. Physicians were the most common source, and physicians and grandmothers were rated as the most credible and effective. Conclusions. Prevention message source varies between mothers and fathers, highlighting the need for continued research with fathers. Grandmothers and physicians are effective and credible sources of prevention messages. Although less frequent, prevention messages from grandmothers were most effective. There was no evidence of effective messages from educational campaigns.
abstract_id: PUBMED:35049263
Understanding physicians' work via text analytics on EHR inbox messages. Objectives: To develop a text analytics methodology to analyze in a refined manner the drivers of primary care physicians' (PCPs') electronic health record (EHR) inbox work.
Study Design: This study used 1 year (2018) of EHR inbox messages obtained from the Epic system for 184 PCPs from 18 practices.
Methods: An advanced text analytics latent Dirichlet allocation model was trained on physicians' inbox message texts to identify the different work themes managed by physicians and their relative share of workload across physicians and clinics.
Results: The text analytics model identified 30 different work themes rolled up into 2 categories of medical and administrative tasks. We found that 50.8% (range across physicians, 34.5%-61.9%) of the messages were concerned with medical issues and 34.1% (range, 23.0%-48.9%) focused on administrative matters. More specifically, 13.6% (range, 7.1%-22.6%) of the messages involved ambiguous diagnosis issues, 13.2% (range, 6.9%-18.8%) involved condition management issues, 6.7% (range, 1.9%-13.4%) involved identified symptoms issues, 9.5% (range, 5.2%-28.9%) involved paperwork issues, and 17.6% (range, 9.3%-27.1%) involved scheduling issues. Additionally, there was significant variability among physicians and practices.
Conclusions: This study demonstrated that advanced text analytics provide a reliable data-driven methodology to understand the individual physician's EHR inbox management work with a significantly greater level of detail than previous approaches. This methodology can inform decision makers on appropriate workflow redesign to eliminate unnecessary workload on PCPs and to improve cost and quality of care, as well as staff work satisfaction.
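The text-analytics approach described above (a latent Dirichlet allocation model over inbox message texts) can be prototyped with standard libraries. The snippet below is a minimal sketch on invented messages with an arbitrary topic count; it is not the study's pipeline, and the message texts, topic number, and preprocessing are all assumptions.

```python
# Minimal LDA topic-modeling sketch (invented messages; not the study's data or configuration).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

messages = [
    "patient requests refill of blood pressure medication",
    "please reschedule the follow up appointment for next week",
    "lab results show elevated glucose please discuss management",
    "form needs physician signature for insurance paperwork",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(messages)          # document-term count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(doc_term)               # per-message topic proportions

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"theme {k}: {', '.join(top_terms)}")
```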
abstract_id: PUBMED:1604432
Do physicians make too much money? The average net income of physicians in the USA is more than four times the average net income of people working in all domestic industries in the USA. When critics suggest that physicians make too much money, defenders typically appeal to the following four prominent principles of economic justice: Aristotle's Income Principle, the Free Market Principle, the Utilitarian Income Principle, and Rawls' Difference Principle. I shall show that no matter which of these four principles is assumed, the present high incomes of physicians cannot be defended.
Answer: Yes, tailored messages do make a difference in physicians' recommendations for mammography. A study found that printed tailored recommendations addressing women's specific screening and risk status and perceptions about breast cancer and mammography were more effective than standardized printed recommendations. Tailored letter recipients were more likely to remember and to have read more of their letters than recipients of standardized versions. After controlling for baseline status, receipt of a tailored letter was associated with more favorable follow-up mammography status for women with incomes below $26,000 and for Black women. This suggests that tailored messages are a more effective medium for physicians' mammography recommendations, and tailoring may be especially important for women of low socioeconomic status (PUBMED:8279610). |
Instruction: Are first ventilatory threshold and 6-minute walk test heart rate interchangeable?
Abstracts:
abstract_id: PUBMED:34805026
Agreement between heart rate at first ventilatory threshold on treadmill and at 6-min walk test in coronary artery disease patients on β-blockers treatment. The purpose of this study was to verify the accuracy of the agreement between heart rate at the first ventilatory threshold (HRVT1) and heart rate at the end of the 6-min walk test (HR6MWT) in coronary artery disease (CAD) patients on β-blockers treatment. This was a cross-sectional study with stable CAD patients, which performed a cardiopulmonary exercise test (CPET) on a treadmill and a 6-min walk test (6MWT) on nonconsecutive days. The accuracy of agreement between HRVT1 and HR6MWT was evaluated by Bland-Altman analysis and Lin's concordance correlation coefficient (rc), mean absolute percentage error (MAPE), and standard error of estimate (SEE). Seventeen stable CAD patients on β-blockers treatment (male, 64.7%; age, 61±10 years) were included in data analysis. The Bland-Altman analysis revealed a negative bias of -0.41±6.4 bpm (95% limits of agreements, -13 to 12.2 bpm) between HRVT1 and HR6MWT. There was acceptable agreement between HRVT1 and HR6MWT (rc=0.84; 95% confidence interval, 0.63 to 0.93; study power analysis=0.79). The MAPE of the HR6MWT was 5.1% and SEE was 6.6 bpm. The ratio HRVT1/HRpeak and HR6MWT/HRpeak from CPET were not significantly different (81%±5% vs. 81%±6%, P=0.85); respectively. There was a high correlation between HRVT1 and HR6MWT (r=0.85, P<0.0001). Finally, the results of the present study demonstrate that there was an acceptable agreement between HRVT1 and HR6MWT in CAD patients on β-blockers treatment and suggest that HR6MWT may be useful to prescribe and control aerobic exercise intensity in cardiac rehabilitation programs.
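For readers unfamiliar with the agreement statistics reported above, the sketch below shows how the Bland-Altman bias and limits of agreement, Lin's concordance correlation coefficient, and the mean absolute percentage error could be computed from paired heart-rate measurements. The arrays are hypothetical and a sample-moment form of Lin's coefficient is used; this is not the study's analysis code.

```python
# Illustrative agreement analysis for paired heart rates (hypothetical values).
import numpy as np

hr_vt1  = np.array([105., 98., 112., 95., 120., 101., 108., 99.])   # HR at first ventilatory threshold
hr_6mwt = np.array([103., 101., 110., 97., 118., 104., 106., 100.]) # HR at end of the 6-min walk test

# Bland-Altman: mean bias and 95% limits of agreement
diff = hr_vt1 - hr_6mwt
bias = diff.mean()
loa_low, loa_high = bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1)

# Lin's concordance correlation coefficient (sample-moment form)
cov = np.cov(hr_vt1, hr_6mwt)[0, 1]
ccc = 2 * cov / (hr_vt1.var(ddof=1) + hr_6mwt.var(ddof=1) + (hr_vt1.mean() - hr_6mwt.mean()) ** 2)

# Mean absolute percentage error of HR6MWT relative to HRVT1
mape = np.mean(np.abs((hr_vt1 - hr_6mwt) / hr_vt1)) * 100

print(f"bias = {bias:.1f} bpm, limits of agreement = [{loa_low:.1f}, {loa_high:.1f}] bpm")
print(f"Lin's CCC = {ccc:.2f}, MAPE = {mape:.1f}%")
```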
abstract_id: PUBMED:25770005
Are first ventilatory threshold and 6-minute walk test heart rate interchangeable? A pilot study in healthy elderlies and cardiac patients. Background: Heart rate (HR) at the ventilatory threshold (VT) is often used to prescribe exercise intensity in cardiac rehabilitation. Some studies have reported no significant difference between HR at VT and HR measured at the end of a 6-min walk test (6-MWT) in cardiac patients. The aim of this work was to assess the potential equivalence between those parameters at the individual level.
Method: Three groups of subjects performed a stress test and a 6-MWT: 22 healthy elderlies (GES, 77 ± 3.7 years), 10 stable coronary artery disease (CAD) patients (GMI, 50.9 ± 4.2 years) and 30 patients with chronic heart failure (GHF, 63.3 ± 10 years). We analyzed the correlation, mean bias, 95% confidence interval (95% CI) of the mean bias and the magnitude of the bias between 6-MWT-HR and VT-HR.
Results: There was a significant difference between 6-MWT-HR and VT-HR in GHF (99.1 ± 8.8 vs 91.6 ± 18.6 bpm, P=0.016) but not in GES and GMI. The correlation between these two parameters was high for GMI (r=0.78, P<0.05) and moderate for GES and GHF (r=0.48 and 0.55, respectively; P<0.05). The 95% CI of the bias was large (>30%) in GES and GHF and acceptable in GMI (8-12%).
Conclusion: 6-MWT-HR and VT-HR do not appear interchangeable at the individual level in healthy elderlies and CHF patients. In CAD patients, further larger studies and/or the development of other walk tests could help in confirming the interest of a training prescription based on walking performance, after an exhaustive study of their cardiometabolic requirements.
abstract_id: PUBMED:29979904
Dynamics of cardiorespiratory response during and after the six-minute walk test in patients with heart failure. Purpose: The six-minute walk test (6MWT) is a useful measure to evaluate exercise capacity with a simple method. The kinetics of oxygen uptake (VO2) throughout constant-load exercise on cardiopulmonary exercise testing (CPX) are composed of three phases, and VO2 kinetics are delayed in patients with heart failure (HF). This study aimed to investigate the kinetics of the cardiorespiratory response during and after the 6MWT according to exercise capacity. Methods: Forty-nine patients with HF performed CPX and the 6MWT. They were divided into two groups by 6MWT distance: 34 patients walked ≥300 m (HF-M), and 15 patients walked <300 m (HF-L). VO2, minute ventilation (VE), breathing frequency, tidal volume, and heart rate, both during and after the 6MWT, were recorded. The time courses of each parameter were compared between the two groups. CPX was used to assess functional capacity and physiological responses. Results: In the HF-M group, VO2 and VE stabilized from 3 min during the 6MWT and recovered within 3 min after the 6MWT ended. In the HF-L group, VO2 and VE stabilized from 4 min during the 6MWT and did not recover within 3 min after the 6MWT ended. On CPX in the HF-M group, peak VO2 and anaerobic threshold were significantly higher, while the relationship between minute ventilation and carbon dioxide production was lower compared with the HF-L group. Conclusion: Patients with HF and lower exercise capacity had slower VO2 and VE kinetics during and after the 6MWT.
abstract_id: PUBMED:24149549
Validity of the modified conconi test for determining ventilatory threshold during on-water rowing. The objectives of this study were to design a field test based on the Conconi protocol to determine the ventilatory threshold of rowers and to test its reliability and validity. A group of sixteen oarsmen completed a modified Conconi test for on-water rowing. The reliability of the detection of the heart rate threshold was evaluated using heart rate breaking point in the Conconi test and retest. Heart rate threshold was detected in 88.8% of cases in the test-retest. The validity of the modified Conconi test was evaluated by comparing the heart rate threshold data acquired with that obtained in a ventilatory threshold test (VT2). No significant differences were found for the values of different intensity parameters i.e. heart rate (HR), oxygen consumption (VO2), stroke rate (SR) and speed (S) between the heart rate threshold and the ventilatory threshold (170.9 ± 6.8 vs. 169.3 ± 6.4 beats·min(-1); 42.0 ± 8.6 vs. 43.5 ± 8.3 ml·kg(-1)·min(-1); 25.8 ± 3.3 vs. 27.0 ± 3.2 strokes·min(-1) and 14.4 ± 0.8 vs. 14.6 ± 0.8 km·h(-1)). The differences in averages obtained in the Conconi test-retest were small with a low standard error of the mean. The reliability data between the Conconi test-retest showed low coefficients of variations (CV) and high intraclass correlation coefficients (ICC). The total errors for the Conconi test-retest are low for the measured variables (1.31 HR, 0.87 VO2, 0.65 SR, and 0.1 S). The Bland-Altman's method for analysis validity showed a strong concordance according to the analyzed variables. We conclude that the modified Conconi test for on-water rowing is a valid and reliable method for the determination of the second ventilatory threshold (VT2). Key points: The modified Conconi test for on-water rowing is a simple and non-invasive method for the determination of anaerobic threshold for on-water rowing. The modified Conconi protocol for rowing was also shown to be a valid protocol for the calculation of the second ventilatory threshold using the ventilatory method. The Bland-Altman analysis suggests an adequate concordance for the modified Conconi test with the ventilatory method for the measurement of the ventilatory threshold.
abstract_id: PUBMED:28421409
Six-Minute Walk Test for Assessing Physical Functional Capacity in Chronic Heart Failure. Purpose Of The Review: The six-minute walk test (6MWT) is a submaximal exercise test for evaluating physical functional capacity. This review aims to report the research on the use of the 6MWT in chronic heart failure (CHF) that has been published in the past 5 years.
Recent Findings: The 6MWT distance does not accurately reflect peak VO2. The minimal clinically important difference in the 6MWT distance, and additional measurements such as heart rate recovery, can assist in the interpretation of the 6MWT distance so that management decisions can be made. Incorporating mobile apps and information technology in measuring the 6MWT distance extends the usefulness of this simple walk test and improves remote monitoring of patients with CHF. The 6MWT is a useful tool in CHF programs. However, interpretation of the 6MWT distance must be done with caution. With the advancement of technology, the 6MWT has the potential to facilitate the monitoring of people living in rural and remote areas.
abstract_id: PUBMED:26743588
Could the two-minute step test be an alternative to the six-minute walk test for patients with systolic heart failure? Background: The consequence of exercise intolerance for patients with heart failure is the difficulty climbing stairs. The two-minute step test is a test that reflects the activity of climbing stairs.
Design: The aim of the study design is to evaluate the applicability of the two-minute step test in an assessment of exercise tolerance in patients with heart failure and the association between the six-minute walk test and the two-minute step test.
Methods: Participants in this study were 168 men with systolic heart failure (New York Heart Association (NYHA) class I-IV). In the study we used the two-minute step test, the six-minute walk test, the cardiopulmonary exercise test and isometric dynamometer armchair.
Results: Patients who performed more steps during the two-minute step test covered a longer distance during the six-minute walk test (r = 0.45). The quadriceps strength was correlated with the two-minute step test and the six-minute walk test (r = 0.61 and r = 0.48). The greater number of steps performed during the two-minute step test was associated with higher values of peak oxygen consumption (r = 0.33), ventilatory response to exercise slope (r = -0.17) and longer time of exercise during the cardiopulmonary exercise test (r = 0.34). Fatigue and leg fatigue were greater after the two-minute step test than the six-minute walk test whereas dyspnoea and blood pressure responses were similar.
Conclusion: The two-minute step test is well tolerated by patients with heart failure and may thus be considered as an alternative for the six-minute walk test.
abstract_id: PUBMED:31590569
Confirming a beneficial effect of the six-minute walk test on exercise confidence in patients with heart failure. Background: Low confidence to exercise is a barrier to engaging in exercise in heart failure patients. Participating in low to moderate intensity exercise, such as the six-minute walk test, may increase exercise confidence.
Aim: To compare the effects of a six-minute walk test with an educational control condition on exercise confidence in heart failure patients.
Methods: This was a prospective, quasi-experimental design whereby consecutive adult patients attending an out-patient heart failure clinic completed the Exercise Confidence Scale prior to and following involvement in the six-minute walk test or an educational control condition.
Results: Using a matched-pairs, mixed-model design (n=60; 87% male; mean age=58.87±13.16 years), we identified a significantly greater improvement in Total exercise confidence (F(1,54)=4.63, p=0.036, partial η2=0.079) and Running confidence (F(1,57)=4.21, p=0.045, partial η2=0.069) following the six-minute walk test compared to the educational control condition. These benefits were also observed after adjustment for age, gender, functional class and depression.
Conclusion: Heart failure patients who completed a six-minute walk test reported greater improvement in exercise confidence than those who read an educational booklet for 10 min. The findings suggest that the six-minute walk test may be used as a clinical tool to improve exercise confidence. Future research should test these results under randomized conditions and examine whether improvements in exercise confidence translate to greater engagement in exercise behavior.
abstract_id: PUBMED:36280976
Assessment of functional ability by anthropometric and physiological parameters during six-minute walk test in healthy children. Objective: To assess the functional ability and vitals of young children using six-minute walk test.
Methods: The analytical cross-sectional study was conducted from October 2019 to January 2020 at public and private schools of Rawalpindi and Islamabad after approval from the ethics review committee of Riphah College of Rehabilitation Sciences, Westridge Campus Rawalpindi, Pakistan, and comprised healthy children aged 7-12 years who were subjected to the six-minute walk test according to standardised guidelines. Data was collected using a semi-structured questionnaire. Anthropometric measurements, distance walked in six minutes, heart rate, respiratory rate, oxygen saturation, and rate of perceived exertion were the outcome variables. Data was analysed using SPSS 26.
Results: Of the 376 subjects, 225(59.8%) were boys and 151(40.2%) were girls. The mean age of the sample was 9.25±1.64 years. Mean distance covered by the children was 482.63±119.76 metres. Public school students performed better than those studying in private schools (p=0.001). The difference in gender terms was non-significant (p=0.926). Significant difference was observed in mean heart rate and respiratory rate post-walk (p<0.05). There was a weak positive correlation of the test with age and height (p<0.001), but not with weight, gender and body mass index (p>0.05).
Conclusion: The level of functional ability of the young students improved with age and was better among those studying at public schools. In addition, anthropometric and physiological parameters influenced test performance.
abstract_id: PUBMED:25162649
Heart rate deflection point relates to second ventilatory threshold in a tennis test. The relationship between heart rate deflection point (HRDP) and the second ventilatory threshold (VT2) has been studied in continuous sports, but never in a tennis-specific test. The aim of the study was to assess the relationships between HRDP and the VT2, and between the maximal test performance and the maximal oxygen uptake (VO2max), in an on-court specific endurance tennis test. Thirty-five high-level tennis players performed a progressive tennis-specific field test to exhaustion to determine HRDP, VT2, and VO2max. Ventilatory gas exchange parameters were continuously recorded by a portable telemetric breath-by-breath gas exchange measurement system. Heart rate deflection point was identified at the point at which the slope values of the linear portion of the time/heart rate (HR) relationship began to decline and was successfully determined in 91.4% of the players. High correlations (r = 0.79-0.96; p < 0.001) between physiological (HR and oxygen uptake [VO2]) and performance (Time, Stage, and Frequency of balls [Ballf]) variables corresponding to HRDP and VT2 were observed. Frequency of balls at the HRDP (BallfHRDP) was detected at 19.8 ± 1.7 shots per minute. Paired t-test showed no significant differences in HR (178.9 ± 8.5 vs. 177.9 ± 8.7 beats·min(-1) for HRDP vs. HRVT2, respectively) at intensities corresponding to HRDP and VT2. Maximal test performance and VO2max were moderately correlated (r = 0.56; p < 0.001). Heart rate deflection point obtained from this specific tennis test can be used to determine the VT2, and the BallfHRDP can be used as a practical performance variable to prescribe on-court specific aerobic training at or near VT2.
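The heart rate deflection point described above is, in essence, a breakpoint in an otherwise linear heart-rate-versus-intensity relationship. One common way to locate such a breakpoint is a brute-force two-segment linear fit, sketched below on synthetic data; this is an assumed illustration, not the detection procedure used in the study.

```python
# Brute-force two-segment linear fit to locate a heart-rate deflection point
# (synthetic data; not the study's detection procedure).
import numpy as np

time = np.arange(1, 21, dtype=float)                               # minutes into a progressive test
hr = np.where(time <= 14, 110 + 5 * time, 180 + 2 * (time - 14))   # slope declines after minute 14
hr = hr + np.random.default_rng(0).normal(0, 1, hr.size)           # add measurement noise

def two_segment_sse(x, y, k):
    """Total squared error when fitting separate lines before and after index k."""
    sse = 0.0
    for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
        coef = np.polyfit(xs, ys, 1)
        sse += np.sum((ys - np.polyval(coef, xs)) ** 2)
    return sse

candidates = range(3, len(time) - 2)          # require a few points in each segment
best_k = min(candidates, key=lambda k: two_segment_sse(time, hr, k))
print(f"estimated deflection point near minute {time[best_k]:.0f}")
```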
abstract_id: PUBMED:27508966
The self-perception of dyspnoea threshold during the 6-min walk test: a good alternative to estimate the ventilatory threshold in chronic obstructive pulmonary disease. To determine and/or adjust exercise training intensity for patients when the cardiopulmonary exercise test is not accessible, the determination of dyspnoea threshold (defined as the onset of self-perceived breathing discomfort) during the 6-min walk test (6MWT) could be a good alternative. The aim of this study was to evaluate the feasibility and reproducibility of self-perceived dyspnoea threshold and to determine whether a useful equation to estimate ventilatory threshold from self-perceived dyspnoea threshold could be derived. A total of 82 patients were included and performed two 6MWTs, during which they raised a hand to signal self-perceived dyspnoea threshold. The reproducibility in terms of heart rate (HR) was analysed. On a subsample of patients (n=27), a stepwise regression analysis was carried out to obtain a predictive equation of HR at ventilatory threshold measured during a cardiopulmonary exercise test estimated from HR at self-perceived dyspnoea threshold, age and forced expiratory volume in 1 s. Overall, 80% of patients could identify self-perceived dyspnoea threshold during the 6MWT. Self-perceived dyspnoea threshold was reproducibly expressed in HR (coefficient of variation=2.8%). A stepwise regression analysis enabled estimation of HR at ventilatory threshold from HR at self-perceived dyspnoea threshold, age and forced expiratory volume in 1 s (adjusted r=0.79, r=0.63, and relative standard deviation=9.8 bpm). This study shows that a majority of patients with chronic obstructive pulmonary disease can identify a self-perceived dyspnoea threshold during the 6MWT. This HR at the dyspnoea threshold is highly reproducible and enable estimation of the HR at the ventilatory threshold.
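The predictive equation described above, estimating HR at the ventilatory threshold from HR at the self-perceived dyspnoea threshold, age, and FEV1, is a multiple linear regression. A minimal fitting sketch on invented values is shown below; the coefficients it produces are illustrative and are not the published equation.

```python
# Minimal multiple-regression sketch for estimating HR at the ventilatory threshold
# (all values invented for illustration; not the published equation).
import numpy as np

# columns: HR at self-perceived dyspnoea threshold (bpm), age (years), FEV1 (L)
X = np.array([
    [102, 64, 1.4],
    [ 96, 70, 1.1],
    [110, 58, 1.8],
    [ 99, 72, 0.9],
    [105, 66, 1.5],
    [ 93, 75, 1.0],
])
y = np.array([100, 93, 108, 95, 103, 90])   # HR at the ventilatory threshold on CPET (bpm)

# ordinary least squares with an intercept term
X_design = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X_design, y, rcond=None)
intercept, b_hr, b_age, b_fev1 = coef
print(f"HR_VT = {intercept:.1f} + {b_hr:.2f}*HR_dyspnoea + {b_age:.2f}*age + {b_fev1:.2f}*FEV1")
```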
Answer: The interchangeability of the first ventilatory threshold (VT1) and the heart rate measured at the end of a 6-minute walk test (6MWT-HR) varies among different patient populations and cannot be universally applied.
In a study involving stable coronary artery disease (CAD) patients on β-blockers, there was an acceptable agreement between HR at VT1 and HR6MWT, suggesting that HR6MWT may be useful to prescribe and control aerobic exercise intensity in cardiac rehabilitation programs (PUBMED:34805026). However, another study indicated that 6MWT-HR and VT1-HR do not appear interchangeable at the individual level in healthy elderlies and chronic heart failure (CHF) patients. In CAD patients, the correlation between the two parameters was high, but the study suggested that further research is needed to confirm the interest of a training prescription based on walking performance (PUBMED:25770005).
In patients with heart failure, the kinetics of the cardiorespiratory response during and after the 6MWT were found to be slower in those with lower exercise capacity, indicating that the 6MWT may not accurately reflect peak VO2 in this population (PUBMED:29979904; PUBMED:28421409). Additionally, the 6MWT has been shown to be a useful tool in CHF programs, but the interpretation of the 6MWT distance must be done with caution (PUBMED:28421409).
For systolic heart failure patients, the two-minute step test was suggested as an alternative to the 6MWT, as it is well tolerated and may reflect the activity of climbing stairs, which is relevant to the exercise intolerance experienced by these patients (PUBMED:26743588).
In summary, while there may be some level of agreement between VT1 and 6MWT-HR in certain populations, such as stable CAD patients on β-blockers, the two measures are not universally interchangeable across all groups, particularly in healthy elderlies and CHF patients. Further research is needed to explore the potential of alternative tests and to confirm the utility of 6MWT-HR for exercise prescription in different patient populations. |
Instruction: Is there association between changes in eGFR value and the risk of permanent type of atrial fibrillation?
Abstracts:
abstract_id: PUBMED:24836687
Renal disease and left atrial remodeling predict atrial fibrillation in patients with cardiovascular risk factors. Objectives: In this prospective population-based study, we tested the possible interaction between chronic kidney disease (CKD) and left atrium volume index (LAVI) in predicting incident atrial fibrillation (AF).
Methods: We enrolled 3549 Caucasian subjects, 1829 men and 1720 women, aged 60.7 ± 10.6 years, without baseline AF and thyroid disorders. Echocardiographic left ventricular mass and LAVI were measured. Renal function was calculated by estimated glomerular filtration rate (e-GFR). To test the effect of some clinical confounders on incident AF, we constructed different models including clinical and laboratory parameters. AF diagnosis was made by standard electrocardiogram or 24-h ECG-Holter, hospital discharge diagnoses, and by the all-clinical documentation.
Results: During the follow-up (53.3 ± 18.1 months), 546 subjects developed AF (4.5 events/100 patient-years). Progressors to AF were older, had a higher body mass index, blood pressure, LDL-cholesterol, glucose, cardiac mass, and LAVI, and had lower e-GFR. Hypertension, metabolic syndrome, diabetes, cardiac hypertrophy and CKD were more common among AF cases than controls. In the final Cox regression model, variables that remained significantly associated with AF were: cardiac hypertrophy (HR=1.495, 95% CI=1.215-1.841), renal disease (HR=1.528, 95% CI=1.261-1.851), age (HR=1.586, 95% CI=1.461-1.725) and LAVI (HR=2.920, 95% CI=2.426-3.515). The interaction analysis demonstrated a synergic effect between CKD and cardiac hypertrophy (HR=4.040, 95% CI=2.661-6.133), as well as between CKD and LAVI (HR=4.875, 95% CI=2.699-8.805). The coexistence of all three subclinical organ damages significantly increases the arrhythmic risk (HR=7.185, 95% CI=5.041-10.240).
Conclusions: Our data demonstrate that LAVI and CKD significantly interact in a synergic manner in increasing AF risk.
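The interaction (synergy) analysis reported above corresponds to including a product term in the Cox model. The sketch below shows how such a term could be specified with the lifelines library on simulated data; the variable names, effect sizes, and censoring scheme are all assumptions for illustration, not the authors' analysis.

```python
# Schematic Cox regression with a CKD x LAVI interaction term (simulated data,
# not the study's dataset or covariate set).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 300
ckd = rng.integers(0, 2, n)            # renal disease (eGFR < 60), yes/no
lavi_high = rng.integers(0, 2, n)      # enlarged left atrial volume index, yes/no

# hazard rises with each factor and more than additively when both are present
log_hazard = 0.4 * ckd + 0.6 * lavi_high + 0.5 * ckd * lavi_high
time_to_af = rng.exponential(scale=60 * np.exp(-log_hazard))   # months to incident AF
event = (time_to_af < 60).astype(int)                          # administrative censoring at 60 months

df = pd.DataFrame({
    "months": np.minimum(time_to_af, 60),
    "incident_af": event,
    "ckd": ckd,
    "lavi_high": lavi_high,
    "ckd_x_lavi": ckd * lavi_high,     # interaction (synergy) term
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="incident_af")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```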
abstract_id: PUBMED:35422927
Predictive value of CTRP3 for the disease recurrence of atrial fibrillation patients after radiofrequency ablation. Objective: To explore the expression of plasma CTRP3 in patients with non-valvular paroxysmal atrial fibrillation after radiofrequency ablation and its predictive value for disease recurrence.
Methods: In this retrospective study, patients treated at the Heart Center of Beijing Chaoyang Hospital from June 2016 to November 2017 were enrolled. According to the 2016 guidelines for the diagnosis and treatment of atrial fibrillation, patients diagnosed with paroxysmal atrial fibrillation were selected as the study subjects. All patients with successful radiofrequency ablation of atrial fibrillation were followed up by telephone or outpatient visit at 1, 3, 6 and 12 months after radiofrequency ablation. Recurrence of atrial fibrillation was defined as a duration of rapid atrial arrhythmia ≥30 seconds confirmed by electrocardiogram or 24-hour ambulatory electrocardiogram 3 months after radiofrequency ablation. According to the follow-up results, the patients were divided into a recurrent group and a non-recurrent group. The level of CTRP3 was detected by enzyme-linked immunosorbent assay (ELISA).
Results: Analysis of clinical baseline data showed significant differences between the recurrent group and the non-recurrent group in age, systolic blood pressure, diastolic blood pressure, eGFR, thyroid stimulating hormone level, platelet count, high-sensitivity C-reactive protein, NT-proBNP, left atrial anterior-posterior diameter, left atrial upper and lower diameter and CTRP3 (P < 0.05). The univariate logistic regression showed that older age (OR = 1.08, P < 0.001), increased diastolic blood pressure (OR = 1.051, P = 0.002), cardiac dysfunction (OR = 2.594, P = 0.01), high-sensitivity C-reactive protein (OR = 1.134, P = 0.008) and NT-proBNP (OR = 1.000, P = 0.005), increased anterior-posterior diameter of the left atrium (OR = 1.158, P < 0.001), increased upper and lower diameter of the left atrium (OR = 1.133, P < 0.001), thrombocytopenia (OR = -0.008, P < 0.027) and CTRP3 (OR = 1.007, P = 0.006) were risk factors for the recurrence of atrial fibrillation after radiofrequency ablation. Moreover, the multivariate logistic regression analysis demonstrated that CTRP3 (OR = 1.032, P = 0.005) was an independent predictor of recurrence.
Conclusion: The plasma concentration of CTRP3 increased significantly in patients with recurrent atrial fibrillation after radiofrequency ablation. Moreover, CTRP3 was a predictor of recurrence after radiofrequency ablation in patients with atrial fibrillation.
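A logistic model of the kind used above, with odds ratios obtained by exponentiating the coefficients, can be fitted as sketched below. The data are simulated and the covariate set is reduced purely for illustration; it is not the study's analysis.

```python
# Illustrative logistic regression for AF recurrence after ablation
# (simulated data and a reduced covariate set; not the study's analysis).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 200
age = rng.normal(62, 9, n)
ctrp3 = rng.normal(250, 60, n)          # plasma CTRP3, arbitrary units
la_diameter = rng.normal(38, 4, n)      # left atrial anteroposterior diameter, mm

# simulate recurrence with a modest effect of each covariate
logit = -12 + 0.05 * age + 0.01 * ctrp3 + 0.12 * la_diameter
recurrence = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(pd.DataFrame({"age": age, "ctrp3": ctrp3, "la_diameter": la_diameter}))
fit = sm.Logit(recurrence, X).fit(disp=0)
odds_ratios = np.exp(fit.params)        # exponentiated coefficients = odds ratios
print(odds_ratios.round(3))
```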
abstract_id: PUBMED:36505361
Association of chronic kidney disease with all-cause mortality in patients hospitalized for atrial fibrillation and impact of clinical and socioeconomic factors on this association. Background: Atrial fibrillation (AF) and chronic kidney disease (CKD) often co-occur, and many of the same clinical factors and indicators of socioeconomic status (SES) are associated with both diseases. The effect of the estimated glomerular filtration rate (eGFR) on all-cause mortality in AF patients and the impact of SES on this relationship are uncertain.
Materials And Methods: This retrospective study examined 968 patients who were admitted for AF. Patients were divided into four groups based on eGFR at admission: eGFR-0 (normal eGFR) to eGFR-3 (severely decreased eGFR). The primary outcome was all-cause mortality. Cox regression analysis was used to identify the effect of eGFR on mortality, and subgroup analyses to determine the impact of confounding factors.
Results: A total of 337/968 patients (34.8%) died during follow-up. The average age was 73.70 ± 10.27 years and there were 522 males (53.9%). More than 39% of these patients had CKD (eGFR < 60 mL/min/1.73 m2), 319 patients with moderately decreased eGFR and 67 with severely decreased eGFR. After multivariate adjustment and relative to the eGFR-0 group, the risk for all-cause death was greater in the eGFR-2 group (HR = 2.416, 95% CI = 1.366-4.272, p = 0.002) and the eGFR-3 group (HR = 4.752, 95% CI = 2.443-9.242, p < 0.00001), but not in the eGFR-1 group (p > 0.05). Subgroup analysis showed that moderately to severely decreased eGFR only had a significant effect on all-cause death in patients with low SES.
Conclusion: Moderately to severely decreased eGFR in AF patients was independently associated with increased risk of all-cause mortality, especially in those with lower SES.
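The grouping used above, from normal through severely decreased eGFR, follows the usual clinical cut-points. A helper of the kind sketched below could reproduce such a grouping; the exact thresholds used in the study are not stated in the abstract, so standard values are assumed here.

```python
# Assumed eGFR grouping (mL/min/1.73 m^2); the study's exact cut-points are not given
# in the abstract, so standard clinical thresholds are used for illustration.
def egfr_group(egfr: float) -> str:
    if egfr >= 90:
        return "eGFR-0 (normal)"
    if egfr >= 60:
        return "eGFR-1 (mildly decreased)"
    if egfr >= 30:
        return "eGFR-2 (moderately decreased)"
    return "eGFR-3 (severely decreased)"

for value in (95, 72, 45, 22):
    print(value, "->", egfr_group(value))
```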
abstract_id: PUBMED:37139162
The association between tyrosine kinase inhibitors and fatal arrhythmia in patients with non-small cell lung cancer in Taiwan. Objective: As a standard therapy, tyrosine kinase inhibitors (TKIs) improved survival in patients with non-small cell lung cancer (NSCLC) and epidermal growth factor receptor (EGFR) mutation. However, treatment-related cardiotoxicity, particularly arrhythmia, cannot be ignored. With the prevalence of EGFR mutations in Asian populations, the risk of arrhythmia among patients with NSCLC remains unclear.
Methods: Using data from the Taiwanese National Health Insurance Research Database and National Cancer Registry, we identified patients with NSCLC from 2001 to 2014. Using Cox proportional hazards models, we analyzed outcomes of death and arrhythmia, including ventricular arrhythmia (VA), sudden cardiac death (SCD), and atrial fibrillation (AF). The follow-up duration was three years.
Results: In total, 3876 patients with NSCLC treated with TKIs were matched to 3876 patients treated with platinum analogues. After adjusting for age, sex, comorbidities, and anticancer and cardiovascular therapies, patients receiving TKIs had a significantly lower risk of death (adjusted HR: 0.767; CI: 0.729-0.807, p < 0.001) than those receiving platinum analogues. Given that approximately 80% of the studied population reached the endpoint of mortality, we also adjusted for mortality as a competing risk. Notably, we observed significantly increased risks of both VA (adjusted sHR: 2.328; CI: 1.592-3.404, p < 0.001) and SCD (adjusted sHR: 1.316; CI: 1.041-1.663, p = 0.022) among TKI users compared with platinum analogue users. Conversely, the risk of AF was similar between the two groups. In the subgroup analysis, the increasing risk of VA/SCD persisted regardless of sex and most cardiovascular comorbidities.
Conclusions: Collectively, we highlighted a higher risk of VA/SCD in TKI users than in patients receiving platinum analogues. Further research is needed to validate these findings.
abstract_id: PUBMED:36057558
Circulating plasma galectin-3 predicts new-onset atrial fibrillation in patients after acute myocardial infarction during hospitalization. Background: New-onset atrial fibrillation (NOAF) is a common complication in patients with acute myocardial infarction (AMI) during hospitalization. Galectin-3 (Gal-3) is a novel inflammation marker that is significantly associated with AF. The association between post-AMI NOAF and Gal-3 during hospitalization is yet unclear.
Objective: The present study aimed to investigate the predictive value of plasma Gal-3 for post-AMI NOAF.
Methods: A total of 217 consecutive patients admitted with AMI were included in this retrospective study. Peripheral venous blood samples were obtained within 24 h after admission and plasma Gal-3 concentrations were measured.
Results: Post-AMI NOAF occurred in 18 patients in this study. Patients with NOAF were older (p < 0.001) than those without. A higher level of peak brain natriuretic peptide (BNP) (p < 0.001) and Gal-3 (p < 0.001), and a lower low-density lipoprotein cholesterol level (LDL-C) (p = 0.030) and estimated glomerular filtration rate (e-GFR) (p = 0.030), were recorded in patients with post-AMI NOAF. Echocardiographic information revealed that patients with NOAF had a significantly decreased left ventricular ejection fraction (LVEF) (p < 0.001) and an increased left atrial diameter (LAD) (p = 0.004) compared with those without NOAF. The receiver operating characteristic (ROC) curve analysis revealed a significantly higher value of plasma Gal-3 in the diagnosis of NOAF for patients with AMI during hospitalization (area under the curve, p < 0.001), with a sensitivity of 72.22% and a specificity of 72.22%. Multivariate logistic regression model analysis indicated that age (p = 0.045), plasma Gal-3 (p = 0.018), and LAD (p = 0.014) were independent predictors of post-MI NOAF.
Conclusions: Plasma Gal-3 concentration is an independent predictor of post-MI NOAF.
abstract_id: PUBMED:35492830
Mechanistic and Clinical Overview Cardiovascular Toxicity of BRAF and MEK Inhibitors: JACC: CardioOncology State-of-the-Art Review. Rapidly accelerated fibrosarcoma B-type (BRAF) and mitogen-activated extracellular signal-regulated kinase (MEK) inhibitors have revolutionized melanoma treatment. Approximately half of patients with melanoma harbor a BRAF gene mutation with subsequent dysregulation of the RAF-MEK-ERK signaling pathway. Targeting this pathway with BRAF and MEK blockade results in control of cell proliferation and, in most cases, disease control. These pathways also have cardioprotective effects and are necessary for normal vascular and cardiac physiology. BRAF and MEK inhibitors are associated with adverse cardiovascular effects including hypertension, left ventricular dysfunction, venous thromboembolism, atrial arrhythmia, and electrocardiographic QT interval prolongation. These effects may be underestimated in clinical trials. Baseline cardiovascular assessment and follow-up, including serial imaging and blood pressure assessment, are essential to balance optimal anti-cancer therapy while minimizing cardiovascular side effects. In this review, an overview of BRAF/MEK inhibitor-induced cardiovascular toxicity, the mechanisms underlying these, and strategies for surveillance, prevention, and treatment of these effects are provided.
abstract_id: PUBMED:29761335
Biomarkers and atrial fibrillation: Prediction of recurrences and thromboembolic events after rhythm control management. Atrial fibrillation (AF) is the most common arrhythmia in clinical practice and is associated with an increased risk of cardio- and cerebrovascular complications leading to increased mortality. Catheter ablation represents one of the most important and efficient therapy strategies in AF patients. Nevertheless, the high incidence of arrhythmia recurrences after catheter ablation leads to repeated procedures and higher treatment costs. Recently, several scores have been developed to predict rhythm outcomes after catheter ablation. Biomarker research is also of enormous interest. There are many clinical and blood biomarkers pathophysiologically associated with AF occurrence, progression and recurrence. These biomarkers, including different markers in blood (e.g., von Willebrand factor, D-dimer, natriuretic peptides) or urine (proteins, epidermal growth factor receptor) as well as cardiac imaging (echocardiography, computed tomography, magnetic resonance imaging), could help to improve clinical scores and be useful for individualized AF management and optimized patient selection for different AF treatment strategies. In this review, the role of diverse biomarkers and their predictive value related to AF-associated complications are discussed.
abstract_id: PUBMED:21191795
Angiotensin II signaling up-regulates the immediate early transcription factor ATF3 in the left but not the right atrium. The atria respond to various pathological stimuli including pressure and volume overload with remodeling and dilatation. Dilatation of the left atrium is associated with atrial fibrillation. The mechanisms involved in chamber-specific hypertrophy are largely unknown. Angiotensin II is hypothesized to take part in mediating this response. ATF3 is an immediate early gene found at the receiving end of multiple stress and growth stimuli. Here we characterize ATF3 as a direct target gene for angiotensin II. ATF3 expression is regulated by angiotensin receptor-mediated signaling in vivo and in vitro at the transcriptional level. ATF3 induction is mediated by cooperation between both the AT(1A) and AT₂ receptor subtypes. While AT₂R blocker (PD123319) efficiently blocks ATF3 induction in response to angiotensin II injection, it results in an increase in blood pressure indicating that the effect of angiotensin II on ATF3 is independent of its effect on blood pressure. In contrast to adrenergic stimulation that induces ATF3 in all heart chambers, ATF3 induction in response to angiotensin II occurs primarily in the left chambers. We hypothesize that the activation of differential signaling pathways accounts for the chamber-specific induction of ATF3 expression in response to angiotensin II stimulation. Angiotensin II injection rapidly activates the EGFR-dependent pathways including ERK and PI3K-AKT in the left but not the right atrium. EGF receptor inhibitor (Gefitinib/Iressa) as well as the AKT inhibitor (Triciribine) significantly abrogates ATF3 induction by angiotensin II in the left chambers. Collectively, our data strongly place ATF3 as a unique nuclear protein target in response to angiotensin II stimulation in the atria. The spatial expression of ATF3 may add to the understanding of the signaling pathways involved in cardiac response to neuro-hormonal stimulation, and in particular to the understanding of left atrial-generated pathology such as atrial fibrillation.
abstract_id: PUBMED:36942434
Multiple cardiotoxicities during osimertinib therapy. Introduction: The tyrosine-kinase inhibitor osimertinib is an oral anti-cancer agent that is used for the treatment of patients with metastatic non-small cell lung cancer harbouring sensitising EGFR mutations. Patients receiving osimertinib are at higher risk of developing cardiac toxicity, and here we present the case of a 72-year-old male who developed multiple cardiotoxicities during therapy (i.e. QTc prolongation, atrial fibrillation, heart failure).
Case Report: A 72-year-old white British, ex-smoker male patient was admitted to our cancer centre with adenocarcinoma of the lung. Afatinib, gefitinib, osimertinib, and carboplatin plus pemetrexed chemotherapy were the treatments he received. At the 15th month of osimertinib therapy, the patient developed QTc prolongation. Two weeks after the first incidence of QTc prolongation, electrocardiography showed rate-controlled atrial fibrillation. In addition to his atrial fibrillation, echocardiography revealed severely impaired left ventricular systolic function (left ventricular ejection fraction: 30%).
Management And Outcomes: At baseline, before starting osimertinib, an electrocardiography investigation was carried out as per the protocol. The baseline drug history was reviewed and rosuvastatin was discontinued before initiating osimertinib, as both drugs contribute to QTc prolongation. Dabigatran, bisoprolol, and digoxin were started for the treatment of atrial fibrillation. Ramipril and spironolactone were prescribed for the treatment of heart failure, and osimertinib was continued uneventfully. The patient died of non-small cell lung cancer.
Discussion: Recommendations for practical and clinically relevant baseline and on-treatment assessments are considered which may reduce the risk of cardiac toxicity during osimertinib therapy. These include baseline cardiac risk stratification, consideration of concomitant medications that may result in additive cardiac risk, and use of electrocardiography and echocardiography surveillance.
abstract_id: PUBMED:36480925
Possible heart failure caused by osimertinib in a lung cancer patient. Introduction: Osimertinib is a third-generation tyrosine kinase inhibitor (TKI) indicated for the treatment of epidermal growth factor receptor mutated non-small cell lung cancer (NSCLC). It has demonstrated better effectiveness than other TKIs for the same indication. However, despite a good safety profile, it can produce some cardiotoxicity that does not occur with other drugs of the same group.
Case Report: We report the evolution and management of a female patient diagnosed with NSCLC who developed grade 3 cardiotoxicity due to treatment with osimertinib. This patient had left bundle branch block, dyslipidemia, and hypertension as cardiovascular risk factors. After a long period of treatment with osimertinib, she developed severe heart failure (HF) with a marked decrease in left ventricular ejection fraction (LVEF), which led to an eight-day admission to the oncology unit.
Management And Outcomes: Treatment with osimertinib was first suspended and then resumed after stabilization of the HF. She also developed atrial fibrillation during admission and has required close cardiac monitoring and management since the onset of the HF. After evaluating the benefit-risk balance, osimertinib was reintroduced and the patient remains on treatment, although her baseline LVEF has not recovered.
Discussion: There is scarce evidence in the literature concerning HF and marked LVEF decline due to osimertinib. However, its severity and repercussions for the patient justify thorough screening of cardiovascular risk factors before starting the therapy.
Answer: The abstracts provided do not directly address the association between changes in estimated glomerular filtration rate (eGFR) values and the risk of permanent atrial fibrillation (AF). However, they do provide evidence of an association between chronic kidney disease (CKD), which is characterized by reduced eGFR, and the risk of AF.
In abstract PUBMED:24836687, it is reported that renal disease, as indicated by lower eGFR, was significantly associated with incident AF in a population-based study. The study found that subjects who progressed to AF had lower eGFR, and the final Cox regression model identified renal disease as a significant predictor of AF. Additionally, there was a synergistic effect between CKD and left atrial volume index (LAVI) in increasing AF risk.
Abstract PUBMED:36505361 also supports the association between CKD and adverse outcomes in patients with AF. It was found that patients with moderately to severely decreased eGFR had an increased risk of all-cause mortality, particularly in those with lower socioeconomic status.
While these abstracts suggest a link between renal dysfunction and AF, they do not specifically differentiate between types of AF (paroxysmal, persistent, or permanent). Therefore, based on the provided abstracts, it can be inferred that there is an association between CKD (implied by changes in eGFR) and the risk of AF in general, but there is no explicit evidence regarding the risk of permanent AF specifically. |
Instruction: Can global longitudinal strain predict reduced left ventricular ejection fraction in daily echocardiographic practice?
Abstracts:
abstract_id: PUBMED:25530159
Can global longitudinal strain predict reduced left ventricular ejection fraction in daily echocardiographic practice? Background: Transthoracic echocardiography (TTE) is the most commonly used method for measuring left ventricular ejection fraction (LVEF), but its reproducibility remains a matter of controversy. Speckle tracking echocardiography assesses myocardial deformation and left ventricular systolic function by measuring global longitudinal strain (GLS), which is more reproducible, but is not used routinely in hospital practice.
Aim: To investigate the feasibility of on-line two-dimensional GLS in predicting LVEF during routine echocardiographic practice.
Methods: The analysis involved 507 unselected consecutive patients undergoing TTE between August 2012 and November 2013. Echocardiograms were performed by a single sonographer. Echogenicity was noted as good, moderate or poor. Simple linear regression was used to assess the relationship between LVEF and GLS, overall and according to quality of echogenicity. Receiver operating curve (ROC) analysis was used to identify the threshold GLS that predicts LVEF≤40%.
Results: Mean LVEF was 64±11% and GLS was -18.0±4.0%. A reasonable correlation was found between LVEF and GLS (r=-0.53; P<0.001), which was improved when echogenicity was good (r=-0.60; P<0.001). GLS explained 28.1% of the variation in LVEF, and for one unit decrease in GLS, a 1.45 unit increase in LVEF was expected. Correlations between LVEF and GLS were -0.51 for patients in sinus rhythm (n=490) and -0.86 in atrial fibrillation (n=17). Based on ROC analysis, the area under the curve was 0.97 for GLS≥-14%, allowing detection of LVEF≤40% with a sensitivity of 95% and specificity of 86%.
Conclusion: Two-dimensional GLS is easy to obtain and accurately detects LVEF≤40% in unselected patients. GLS may be especially helpful when a suboptimal acoustic window makes LVEF measurement by Simpson's biplane method difficult and in atrial fibrillation patients with low heart rate variability.
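As a brief, hedged illustration of the arithmetic implied by the figures reported in this abstract (the regression intercept is not given and is left symbolic, so this is a sketch of the reported relationships rather than the authors' fitted model):
\[ \text{LVEF} \approx \beta_0 - 1.45 \times \text{GLS}, \]
so a one-unit decrease in GLS corresponds to an expected 1.45-unit increase in LVEF, and
\[ r = -0.53 \;\Rightarrow\; r^2 = (-0.53)^2 \approx 0.281, \]
consistent with GLS explaining 28.1% of the variation in LVEF. The screening rule simply applies the reported cut-off: flag possible LVEF ≤ 40% when GLS ≥ -14%, with the reported sensitivity of 95% and specificity of 86%.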
abstract_id: PUBMED:38078698
Early improvement of global longitudinal strain after iron deficiency correction in heart failure with reduced ejection fraction. Background: Iron deficiency correction with ferric carboxymaltose improves symptoms and reduces rehospitalization in patients with reduced left ventricular ejection fraction. The mechanisms underlying these improvements are poorly understood. This study aimed to determine changes in left ventricular contractility after iron treatment as reflected in global longitudinal strain.
Methods: Prospective single-center study including 43 adults with reduced ejection fraction, non-anemic iron deficiency, and functional class II-III heart failure despite optimal medical treatment. Global longitudinal strain through speckle-tracking echocardiography was measured at baseline and 4 weeks after ferric carboxymaltose.
Results: A significant improvement in global longitudinal strain was detected (from -12.3% ± 4.0% at baseline to -15.6% ± 4.1%, p < .001); ferritin and transferrin saturation index had increased, but ejection fraction presented no significant changes (baseline 35.7% ± 4.6%, follow-up 37.2% ± 6.6%, p = .073).
Conclusions: In patients with heart failure and reduced ejection fraction, the correction of iron deficiency with ferric carboxymaltose is associated with an early improvement in global longitudinal strain, possibly suggesting a direct effect of iron correction on myocardial contractility.
abstract_id: PUBMED:29664507
Echocardiographic evaluation after pediatric heart transplant in Chile: initial application of a functional protocol with global longitudinal strain. Introduction: The echocardiographic evaluation of patients after heart transplantation is a useful tool. However, it is still necessary to define an optimal follow-up protocol.
Objective: To describe the results of the application of a functional echocardiographic protocol in patients being followed after pediatric heart transplantation.
Patients And Method: All patients being followed at our institution after pediatric heart transplantation underwent an echocardiographic examination with a functional protocol that included global longitudinal strain. Contemporaneous endomyocardial biopsy results and hemodynamic data were recorded.
Results: 9 patients were evaluated with our echocardiographic functional protocol. Of these patients, only 1 showed systolic left ventricular dysfunction according to classic parameters. However, almost all patients had an abnormal global longitudinal strain. Right ventricular systolic dysfunction was observed in all patients. No episodes of moderate to severe rejection were recorded. No correlation was observed between these parameters and pulmonary artery pressure.
Conclusions: Subclinical biventricular systolic dysfunction was observed in the majority of the patients in this study. No association with rejection episodes or pulmonary hypertension was observed, which may be related to the absence of moderate or severe rejection episodes during the study period, and to the small sample size. Long term follow-up of these patients may better define the clinical relevance of our findings.
abstract_id: PUBMED:30314666
Adipose epicardial tissue association with subclinical systolic dysfunction detected by longitudinal strain in diabetic patients with poor glycemic control Objective: The aim of this study is to assess the association between epicardial adipose tissue (EAT) and infraclinical myocardial dysfunction detected by strain imaging in diabetic patients (T2DM) with poor glycemic control.
Methods: 22 patients with T2DM and 22 healthy control subjects of similar age and sex were prospectively recruited. Echocardiographic parameters were investigated.
Results: In comparison to controls, diabetic patients had significantly higher body mass index (27.7 vs. 24.6; P<0.01), waist circumference (103 vs. 84; P<0.001) and usCRP level (5.4 vs. 1.5; P<0.01). On echocardiography, no differences were found in terms of ejection fraction or ventricular mass; however, patients with T2DM had significantly thicker EAT (8.7±0.7 vs. 3.0±1.0; P<0.001) and altered systolic longitudinal strain (-18.8±3.2 vs. -22.3±1.6; P<0.001). On multivariate analysis, EAT was identified as an independent contributor (β=0.46, P=0.001) to systolic longitudinal strain.
Conclusion: In patients with T2DM and poor glycemic control, EAT was associated with infraclinical systolic dysfunction evaluated by global longitudinal strain despite a normal resting ejection fraction and no coronary artery disease.
abstract_id: PUBMED:38357562
Assessment of ventricular function after total cavo-pulmonary derivation in adult patients: Interest of global longitudinal strain. Ventricular dysfunction is the most frequent complication in adult patients after Fontan completion. In this work, we aimed to evaluate ventricular systolic function using conventional echocardiographic parameters and global longitudinal strain (GLS), and to assess their ability to predict early ventricular systolic dysfunction. This is a prospective, single-centre study enrolling 15 clinically stable adult Fontan patients with preserved ejection fraction (EF). A myocardial deformation study by GLS using the speckle tracking technique was performed in addition to a standard Doppler transthoracic echocardiography (TTE). Cardiac magnetic resonance imaging (CMR) was also performed, and echocardiographic and CMR parameters were compared. Both GLS and TTE-derived EF correlated significantly with CMR-derived EF (P=0.003 and 0.014, respectively). We divided our population into two groups based on a cut-off value of 50% for CMR-derived EF. Comparison of GLS between both groups showed a significant correlation (P=0.003). A GLS cut-off value of -13.3% showed a sensitivity of 67% and a specificity of 100%. GLS has moderate diagnostic value for systolic myocardial dysfunction in adult patients with Fontan circulation.
abstract_id: PUBMED:37413704
Prognostic Relevance of Left Ventricular Global Longitudinal Strain in Patients With Heart Failure and Reduced Ejection Fraction. Patients with heart failure (HF) and reduced ejection fraction (HFrEF) are complex patients who often have a high prevalence of co-morbidities and risk factors. In the present study, we investigated the prognostic significance of left ventricular (LV) global longitudinal strain (GLS) along with important clinical and echocardiographic variables in patients with HFrEF. Patients who had a first echocardiographic diagnosis of LV systolic dysfunction, defined as LV ejection fraction ≤45%, were selected. The study population was subdivided into 2 groups based on a spline curve analysis derived optimal threshold value of LV GLS (≤10%). The primary end point was occurrence of worsening HF, whereas the composite of worsening HF and all-cause death was chosen for the secondary end point. A total of 1,873 patients (mean age 63 ± 12 years, 75% men) were analyzed. During a median follow-up of 60 months (interquartile range 27 to 60 months), 256 patients (14%) experienced worsening HF and the composite end point of worsening HF and all-cause mortality occurred in 573 patients (31%). The 5-year event-free survival rates for the primary and secondary end point were significantly lower in the LV GLS ≤10% group compared with the LV GLS >10% group. After adjustment for important clinical and echocardiographic variables, baseline LV GLS remained independently associated with a higher risk of worsening HF (hazard ratio 0.95, 95% confidence interval 0.90 to 0.99, p = 0.032) and the composite of worsening HF and all-cause mortality (hazard ratio 0.94, 95% confidence interval 0.90 to 0.97, p = 0.001). In conclusion, baseline LV GLS is associated with long-term prognosis in patients with HFrEF, independent of various clinical and echocardiographic predictors.
abstract_id: PUBMED:37910595
Speckle tracking echocardiography-derived left ventricular global longitudinal strain in ex-thalassaemics. Aims: Long term survivors of haematopoietic stem cell transplantation (HSCT) for β-thalassemia major are designated "ex-thalassaemics". Whether ex-thalassaemics continue to harbour residual myocardial dysfunction and thereby stand the risk of heart failure-related morbidity and mortality is unknown. The aim of this study was to assess the prevalence and predictors of subclinical left ventricular (LV) dysfunction in an apparently normal ex-thalassaemic population.
Methods: We conducted a single-centre cross-sectional study among 62 ex-thalassaemic patients, who had undergone HSCT for β-thalassaemia major at our centre. The primary outcome variable was LV systolic dysfunction, as assessed by 1) LV global longitudinal strain (GLS) derived by 2D speckle tracking echocardiography and 2) LV ejection fraction (EF) derived by the 2D Simpson's biplane method.
Results: Among the 62 patients included in the study, 7 [11.3%] were found to have LV systolic dysfunction, all of which were subclinical. Of these, 4 [6.5%] had abnormal GLS and LVEF, 2 [3.2%] had abnormal GLS with normal LVEF, and 1 [1.6%] had abnormal LVEF with low normal mean GLS. There were no statistically significant predictors of LV dysfunction in this cohort.
Conclusion: There was a high prevalence of subclinical myocardial dysfunction in the ex-thalassaemic population, reiterating the need for close follow-up of these patients. 2D speckle tracking echocardiography-derived LV global longitudinal strain is an effective tool in detecting subclinical myocardial dysfunction in this cohort.
abstract_id: PUBMED:30937684
3D echocardiographic global longitudinal strain can identify patients with mildly-to-moderately reduced ejection fraction at higher cardiovascular risk. Severely reduced left ventricular (LV) ejection fraction (EF) derived from 2D echocardiographic (2DE) images is associated with increased mortality and used to guide therapeutic choices. Global longitudinal strain (GLS) is more sensitive than LVEF to detect abnormal LV function, and accordingly may help identify patients with mildly-to-moderately reduced LVEF who are at a similarly high cardiovascular (CV) risk. We hypothesized that 3D echocardiographic (3DE) measurements of EF and GLS, which are more reliable and reproducible, may have even better predictive value than the 2DE indices, and compared their ability to identify such patients. We retrospectively studied 104 inpatients with 2DE-derived LVEF of 30-50% who underwent transthoracic echocardiography during 2006-2010 period, had good quality images, and were followed-up through 2016. Both 2DE and 3DE images were analyzed to measure LVEF and GLS. Kaplan-Meier survival curves were generated for two subgroups defined by the median of each parameter as the cutoff. Of the 104 patients, 32 died of CV related causes. Cox regression revealed that 3D GLS was the only variable associated with CV mortality. Kaplan-Meier curves showed that 2D LVEF, 2D GLS and 3D EF were unable to differentiate patients at higher CV mortality risk, but 3D GLS was the only parameter to do so. Because 3D GLS is able to identify patients with mildly-to-moderately reduced LVEF who are at higher CV mortality risk, its incorporation into clinical decisions may improve survival of those who would benefit from therapeutic interventions not indicated according to the current guidelines.
abstract_id: PUBMED:35168822
Assessment of left ventricular global longitudinal strain in patients with type 2 diabetes: Relationship with microvascular damage and glycemic control. Background And Aims: Type 2 Diabetes mellitus (T2DM) is associated with a higher risk of Heart Failure; Left Ventricular (LV) diastolic dysfunction is often considered the first marker of Diabetic cardiomyopathy; however, early preclinical LV systolic dysfunction has also been observed by means of echocardiographic measurement of strain. This study is aimed at assessing determinants of impaired strain and diastolic ventricular dysfunction in patients with T2DM.
Methods And Results: Cross-sectional study, performed on a consecutive series of patients with T2DM aged 30-80 years, BMI<40 kg/m2, free of cardiovascular disease, assessing metabolic control, microvascular complications, echocardiographic measures. Out of 206 patients, 19.6% had GLS lower than 18. GLS showed a significant inverse correlation with HbA1c (p = 0.016), BMI (p = 0.002), waist (p < 0.0001), and mean L:H Ratio (p = 0.019). In a multivariate regression for LV GLS including HbA1c, age, sex, BMI and mean RR SDNN index, only HbA1c retained statistical significance (B = -0.050 [-0.091; -0.009], p = 0.01). Among markers of LV diastolic function, only the E/E' ratio was associated with HbA1c in univariate analysis, and it retained statistical significance in a multivariate regression including HbA1c, age, sex and disease duration (B = 0.038 [0.03; -0.073], p = 0.032).
Conclusion: In patients with T2DM, a reduced left ventricular global longitudinal strain appears to be independently associated with impaired glucose control and autonomic neuropathy, regardless of microvascular complications.
abstract_id: PUBMED:30450553
Ventricular global longitudinal strain is altered in children with sickle cell disease. Cardiac disease is the primary cause of death in sickle cell disease (SCD). Right and left ventricular global longitudinal strain (RVGLS, LVGLS) are early markers of systolic dysfunction but are not well investigated among children with SCD. One hundred and forty-three patients with HbSS or HbSβ0 -thalassaemia (median age 11 years, range 5-19 years) and 71 controls matched for age and sex were compared. RVGLS and LVGLS were measured and compared with conventional measures of echocardiography and markers of haemolysis and inflammation. RVGLS was higher in children with SCD than in controls (-25·72% ± 3·45% vs. -24·54% ± 2·41%, P = 0·005); LVGLS was not different. RVGLS decreased with older age in children with SCD (ρ = 0·338, P < 0·001) but not among controls. Decreased RVGLS was associated with increased left atrial end diastolic volume (ρ = 0·181, P = 0·04); RVGLS increased with cardiac output (r = -0·279, P = 0·01). RVGLS and LVGLS were not associated with disease-modifying therapies, degree of anaemia or haemolysis markers. Elevated RVGLS may indicate an early RV compensatory mechanism in response to upstream myocardial insults and elevated cardiac output. Global longitudinal strain may serve as an early marker of altered myocardial function in children with SCD.
Answer: Yes, global longitudinal strain (GLS) can predict reduced left ventricular ejection fraction (LVEF) in daily echocardiographic practice. A study involving 507 unselected consecutive patients undergoing transthoracic echocardiography (TTE) found a reasonable correlation between LVEF and GLS (r=-0.53; P<0.001), which improved when echogenicity was good (r=-0.60; P<0.001). GLS explained 28.1% of the variation in LVEF, and for one unit decrease in GLS, a 1.45 unit increase in LVEF was expected. Receiver operating curve (ROC) analysis identified a threshold GLS that predicts LVEF≤40% with high sensitivity (95%) and specificity (86%), suggesting that two-dimensional GLS is easy to obtain and accurately detects LVEF≤40% in unselected patients (PUBMED:25530159).
Additionally, GLS has been shown to be an early marker of systolic dysfunction in various patient populations, including those with heart failure with reduced ejection fraction (HFrEF), pediatric heart transplant recipients, diabetic patients with poor glycemic control, and adult patients post-Fontan completion (PUBMED:38078698, PUBMED:29664507, PUBMED:30314666, PUBMED:38357562). It has also been demonstrated to have prognostic relevance in patients with HFrEF, being associated with long-term prognosis independent of various clinical and echocardiographic predictors (PUBMED:37413704). Moreover, GLS has been found to be an effective tool in detecting subclinical myocardial dysfunction in ex-thalassaemics (PUBMED:37910595) and in identifying patients with mildly-to-moderately reduced LVEF who are at higher cardiovascular mortality risk (PUBMED:30937684). In patients with type 2 diabetes, a reduced GLS has been associated with impaired glucose control and autonomic neuropathy, regardless of microvascular complications (PUBMED:35168822). Lastly, in children with sickle cell disease, GLS may serve as an early marker of altered myocardial function (PUBMED:30450553).
These findings collectively support the use of GLS as a predictive tool for reduced LVEF in daily echocardiographic practice. |
Instruction: Adding consumer-providers to intensive case management: does it improve outcome?
Abstracts:
abstract_id: PUBMED:17535940
Adding consumer-providers to intensive case management: does it improve outcome? Objective: Over the past decade, there has been increasing interest in the employment of mental health consumers in various roles as providers of services. Although integration of consumers into case management services has been studied, the roles of consumers have been poorly defined and the benefits have not been established. The goal of this study was to evaluate whether consumers enhance case management outcome through the provision of social support.
Methods: This study compared consumer-assisted and non-consumer-assisted case management with standard clinic-based care. The consumer role focused on the development of social support by using peer staff who matched the profile of participants. A total of 203 clients with severe and persistent mental illness were randomly assigned to one of the three conditions and followed for 12 months.
Results: All three programs yielded the same general pattern of improvement over time for symptoms, health care satisfaction, and quality of life. Clients in the three programs also showed similar but small changes in measures of social network behavior. Consumer-assisted case management was unique in its use of peer-organized activities. Non-consumer-assisted case management made greater use of individual contacts with professional staff. Standard clinic-based care relied more on group and on individual therapy. Despite these variations in the pattern of services over a 12-month period, no one program emerged as categorically superior to the others.
Conclusions: Although more research is needed to determine optimal roles for consumers in mental health service delivery, a randomized trial found no evidence that the presence of consumers enhances case management outcome.
abstract_id: PUBMED:10401895
Comparing practice patterns of consumer and non-consumer mental health service providers. The practice patterns of consumer and non-consumer providers of assertive community treatment are compared using both quantitative and qualitative data collected as part of a randomized trial. Activity log data showed that there were few substantive differences in the pattern of either the administrative or direct service tasks performed by the two teams. In contrast, the qualitative data revealed that there were discernable differences in the "culture" of the two teams. The consumer team "culture" emphasized "being there" with the client while the non-consumer team was more concerned with accomplishing tasks.
abstract_id: PUBMED:21659293
A review of consumer-provided services on assertive community treatment and intensive case management teams: implications for future research and practice. Background: Assertive community treatment (ACT) is an evidence-based practice that provides intensive, in vivo services for adults with severe mental illness. Some ACT and intensive case management teams have integrated consumers as team members with varying results.
Methods: The authors reviewed the literature examining the outcomes of having consumer providers on case management teams, with attention devoted to randomized controlled trials (RCTs).
Results: Sixteen published studies were identified, including eight RCTs. Findings were mixed, with evidence supporting consumer-provided services for improving engagement and limited support for reduced hospitalizations. However, evidence was lacking for other outcomes areas such as symptom reduction or improved quality of life.
Conclusion: Including a consumer provider on an ACT team could enhance the outreach mechanisms of ACT, using a more recovery-focused approach to bring consumers into services and help engage them over time. More rigorous research is needed to further evaluate integrating consumer providers on teams.
abstract_id: PUBMED:9170772
Patterns of services and consumer outcome in an intensive case management program. This study examined the patterns of services provided to individuals with serious and persistent mental illness during their first year in an intensive case management program. Services in 10 content areas were examined, and patterns for more versus less "successful" individuals were compared. Differences emerged for services focusing on family and housing, suggesting that the need for community support services influences the need for continued intensive case management. Linear reductions in rehabilitation services suggest that such services may indeed be effective early in the treatment process. Finally, differences among case managers in service patterns for 5 of the 10 content areas suggest that case managers play an important role in determining the course of treatment.
abstract_id: PUBMED:23695812
Consumer health-care information technology Consumer health-care information technology is intended to improve patients' opportunities to gather information about their own health. Ideally, this will be achieved through an improved involvement of existing data bases and an improved communication of information to patients and to care providers, if desired by patients. Additionally, further interconnection of existing and new systems and pervasive system design may be used. All consumer health-care information technology services are optional and leave patients in control of their medical data at all times. This article reflects the current status of consumer health-care information technology research and suggests further research areas that should be addressed.
abstract_id: PUBMED:33277155
"Noise Factory": A qualitative study exploring healthcare providers' perceptions of noise in the intensive care unit. Objectives: This study aimed to explore healthcare providers' perceptions of noise in the intensive care unit.
Design: A qualitative exploratory study was conducted using group interviews.
Setting: The setting comprised a total of 15 participants (five physicians and ten registered nurses) working in an 18-bed medical surgical intensive care unit at a teaching hospital in Istanbul, Turkey. Semi-structured questions were formulated and used in focus group interviews, after which the recorded interviews were transcribed by the researchers. Thematic analysis was used to identify significant statements and initial codes.
Findings: Four themes were identified: the meaning of noise, sources of noise, effects of noise, and prevention and management of noise. It was found that noise was an inevitable feature of the intensive care unit. The most common sources of noise were human-induced. It was also determined that device-generated sounds, such as alarms, did not in themselves produce much noise; however, when staff were late in responding, the sound transformed into noise. Furthermore, it was observed that staff efforts to decrease noise levels had only a momentary effect, changing nothing in the long term because the entire team failed to implement any initiatives consistently. The majority of nurses stated that they were becoming insensitive to the noise due to constant exposure to device-induced noise.
Conclusion: The data obtained from this study showed that especially human-induced noise threatened healthcare providers' cognitive task functions, concentration and job performance, impaired communication and negatively affected patient safety. In addition, it was determined that any precautions taken to reduce noise were not fully effective. A team approach should be used in managing noise in intensive care units with better awareness.
abstract_id: PUBMED:19114573
Are comparisons of consumer satisfaction with providers biased by nonresponse or case-mix differences? Objective: This study examined how consumer satisfaction ratings differ between mental health care providers to determine whether comparison of ratings is biased by differences in survey response rates or consumer characteristics.
Methods: Consumer satisfaction surveys mailed by a mixed-model prepaid health plan were examined. Survey data were linked to computerized records regarding consumers' demographic (age, sex, and type of insurance coverage) and clinical (primary diagnosis and initial versus return visit) characteristics. Statistical models examined probabilities of returning the survey (N=8,025 returned surveys) and of giving an excellent satisfaction rating. Variability was separated into within-provider effects and between-provider effects.
Results: The overall response rate was 33.8%, and 49.9% of responders reported excellent satisfaction. Neither response rate nor satisfaction rating was related to primary diagnosis. Within the practices of individual providers, response rate and receiving an excellent rating were significantly associated with female sex, older age, longer enrollment in the health plan, and making a return visit. Analyses of between-provider effects, however, found that only a higher proportion of return visitors was significantly associated with higher response rates and higher satisfaction ratings.
Conclusions: There was little evidence that differences in response rate or in consumers served biased comparison of satisfaction ratings between mental health providers. Bias might be greater in a setting with more heterogeneous consumers or providers. Returning consumers gave higher ratings than first-time visitors, and analyses of satisfaction ratings may need to account for this difference. Extremely high or low ratings should be interpreted cautiously, especially for providers with a small number of surveys.
abstract_id: PUBMED:23543537
Consumer-providers of care for adult clients of statutory mental health services. Background: In mental health services, the past several decades has seen a slow but steady trend towards employment of past or present consumers of the service to work alongside mental health professionals in providing services. However the effects of this employment on clients (service recipients) and services has remained unclear.We conducted a systematic review of randomised trials assessing the effects of employing consumers of mental health services as providers of statutory mental health services to clients. In this review this role is called 'consumer-provider' and the term 'statutory mental health services' refers to public services, those required by statute or law, or public services involving statutory duties. The consumer-provider's role can encompass peer support, coaching, advocacy, case management or outreach, crisis worker or assertive community treatment worker, or providing social support programmes.
Objectives: To assess the effects of employing current or past adult consumers of mental health services as providers of statutory mental health services.
Search Methods: We searched the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library 2012, Issue 3), MEDLINE (OvidSP) (1950 to March 2012), EMBASE (OvidSP) (1988 to March 2012), PsycINFO (OvidSP) (1806 to March 2012), CINAHL (EBSCOhost) (1981 to March 2009), Current Contents (OvidSP) (1993 to March 2012), and reference lists of relevant articles.
Selection Criteria: Randomised controlled trials of current or past consumers of mental health services employed as providers ('consumer-providers') in statutory mental health services, comparing either: 1) consumers versus professionals employed to do the same role within a mental health service, or 2) mental health services with and without consumer-providers as an adjunct to the service.
Data Collection And Analysis: Two review authors independently selected studies and extracted data. We contacted trialists for additional information. We conducted analyses using a random-effects model, pooling studies that measured the same outcome to provide a summary estimate of the effect across studies. We describe findings for each outcome in the text of the review with considerations of the potential impact of bias and the clinical importance of results, with input from a clinical expert.
Main Results: We included 11 randomised controlled trials involving 2796 people. The quality of these studies was moderate to low, with most of the studies at unclear risk of bias in terms of random sequence generation and allocation concealment, and high risk of bias for blinded outcome assessment and selective outcome reporting. Five trials involving 581 people compared consumer-providers to professionals in similar roles within mental health services (case management roles (4 trials), facilitating group therapy (1 trial)). There were no significant differences in client quality of life (mean difference (MD) -0.30, 95% confidence interval (CI) -0.80 to 0.20); depression (data not pooled), general mental health symptoms (standardised mean difference (SMD) -0.24, 95% CI -0.52 to 0.05); client satisfaction with treatment (SMD -0.22, 95% CI -0.69 to 0.25), client or professional ratings of client-manager relationship; use of mental health services, hospital admissions and length of stay; or attrition (risk ratio 0.80, 95% CI 0.58 to 1.09) between mental health teams involving consumer-providers or professional staff in similar roles. There was a small reduction in crisis and emergency service use for clients receiving care involving consumer-providers (SMD -0.34, 95% CI -0.60 to -0.07). Past or present consumers who provided mental health services did so differently than professionals; they spent more time face-to-face with clients, and less time in the office, on the telephone, with clients' friends and family, or at provider agencies. Six trials involving 2215 people compared mental health services with or without the addition of consumer-providers. There were no significant differences in psychosocial outcomes (quality of life, empowerment, function, social relations), client satisfaction with service provision (SMD 0.76, 95% CI -0.59 to 2.10) and with staff (SMD 0.18, 95% CI -0.43 to 0.79), attendance rates (SMD 0.52, 95% CI -0.07 to 1.11), hospital admissions and length of stay, or attrition (risk ratio 1.29, 95% CI 0.72 to 2.31) between groups with consumer-providers as an adjunct to professional-led care and those receiving usual care from health professionals alone. One study found a small difference favouring the intervention group for both client and staff ratings of clients' needs having been met, although detection bias may have affected the latter. None of the six studies in this comparison reported client mental health outcomes. No studies in either comparison group reported data on adverse outcomes for clients, or the financial costs of service provision.
Authors' Conclusions: Involving consumer-providers in mental health teams results in psychosocial, mental health symptom and service use outcomes for clients that were no better or worse than those achieved by professionals employed in similar roles, particularly for case management services. There is low quality evidence that involving consumer-providers in mental health teams results in a small reduction in clients' use of crisis or emergency services. The nature of the consumer-providers' involvement differs compared to professionals, as do the resources required to support their involvement. The overall quality of the evidence is moderate to low. There is no evidence of harm associated with involving consumer-providers in mental health teams. Future randomised controlled trials of consumer-providers in mental health services should minimise bias through the use of adequate randomisation and concealment of allocation, blinding of outcome assessment where possible, the comprehensive reporting of outcome data, and the avoidance of contamination between treatment groups. Researchers should adhere to SPIRIT and CONSORT reporting standards for clinical trials. Future trials should further evaluate standardised measures of clients' mental health, adverse outcomes for clients, the potential benefits and harms to the consumer-providers themselves (including need to return to treatment), and the financial costs of the intervention. They should utilise consistent, validated measurement tools and include a clear description of the consumer-provider role (e.g. specific tasks, responsibilities and expected deliverables of the role) and relevant training for the role so that it can be readily implemented. The weight of evidence being strongly based in the United States, future research should be located in diverse settings including in low- and middle-income countries.
abstract_id: PUBMED:10544992
A study of client-focused case management and consumer advocacy: the Community and Consumer Service Project. Objective: The study investigated the provision of client-focused services to community-based clients with schizophrenia and bipolar disorder. It hypothesised that the delivery of more client-focused services would improve client outcome in terms of functioning, disability and satisfaction with services. Client-focused services were developed using an empowerment model of case management and by the addition of consumer advocates.
Method: Clients referred for case management were randomly allocated to one of three groups: standard case management (n = 35), client-focused case management (n = 39), or client-focused case management plus consumer advocacy (n = 45). Measures of functioning, disability, quality of life, burden of care and service satisfaction were measured at baseline and 12 months. Outcome data were collected concerning number and duration of hospital readmission, crisis intervention and compliance with treatment and services.
Results: While there were no differences between the groups on quantitative measures of functioning, disability, quality of life, service satisfaction and burden of care, there were significant between-group differences on qualitative measures of satisfaction with services.
Conclusions: Several methodological difficulties hampered interpretation of the findings. Although clients did not differ on outcome measures of functioning and disability, the group receiving client-focused case management reported greater satisfaction with service delivery.
abstract_id: PUBMED:26148737
Recovery-based services in a psychiatric intensive care unit - the consumer perspective. Objective: To describe the implementation of recovery-based practice into a psychiatric intensive care unit, and report change in seclusion rates over the period when these changes were introduced (2011-2013).
Method: Recovery-based practices including collaborative care, safety care plans, a comfort room, and debriefing after coercive interventions were introduced. A carer consultant was employed. A restraint and seclusion review committee, chaired by a peer worker, was established. A consumer exit interview was introduced and these data were collected, reviewed by staff and the peer worker and used to improve the ward environment. Rates of seclusion were measured during the period when recovery-based practices were introduced.
Results: Consumer feedback indicated that positive aspects of the psychiatric intensive care unit included approachable, helpful staff and completion of a safety care plan. Negative aspects included lack of involvement in decisions about admission and about medications, the non-smoking policy, and being placed in seclusion or restraint. There was a significant reduction in the number of consumers secluded and the total number of seclusions.
Conclusions: Recovery principles can be successfully introduced in a psychiatric intensive care unit environment. Introduction of recovery based practice was associated with a reduction in seclusion.
Answer: The addition of consumer providers to intensive case management has been explored in various studies, but the evidence regarding its impact on outcomes is mixed. A randomized trial found no evidence that the presence of consumers enhances case management outcomes, as all three programs (consumer-assisted, non-consumer-assisted case management, and standard clinic-based care) yielded the same general pattern of improvement over time for symptoms, health care satisfaction, and quality of life (PUBMED:17535940). Similarly, a systematic review of randomized controlled trials assessing the effects of employing consumers of mental health services as providers found that involving consumer-providers in mental health teams resulted in psychosocial, mental health symptom, and service use outcomes for clients that were no better or worse than those achieved by professionals employed in similar roles (PUBMED:23543537).
However, there is low-quality evidence suggesting that involving consumer-providers in mental health teams results in a small reduction in clients' use of crisis or emergency services (PUBMED:23543537). Additionally, consumer-providers may differ in their approach to service provision, emphasizing "being there" with the client, compared to non-consumer teams that are more task-oriented (PUBMED:10401895). Despite these differences in approach, the overall impact on traditional outcome measures such as symptom reduction or improved quality of life remains unclear, with limited support for reduced hospitalizations and evidence lacking for other outcome areas (PUBMED:21659293).
In conclusion, while the integration of consumer providers into intensive case management does not appear to significantly improve traditional outcome measures, it may offer other benefits such as reduced use of crisis services and a more recovery-focused approach to care. More rigorous research is needed to further evaluate the impact of integrating consumer providers on teams (PUBMED:21659293, PUBMED:23543537). |
Instruction: Can follow-up phone calls improve patients self-monitoring of blood glucose?
Abstracts:
abstract_id: PUBMED:27862497
Can follow-up phone calls improve patients self-monitoring of blood glucose? Aims And Objectives: To evaluate the effectiveness of follow-up phone calls in improving frequency of glucose monitoring over a three month period in two groups of patients with type 2 diabetes with the goal to lower haemoglobin A1C.
Background: Telephone intervention has been successfully used in improving adherence to diabetes self-management and other chronic disease conditions.
Design: A quality improvement study.
Methods: Forty-one Type 2 diabetic patients with HA1C ≥7.5% were included in the study. The patients were assigned to two groups. The first group of patients received standard diabetic care (Group 1) and the second group of patients (Group 2) received standard diabetic care plus follow-up phone calls within two weeks after a monthly clinic visit over a three-month period. A haemoglobin A1C, if indicated, was done at the initial study visit.
Results: There were no statistically significant differences between the two groups in baseline haemoglobin A1C or in three-month haemoglobin A1C. There were no statistically significant differences in mean haemoglobin A1C change between Group 1 and Group 2. The analysis revealed that there were no statistically significant differences between groups in the number of patients who kept logs of their blood glucose readings throughout the study.
Conclusion: The intervention using telephone follow-up calls did not show a statistically significant improvement in overall HA1C, but there was a clinically significant change in HA1C in the group of patients that received follow-up phone calls.
Relevance To Clinical Practice: The clinical significance of the change in A1C in the follow-up phone call group (Group 2) supports that frequent contact by telephone may likely improve adherence to diabetes self-management.
abstract_id: PUBMED:33591624
Phone calls for improving blood pressure control among hypertensive patients attending private medical practitioners in India: Findings from Mumbai hypertension project. Despite the availability of effective medication, blood pressure control rates are low, particularly in low- and middle-income countries. Adherence to medication and follow-up visits are important factors in blood pressure control. This study assessed the effectiveness of reminder telephone calls on follow-up visits and blood pressure control among hypertensive patients as part of the Mumbai Hypertension Project. This project was initiated by PATH with support from Resolve to Save Lives from January 2019 to February 2020. The study included hypertensive patients attending 164 private practices in Mumbai, India; practitioners screened all adults visiting their clinic during the project period. Among 13 184 hypertensive patients registered, the mean age was 53 years (SD = 12.38) and 52% were female. Among the 11 544 patients who provided phone numbers and gave consent for follow-up calls, 9528 responded to phone calls at least once and 5250 patients followed up at least once. Of the 5250 patients, 82% visited the clinic for a follow-up visit within one month after receiving the phone call. The blood pressure control rate among those who answered phone calls and those who did not answer phone calls increased from 23.6% to 48.8% (P <.001) and from 21.0% to 44.3% (P <.001), respectively. The blood pressure control rate at follow-up was significantly associated with phone calls (OR: 1.51, 95% CI: 1.34 - 1.71). The study demonstrates that telephone call intervention and follow-up visits can improve patient retention in care and, subsequently, blood pressure control among hypertensive patients attending urban private sector clinics in India.
abstract_id: PUBMED:30135824
The impact of phone calls on follow-up rates in an online depression prevention study. Background: Automated Internet intervention studies have generally had large dropout rates for follow-up assessments. Live phone follow-ups have been often used to increase follow-up completion rates.
Objective: To compare, via a randomized study, whether receiving phone calls improves follow-up rates beyond email reminders and financial incentives in a depression prevention study.
Method: A sample of 95 participants (63 English-speakers and 32 Spanish-speakers) was recruited online to participate in a "Healthy Mood" study. Consented participants were randomized to either a Call or a No Call condition. All participants were sent up to three email reminders in one week at 1, 3, and 6 months after consent, and all participants received monetary incentives to complete the surveys. Those in the Call condition received up to ten follow-up phone calls if they did not complete the surveys in response to email reminders.
Results: The follow-up rates for Call vs. No Call conditions at 1, 3, and 6 months, respectively, were as follows: English speakers, 58.6% vs. 52.9%, 62.1% vs. 52.9%, and 68.9% vs. 47.1%; Spanish speakers, 50.0% vs. 35.7%, 33.3% vs. 21.4%, and 33.3% vs. 7.1%. The number of participants who completed follow-up assessments only after being called at 1, 3, and 6 months was 2 (14.3%), 0 (0%), and 3 (25.0%) for English speakers, and 2 (18.9%), 0 (0%), and 1 (7.7%) for Spanish speakers. The number of phone calls made to achieve one completed follow-up was 58.8 in the English-speaking sample and 57.7 in the Spanish-speaking sample.
Conclusions: Adding phone call contacts to email reminders and monetary incentives did increase follow-up rates. However, the rate of response to follow-up was low and the number of phone calls required to achieve one completed follow-up raises concerns about the utility of adding phone calls. We also discuss difficulties with using financial incentives and their implications.
abstract_id: PUBMED:37101035
Blood glucose self monitoring Self monitoring of blood glucose contributes to the integrated management of diabetes mellitus. It, thus, should be available for all patients with diabetes mellitus. Self monitoring of blood glucose improves patients safety, quality of life and glucose control. The current article represents the recommendations of the Austrian Diabetes Association for the use of blood glucose self monitoring according to current scientific evidence.
abstract_id: PUBMED:27052233
Blood glucose self monitoring Self monitoring of blood glucose contributes to the integrated management of diabetes mellitus. It, thus, should be available for all patients with diabetes mellitus type-1 and type-2. Self monitoring of blood glucose improves patients safety, quality of life and glucose control. The current article represents the recommendations of the Austrian Diabetes Association for the use of blood glucose self monitoring according to current scientific evidence.
abstract_id: PUBMED:12746624
Today data management in self-monitoring of blood glucose for diabetic patients Improving diabetes treatment requires intensive glucose monitoring, which is constraining for patients and time-consuming for physicians. Up-to-date data management tools have been developed, following progress in computing technology and home computing. Glucometers with memory and software are able to improve data management of self blood glucose monitoring and personalized interactivity with the physician. They are very important for developing telemedicine systems in diabetes care. These systems are designed to complement the daily care and intensive management of diabetics through telemonitoring and telecare services.
abstract_id: PUBMED:21106908
Using a cell phone-based glucose monitoring system for adolescent diabetes management. Introduction: Mobile technology may be useful in addressing several issues in adolescent diabetes management.
Purpose: To assess the feasibility and acceptability of a cell phone glucose monitoring system for adolescents with type 1 diabetes and their parents.
Methods: The authors recruited patients with type 1 diabetes who had been diagnosed for at least 1 year. Each adolescent used the system for 6 months, filling out surveys every 3 months to measure their usability and satisfaction with the cell phone glucose monitoring system, as well as how use of the system might affect quality of family functioning and diabetes management.
Results: Adolescents reported positive feelings about the technology and the service, even though a concerning number of them had significant technical issues that affected continued use of the device. Nearly all thought that the clinic involvement in monitoring testing behavior was quite acceptable. The use of the Glucophone™ did not, however, significantly change the quality of life of the adolescents, their level of conflict with their parents, their reported self-management of diabetes, or their average glycemic control within the short time frame of the study.
Conclusions: As a feasibility study of the technology, this work was successful in demonstrating that cell phone glucose monitoring technology can be used in an adolescent population to track and assist in self-monitoring behavior. The authors speculate that explicitly attempting to change behavior, perhaps with the use of behavioral contracts, would enhance the technology's ability to improve outcomes.
abstract_id: PUBMED:23250470
Blood glucose self monitoring Self monitoring of blood glucose contributes to the integrated management of diabetes mellitus. It, thus, should be available for all patients with diabetes mellitus type-1 and type-2. Self monitoring of blood glucose improves patients safety, quality of life and glucose control. The current article represents the recommendations of the Austrian Diabetes Association for the use of blood glucose self monitoring according to current scientific evidence.
abstract_id: PUBMED:20046663
The potential use of radio frequency identification devices for active monitoring of blood glucose levels. Imagine a diabetes patient receiving a text message on his mobile phone warning him that his blood glucose level is too low or a patient's mobile phone calling an emergency number when the patient goes into diabetic shock.Both scenarios depend on automatic, continuous monitoring of blood glucose levels and transmission of that information to a phone. The development of advanced biological sensors and integration with passive radio frequency identification technologies are the key to this. These hold the promise of being able to free patients from finger stick sampling or externally worn devices while providing continuous blood glucose monitoring that allows patients to manage their health more actively. To achieve this promise, however, a number of technical issues need to be addressed.
abstract_id: PUBMED:29987085
Physicians' Views of Self-Monitoring of Blood Glucose in Patients With Type 2 Diabetes Not on Insulin. This qualitative study examines to what extent and why physicans still prescribe self-monitoring of blood glucose (SMBG) in patients with non-insulin-treated type 2 diabetes (NITT2D) when the evidence shows it increases cost without improving hemoglobin A1c (HbA1c), general well being, or health-related quality of life. Semistructured phone interviews with 17 primary care physicians indicated that the majority continue to recommend routine self-monitoring of blood glucose due to a compelling belief in its ability to promote the lifestyle changes needed for glycemic control. Targeting physician beliefs about the effectiveness of self-monitoring of blood glucose, and designing robust interventions accordingly, may help reduce this practice.
Answer: Follow-up phone calls have been explored as a method to improve patients' self-monitoring of blood glucose. A quality improvement study with 41 Type 2 diabetic patients found that while the intervention using telephone follow-up calls did not show a statistically significant improvement in overall haemoglobin A1C, there was a clinically significant change in HA1C in the group of patients that received follow-up phone calls (PUBMED:27862497). This suggests that frequent contact by telephone may likely improve adherence to diabetes self-management, which is relevant to clinical practice.
The effectiveness of phone calls has also been demonstrated in other health management contexts, such as blood pressure control among hypertensive patients. A study from the Mumbai Hypertension Project showed that reminder telephone calls improved follow-up visits and blood pressure control among hypertensive patients (PUBMED:33591624).
However, the impact of phone calls on follow-up rates in an online depression prevention study indicated that while adding phone call contacts to email reminders and monetary incentives did increase follow-up rates, the response rate to follow-up was low and the number of phone calls required to achieve one completed follow-up raises concerns about the utility of adding phone calls (PUBMED:30135824).
In general, self-monitoring of blood glucose is recognized as contributing to the integrated management of diabetes mellitus and is recommended for all patients with diabetes mellitus to improve patient safety, quality of life, and glucose control (PUBMED:27052233, PUBMED:37101035, PUBMED:23250470).
Therefore, while follow-up phone calls may not significantly change clinical metrics like haemoglobin A1C, they could potentially improve adherence to self-monitoring practices, which is an important aspect of diabetes self-management. However, the effectiveness and efficiency of such interventions may vary and should be considered in the context of the resources required to implement them. |
Instruction: Can we define surgical site infection accurately in colorectal surgery?
Abstracts:
abstract_id: PUBMED:24811074
Can we define surgical site infection accurately in colorectal surgery? Background: Increasingly, surgical site infection (SSI) is being tied to quality of care. The incidence of SSI after colorectal surgery differs widely. We hypothesize that it is difficult to define SSI reliably and reproducibly when adhering to the U.S. Centers for Disease Control and Prevention (CDC) definitions.
Methods: Elective intra-abdominal colorectal procedures via a clean-contaminated incision performed at a single institution between January 1 and May 1, 2011 were queried. Three attending surgeons examined all patients' records retrospectively for documentation of SSI. These data were compared with the institutional National Surgeon Quality Improvement Program (NSQIP) data with regard to deep and superficial incisional SSI.
Results: Seventy-one cases met the inclusion criteria. There were six SSIs identified by NSQIP, representing 8.4% of cases. Review by the three attending surgeons demonstrated a significantly higher incidence of SSI, at 27%, 38%, and 23% (p=0.002). The overall percent agreement between all reviewers was 82.16% with a kappa of 0.64, indicating only modest inter-rater agreement. Lack of attending surgeon documentation and subjective differences in chart interpretation accounted for most discrepancies between the surgeon and NSQIP SSI capture rates.
Conclusions: This study highlights the difficulty in defining SSI in colon and rectal surgery, which oftentimes is subjective and difficult to discern from the medical record. According to these preliminary data from our institution, there is poor reliability between clinical reviewers in defining SSI on the basis of the CDC criteria, which has serious implications. The interpretation of clinical trials may be jeopardized if we cannot define SSI accurately. Furthermore, according to current CDC definitions and infection tracking strategies, these data suggest that the institutional incidence of SSI may not be a reliable measure by which to compare institutions. Better methods for defining SSI should be implemented if these data are made publicly available and tied to performance measures.
abstract_id: PUBMED:33544977
Diagnosis of colorectal and emergency surgical site infections in the era of enhanced recovery: an all-Wales prospective study. Aim: Surgical site infections (SSIs) are associated with increased morbidity, hospital stay and cost. The literature reports that 25% of patients who undergo colorectal surgical procedures develop an SSI. Due to the enhanced recovery programme, patients are being discharged earlier with some SSIs presenting in primary care, making accurate recording of SSIs difficult. The aim of this study was to accurately record the 30-day SSI rate after surgery performed by colorectal surgeons nationally within Wales.
Method: During March 2019, a national prospective snapshot study of all patients undergoing elective or emergency colorectal and general surgical procedures under the care of a colorectal consultant at 12 Welsh hospitals was completed. There was a multimodal 30-day follow-up using electronic records, clinic visits and/or telephone calls. Diagnosis of SSI was based on Centers for Disease Control and Prevention diagnostic criteria.
Results: Within Wales, of the 545 patients included, 13% developed a SSI within 30 days, with SSI rates of 14.3% for elective surgery and 11.7% for emergency surgery. Of these SSIs, 49.3% were diagnosed in primary care, with 28.2% of patients being managed exclusively in the community. There were two peaks of diagnosis at days 5-7 and days 22-28. SSI rates between laparoscopic (8.6%) and open (16.2%) surgeries were significantly different (p = 0.028), and there was also a significantly different rate of SSI between procedure groups (p = 0.001), with high SSI rates for colon (22%) and rectal (18.9%) surgery compared with general surgical procedures.
Conclusion: This first all-Wales prospective study demonstrated an overall SSI rate of 13%. By incorporating accurate primary care follow-up it was found that 49.3% of these SSIs were diagnosed in primary care.
abstract_id: PUBMED:16266634
Quality improvement program of nosocomial infection in colorectal cancer surgery Background And Objective: Surgical-site infection (SSI) is a complication of colorectal neoplasia surgery. The objectives of the study were to identify the SSI risk factors associated with colon surgery and to describe a strategy of quality improvement using surgical-site infection rates.
Patients And Method: Prospective cohort study of inpatients undergoing colorectal neoplasia surgery between 1st July 2002 and 30th June 2003. A descriptive analysis was implemented. Benchmarking was used as a tool of quality improvement, and the outcomes were measured using the standardized infection ratio (SIR). To define the risk factors, the Chi-square test and logistic regression were used in univariate and multivariate analysis, respectively.
Results: 148 patients were included in the study. The cumulative incidence of SSI was 10.14%, and the incidence rate was 6.47 SSIs per 1000 days. The SIR was 1.53 in the first semester and 1.02 in the second. The multivariate analysis identified two risk factors associated with SSI: unscheduled admission (odds ratio [OR] = 7.47, 95% confidence interval [CI] 2.03-27.48) and an American Society of Anaesthesiologists (ASA) risk index ≥ 3 (OR = 6.77, 95% CI 1.15-39.84).
Conclusions: Unscheduled admission and a high ASA risk index were risk factors associated with SSI in patients undergoing colorectal surgery. The quality improvement program based on benchmarking achieved a reduction of SSI rates to levels similar to the standard ones.
abstract_id: PUBMED:37700554
Safety and efficacy of intraperitoneal drain placement after emergency colorectal surgery: An international, prospective cohort study. Aim: Intraperitoneal drains are often placed during emergency colorectal surgery. However, there is a lack of evidence supporting their use. This study aimed to describe the efficacy and safety of intraperitoneal drain placement after emergency colorectal surgery.
Method: COMPlicAted intra-abdominal collectionS after colorectal Surgery (COMPASS) is a prospective, international, cohort study into which consecutive adult patients undergoing emergency colorectal surgery were enrolled (from 3 February 2020 to 8 March 2020). The primary outcome was the rate of intraperitoneal drain placement. Secondary outcomes included rate and time-to-diagnosis of postoperative intraperitoneal collections, rate of surgical site infections (SSIs), time to discharge and 30-day major postoperative complications (Clavien-Dindo III-V). Multivariable logistic and Cox proportional hazards regressions were used to estimate the independent association of the outcomes with drain placement.
Results: Some 725 patients (median age 68.0 years; 349 [48.1%] women) from 22 countries were included. The drain insertion rate was 53.7% (389 patients). Following multivariable adjustment, drains were not significantly associated with reduced rates (odds ratio [OR] = 1.56, 95% CI: 0.48-5.02, p = 0.457) or earlier detection (hazard ratio [HR] = 1.07, 95% CI: 0.61-1.90, p = 0.805) of collections. Drains were not significantly associated with worse major postoperative complications (OR = 1.26, 95% CI: 0.67-2.36, p = 0.478), delayed hospital discharge (HR = 1.11, 95% CI: 0.91-1.36, p = 0.303) or increased risk of SSIs (OR = 1.61, 95% CI: 0.87-2.99, p = 0.128).
Conclusion: This is the first study investigating placement of intraperitoneal drains following emergency colorectal surgery. The safety and clinical benefit of drains remain uncertain. Equipoise exists for randomized trials to define the safety and efficacy of drains in emergency colorectal surgery.
abstract_id: PUBMED:36517055
The role of laparoscopy in emergency colorectal surgery. Objectives: To assess the outcomes of the laparoscopic approach compared to those of the open approach in emergency colorectal surgery.
Methods: This retrospective cohort study included all patients aged >15 years who underwent emergency colorectal surgery from 2016-2021 at King Abdulaziz Medical City, Riyadh, Saudi Arabia. Patients were divided based on the surgical approach into laparoscopic and open groups.
Results: A total of 241 patients (182 open resections, 59 laparoscopic approaches) were included in this study. The length of stay in the intensive care unit was shorter in the laparoscopic than in the open group (1±3 days vs. 7±16 days). After multivariable logistic regression, patients undergoing laparoscopic resection had a 70% lower risk of surgical site infection than those undergoing open surgery (adjusted odds ratio=0.33, 95% confidence interval: [0.06-1.67]), a difference that was not significant (p=0.18). Lastly, patients who underwent open surgery had a higher 30-day mortality (n=26; 14.3%) compared to those who underwent laparoscopic resection (n=2; 3.4%, p=0.023).
Conclusion: Laparoscopy in emergency colorectal surgery is safe and feasible, with a trend toward better outcomes. Colorectal surgery specialization is an independent predictor of an increased likelihood of undergoing laparoscopy in emergency colorectal surgery.
abstract_id: PUBMED:26923148
Perioperative Glycemic Control During Colorectal Surgery. Hyperglycemia occurs frequently among patients undergoing colorectal surgery and is associated with increased risk of poor clinical outcomes, especially related to surgical site infections. Treating hyperglycemia has become a target of many enhanced recovery after surgery programs developed for colorectal procedures. There are several unique considerations for patients undergoing colorectal surgery including bowel preparations and alterations in oral intake. Focused protocols for those with diabetes and those at risk of hyperglycemia are needed in order to address the specific needs of those undergoing colorectal procedures.
abstract_id: PUBMED:31791356
Role of the intestinal microbiome in colorectal cancer surgery outcomes. Objectives: Growing evidence supports the role of the intestinal microbiome in the carcinogenesis of colorectal cancers, but its impact on colorectal cancer surgery outcomes is not clearly defined. This systematic review aimed to analyze the association between intestinal microbiome composition and postoperative complication and survival following colorectal cancer surgery.
Methods: A systematic review was conducted according to the 2009 PRISMA guidelines. Two independent reviewers searched the literature in a systematic manner through online databases, including Medline, Scopus, Embase, Cochrane Oral Health Group Specialized Register, ProQuest Dissertations and Theses Database, and Google Scholar. Human studies investigating the association between the intestinal microbiome and the short-term (anastomotic leakage, surgical site infection, postoperative ileus) and long-term outcomes (cancer-specific mortality, overall and disease-free survival) of colorectal cancer surgery were selected. Patients with any stage of colorectal cancer were included. The Newcastle-Ottawa scale for case-control and cohort studies was used for the quality assessment of the selected articles.
Results: Overall, 8 studies (7 cohort studies and 1 case-control) published between 2014 and 2018 were included. Only one study focused on short-term surgical outcomes, showing that anastomotic leakage is associated with low microbial diversity and abundance of Lachnospiraceae and Bacteroidaceae families in the non-cancerous resection lines of the stapled anastomoses of colorectal cancer patients. The other 7 studies focused on long-term oncological outcomes, including survival and cancer recurrence. The majority of the studies (5/8) found that a higher level of Fusobacterium nucleatum adherent to the tumor tissue is associated with worse oncological outcomes, in particular, increased cancer-specific mortality, decreased median and overall survival, disease-free and cancer-specific survival rates. Also a high abundance of Bacteroides fragilis was found to be linked to worse outcomes, whereas the relative abundance of the Prevotella-co-abundance group (CAG), the Bacteroides CAG, and the pathogen CAG as well as Faecalibacterium prausnitzii appeared to be associated with better survival.
Conclusions: Based on the limited available evidence, microbiome composition may be associated with colorectal cancer surgery outcomes. Further studies are needed to elucidate the role of the intestinal microbiome as a prognostic factor in colorectal cancer surgery and its possible clinical implications.
abstract_id: PUBMED:30693743
Impact of sequential implementation of multimodal perioperative care pathways on colorectal surgical outcomes Background: Standardized care protocols offer the potential to reduce postoperative complication rates. The purpose of this study was to determine whether there was an additive benefit associated with the sequential implementation of the evidence-based surgical site infection bundle (SSIB) and enhanced recovery after surgery (ERAS) protocols for patients undergoing colorectal surgery in a community hospital.
Methods: Patients at a single institution who underwent elective colorectal surgery between Apr. 1, 2011, and Dec. 31, 2015, were identified by means of American College of Surgeons National Surgical Quality Improvement Program data. Patients were stratified into 3 groups according to the protocol implementation dates: pre-SSIB/pre-ERAS (control), post-SSIB/pre-ERAS and post-SSIB/post-ERAS. Primary outcomes assessed were length of stay and wound complication rates. We used inverse proportional weighting to control for possible differences between the groups.
Results: There were 368 patients included: 94 in the control group, 95 in the post-SSIB/pre-ERAS group and 179 in the post-SSIB/post-ERAS group. In the adjusted analyses, mean length of stay (control group 7.6 d, post-SSIB/post-ERAS group 5.5 d, p = 0.04) and overall wound complication rates (14.7% and 6.5%, respectively, p = 0.049) were reduced after sequential implementation of the protocols.
Conclusion: Sequential implementation of quality-improvement initiatives yielded additive benefit for patients undergoing colorectal surgery in a community hospital, with a decrease in length of stay and wound complication rates. The amount of improvement attributable to either initiative is difficult to define as they were implemented sequentially. The improved outcomes were realized after the introduction of the ERAS protocol in adjusted analyses.
abstract_id: PUBMED:35275247
Attitudes towards Enhanced Recovery after Surgery (ERAS) interventions in colorectal surgery: nationwide survey of Australia and New Zealand colorectal surgeons. Background: Whilst Enhanced Recovery after Surgery (ERAS) has been widely accepted in the international colorectal surgery community, there remains significant variations in ERAS programme implementations, compliance rates and best practice recommendations in international guidelines.
Methods: A questionnaire was distributed to colorectal surgeons from Australia and New Zealand after ethics approval. It evaluated specialist attitudes towards the effectiveness of specific ERAS interventions in improving short term outcomes after colorectal surgery. The data were analysed using a rating scale and graded response model in item response theory (IRT) on Stata MP, version 15 (StataCorp LP, College Station, TX).
Results: Of 300 colorectal surgeons, 95 (31.7%) participated in the survey. Of eighteen ERAS interventions, this study identified eight strategies as most effective in improving ERAS programmes alongside early oral feeding and mobilisation. These included pre-operative iron infusion for anaemic patients (IRT score = 7.82 [95% CI: 6.01-9.16]), minimally invasive surgery (IRT score = 7.77 [95% CI: 5.96-9.07]), early in-dwelling catheter removal (IRT score = 7.69 [95% CI: 5.83-9.01]), pre-operative smoking cessation (IRT score = 7.68 [95% CI: 5.49-9.18]), pre-operative counselling (IRT score = 7.44 [95% CI: 5.58-8.88]), avoiding drains in colon surgery (IRT score = 7.37 [95% CI: 5.17-8.95]), avoiding nasogastric tubes (IRT score = 7.29 [95% CI: 5.32-8.8]) and early drain removal in rectal surgery (IRT score = 5.64 [95% CI: 3.49-7.66]).
Conclusions: This survey has demonstrated the current attitudes of colorectal surgeons from Australia and New Zealand regarding ERAS interventions. Eight of the interventions assessed in this study including pre-operative iron infusion for anaemic patients, minimally invasive surgery, early in-dwelling catheter removal, pre-operative smoking cessation, pre-operative counselling, avoidance of drains in colon surgery, avoiding nasogastric tubes and early drain removal in rectal surgery should be considered an important part of colorectal ERAS programmes.
abstract_id: PUBMED:27320902
Colorectal surgical site infection reduction strategies. Objective: PeaceHealth Sacred Heart Medical Center at RiverBend is a 379 bed hospital; 15,060 surgical procedures were performed in 2014 with 254 being colorectal surgical cases. Using the ACS NSQIP program the hospital was identified as a high outlier for surgical site infection (SSI) in all cases and in colorectal procedures in our July 2012 semiannual report (SAR).
Methods: A best practice bundle to reduce SSIs and a colorectal enhanced recovery pathway were concurrently initiated.
Results: After implementation of these best practice elements, our ACS NSQIP results showed a reduction in our colorectal SSI rate from 17.58% for 2011 to 5.11% (1st decile/low outlier) in the January 2015 SAR. Our SSI rate for all cases was reduced from 4.87% to 1.71% (1st decile/low outlier) during the same time period. A reduction in length of stay for our colorectal surgical patients was also noted.
Conclusions: The ACS NSQIP program was essential to identify colorectal SSI as an area of concern and for providing data to drive quality improvement measures.
Answer: Defining surgical site infection (SSI) accurately in colorectal surgery is challenging and has been shown to be somewhat subjective and difficult to discern from medical records. A study highlighted the difficulty in defining SSI in colon and rectal surgery, with poor reliability between clinical reviewers when using the CDC criteria. The study found significant discrepancies in SSI rates identified by different attending surgeons compared to the institutional National Surgeon Quality Improvement Program (NSQIP) data, suggesting that the institutional incidence of SSI may not be a reliable measure by which to compare institutions (PUBMED:24811074).
Another study in Wales reported that 13% of patients developed an SSI within 30 days, with nearly half of these SSIs diagnosed in primary care. This indicates that many SSIs present after discharge, making accurate recording difficult (PUBMED:33544977). Additionally, a study on nosocomial infection in colorectal cancer surgery identified unscheduled admission and a high American Society of Anaesthesiologists (ASA) risk index as risk factors associated with SSI, suggesting that patient-specific factors can influence SSI rates (PUBMED:16266634).
Furthermore, the role of the intestinal microbiome in colorectal cancer surgery outcomes has been investigated, with some evidence suggesting that microbiome composition may be associated with postoperative complications such as anastomotic leakage and surgical site infection (PUBMED:31791356).
In terms of interventions, the implementation of multimodal perioperative care pathways, including surgical site infection bundles (SSIB) and enhanced recovery after surgery (ERAS) protocols, has been shown to reduce postoperative complication rates and length of stay for patients undergoing colorectal surgery (PUBMED:30693743). A survey of colorectal surgeons in Australia and New Zealand identified several ERAS interventions that are considered effective in improving short-term outcomes after colorectal surgery (PUBMED:35275247).
Overall, while efforts are being made to standardize definitions and improve the accuracy of SSI diagnosis, the current literature suggests that defining SSI in colorectal surgery remains complex and may be influenced by various factors, including the timing of diagnosis, patient characteristics, surgical techniques, and perioperative care protocols. |
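For context on the inter-rater statistic cited from PUBMED:24811074, Cohen's kappa discounts the agreement that reviewers would reach by chance alone, which is why 82.16% raw agreement can still correspond to only modest reliability. As general background (this formula is not stated in the abstract itself), the standard definition is

\[
\kappa = \frac{p_o - p_e}{1 - p_e}
\]

where p_o is the observed proportion of agreement between reviewers (0.8216 here) and p_e is the agreement expected by chance given each reviewer's marginal rate of labelling a case an SSI. The larger the chance-expected agreement, the more the raw percentage is discounted, so a kappa of 0.64 indicates agreement only moderately better than chance despite the high raw figure.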
Instruction: Using patient-reported outcomes (PROs) to compare the providers of surgery: does the choice of measure matter?
Abstracts:
abstract_id: PUBMED:37547056
Sensitivity for Change Analyses of the Patient-Reported Outcomes in Obesity (PROS) Questionnaire: A Prospective Cohort Study. Purpose: Many patients seeking bariatric surgery experience reduced health-related quality of life (HRQOL). A simple clinical tool, the Patient-Reported Outcomes in Obesity (PROS), was developed to address patients' HRQOL concerns during clinical consultations and facilitate meaningful dialogue. The present study aims to explore its sensitivity to change.
Patients And Methods: A prospective study of patients undergoing bariatric surgery was conducted. The patients responded to items on the PROS and the Obesity-related Problems Scale (OP) before surgery and three, 12 and 24 months after surgery. Longitudinal mixed-effects models were applied to estimate the change in PROS and OP scores over time.
Results: Thirty-eight patients were included. A significant change over time was detected for the PROS with the largest effect size at 24 months (effect size -1.34, p < 0.001), while the corresponding effect size for the OP was -1.32 (p < 0.001). In all items of the PROS, the majority of patients responded "not bothered" at 24 months. The items physical activity, pain, sleep and self-esteem showed the largest change in the percentage of patients reporting "not bothered" from baseline to 24 months after surgery.
Conclusion: The PROS is sensitive to change over time and may be used as a brief, easy to administer tool to facilitate a conversation about obesity-specific quality of life in clinical consultations.
abstract_id: PUBMED:29677918
Collecting Patient Reported Outcomes in the Wild: Opportunities and Challenges. Collecting Patient Reported Outcomes (PROs) is generally seen as an effective way to assess the efficacy and appropriateness of medical interventions, from the patients' perspective. In 2016 the Galeazzi Orthopaedic Institute established a digitized program of PROs collection from spine, hip and knee surgery patients. In this work, we report the findings from the data analysis of the responses collected so far about the complementarity of PROs with respect to the data reported by the clinicians, and about the main biases that can undermine their validity and reliability. Although PROs collection is recognized as being far more complex than just asking the patients "how they feel" on a regular basis and entails costs and dedicated electronic platforms, we advocate their further diffusion for the assessment of health technology and clinical procedures.
abstract_id: PUBMED:23632595
Using patient-reported outcomes (PROs) to compare the providers of surgery: does the choice of measure matter? Background: Patient-reported outcomes (PROs) are being used to compare health care providers with little knowledge of how the choice of measure affects such comparisons.
Objectives: To assess how much difference the choice of PRO makes to a provider's adjusted outcome and whether the choice affects a provider's rating.
Research Design: PROs collected in England from patients undergoing: hip replacement (243 providers; 52,692 patients); knee replacement (244; 60,118); varicose vein surgery (100; 11,163); and groin hernia repair (201; 31,714). Four case-mix-adjusted outcomes (mean postoperative disease-specific and generic PRO; proportion achieving a minimally important difference in disease-specific PRO; proportion reporting improvement on single transitional item). We calculated the associations between measures and for each measure, the proportion of providers rated as statistically above or below average and the level of agreement in ratings.
Results: For major surgery, disease-specific PROs were strongly correlated with the generic PRO (hip 0.90; knee 0.88), they rated high proportions of providers as above or below average (hip 25.1%; knee 19.3%) and there was agreement in ratings with the generic PRO. Even so, for a large proportion of providers (hip 30%; knee 16%) their rating depended on the choice of measure. For minor surgery, correlations between measures were mostly weak. The single transitional item identified the most outliers (varicose vein 20%, hernia 10%).
Conclusions: Choice of outcome measure can determine a provider's rating. Measure selection depends on whether the priority is to avoid missing "poor" providers or avoid mislabeling average providers as "poor."
abstract_id: PUBMED:37223232
Patient-Reported Outcomes and Surgical Quality. Delivering high-quality surgical care requires knowing how best to define and measure quality in surgery. Patient-reported outcomes (PROs) enable surgeons, health care systems, and payers to understand meaningful health outcomes from the patient's perspective and can be measured using patient-reported outcome measures (PROMs). As a result, there is much interest in using PROMs in routine surgical care, to guide quality improvement and to inform reimbursement pay structures. This chapter defines PROs and PROMs, differentiates PROMs from other quality measures such as patient-reported experience measures, describes PROMs in the context of routine clinical care, and provides an overview of interpreting PROM data. This chapter also describes how PROMs may be applied to quality improvement and value-based reimbursement in surgery.
abstract_id: PUBMED:36244689
Patient-Reported Outcomes in Endoscopic Endonasal Skull Base Surgery. The functional outcome, quality of life, and patient feedback related to a chosen treatment approach in skull base surgery have become a subject of interest and focused research in recent years. The current advances in endoscopic optical imaging technology and surgical precision have radically lowered the perioperative morbidity associated with skull base surgery. This has pushed toward a higher focus on patient-reported outcomes (PROs). It is now critical to ensure that the offered treatment plan and approach align with the patient's preferences and expectations, in addition to the surgeon's best clinical judgment and experience. PROs represent a view that reflects the patient's own thoughts and perspective on their condition and the management options, without input or interpretations from the surgeon. Having PRO data enables patients the opportunity to learn from the experiences and perspectives of other patients. This input empowers the patient to become an active participant in the decision-making process at different stages of their care. An in-depth PRO evaluation requires specific validated tools and scoring systems, namely the patient-reported outcomes measures (PROM) tools. In this review, we discuss the currently available skull-base-related PROs, the assessment tools used to capture them, and the future trends of this important topic that is in its infancy.
abstract_id: PUBMED:33282393
Choosing the right survey-patient reported outcomes in esophageal surgery. Patient reported outcomes (PROs) fulfill a crucial and unique niche in patient management, providing health-care providers a glimpse into their patients' health experience. This is of utmost importance in patients with benign and malignant disorders of esophagus requiring surgery, which carries significant morbidity, in part due to a high burden of symptoms affecting health-related quality of life (HRQOL). There are a variety of generic and disease-specific patient reported outcome measures (PROMs) available for use in esophageal surgery. This article provides a broad overview of commonly used HRQOL instruments in esophageal surgery, including their utility in comparative effectiveness research, prognostication and shared decision-making for patients undergoing surgery for benign and malignant disorders of the esophagus.
abstract_id: PUBMED:35467181
Patient reported outcomes in genital gender-affirming surgery: the time is now. Transgender and non-binary (TGNB) individuals often experience gender dysphoria. TGNB individuals with gender dysphoria may undergo genital gender-affirming surgery including vaginoplasty, phalloplasty, or metoidioplasty so that their genitourinary anatomy is congruent with their experienced gender. Given decreasing social stigma and increasing coverage from private and public payers, there has been a rapid increase in genital gender-affirming surgery in the past few years. As the incidence of genital gender-affirming surgery increases, a concurrent increase in the development and utilization of patient reported outcome measurement tools is critical. To date, there is no systematic way to assess and measure patients' perspectives on their surgeries, nor is there a validated measure to capture patient reported outcomes for TGNB individuals undergoing genital gender-affirming surgery. Without a systematic way to assess and measure patients' perspectives on their care, there may be fragmentation of care. This fragmentation may result in challenges to ensure patients' goals are at the forefront of shared decision-making. As we aim to increase access to surgical care for TGNB individuals, it is important to ensure this care is patient-centered and high-quality. The development of patient-reported outcomes for patients undergoing genital gender-affirming surgery is the first step in ensuring high quality patient-centered care. Herein, we discuss the critical need for development of validated patient reported outcome measures for transgender and non-binary patients undergoing genital reconstruction. We also propose a model of patient-engaged patient reported outcome measure development.
abstract_id: PUBMED:33045757
Patient Reported Outcomes (PROs) - a Tool for Strengthening Patient Involvement and Measuring Outcome in Orthopaedic Outpatient Rehabilitation. The present study serves to establish Patient Reported Outcomes (PROs) as a tool for strengthening patient involvement and measuring outcomes in orthopaedic outpatient rehabilitation. Assessments by FFbH-R (Hannover Back Function Questionnaire for patients with back problems), Quick-DASH (Disabilities of Arm, Shoulder, and Hand Score for patients with upper extremity lesions), and LEFS (Lower Extremity Function Scale for patients with lower extremity lesions) were employed in 20 outpatient rehabilitation centres over a period of 12 months to evaluate changes in performance and participation from the subjective patient perspective. The questionnaires were used as follows: FFbH-R for status post lumbar disc surgery, cervical disc surgery, spinal canal decompression, conservative back pain treatment, or other; Quick-DASH for status post rotator cuff reconstruction, shoulder arthroplasty, fracture (conservative or osteosynthesis), or other; LEFS for status post hip arthroplasty, knee arthroplasty, anterior cruciate ligament repair, osteotomy, fracture (conservative treatment or osteosynthesis), or other. Analysis of the 6,751 usable data sets demonstrated significant positive changes in all scores and diagnostic subgroups. The mean difference in score was 14.2 points in the FFbH-R, -22 points in the Quick-DASH and 18 points in the LEFS. Thus, this study proves the positive effects of orthopaedic rehabilitation in an outpatient setting. PROs were instituted on a permanent basis in seven of the participating institutions.
abstract_id: PUBMED:37223227
Patient-Reported Outcomes in Colorectal Surgery. Given the increased life expectancy and improvements in the treatment of colorectal patients, the success of a treatment course can no longer be determined only by objective outcomes. Health care providers ought to take into consideration the impact an intervention will have on the quality of life of patients. Endpoints that take into account the patient's perspective are defined as patient-reported outcomes (PROs). PROs are assessed through patient-reported outcome measures (PROMs), usually in the form of questionnaires. PROs are especially important in colorectal surgery, whose procedures can often be associated with some degree of postoperative functional impairment. Several PROMs are available for colorectal surgery patients. However, while some scientific societies have offered recommendations, there is no standardization in the field and PROMs are seldom implemented in clinical practice. The routine use of validated PROMs can guarantee that functional outcomes are followed over time; this way, they can be addressed in case of worsening. This review will provide an overview of the most commonly used PROMs in colorectal surgery, both generic and disease specific, as well as a summary of the available evidence in support of their routine utilization.
abstract_id: PUBMED:37688832
Patient-reported outcomes: A primer for plastic surgeons. Surgical care today is no longer evaluated only on clinical outcomes but also on holistic patient wellbeing. Patient-reported outcomes (PROs) are a representation of the patient's perspective on their results and wellbeing. The aim of this review is to establish PROs as the center of healthcare and plastic surgery, to delineate important PROs in plastic surgery practice and research, to discuss the future of PROs within our discipline, and to encourage surgeons to incorporate PROs into their practice. PROs are an important parallel of clinical outcomes in that they can use the patient's perspective to 1) support clinical findings, 2) detect differences in care when there are no clear clinical differences, 3) track progress longitudinally, and 4) support systemic improvements in healthcare. Plastic surgery as a field is naturally aligned with PROs because, as a discipline, we focus on patient form and function. The emerging forefronts of plastic surgery such as lymphedema care, gender-affirming care, peripheral nerve surgery, migraine surgery, and breast implant illness are critically dependent on PROs. In the next decade, we predict that there will be a continued proliferation of robust PRO measures and integration into healthcare delivery. Outcomes research in surgery should continue to evolve as surgeons provide increasingly more benefits to improve patient wellbeing. Plastic surgeons must continue to play a prominent role in the future of PROs.
Answer: Yes, the choice of measure in patient-reported outcomes (PROs) does matter when comparing providers of surgery. One study assessed the impact of different PRO measures on provider ratings after surgeries such as hip replacement, knee replacement, varicose vein surgery, and groin hernia repair and found that the choice of PRO measure can significantly determine a provider's rating. For major surgeries, disease-specific PROs were strongly correlated with generic PROs, and they rated a high proportion of providers as above or below average. However, for a substantial proportion of providers, their rating depended on the choice of measure. For minor surgeries, correlations between measures were mostly weak, and the single transitional item identified the most outliers. This indicates that the selection of outcome measures is crucial and depends on whether the priority is to avoid missing "poor" providers or to avoid mislabeling average providers as "poor" (PUBMED:23632595).
Furthermore, the use of PROs in surgical care is increasingly recognized as important for understanding health outcomes from the patient's perspective. PROs can be measured using patient-reported outcome measures (PROMs), which differ from other quality measures such as patient-reported experience measures. PROMs can be applied to guide quality improvement and inform reimbursement pay structures, highlighting the importance of selecting appropriate measures that reflect the quality of surgical care (PUBMED:37223232).
In conclusion, the choice of PRO measure is significant when comparing surgical providers, as it can affect the adjusted outcomes and ratings of providers. The selection of the appropriate measure should align with the goals of the comparison, whether it is to identify providers who may be underperforming or to ensure that providers are not incorrectly labeled based on the outcomes measured. |
Instruction: Is diagnostic accuracy of fine needle aspiration on solid pancreatic lesions aspiration-related?
Abstracts:
abstract_id: PUBMED:24704290
Is diagnostic accuracy of fine needle aspiration on solid pancreatic lesions aspiration-related? A multicentre randomised trial. Background: Endoscopic ultrasound fine needle aspiration has a central role in the diagnostic algorithm of solid pancreatic masses. Data comparing the fine needle aspiration performed with different aspiration volume and without aspiration are lacking. We compared endoscopic ultrasound fine needle aspiration performed with the 22 gauge needle with different aspiration volumes (10, 20 and 0 ml), for adequacy, diagnostic accuracy and complications.
Methods: Prospective clinical study at four referral centres. Endoscopic ultrasound fine needle aspiration was performed with a 22G needle both with volume aspiration (10 and 20 cc) and without a syringe, in randomly assigned sequence. The cytopathologist was blinded as to which aspiration was used for each specimen.
Results: 100 patients met the inclusion criteria, 88 completed the study. The masses had a mean size of 32.21±11.24 mm. Sample adequacy evaluated on site was 87.5% with 20 ml aspiration vs. 76.1% with 10 ml (p=0.051), and 45.4% without aspiration (20 ml vs. 0 ml p<0.001; 10 ml vs. 0 ml p<0.001). The diagnostic accuracy was significantly better with 20 ml than with 10 ml and 0 ml (86.2% vs. 69.0% vs. 49.4% p<0.001).
Conclusions: A significantly higher adequacy and accuracy were observed with the 20 ml aspiration puncture, therefore performing all passes with this volume aspiration may improve the diagnostic power of fine needle aspiration.
abstract_id: PUBMED:35162325
Direct Comparison of Elastography Endoscopic Ultrasound Fine-Needle Aspiration and B-Mode Endoscopic Ultrasound Fine-Needle Aspiration in Diagnosing Solid Pancreatic Lesions. Elastography endoscopic ultrasound (E-EUS) has been shown to be a valuable supplement to endoscopic ultrasound fine-needle aspiration (EUS-FNA) in differentiating solid pancreatic lesions, but it has not been proven that guiding EUS-FNA with E-EUS improves its performance. Our study aimed to evaluate whether E-EUS fine-needle aspiration (E-EUS-FNA) was superior to B-mode EUS-FNA for the diagnosis of solid pancreatic masses and whether the diagnostic rate was affected by specific factors. Our prospective study was conducted between 2019 and 2020 by recruiting patients with solid pancreatic masses. E-EUS examination was followed by one pass of E-EUS-FNA towards the blue part of the lesion and a second pass of EUS-FNA. The final diagnosis was based on surgery, E-EUS-FNA or EUS-FNA results, or a 12-month follow-up. Sixty patients with solid pancreatic lesions were evaluated. The sensitivity, specificity, and accuracy for diagnosing malignancy using E-EUS-FNA and EUS-FNA were 89.5%, 100%, 90%, 93%, 100%, and 93.3%, respectively, but the differences were not significant. Neither mass location nor lesion size influenced the results. The lengths of the core obtained during E-EUS-FNA and EUS-FNA were similar. E-EUS-FNA in solid pancreatic lesions was not superior to B-mode EUS-FNA.
abstract_id: PUBMED:32892519
Endoscopic Ultrasound-Guided Fine Needle Biopsy Needles Provide Higher Diagnostic Yield Compared to Endoscopic Ultrasound-Guided Fine Needle Aspiration Needles When Sampling Solid Pancreatic Lesions: A Meta-Analysis. Background/aims: Studies comparing the utility of endoscopic ultrasound-guided fine needle aspiration (EUS-FNA) and endoscopic ultrasound-guided fine needle biopsy (EUS-FNB) for solid pancreatic lesions have been inconclusive with no clear superiority. The aim of this meta-analysis was to compare the diagnostic accuracy and safety between the two sampling techniques.
Methods: We performed a systematic search of randomized controlled trials published between 2012 and 2019. The primary outcome was overall diagnostic accuracy. Secondary outcomes included adverse event rates, cytopathologic and histopathologic accuracy, and the mean number of passes required to obtain adequate tissue between FNA and FNB needles. Fixed and random effect models with pooled estimates of target outcomes were developed.
Results: Eleven studies involving 1,365 participants were included for analysis. When compared to FNB, FNA had a significant reduction in diagnostic accuracy (81% and 87%, p=0.005). In addition, FNA provided reduced cytopathologic accuracy (82% and 89%, p=0.04) and an increased number of mean passes required compared to FNB (2.3 and 1.6, respectively, p<0.0001). There was no difference in adverse event rate between FNA and FNB needles (1.8% and 2.3% respectively, p=0.64).
Conclusion: FNB provides superior diagnostic accuracy without compromising safety when compared to FNA. FNB should be readily considered by endosonographers when evaluating solid pancreatic masses.
abstract_id: PUBMED:24404556
Accuracy of Endoscopic Ultrasound-guided Fine Needle Aspiration in Diagnosing Solid Pseudopapillary Tumor. Background: Solid pseudopapillary tumors are rare pancreatic tumors. Accurate preoperative diagnosis helps in planning of the surgery.
Aim: This study was to evaluate accuracy of endoscopic ultrasound-guided fine needle aspiration and immunohistochemistry in diagnosing solid pseudopapillary tumors.
Materials And Methods: A retrospective review of medical records was performed to identify patients treated for solid pseudopapillary tumors over a 5-year period. Patients who were noted to have pancreatic lesions on abdominal computed tomography underwent endoscopic ultrasound. Fine needle aspiration was obtained from each of these lesions and subjected to immunohistochemistry.
Results: Five patients were identified. Endoscopic ultrasound was able to identify the pancreatic lesions in all five patients noted on abdominal computed tomography. Solid pseudopapillary tumors were diagnosed by immunohistochemistry. All five patients underwent surgery and the resected lesions confirmed solid pseudopapillary tumors in 80% of patients.
Conclusion: Endoscopic ultrasound-guided fine needle aspiration has a higher degree of accuracy in diagnosing solid pseudopapillary tumors.
abstract_id: PUBMED:35004982
Endoscopic ultrasound fine needle aspiration vs fine needle biopsy in solid lesions: A multi-center analysis. Background: While endoscopic ultrasound (EUS)-guided fine needle aspiration (FNA) is considered a preferred technique for tissue sampling for solid lesions, fine needle biopsy (FNB) has recently been developed.
Aim: To compare the accuracy of FNB vs FNA in determining the diagnosis of solid lesions.
Methods: A retrospective, multi-center study of EUS-guided tissue sampling using FNA vs FNB needles. Measured outcomes included diagnostic test characteristics (i.e., sensitivity, specificity, accuracy), use of rapid on-site evaluation (ROSE), and adverse events. Subgroup analyses were performed by type of lesion and diagnostic yield with or without ROSE. A multivariable logistic regression was also performed.
Results: A total of 1168 patients with solid lesions (n = 468 FNA; n = 700 FNB) underwent EUS-guided sampling. Mean age was 65.02 ± 12.13 years. Overall, sensitivity, specificity and accuracy were superior for FNB vs FNA (84.70% vs 74.53%; 99.29% vs 96.62%; and 87.62% vs 81.55%, respectively; P < 0.001). On subgroup analyses, sensitivity, specificity, and accuracy of FNB alone were similar to FNA + ROSE [(81.66% vs 86.45%; P = 0.142), (100% vs 100%; P = 1.00) and (88.40% vs 85.43%; P = 0.320)]. There was no difference in diagnostic yield of FNB alone vs FNB + ROSE (P > 0.05). Multivariate analysis showed no significant predictor for better accuracy. On subgroup analyses, FNB was superior to FNA for non-pancreatic lesions; however, there was no difference between the techniques among pancreatic lesions. One adverse event was reported in each group.
Conclusion: FNB is superior to FNA with equivalent diagnostic test characteristics compared to FNA + ROSE in the diagnosis of non-pancreatic solid lesions. Our results suggest that EUS-FNB may eliminate the need of ROSE and should be employed as a first-line method in the diagnosis of solid lesions.
abstract_id: PUBMED:31313531
Endoscopic ultrasound guided fine-needle aspiration vs core needle biopsy for solid pancreatic lesions: Comparison of diagnostic accuracy and procedural efficiency. Background: Endoscopic ultrasound (EUS) guided core needle biopsies (CNB) are increasingly being performed to diagnose solid pancreatic lesions. However, studies have been conflicting in terms of CNB improving diagnostic accuracy and procedural efficiency vs fine-needle aspiration (FNA), which this study aims to elucidate.
Methods: Data were prospectively collected on consecutive patients with solid pancreatic or peripancreatic lesions at a single tertiary care center from November 2015 to November 2016 that underwent either FNA or CNB. Patient demographics, characteristics of lesions, diagnostic accuracy, final and follow-up pathology, use of rapid on-site evaluation (ROSE), complications, and procedure variables were obtained.
Results: A total of 75 FNA and 48 CNB were performed; of these, 13 patients had both. Mean passes were lower with CNB compared to FNA (2.4 vs 2.9, P = .02). Use of ROSE was higher for FNA (97.3% vs 68.1%, P = .001). Mean procedure time was shorter with CNB (34.1 minutes vs 51.2 minutes, P = .02) and diagnostic accuracy was similar (89.2% vs 89.4%, P = .98). There was no difference in diagnostic accuracy when ROSE was performed for CNB vs not performed (93.5% vs 85.7%, P = .58). Additionally, diagnostic accuracy of combined FNA and CNB procedures was 92.3%, which was comparable to FNA (P = .73) or CNB (P = .52) alone.
Conclusion: FNA and CNB had comparable safety and diagnostic accuracy. Use of CNB resulted in less number of passes and shorter procedure time as compared to FNA. Moreover, diagnostic accuracy for CNB with or without ROSE was similar.
abstract_id: PUBMED:34584980
Endoscopic ultrasound-guided fine-needle biopsy histology with a 22-gauge Franseen needle and fine-needle aspiration liquid-based cytology with a conventional 25-gauge needle provide comparable diagnostic accuracy in solid pancreatic lesions. Background And Aim: Fine-needle biopsy (FNB) needles obtain more core samples and support the shift from cytologic to histologic evaluation; however, recent studies have proposed a superior diagnostic potential for liquid-based cytology (LBC). This study compared the diagnostic ability of endoscopic ultrasound (EUS)-guided FNB histology with a 22-gauge Franseen needle (22G-FNB-H) and fine-needle aspiration (FNA) LBC with a conventional 25-gauge needle (25G-FNA-LBC).
Methods: We analyzed 46 patients who underwent both 22G-FNB-H and 25G-FNA-LBC in the same lesion during the same endoscopic procedure. This study evaluated the diagnostic ability of each needle, diagnostic concordance between needles, and incremental diagnostic effect of both needles compared to using each needle alone.
Results: The agreement rate for malignancy between both techniques was 93.5% (kappa value = 0.82). There was no significant difference in the diagnostic ability of both methods. 22G-FNB-H and 25G-FNA-LBC provided an incremental diagnostic accuracy in two (4.3%) cases and one (2.2%) case, respectively.
Conclusion: Our study demonstrated that the diagnostic accuracies of 25G-FNA-LBC and 22G-FNB-H for solid pancreatic lesions were comparable. A conventional 25-gauge needle that punctures lesions with ease can be used in difficult cases and according to the skill of the endoscopist.
abstract_id: PUBMED:35614028
Diagnostic value of endoscopic ultrasound-guided fine needle aspiration with rapid on-site evaluation performed by endoscopists in solid pancreatic lesions: A prospective, randomized controlled trial. Background And Aim: Endoscopic ultrasound-guided fine needle aspiration (EUS-FNA) is the most established diagnostic method for pancreatic tissue. Rapid on-site evaluation by a trained endoscopist (self-ROSE) can improve the diagnostic accuracy. This research aimed to analyze the application value of self-ROSE for EUS-FNA in solid pancreatic lesions.
Methods: A total of 194 consecutive patients with solid pancreatic lesions in Nanjing Drum Tower Hospital were randomized in a 1:1 ratio to EUS-FNA with or without self-ROSE in this single-center randomized controlled trial. Before initiating self-ROSE, the endoscopist underwent training in pancreatic cytologic sample adequacy assessment and cytopathological diagnosis of EUS-FNA in the pathology department for 1 month. Some of the EUS-FNA slides were air dried, stained on-site with BASO Liu's reagent, and evaluated on-site in the self-ROSE group. Between the two groups, the diagnostic performance of EUS-FNA was analyzed, including sensitivity, specificity, positive predictive value, negative predictive value, and accuracy, with a comparison of the number of needle passes and the complication rates.
Results: The accuracy, sensitivity, specificity, positive predictive value, and negative predictive value were 94.8%, 94.4%, 100%, 100%, and 58.3% in the self-ROSE group, respectively, and 70.1%, 65.1%, 100%, 100%, and 32.6% in the non-self-ROSE group. The diagnostic accuracy (P < 0.001) and sensitivity (P < 0.001) were both significantly increased during EUS-FNA in the self-ROSE group compared to the non-self-ROSE group. The rate of cytologic sample adequacy was 100% in the self-ROSE group and 80.4% in the non-self-ROSE group. The number of passes was 3.38 ± 1.00 in the self-ROSE group and 3.22 ± 0.89 in the non-self-ROSE group (P = 0.228). No complications occurred in either group. There was acceptable consistency between endoscopist and pathologist in the cytopathological diagnosis (kappa = 0.666, P < 0.05) and in the sample adequacy rate (kappa = 1.000, P < 0.001).
Conclusion: Our results demonstrated that self-ROSE is valuable for EUS-FNA in the diagnosis of solid pancreatic lesions and is an important choice to routinely increase the accuracy of EUS-FNA in centers without ROSE assessment.
abstract_id: PUBMED:25793739
Contrast-enhanced harmonic endoscopic ultrasound-guided fine-needle aspiration in the diagnosis of solid pancreatic lesions: a retrospective study. Background: The negative predictive value of endoscopic ultrasonography-guided fine needle aspiration for the diagnosis of solid pancreatic lesions remains low, and the biopsy specimens are sometimes inadequate for appropriate pathological diagnosis.
Aims: To evaluate the usefulness of a novel method of contrast-enhanced harmonic endoscopic ultrasonography-guided fine-needle aspiration for the differential diagnosis and adequate sampling of solid pancreatic lesions.
Methods: Patients with a diagnosis of solid pancreatic lesions who underwent fine-needle aspiration guided by contrast-enhanced harmonic endoscopic ultrasonography or by endoscopic ultrasonography from October 2010 to July 2013 were retrospectively identified and classified into the CH-EUS or EUS group, respectively. Surgical pathology and/or follow-up results were defined as the final diagnosis. Operating characteristics and adequacy of biopsy specimens by fine-needle aspiration were compared between the two groups.
Results: Operating characteristics for contrast-enhanced harmonic endoscopic ultrasonography-guided fine-needle aspiration in solid pancreatic lesions were as follows: area under the curve = 0.908, sensitivity = 81.6%, specificity = 100%, positive predictive value = 100%, negative predictive value = 74.1%, and accuracy = 87.9%. The percentage of adequate biopsy specimens in the CH-EUS group (96.6%) was greater than that in the EUS group (86.7%).
Conclusion: Simultaneous contrast-enhanced harmonic endoscopic ultrasonography during fine-needle aspiration is useful for improving the diagnostic yield and adequate sampling of solid pancreatic lesions.
abstract_id: PUBMED:34943512
Two Needle Passes Achieve Similar Diagnostic Yield Compared to Three Passes Regarding Diagnosis of Solid Pancreatic Lesions in Endoscopic Ultrasound-Guided Fine Needle Aspiration. Current guidelines advocate 3-4 passes with a fine-needle aspiration (FNA) to achieve high rates of diagnostic samples for malignancy when performing endoscopic ultrasound (EUS)-guided sampling of solid pancreatic lesions, in the absence of on-site cytologic evaluation. The aim of this study is to compare 2 vs. 3 needle passes in EUS-FNA for solid pancreatic lesions in terms of incremental diagnostic yield and to identify factors associated with the procedure's outcome. In this retrospective study, 2 passes of EUS-FNA were found to have similar diagnostic yield compared to 3 passes for the diagnosis of solid pancreatic masses, suggesting that there might be no significant incremental tissue yield when 3 passes are performed.
Answer: Yes, the diagnostic accuracy of fine needle aspiration (FNA) on solid pancreatic lesions appears to be aspiration related. A multicenter randomized trial compared endoscopic ultrasound fine needle aspiration (EUS-FNA) performed with a 22-gauge needle with different aspiration volumes (10, 20, and 0 ml) and found that a significantly higher adequacy and accuracy were observed with the 20 ml aspiration volume. The study concluded that performing all passes with this volume aspiration may improve the diagnostic power of FNA (PUBMED:24704290).
Additionally, a meta-analysis comparing endoscopic ultrasound-guided fine needle biopsy (EUS-FNB) needles to EUS-FNA needles when sampling solid pancreatic lesions found that FNB provides superior diagnostic accuracy without compromising safety compared to FNA (PUBMED:32892519). This suggests that the technique and type of needle used in the aspiration process can influence the diagnostic yield.
Furthermore, a study comparing EUS-FNA with rapid on-site evaluation (ROSE) performed by endoscopists to EUS-FNA without ROSE in solid pancreatic lesions found that the diagnostic accuracy and sensitivity were significantly increased with the use of self-ROSE (PUBMED:35614028). This indicates that immediate evaluation of the aspirated material can also affect the diagnostic accuracy.
In summary, the diagnostic accuracy of FNA for solid pancreatic lesions is related to the aspiration volume, the type of needle used, and the immediate evaluation of the aspirated material. |
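As general background for the diagnostic performance figures quoted in this section (these are standard definitions, not calculations reported by any of the cited studies), the measures are defined from true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN) as

\[
\text{Sensitivity} = \frac{TP}{TP+FN},\quad
\text{Specificity} = \frac{TN}{TN+FP},\quad
\text{PPV} = \frac{TP}{TP+FP},\quad
\text{NPV} = \frac{TN}{TN+FN},\quad
\text{Accuracy} = \frac{TP+TN}{TP+TN+FP+FN}.
\]

In the FNA studies summarized above, specificity and positive predictive value are often reported as 100% because a positive cytologic diagnosis of malignancy is treated as definitive, so the measures that actually separate the sampling techniques are sensitivity, negative predictive value and overall accuracy.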
Instruction: Unipolar mania: a distinct entity or characteristic of manic preponderance?
Abstracts:
abstract_id: PUBMED:22949290
Unipolar mania: a distinct entity or characteristic of manic preponderance? Objective: It has been reported that fewer patients with unipolar mania respond to lithium prophylaxis than do those with classical bipolar disorder. This study aimed to determine if the difference in response to lithium is related to unipolar mania or to a high preponderance of mania during the course of bipolarity.
Materials And Methods: The study included bipolar-I patients (according to DSM-IV criteria) who had a ≥ 2-year history of either lithium or valproate prophylaxis as monotherapy. The response rates in the patients with unipolar mania and those with classical bipolar disorder were compared. Then, the response rates to lithium were compared between all patients with a manic episode rate <50% vs. >50%, and <80% vs. >80%, during their course. Finally, the above comparisons were repeated, excluding the patients with unipolar mania.
Results: The study included 121 bipolar-I patients (34 unipolar mania and 87 classical bipolar disorder). The response rate to lithium prophylaxis was significantly lower in the unipolar mania group than in the classical bipolar group, whereas the response rate to valproate prophylaxis was similar in both groups. Additionally, significantly fewer patients with a manic episode rate >80% during their course responded to lithium, followed by those with a manic episode rate >50%; however, these differences disappeared when the unipolar mania group was excluded from the comparison.
Conclusion: Fewer patients with unipolar mania responded to lithium prophylaxis than those with classical bipolar disorder, which appeared to be related to unipolar mania, rather than to a high manic predominance during the disease course. On the other hand, response to valproate prophylaxis was similar in the unipolar mania and classical bipolar disorder groups.
abstract_id: PUBMED:26721636
Receiver Operating Characteristic Curve Analysis of Screening Tools for Bipolar Disorder Comorbid With ADHD in Schoolchildren. Objective: We compared Child Behavior Checklist (CBCL)-AAA (Attention Problems, Aggressive Behavior, and Anxious/Depressed) and Parent-Young Mania Rating Scale (P-YMRS) profiles in Brazilian children with ADHD, pediatric-onset bipolar disorder (PBD), and PBD + ADHD. Method: Following analyses of variance or Kruskal-Wallis tests with multiple-comparison Least Significant Difference (LSD) or Dunn's Tests, thresholds were determined by Mann-Whitney U Tests and receiver operating characteristic (ROC) plots. Results: Relative to ADHD, PBD and PBD + ADHD groups scored higher on the Anxious/Depressed, Thought Problems, Rule-Breaking, and Aggressive Behavior subscales and Conduct/Delinquency Diagnostic Scale of the CBCL; all three had similar attention problems. The PBD and PBD + ADHD groups scored higher than the ADHD and healthy control (HC) groups on all CBCL problem scales. The AAA-profile ROC had good diagnostic prediction of PBD + ADHD. PBD and PBD-ADHD were associated with (similarly) elevated P-YMRS scores. Conclusion: The CBCL-PBD and P-YMRS can be used to screen for manic behavior and assist in differential diagnosis.
abstract_id: PUBMED:24210629
Unipolar mania: a distinct entity? Background: Whether or not unipolar mania is a separate nosological entity remains a subject of dispute. This review discusses that question in light of recent data.
Methods: Unipolar mania studies in the PUBMED database and relevant publications and cross-references were searched.
Results: There seems to be a bipolar subgroup with a stable, unipolar recurrent manic course, and 15-20% of bipolar patients may be unipolar manic. Unipolar mania may be more common in females. It seems to have a slightly earlier age of illness onset, more grandiosity, psychotic symptoms, hyperthymic temperament, but less rapid-cycling, suicidality and comorbid anxiety disorders. It seems to have a better course of illness with better social and professional adjustment. However, its response to lithium prophylaxis seems to be worse, although its response to valproate is the same when compared to that of classical bipolar disorder.
Limitations: The few studies on the subject are mainly retrospective, and the primary methodological criticism is the uncertainty of the diagnostic criteria for unipolar mania.
Conclusions: The results indicate that unipolar mania displays some different clinical characteristics from those of classical bipolar disorder. However, whether or not it is a separate nosological entity has not been determined due to the insufficiency of relevant data. Further studies with standardized diagnostic criteria are needed. Considering unipolar mania as a course specifier of bipolar disorder could be an important step in this respect.
abstract_id: PUBMED:34820682
Adult attention deficit hyperactivity disorder (ADHD) in the clinical descriptions and classificatory reflections of Gustav Specht (1905) and Hermann Paul Nitsche (1910) The notion that the adult form of attention deficit hyperactivity disorder (ADHD) is not a construct of modern psychiatry is increasingly prevalent. Looking into the history of psychiatry can make an enlightening contribution here. Guided by this aim and specifically following literature referred to by Emil Kraepelin (1856-1926), we analyzed the content of one study each by Gustav Specht (1860-1940) and the later Nazi psychiatrist Hermann Paul Nitsche (1876-1948) from 1905 and 1910, respectively, on the topic of chronic mania. Our investigation concluded that in their case studies both authors described people who would today be diagnosed as suffering from adult ADHD, as the clinical descriptions reveal core symptoms of this entity as defined by modern classifications. They also mentioned currently discussed research questions. Both authors expressed their dissatisfaction with the classificatory situation of these patients at the time. Specht even postulated a "completely independent mental illness" that he called "chronic mania", under which he classified all the patients suffering from today's adult ADHD. He also pointed out that this diagnosis was not widely recognized at the time by psychiatrists as a full-fledged form of illness but was used more as a diagnosis to avoid the embarrassment of not having one. Nitsche saw the "chronic manic states", as he called them, as a "clinical peculiarity" but assigned them to the large group of "manic depressive insanity", which could only be more finely differentiated in the future.
abstract_id: PUBMED:4081648
ADD psychosis as a separate entity. "Attention deficit disorder (ADD) psychosis" merits delineation as a separate entity. It constitutes the end result of the effects of a particular neurological deficit (ADD) on personality organization. It is my belief that about 10 percent of psychoses currently diagnosed, most often as schizophrenic and sometimes as affective psychosis, are best considered a separate organic psychosis, i.e., an ADD psychosis. This ADD psychosis, then, is not merely a subgroup of schizophrenia, as I once thought. It merits a separate designation because its etiology, pathogenesis, and life history are different from those of the schizophrenic syndrome. The family histories are also different, as are the psychological findings. The treatment response is so different that it merits urgent consideration. Prognosis, both short range and long range, also seems different from that of the other psychoses.
abstract_id: PUBMED:2928414
Commentary on Gracia et al.: diagnostic entity or dynamic processes? The case of Mr. G is a fascinating and beautifully presented, although sad, history of a patient. The clarity of the presentation and the discussion are particularly impressive in this example of the use of longitudinal data for descriptive diagnosis and conceptualization. The report exemplifies current trends in descriptive psychiatry, which focus on diagnosing an illness as a whole. The illness itself is considered a relatively static entity, even if it includes changes within its course. From this perspective, longitudinal data provide information for deciding what type of illness is being described. In my commentary, I would like to focus on a different way of using longitudinal data from that employed by the authors--a different type of longitudinal perspective, one that does not have the primary goal of defining a type of disorder diagnostically. I would like instead to focus on understanding this patient in terms of longitudinal processes rather than as someone afflicted with a persisting diagnostic entity. Attention to longitudinal processes can raise different questions and suggest what further information would be needed to understand these processes and plan optimal treatment. This more dynamic approach to understanding processes is not mutually exclusive with the static approach to diagnosis, but the two orientations provide very different perspectives on assessment, conceptualization and treatment.
abstract_id: PUBMED:495794
Unipolar mania: a distinct clinical entity? Of the 241 lithium clinic patients at the New York State Psychiatric Institute with bipolar I affective disorder, 38 (15.7%) had never been hospitalized or somatically treated for depression. These "unipolar manic" patients had a significantly lower incidence of rapid cycling and suicide attempts than other bipolar I patients. No differences were found, however, in risk of illness in first-degree relatives. Lithium was an effective prophylactic agent in these patients. Some patients originally classified as "unipolar manic" were found to have depressive episodes with additional information and clinical observation. "Unipolar mania" appears to be a subgroup of bipolar I illness, but there are no data to support the hypothesis that it is a separate entity.
abstract_id: PUBMED:31114930
Attention-deficit/hyperactivity disorder in adults in the clinical description and classification of Emil Kraepelin This study presents descriptions of symptoms specific to the adult form of attention-deficit/hyperactivity disorder (ADHD) in the 8th edition of the Textbook on Psychiatry by Emil Kraepelin (1856-1926). To identify whether ADHD is a new, fashionable phenomenon in adults or whether early psychiatrists also saw such patients and how they classified them, this textbook is an essential source. Published between 1905 and 1915, it can be perceived as the culmination and at the same time terminal point of Kraepelin's conceptual and nosological work, which in turn marked the beginning of present-day psychiatric classification. Kraepelin did not perceive ADHD as a psychiatric entity of its own, which is either due to the fact that he saw no necessity to do so or that he did not recognize this. If the latter, Kraepelin may have been misled by the manifold psychiatric comorbidities typical for ADHD, which may have masked ADHD. Kraepelin seems to have grouped patients obviously suffering from the adult form of ADHD into two groups: on the one hand into the so-called basic constitution (Grundzustand) of manic-depressive disorder, which he called manic disposition or constitutional excitement (manische Veranlagung oder konstitutionelle Erregung) and on the other hand into the so-called group of anchorless people (Haltlose), which he perceived as a special form of psychopathic personality. It seems that Kraepelin grouped milder grades of ADHD with predominantly ADHD-associated mood swings into the group of manic disposition while grouping more severe forms, which usually occur together with distinct personality disorders and addictive disorders, into that of anchorless people.
abstract_id: PUBMED:30416307
Identification and management of "delirious mania:" A rare clinical entity. Little is known about the entity of delirious mania, apart from a few reports. Delirious mania, though a potentially lethal condition, has not found a place in the current classification system. We discuss the approach to the identification and management of delirious mania in a female patient presenting with catatonic and delirious features, with a past history of recurrent mood episodes. However, it must be emphasized that it is likely to be a rare clinical presentation of bipolar disorder and should not be the first diagnostic consideration. As a rule, any patient presenting with delirium or clouded consciousness first warrants a thorough medical workup and other specialty referrals.
abstract_id: PUBMED:7676865
Is bipolar disorder the most common diagnostic entity in hospitalized adolescents and children? An evaluation of all children and adolescents admitted to an acute psychiatric unit over a three-month period was undertaken to determine the presence of bipolar disorder. The findings indicated that bipolar disorder was the most common diagnostic entity. This disorder had not been recognized in most of these youngsters previously, although almost all of them have had past psychiatric contacts. The vast majority of youngsters experiencing delusions and/or hallucinations, and the vast majority of court-remanded adolescents also appeared to be suffering from the disorder. It was concluded that bipolar disorder has to be ruled out in all youngsters admitted to acute care psychiatric units.
Answer: Unipolar mania has been a subject of debate regarding whether it is a distinct nosological entity or simply a characteristic of manic preponderance within the spectrum of bipolar disorders. Studies have shown that unipolar mania may present with certain clinical characteristics that differ from classical bipolar disorder. For instance, unipolar mania may have a slightly earlier age of illness onset, more grandiosity, psychotic symptoms, hyperthymic temperament, but less rapid-cycling, suicidality, and comorbid anxiety disorders. It also appears to have a better course of illness with better social and professional adjustment (PUBMED:24210629). However, the response to lithium prophylaxis in unipolar mania is worse compared to classical bipolar disorder, although the response to valproate is similar between the two groups (PUBMED:22949290).
The concept of unipolar mania as a separate entity is supported by the observation that there seems to be a bipolar subgroup with a stable, unipolar recurrent manic course, with estimates suggesting that 15-20% of bipolar patients may be unipolar manic (PUBMED:24210629). However, the diagnostic criteria for unipolar mania are uncertain, and most studies on the subject are retrospective, which limits the ability to definitively classify unipolar mania as a separate entity. Further studies with standardized diagnostic criteria are needed, and considering unipolar mania as a course specifier of bipolar disorder could be an important step in this respect (PUBMED:24210629).
In historical context, the notion of adult ADHD and its overlap with chronic mania has been discussed, with some early psychiatrists describing patients with symptoms that would align with modern classifications of adult ADHD, and at times, these descriptions intersected with discussions of chronic mania (PUBMED:34820682, PUBMED:31114930).
In conclusion, while unipolar mania displays some distinct clinical characteristics from classical bipolar disorder, the question of whether it is a separate nosological entity remains unresolved due to insufficient data and the need for further research with standardized diagnostic criteria (PUBMED:24210629). |
Instruction: Is there an association between the sleep apnea syndrome and the circadian peak of myocardial infarction in the morning hours?
Abstracts:
abstract_id: PUBMED:16317608
Is there an association between the sleep apnea syndrome and the circadian peak of myocardial infarction in the morning hours? Objective: To determine whether there is a relationship between the circadian rhythm of acute myocardial infarction (AMI) in the morning hours and the sleep apnea syndrome (SAS).
Patients And Methods: 203 patients who had sustained an AMI were examined 7-14 days later for sleep-associated breathing disorders using a 5-channel recording system. The diagnostic criterion for SAS was > 10 episodes of apnea and hypopnea per hour (AHI >10). 76 % of all patients were male, mean age 62 years.
Results: SAS was diagnosed in 91 of the 203 patients (44.8 %). Significantly more AMIs occurred in the morning hours (6:00 am to 12:00 pm) in the SAS group (49.5 %) than in the 112 patients without SAS (21.4 %). The two groups differed with regard to the symptoms of day-time sleepiness (29.7 % vs 17.0 %), age (mean 64.6 years vs 60.2 years), gender (83.5 % vs 69.9 % male) and smoking (33.0 % vs 51.8 %). There were no significant differences in body mass index, hypertension, hyperlipoproteinemia, diabetes mellitus, family history, history of cardiovascular disease and use of sedatives.
Conclusion: The strong association between SAS and morning onset of AMI found in this study could be the result of a sympathetic stress reaction to the breathing disorder.
abstract_id: PUBMED:11279325
Is the morning peak of acute myocardial infarction's onset due to sleep-related breathing disorders? A prospective study. Many studies have shown that the risk of experiencing a myocardial infarction (MI) is increased during the first hours of the morning. Sleep apnea syndrome (SAS) is associated with an enhanced adrenergic activity, prolonged a few hours after awakening. We aimed at assessing whether sleep breathing disorders could be a culprit for the morning excess rate of MI. We studied 40 middle-aged men admitted for an acute MI. An overnight polysomnographic study was performed 37.4 +/- 9.4 days after the MI. The prevalence of SAS was high (30%). The prevalence of SAS was significantly higher in patients with the MI onset during the morning. The circadian pattern was significantly different in patients with or without SAS: those with SAS presented an important peak of MI onset during the period between 06.00 and 11.59 h. None of them had their MI during the period between 24.00 and 05.59 h. This different nyctohemeral pattern underlines the potential role of sleep breathing disorders as a trigger of MI.
abstract_id: PUBMED:16299662
Breathing disorders during sleep and the circadian rhythm of strokes Similar to myocardial infarction and sudden death, stroke shows a circadian rhythm, with a peak incidence of symptom onset during the first morning hours. It is believed that this morning peak is secondary to haemodynamic, haemostatic and autonomic changes that occur during sleep, the sleep-wake transition and after awakening. About 25% of strokes, especially lacunar and atherothrombotic infarcts, occur during night sleep. Usually, the symptoms are perceived for the first time when the individual wakes up in the morning, although the stroke could have occurred long before. These individuals are excluded from thrombolytic treatments and therapeutic trials. Patients who develop an ischemic stroke while sleeping present more obstructive apneas during the acute phase of stroke than patients who suffer a stroke during wakefulness. It is known that apneas are associated with oxyhemoglobin desaturations and with important haemodynamic changes, and that their treatment reduces cerebrovascular morbidity. These findings suggest that apneas, likely in association with other risk factors, can worsen or precipitate an ischemic stroke.
abstract_id: PUBMED:7628291
Chronobiology and chronotherapy in medicine. There is a fascinating and exceedingly important area of medicine that most of us have not been exposed to at any level of our medical training. This relatively new area is termed chronobiology; that is, how time-related events shape our daily biologic responses and apply to any aspect of medicine with regard to altering pathophysiology and treatment response. For example, normally occurring circadian (daily cycles, approximately 24 hours) events, such as nadirs in epinephrine and cortisol levels that occur in the body around 10 PM to 4 AM and elevated histamine and other mediator levels that occur between midnight and 4 AM, play a major role in the worsening of asthma during the night. In fact, this nocturnal exacerbation occurs in the majority of asthmatic patients. Because all biologic functions, including those of cells, organs, and the entire body, have circadian, ultradian (less than 22 hours), or infradian (greater than 26 hours) rhythms, understanding the pathophysiology and treatment of disease needs to be viewed with these changes in mind. Biologic rhythms are ingrained, and although they can be changed over time by changing the wake-sleep cycle, these alterations occur over days. However, sleep itself can adversely affect the pathophysiology of disease. The non-light/dark influence of biologic rhythms was first described in 1729 by the French astronomer Jean-Jacques de Mairan. Previously, it was presumed that the small red flowers of the plant Kalanchoe blossfeldiana opened in the day because of the sunlight and closed at night because of the darkness. When de Mairan placed the plant in total darkness, the opening and closing of the flowers still occurred on its intrinsic circadian basis. It is intriguing to think about how the time of day governs the pathophysiology of disease. On awakening in the morning, heart rate and blood pressure briskly increase, as do platelet aggregability and other clotting factors. This can be linked to the acrophase (peak event) of heart attacks. During the afternoon we hit our best mental and physical performance, which explains why most of us state that "I am not a morning person." Even the tolerance for alcohol varies over the 24-hour cycle, with best tolerance around 5 pm (i.e. "Doctor, I only have a couple of highballs before dinner"). Thus, all biologic functions, from those of the cell, the tissue, the organs, and the entire body, run on a cycle of altering activity and function. (ABSTRACT TRUNCATED AT 400 WORDS)
abstract_id: PUBMED:21380796
Impaired circadian variation of platelet activity in patients with sleep apnea. Background: Cardiovascular diseases are frequent in patients with obstructive sleep apnea (OSAS). There is evidence that the day-night pattern of myocardial infarction and sudden cardiac death observed in the general population is altered in patients with OSAS. This study investigates potential abnormalities in the circadian profiles of platelet activity in OSAS.
Methods: We studied 37 patients with OSAS [7 of whom were also studied after 3 months on continuous positive airway pressure (CPAP) treatment] and 11 controls. In each subject, we obtained six different blood samples during 24-h period (2200, 0200, 0600, 1000, 1400, and 1800 hours). Platelet activity was determined by flow cytometry immediately after sampling.
Results: We found that nocturnal platelet activity was significantly increased in patients with OSAS (p = 0.043) and that effective treatment with CPAP decreased platelet activity in these patients but differences just failed to reach statistical significance (p = 0.063).
Conclusions: OSAS is associated with increased platelet activity during the night, and this appears to be improved by chronic use of CPAP. These results may help to explain the high prevalence of cardiovascular events during sleep in OSAS.
abstract_id: PUBMED:10441811
Does sleep apnea increase the risk of myocardial infarct during sleep? Myocardial infarction shows a circadian pattern with a maximum in the early morning hours. In patients with sleep-related breathing disorders (SRBD), it is assumed that apnea-associated changes of hemodynamics, blood gases, and rheology lead to a higher frequency of myocardial infarction during sleep. This investigation analyzes the circadian pattern of myocardial infarction in patients with and without SRBD. Within a time period of 20 months, 89 male patients with acute myocardial infarction were consecutively admitted to the intensive care unit. A nocturnal long-term registration of oxygen saturation, heart rate, breathing sounds, and body position by means of a 4-channel recording system (MESAM IV) was carried out in 59 of the 89 patients 6 to 10 days (evaluation I) and in 43 of 59 patients 22 to 28 days after infarction (evaluation II). Sleep apnea with a respiratory-disturbance-index (RDI) ≥ 10/h was found in 44.1/39.5% of the patients (evaluation I/II). In 22% of the patients, time of infarction was during a sleeping period. Patients with myocardial infarction during sleep had a clearly higher RDI in comparison to patients with a myocardial infarction during wakefulness (evaluation I: 22.7 versus 9.4/h; p = 0.08; evaluation II: 20.3 versus 7.3; p < 0.05). 53.6% of all myocardial infarctions occurred during the time period 5:00-11:00 a.m. Investigations in a larger number of patients are necessary to confirm these results as well as the relevance of sleep apnea as a cardiovascular risk factor.
abstract_id: PUBMED:26913199
Diurnal variation in the performance of rapid response systems: the role of critical care services-a review article. The type of medical review before an adverse event influences patient outcome. Delays in the up-transfer of patients requiring intensive care are associated with higher mortality rates. Timely detection and response to a deteriorating patient constitute an important function of the rapid response system (RRS). The activation of the RRS for at-risk patients constitutes the system's afferent limb. Afferent limb failure (ALF), an important performance measure of rapid response systems, constitutes a failure to activate a rapid response team (RRT) despite criteria for calling an RRT. There are diurnal variations in hospital staffing levels, the performance of rapid response systems and patient outcomes. Fewer ward-based nursing staff at night may contribute to ALF. The diurnal variability in RRS activity is greater in unmonitored units than it is in monitored units for events that should result in a call for an RRT. RRT events include a significant abnormality in either the pulse rate, blood pressure, conscious state or respiratory rate. There is also diurnal variation in RRT summoning rates, with most activations occurring during the day. The reasons for this variation are mostly speculative, but the failure of the afferent limb of RRT activation, particularly at night, may be a factor. The term "circadian variation/rhythm" applies to physiological variations over a 24-h cycle. In contrast, diurnal variation applies more accurately to extrinsic systems. Circadian rhythm has been demonstrated in a multitude of bodily functions and disease states. For example, there is an association between disrupted circadian rhythms and abnormal vital parameters such as anomalous blood pressure, irregular pulse rate, aberrant endothelial function, myocardial infarction, stroke, sleep-disordered breathing and its long-term consequences of hypertension, heart failure and cognitive impairment. Therefore, diurnal variation in patient outcomes may be extrinsic, and more easily modifiable, or related to the circadian variation inherent in human physiology. Importantly, diurnal variations in the implementation and performance of the RRS, as gauged by ALF, the RRT response to clinical deterioration and any variations in quality and quantity of patient monitoring have not been fully explored across a diverse group of hospitals.
abstract_id: PUBMED:3310837
Cardiovascular stress and sleep. This review summarizes briefly the present knowledge on sleep-related factors in ischaemic heart disease. A marked circadian rhythm in the frequency of onset of acute myocardial infarction has been found, but the exact mechanism is not known. The circadian variation is possibly explained by several mechanisms. The best documented is sleep apnoea syndrome, which seems to be a risk factor for ischaemic heart disease and stroke. Stressful REM-sleep seems to be potentially arrhythmogenic in patients with decreased cardiopulmonary function. The role of coronary spasm, increased thrombocyte aggregation and mental stress in sleep disorders is still poorly understood.
abstract_id: PUBMED:8452968
Observations on the effect of the circadian rhythm on the appearance of a myocardial infarct N/A
abstract_id: PUBMED:1521412
Hypertension, cardiac arrhythmias, myocardial infarction, and stroke in relation to obstructive sleep apnea. The cyclical changes in heart rate and systemic blood pressure that accompany apneic events are predominantly mediated by fluctuations in the activity of the autonomic nervous system. Increased vagal efferent parasympathetic activity is responsible for the cyclical reductions in heart rate during apnea. In contrast, the cyclical elevations in systemic blood pressure are believed to result from recurrent peripheral vasoconstriction mediated by repetitive activation of the sympathetic nervous system. Maximal activation and pressures coincide with apnea termination and brief arousal from sleep. These cyclical elevations in systemic pressure during sleep increase ventricular workload and, thereby, may contribute to the development of ventricular hypertrophy. Systemic hypertension is present during wakefulness in approximately 50% of patients with OSA. Although age and obesity are the predominant risk factors for diurnal hypertension, OSA probably makes an independent contribution in younger obese men. Sinus bradycardia, Mobitz type 1 second-degree heart block, and prolonged sinus arrest have all been documented in association with the apneic events. Increased ventricular ectopy has been observed with oxyhemoglobin desaturations below 60%. Myocardial ischemia, infarction, sudden death, and stroke all demonstrate similar circadian variations in time of onset. Peak frequencies occur between 6 AM and noon, generally within several hours of awakening. Although sleep is associated with decreased frequencies of these adverse cardiovascular events in the general population, evidence exists linking REM sleep to an increased risk of myocardial ischemia. In men who habitually snore, epidemiologic data have detected an increased risk for ischemic heart disease and stroke. Habitual snoring has also been associated with an increased risk of sudden death during sleep. In patients with clinically significant OSA, there is reasonable information indicating excessive mortality in the absence of treatment. This mortality is predominantly cardiovascular and tends to occur during sleep.
Answer: Yes, there is an association between the sleep apnea syndrome (SAS) and the circadian peak of myocardial infarction (MI) in the morning hours. Studies have found that patients with SAS have a higher prevalence of AMI onset during the morning hours compared to those without SAS. One study found that 49.5% of patients with SAS experienced AMI in the morning (6:00 am to 12:00 pm) as opposed to 21.4% of patients without SAS (PUBMED:16317608). Another study reported a significant peak of MI onset between 06:00 and 11:59 am in patients with SAS, with none of these patients experiencing MI during the period between 24:00 and 05:59 am (PUBMED:11279325). These findings suggest that sleep-related breathing disorders could be a trigger for MI, particularly in the morning hours.
The association between SAS and the morning peak of MI may be due to the enhanced adrenergic activity associated with SAS, which is prolonged a few hours after awakening (PUBMED:11279325). Additionally, patients with SAS have been found to have increased nocturnal platelet activity, which may contribute to the high prevalence of cardiovascular events during sleep in these patients (PUBMED:21380796). Furthermore, patients with myocardial infarction during sleep had a higher respiratory-disturbance-index (RDI), indicating more severe sleep apnea, compared to patients with myocardial infarction during wakefulness (PUBMED:10441811).
Overall, the strong association between SAS and morning onset of AMI could be the result of a sympathetic stress reaction to the breathing disorder (PUBMED:16317608), and these findings underscore the potential role of sleep breathing disorders as a trigger of MI (PUBMED:11279325). |
Instruction: Comparison of hip joint range of motion in professional youth and senior team footballers with age-matched controls: an indication of early degenerative change?
Abstracts:
abstract_id: PUBMED:19218076
Comparison of hip joint range of motion in professional youth and senior team footballers with age-matched controls: an indication of early degenerative change? Objectives: To determine if there is evidence of abnormal hip joint range of motion (ROM) in youth and senior team professional footballers compared with matched controls.
Design: A case control study design was used.
Participants: 40 professional footballers (20 youth and 20 senior team) and 40 matched control subjects.
Main Outcome Measures: Bilateral measurements of passive hip internal rotation (IR), external rotation (ER), flexion, abduction and extension were made together with Faber's test and the hip quadrant.
Results: Youth and senior footballers had significantly less IR and Faber's range and significantly higher abduction than their respective controls (p < 0.001). Senior footballers also had significantly reduced IR (p < 0.05) and Faber's (p < 0.001) than the youth team. A higher proportion of senior footballers had positive hip quadrants (45% of all hips) compared to all other groups. No significant difference in hip ROM was found between dominant and non-dominant legs.
Conclusions: A specific pattern of hip ROM was found in professional footballers which appeared to be different from controls. These changes may demonstrate the early stages of hip degeneration to which it has been shown ex-professional players are prone to. Hip joint ROM exercises may be necessary in these players to restore normal movement and prevent the onset of hip osteoarthritis (OA).
abstract_id: PUBMED:8902682
A new pelvic tilt detection device: roentgenographic validation and application to assessment of hip motion in professional ice hockey players. Professional ice hockey players often sustain hip and low back strains. We hypothesized that playing the sport of ice hockey may result in the shortening of the iliopsoas muscles, increasing the likelihood of lumbosacral strains and hip injuries. The purpose of this study was to identify whether ice hockey players demonstrate a decrease in hip extension range of motion when compared with age-matched controls. Objective data were obtained using the Thomas test with an electrical circuit device to determine pelvic tilt motion. The device was validated by obtaining X-rays in six subjects during the Thomas test. The study then examined 25 professional hockey players and 25 age-matched controls. A two-way analysis of variance was applied for statistical analysis to examine the effect of sport and side. The results demonstrated that ice hockey players have a reduced mean hip extension range of motion (p < .0001) by comparison with age-matched controls. There was no difference between right and left sides, nor was there any interaction of the sport with the side of the body. Therefore, hockey players demonstrated a decreased extensibility of the iliopsoas muscles. Future research may be directed toward establishing a link between prophylactic stretching and injury rate in professional ice hockey players.
abstract_id: PUBMED:25486297
Hip and Shoulder Range of Motion in Youth Baseball Pitchers. Oliver, GD and Weimar, WH. Hip and shoulder range of motion in youth baseball pitchers. J Strength Cond Res 30(10): 2823-2827, 2016-Lack of range of motion (ROM) has long been suspected as contributing to injury in baseball pitchers. However, all previous ROM research has focused on collegiate and professional pitchers. It was thus the purpose of this study to measure and evaluate bilateral hip and throwing shoulder rotational passive range of motion (PROM) in youth baseball pitchers. Twenty-six youth baseball pitchers (11.3 ± 1.0 years; 152.4 ± 9.0 cm; 47.5 ± 11.3 kg) with no history of injury participated. Bilateral hip and throwing shoulder rotational PROM was measured. There were no significant side-to-side differences for the hip variables (p ≥ 0.05). Shoulder external rotation (ER) was significantly greater than shoulder internal rotation (IR), and the lead leg hip had significantly greater ER than IR. Shoulder ER revealed significant correlations with both lead and stance hip IR (r = 0.45, p = 0.02 and r = 0.48, p = 0.01, respectively). The youth baseball pitchers in this study displayed similar PROM patterns to those of collegiate and professional baseball pitchers. Additionally, our youth baseball pitchers also presented strong relationships between hip and shoulder PROM. This study reveals that the PROM patterns displayed by these youth may indicate that their available ROM could survive maturation. It is therefore suggested that clinical focus be directed to maintaining hip and shoulder rotational ROM throughout maturation in an attempt to determine possible relations between injurious mechanisms and performance enhancement.
abstract_id: PUBMED:27713579
Differences in hip range of motion among collegiate pitchers when compared to youth and professional baseball pitcher data. The purpose of this study was to measure passive hip internal (IR) and external rotation (ER) range of motion (ROM) in collegiate baseball pitchers and compare to published youth and professional values. Measures were taken on the bilateral hips of 29 participants (mean age 20.0 ± 1.4, range 18-22 years). Results identified no significant differences between the stance and stride hip in collegiate right-handed pitchers for IR (p = 0.22, ES = 0.23) and ER (p = 0.08, ES = 0.25). There was no significant difference in left-handed pitchers for IR (p = 0.80, ES = 0.11) and ER (p = 0.56, ES = 0.15). When comparing youth to collegiate, IR increased in the stance (2°) and stride (5°) hip, and an increase in the stance (5°) and stride (5°) hip was present for ER as well. From collegiate to professional, IR increased in the stance (4°) and stride (3°) hip, whereas a decrease in the stance (9°) and stride (12°) hip was present for ER. The data suggest an increase in passive ROM from youth to collegiate and a decrease from collegiate to professional. Understanding passive hip ROM values among the different levels of pitchers may assist clinicians in developing time-dependent interventions to prevent future injury and enhance performance.
abstract_id: PUBMED:17387220
Descriptive profile of hip rotation range of motion in elite tennis players and professional baseball pitchers. Background: Repetitive loading to the hip joint in athletes has been reported as a factor in the development of degenerative joint disease and intra-articular injury. Little information is available on the bilateral symmetry of hip rotational measures in unilaterally dominant upper extremity athletes.
Hypothesis: Side-to-side differences in hip joint range of motion may be present because of asymmetrical loading in the lower extremities of elite tennis players and professional baseball pitchers.
Study Design: Cohort (cross-sectional) study (prevalence); Level of evidence, 1.
Methods: Descriptive measures of hip internal and external rotation active range of motion were taken in the prone position of 64 male and 83 female elite tennis players and 101 male professional baseball pitchers using digital photos and computerized angle calculation software. Bilateral differences in active range of motion between the dominant and nondominant hip were compared using paired t tests and Bonferroni correction for hip internal, external, and total rotation range of motion. A Pearson correlation test was used to test the relationship between years of competition and hip rotation active range of motion.
Results: No significant bilateral difference (P > .005) was measured for mean hip internal or external rotation for the elite tennis players or the professional baseball pitchers. An analysis of the number of subjects in each group with a bilateral difference in hip rotation greater than 10 degrees identified 17% of the professional baseball pitchers with internal rotation differences and 42% with external rotation differences. Differences in the elite male tennis players occurred in only 15% of the players for internal rotation and 9% in external rotation. Female subjects had differences in 8% and 12% of the players for internal and external rotation, respectively. Statistical differences were found between the mean total arc of hip range of internal and external rotation in the elite tennis players with the dominant side being greater by a clinically insignificant mean value of 2.5 degrees. Significantly less (P < .005) dominant hip internal rotation and less dominant and nondominant hip total rotation range of motion were found in the professional baseball pitchers compared with the elite male tennis players.
Conclusion: This study established typical range of motion patterns and identified bilaterally symmetric hip active range of motion rotation values in elite tennis players and professional baseball pitchers. Asymmetric hip joint rotational active range of motion encountered during clinical examination and screening may indicate abnormalities and would indicate the application of flexibility training, rehabilitation, and further evaluation.
abstract_id: PUBMED:34211321
Clinical Hip Osteoarthritis in Current and Former Professional Footballers and Its Effect on Hip Function and Quality of Life. The objective of the study was to establish the prevalence of clinical hip osteoarthritis in current and former professional footballers and to explore its consequences on hip function and health-related quality of life (HRQoL). A cross-sectional study by means of questionnaire was conducted among current and former professional footballers fulfilling the following inclusion criteria: (1) male, (2) active or retired professional footballer, (3) member of FIFPRO (Football Players Worldwide), (4) between 18 and 50 years old, and (5) could read and understand texts in French, Spanish, or English. Controls (matched for gender, age, body weight and height) were also recruited. The main outcome measures were clinical hip osteoarthritis, hip function and HRQoL. Questionnaires were sent to 2,500 members, of whom 1,401 participated (1,000 current and 401 former professional footballers). Fifty-two controls were recruited. Prevalence of hip osteoarthritis was 2% among current and 8% among former professional footballers. Hip function was significantly (p ≤ 0.001) lower in both types of footballers with hip osteoarthritis than in footballers without hip osteoarthritis and controls. Current and former professional footballers with hip osteoarthritis reported significantly lower physical health scores (p = 0.032, p = 0.002) than those without. Hip osteoarthritis led to a significantly lower score in the physical (p = 0.004) and mental (p = 0.014) component of HRQoL in former footballers compared to the controls, while in current footballers only the physical component was significantly (p = 0.012) lower compared to the controls. Hip osteoarthritis has a higher prevalence in former than in current professional footballers and impacts hip function and HRQoL negatively.
abstract_id: PUBMED:34275262
Relationship between the hip range of motion and functional motor system movement patterns in football players. Background: The aim of this study was to determine the hip range of motion and the movement patterns of football players assessed with an aid of a Functional Motor Systems test, and to find an association between these parameters and the risk for hip joint injury.
Methods: The study included 50 men aged between 16 and 20 years: 25 footballers and 25 age- and body mass index-matched controls. The hip ranges of motion (flexion, extension, internal and external rotation, adduction and abduction) were determined, and the movement patterns were evaluated with the tests from the Functional Motor Systems battery.
Results: Football players presented with significantly higher ranges of hip flexion, extension, and internal and external rotation than the controls. Moreover, footballers and controls differed significantly in terms of their mean overall Functional Motor Systems scores (15.77 ± 2.44 points vs. 13.79 ± 3.02 points, P = 0.019). Football players scored best on the shoulder mobility test for the right side and worst on the rotary stability test for the left side. The scores on the trunk stability test and rotary stability test for the left side were significantly higher in footballers than in the controls. Nevertheless, overall Functional Motor Systems scores of 14 points or less were recorded in as many as 10 of 25 footballers.
Conclusions: Altogether, these findings suggest that some football players present with a strain which may predispose them to future injuries. Future research should center around the etiology of reduced hip ROM observed in footballers. Furthermore, football training seems to result in a considerable motor asymmetry of the trunk which also predisposes to injury.
abstract_id: PUBMED:18555865
Functional range of motion of the hip joint Purpose Of The Study: The functional mobility of a joint represents the range of motion healthy individuals require to fulfill everyday life tasks. Oscillation angle corresponds to the entire range of motion that can be achieved by the joint. Wedge opening and direction are the characteristic features. We describe the characteristics of functional mobility of the hip joint in healthy subjects.
Material And Methods: Hip motion was analyzed in twelve healthy subjects aged 22 to 25 years. The three dimensional analysis used the Motion Analysis System (Motion Analysis Corporation, Santa Rosa, CA) at a frequency of 60 Hz. MatLab software was used to modelize a prosthesis and determine the oscillation angle and its direction as a function of implant position and head-to-neck ratio. After determining the hip center for each individual subject, the range of motion necessary to complete a task was given by the maximal angle along each anatomic axis needed to reach a given position in comparison with the resting position. The following tasks were studied: sit to stand motion, lifting weight from a squatting position, reaching the ground with both legs abducted in extension, walking, ascending and descending stairs, getting on a bicycle, sitting cross-legged, cutting toenails. Whether or not the task could be achieved with the prosthetic conformation was then determined.
Results: Each task was described as a combination of motion in the three anatomic axes. Lifting weight from a squatting position combined flexion (110 degrees), abduction (9 degrees) and external rotation (18 degrees) with a standard deviation of 9 degrees. For a given task, only a few combinations of femoral and acetabular orientations were compatible with completion of that task. Combining the motions required for several tasks diminished the possible orientations for prosthetic positioning.
Discussion: Analyzing the motion required for these tasks shows the maximal range of motion involved in each direction. There was very little variability among healthy subjects. These results are in agreement with other values determined with other methods. Compensatory mechanisms used by disabled people to complete different tasks were not taken into consideration. The effects of changing either the head-to-neck ratio or implant position are discussed in relation to completion of a given task.
abstract_id: PUBMED:36317121
Determining the hip joint isokinetic muscle strength and range of motion of professional soccer players based on their field position. Background: Soccer players' physical and physiological demands vary based on their field position. Although the hip joint has an important role in soccer, little information is available about the strength and flexibility of the hip joint based on player positions. Therefore, this study aims to investigate the differences in muscle strength and flexibility of the hip joint of professional soccer players based on their field position.
Methods: Ninety-six professional soccer players from Saudi Arabia were divided into four groups (goalkeepers, defenders, midfielders, and attackers), with 24 participants in each group based on their field position. The Modified Thomas test was used to measure the hip extension range of motion (ROM), and muscle strength was assessed by an Isokinetic dynamometer.
Results: There were no statistically significant differences in the isokinetic strength at the hip joint movements between goalkeepers, defenders, midfielders, and attackers (p ≥ 0.05). At the same time, there was a significant difference between groups in the hip extension ROM (p ≤ 0.05) according to different player positions. Post hoc tests reported significant differences between goalkeepers and defenders (p ≤ 0.05), midfielders (p ≤ 0.05), and attackers (p ≤ 0.05). At the same time, there were no significant differences between defenders and midfielders (p ≥ 0.05), defenders and attackers (p ≥ 0.05), and midfielders and attackers (p ≥ 0.05).
Conclusion: Even though there was no significant difference in isokinetic strength, there was a significant difference in hip extension ROM among players based on field position. This study may help coaches and trainers to recognize the strengths and weaknesses of players and design training programs to rectify the weaker components and improve players' performance in different playing positions.
abstract_id: PUBMED:34391316
Investigation of multidirectional hip range of motion and hip motion asymmetry in individuals with idiopathic scoliosis with different curve patterns. Background: While some studies of the asymmetry of lower limbs in individuals with idiopathic scoliosis exist, there is a need for multidirectional studies conducted on hip joint range of motion and its relationship to curve patterns in idiopathic scoliosis.
Objectives: This study analyzes the hip joint range of multidirectional motions, hip motion asymmetry and investigates them according to curve patterns in individuals with idiopathic scoliosis.
Methods: The sample included 108 females with idiopathic scoliosis. Participants were divided into three groups: double curves, single thoracic curve and single lumbar curve. The range of hip flexion and extension, abduction and adduction, and internal and external rotations were assessed actively and passively with a universal goniometer. The range of motion, left-right asymmetry and the mid-points of the ranges of motion were analyzed.
Results: The passive range of the right hip abduction was higher in the thoracic curve group vs. the lumbar curve group. Active and passive ranges of hip extension were higher in the left hip vs. right hip. Active left-right asymmetry was higher than passive left-right asymmetry.
Conclusion: Individuals with idiopathic scoliosis had different hip abduction motions according to curve pattern that originated from single curves. Left-right hip asymmetry was seen for the hip extension motion. Higher left-right asymmetry for active motion than passive motion in hip abduction may indicate a problem in motion perception in individuals with idiopathic scoliosis.
Answer: The study comparing hip joint range of motion (ROM) in professional youth and senior team footballers with age-matched controls aimed to determine if there was evidence of abnormal hip ROM in these athletes, which could indicate early stages of degeneration. The case-control study involved 40 professional footballers (20 youth and 20 senior team) and 40 matched control subjects. The main outcome measures included bilateral measurements of passive hip internal rotation (IR), external rotation (ER), flexion, abduction, extension, Faber's test, and the hip quadrant. The results showed that both youth and senior footballers had significantly less IR and Faber's range and significantly higher abduction than their respective controls (p < 0.001). Senior footballers also had significantly reduced IR (p < 0.05) and Faber's (p < 0.001) compared to the youth team. Additionally, a higher proportion of senior footballers had positive hip quadrants (45% of all hips) compared to all other groups. No significant difference in hip ROM was found between dominant and non-dominant legs. The findings suggest that professional footballers exhibit a specific pattern of hip ROM that differs from controls, which may demonstrate the early stages of hip degeneration. The study concludes that hip joint ROM exercises may be necessary for these players to restore normal movement and prevent the onset of hip osteoarthritis (OA) (PUBMED:19218076). |
Instruction: Do State-Based Policies Have an Impact on Teen Birth Rates and Teen Abortion Rates in the United States?
Abstracts:
abstract_id: PUBMED:26148786
Do State-Based Policies Have an Impact on Teen Birth Rates and Teen Abortion Rates in the United States? Objectives: The United States has one of the highest teen birth rates among developed countries. Interstate birth rates and abortion rates vary widely, as do policies on abortion and sex education. The objective of our study is to assess whether US state-level policies regarding abortion and sexual education are associated with different teen birth and teen abortion rates.
Methods: We carried out a state-level (N = 51 [50 states plus the District of Columbia]) retrospective observational cross-sectional study, using data imported from the National Vital Statistics System. State policies were obtained from the Guttmacher Institute. We used descriptive statistics and regression analysis to study the association of different state policies with teen birth and teen abortion rates.
Results: The state-level mean birth rates, when stratifying between policies protective and nonprotective of teen births, were not statistically different: for sex education policies, 39.8 of 1000 vs 45.1 of 1000 (P = .2187); for mandatory parents' consent to abortion, 45 of 1000 vs 38 of 1000 when the minor could consent (P = .0721); and for deterrents to abortion, 45.4 of 1000 vs 37.4 of 1000 (P = .0448). Political affiliation (35.1 of 1000 vs 49.6 of 1000, P < .0001) and ethnic distribution of the population were the only variables associated with a difference between mean teen births. Lower teen abortion rates were, however, associated with restrictive abortion policies, specifically lower in states with financial barriers, deterrents to abortion, and requirement for parental consent.
Conclusion: While teen birth rates do not appear to be influenced by state-level sex education policies, state-level policies that restrict abortion appear to be associated with lower state teen abortion rates.
abstract_id: PUBMED:25620298
State policy and teen childbearing: a review of research studies. Teen childbearing is affected by many individual, family, and community factors; however, another potential influence is state policy. Rigorous studies of the relationship between state policy and teen birth rates are few in number but represent a body of knowledge that can inform policy and practice. This article reviews research assessing associations between state-level policies and teen birth rates, focusing on five policy areas: access to family planning, education, sex education, public assistance, and access to abortion services. Overall, several studies have found that measures related to access to and use of family planning services and contraceptives are related to lower state-level teen birth rates. These include adolescent enrollment in clinics, minors' access to contraception, conscience laws, family planning expenditures, and Medicaid waivers. Other studies, although largely cross-sectional analyses, have concluded that policies and practices to expand or improve public education are also associated with lower teen birth rates. These include expenditures on education, teacher-to-student ratios, and graduation requirements. However, the evidence regarding the role of public assistance, abortion access, and sex education policies in reducing teen birth rates is mixed and inconclusive. These conclusions must be viewed as tentative because of the limited number of rigorous studies that examine the relationship between state policy and teen birth rates over time. Many specific policies have only been analyzed by a single study, and few findings are based on recent data. As such, more research is needed to strengthen our understanding of the role of state policies in teen birth rates.
abstract_id: PUBMED:26918400
The Effects of State-Mandated Abstinence-Based Sex Education on Teen Health Outcomes. In 2011, the USA had the second highest teen birth rate of any developed nation, according to the World Bank. In an effort to lower teen pregnancy rates, several states have enacted policies requiring abstinence-based sex education. In this study, we utilize a difference-in-differences research design to analyze the causal effects of state-level sex education policies from 2000-2011 on various teen sexual health outcomes. We find that state-level abstinence education mandates have no effect on teen birth rates or abortion rates, although we find that state-level policies may affect teen sexually transmitted disease rates in some states. Copyright © 2016 John Wiley & Sons, Ltd.
abstract_id: PUBMED:16753244
Getting a piece of the pie? The economic boom of the 1990s and declining teen birth rates in the United States. In the United States, the 1990s was a decade of dramatic economic growth as well as a period characterized by substantial declines in teenage childbearing. This study examines whether falling teen fertility rates during the 1990s were responsive to expanding employment opportunities and whether the implementation of the Personal Responsibility and Work Opportunities Act (PRWORA), increasing rates of incarceration, or restrictive abortion policies may have affected this association. Fixed-effects Poisson regression models were estimated to assess the relationship between age-specific birth rates and state-specific unemployment rates from 1990 to 1999 for Black and White females aged 10-29. Falling unemployment rates in the 1990s were associated with decreased childbearing among African-American women aged 15-24, but were largely unrelated to declines in fertility for Whites. For 18-19 year-old African-Americans, the group for whom teen childbearing is most normative, our model accounted for 85% of the decrease in rates of first births. Young Black women, especially older teens, may have adjusted their reproductive behavior to take advantage of expanded labor market opportunities.
abstract_id: PUBMED:22792555
Why is the teen birth rate in the United States so high and why does it matter? Teens in the United States are far more likely to give birth than in any other industrialized country in the world. U.S. teens are two and a half times as likely to give birth as compared to teens in Canada, around four times as likely as teens in Germany or Norway, and almost 10 times as likely as teens in Switzerland. Among more developed countries, Russia has the next highest teen birth rate after the United States, but an American teenage girl is still around 25 percent more likely to give birth than her counterpart in Russia. Moreover, these statistics incorporate the almost 40 percent fall in the teen birth rate that the United States has experienced over the past two decades. Differences across U.S. states are quite dramatic as well. A teenage girl in Mississippi is four times more likely to give birth than a teenage girl in New Hampshire--and 15 times more likely to give birth as a teen compared to a teenage girl in Switzerland. This paper has two overarching goals: understanding why the teen birth rate is so high in the United States and understanding why it matters. Thus, we begin by examining multiple sources of data to put current rates of teen childbearing into the perspective of cross-country comparisons and recent historical context. We examine teen birth rates alongside pregnancy, abortion, and "shotgun" marriage rates as well as the antecedent behaviors of sexual activity and contraceptive use. We seek insights as to why the rate of teen childbearing is so unusually high in the United States as a whole, and in some U.S. states in particular. We argue that explanations that economists have tended to study are unable to account for any sizable share of the variation in teen childbearing rates across place. We describe some recent empirical work demonstrating that variation in income inequality across U.S. states and developed countries can explain a sizable share of the geographic variation in teen childbearing. To the extent that income inequality is associated with a lack of economic opportunity and heightened social marginalization for those at the bottom of the distribution, this empirical finding is potentially consistent with the ideas that other social scientists have been promoting for decades but which have been largely untested with large data sets and standard econometric methods. Our reading of the totality of evidence leads us to conclude that being on a low economic trajectory in life leads many teenage girls to have children while they are young and unmarried and that poor outcomes seen later in life (relative to teens who do not have children) are simply the continuation of the original low economic trajectory. That is, teen childbearing is explained by the low economic trajectory but is not an additional cause of later difficulties in life. Surprisingly, teen birth itself does not appear to have much direct economic consequence. Moreover, no silver bullet such as expanding access to contraception or abstinence education will solve this particular social problem. Our view is that teen childbearing is so high in the United States because of underlying social and economic problems. It reflects a decision among a set of girls to "drop-out" of the economic mainstream; they choose non-marital motherhood at a young age instead of investing in their own economic progress because they feel they have little chance of advancement. 
This thesis suggests that to address teen childbearing in America will require addressing some difficult social problems: in particular, the perceived and actual lack of economic opportunity among those at the bottom of the economic ladder.
abstract_id: PUBMED:19761588
Religiosity and teen birth rate in the United States. Background: The children of teen mothers have been reported to have higher rates of several unfavorable mental health outcomes. Past research suggests several possible mechanisms for an association between religiosity and teen birth rate in communities.
Methods: The present study compiled publicly accessible data on birth rates, conservative religious beliefs, income, and abortion rates in the U.S., aggregated at the state level. Data on teen birth rates and abortion originated from the Center for Disease Control; on income, from the U.S. Bureau of the Census, and on religious beliefs, from the U.S. Religious Landscape Survey carried out by the Pew Forum on Religion and Public Life. We computed correlations and partial correlations.
Results: Increased religiosity in residents of states in the U.S. strongly predicted a higher teen birth rate, with r = 0.73 (p < 0.0005). Religiosity correlated negatively with median household income, with r = -0.66, and income correlated negatively with teen birth rate, with r = -0.63. But the correlation between religiosity and teen birth rate remained highly significant when income was controlled for via partial correlation: the partial correlation between religiosity and teen birth rate, controlling for income, was 0.53 (p < 0.0005). Abortion rate correlated negatively with religiosity, with r = -0.45, p = 0.002. However, the partial correlation between teen birth rate and religiosity remained high and significant when controlling for abortion rate (partial correlation = 0.68, p < 0.0005) and when controlling for both abortion rate and income (partial correlation = 0.54, p = 0.001).
Conclusion: With data aggregated at the state level, conservative religious beliefs strongly predict U.S. teen birth rates, in a relationship that does not appear to be the result of confounding by income or abortion rates. One possible explanation for this relationship is that teens in more religious communities may be less likely to use contraception.
abstract_id: PUBMED:28558295
The effect of spending cuts on teen pregnancy. In recent years, English local authorities have been forced to make significant cuts to devolved expenditure. In this paper, we examine the impact of reductions in local expenditure on one particular public health target: reducing rates of teen pregnancy. Contrary to predictions made at the time of the cuts, panel data estimates provide no evidence that areas which reduced expenditure the most have experienced relative increases in teenage pregnancy rates. Rather, expenditure cuts are associated with small reductions in teen pregnancy rates, a result which is robust to a number of alternative specifications and tests for causality. Underlying socio-economic factors such as education outcomes and alcohol consumption are found to be significant predictors of teen pregnancy.
abstract_id: PUBMED:25620306
Adolescent pregnancy, birth, and abortion rates across countries: levels and recent trends. Purpose: To examine pregnancy rates and outcomes (births and abortions) among 15- to 19-year olds and 10- to 14-year olds in all countries for which recent information could be obtained and to examine trends since the mid-1990s.
Methods: Information was obtained from countries' vital statistics reports and the United Nations Statistics Division for most countries in this study. Alternate sources of information were used if needed and available. We present estimates primarily for 2011 and compare them to estimates published for the mid-1990s.
Results: Among the 21 countries with complete statistics, the pregnancy rate among 15- to 19-year olds was the highest in the United States (57 pregnancies per 1,000 females) and the lowest rate was in Switzerland (8). Rates were higher in some former Soviet countries with incomplete statistics; they were the highest in Mexico and Sub-Saharan African countries with available information. Among countries with reliable evidence, the highest rate among 10- to 14-year olds was in Hungary. The proportion of teen pregnancies that ended in abortion ranged from 17% in Slovakia to 69% in Sweden. The proportion of pregnancies that ended in live births tended to be higher in countries with high teen pregnancy rates (p = .02). The pregnancy rate has declined since the mid-1990s in the majority of the 16 countries where trends could be assessed.
Conclusions: Despite recent declines, teen pregnancy rates remain high in many countries. Research on the planning status of these pregnancies and on factors that determine how teens resolve their pregnancies could further inform programs and policies.
abstract_id: PUBMED:9794946
The decline in US teen pregnancy rates, 1990-1995. Objectives: Estimate pregnancy, abortion, and birth rates for 1990 to 1995 for all teens, sexually experienced teens, and sexually active teens. Design: Retrospective analysis of national data on pregnancies, abortions, and births. Participants: US women aged 15 to 19 years.
Outcome Measures: Annual pregnancy, abortion, and birth rates for 1990 to 1995 for women aged 15 to 19 years, with and without adjustments for sexual experience (ever had intercourse), and sexual activity (had intercourse within last 3 months).
Results: Approximately 40% of women aged 15 to 19 years were sexually active in 1995. Teen pregnancy rates were constant from 1990 to 1991. From 1991 to 1995, the annual pregnancy rate for women aged 15 to 19 years decreased by 13% to 83.6 per 1000. The percentage of teen pregnancies that ended in induced abortions decreased yearly; thus, the abortion rate decreased more than the birth rate (21% vs 9%). From 1988 to 1995, the proportion of sexually experienced teens decreased nonsignificantly.
Conclusions: After a 9% rise from 1985 to 1990, teen pregnancy rates reached a turning point in 1991 and are now declining. Physicians should counsel their adolescent patients about responsible sexual behavior, including abstinence and proper use of regular and emergency contraception.
abstract_id: PUBMED:20638008
How are restrictive abortion statutes associated with unintended teen birth? Purpose: Legislation that restricts abortion access decreases abortion. It is less well understood whether these statutes affect unintended birth. Given recent increases in teen pregnancy and birth, we examined the relationship between legislation that restricts abortion access and unintended births among adolescent women.
Methods: Using 2000-2005 Pregnancy Risk Assessment Monitoring System data, we examined the relationship between adolescent pregnancy intention and policies affecting abortion access: mandatory waiting periods, parental involvement laws, and Medicaid funding restrictions. Logistic regression controlled for individual characteristics, state-level factors, geographic regions, and time trends. Subgroup analyses were done for racial, ethnic, and insurance groups.
Results: In our multivariate model, minors in states with mandatory waiting periods were more than two times as likely to report an unintended birth, with even higher risk among blacks, Hispanics, and teens receiving Medicaid. Medicaid funding restrictions were associated with higher rates of unwanted birth among black teens. Parental involvement laws were associated with a trend toward more unwanted births in white minors and fewer in Hispanic minors.
Conclusions: Mandatory waiting periods are associated with higher rates of unintended birth in teens, and funding restrictions may especially affect black adolescents. Policies limiting access to abortion appear to affect the outcomes of unintended teen pregnancy. Subsequent research should clarify the magnitude of such effects, and lead to policy changes that successfully reduce unintended teen births.
Answer: State-based policies do have an impact on teen birth rates and teen abortion rates in the United States, but the nature and extent of this impact can vary depending on the specific policy and other factors.
Research has shown that state-level policies that restrict abortion are associated with lower state teen abortion rates. Specifically, lower teen abortion rates were found in states with financial barriers, deterrents to abortion, and requirements for parental consent (PUBMED:26148786). However, the same study found that teen birth rates do not appear to be influenced by state-level sex education policies.
Other studies have found that access to family planning services and contraceptives are related to lower state-level teen birth rates, including adolescent enrollment in clinics, minors' access to contraception, and family planning expenditures (PUBMED:25620298). Additionally, policies and practices to expand or improve public education have been associated with lower teen birth rates, such as expenditures on education and teacher-to-student ratios (PUBMED:25620298).
State-mandated abstinence-based sex education, on the other hand, has been found to have no effect on teen birth rates or abortion rates, although it may affect teen sexually transmitted disease rates in some states (PUBMED:26918400).
Economic factors, such as the economic boom of the 1990s and expanding employment opportunities, have also been associated with decreased childbearing among African-American women aged 15-24 (PUBMED:16753244).
Religiosity has been found to predict higher teen birth rates, even when controlling for income and abortion rates, suggesting that teens in more religious communities may be less likely to use contraception (PUBMED:19761588).
Cuts to local public health expenditure in England, contrary to predictions, have not been found to increase teenage pregnancy rates; in fact, they are associated with small reductions in teen pregnancy rates (PUBMED:28558295).
Overall, the evidence suggests that state policies can influence teen birth and abortion rates, but the relationship is complex and influenced by a variety of factors, including access to family planning, education, economic conditions, and religiosity. More research is needed to strengthen our understanding of the role of state policies in teen birth rates (PUBMED:25620298).
Instruction: Is locking plate fixation a better option than casting for distal radius fracture in elderly people?
Abstracts:
abstract_id: PUBMED:26139690
Is locking plate fixation a better option than casting for distal radius fracture in elderly people? Objectives: To compare the outcomes of locking plate fixation versus casting for displaced distal radius fracture with unstable fracture pattern in active Chinese elderly people.
Design: Historical cohort study.
Setting: Orthopaedic ward and clinic at Tseung Kwan O Hospital, Hong Kong.
Patients: Between 1 May 2010 and 31 October 2013, 57 Chinese elderly people aged 61 to 80 years were treated either operatively with locking plate fixation (n=26) or conservatively with cast immobilisation (n=31) for unstable displaced distal radius fracture.
Main Outcome Measures: Clinical, radiological, and functional outcomes were assessed at 9 to 12 months after treatment.
Results: The functional outcome (based on the quick Disabilities of the Arm, Shoulder and Hand score) was significantly better in the locking plate fixation group than in the cast immobilisation group, while clinical and radiological outcomes were comparable with those in other similar studies.
Conclusions: Locking plate fixation resulted in better functional outcome for displaced distal radius fracture with unstable fracture pattern in active Chinese elderly people aged 61 to 80 years. Further prospective study with long-term follow-up is recommended.
abstract_id: PUBMED:35674264
Outcomes of Primary Volar Locking Plate Fixation of Open Distal Radius Fractures. Background: Few studies have reported the outcomes of primary volar locking plate fixation in Gustilo and Anderson type II and IIIA open distal radius fractures. We report the outcomes of treatment of Gustilo and Anderson type II and IIIA open distal radius fractures using primary volar locking plate fixation. Methods: We retrospectively reviewed 24 patients with open distal radius fractures who were treated using primary volar locking plate fixation. The range of motion (ROM) and modified Mayo wrist scores were measured to assess functional outcomes. Radiological outcomes included the bone union period, radial inclination, volar tilt, radial length and ulnar variance. Results: Functional outcomes, including mean ROM in flexion (39.1°) and extension (52.5°), improved following primary volar locking plate treatment. Radiological outcomes were as follows. Mean bone union period, radial length and ulnar variance were 7.8 months, 10.4 and 0.7 mm, respectively. Two patients had superficial wound infection 2 weeks after surgery and one patient had non-union of the radius that required implant removal, autologous iliac crest bone graft and plate re-fixation. Conclusions: Primary volar locking plate fixation is a safe and reliable treatment option for Gustilo and Anderson type II and IIIA open distal radius fractures. By providing firm stabilisation and allowing early ROM exercise, primary volar locking plate fixation resulted in good functional and radiological outcomes. Level of Evidence: Level IV (Therapeutic).
abstract_id: PUBMED:25149898
Biomechanical comparison of bicortical locking versus unicortical far-cortex-abutting locking screw-plate fixation for comminuted radial shaft fractures. Purpose: To provide comparative biomechanical evaluation of bicortical locking versus unicortical-abutting locking screw-plate fixation in a comminuted radius fracture model.
Methods: A validated synthetic substitute of the adult human radius with a 1.5-cm-long segmental mid-diaphyseal defect was used in the study to simulate a comminuted fracture. Stabilization was achieved with an 8-hole locking plate and either bicortical screws or unicortical-abutting screws. The specimens were tested using nondestructive cyclical loading in 4-point bending, axial compression, and torsion to determine stiffness and displacement and subsequently in 4-point bending to assess load to failure.
Results: There were no statistically significant differences between bicortical versus unicortical-abutting locking screw fixation in nondestructive 4-point bending, axial compression, and torsion. Both locking screw constructs also demonstrated comparable 4-point bending loads to failure.
Conclusion: The biomechanical equivalence between bicortical locking versus unicortical-abutting locking screw-plate fixation suggests that adequate locking plate fixation can be achieved without perforation of the far cortex. The abutment of the screw tip within the far cortex enhances the unicortical screw positional stability and thereby effectively opposes the displacement of the screw when subjected to bending or axial or rotational loads.
Clinical Relevance: Unicortical-abutting screws potentially offer several clinical advantages. They eliminate the need for drilling through the far cortex and thereby a risk of adjacent neurovascular injury or soft tissue structure compromise. They eliminate the issues associated with symptomatic screw prominence. They can decrease risk of refracture after screw-plate removal. In case of revision plating, they permit conversion to bicortical locking screws through the same near-cortex screw holes, which eliminates the need for a longer or repositioned plate.
abstract_id: PUBMED:29734893
Mid-Term Functional Outcome after Volar Locking Plate Fixation of Distal Radius Fractures in Elderly Patients. Background: The volar locking plate is frequently used in the fixation of unstable distal radius fractures, but despite this there is a paucity of mid to long term outcome studies. The purpose of this study was to investigate the mid-term functional outcomes of elderly patients treated with a volar locking plate for unstable distal radius fractures.
Methods: Thirty-two patients with a mean age of 74.1 (range, 65-85) years were followed for a mean of 39.1 (range, 30-81) months. Patients with follow-up periods of < 24 months were excluded from this study to investigate the mid-term clinical outcomes. The Mayo wrist score (MWS), grip strength and wrist range of motion were retrospectively reviewed at 12 months, 24 months and the latest follow-up (mean 39.1 months). Osteoarthritis status according to the system of Knirk and Jupiter was assessed at 24 months.
Results: Significant improvements in MWS and grip strength were observed between 12 and 24 months but not between 24 months and the final follow-up. There was no significant difference in wrist range of motion between 12 and 24 months. The MWS of 14 patients with radiographic signs of osteoarthritis was not significantly different from that of 18 patients without radiographic signs of osteoarthritis.
Conclusions: Elderly patients treated with the volar locking plate showed improved MWS and grip strength postoperatively after 12 months. Improvement in grip strength was slower than range of motion.
abstract_id: PUBMED:36708181
Comparison of surgical treatments for distal ulna fracture when combined with anterior locking plate fixation of distal radius in the over 70 age group. We conducted a retrospective multicentre study to compare the clinical and radiographic outcomes, and complications of three surgical treatments of distal ulna fracture (DUF) when combined with anterior locking plate fixation for distal radial fracture (DRF) in patients over 70 years of age. We identified 1521 patients over 70 years of age who were diagnosed as having DRF and who underwent anterior locking plate fixation between 2015 and 2020, among which 122 cases of DUF were analysed. Three surgical treatment options for DUF were identified in this cohort: K-wire fixation (Group K), locking plate fixation (Group L) and Darrach procedure (Group D). The results of the analysis showed the total immobilization period in Group D to be the shortest among the three treatments. Functional outcomes were superior, and the rate of complications were smaller in Group D than in Group L. In addition, rotational range of motion was larger in Group D and Group L compared with Group K. In patients who are 70 years of age or older with combined unstable DRF and highly comminuted or displaced DUF, the Darrach procedure for DUF seems to be the most useful and reasonable treatment option once the fracture of the distal radius has been rigidly fixed.Level of evidence: IV.
abstract_id: PUBMED:34607738
Efficacy of Hand Therapy After Volar Locking Plate Fixation of Distal Radius Fracture in Middle-Aged to Elderly Women: A Randomized Controlled Trial. Purpose: This study aimed to evaluate the efficacy of hand therapy after volar locking plate fixation of distal radius fractures in middle-aged to elderly women.
Methods: Fifty-seven patients diagnosed with distal radius fractures who had undergone volar plate fixation were enrolled in a prospective, randomized controlled trial. Patients were randomized into the hand therapy and independent exercise (IE) groups, in which they exercised independently under the surgeon's direction with and without hand therapy, respectively. The primary outcome was the functional outcome measured using the Disability of Arm, Shoulder, and Hand questionnaire after 6 weeks. The secondary outcomes were functional outcomes measured using the Patient-Rated Wrist Evaluation questionnaire, active and passive ranges of motion (ROMs), grip strength, key pinch strength, and pain measured on a visual analog scale. Patients were followed up in the outpatient department at 2, 4, 6, and 8 weeks and at 3 and 6 months.
Results: The Disability of Arm, Shoulder, and Hand scores were significantly lower in the hand therapy group at 6 weeks after surgery (12.5 vs 19.4 in the IE group). The postoperative visual analog scale pain scores were significantly lower in the hand therapy group at 2, 4, and 6 weeks (10.2 vs 17.6 in the IE group). The active ROM of the wrist flexion-extension arc at 2, 4, 6, and 8 weeks; active ROM of the pronation-supination arc at 6 and 8 weeks; and passive ROM of the wrist flexion-extension arc at 2, 4, and 8 weeks were significantly greater in the hand therapy group.
Conclusions: Hand therapy improved the outcomes after volar locking plate fixation for distal radius fracture in middle-aged to elderly women at 8 weeks after surgery. No significant between-group differences were observed in any functional outcome measure at 6 months after surgery, as previously reported.
Type Of Study/level Of Evidence: Therapeutic II.
abstract_id: PUBMED:32041567
Surgical outcomes of elderly patients aged more than 80 years with distal radius fracture: comparison of external fixation and locking plate. Background: To compare the outcomes after surgical intervention, including external fixation (EF) with the optional addition of K-pins or open reduction and internal fixation (ORIF) with a volar locking plate (VLP), in patients with distal radius fracture aged > 80 years.
Methods: We retrospectively reviewed 69 patients aged > 80 years with a distal radius fracture who were treated surgically from 2011 to 2017. Their demographic data and complications were recorded. Preoperative, postoperative, and last follow-up plain films were analyzed. The functional outcomes of wrist range of motion were also evaluated.
Results: 41 patients were treated with EF with the optional addition of K-pins, while 28 patients were treated with ORIF with a VLP. The radiological parameters, including ulnar variance and radial inclination, at the last follow-up were significantly more acceptable in the VLP group (p = 0.01, p = 0.03, respectively). The forearm supination was significantly better in patients treated with VLP (p = 0.002). The overall incidence of complications was lower in the VLP group (p = 0.003).
Conclusion: VLP provides better radiological outcomes, wrist supination and lower complication rates than EF. Therefore, although EF is still widely used because of its acceptable results and easy application, we recommend VLP as a suitable treatment option for distal radius fracture in the geriatric population aged > 80 years.
abstract_id: PUBMED:37885613
Comparison of the cast and volar locking plate in the treatment of intra-articular distal radius fractures in elderly patients over 75 years of age. Background: We aimed to compare radiologically and clinically closed reduction circular casting (CRCC) and volar locking plate (VLP) treatment options in elderly patients over 75 years with intraarticular distal radius fracture (DRF).
Material And Method: Elderly patients aged ≥75 years with at least one year of follow-up from the clinic archive who underwent conservative (CRCC) and surgical (VLP) treatment for AO type C DRF were retrospectively included in the study. Thirty-seven patients treated conservatively with CRCC and 31 treated surgically with VLP were compared as two groups. Quick Disability of the Arm, Shoulder, and Hand (QDASH) and Visual Analog scores (VAS) were evaluated functionally. In addition, a rapid assessment of physical activity (RAPA) score evaluation was performed since these patients were elderly. In addition, radiologic findings, wrist range of motion, and complications were evaluated.
Results: There was no difference between the CRCC and VLP groups regarding QDASH, VAS, and RAPA scores at the last follow-up. Radiologically, there were significant differences between the groups regarding radial height, volar tilt, radial inclination, and joint stepping (p < 0.001 for each).
Conclusion: In elderly patients over 75 years of age with intra-articular DRF, surgical treatment with VLP results in better radiologic results compared to conservative treatment with CRCC, although both treatment options lead to similar results in terms of functional outcomes.
abstract_id: PUBMED:37762943
Ulnar-Sided Sclerosis of the Lunate Does Not Affect Outcomes in Patients Undergoing Volar Locking Plate Fixation for Distal Radius Fracture. Background And Aim: Radial shortening after distal radius fracture causes ulnar impaction, and a mild reduction loss of radial height occurs even after volar locking plate fixation. This study aimed to determine whether preoperative ulnar-sided sclerosis affects clinical outcomes after volar locking plate fixation for distal radius fracture (DRF).
Method: Among 369 patients who underwent volar locking plate fixation for DRF, 18 with preoperative ulnar-sided sclerosis of the lunate were included in Group A and compared to a 1:4 age-, sex- and fracture-pattern-matched cohort without sclerosis (72 patients, Group B). The visual analog scale (VAS), Disabilities of the Arm, Shoulder, and Hand (DASH) score, and grip strength were assessed as clinical outcomes. Ulnar variance (UV), radial inclination, radial length, and volar tilt at two weeks after surgery and the final follow-up were measured as radiographic outcomes.
Results: The mean VAS and DASH scores and grip strength did not differ between the two groups. The mean UV at two weeks after surgery and the last follow-up was significantly higher in Group A. The mean changes in UV were +0.62 mm in Group A and +0.48 mm in Group B. There were no significant intergroup differences. Neither UV nor its changes showed any association with DASH and VAS scores.
Conclusions: Preoperative ulnar-sided sclerosis of the lunate did not affect clinical outcomes after volar locking plate fixation, even if UV increased postoperatively.
abstract_id: PUBMED:29228851
Volar locking plate fixation versus external fixation of distal radius fractures: a meta-analysis. This meta-analysis evaluates reported outcomes of volar locking plates versus external fixation for management of distal radius fractures. A comprehensive literature search was carried out using PubMed, embase, MEDLINE, and the Cochrane Library. Multiple outcomes were analysed: study characteristics, objective/subjective outcome measures, radiographic parameters, and complication rates. Nine studies containing 780 participants met the outlined inclusion criteria. Disabilities of the arm, shoulder, and hand scores significantly favoured volar locking plates, but only at 3 months was the difference clinically meaningful. Grip strength, extension, and supination were better in the volar locking plates group in the early post-operative period, but were similar at 12 months. Ulnar variance was better restored by volar locking plates. The volar locking plates group was associated with higher re-operation rate, and the external fixation group had a higher infection rate. Current literature suggests that volar locking plates can provide better subjective scores and radiographic parameters, especially in the first 3 months, but may be associated with a higher re-operation rate.
Answer: Locking plate fixation appears to be a better option than casting for distal radius fractures in elderly people in several aspects. A historical cohort study comparing locking plate fixation to cast immobilization in active Chinese elderly people aged 61 to 80 years found that locking plate fixation resulted in better functional outcomes based on the quick Disabilities of the Arm, Shoulder, and Hand score (PUBMED:26139690). Additionally, a study on primary volar locking plate fixation of open distal radius fractures reported good functional and radiological outcomes, suggesting that this method is a safe and reliable treatment option (PUBMED:35674264).
Further supporting the use of locking plate fixation, a study on volar locking plate fixation of distal radius fractures in elderly patients showed significant improvements in the Mayo wrist score and grip strength postoperatively after 12 months (PUBMED:29734893). Moreover, a retrospective multicentre study indicated that in patients over 70 years of age with combined unstable distal radial fracture and highly comminuted or displaced distal ulna fracture, the Darrach procedure for distal ulna fracture seems to be the most useful and reasonable treatment option once the fracture of the distal radius has been rigidly fixed with an anterior locking plate (PUBMED:36708181).
A randomized controlled trial also found that hand therapy improved outcomes after volar locking plate fixation for distal radius fracture in middle-aged to elderly women at 8 weeks after surgery (PUBMED:34607738). In patients aged more than 80 years, volar locking plate (VLP) provided better radiological outcomes, wrist supination, and lower complication rates than external fixation (EF), making VLP a suitable treatment option for distal radius fracture in the geriatric population (PUBMED:32041567).
A comparison of cast and volar locking plate in the treatment of intra-articular distal radius fractures in elderly patients over 75 years of age found that although VLP resulted in better radiologic results compared to conservative treatment with CRCC, both treatment options led to similar results in terms of functional outcomes (PUBMED:37885613).
Lastly, a meta-analysis evaluating outcomes of volar locking plates versus external fixation for management of distal radius fractures concluded that volar locking plates can provide better subjective scores and radiographic parameters, especially in the first 3 months, but may be associated with a higher re-operation rate (PUBMED:29228851).
In summary, the evidence suggests that locking plate fixation generally provides better functional and radiological outcomes compared to casting in elderly patients with distal radius fractures, although individual patient factors and specific fracture characteristics must be considered.
Instruction: Aggressive pancreatic resection for primary pancreatic neuroendocrine tumor: is it justifiable?
Abstracts:
abstract_id: PUBMED:17434366
Aggressive pancreatic resection for primary pancreatic neuroendocrine tumor: is it justifiable? Background: Benign and malignant pancreatic neuroendocrine tumors (PNETs) are rare, and long-term outcome is generally poor without surgical intervention. The aim of the study was to assess whether aggressive pancreatic resection is justifiable for patients with PNET.
Methods: All consecutive patients who had undergone major pancreatic resection from January 1997 through January 2005 were reviewed and analyzed.
Results: There were 33 patients (16 male and 17 female) with a mean age of 53 years. Five patients had multiple endocrine neoplasia syndrome, and 1 patient had von Hippel-Lindau syndrome. There were 20 benign (9 functional) and 13 malignant (6 functional) neoplasms. Mean tumor size was 4.2 cm, and multiple tumors were noted in 10 patients. Eight patients (25%) underwent pancreaticoduodenectomy, and 25 patients (76%) underwent distal pancreatectomy (extended distal pancreatectomy in 4 and splenectomy in 20 patients). Regional lymph node involvement was present in 10 patients (30%), and 6 patients (18%) had liver metastasis. Four patients (12%) underwent concurrent resection of other organs because of disease extension. Median intraoperative blood loss was 500 mL. Perioperative morbidity was 36%, and mortality was 3%. Symptomatic palliation was complete in 93% (14/15 patients) and partial in 1 patient because of nonresectable hepatic disease. Median hospital stay was 11.5 days. After median follow-up of 36 months, there were no local recurrences. The 1-, 3-, and 5-year overall survival rates for patients with benign versus malignant neoplasms were 100% vs. 92%, 89% vs. 64%, and 89% vs. 36% (P = .01), respectively. The 1-, 3-, and 5-year disease progression rates for patients with malignant neoplasms were 13%, 63%, and 100%, respectively (P < .0001).
Conclusions: Aggressive pancreatic resection for PNET can be performed with low perioperative mortality and morbidity. Unlike available nonoperative therapy, this approach offers an excellent means of symptomatic palliation and local disease control. In patients with malignant PNET, metastatic recurrence is not uncommon and will usually require additional multimodality therapy. When possible, an aggressive approach to PNET is justified to optimize palliation and survival.
abstract_id: PUBMED:18813199
Aggressive surgical resection in the management of pancreatic neuroendocrine tumors: when is it indicated? Background: Pancreatic neuroendocrine tumors (PNETs) comprise a heterogeneous group of neoplasms for which treatment is variable, depending on the clinical stage. Despite this diversity, surgery remains the gold standard in the management of PNETs. This paper discusses whether aggressive surgical intervention is indicated for PNETs and investigates what prognostic factors may assist in predicting which patients with invasive disease will benefit most from surgical intervention.
Methods: A review was conducted of large surgical series reported in the English literature over the last 10 years as they pertain to current surgical intervention in PNETs and of prognostic factors related to surgical outcome and survival.
Results: Improved survival can be achieved with aggressive surgical management of PNETs. The presence of hepatic metastases is not a contraindication to surgical resection of the primary PNET. Results of series that reported prognostic factors are heterogeneous.
Conclusions: Aggressive surgical resection for selected individuals with PNETs can be performed safely and may improve both symptomatic disease and overall survival. Consideration for resection of primary PNETs should be given to patients with treatable hepatic metastases. Prognostic indices such as tumor differentiation and ability to achieve R0/R1 resection have been linked to survival outcome in PNETs and should be considered when planning aggressive surgical management for this disease.
abstract_id: PUBMED:33718210
Microscopic Invasion of Nerve Is Associated With Aggressive Behaviors in Pancreatic Neuroendocrine Tumors. Objectives: The role of neural invasion has been reported in cancers. Few studies also showed that neural invasion was related to survival rate in patients with pancreatic neuroendocrine tumor (PNET). The aim of this study is to explore the association between neural invasion and aggressive behaviors in PNET.
Methods: After excluding patients who had only a biopsy and those with missing histological data, a total of 197 patients with PNET who underwent surgery were retrospectively analyzed. Demographic and histological data were obtained. Aggressive behavior was defined based on extra-pancreatic extension, including vascular invasion, organ invasion, and lymph node metastases. Logistic regression analyses were used to identify risk factors for aggressive behavior. Receiver operating characteristic (ROC) curves were used to assess the performance of nomograms in evaluating aggressive behavior of PNET.
Results: The prevalence of neural invasion in the cohort was 10.1% (n = 20). The prevalence of lymph node metastasis, organ invasion, and vascular invasion in PNET patients with neural invasion was higher than those in patients without neural invasion (p < 0.05). Neural invasion was more common in grade 3 (G3) tumors than G1/G2 (p < 0.01). Tumor size, tumor grade, and neural invasion were independent associated factors of aggressive behavior (p < 0.05) after adjusting for possible cofounders in total tumors and G1/G2 tumors. Two nomograms were developed to predict the aggressive behavior. The area under the ROC curve was 0.84 (95% confidence interval (CI): 0.77-0.90) for total population and was 0.84 (95% CI: 0.78-0.89) for patients with G1/G2 PNET respectively.
Conclusions: Neural invasion is associated with aggressive behavior in PNET. Nomograms based on tumor size, grade and neural invasion show acceptable performances in predicting aggressive behavior in PNET.
abstract_id: PUBMED:26610782
Minimally Invasive Techniques for Resection of Pancreatic Neuroendocrine Tumors. Surgical resection remains the treatment of choice for primary pancreatic neuroendocrine tumors (PNETs), because it is associated with increased survival. Minimally invasive procedures are a safe modality for the surgical treatment of PNETs. In malignant PNETs, laparoscopy is not associated with a compromise in terms of oncologic resection, and provides the benefits of decreased postoperative pain, better cosmetic results, shorter hospital stay, and a shorter postoperative recovery period. Further prospective, multicenter, randomized trials are required for the analysis of these minimally invasive surgical techniques for the treatment of PNETs and their comparison with traditional open pancreatic surgery.
abstract_id: PUBMED:35789277
Highlights of pancreatic surgery: extended indications in pancreatic neuroendocrine tumors. Advanced pancreatic neuroendocrine tumors (paNET) are mostly characterized by infiltration of vascular structures and/or neighboring organs. The indication for resection in these cases should be judged on the possibility of an R0 resection. Although the data for this rare entity are limited, small case series have shown a significant survival advantage in patients who underwent radical resection in locally advanced stages of paNET. Both vascular reconstruction and multivisceral resection, when performed at experienced centers, should be considered as curative treatment options. The distinctive biological behavior of paNET and the often young patient age justify a much more aggressive approach than in pancreatic ductal adenocarcinoma.
abstract_id: PUBMED:1455305
Prospective study of aggressive resection of metastatic pancreatic endocrine tumors. Background: Because metastatic pancreatic endocrine tumors (MPET) have a poor prognosis, 17 patients with potentially resectable MPET were prospectively studied to define the efficacy of aggressive resection.
Methods: Patients underwent resection when the full extent of MPET was deemed operable after imaging studies were obtained. Two patients underwent three reoperations for recurrent tumor.
Results: MPET were completely excised in 16 of 20 cases by major resections of liver, viscera, and nodes, with no operative mortality. Survival was 87% at 2 years and 79% at 5 years with mean follow-up of 3.2 years. Median imaging disease-free interval was 1.8 years, and four of 17 patients remain biochemically cured. After aggressive resection patients with MPET limited in extent had higher survival than patients with extensive MPET (p < 0.019). In a nonrandomized cohort of 25 patients with inoperable tumor, survival was 60% at 2 years and 28% at 5 years.
Conclusions: In select patients MPET can be resected safely with a favorable outcome; most patients will experience recurrence, but some may be cured. Resection of extensive MPET does not appear to improve survival. Resection of limited MPET should be considered as life-extending and potentially curative therapy.
abstract_id: PUBMED:34985731
Landmark Series: Importance of Pancreatic Resection Margins. An important goal of cancer surgery is to achieve negative surgical margins and remove all disease completely. For pancreatic neoplasms, microscopic margins may remain positive despite gross removal of the palpable mass, and surgeons must then consider extending resection, even to the point of completion pancreatectomy, an option that renders the patient with significant adverse effects related to exocrine and endocrine insufficiency. Counterintuitively, extending resection to ensure clear margins may not improve patient outcome. Furthermore, the goal of improving survival by extending the resection may not be achieved, as an initial positive margin may indicate more aggressive underlying tumor biology. There is a growing body of literature on this topic, and this landmark series review will examine the key publications that guide our management for resection of pancreatic ductal adenocarcinoma, intraductal papillary mucinous neoplasms, and pancreatic neuroendocrine tumors.
abstract_id: PUBMED:31197692
Resection Versus Observation of Small Asymptomatic Nonfunctioning Pancreatic Neuroendocrine Tumors. Background: Management of asymptomatic, nonfunctioning small pancreatic neuroendocrine tumors (PNETs) is controversial because of their overall good prognosis, and the morbidity and mortality associated with pancreatic surgery. Our aim was to compare the outcomes of resection with expectant management of patients with small asymptomatic PNETs.
Methods: Retrospective review of patients with nonfunctioning asymptomatic PNETs < 2 cm that underwent resection or expectant management at the Tel-Aviv Medical Center between 2001 and 2018.
Results: Forty-four patients with small asymptomatic, biopsy-proven low-grade PNETs with a KI67 proliferative index < 3% were observed for a mean of 52.48 months. Gallium67DOTATOC-PET scan was completed in 32 patients and demonstrated uptake in the pancreatic tumor in 25 (78%). No patient developed systemic metastases. Two patients underwent resection due to tumor growth, and true tumor enlargement was evidenced in final pathology in one of them. Fifty-five patients underwent immediate resection. Significant complications (Clavien-Dindo grade ≥ 3) developed in 10 patients (18%), mostly due to pancreatic leak, and led to one mortality (1.8%). Pathological evaluation revealed lymphovascular invasion in 1 patient, lymph node metastases in none, and a Ki67 index ≥ 3% in 5. No case of tumor recurrence was diagnosed after mean follow-up of 52.8 months.
Conclusions: No patients with asymptomatic low-grade small PNETs treated by expectant management were diagnosed with regional or systemic metastases after a 52.8-month follow-up. Local tumor progression rate was 2.1%. Surgery has excellent long-term outcomes, but it harbors significant morbidity and mortality. Observation can be considered for selected patients with asymptomatic, small, low grade PNETs.
abstract_id: PUBMED:35093860
The Role of Surgery for Pancreatic Neuroendocrine Tumors. Pancreatic neuroendocrine tumors (PNETs) arise from endocrine pancreatic cells and comprise 3-5% of pancreatic cancers. Surgical resection is the only potentially curative option for PNETs. Surgical candidates should be carefully selected according to tumor functionality, size, location, grade, and stage. Current guidelines state that patients with neuroendocrine carcinoma may not be surgical candidates due to aggressive tumor behavior and poor prognosis, while in cases of PNET with unresectable metastatic disease, resection may be of benefit in certain patients. The current guidelines recommend resection of any size of functional PNETs and of non-functional PNETs >2 cm. Watchful waiting is recommended for patients with non-functional PNETs <1 cm. Further evidence is needed to determine whether surgery for non-functional PNETs of 1-2 cm would be of benefit or if surgery should be individualized. This review aimed to discuss the current literature on the management of PNETs and highlight the utility of surgery in treatment.
abstract_id: PUBMED:29731245
The prognostic impact of primary tumor resection in pancreatic neuroendocrine tumors with synchronous multifocal liver metastases. Background: Whether primary tumor resection benefits patients with synchronous multifocal liver metastases from pancreatic neuroendocrine tumors remains controversial. We investigated whether primary tumor resection significantly affects survival in this study.
Methods: A retrospective study of patients with synchronous multifocal liver metastases from pancreatic neuroendocrine tumors between 1998 and 2016 was performed. Patient demographics, operation details, adjuvant treatment, and pathological and survival information were collected, and relevant clinical-pathological parameters were assessed in univariate and multivariate survival analyses.
Results: Sixty-three patients were included in this study, including 35 who underwent primary tumor resection. The median survival time and 5-year survival rate of this cohort were 50 months and 44.5%, respectively. Median survival time in the resected group was significantly longer at 72 months than that of 32 months in the nonresected group (p = 0.010). Multivariate analysis showed that primary tumor surgery was a significant independent prognostic factor (HR 0.312, 95% CI: 0.128-0.762, p = 0.011).
Conclusions: Primary tumor resection significantly benefits patients with synchronous multifocal liver metastases from pancreatic neuroendocrine tumors.
Answer: Aggressive pancreatic resection for primary pancreatic neuroendocrine tumors (PNETs) is considered justifiable based on several studies that have shown positive outcomes associated with this approach.
One study found that aggressive pancreatic resection for PNETs can be performed with low perioperative mortality and morbidity, offering excellent symptomatic palliation and local disease control. The study also noted that while metastatic recurrence is not uncommon in malignant PNETs and may require additional multimodality therapy, an aggressive approach is justified to optimize palliation and survival (PUBMED:17434366).
Another study supported the idea that aggressive surgical management of PNETs can improve both symptomatic disease and overall survival. It suggested that resection of primary PNETs should be considered even in patients with treatable hepatic metastases, and that prognostic indices such as tumor differentiation and the ability to achieve R0/R1 resection are linked to survival outcomes in PNETs (PUBMED:18813199).
Additionally, a study on the role of neural invasion in PNETs found that neural invasion is associated with aggressive behavior in PNETs. The study developed nomograms based on tumor size, grade, and neural invasion that show acceptable performance in predicting aggressive behavior in PNETs, further supporting the need for aggressive surgical intervention in certain cases (PUBMED:33718210).
Furthermore, minimally invasive techniques for resection of PNETs have been shown to be a safe modality for the surgical treatment of these tumors, offering benefits such as decreased postoperative pain, shorter hospital stays, and quicker recovery periods without compromising oncologic resection (PUBMED:26610782).
In cases of advanced PNETs characterized by infiltration of vascular structures and/or neighboring organs, aggressive resection, including vascular reconstruction and multivisceral resection, should be considered as curative treatment options, especially given the special biological behavior of PNETs and the often younger patient age (PUBMED:35789277).
A prospective study of aggressive resection of metastatic pancreatic endocrine tumors also demonstrated that select patients with metastatic PNETs can be resected safely with favorable outcomes, and some may even be cured (PUBMED:1455305).
In summary, the evidence suggests that aggressive pancreatic resection for PNETs is justifiable and can lead to improved survival and symptomatic relief, particularly when the tumors are resectable and the patient is a suitable candidate for surgery. However, it is important to consider the individual patient's situation, including tumor functionality, size, location, grade, and stage, when deciding on the aggressiveness of the surgical approach (PUBMED:35093860).
Moreover, the importance of pancreatic resection margins has been highlighted, indicating that achieving negative surgical margins is a critical goal of cancer surgery. While extending resection to ensure clear margins may not always improve patient outcomes, it is a consideration that must be weighed against the potential for more aggressive underlying tumor biology (PUBMED:34985731).
For small asymptomatic nonfunctioning PNETs, a study comparing outcomes of resection with expectant management found that observation can be considered for selected patients with asymptomatic, small, low-grade PNETs, as no patients in the observation group developed regional or systemic metastases after a follow-up period. However, surgery has excellent long-term outcomes but is associated with significant morbidity and mortality (PUBMED:31197692).
Finally, a study on the prognostic impact of primary tumor resection in PNETs with synchronous multifocal liver metastases found that primary tumor resection significantly benefits patients, with a longer median survival time observed in the resected group compared to the nonresected group. Primary tumor surgery was identified as a significant independent prognostic factor (PUBMED:29731245).
In conclusion, the justification for aggressive pancreatic resection for PNETs is supported by evidence showing improved survival, effective symptomatic palliation, and the potential for curative outcomes in selected patients. The decision to pursue aggressive resection should be individualized based on a thorough evaluation of the patient's clinical status, tumor characteristics, and potential prognostic factors.
Instruction: MRI of the epididymis: can the outcome of vasectomy reversal be predicted preoperatively?
Abstracts:
abstract_id: PUBMED:24951200
MRI of the epididymis: can the outcome of vasectomy reversal be predicted preoperatively? Objective: The purpose of this study is to describe the MRI findings seen with tubular ectasia of the epididymis and investigate whether MRI may predict vasal/epididymal tubular occlusion before vasectomy reversal.
Materials And Methods: First, we compared epididymal T1 signal intensity (measured as percentage change relative to ipsilateral testis) in 24 patients with sonographically established tubular ectasia compared with 22 control patients (sonographically normal epididymides). Second, in a subset of patients with tubular ectasia who subsequently underwent surgery to restore fertility (n = 10), we examined the relationship between epididymal T1 signal intensity and surgical outcome. Vasovasostomy (simple vas deferens reanastomosis with high success rate) was possible when viable sperm were detected in the vas deferens intraoperatively. When no sperm were detected, vasal/epididymal tubular occlusion was inferred and vasoepididymostomy (vas deferens to epididymal head anastomosis, a technically challenging procedure with poorer outcome) was performed.
Results: In tubular ectasia, we found increased epididymal T1 signal intensity (0-77%) compared with normal epididymides (-27 to 20%) (p < 0.0001). In patients with tubular ectasia who underwent surgery (n = 10), we found higher T1 epididymal signal intensity in cases of vasal/epididymal occlusion (0-70%) relative to cases in which vasal/epididymal patency was maintained (0-10%) (p = 0.01). By logistic regression, relative epididymal T1 signal intensity increase above 19.4% corresponded to greater than 90% probability of requiring vasoepididymostomy.
Conclusion: Increased epididymal T1 signal intensity (likely due to proteinaceous material lodged within the epididymal tubules) at preoperative MRI in patients undergoing vasectomy reversal suggests vasal/epididymal tubular occlusion and requirement for vasoepididymostomy rather than vasovasostomy.
abstract_id: PUBMED:2073058
The psychological and physical success of vasectomy reversal. Forty men who had their vasectomy reversed were psychoanalytically examined. The change of mind was above all a result of new relationships: initially no (more) children were wanted, then (more) children were. The stability of the relationship at the time of vasectomy had been wrongly evaluated, which must subsequently influence any advice given. Between 22 and 26 months after the reversal there was a telephone follow-up. Fertilization occurred in 32.1% of the cases, and sperm was present in 77.4% (according to the men). The time between vasectomy and reversal was 4.8 years on average, the longest period being 8 years. Fertilization followed even when anastomosis was only possible on one side in the vas deferens area or on both sides in the caput area of the epididymis.
abstract_id: PUBMED:22099990
Nomogram to preoperatively predict the probability of requiring epididymovasostomy during vasectomy reversal. Purpose: Up to 6% of men who undergo vasectomy may later undergo vasectomy reversal. Most men require vasovasostomy but a smaller subset requires epididymovasostomy. Outcomes of epididymovasostomy depend highly on specialized training in microsurgery and, if predicted preoperatively, might warrant referral to a specialist in this field. We created a nomogram based on preoperative patient characteristics to better predict the need for epididymovasostomy.
Materials And Methods: We evaluated patients who underwent primary vasectomy reversal during a 5-year period. Preoperative and intraoperative patient data were collected in a prospectively maintained database. We evaluated the ability of age, years since vasectomy, vasectomy site, epididymal fullness and granuloma presence or absence to preoperatively predict the need for epididymovasostomy in a given patient. The step-down method was used to create a parsimonious model, on which a nomogram was created and assessed for predictive accuracy.
Results: Included in the study were 271 patients with a mean age of 42 years. Patient age was not positively associated with epididymovasostomy. Mean time from vasectomy to reversal was 9.7 years. Time to reversal and a sperm granuloma were selected as important predictors of epididymovasostomy in the final parsimonious model. The nomogram achieved a bias corrected concordance index of 0.74 and it was well calibrated.
Conclusions: Epididymovasostomy can be preoperatively predicted based on years since vasectomy and a granuloma on physical examination. Urologists can use this nomogram to better inform patients of the potential need for epididymovasostomy and whether specialist referral is needed.
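As a hedged illustration of how a two-predictor nomogram like the one above reduces to a predicted probability and a concordance index, the sketch below uses hypothetical coefficients (the published nomogram's coefficients are not given in the abstract) and toy patient data.

import math
from itertools import product

# Hypothetical coefficients for the two predictors retained in the abstract's
# parsimonious model (years since vasectomy, sperm granuloma); the values are
# illustrative, not the published nomogram.
INTERCEPT = -3.2
B_YEARS = 0.18        # per year since vasectomy
B_GRANULOMA = -0.9    # presence of a sperm granuloma lowers the predicted risk

def p_epididymovasostomy(years_since_vasectomy, granuloma):
    z = INTERCEPT + B_YEARS * years_since_vasectomy + B_GRANULOMA * granuloma
    return 1.0 / (1.0 + math.exp(-z))

def concordance_index(probs, outcomes):
    """Fraction of (event, non-event) pairs ranked correctly (c-statistic)."""
    pairs = concordant = ties = 0
    for (p1, y1), (p0, y0) in product(zip(probs, outcomes), repeat=2):
        if y1 == 1 and y0 == 0:
            pairs += 1
            if p1 > p0:
                concordant += 1
            elif p1 == p0:
                ties += 1
    return (concordant + 0.5 * ties) / pairs

if __name__ == "__main__":
    # Toy data: (years since vasectomy, granuloma present, needed EV)
    patients = [(4, True, 0), (7, False, 0), (12, False, 1), (15, True, 0),
                (18, False, 1), (20, False, 1), (9, True, 0), (14, False, 1)]
    probs = [p_epididymovasostomy(y, g) for y, g, _ in patients]
    outcomes = [ev for _, _, ev in patients]
    print("Predicted probabilities:", [round(p, 2) for p in probs])
    print("c-index on toy data:", round(concordance_index(probs, outcomes), 2))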
abstract_id: PUBMED:24243789
The need for epididymovasostomy at vasectomy reversal plateaus in older vasectomies: a study of 1229 cases. Vasectomy reversal involves either vasovasostomy (VV) or epididymovasostomy (EV), and rates of epididymal obstruction and EV increase with time after vasectomy. However, as older vasectomies may not require EV for successful reversal, we hypothesized that sperm production falls after vasectomy and can protect the system from epididymal blowout. Our objective was to define how the need for EV at reversal changes with time after vasectomy through a retrospective review of consecutive reversals performed by three surgeons over a 10-year period. Vasovasostomy was performed with Silber score 1-3 vasal fluid. EVs were performed with Silber score 4 (sperm fragments; creamy fluid) or 5 (sperm absence) fluid. Reversal procedure type was correlated with vasectomy and patient age. Post-operative patency rates, total spermatozoa and motile sperm counts in younger (<15 years) and older (>15 years) vasectomies were assessed. Simple descriptive statistics determined outcome relevance. Among 1229 patients, 406 had either unilateral (n = 252) or bilateral EVs (n = 154), constituting 33% (406/1229) of reversals. Mean patient age was 41.4±7 years (range 22-72). Median vasectomy interval was 10 years (range 1-38). Overall sperm patency rate after reversal was 84%. The rate of unilateral (EV/VV) or bilateral EV increased linearly in vasectomy intervals of 1-22 years at 3% per year, but plateaued at 72% in vasectomy intervals of 24-38 years. Sperm counts were maintained with increasing time after vasectomy, but motile sperm counts decreased significantly (p < 0.001). Pregnancy, secondary azoospermia, varicocoele and sperm granuloma were not assessed. In conclusion, and contrary to conventional thinking, the need for EV at reversal increases with time after vasectomy, but this relationship is not linear. EV rates plateau 22 years after vasectomy, suggesting that protective mechanisms ameliorate epididymal 'blowout'. Upon reversal, sperm output is maintained with time after vasectomy, but motile sperm counts decrease linearly, suggesting epididymal dysfunction influences semen quality after reversal.
abstract_id: PUBMED:3811050
Vasectomy reversal. A vasovasostomy may be performed on an outpatient basis with local anesthesia, but also may be performed on an outpatient basis with epidural or general anesthesia. Local anesthesia is preferred by most of my patients, the majority of whom choose this technique. With proper preoperative and intraoperative sedation, patients sleep lightly through most of the procedure. Because of the length of time often required for bilateral microsurgical vasoepididymostomy, epidural or general anesthesia and overnight hospitalization are usually necessary. Factors influencing the preoperative choice for vasovasostomy or vasoepididymostomy in patients undergoing vasectomy reversal are considered. The preoperative planned choice of vasovasostomy or vasoepididymostomy for patients having vasectomy reversal described herein does not have the support of all urologists who regularly perform these procedures. My present approach has evolved as the data reported in Tables 1 and 2 have become available, but it may change as new information is evaluated. However, it offers a logical method for planning choices of anesthesia and inpatient or outpatient status for patients undergoing vasectomy reversal procedures.
abstract_id: PUBMED:2199876
Vasectomy: an appraisal for the obstetrician-gynecologist. Data regarding the efficacy of vasectomy are limited, but the procedure appears to be highly effective. Efficacy may vary by the method of vas occlusion. Death attributable to vasectomy in the United States is exceedingly rare, and major perioperative morbidity is quite uncommon. No long-term adverse health effects have been documented, and much evidence supports the conclusion that vasectomy does not increase the risk of subsequent atherosclerosis. Vasectomy, like tubal sterilization, should be considered a permanent decision, because reversal surgery is expensive and requires substantial surgical expertise. Although vasectomy reversal is often successful, it cannot be guaranteed even in the best of circumstances, and when the vasectomy has caused epididymal obstruction, reversal is often unsuccessful. Vasectomy represents a safe and effective alternative to tubal sterilization for couples who decide that the male should be sterilized.
abstract_id: PUBMED:6700508
Vasectomy reversal. Review of 475 microsurgical vasovasostomies. Over a 10-year period, routine vasectomies were reversed, and a very high rate of return of patency and potency was obtained in a series of 475 patients. The patients who presented for the reversal of vasectomy were, on average, about 33 years of age and had undergone vasectomy five years previously. An original, meticulous, microsurgical technique enabled the return of sperms to the ejaculate in over 90% of patients; the subsequent pregnancy rate in their spouses was over 82% in the first two years after the operation. The reversal of routine vasectomies which have been performed in the mid-scrotal area, at least 3 cm away from the epididymis, has the best chance of success; damage to the epididymis may preclude the reconstructive procedure.
abstract_id: PUBMED:6719666
Microsurgery for vasectomy reversal and vasoepididymostomy. Accurate microsurgical techniques developed by us have allowed a considerable improvement in success rate for vasectomy reversal, and a better understanding of the pathophysiology of obstruction. This has permitted us also to treat pathologic obstructive azoospermia more effectively.
abstract_id: PUBMED:437165
Epididymal extravasation following vasectomy as a cause for failure of vasectomy reversal. Twenty-eight men undergoing vasectomy reversal who were found to have no sperm in the proximal vas fluid on one or both sides underwent microscopic epididymal exploration. In 33 of 39 cases so explored, normal sperm were found in the epididymal fluid of the corpus, despite absence of sperm in the vas fluid. Epididymal histology distal to this site revealed extensive interstitial sperm granulomas resulting from rupture of the epididymal duct. Testicular biopsy revealed normal spermatogenesis. Secondary epididymal obstructions were noted when there was copious fluid in the vas deferens proximal to the vasectomy site as well as when there was scanty fluid. It is concluded that persistent azoospermia after an accurate microscopic vasovasostomy results from the secondary epididymal obstruction induced by rupture of the epididymal duct related to the pressure increase after vasectomy.
abstract_id: PUBMED:8126819
The post-vasectomy length of the testicular vasal remnant: a predictor of surgical outcome in microscopic vasectomy reversal. To identify another preoperative predictor of surgical outcome in vasal reconstruction, we prospectively measured the post-vasectomy length of the testicular vas deferens from the cauda of the epididymis to the site of the vasectomy in 26 patients undergoing vasal reconstruction. Intraoperatively, the vasal fluid was microscopically inspected and classified as whole sperm present or whole sperm absent. The length of the testicular vasal remnant was correlated with the intraoperative status of the vasal fluid. A testicular vasal remnant length greater than 2.7 cm. predicted the presence of fluid with whole sperm present in 30 of 32 testicles (94%). A testicular vasal length of less than 2.7 cm. predicted the presence of fluid without whole sperm in 17 of 20 testicles (85%). The length of the testicular vasal remnant can be measured preoperatively and knowledge of this length may be used to advise patients regarding the likelihood of successful vasal reconstruction.
Answer: The study described in abstract PUBMED:24951200 aimed to determine if preoperative MRI findings of the epididymis could predict the outcome of vasectomy reversal. The researchers compared the T1 signal intensity of the epididymis in patients with tubular ectasia to that of control patients with normal epididymides. They found that increased epididymal T1 signal intensity was associated with tubular ectasia and suggested vasal/epididymal tubular occlusion. In a subset of patients with tubular ectasia who underwent vasectomy reversal surgery, those who required vasoepididymostomy (a more complex procedure with a lower success rate) had higher T1 signal intensity compared to those who maintained vasal/epididymal patency and could undergo vasovasostomy (a simpler procedure with a higher success rate). The study concluded that an increase in epididymal T1 signal intensity above 19.4% corresponded to a greater than 90% probability of requiring vasoepididymostomy. Thus, MRI findings of the epididymis can potentially predict the need for vasoepididymostomy over vasovasostomy in patients undergoing vasectomy reversal. |
Instruction: Is incarceration during pregnancy associated with infant birthweight?
Abstracts:
abstract_id: PUBMED:9314809
Is incarceration during pregnancy associated with infant birthweight? Objectives: This study examined whether incarceration during pregnancy is associated with infant birthweight.
Methods: Multivariable analyses compared infant birthweight outcomes among three groups of women: 168 women incarcerated during pregnancy, 630 women incarcerated at a time other than during pregnancy, and 3910 women never incarcerated.
Results: After confounders were controlled for, infant birthweights among women incarcerated during pregnancy were not significantly different from those of women never incarcerated; however, infant birthweights were significantly worse among women incarcerated at a time other than during pregnancy than among never-incarcerated women and women incarcerated during pregnancy.
Conclusions: Certain aspects of the prison environment (shelter, food, etc.) may be health-promoting for high-risk pregnant women.
abstract_id: PUBMED:20422272
Maternal incarceration during pregnancy and infant birthweight. The primary aim of this study was to examine whether incarceration during pregnancy is associated with infant birthweight. Our second objective was to illustrate the sensitivity of the relationship between infant birthweight and exposure to prison during pregnancy to the method used to measure and model this exposure. The data consisted of delivery records of 360 infants born between January 1, 2002 and December 31, 2004 to pregnant women incarcerated in Texas state prisons. Weighted linear regression, adjusting for potential confounders, was used to model infant birth weight as a function of: (1) the number of weeks of pregnancy spent incarcerated (Method A) and (2) the gestational age at admission to prison (Method B), respectively. These two exposure measures were modeled as continuous variables with and without linear spline transformation. The association between incarceration during pregnancy and infant birthweight appears strongest among infants born to women incarcerated during the first trimester and very weak to non-existent among infants born to women incarcerated after the first trimester. With Method A, but not Method B, linear spline transformation had a distinct effect on the shape of the relationship between exposure and outcome. The association between exposure to prison during pregnancy and infant birth weight appears to be positive only among women incarcerated during the first trimester of pregnancy and the relation is sensitive to the method used to measure and model exposure to prison during pregnancy.
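To make the linear spline transformation of the exposure (Method A above) concrete, the sketch below builds a single-knot spline basis for weeks of pregnancy spent incarcerated. The knot at 13 weeks (roughly the end of the first trimester) and every coefficient are hypothetical; the snippet only illustrates the mechanics of the transformation, not the study's fitted model or survey weights.

KNOT = 13.0  # weeks; hypothetical knot placement

def spline_basis(weeks_incarcerated):
    """Return (x1, x2): one slope applies below the knot, another above it."""
    x1 = min(weeks_incarcerated, KNOT)
    x2 = max(weeks_incarcerated - KNOT, 0.0)
    return x1, x2

def predicted_birthweight(weeks_incarcerated):
    intercept = 3000.0            # grams, hypothetical baseline
    b_early, b_late = 12.0, 0.5   # grams per week, hypothetical slopes
    x1, x2 = spline_basis(weeks_incarcerated)
    return intercept + b_early * x1 + b_late * x2

if __name__ == "__main__":
    for w in (0, 5, 13, 20, 30):
        print(f"{w:2d} weeks incarcerated -> predicted birthweight "
              f"{predicted_birthweight(w):.0f} g")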
abstract_id: PUBMED:35764595
Evening blue-light exposure, maternal glucose, and infant birthweight. Maternal-fetal consequences of exposure to blue-wavelength light are poorly understood. This study tested the hypothesis that evening blue-light exposure is associated with maternal fasting glucose and infant birthweight. Forty-one pregnant women (body mass index = 32.90 ± 6.35 kg/m2 ; 24-39 years old; 16 with gestational diabetes mellitus [GDM]) wore actigraphs for 7 days, underwent polysomnography, and completed study questionnaires during gestational week 30 ± 3.76. Infant birthweight (n = 41) and maternal fasting glucose (n = 30; range = 16-36 weeks) were recorded from the mothers' medical charts. Blue-light exposure was obtained from Actiwatch-Spectrum recordings. Adjusted and unadjusted linear regression analyses were performed to determine sleep characteristics associated with maternal fasting glucose and infant-birthweight. The mean fasting mid- to late-gestation glucose was 95.73 ± 24.68 mg/dl and infant birthweight was 3271 ± 436 g. In unadjusted analysis, maternal fasting glucose was associated with blue-light exposure (β = 3.82, p = 0.03). In the final model of multiple linear regression for fasting glucose, evening blue-light exposure (β = 4.00, p = 0.01) remained significant after controlling for gestational weight gain, parity, sleep duration, and GDM. Similarly, blue-light exposure was associated with infant birthweight (69.79, p = 0.006) in the unadjusted model, and remained significant (β = 70.38, p = 0.01) after adjusting for weight gain, wakefulness after sleep onset, gestational age at delivery, and GDM. Higher blue-light exposure in pregnancy is associated with higher fasting glucose and infant birthweight. Reduced use of electronic devices before bedtime is a modifiable behavior.
abstract_id: PUBMED:33914227
Paternal Jail Incarceration and Birth Outcomes: Evidence from New York City, 2010-2016. Objectives: To examine population-level associations between paternal jail incarceration during pregnancy and infant birth outcomes using objective measures of health and incarceration.
Methods: We use multivariate logistic regression models and linked records on all births and jail incarcerations in New York City between 2010 and 2016.
Results: 0.8% of live births were exposed to paternal incarceration during pregnancy or at the time of birth. After accounting for parental sociodemographic characteristics, maternal health behaviors, and maternal health care access, paternal incarceration during pregnancy remains associated with late preterm birth (OR = 1.34, 95% CI = 1.21, 1.48), low birthweight (OR = 1.39, 95% CI = 1.27, 1.53), small size for gestational age (OR = 1.35, 95% CI = 1.17, 1.57), and NICU admission (OR = 1.14, 95% CI = 1.05, 1.24).
Conclusions: We found strong positive baseline associations (p < 0.001) between paternal jail incarceration during pregnancy with probabilities of all adverse outcomes examined. These associations did not appear to be driven purely by duration or frequency of paternal incarceration. These associations were partially explained by parental characteristics, maternal health behavior, and health care. These results indicate the need to consider paternal incarceration as a potential stressor and source of trauma for pregnant women and infants.
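For readers unfamiliar with how adjusted odds ratios and their confidence intervals relate to logistic-regression coefficients, the sketch below back-calculates a coefficient and standard error from the low-birthweight estimate quoted above (OR = 1.39, 95% CI 1.27-1.53); the conversion itself is standard, but the recovered beta and SE are only illustrative.

import math

def or_with_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Back-calculated for illustration: beta = ln(OR), and the CI half-width on the
# log scale implies se ~= (ln(upper) - ln(lower)) / (2 * 1.96).
beta = math.log(1.39)
se = (math.log(1.53) - math.log(1.27)) / (2 * 1.96)

odds_ratio, lo, hi = or_with_ci(beta, se)
print(f"OR = {odds_ratio:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")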
abstract_id: PUBMED:8570465
The intergenerational relationship between mother's birthweight, infant birthweight and infant mortality in black and white mothers. The relationship between the birthweight of white and black mothers and the outcomes of their infants was examined using the 1988 National Maternal and Infant Health Survey. White and black women who were low birthweight themselves were at increased risk of delivering very low birthweight (VLBW), moderately low birthweight (MLBW), extremely preterm and small size for gestational age (SGA) infants. Adjustment for the confounding effects of prepregnant weight and height reduced the risks of all these outcomes slightly, and more substantially reduced the maternal birthweight-associated risk of moderately low birthweight among white mothers. There was little effect of maternal birthweight on infant birthweight-specific infant mortality in white mothers; however, black mothers who weighed less than 4 lbs at birth were at significantly increased risk of delivering a normal birthweight infant who subsequently died. Although the risks for the various outcomes associated with low maternal birthweight were not consistently higher in black mothers compared with white mothers, adjustment for prepregnant weight and height had a greater effect in white mothers than in black mothers. We suggest that interventions to reduce the risks for adverse pregnancy outcomes associated with low maternal birthweight should attempt to optimise prepregnant weight and foster child health and growth.
abstract_id: PUBMED:29291410
Effect of frozen/thawed embryo transfer on birthweight, macrosomia, and low birthweight rates in US singleton infants. Background: Singleton infants conceived using assisted reproductive technology have lower average birthweights than naturally conceived infants and are more likely to be born low birthweight (<2500 gr). Lower birthweights are associated with increased infant and child mortality and poor adult health outcomes, including cardiovascular disease, hypertension, and diabetes. Data from registry and single-center studies suggest that frozen/thawed embryo transfer may be associated with larger birthweights. To date, however, a nationwide, full-population study on United States infants born using frozen/thawed embryo transfer has not been reported.
Objectives: The objective of this study was to compare the effect of frozen/thawed vs fresh embryo transfer on birthweight outcomes for singleton, term infants conceived using in vitro fertilization in the United States between 2007 and 2014, including average birthweight and the risks of both macrosomia (>4000 g) and low birthweight (<2500 g).
Study Design: We used data from the Centers for Disease Control and Prevention's National Assisted Reproductive Technology Surveillance System to compare birthweight outcomes of live-born singleton, autologous oocyte, term (37-43 weeks) infants. Generalized linear models for all infants and stratified by infant sex were used to assess the relationship between frozen/thawed embryo transfer and birthweight, in grams. Infertility diagnosis, year of treatment, maternal age, maternal obstetric history, maternal and paternal race, and infant gestational age and sex were included in the models. Missing race data were imputed. The adjusted relative risks for macrosomia and low birthweight were evaluated using multivariable predicted marginal proportions from logistic regression models.
Results: In total, 180,184 singleton, term infants were included, with 55,898 (31.02%) having been conceived from frozen/thawed embryos. Frozen/thawed embryo transfer was associated with, on average, a 142 g increase in birthweight compared with infants born after fresh embryo transfer (P < .001). An interaction between infant sex and embryo transfer type was significant (P < .0001), with frozen/thawed embryo transfer having a larger effect on male infants by 16 g. The adjusted risk of a macrosomic infant was 1.70 times higher (95% confidence interval, 1.64-1.76) following frozen/thawed embryo transfer than fresh embryo transfer. However, adjusted risk of low birthweight following frozen/thawed embryo transfer was 0.52 (95% confidence interval, 0.48-0.56) compared with fresh embryo transfer.
Conclusion: Frozen/thawed embryo transfer, in comparison with fresh embryo transfer, was associated with increased average birthweight in singleton, autologous oocytes, term infants born in the United States, with a significant interaction between frozen/thawed embryo transfer and infant sex. The risk of macrosomia following frozen/thawed embryo transfer was greater than that following fresh embryo transfer, but the risk of low birthweight among frozen/thawed embryo transfer infants was significantly decreased in comparison with fresh embryo transfer infants.
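The reported interaction between embryo-transfer type and infant sex can be pictured with a small linear-predictor sketch. The 142 g average effect and the 16 g sex difference come from the abstract; the split into 134 g (female) and 150 g (male), the baseline, and the male-female term are hypothetical values chosen only so the quoted quantities are reproduced.

# Sketch of how an interaction term in a linear model for birthweight lets the
# frozen/thawed effect differ by infant sex.
INTERCEPT = 3300.0      # grams, hypothetical term-infant baseline
B_MALE = 100.0          # grams, hypothetical male-female difference
B_FROZEN = 134.0        # grams, hypothetical frozen/thawed effect in females
B_FROZEN_X_MALE = 16.0  # grams, reported extra frozen/thawed effect in males

def expected_birthweight(frozen, male):
    return (INTERCEPT
            + B_MALE * male
            + B_FROZEN * frozen
            + B_FROZEN_X_MALE * (frozen and male))

if __name__ == "__main__":
    for frozen in (False, True):
        for male in (False, True):
            kind = "frozen/thawed" if frozen else "fresh"
            sex = "male" if male else "female"
            print(f"{kind:13s} {sex:6s} -> {expected_birthweight(frozen, male):.0f} g")
    # Sex-specific frozen/thawed effects: 134 g (female) vs. 150 g (male)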
abstract_id: PUBMED:37076810
Association between infant birthweight and pelvic floor muscle strength: a population-based cohort study. Background: To assess the relationship between infant birthweight and pelvic floor muscle (PFM) strength in China.
Methods: We performed a retrospective, single-center cohort study of 1575 women delivering vaginally between January 2017 and May 2020. All participants completed pelvic floor examinations within 5-10 weeks after delivery and were evaluated for PFM strength, which was estimated by vaginal pressure. Data were collected from electronic records. We evaluated the association between infant birthweight and vaginal pressure through multivariable-adjusted linear regression analysis. We also performed subgroup analyses stratified by potential confounders.
Results: Vaginal pressure decreased as the quartile of birthweight increased (P for trend < 0.001). Beta coefficients were -5.04 (95% CI -7.98 to -2.1), -5.53 (95% CI -8.5 to -2.57), and -6.07 (95% CI -9.08 to -3.07) for birthweight quartiles 2-4, respectively (P for trend < 0.001), independent of age, postpartum hemorrhage, and the number of vaginal deliveries. In addition, the results of subgroup analyses showed the same patterns across strata.
Conclusions: This study demonstrates that infant birthweight was associated with decreased vaginal pressure in women after vaginal delivery and could be considered a risk factor for decreased PFM strength in the population with vaginal delivery. This association may provide an extra basis for appropriate fetal weight control during pregnancy, and for earlier pelvic floor rehabilitation of postpartum women delivering babies with larger birthweight.
abstract_id: PUBMED:28796908
Maternal and infant factors had a significant impact on birthweight and longitudinal growth in a South African birth cohort. Aim: This birth cohort study investigated longitudinal infant growth and associated factors in a multiethnic population living in a low-resource district surrounding the town of Paarl in South Africa.
Methods: Between March 2012 and October 2014, all mothers attending their second trimester antenatal visit at Paarl Hospital were approached for enrolment. Mother-infant pairs were followed from birth until 12 months of age. Comprehensive socio-demographic, nutritional and psychosocial data were collected at birth, two, six and 12 months. Infant anthropometry was analysed as z-scores for weight and height. Linear regression was used to investigate predictors of birthweight, and linear mixed-effects models were used to investigate predictors of infant growth.
Results: Longitudinal anthropometric data from 792 infants were included: 53% were Black African, 47% were mixed race, and 15% were born preterm. Stunting occurred in 13% of infants at 12 months. Maternal height, antenatal alcohol and tobacco use, ethnicity and socioeconomic status were significant predictors of birthweight. In the adjusted mixed-effects model, birthweight was a significant predictor of growth during the first year of life.
Conclusion: Birthweight was an important predictor of growth trajectory during infancy. Birthweight and growth were influenced by several important modifiable factors.
abstract_id: PUBMED:35265994
Infant Sex-Specific Associations between Prenatal Food Insecurity and Low Birthweight: A Multistate Analysis. Background: Low birthweight is associated with increased risk of neonatal mortality and adverse outcomes among survivors. As maternal sociodemographic factors do not explain all of the risk in low birthweight, exploring exposures occurring during critical periods, such as maternal food insecurity, should be considered from a life course perspective.
Objectives: To explore the association between prenatal food insecurity and low birthweight, as well as whether or not there may be a sex-specific response using a multistate survey.
Methods: Pregnancy Risk Assessment Monitoring System (PRAMS) data of live births from 11 states during 2009-2017 were used, restricting to women with a singleton birth. Food insecurity was determined by a single question in PRAMS, and low birthweight was defined as a birth <2500 g. Multivariable logistic regression was used, stratified by infant sex and adjusted for maternal sociodemographic and prepregnancy health characteristics.
Results: There were n = 50,915 women from 2009 to 2017, with 9.1% experiencing food insecurity. Unadjusted results revealed that food-insecure mothers had an increased odds ratio of delivering a low-birthweight baby (OR: 1.38; 95% CI: 1.25, 1.53). Adjustment for covariates appeared to explain the association among male infants, whereas magnitudes remained greater among female infants (adjusted OR: 1.13; 95% CI: 0.94, 1.35).
Conclusions: Findings suggest a sex-specific response to prenatal food insecurity, particularly among female offspring. Future studies are warranted with more precise measures of food insecurity and to understand the difference by infant sex.
abstract_id: PUBMED:29760105
Triplets, birthweight, and handedness. The mechanisms behind handedness formation in humans are still poorly understood. Very low birthweight is associated with higher odds of left-handedness, but whether this is due to low birthweight itself or premature birth is unknown. Handedness has also been linked to development, but the role of birthweight behind this association is unclear. Knowing that birthweight is lower in multiple births, triplets being about 1.5 kg lighter in comparison with singletons, and that multiples have a higher prevalence of left-handedness than singletons, we studied the association between birthweight and handedness in two large samples consisting exclusively of triplets from Japan (n = 1,305) and the Netherlands (n = 947). In both samples, left-handers had significantly lower birthweight (Japanese mean = 1,599 g [95% confidence interval (CI): 1,526-1,672 g]; Dutch mean = 1,794 g [95% CI: 1,709-1,879 g]) compared with right-handers (Japanese mean = 1,727 g [95% CI: 1,699-1,755 g]; Dutch mean = 1,903 g [95% CI: 1,867-1,938 g]). Within-family and between-family analyses both suggested that left-handedness is associated with lower birthweight, also when fully controlling for gestational age. Left-handers also had significantly delayed motor development and smaller infant head circumference compared with right-handers, but these associations diluted and became nonsignificant when controlling for birthweight. Our study in triplets provides evidence for the link between low birthweight and left-handedness. Our results also suggest that developmental differences between left- and right-handers are due to a shared etiology associated with low birthweight.
Answer: The association between incarceration during pregnancy and infant birthweight has been examined in several studies, with varying findings. One study found that after controlling for confounders, infant birthweights among women incarcerated during pregnancy were not significantly different from those of women never incarcerated. However, infant birthweights were significantly worse among women incarcerated at a time other than during pregnancy than among never-incarcerated women and women incarcerated during pregnancy, suggesting that certain aspects of the prison environment, such as shelter and food, may be health-promoting for high-risk pregnant women (PUBMED:9314809).
Another study aimed to examine the sensitivity of the relationship between infant birthweight and exposure to prison during pregnancy to the method used to measure and model this exposure. The study found that the association between incarceration during pregnancy and infant birthweight appears strongest among infants born to women incarcerated during the first trimester and very weak to non-existent among infants born to women incarcerated after the first trimester. The method used to measure and model exposure to prison during pregnancy had a distinct effect on the shape of the relationship between exposure and outcome (PUBMED:20422272).
These findings indicate that the relationship between incarceration during pregnancy and infant birthweight is complex and may be influenced by the timing of incarceration during pregnancy as well as the conditions within the prison environment. It is important to consider these factors when evaluating the health outcomes of infants born to incarcerated mothers. |
Instruction: Does the incorporation of a virtual simulator improve abilities in endoscopic surgery acquired with an inanimate simulator?
Abstracts:
abstract_id: PUBMED:19616204
Does the incorporation of a virtual simulator improve abilities in endoscopic surgery acquired with an inanimate simulator? Introduction: Training courses in surgical endoscopy for surgeons are becoming increasingly common in training centres. In addition to adequately planning activities, simulation systems are used to improve learning and monitor progression. Inanimate models and virtual reality programs increase psychomotor skills and allow assessment of performance. In this work we tried to improve our training program, based mainly on training boxes, by introducing a virtual simulator.
Material And Method: Seventeen surgical residents with basic training were chosen as the control group. Two additional groups were established: group A received 6 hours of training with the inanimate simulator, and group B received the same training plus 4 hours of practice with LapSim. Exercises in the endotrainer and virtual simulator involving moving/replacing objects, cutting, and suturing/knotting were planned. The end-point was time (mean with 95% confidence interval) for every exercise in the box trainer, before and after the training period.
Results: Movement exercises: time in the control group was 223.6 s, in group A 103.7 s, and in group B 89.9 s (control vs. A, P < 0.05). Cutting exercises: time in the control group was 317.7 s, in group A 232.8 s, and in group B 163.6 s (control vs. B, P < 0.05). In the suture/knot exercise, everyone was able to carry out a stitch after the training period; time in the control group was 518.4 s, in group A 309.4 s (P < 0.05), and in group B 189.5 s (control vs. A, P < 0.05).
Conclusions: Training in inanimate boxes was able to improve the skills of students, particularly for moving and suture/knots. The incorporation of a virtual simulator increased the learning capabilities, mainly in cutting exercises.
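A minimal sketch of the end-point computation described above (mean completion time with a 95% confidence interval per exercise) plus a simple between-group contrast; the timing data are made up, since the abstract reports only group means.

import math
from statistics import mean, stdev

def mean_with_ci(times, z=1.96):
    """Mean completion time with an approximate 95% confidence interval."""
    m = mean(times)
    half_width = z * stdev(times) / math.sqrt(len(times))
    return m, m - half_width, m + half_width

def welch_t(a, b):
    """Welch t statistic for comparing two groups with unequal variances."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / math.sqrt(va + vb)

if __name__ == "__main__":
    # Made-up suture/knot times in seconds (the study's raw data are not reported).
    control = [540, 495, 530, 505, 520, 500]
    group_b = [195, 180, 200, 185, 190, 188]
    for name, times in (("control", control), ("group B", group_b)):
        m, lo, hi = mean_with_ci(times)
        print(f"{name}: mean {m:.1f} s (95% CI {lo:.1f}-{hi:.1f})")
    print(f"Welch t (control vs. B): {welch_t(control, group_b):.1f}")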
abstract_id: PUBMED:25840893
Natural orifice translumenal endoscopic surgery (NOTES): emerging trends and specifications for a virtual simulator. Introduction And Study Aim: A virtual translumenal endoscopic surgical trainer (VTEST) is being developed to accelerate the development of natural orifice translumenal endoscopic surgery (NOTES) procedures and devices in a safe and risk-free environment. For a rapidly developing field such as NOTES, a needs analysis must be conducted regularly to discover emerging research trends and areas of potential high impact for a virtual simulator. This paper presents a survey-based study which follows a similar study conducted by this group in 2011 (Sankaranarayanan et al. in Surg Endosc 27:1607-1616, 2013).
Methods: A 32-point questionnaire was distributed at the 2012 Natural Orifice Surgery Consortium for Assessment and Research annual meeting. These data were subsequently augmented by an identical online survey, targeted at the members of the American Society for Gastrointestinal Endoscopy and the Society of American Gastrointestinal and Endoscopic Surgeons, and analyzed.
Results: Twenty-eight NOTES experts participated in the 2012 study. The cholecystectomy (CE) procedure remained the most commonly performed NOTES technique, with 18 positive responses (64%). In contrast to 2011, the popularity of the NOTES appendectomy (AE) was significantly lower, with only 2 (7%) instances (CE vs. AE, p < 0.001), while the number of peroral endoscopic myotomy (POEM, PE) cases had increased significantly, with 11 (39%) positive responses (PE vs. AE, p = 0.013). A strong preference toward hybrid rather than pure NOTES techniques (82 vs. 11%, p < 0.001) was also expressed. Other responses were similar to those in the 2011 study, with the VTEST™ utility in developing and testing new techniques and instruments ranked particularly high.
Conclusion: Based on the results of this study, a decision was made to focus exclusively on the transvaginal hybrid NOTES cholecystectomy procedure, including both rigid and flexible scope techniques. The importance of developing a virtual NOTES simulator was reaffirmed, with POEM identified as a promising candidate for future simulator development.
abstract_id: PUBMED:27314591
NOViSE: a virtual natural orifice transluminal endoscopic surgery simulator. Purpose: Natural orifice transluminal endoscopic surgery (NOTES) is a novel technique in minimally invasive surgery whereby a flexible endoscope is inserted via a natural orifice to gain access to the abdominal cavity, leaving no external scars. This innovative use of flexible endoscopy creates many new challenges and is associated with a steep learning curve for clinicians.
Methods: We developed NOViSE, the first force-feedback-enabled virtual reality simulator for NOTES training supporting a flexible endoscope. The haptic device is custom-built, and the behaviour of the virtual flexible endoscope is based on an established theoretical framework, the Cosserat theory of elastic rods.
Results: We present the application of NOViSE to the simulation of a hybrid trans-gastric cholecystectomy procedure. Preliminary results of face, content and construct validation have previously shown that NOViSE delivers the required level of realism for training of endoscopic manipulation skills specific to NOTES.
Conclusions: VR simulation of NOTES procedures can contribute to surgical training and improve the educational experience without putting patients at risk, raising ethical issues or requiring expensive animal or cadaver facilities. In the context of an experimental technique, NOViSE could potentially facilitate NOTES development and contribute to its wider use by keeping practitioners up to date with this novel surgical technique. NOViSE is a first prototype, and the initial results indicate that it provides promising foundations for further development.
abstract_id: PUBMED:28039649
OR fire virtual training simulator: design and face validity. Background: The Virtual Electrosurgical Skill Trainer is a tool for training surgeons in the safe operation of electrosurgery tools in both open and minimally invasive surgery. This training includes a dedicated team-training module that focuses on operating room (OR) fire prevention and response. The module was developed to allow trainees, practicing surgeons, anesthesiologists, and nurses to interact with a virtual OR environment, which includes anesthesia apparatus, electrosurgical equipment, a virtual patient, and a fire extinguisher. Wearing a head-mounted display, participants must correctly identify the "fire triangle" elements and then successfully contain an OR fire. Within these virtual reality scenarios, trainees learn to react appropriately to the simulated emergency. A study targeted at establishing the face validity of the virtual OR fire simulator was undertaken at the 2015 Society of American Gastrointestinal and Endoscopic Surgeons conference.
Methods: Forty-nine subjects with varying experience participated in this Institutional Review Board-approved study. The subjects were asked to complete the OR fire training/prevention sequence in the VEST simulator. Subjects were then asked to answer a subjective preference questionnaire consisting of sixteen questions, focused on the usefulness and fidelity of the simulator.
Results: On a 5-point scale, 12 of 13 questions were rated at a mean of 3 or greater (92%). Five questions were rated above 4 (38%), particularly those focusing on the simulator effectiveness and its usefulness in OR fire safety training. A total of 33 of the 49 participants (67%) chose the virtual OR fire trainer over the traditional training methods such as a textbook or an animal model.
Conclusions: Training for OR fire emergencies in fully immersive VR environments, such as the VEST trainer, may be the ideal training modality. The face validity of the OR fire training module of the VEST simulator was successfully established on many aspects of the simulation.
abstract_id: PUBMED:29184667
Design and evaluation of an augmented reality simulator using leap motion. Advances in virtual and augmented reality (AR) are having an impact on the medical field in areas such as surgical simulation. Improvements to surgical simulation will provide students and residents with additional training and evaluation methods. This is particularly important for procedures such as the endoscopic third ventriculostomy (ETV), which residents perform regularly. Simulators such as NeuroTouch have been designed to aid in training associated with this procedure. The authors designed an affordable and easily accessible ETV simulator and compared it with the existing NeuroTouch for usability and training effectiveness. This simulator was developed using Unity, Vuforia and the leap motion (LM) for an AR environment. The participants, 16 novices and two expert neurosurgeons, were asked to complete 40 targeting tasks. Participants used the NeuroTouch tool or a virtual hand controlled by the LM to select the position and orientation for these tasks. The length of time to complete each task was recorded and the trajectory log files were used to calculate performance. The resulting data on the novices' and experts' speed and accuracy are compared, and the objective performance of training is discussed in terms of speed and targeting accuracy for each system.
abstract_id: PUBMED:27729115
3D-printed pediatric endoscopic ear surgery simulator for surgical training. Introduction: Surgical simulators are designed to improve operative skills and patient safety. Transcanal Endoscopic Ear Surgery (TEES) is a relatively new surgical approach with a slow learning curve due to one-handed dissection. A reusable and customizable 3-dimensional (3D)-printed endoscopic ear surgery simulator may facilitate the development of surgical skills with high fidelity and low cost. Herein, we aim to design, fabricate, and test a low-cost and reusable 3D-printed TEES simulator.
Methods: The TEES simulator was designed in computer-aided design (CAD) software using anatomic measurements taken from anthropometric studies. Cross sections from external auditory canal samples were traced as vectors and serially combined into a mesh construct. A modified tympanic cavity with a modular testing platform for simulator tasks was incorporated. Components were fabricated using calcium sulfate hemihydrate powder and multiple colored infiltrants via a commercial inkjet 3D-printing service.
Results: All components of a left-sided ear were printed to scale. Six right-handed trainees completed three trials each. Mean trial time (n = 3) ranged from 23.03 to 62.77 s using the dominant hand for all dissection. Statistically significant differences between first and last completion time with the dominant hand (p < 0.05) and average completion time for junior and senior residents (p < 0.05) suggest construct validity.
Conclusions: A 3D-printed simulator is feasible for TEES simulation. Otolaryngology training programs with access to a 3D printer may readily fabricate a TEES simulator, resulting in inexpensive yet high-fidelity surgical simulation.
abstract_id: PUBMED:35502075
Can virtual reality surgical simulator improve the function of the non-dominant hand in ophthalmic surgeons? Purpose: Phacoemulsification surgery requires the use of both hands; however, some surgeons may not be comfortable using their non-dominant hand, especially in critical steps such as chopping. This work aims at assessing whether a virtual reality simulator can help cataract surgeons train their non-dominant hand using the capsulorhexis module.
Methods: This was a prospective observational study including thirty ophthalmic surgeons; none of them had previous training on the EyeSi surgical simulator. Twenty-three were experienced, and seven were intermediate surgeons. Surgeons were asked to perform capsulorhexis three times using their dominant hand and then using their non-dominant hand. A performance score based on efficiency, target achievement, instrument handling, and tissue treatment was calculated by the simulator.
Results: A significant improvement in the score of surgeons using their non-dominant hand from the first trial (69.57 ± 18.9) to the third trial (84.9 ± 9.2) (P < 0.001) was found, whereas such improvement was not noted with the dominant hand (P = 0.12). Twenty-six surgeons managed to reach 90% of the mean score achieved by dominant hand by using their non-dominant hand, 11 (36.7%) from the first trial, seven (23.3%) from the second, and eight (26.7%) from the third.
Conclusion: Cataract surgeons showed significant improvement in the scores of their non-dominant hands with simulator training. Thus, it is possible to safely train non-dominant hands for difficult tasks away from the operating room, which would be a fruitful addition to residency training programs.
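The first-to-third-trial improvement reported above is a within-surgeon (paired) comparison; the sketch below shows that computation on made-up per-surgeon scores, since only group means and standard deviations are given in the abstract.

import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t statistic for within-surgeon score change."""
    diffs = [b - a for a, b in zip(before, after)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# Made-up non-dominant-hand scores for eight surgeons; the abstract reports
# only group means (69.57 ± 18.9 on trial 1, 84.9 ± 9.2 on trial 3).
trial1 = [55, 72, 80, 60, 90, 68, 75, 62]
trial3 = [78, 85, 92, 80, 95, 83, 88, 79]
print(f"Mean improvement: {mean(trial3) - mean(trial1):.1f} points")
print(f"Paired t statistic: {paired_t(trial1, trial3):.2f}")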
abstract_id: PUBMED:24043916
Assessment of skills using a virtual reality temporal bone surgery simulator. Surgery on the temporal bone is technically challenging due to its complex anatomy. Precise anatomical dissection of the human temporal bone is essential and is fundamental for middle ear surgery. We assessed the possible application of a virtual reality temporal bone surgery simulator to the education of ear surgeons. Seventeen ENT physicians with different levels of surgical training and 20 medical students performed an antrotomy with a computer-based virtual temporal bone surgery simulator. The ease, accuracy and timing of the simulated temporal bone surgery were assessed using the automatic assessment software provided by the simulator device and additionally with a modified Final Product Analysis Scale. Trained ENT surgeons, physicians without temporal bone surgical training and medical students were all able to perform the antrotomy. However, the highly trained ENT surgeons were able to complete the surgery in approximately half the time, with better handling and accuracy as assessed by the significant reduction in injury to important middle ear structures. Trained ENT surgeons achieved significantly higher scores using both dissection analysis methods. Surprisingly, there were no significant differences in the results between medical students and physicians without experience in ear surgery. The virtual temporal bone training system can stratify users of known levels of experience. This system can be used not only to improve the surgical skills of trained ENT surgeons for more successful and injury-free surgeries, but also to train inexperienced physicians/medical students in developing their surgical skills for the ear.
abstract_id: PUBMED:33100816
Fabrication and Validation of a Cost-Effective Upper Endoscopy Simulator. Purpose: Beginning with the graduating class of 2018, the American Board of Surgery (ABS) requires that residents complete the ABS Flexible Endoscopy Curriculum, Fundamentals of Endoscopic Surgery (FES). This curriculum includes both didactic and simulator training. In the ideal setting residents gain proficiency using simulation prior to performing endoscopies in the clinical setting. This new requirement creates an increased demand for endoscopic simulators in all General Surgery residency programs. Due to the cost prohibitive nature of virtual reality simulators an economic alternative is needed.
Methods: A mechanical simulator was created from inexpensive items easily acquired at a hardware store and in the hospital. The total cost of the simulator was approximately $120 USD. To validate the simulator, experienced endoscopists completed a training session with the device. A seven-question Likert scale survey (1 - strongly disagree to 5 - strongly agree) completed after the session evaluated the simulated experience versus live upper endoscopies and the device's ability to meet the goals of the FES curriculum.
Results: Eight proficient endoscopists completed the training session and survey and agreed that the device closely replicated live colonoscopies and would meet all training requirements in the FES curriculum. Mean responses to all seven survey questions ranged from 3.8-4.4.
Conclusion: This device is a cost-effective method for simulating live upper endoscopies and is appropriate for use in FES training.
abstract_id: PUBMED:38002426
Implementation of a Virtual Reality Based Digital-Twin Robotic Minimally Invasive Surgery Simulator. The rapid development of computers and robots has seen robotic minimally invasive surgery (RMIS) gradually enter the public eye. RMIS can effectively eliminate the hand vibrations of surgeons and further reduce wounds and bleeding. However, suitable RMIS and virtual reality-based digital-twin surgery trainers are still in the early stages of development. Extensive training is required for surgeons to adapt to different operating modes compared to traditional MIS. A virtual-reality-based digital-twin robotic minimally invasive surgery (VRDT-RMIS) simulator was developed in this study, and its effectiveness is described. Twenty-five volunteers were divided into two groups for the experiment, the Expert Group and the Novice Group. The use of the VRDT-RMIS simulator for face, content, and structural validation training, including the peg transfer module and the soft tissue cutting module, was evaluated. Through subjective and objective evaluations, the potential roles of vision and haptics in robot surgery training were explored. The simulator can effectively distinguish surgical skill proficiency between experts and novices.
Answer: Yes, the incorporation of a virtual simulator does improve abilities in endoscopic surgery acquired with an inanimate simulator. A study demonstrated that training in inanimate boxes improved the skills of students, particularly for moving and suture/knots. The addition of a virtual simulator, such as the LapSim, further increased the learning capabilities, mainly in cutting exercises. The results showed that the time taken to complete movement exercises and cutting exercises was significantly reduced in the group that received additional practice with the virtual simulator compared to the control group and the group that only trained with the inanimate simulator (PUBMED:19616204). |
Instruction: Does the condition of the mouth and teeth affect the ability to eat certain foods, nutrient and dietary intake and nutritional status amongst older people?
Abstracts:
abstract_id: PUBMED:26304822
Parent/Adolescent Weight Status Concordance and Parent Feeding Practices. Background: Previous studies have examined the independent influence of mother's weight status or child's weight status on parents' use of specific feeding practices (ie, food restriction, pressure-to-eat). However, studies have not examined the mutual influence of parents' and adolescents' weight status on parents' feeding practices. This study examines the relationship between parent and adolescent weight status concordance and discordance and parent feeding practices.
Methods: Data from 2 linked population-based studies, Eating and Activity in Teens (EAT) 2010 and Families and Eating and Activity in Teens (F-EAT), were used for cross-sectional analysis. Parents (n = 3252; 63% female; mean age 42.6 years) and adolescents (n = 2153; 54% female; mean age 14.4 years) were socioeconomically and racially/ethnically diverse. Anthropometric assessments and surveys were completed at school by adolescents, and surveys were completed at home by parents.
Results: Parents used the highest levels of pressure-to-eat feeding practices when parents and adolescents were both nonoverweight compared with all other combinations of concordant and discordant parent/adolescent weight status categories. Additionally, parents used the highest levels of food restriction when parents and adolescents were both overweight/obese compared with all other combinations of concordant and discordant parent/adolescent weight status categories. Sensitivity analyses with 2-parent households revealed similar patterns.
Conclusions: Results suggest that parents use feeding practices in response to both their adolescents' and their own weight status. Results may inform health care providers and public health interventionists about which parent/adolescent dyads are at highest risk for experiencing food restriction or pressure-to-eat parent feeding practices in the home environment and whom to target in interventions.
abstract_id: PUBMED:3283920
The effect of bearing tumours on the ability of mice to reject bone marrow transplants. To examine the possible relationship between the cells mediating resistance to tumour cells and those mediating rejection of foreign bone marrow transplants (BMT), the effect of tumour-bearing on BMT rejection has been measured by means of the spleen colony assay. Moderate doses (less than 5.0 x 10^6) of bone marrow cells from DBA/2 strain mice, which normally produce few haemopoietic spleen colonies in gamma-irradiated (950R) CBA/J strain mice, gave numerous (confluent) colonies when given soon after injection of Ehrlich ascites tumour (EAT) cells. Transfusion of [3H]UdR-labelled DBA/2 bone marrow cells demonstrated that the increased spleen colony formation in tumour-bearing mice was not due simply to changes in the total number of injected cells homing to the spleen. Injection of EAT ascites fluid alone, given to CBA/J mice before 950R + BMT, transiently reduced spleen colony development, the effect being more marked when fluid from older tumours was used. Supernatant fluid from EAT cells grown in vitro also depressed growth of BMT in vivo. The results reveal two processes in progress in mice bearing an ascites tumour: (1) an early reduction in the natural resistance to BMT allowing successful grafting and spleen colony formation, and (2) a progressive production by the tumour cells of short-acting soluble factors tending to suppress the proliferation of colony forming bone marrow cells in the transplant. The effect of the tumour-bearing state in weakening the natural resistance to foreign BMT strongly suggests that both tumour and foreign marrow graft resistance are mediated by the same or closely related effector cells.
abstract_id: PUBMED:2460217
Intracellular phosphoribosyl diphosphate level and glucose carrier in Ehrlich ascites tumour cells during tumour growth and diabetes in tumour-bearing mice. The ability of Ehrlich ascites tumour (EAT) cells in mice to take up glucose as well as the density of glucose carriers on the cells increased progressively during the course of tumour development. At the same time, as the rate of uptake rose, the intracellular phosphoribosyl diphosphate (PRPP) levels dropped in response to the decrease in serum glucose. Hyperglycaemia induced in the host by alloxan or streptozotocin administration increased the serum glucose concentrations and intracellular PRPP levels but decreased the density of glucose carriers of the cells, whereas insulin administration reversed this condition. The physiological significance of these observations is discussed.
abstract_id: PUBMED:29016744
Effects of dapagliflozin on human epicardial adipose tissue: modulation of insulin resistance, inflammatory chemokine production, and differentiation ability. Aims: In patients with cardiovascular disease, epicardial adipose tissue (EAT) is characterized by insulin resistance, high pro-inflammatory chemokines, and low differentiation ability. As dapagliflozin reduces body fat and cardiovascular events in diabetic patients, we would like to know its effect on EAT and subcutaneous adipose tissue (SAT).
Methods And Results: Adipose samples were obtained from 52 patients undergoing heart surgery. Sodium-glucose cotransporter 2 (SGLT2) expression was determined by real-time polymerase chain reaction (n = 20), western blot, and immunohistochemistry. Fat explants (n = 21) were treated with dapagliflozin and/or insulin and glucose transporters expression measured. Glucose, free fatty acid, and adipokine levels (by array) were measured in the EAT secretomes, which were then tested on human coronary endothelial cells using wound healing assays. Glucose uptake was also measured using the fluorescent glucose analogue (6NBDG) in differentiated stromal vascular cells (SVCs) from the fat pads (n = 11). Finally, dapagliflozin-induced adipocyte differentiation was assessed from the levels of fat droplets (AdipoRed staining) and of perilipin. SGLT2 was expressed in EAT. Dapagliflozin increased glucose uptake (20.95 ± 4.4 mg/dL vs. 12.97 ± 4.1 mg/dL; P < 0.001) and glucose transporter type 4 (2.09 ± 0.3 fold change; P < 0.01) in EAT. Moreover, dapagliflozin reduced the secretion levels of chemokines and benefited wound healing in endothelial cells (0.21 ± 0.05 vs. 0.38 ± 0.08 open wound; P < 0.05). Finally, chronic treatment with dapagliflozin improved the differentiation of SVC, confirmed by AdipoRed staining [539 ± 142 arbitrary units (a.u.) vs. 473 ± 136 a.u.; P < 0.01] and perilipin expression levels (121 ± 10 vs. 84 ± 11 a.u.).
Conclusions: Dapagliflozin increased glucose uptake, reduced the secretion of pro-inflammatory chemokines (with a beneficial effect on the healing of human coronary artery endothelial cells), and improved the differentiation of EAT cells. These results suggest a new protective pathway for this drug on EAT from patients with cardiovascular disease.
abstract_id: PUBMED:6696915
Hydroxyurea treatment does not prevent initiation of DNA synthesis in Ehrlich ascites tumour cells and leads to the accumulation of short DNA fragments containing the replication origins. The ability of EAT cells to initiate DNA synthesis in the presence of high doses of hydroxyurea was examined using the recently developed method for crosslinking DNA in vivo. Since crosslinking blocks elongation but has little effect on initiation (Russev and Vassilev (1982) J. Mol. Biol. 161, 77-87), this approach permits a separate study of the two stages of DNA replication. We found that hydroxyurea did not greatly affect the initiation of DNA replication but strongly inhibited the elongation of the already initiated new DNA chains. This resulted in the formation of short fragments enriched in sequences synthesized at and around the sites where DNA initiation began. These fragments were not ligated to the high molecular weight chromosomal DNA and could be released under denaturing conditions in single-stranded form. The reassociation and electrophoretic analysis showed that they contained interspersed DNA sequences about 200 nucleotides long, repeated approximately 10^4 times per haploid genome, that probably served as replication origins.
abstract_id: PUBMED:37010279
Generation of a Mouse Spontaneous Autoimmune Thyroiditis Model. In recent years, Hashimoto's thyroiditis (HT) has become the most common autoimmune thyroid disease. It is characterized by lymphocyte infiltration and the detection of specific serum autoantibodies. Although the underlying mechanism is still not clear, the risk of Hashimoto's thyroiditis is related to genetic and environmental factors. At present, there are several types of models of autoimmune thyroiditis, including experimental autoimmune thyroiditis (EAT) and spontaneous autoimmune thyroiditis (SAT). EAT in mice is a common model for HT, induced by immunization with lipopolysaccharide (LPS) combined with thyroglobulin (Tg) or supplemented with complete Freund's adjuvant (CFA). The EAT model has been established in many strains of mice. However, disease progression is more closely associated with the Tg antibody response, which may vary between experiments. SAT is also widely used in the study of HT in the NOD.H-2h4 mouse. The NOD.H-2h4 mouse is a strain obtained by crossing the nonobese diabetic (NOD) mouse with the B10.A(4R), and it develops HT at significant rates with or without iodine feeding. During induction, the NOD.H-2h4 mouse has a high level of TgAb accompanied by lymphocyte infiltration of the thyroid follicular tissue. However, few studies have comprehensively evaluated the pathological process in this model during iodine induction. A SAT mouse model for HT research is established in this study, and the pathological changes are evaluated after a long period of iodine induction. Through this model, researchers can better understand the pathological development of HT and screen new treatment methods for HT.
abstract_id: PUBMED:2303318
Inhibition of Ehrlich ascites tumor in vivo by PAF-antagonists. Several lines of evidence indicate that PAF modulates the inflammatory and immune responses, and that tumors may inhibit both these processes. In the present study we analysed the effect of PAF antagonists on the growth of Ehrlich Ascites Tumor (EAT) in vivo. Mice were inoculated intraperitoneally with 1 x 10^3 EAT cells and the tumor growth evaluated by counting the number of peritoneal cells 1, 6 and 10 days after tumor implantation. BN 52021 was administered intraperitoneally, intravenously or subcutaneously once or twice a day, at 1.0, 2.5, 5.0 and 20.0 mg/kg. Control animals received 0.1 ml of the vehicle in the same schedule. It was found that i.p. and i.v. administration of BN 52021 (5 mg/kg, twice a day) significantly inhibited EAT growth (80.8% and 56.0% respectively). Other routes and doses were less effective. Another PAF antagonist, SRI 63441 (5 mg/kg, i.p., twice a day) also inhibited EAT growth (80.4%). BN 52021 added to EAT cells in culture, at concentrations of 10^-3 and 10^-4 M, did not affect the viability and proliferation of tumor cells. In an attempt to understand the mechanism of this inhibition, we analyzed the peritoneal macrophages for spreading ability and H2O2 release. It was found that 24 h after tumor implantation there was an increase in the spreading ability of peritoneal macrophages (75%) and that, as the tumor grew, the spreading index fell to control levels (less than 10%). In BN 52021-treated animals (5 mg/kg, twice a day), the spreading remained elevated (50-60%) at all the times examined. Release of H2O2, measured by horseradish peroxidase-phenol red oxidation, was below detectable levels throughout tumor growth. (ABSTRACT TRUNCATED AT 250 WORDS)
abstract_id: PUBMED:30732660
The intergenerational transmission of family meal practices: a mixed-methods study of parents of young children. Objective: The current mixed-methods study explored qualitative accounts of prior childhood experiences and current contextual factors around family meals across three quantitatively informed categories of family meal frequency patterns from adolescence to parenthood: (i) 'maintainers' of family meals across generations; (ii) 'starters' of family meals in the next generation; and (iii) 'inconsistent' family meal patterns across generations.
Design: Quantitative survey data collected as part of the first (1998-1999) and fourth (2015-2016) waves of the longitudinal Project EAT (Eating and Activity in Adolescents and Young Adults) study and qualitative interviews conducted with a subset (n = 40) of Project EAT parent participants in 2016-2017.
Setting: Surveys were completed in school (Wave 1) and online (Wave 4); qualitative interviews were completed in-person or over the telephone. Participants: Parents of children of pre-school age (n = 40) who had also completed Project EAT surveys at Wave 1 and Wave 4.
Results: Findings revealed salient variation within each overarching theme around family meal influences ('early childhood experiences', 'influence of partner', 'household skills' and 'family priorities') across the three intergenerational family meal patterns, in which 'maintainers' had numerous influences that supported the practice of family meals; 'starters' experienced some supports and some challenges; and 'inconsistents' experienced many barriers to making family meals a regular practice.
Conclusions: Family meal interventions should address differences in cooking and planning skills, aim to reach all adults in the home, and seek to help parents who did not eat family meals as a child develop an understanding of how and why they might start this tradition with their family.
abstract_id: PUBMED:11063916
Mitochondrial glutathione depletion by glutamine in growing tumor cells. The effect of L-glutamine (Gln) on mitochondrial glutathione (mtGSH) levels in tumor cells was studied in vivo in Ehrlich ascites tumor (EAT)-bearing mice. Tumor growth was similar in mice fed a Gln-enriched diet (GED; where 30% of the total dietary nitrogen was from Gln) or a nutritionally complete elemental diet (SD). As compared with non-tumor-bearing mice, tumor growth caused a decrease of blood Gln levels in mice fed an SD but not in those fed a GED. Tumor cells in mice fed a GED showed higher glutaminase and lower Gln synthetase activities than did cells isolated from mice fed an SD. Cytosolic glutamate concentration was 2-fold higher in tumor cells from mice fed a GED (approximately 4 mM) than in those fed an SD. This increase in glutamate content inhibited GSH uptake by tumor mitochondria and led to a selective depletion of mitochondrial GSH (mtGSH) content (not found in mitochondria of normal cells such as lymphocytes or hepatocytes) to approximately 57% of the level found in tumor mitochondria of mice fed an SD. In tumor cells of mice fed a GED, 6-diazo-5-norleucine- or L-glutamate-gamma-hydrazine-induced inhibition of glutaminase activity decreased cytosolic glutamate content and restored GSH uptake by mitochondria to the rate found in EAT cells of mice fed an SD. The partial loss of mtGSH elicited by Gln did not affect generation of reactive oxygen intermediates (ROIs) or mitochondrial functions (e.g., intracellular peroxide levels, superoxide (O2·-) generation, mitochondrial membrane potential, mitochondrial size, adenosine triphosphate and adenosine diphosphate contents, and oxygen consumption were found similar in tumor cells isolated from mice fed an SD or a GED); however, mitochondrial production of ROIs upon TNF-alpha stimulation was increased. Our results demonstrate that glutamate derived from glutamine promotes an inhibition of GSH transport into mitochondria, which may render tumor cells more susceptible to oxidative stress-induced mediators.
abstract_id: PUBMED:23925943
Effect of flavonoids and hyperthermal intraperitoneal chemotherapy on tumour growth and micronucleus induction in mouse tumour model. Hyperthermia enhanced the clastogenicity of alkylating agents. We investigated whether quercetin (QU; 3,3',4',5,7-pentahydroxy flavone) or naringenin (NAR) can sensitize Ehrlich ascites tumour (EAT) to cisplatin (CP) hyperthermal intraperitoneal chemotherapy treatment and whether these flavonoids in combination with CP can ameliorate CP-induced micronuclei (MNs) in peripheral blood reticulocytes of mice. QU or NAR were administered to mice 7 and 3 days before implantation of EAT cells, while CP (5 or 10 mg/kg) was injected intraperitoneally to normothermic or hyperthermic-treated mice 3 days after implantation of EAT cells (2 x 10^6). Our study supports the claim that QU or NAR in combined treatment with CP has the potential to inhibit tumour growth in both normothermic and hyperthermic conditions and to attenuate the number of MNs in the peripheral blood reticulocytes of mice under normothermic conditions, but enhanced the clastogenicity of CP under hyperthermal conditions.
Answer: The provided abstracts do not directly address the question of whether the condition of the mouth and teeth affects the ability to eat certain foods, nutrient and dietary intake, and nutritional status among older people. The abstracts focus on various other topics, including parent and adolescent weight status and feeding practices (PUBMED:26304822), the effect of tumors on bone marrow transplant rejection (PUBMED:3283920), glucose uptake in Ehrlich ascites tumor cells (PUBMED:2460217), the effects of dapagliflozin on epicardial adipose tissue (PUBMED:29016744), DNA synthesis in Ehrlich ascites tumor cells (PUBMED:6696915), a mouse model of autoimmune thyroiditis (PUBMED:37010279), the effect of PAF-antagonists on Ehrlich ascites tumor growth (PUBMED:2303318), the intergenerational transmission of family meal practices (PUBMED:30732660), the impact of L-glutamine on mitochondrial glutathione levels in tumor cells (PUBMED:11063916), and the effect of flavonoids and hyperthermal intraperitoneal chemotherapy on tumor growth and micronucleus induction (PUBMED:23925943).
None of these abstracts discuss the impact of oral health on dietary intake or nutritional status in older adults. To answer the question, research specifically examining the relationship between oral health, including the condition of the mouth and teeth, and its impact on eating habits, food choices, nutrient absorption, and overall nutritional health in the elderly population would be required. This information is not provided in the abstracts given. |
Instruction: Routine chest X-ray prior to thyroid surgery: is it always necessary?
Abstracts:
abstract_id: PUBMED:23439441
Routine postoperative chest X-ray is unnecessary following the Nuss procedure for pectus excavatum. Objectives: Pneumothorax is the most common complication after the Nuss procedure for pectus excavatum. The majority of pneumothoraces are small, and the patients have no symptoms. The aim of this study was to evaluate the necessity for routine chest X-ray immediately after surgery.
Methods: Group I consists of 644 patients who were operated on with a Nuss procedure for pectus excavatum between 2001 and 2009 (85% male, median age 16 [range 7-48 years]) at Aarhus University Hospital. The standard procedure included chest X-ray immediately after surgery and before discharge. Group II consists of 294 patients (88% male, median age 16 [range 11-54 years]) who had a Nuss procedure in the period January 2011 to October 2012, where the standard procedure only included chest X-ray before discharge.
Results: In Group I, pneumothorax was found on the chest X-ray obtained immediately after surgery in 333 (52%) patients. Fifteen (4.5%) were treated with chest-tube drainage. Six of these patients had no symptoms, but a 2- to 3-cm pneumothorax, 2 had progression of the pneumothorax and 7 had respiratory symptoms. The median size of those drained was 3 cm (range 2-6 cm). At the normal 6-week control, no pneumothorax remained. Group II: Among the 294 patients, 1 (0.3%) had a chest tube.
Conclusions: Only patients with respiratory symptoms after the Nuss procedure need a chest X-ray. A routine chest X-ray can be limited to the time of discharge where the position of the bar(s) is also checked.
abstract_id: PUBMED:18686829
Is routine preoperative chest X-ray indicated in elderly patients undergoing elective surgery? Background: In our hospital pre-operative chest x-rays (CXR) are routinely requested without prior establishment of any medical indication for patients of 70 or more years of age who are undergoing elective surgery. The aim of this study was to determine if routine preoperative chest x-rays are justifiably indicated for elderly patients undergoing elective surgery in the University of Nigeria Teaching Hospital, Enugu.
Method: One hundred and twenty consecutive patients aged 70 or more years were studied between January 2003 and December 2005. As part of our routine preoperative evaluation, a detailed history and thorough physical examination were carried out with a view to eliciting symptoms and signs that would normally indicate chest X-ray. Pre-operative ECGs were also examined for the presence or absence of abnormalities that could indicate chest X-ray. Preoperative chest X-rays of the 120 patients were also studied and radiological findings noted.
Results: Ninety (75%) out of 120 patients had medical indications for chest X-ray. The remaining 30 (25%) were considered to lack medical indication for chest X-ray. Overall, 105 out of the 120 (84%) patients had abnormal findings on chest X-ray.
Conclusion: Routine preoperative chest X-rays in the elderly patients are worthwhile even without medical indication.
abstract_id: PUBMED:19519965
Is a routine chest X-ray indicated before discharge following paediatric cardiac surgery? Unlabelled: In many paediatric cardiosurgical units, a chest X-ray is routinely performed before discharge. We sought to evaluate the clinical impact of such routine radiographs in the management of children after cardiac surgery. Of 100 consecutive children, a chest X-ray was performed in 71 prior to discharge. Of these, 38 were clinically indicated, while 33 were performed as a routine. Therapeutic changes were instituted on the basis of the X-ray in 4 patients, in all of whom the imaging had been clinically indicated. No therapeutic changes followed those radiographs performed on a routine basis.
Conclusion: Routine chest radiographs can be omitted prior to discharging patients after paediatric heart surgery.
abstract_id: PUBMED:35344742
Are Routine Chest Radiographs After Chest Tube Removal in Thoracic Surgery Patients Necessary? Introduction: The routine use of chest x-ray (CXR) to evaluate the pleural space after chest tube removal is a common practice driven primarily by surgeon preference and institutional protocol. The results of these postpull CXRs frequently lead to additional interventions that serve only to increase health care costs and resource utilization. We investigated the utility of these postpull CXRs in thoracic surgery patients and assessed their effectiveness in predicting the need for tube replacement.
Methods: Single-institution retrospective study comprising thoracic surgery patients requiring postoperative chest tube drainage over a 3-y period. Demographics and surgical characteristics, including surgical approach, procedure, and procedure type, were recorded. Outcomes included postpull CXR findings, interventions resulting from radiographic abnormalities, and the additional health resource utilization incurred by obtaining these studies on asymptomatic patients.
Results: The study included 433 patients. Postpull CXRs were performed in 87.1% of patients, with 33.2% demonstrating an abnormality compared with the prior study. Among these, 65.7% resulted only in repeat imaging and 25.7% resulted in discharge delay. Overall, a total of 13 patients (3%) required chest tube replacement, three during the index hospitalization and the other 10 requiring readmission. Among those requiring chest tube replacement, 75% had normal postpull imaging, and all were symptomatic.
Conclusions: Recurrent pneumothorax after chest tube removal requiring immediate tube reinsertion is relatively rare and does not occur in the absence of symptoms. Our study suggests that routine postpull CXRs have limited clinical utility and can be safely omitted in asymptomatic patients with appropriate clinical observation.
abstract_id: PUBMED:33385243
Are routine chest X-rays following chest tube removal necessary in asymptomatic pediatric patients? Purpose: The purpose of this study was to determine if routine chest X-rays (CXRs) performed after chest tube (CT) removal in pediatric patients provide additional benefit for clinical management compared to observation of symptoms alone.
Methods: A single-center retrospective study was conducted of inpatients, 18 years or younger, who had a CT managed by the pediatric surgery team between July 2017 and May 2019. The study compared two groups: (1) patients who received a post-pull CXR and (2) those who did not. The primary outcome of the study was the need for intervention after CT removal.
Results: 102 patients had 116 CTs and met inclusion criteria; 79 post-pull CXRs were performed; the remaining 37 CT pulls did not have a follow-up CXR. No patients required CT replacement or surgery in the absence of symptoms. Three patients exhibited clinical symptoms that would have prompted intervention regardless of post-pull CXR results. One patient had an intervention guided by post-pull CXR results alone. Meanwhile, another patient had delayed onset of symptoms and intervention. No patients required an intervention in the group that did not have a post-pull CXR.
Conclusion: Chest X-ray after CT removal had a very low yield for changing clinical management of asymptomatic patients. Clinical symptoms predict the need for an intervention.
abstract_id: PUBMED:28905345
Same-day Routine Chest-X Ray After Thoracic Surgery is Not Necessary! Introduction: Performing a routine postoperative chest X-ray (CXR) after general thoracic surgery is daily practice in many thoracic surgery departments. The quality, frequency of pathological findings and the clinical consequences have not been well evaluated. Furthermore, exposure to ionising radiation should be restricted to a minimum and therefore routine practice can be questioned.
Methods: As a hospital standard, each patient was given a routine CXR after opening of the pleura and inserting a chest tube. From October 2015 to March 2016, each postoperative patient with a routine CXR was included in a prospective database, including film quality, pathological findings, clinical and laboratory results and cardiorespiratory monitoring, as well as clinical consequences.
Results: 546 patients were included. Risk factors for postoperative complications were obesity in 50 patients (9.2%), emphysema in 127 patients (23.3%), coagulopathy in 34 patients (6.2%), longer operation time (more than two hours) in 242 patients (44.3%) and previous lung irradiation in 29 (5.3%) of patients. Major lung resections were performed in 191 patients (35.9%). 263 (48.2%) patients had procedures with minimally invasive access. The quality of the X-ray film was insufficient in 8.2% of patients. 90 (16.5%) of CXRs were found to show pathological findings, with a trend for more pathological findings after open surgery (55/283; 19.4%) compared to minimally invasive surgery (35/263; 13.3%) (p = 0.064). 11 (2.0%) patients needed a surgical or clinical intervention during postoperative observation; this corresponds to 12.2% of patients with a pathological finding on CXR. Nine of these 11 patients were clinically symptomatic and only two (0.37%) patients were asymptomatic with a relevant pneumothorax.
Conclusions: Our study cannot support routine postoperative CXR after general thoracic procedures and we believe that restriction to clinically symptomatic cases should be a safe option.
abstract_id: PUBMED:34522394
Local diagnostic reference levels for routine chest X-ray examinations at a public sector hospital in central South Africa. Background: Dose optimisation is a radiation protection guideline recommended by the International Commission on Radiological Protection (ICRP) for adherence to the 'as low as reasonably achievable' (ALARA) principle. Diagnostic reference levels (DRLs) are used to optimise patients' radiation protection for diagnostic and interventional procedures and are particularly useful for frequently performed examinations such as chest X-rays.
Aim: To establish the local diagnostic reference levels (LDRLs) for routine chest X-rays.
Setting: Public sector hospital, Northern Cape province, South Africa.
Methods: Sixty patients referred for chest X-rays fulfilling the inclusion criteria participated in this study. Patients were ≥ 18 years of age and weighed between 60 kg and 80 kg. Consent for participation was obtained. The entrance skin air kerma (ESAK) was measured by using the indirect method recommended by the International Atomic Energy Agency (IAEA). Statistical software (SAS version 9.2) was used to determine the LDRLs for chest X-rays in three different rooms. In two rooms, computed radiography (CR) was used and the other one was a digital radiography (DR) unit. The LDRL values at the research site were compared with various published international values.
Results: LDRLs for chest X-rays were established. The CR LDRL value for the posteroanterior (PA) chest projection was higher than the DR (flat panel detector [FPD]) LDRL value. The LDRLs of the PA chest projections were 0.3 mGy for CR and 0.2 mGy for DR. The lateral (LAT) chest projection LDRL value was 0.8 mGy for both CR and DR (FPD) projections. The resultant LDRL between rooms at the research site was 0.3 mGy for PA and 0.8 mGy for LAT chest projections.
Conclusion: The LDRLs for chest X-rays established at this research site were lower than internationally reported DRLs. We recommend that LDRLs for routine chest X-rays should be repeated every 3 years, according to the ICRP.
Contribution: Currently, no established or published DRL values prescribed by the Directorate of Radiation Control (DRC) are available in South Africa. The LDRLs established for routine chest X-ray examinations at this research site can serve as a guideline for the establishment of DRL values for other anatomical regions at the research site and other radiology departments in the country.
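The abstract does not spell out how the LDRL values were derived from the measured ESAK distributions, and percentile conventions vary (local DRLs are often based on median room doses, whereas national DRLs commonly use the 75th percentile across facilities). Purely as an illustration, with hypothetical ESAK readings rather than the study data, the aggregation step could be sketched in Python as:

import numpy as np

# Hypothetical ESAK readings (mGy) per patient, grouped by X-ray room.
# Real values would come from the indirect IAEA measurement protocol.
esak_by_room = {
    "room_1_CR": np.array([0.28, 0.31, 0.25, 0.35, 0.30]),
    "room_2_CR": np.array([0.33, 0.29, 0.36, 0.27, 0.31]),
    "room_3_DR": np.array([0.19, 0.22, 0.18, 0.24, 0.21]),
}

for room, esak in esak_by_room.items():
    median = np.median(esak)       # typical per-room representative value
    p75 = np.percentile(esak, 75)  # percentile used in many DRL conventions
    print(f"{room}: median = {median:.2f} mGy, 75th percentile = {p75:.2f} mGy")

Which summary statistic is reported as the LDRL depends on the local protocol; the sketch simply shows both.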
abstract_id: PUBMED:37065502
Effectiveness of protective thyroid shield in chest X-ray imaging. Chest X-ray imaging is the most common X-ray imaging method for diagnosing coronavirus disease. The thyroid gland is one of the most radiation-sensitive organs of the body, particularly in infants and children. Therefore, it must be protected during chest X-ray imaging. Yet, because it has benefits and drawbacks, using a thyroid shield as protection during chest X-ray imaging is still up for debate. Therefore, this study aims to clarify the need for using a protective thyroid shield during chest X-ray imaging. This study was performed using different dosimeters (silica beads as a thermoluminescent dosimeter and an optically stimulated luminescence dosimeter) embedded in an adult male ATOM dosimetric phantom. The phantom was irradiated using a portable X-ray machine with and without thyroid shielding. The dosimeter readings indicated that a thyroid shield reduced the radiation dose to the thyroid gland by 69% ± 18% without degrading the obtained radiograph. The use of a protective thyroid shield during chest X-ray imaging is recommended because its benefits outweigh the risks.
abstract_id: PUBMED:7254966
Value of the chest X-ray as a screening test for elective surgery in children. A retrospective study was conducted to assess the value of the chest x-ray as a preoperative screening procedure in pediatric patients. Admissions for elective surgery were compared at two hospitals, one that required routine preoperative chest x-rays and one that did not. Our purpose was to determine the yield of the screening chest x-ray in detecting unknown abnormalities and to determine whether patients who had a preoperative chest x-ray taken experienced fewer anesthetic or postoperative complications than did those who did not. In all, 1,924 cases were studied; in 749 a preoperative chest film was taken. Of those 749 cases, a previously unsuspected abnormality was discovered in 35 (4.7%) patients. Nine (1.2%) of these abnormalities were considered to be clinically significant and three (0.4%) resulted in cancellation of surgery. No differences in anesthetic or postoperative complications were noted between the two groups of patients. It is recommended that the performance of routine preoperative chest x-rays on apparently healthy children be discontinued.
abstract_id: PUBMED:30867907
Agreement between chest ultrasonography and chest X-ray in patients who have undergone thoracic surgery: preliminary results. Background: Chest Ultrasonography (chest US) has shown good sensibility in detecting pneumothorax, pleural effusions and peripheral consolidations and it can be performed bedside.
Objectives: The aim of the study was to analyze agreement between chest US and chest X-ray in patients who have undergone thoracic surgery and discuss cases of discordance.
Methods: Patients undergoing thoracic surgery were retrospectively selected. Patients routinely underwent chest X-ray (CXR) during the first 48 h after surgery. Chest US was routinely performed in all selected patients on the same date as the CXR. Chest US operators were blinded to both reports and images of CXR. Ultrasonographic findings regarding pneumothorax (PNX), subcutaneous emphysema (SCE), lung consolidations (LC), pleural effusions (PE) and hemi-diaphragm position were collected and compared to corresponding CXR findings. Inter-rater agreement between the two techniques was determined by Cohen's kappa-coefficient.
Results: Twenty-four patients were selected. Inter-rater agreement showed a moderate magnitude for PNX (Cohen's Kappa 0.5), a slight/fair magnitude for SCE (Cohen's Kappa 0.21), a fair magnitude for PE (Cohen's Kappa 0.39), no agreement for LCs (Cohen's Kappa 0.06), and high levels of agreement for the position of the hemi-diaphragm (Cohen's Kappa 0.7).
Conclusion: Analysis of agreement between chest X-ray and chest US showed that ultrasonography is able to detect important findings for surgeons. Limitations and advantages have been found for both chest X-ray and chest US. Knowing the limits of each one is important to really justify and optimize the use of ionizing radiations.
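Cohen's kappa, used in the abstract above to express chest US versus CXR agreement, corrects the raw agreement rate for agreement expected by chance. A minimal illustrative sketch (hypothetical paired findings, not the study data) using scikit-learn:

from sklearn.metrics import cohen_kappa_score

# Hypothetical presence/absence (1/0) of pneumothorax in 24 patients,
# as rated independently on chest X-ray and on chest ultrasonography.
cxr_findings = [0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0]
us_findings  = [0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]

kappa = cohen_kappa_score(cxr_findings, us_findings)
# Values around 0.4-0.6 are conventionally read as moderate agreement.
print(f"Cohen's kappa = {kappa:.2f}")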
Answer: The provided abstracts do not directly address the necessity of routine chest X-rays specifically prior to thyroid surgery. However, they do discuss the utility and implications of routine chest X-rays in various other surgical and medical contexts, which can provide some insight into the broader question of when such imaging is warranted.
For instance, studies have shown that routine postoperative chest X-rays may not be necessary in certain cases, such as following the Nuss procedure for pectus excavatum (PUBMED:23439441), paediatric cardiac surgery (PUBMED:19519965), thoracic surgery (PUBMED:35344742, PUBMED:28905345), and chest tube removal in pediatric patients (PUBMED:33385243). These studies suggest that chest X-rays should be reserved for symptomatic patients or those with specific clinical indications rather than used routinely.
Conversely, one study found that routine preoperative chest X-rays in elderly patients undergoing elective surgery were worthwhile even without medical indication, as a high percentage of these patients had abnormal findings (PUBMED:18686829). This suggests that the value of routine chest X-rays may be context-dependent, varying with patient age, type of surgery, and the likelihood of uncovering clinically significant findings.
Given the emphasis on reducing unnecessary radiation exposure (PUBMED:34522394) and the potential benefits of alternative imaging modalities like chest ultrasonography (PUBMED:30867907), the necessity of routine chest X-rays should be carefully considered. The decision to perform a chest X-ray prior to thyroid surgery should ideally be based on individual patient risk factors, clinical indications, and the potential for the X-ray to alter surgical management.
In summary, while the abstracts do not provide a definitive answer to the question of routine chest X-rays specifically before thyroid surgery, they highlight a trend towards more selective and symptom-driven use of chest X-rays in various medical and surgical contexts. This suggests that the necessity of routine chest X-rays prior to thyroid surgery should be evaluated on a case-by-case basis, considering the patient's clinical presentation and the likelihood that the X-ray will impact surgical decision-making. |
Instruction: Do people who consciously attend to their movements have more self-reported knee pain?
Abstracts:
abstract_id: PUBMED:24942479
Do people who consciously attend to their movements have more self-reported knee pain? An exploratory cross-sectional study. Objectives: This study explored the relationship between propensity for conscious control of movement (assessed by the Movement-Specific Reinvestment Scale) and self-reported knee pain.
Design: Cross-sectional study.
Setting: General population.
Subjects: Adults aged 18 to 55 years of age.
Measures: Participants completed the movement-specific reinvestment scale and a self-report questionnaire on knee pain at the same time on one occasion.
Results: Data was collected on 101 adults of whom 34 (33.7%) self-reported knee pain. Mean scores on the conscious motor processing subscale of the movement-specific reinvestment scale, but not the movement self-consciousness subscale, were significantly higher for participants who reported knee pain within the previous year compared with those who did not (mean difference 3.03; t-test 2.66, df = 97, P = 0.009; 95% confidence interval (CI) 0.77 to 5.30). The association between self-reported knee pain and propensity for conscious motor processing was still observed, even after controlling for movement self-consciousness subscale scores, age, gender and body mass index (adjusted odds ratio 1.16, 95% CI 1.04 to 1.30).
Conclusions: Propensity for conscious control of movement may play a role in knee pain.
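The adjusted odds ratio in the abstract above comes from a logistic regression of self-reported knee pain on the conscious motor processing score while controlling for the listed covariates. A minimal sketch of that type of analysis, on synthetic data rather than the study data, using statsmodels:

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 101

# Synthetic stand-ins for the study variables.
df = pd.DataFrame({
    "conscious_motor_processing": rng.integers(6, 37, n),   # subscale score
    "movement_self_consciousness": rng.integers(6, 37, n),
    "age": rng.integers(18, 56, n),
    "female": rng.integers(0, 2, n),
    "bmi": rng.normal(25, 4, n),
})
# Synthetic outcome: knee pain in the previous year (1 = yes).
logit = -6 + 0.15 * df["conscious_motor_processing"] + 0.05 * df["bmi"]
df["knee_pain"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df.drop(columns="knee_pain"))
model = sm.Logit(df["knee_pain"], X).fit(disp=False)

odds_ratios = np.exp(model.params)    # adjusted odds ratios
conf_int = np.exp(model.conf_int())   # 95% confidence intervals on the OR scale
or_cmp = odds_ratios["conscious_motor_processing"]
ci_low, ci_high = conf_int.loc["conscious_motor_processing"]
print(f"adjusted OR per point of conscious motor processing: {or_cmp:.2f} "
      f"(95% CI {ci_low:.2f} to {ci_high:.2f})")

An OR above 1 for the conscious motor processing term, with a confidence interval excluding 1, would correspond to the kind of association reported in the abstract.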
abstract_id: PUBMED:37773113
Current status and influencing factors of self-management in knee joint discomfort among middle-aged and elderly people: a cross-sectional study. Background: This study aims to identify the current status and factors influencing self-management of knee discomfort in middle-aged and elderly people in China.
Methods: A stratified multistage cluster sampling method was used to select participants from communities in China from January 15 to May 31, 2020. A cross-sectional survey was conducted using the general information questionnaire and the Knee Joint Discomfort Self-management Scale. Univariate analysis and a generalized linear model were used to analyze the factors influencing self-management.
Results: The prevalence of knee discomfort was 77%. Moderate to severe discomfort accounted for 30.5%. The average item score of self-management in 9640 participants was 1.98 ± 0.76. The highest- and lowest-scoring dimensions were 'daily life management' and 'information management', respectively. Gender, ethnicity, education level, economic source, chronic disease, knee pain in the past month, and the degree of self-reported knee discomfort were significant predictors of self-management.
Conclusion: The self-management of knee discomfort in middle-aged and elderly people is poor, and the degree of discomfort is a significant predictor. Healthcare providers should consider socioeconomic demographic and clinical characteristics to help these individuals improve their self-management skills. Attention should also be given to improving their ability to access health information and making them aware of disease risks.
abstract_id: PUBMED:35192713
Influence of Severity and Duration of Anterior Knee Pain on Quadriceps Function and Self-Reported Function. Context: Little is known about how the combination of pain severity and duration affects quadriceps function and self-reported function in patients with anterior knee pain (AKP).
Objective: To examine how severity (low [≤3 of 10] versus high [>3 of 10]) and duration (short [<2 years] versus long [>2 years]) of AKP affect quadriceps function and self-reported function.
Design: Cross-sectional study.
Setting: Laboratory.
Patients Or Other Participants: Sixty patients with AKP (mean pain severity = 4 of 10 on the numeric pain rating scale, mean pain duration = 38 months) and 48 healthy control individuals. Patients with AKP were categorized into 3 subdivisions based on pain: (1) severity (low versus high); (2) duration (short versus long); and (3) severity and duration (low and short versus low and long versus high and short versus high and long).
Main Outcome Measure(s): Quadriceps maximal (maximal voluntary isometric contraction) and explosive (rate of torque development) strength, activation (central activation ratio), and endurance (average peak torque) and self-reported function (Lower Extremity Functional Scale score).
Results: Compared with the healthy control group, (1) all AKP subgroups showed less quadriceps maximal strength (P < .005, d ≥ 0.78) and activation (P < .02, d ≥ 0.85), except for the AKP subgroup with low severity and short duration of pain (P > .32); (2) AKP subgroups with either high severity or long duration of pain showed less quadriceps explosive strength (P < .007, d ≥ 0.74) and endurance (P < .003, d ≥ 0.79), but when severity and duration were combined, only the AKP subgroup with high severity and long duration of pain showed less quadriceps explosive strength (P = .006, d = 1.09) and endurance (P = .0004, d = 1.21); and (3) all AKP subgroups showed less self-reported function (P < .0001, d ≥ 3.44).
Conclusions: Clinicians should be aware of the combined effect of severity and duration of pain and incorporate both factors into clinical practice when rehabilitating patients with AKP.
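The d values reported above are Cohen's d effect sizes comparing each AKP subgroup with the healthy control group. As an illustration only, with hypothetical torque values and the pooled-standard-deviation definition:

import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d with a pooled standard deviation."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical quadriceps MVIC torque (Nm/kg): healthy controls vs. an AKP subgroup.
healthy = np.array([3.1, 2.9, 3.4, 3.0, 3.2, 2.8, 3.3])
akp_high_long = np.array([2.4, 2.2, 2.6, 2.1, 2.5, 2.3, 2.0])

print(f"Cohen's d = {cohens_d(healthy, akp_high_long):.2f}")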
abstract_id: PUBMED:32563423
Predicting self-reported functional improvement one year after primary total knee arthroplasty using pre- and postoperative patient-reported outcome measures. Background: Approximately 20% of patients do not perceive functional improvement after a primary total knee arthroplasty (TKA). This study aims to assess which patient-related and clinical determinants at baseline and six months postoperative can predict lack of self-reported functional improvement at 12 months after primary TKA.
Methods: In a retrospective cohort study of 569 patients who received a primary TKA between 2015 and 2018, self-reported functional improvement, measured as ≥7 points increase in Oxford Knee Score (OKS) from baseline to 12 months postoperative, was assessed. Patient characteristics and patient-reported variables at baseline and six months postoperative were entered in a logistic regression model with manual backward elimination.
Results: Incidence of functional improvement in this study was 73%. Preoperative variables were no strong predictors of the outcome. An increase in pain between baseline and six months postoperative was a risk factor for not functionally improving (odds ratio (OR) 1.13 (95% confidence interval (CI) 1.03-1.23)). An improvement in knee pain and function was a protective factor for lacking functional improvement (OR 0.78 (95% CI 0.74-0.82)). The prediction model explained 44% of variance and showed good calibration and discrimination. Sensitivity and specificity were 82% and 76%, respectively.
Conclusions: Using pre- and postoperative variables, a prediction model for self-reported functional improvement one year after TKA was developed. This prediction tool was easy to use at six months postoperative and allowed identification of patients at high risk for not functionally improving one year after TKA. This could facilitate early interventions directed at functional improvement after TKA.
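The reported 82% sensitivity and 76% specificity describe how well the prediction model classifies patients against the ≥7-point OKS improvement threshold. The calculation from a classification table can be sketched as follows (hypothetical predictions, not the study model):

import numpy as np

def sensitivity_specificity(y_true, y_prob, threshold=0.5):
    """Sensitivity and specificity of a probabilistic classifier at a given cut-off."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical outcome (1 = lacking functional improvement) and model probabilities.
y_true = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]
y_prob = [0.8, 0.3, 0.6, 0.2, 0.4, 0.7, 0.1, 0.5, 0.9, 0.2]

sens, spec = sensitivity_specificity(y_true, y_prob)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")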
abstract_id: PUBMED:30962756
A Path Model Analysis of the Causal Relationship between Self-care Agency and Healthy Behavior in Community-dwelling Older People from the GAINA Study. Background: Self-care agency is an important determinant of self-care behavior. The purpose of this study was to identify the causal relationship between self-care agency and healthy behavior, and to construct a conceptual model of healthy behavior among older people living in a rural community.
Methods: This study was conducted as a cross-sectional survey in Hino, a town in western Tottori Prefecture, Japan. Participants who were enrolled in the Good Ageing and Intervention against Nursing Care and Activity Decline (GAINA) study from 2014 to 2018 (467 new participants) were initially investigated. Of 398 participants aged ≥ 65 years, 5 were excluded due to missing data, and thus 393 were analyzed. Nurse researchers conducted face-to-face interviews with participants to check the accuracy of data obtained from a self-administered questionnaire, which included demographic information, physical condition (comorbidities, knee pain, low back pain, and locomotive syndrome), healthy behavior, and self-care agency. Correlations among variables were investigated by Pearson's correlation coefficient analysis, and path analysis was performed to assess causal relationships.
Results: A total of 393 persons (160 men and 233 women) were investigated, ranging in age from 65 to 92 years, with a mean age of 75.1 years (SD: 6.9 years). Path analysis revealed poor fit of a model in which pain and locomotive syndrome were factors inhibiting healthy behavior. When the model included only self-care agency, the indices of model fit were almost satisfactory (Goodness-of-fit index = 0.967, Adjusted goodness-of-fit index = 0.900, Comparative fit index = 0.951, and Root mean square error of approximation = 0.088), and the coefficient of determination (R2) was 0.38. The self-care agency items with the greatest influence on healthy behavior were the ability to "grasp the techniques/tips needed to maintain health," and the ability to "persist with healthy behavior."
Conclusion: Self-care agency can promote healthy behavior among community-dwelling older people. Regardless of physical problems such as pain and locomotive syndrome, older people have the potential to adopt positive healthy behavior if they acquire self-care agency.
abstract_id: PUBMED:29667429
Quadriceps Function, Knee Pain, and Self-Reported Outcomes in Patients With Anterior Cruciate Ligament Reconstruction. Context: Interactions among muscle strength, pain, and self-reported outcomes in patients with anterior cruciate ligament reconstruction (ACLR) are not well understood. Clarifying these interactions is of clinical importance because improving physical and psychological function is thought to optimize outcomes after ACLR.
Objective: To examine the relationships among neuromuscular quadriceps function, pain, self-reported knee function, readiness to return to activity, and emotional response to injury both before and after ACLR.
Design: Descriptive laboratory study.
Patients Or Other Participants: Twenty patients (11 females and 9 males; age = 20.9 ± 4.4 years, height = 172.4 ± 7.5 cm, weight = 76.2 ± 11.8 kg) who were scheduled to undergo unilateral ACLR.
Main Outcome Measure(s): Quadriceps strength, voluntary activation, and pain were measured at presurgery and return to activity, quantified using maximal voluntary isometric contractions (MVICs), central activation ratio, and the Knee Injury and Osteoarthritis Outcome Score pain subscale, respectively. Self-reported knee function, readiness to return to activity, and emotional responses to injury were evaluated at return to activity using the International Knee Documentation Committee questionnaire (IKDC), ACL Return to Sport After Injury scale (ACL-RSI), and Psychological Response to Sport Injury Inventory (PRSII), respectively. Pearson product moment correlations and linear regressions were performed using raw values and percentage change scores.
Results: Presurgical levels of pain significantly predicted 31% of the variance in the ACL-RSI and 29% in the PRSII scores at return to activity. The MVIC and pain collected at return to activity significantly predicted 74% of the variance in the IKDC, whereas only MVIC significantly predicted 36% of the variance in the ACL-RSI and 39% in the PRSII scores. Greater increases in MVIC from presurgery to return to activity significantly predicted 49% of the variance in the ACL-RSI and 59% of the variance in the IKDC scores.
Conclusion: Decreased quadriceps strength and higher levels of pain were associated with psychological responses in patients with ACLR. A comprehensive approach using traditional rehabilitation that includes attention to psychological barriers may be an effective strategy to improve outcomes in ACLR patients.
abstract_id: PUBMED:18029391
Associations between physical examination and self-reported physical function in older community-dwelling adults with knee pain. Background And Purpose: Knee pain is a common disabling condition for which older people seek primary care. Clinicians depend on the history and physical examination to direct treatment. The purpose of this study was to examine the associations between simple physical examination tests and self-reported physical functional limitations.
Subjects And Methods: A population sample of 819 older adults underwent a standardized physical examination consisting of 24 tests. Associations between the tests and self-reported physical functional limitations (Western Ontario and McMaster Universities Osteoarthritis Index physical functioning subscale [WOMAC-PF] scores) were explored.
Results: Five of the tests showed correlations with WOMAC-PF scores, corresponding to an intermediate effect (r ≥ .30). These were tenderness on palpation of the infrapatellar area, timed single-leg standing balance, maximal isometric quadriceps femoris muscle strength (force-generating capacity), reproduction of symptoms on patellofemoral compression, and degree of knee flexion. Each of these tests was able to account for between 7% and 13% of the variance in WOMAC-PF scores, after controlling for age, sex, and body mass index. Three of these tests are indicative of impairments that may be modifiable by exercise interventions.
Discussion And Conclusion: Self-reported physical functional limitations among older people with knee pain are associated with potentially modifiable physical impairments that can be identified by simple physical examination tests.
abstract_id: PUBMED:33434631
The clinical profile of people with knee osteoarthritis and a self-reported prior knee injury: A cross-sectional study of 10,973 people. Background: Little is known about how a prior knee injury affects the clinical profile of individuals with knee osteoarthritis (KOA) although this is potentially important to personalize care.
Objectives: To compare individual and clinical characteristics of individuals with KOA with and without a self-reported prior knee injury.
Design: Secondary data analysis of baseline data from the Good Life with osteoArthritis in Denmark (GLA:D®) registry.
Methods: Individuals with symptomatic KOA, self-reporting a prior knee injury requiring a doctor's assessment, were compared to individuals without prior knee injury on a range of individual and clinical characteristics using multivariable logistic regression.
Results: The analysis included 10,973 individuals with KOA of which 54% self-reported a prior knee injury. The average age was 64 years and 73% were female. We found that being male (Odds Ratio (OR): 0.99), having longer symptom duration of knee pain (OR: 1.07), having more painful body sites (OR: 1.03), being able to do more chair rises (OR: 1.02) and being more physically active in a week (2-4 days; OR:1.33) (>4 days; OR: 1.24) were associated with self-reporting a prior knee injury whereas being older (OR: 0.99), having higher BMI (OR: 0.99) and higher quality of life (OR: 0.98) were not associated with reporting a prior knee injury.
Conclusion: The overall pattern of our findings rather than specific characteristics indicates that individuals with KOA and a history of a self-reported knee injury have a somewhat different clinical profile than their non-injured peers.
abstract_id: PUBMED:30299278
Is Self-Reported Knee Stability Associated With Symptoms, Function, and Quality of Life in People With Knee Osteoarthritis After Anterior Cruciate Ligament Reconstruction? Objective: This study aimed to investigate the association of self-reported knee stability with symptoms, function, and quality of life in individuals with knee osteoarthritis after anterior cruciate ligament reconstruction (ACLR).
Setting: Cross-sectional.
Participants: Twenty-eight individuals with knee osteoarthritis, 5 to 12 years after ACLR.
Main Outcome Measures: Self-reported knee stability was assessed using visual analogue scales (VAS) during hop for distance (HD), side-to-side hop (SSH), and one-leg rise (OLR). Symptoms [Knee Injury and Osteoarthritis Outcome Score (KOOS) pain, Anterior Knee Pain Scale (AKPS), and International Knee Documentation Committee form], self-reported function (KOOS-sport/rec), performance-based function (hopping and OLR), and quality of life (KOOS-QOL) were assessed. K-means clustering categorized individuals into low (n = 8) and high self-reported knee stability (n = 20) groups based on participants' VAS scores during functional tasks.
Results: The low self-reported knee stability group had worse knee symptoms than the high self-reported knee stability group [KOOS-pain: mean difference -17 (95% confidence interval, -28 to -5); AKPS: -10 (-20 to -1)], and worse self-reported function [KOOS-sport/rec: -33 (-48 to -18)] and performance-based function [HD: -28 (-53 to -3); SSH: -10 (-20 to -1), OLR: -18 (-32 to -50)].
Conclusion: Low self-reported stability is associated with worse symptoms, and worse self-reported and performance-based function. Further research is required to determine the causation relation of self-reported knee stability to knee symptoms and function in individuals with knee osteoarthritis after ACLR.
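The low and high self-reported stability groups above were derived by k-means clustering of the VAS stability ratings. A minimal illustrative sketch of that clustering step (hypothetical VAS scores; scikit-learn):

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical self-reported stability VAS scores (0-100) during the three tasks:
# hop for distance, side-to-side hop, one-leg rise; one row per participant.
vas_scores = np.array([
    [85, 80, 90], [30, 25, 40], [75, 70, 80], [20, 35, 30],
    [90, 88, 92], [78, 82, 75], [25, 30, 20], [82, 79, 85],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(vas_scores)

# Identify which cluster corresponds to "high self-reported stability"
# by comparing the cluster centroids.
high_cluster = int(np.argmax(kmeans.cluster_centers_.mean(axis=1)))
print("high-stability group (row indices):", np.where(labels == high_cluster)[0])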
abstract_id: PUBMED:28323136
Self-reported knee pain and disability among healthy individuals: reference data and factors associated with the Knee injury and Osteoarthritis Outcome Score (KOOS) and KOOS-Child. Objective: To develop normative reference data for the Knee injury and Osteoarthritis Outcome Score (KOOS) and KOOS-Child, as well as investigate socio-demographic, psychological and physical factors associated with knee pain and disability among healthy adults.
Method: The KOOS or KOOS-Child (each containing five subscales) was administered to participants aged 8-101 years within the 1000 Norms Project, an observational study of 1000 self-reported healthy individuals. Self-efficacy, physical activity, body mass index (BMI), lower limb alignment, knee frontal plane projection angle (FPPA), knee range of motion (ROM), knee and hip strength, six-minute walk, 30-second chair stand and timed up and down stairs tests were collected. KOOS data were dichotomised using established cut-off scores and logistic regression analyses were conducted for each subscale.
Results: Socio-demographic characteristics were similar to the Australian population. Normative reference data were generated for children (8-17 years) and adults (18-101 years). Female adults were up to twice as likely to report knee pain, symptoms and sport/recreation (Sport/Rec) limitations compared to males (P < .05). Older age, lower self-efficacy, greater BMI, varus lower limb alignment, lower knee flexion ROM and lower hip external rotation (ER) strength were independently associated with knee pain and disability among adults.
Conclusions: Age- and gender-stratified reference data for the KOOS and KOOS-Child have been developed to guide interpretation of results in practice and research for individuals with knee disorders. Psychological and physical factors are linked with self-reported knee pain/disability among adults, and longitudinal studies to investigate causation are required.
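Normative reference data of this kind are usually reported as age- and gender-stratified summaries of the KOOS subscale distributions. The stratification step might look like the following sketch (synthetic scores, not the 1000 Norms Project data):

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical KOOS pain subscale scores (0-100, higher = better) for adults.
df = pd.DataFrame({
    "age": rng.integers(18, 102, 500),
    "gender": rng.choice(["female", "male"], 500),
    "koos_pain": np.clip(rng.normal(90, 12, 500), 0, 100).round(1),
})
df["age_band"] = pd.cut(df["age"], bins=[17, 39, 59, 79, 101],
                        labels=["18-39", "40-59", "60-79", "80-101"])

# Age- and gender-stratified reference values (25th percentile and median).
reference = (df.groupby(["age_band", "gender"], observed=True)["koos_pain"]
               .quantile([0.25, 0.5])
               .unstack())
print(reference)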
Answer: Yes, people who consciously attend to their movements have more self-reported knee pain. A study explored the relationship between the propensity for conscious control of movement and self-reported knee pain and found that mean scores on the conscious motor processing subscale of the movement-specific reinvestment scale were significantly higher for participants who reported knee pain within the previous year compared to those who did not. This association persisted even after controlling for movement self-consciousness subscale scores, age, gender, and body mass index (PUBMED:24942479). |
Instruction: Association between radiographic damage of the spine and spinal mobility for individual patients with ankylosing spondylitis: can assessment of spinal mobility be a proxy for radiographic evaluation?
Abstracts:
abstract_id: PUBMED:15958757
Association between radiographic damage of the spine and spinal mobility for individual patients with ankylosing spondylitis: can assessment of spinal mobility be a proxy for radiographic evaluation? Objective: To demonstrate the association between various measures of spinal mobility and radiographic damage of the spine in individual patients with ankylosing spondylitis, and to determine whether the assessment of spinal mobility can be a proxy for the assessment of radiographic damage.
Methods: Radiographic damage was assessed by the mSASSS. Cumulative probability plots combined the radiographic damage score of an individual patient with the corresponding score for nine spinal mobility measures. Receiver operating characteristic analysis was performed to determine the cut off level of every spinal mobility measure that discriminates best between the presence and absence of radiographic damage. Three arbitrary cut off levels for radiographic damage were investigated. Likelihood ratios were calculated to explore further the diagnostic properties of the spinal mobility measures.
Results: Cumulative probability plots showed an association between spinal mobility measures and radiographic damage for the individual patient. Irrespective of the chosen cut off level for radiographic progression, lateral spinal flexion and BASMI discriminated best between patients with and those without structural damage. Even the best discriminatory spinal mobility assessments misclassified a considerable proportion of patients (up to 20%). Intermalleolar distance performed worst (up to 30% misclassifications). Lateral spinal flexion best predicted the absence of radiographic damage, and a modified Schober test best predicted the presence of radiographic damage.
Conclusion: This study unequivocally demonstrated a relationship between spinal mobility and radiographic damage. However, spinal mobility cannot be used as a proxy for radiographic evaluation in an individual patient.
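The ROC step described above searches, for each mobility measure, for the cut-off that best separates patients with and without radiographic damage, and likelihood ratios follow from the sensitivity and specificity at that cut-off. A minimal sketch on synthetic data (scikit-learn), assuming lower lateral spinal flexion indicates damage:

import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)

# Synthetic example: lateral spinal flexion (cm) tends to be lower when
# radiographic damage is present.
damage = rng.integers(0, 2, 200)
flexion = np.where(damage == 1, rng.normal(8, 3, 200), rng.normal(14, 3, 200))

# ROC analysis; mobility is lower in damaged spines, so the score is -flexion.
fpr, tpr, thresholds = roc_curve(damage, -flexion)
youden_j = tpr - fpr
best = int(np.argmax(youden_j))      # cut-off maximising Youden's J
sens, spec = tpr[best], 1 - fpr[best]

lr_pos = sens / (1 - spec)           # positive likelihood ratio
lr_neg = (1 - sens) / spec           # negative likelihood ratio
print(f"cut-off = {-thresholds[best]:.1f} cm, sensitivity = {sens:.2f}, "
      f"specificity = {spec:.2f}, LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")

Maximising Youden's J is only one common way to pick a cut-off; the study's actual criterion is described in its full methods.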
abstract_id: PUBMED:37061230
Clinical Relevance of Axial Radiographic Damage in Axial Spondyloarthritis: Evaluation of Functional Consequences by an Objective Electronic Device. Objective: Axial spondyloarthritis (axSpA) is associated with decreased function and mobility of patients as a result of inflammation and radiographic damage. The Epionics SPINE device (ES), an electronic device that objectively measures spinal mobility, including range of motion (RoM) and speed (ie, range of kinematics [RoK]) of movement, has been clinically validated in axSpA. We investigated the performance of the ES relative to radiographic damage in the axial skeleton of patients with axSpA.
Methods: A total of 103 patients with axSpA, 31 with nonradiographic axSpA (nr-axSpA) and 72 with radiographic axSpA (r-axSpA), were consecutively examined. Conventional radiographs of the spine (including presence, number, and location of syndesmophytes) and the sacroiliac joints (SIJs; rated by the modified New York criteria) were analyzed with the ES. Function and mobility were assessed using analyses of covariance and Spearman correlation.
Results: The number of syndesmophytes correlated positively with Bath Ankylosing Spondylitis Metrology Index scores (r = 0.38, P = 0.02) and correlated negatively with chest expansion (r = -0.39, P = 0.02) and ES measurements (-0.53 ≤ r ≤ -0.34, all P < 0.03), except for RoM and RoK regarding rotation and RoK for extension of the lumbar and thoracic spines. In the radiographic evaluation of the SIJs, the extent of damage correlated negatively with ES scores and metric measurements (-0.49 ≤ r ≤ -0.33, all P < 0.001). Patients with r-axSpA, as compared to those with nr-axSpA, showed significantly worse ES scores for RoM, RoK, and chest expansion.
Conclusion: The ES scores, in accordance with mobility measurements, correlated well with the presence and extent of radiographic damage in the spine and the SIJs. As expected, patients with r-axSpA had more severe impairments than those with nr-axSpA.
abstract_id: PUBMED:29065931
Relevance of structural damage in the sacroiliac joints for the functional status and spinal mobility in patients with axial spondyloarthritis: results from the German Spondyloarthritis Inception Cohort. Background: Functional status and spinal mobility in patients with axial spondyloarthritis (axSpA) are known to be determined both by disease activity and by structural damage in the spine. The impact of structural damage in the sacroiliac joints (SIJ) on physical function and spinal mobility in axSpA has not been studied so far. The objective of the study was to analyze the impact of radiographic sacroiliitis on functional status and spinal mobility in patients with axSpA.
Methods: In total, 210 patients with axSpA were included in the analysis. Radiographs of SIJ obtained at baseline and after 2 years of follow up were scored by two trained readers according to the modified New York criteria grading system (grade 0-4). The mean of two readers' scores for each joint and a sum score for both SIJ were calculated for each patient giving a sacroiliitis sum score between 0 and 8. The Bath Ankylosing Spondylitis Functional Index (BASFI) and Bath Ankylosing Spondylitis Metrology Index (BASMI) at baseline and after 2 years were used as outcome measures.
Results: Longitudinal mixed model analysis adjusted for structural damage in the spine (modified Stoke Ankylosing Spondylitis Spine Score - mSASSS), disease activity (Bath Ankylosing Spondylitis Disease Activity Index - BASDAI and C-reactive protein level) and gender, revealed an independent association of the sacroiliitis sum score with the BASFI: b = 0.10 (95% CI 0.01-0.19) and the BASMI: b = 0.12 (95% CI 0.03-0.21), respectively, indicating that change by one radiographic sacroiliitis grade in one joint is associated with BASFI/BASMI worsening by 0.10/0.12 points, respectively, independently of disease activity and structural damage in the spine.
Conclusion: Structural damage in the SIJ might have an impact on functional status and spinal mobility in axSpA independently of spinal structural damage and disease activity.
Trial Registration: ClinicalTrials.gov, NCT01277419. Registered on 14 January 2011.
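The sacroiliitis sum score described above is the two readers' modified New York grades (0-4) averaged per joint and then summed over both sacroiliac joints, giving a 0-8 scale. A tiny sketch of that scoring step with hypothetical grades:

# Hypothetical modified New York grades (0-4) from two readers,
# for the left and right sacroiliac joint of one patient.
reader_1 = {"left": 2, "right": 3}
reader_2 = {"left": 3, "right": 3}

# Mean of the two readers per joint, then summed over both joints (range 0-8).
sum_score = sum((reader_1[j] + reader_2[j]) / 2 for j in ("left", "right"))
print(f"sacroiliitis sum score = {sum_score}")   # 2.5 + 3.0 = 5.5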
abstract_id: PUBMED:34393107
Successful Evaluation of Spinal Mobility Measurements With the Epionics SPINE Device in Patients With Axial Spondyloarthritis Compared to Controls. Objective: Epionics SPINE (ES), a novel device that measures spinal movements using electronic sensors including range of motion (RoM) and speed (range of kinematics [RoK]), has already been validated in patients with mechanical back pain and healthy individuals. This study aimed to evaluate ES for quantification of spinal mobility in patients with axial spondyloarthritis (axSpA).
Methods: A total of 153 individuals (39 female, 114 male) were examined: 134 patients with axSpA, of whom 40 had nonradiographic (nr-)axSpA and 94 had radiographic (r-)axSpA, and 19 healthy controls (HCs). The results were compared using mean ES scores, and modeling was performed using multivariable logistic regression models, resulting in good validity and high discriminative power.
Results: ES measurements showed meaningful differences between patients with axSpA and HCs (all P < 0.001), as well as between r- and nr-axSpA (P < 0.01). In patients with axSpA, a negative correlation between ES and Bath Ankylosing Spondylitis Metrology Index values was found: -0.76 ≤ r ≤ -0.52 (P < 0.05). Bath Ankylosing Spondylitis Functional Index scores showed a similar trend (r > -0.39). Patients with r-axSpA had more limited and slower spinal mobility than those with nr-axSpA. Other patient-reported outcomes showed little or no correlation.
Conclusion: This study shows that the ES is an objective performance measure and a valid tool to assess spinal mobility in axSpA, also based on the Outcomes Measures in Rheumatology (OMERACT) criteria. RoK and RoM scores provide additional information on physical function of patients with axSpA.
abstract_id: PUBMED:31468167
Epionics SPINE: use of an objective method to examine spinal mobility in patients with axial spondyloarthritis. Axial spondylarthritis (axSpA) is a chronic inflammatory disease of the spine that can be associated with loss of physical function and mobility and with impaired upright posture. Established tools for the assessment of function, which are largely based on subjective perception, are recorded semiquantitatively by standardized questionnaires (Bath ankylosing spondylitis functional index, BASFI), while measurement of spinal mobility of patients with axSpA is based on physical examination of various movement regions, particularly the axial skeleton (Bath ankylosing spondylitis metrology index, BASMI). Recently, a performance test has been added to assess the range of motion and speed of certain tasks (AS performance-based improved test, ASPI); however, since these tests have limited reliability and reproducibility, more objective tests would be desirable. In this study the spinal mobility of patients with axSpA was quantified using the Epionics SPINE device (ES) and data were evaluated using the outcome measures in rheumatology (OMERACT) criteria. The ES automatically measures various patterns of spinal movements using electronic sensors, which also assess the range and speed of carrying out movements. Patients with back pain from other causes and persons without back pain served as controls. The measurement results obtained with ES differed between the groups and correlated with BASMI values (r = 0.53-0.82, all p < 0.03). Patients with radiographically detectable axSpA had more limited and slower mobility than those with non-radiographically detectable axSpA. Overall, the results presented here suggest that ES measurements represent a valid and objective procedure for measuring spinal mobility in axSpA patients.
abstract_id: PUBMED:26337175
Construct validity of clinical spinal mobility tests in ankylosing spondylitis: a systematic review and meta-analysis. The study aimed to determine, using systematic review and meta-analysis, the level of evidence supporting the construct validity of spinal mobility tests for assessing patients with ankylosing spondylitis. Following the guidelines proposed in the Preferred Reporting Items for Systematic reviews and Meta-Analyses, three sets of keywords were used for data searching: (i) ankylosing spondylitis, spondyloarthritis, spondyloarthropathy, spondylarthritis; (ii) accuracy, association, construct, correlation, Outcome Measures in Rheumatoid Arthritis Clinical Trials, OMERACT, truth, validity; (iii) mobility, Bath Ankylosing Spondylitis Metrology Index-BASMI, radiography, spinal measures, cervical rotation, Schober (a further 19 keywords were used). Initially, 2558 records were identified, and from these, 21 studies were retained. Fourteen of these studies were considered to provide a high level of evidence. Compound indexes of spinal mobility showed mostly substantial to excellent levels of agreement with global structural damage. Individual mobility tests for the cervico-thoracic spine showed only moderate agreement with cervical structural damage, and considering structural damage at the lumbar spine, the original Schober was the only test that presented consistently substantial levels of agreement. Three studies assessed the construct validity of mobility measures for inflammation, and low to fair levels of agreement were observed. Two meta-analyses were conducted, with assessment of agreement between BASMI and two radiological indexes of global structural damage. The spinal mobility indexes and the original Schober test show acceptable construct validity for inferring the extent of structural damage when assessing patients with ankylosing spondylitis. Spinal mobility measures do not reflect levels of inflammation at the sacroiliac joints and/or the spine.
abstract_id: PUBMED:27803139
Physical Function and Spinal Mobility Remain Stable Despite Radiographic Spinal Progression in Patients with Ankylosing Spondylitis Treated with TNF-α Inhibitors for Up to 10 Years. Objective: The aim of the study was to investigate the effect of radiographic spinal progression and disease activity on function and spinal mobility in patients with ankylosing spondylitis (AS) treated with tumor necrosis factor-α (TNF-α) inhibitors for up to 10 years.
Methods: Patients with AS who participated in 2 longterm open-label extensions of clinical trials with TNF-α inhibitors (43 receiving infliximab and 17 receiving etanercept) were included in this analysis based on the availability of spinal radiographs performed at baseline and at a later timepoint (yr 2, 4, 6, 8, and 10) during followup. Spinal radiographs were scored according to the modified Stoke Ankylosing Spondylitis Spine Score (mSASSS). Function was assessed by the Bath Ankylosing Spondylitis Functional Index (BASFI), spinal mobility by the Bath Ankylosing Spondylitis Metrology Index (BASMI), and disease activity by the Bath Ankylosing Spondylitis Disease Activity Index (BASDAI).
Results: After the initial improvement, BASFI and BASMI remained remarkably stable at low levels over up to 10 years despite radiographic spinal progression. In the generalized mixed effects model analysis, no association between the mSASSS and the BASFI change (β = 0.0, 95% CI -0.03 to 0.03) was found, while there was some effect of mSASSS changes on BASMI changes over time (β = 0.05, 95% CI 0.01-0.09). BASDAI showed a strong association with function (β = 0.64, 95% CI 0.54-0.73) and to a lesser extent, with spinal mobility (β = 0.14, 95% CI 0.01-0.26).
Conclusion: Functional status and spinal mobility of patients with established AS remained stable during longterm anti-TNF-α therapy despite radiographic progression. This indicates that reduction and continuous control of inflammation might be able to outweigh the functional effect of structural damage progression in AS.
abstract_id: PUBMED:16391887
The relationship between severity and extent of spinal involvement and spinal mobility and physical functioning in patients with ankylosing spondylitis. The present study was undertaken to determine the relationship between spinal radiological changes of ankylosing spondylitis (AS), spinal mobility, and physical functioning. Thirty-one patients diagnosed as AS according to the modified New York criteria for AS were included in this study. Three radiographic scoring methods were used to assess spinal damage. Severity of spinal involvement was assessed by using Stoke Ankylosing Spondylitis Spine Score (SASSS) and Bath Ankylosing Spondylitis Radiographic Index-Spine (BASRI-S). To assess the extent of spinal involvement, the total number of vertebrae showing radiological findings attributable to AS [number of vertebrae involved (NoVI)] was calculated according to the AS grading system defined by Braun et al. Statistical analysis, consisting of bivariate correlation, Spearman correlation, and multiple linear regression analysis, was performed using Windows Statistical Package for the Social Sciences 13.0. NoVI was negatively correlated with modified Schober and lateral spinal flexion and was positively correlated with occiput-to-wall distance and BASMI. SASSS was negatively correlated with the modified Schober. BASRI-S was negatively correlated with the modified Schober and positively correlated with BASMI. When BASMI and Bath Ankylosing Spondylitis Functional Index were taken as dependent variables, only the NoVI was found to be associated with BASMI. In our data, the extent of spinal involvement (NoVI) showed a more significant correlation with spinal measurements such as modified Schober and BASMI as compared with the other radiologic scores (SASSS and BASRI-S). Furthermore, because only the NoVI was found to be associated with BASMI, we can conclude that the extent of spinal involvement, which also includes thoracic vertebrae, affects spinal measurements.
abstract_id: PUBMED:31126334
Incorporation of the anteroposterior lumbar radiographs in the modified Stoke Ankylosing Spondylitis Spine Score improves detection of radiographic spinal progression in axial spondyloarthritis. Background: To evaluate the performance of the extended modified Stoke Ankylosing Spondylitis Spine Score (mSASSS) incorporating information from anteroposterior (AP) lumbar radiographs as compared to the conventional mSASSS in detection of radiographic spinal progression in patients with axial spondyloarthritis (axSpA). Methods: A total of 210 patients with axSpA, 115 with radiographic axSpA (r-axSpA), and 95 with non-radiographic axSpA (nr-axSpA), from the GErman SPondyloarthritis Inception Cohort (GESPIC), were included in the analysis based on the availability of spinal radiographs (cervical spine lateral, lumbar spine lateral, and AP views), at baseline and year 2. Two trained readers independently scored lateral cervical and lumbar spine images according to the mSASSS system (0-3 per vertebral corner, 0-72 in total). In addition, all vertebral corners of vertebral bodies visible on lumbar AP radiographs (lower T12 to upper S1) were assessed according to the same scoring system that resulted in a total range for the extended mSASSS from 0 to 144. Reliability and sensitivity to detect radiographic spinal progression of the extended mSASSS as compared to the conventional mSASSS were evaluated.
Results: The reliability of conventional and extended scores was excellent with intraclass correlation coefficients (ICCs) of 0.926 and 0.927 at baseline and 0.920 and 0.933 at year 2, respectively. The mean ± SD scores for the mSASSS and the extended mSASSS at baseline were 4.25 ± 8.32 and 8.59 ± 17.96, respectively. The change scores between baseline and year 2 were 0.73 ± 2.34 and 1.19 ± 3.73 for the mSASSS and the extended mSASSS, respectively. With the extended mSASSS, new syndesmophytes after 2 years were detected in 4 additional patients, new syndesmophytes or growth of existing syndesmophytes in 5 additional patients, and progression by ≥ 2 points in the total score in 14 additional patients, corresponding to a 25%, 28%, and 46% increase in the proportion of patients with progression according to the respective definition as compared with the conventional score.
Conclusions: Incorporation of lumbar AP radiographs in the assessment of structural damage in the spine resulted in the detection of additional patients with radiographic spinal progression not captured by the conventional mSASSS score.
abstract_id: PUBMED:20498215
Both structural damage and inflammation of the spine contribute to impairment of spinal mobility in patients with ankylosing spondylitis. Objective: To study the relationship between spinal mobility, radiographic damage of the spine and spinal inflammation as assessed by MRI in patients with ankylosing spondylitis (AS).
Methods: In this subanalysis of the Ankylosing Spondylitis Study for the Evaluation of Recombinant Infliximab Therapy cohort, 214 patients, representing an 80% random sample, were investigated. Only baseline data were used. MRI inflammation was assessed by the AS spinal MRI activity (ASspiMRI-a) score, structural damage by the modified Stoke AS Spine Score (mSASSS) and spinal mobility by the linear definition of the Bath Ankylosing Spondylitis Metrology Index (BASMI). Univariate correlations were calculated on baseline values using Spearman rank correlation. Independent associations between the variables of interest were investigated by multivariate linear regression analysis. Associations with clinical disease activity, C-reactive protein, disease duration, age, gender, body mass index and HLA-B27 status were also investigated. Subanalyses were performed according to disease duration.
Results: BASMI correlated moderately well with mSASSS (Spearman's rho=0.6) and weakly with ASspiMRI-a (rho=0.3). A best-fit model for BASMI included both mSASSS (regression coefficient (B)=0.865, p<0.001) and ASspiMRI-a (B=0.236, p=0.018). In patients with a disease duration ≤ 3 years, B was greater for ASspiMRI-a than for mSASSS (0.595 vs 0.380), while in patients with a disease duration > 3 years B was greater for mSASSS than for ASspiMRI-a (0.924 vs 0.156).
Conclusion: Spinal mobility impairment in AS is independently determined both by irreversible spinal damage and by reversible spinal inflammation. Spinal mobility impairment is more influenced by spinal inflammation in early disease, and by structural damage in later disease.
Answer: The association between radiographic damage of the spine and spinal mobility in individual patients with ankylosing spondylitis (AS) has been investigated in several studies, but the question remains whether assessment of spinal mobility can serve as a proxy for radiographic evaluation.
One study demonstrated a relationship between various measures of spinal mobility and radiographic damage assessed by the modified Stoke Ankylosing Spondylitis Spine Score (mSASSS). However, it concluded that spinal mobility assessments cannot be used as a proxy for radiographic evaluation in individual patients due to misclassification of patients (up to 20-30%) (PUBMED:15958757).
Another study using the Epionics SPINE device (ES) found that ES scores, which objectively measure spinal mobility, correlated well with the presence and extent of radiographic damage in the spine and sacroiliac joints (SIJs). Patients with radiographic axSpA (r-axSpA) had more severe impairments than those with nonradiographic axSpA (nr-axSpA) (PUBMED:37061230).
Further research from the German Spondyloarthritis Inception Cohort revealed that structural damage in the SIJ might impact functional status and spinal mobility in axSpA independently of spinal structural damage and disease activity (PUBMED:29065931).
The ES has been validated as an objective performance measure and a valid tool to assess spinal mobility in axSpA, showing meaningful differences between patients with axSpA and healthy controls, as well as between r-axSpA and nr-axSpA (PUBMED:34393107).
A systematic review and meta-analysis on the construct validity of clinical spinal mobility tests in AS found that spinal mobility indexes and the original Schober test show acceptable construct validity for inferring the extent of structural damage when assessing patients with AS. However, spinal mobility measures do not reflect levels of inflammation at the sacroiliac joints and/or the spine (PUBMED:26337175).
In conclusion, while there is a demonstrated association between radiographic damage and spinal mobility, the assessment of spinal mobility cannot fully substitute for radiographic evaluation in individual patients with AS. Radiographic assessment remains essential for a comprehensive evaluation of structural damage in AS. |
Instruction: Do socially deprived urban areas have lesser supplies of cancer care services?
Abstracts:
abstract_id: PUBMED:22869877
Do socially deprived urban areas have lesser supplies of cancer care services? Purpose: Area social deprivation is associated with unfavorable health outcomes of residents across the full clinical course of cancer from the stage at diagnosis through survival. We sought to determine whether area social factors are associated with the area health care supply.
Patients And Methods: We studied the area supply of health services required for the provision of guideline-recommended care for patients with breast cancer and colorectal cancer (CRC) in each of the following three distinct clinical domains: screening, treatment, and post-treatment surveillance. We characterized area social factors in 3,096 urban zip code tabulation areas by using Census Bureau data and the health care supply in the corresponding 465 hospital service areas by using American Hospital Association, American Medical Association, and US Food and Drug Administration data. In two-level hierarchical models, we assessed associations between social factors and the supply of health services across areas.
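A minimal sketch of the kind of two-level hierarchical model described in these methods, assuming simulated data and hypothetical variable names (supply, deprivation, hsa) rather than the study's actual datasets, might look like this:
    # Two-level hierarchical (mixed-effects) model: area supply regressed on a deprivation score,
    # with zip code areas nested in hospital service areas (simulated data)
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_hsa, n_zip = 50, 500
    hsa = rng.integers(0, n_hsa, size=n_zip)          # hospital service area of each zip code area
    deprivation = rng.normal(0, 1, size=n_zip)        # standardized deprivation score
    hsa_effect = rng.normal(0, 1, size=n_hsa)         # random intercept for each service area
    supply = 5 + 0.0 * deprivation + hsa_effect[hsa] + rng.normal(0, 1, size=n_zip)  # no true association
    df = pd.DataFrame({"supply": supply, "deprivation": deprivation, "hsa": hsa})
    result = smf.mixedlm("supply ~ deprivation", data=df, groups=df["hsa"]).fit()
    print(result.summary())                           # fixed-effect coefficient near zero is expected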
Results: We found no clear associations between area social factors and the supply of health services essential to the provision of guideline recommended breast cancer and CRC care in urban areas. The measures of health service included the supply of physicians who facilitate screening, treatment, and post-treatment care and the supply of facilities required for the same services.
Conclusion: Because we found that the supply of types of health care required for the provision of guideline-recommended cancer care for patients with breast cancer and CRC did not vary with markers of area socioeconomic disadvantage, it is possible that previously reported unfavorable breast cancer and CRC outcomes among individuals living in impoverished areas may have occurred despite an apparently adequate area health care supply.
abstract_id: PUBMED:38375950
Rural-urban disparities and trends in utilization of palliative care services among US patients with metastatic breast cancer. Purpose: To assess trends and rural-urban disparities in palliative care utilization among patients with metastatic breast cancer.
Methods: We analyzed data from the 2004-2019 National Cancer Database. Palliative care services, including surgery, radiotherapy, systemic therapy, and/or other pain management, were provided to control pain or alleviate symptoms; utilization was dichotomized as "yes/no." Rural-urban residence, defined by the US Department of Agriculture Economic Research Service's Rural-Urban Continuum Codes, was categorized as "rural/urban/metropolitan." Multivariable logistic regression was used to examine rural-urban differences in palliative care use. Adjusted odds ratios (AORs) and 95% confidence intervals (CIs) were calculated.
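The multivariable logistic regression with adjusted odds ratios described here can be sketched as follows; the simulated data, covariates, and reference category are illustrative assumptions, not the registry analysis itself:
    # Logistic regression for palliative care use by residence, reporting adjusted odds ratios (simulated data)
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 5000
    residence = rng.choice(["urban", "rural", "metro"], size=n, p=[0.12, 0.02, 0.86])
    age = rng.normal(62, 14, size=n)
    lin = -1.2 - 0.17 * (residence == "rural") - 0.16 * (residence == "metro") + 0.005 * (age - 62)
    used = rng.binomial(1, 1 / (1 + np.exp(-lin)))
    df = pd.DataFrame({"palliative": used, "residence": residence, "age": age})
    fit = smf.logit("palliative ~ C(residence, Treatment('urban')) + age", data=df).fit(disp=False)
    print(np.exp(fit.params))        # adjusted odds ratios, with urban residence as the reference
    print(np.exp(fit.conf_int()))    # corresponding 95% confidence intervals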
Findings: Of 133,500 patients (mean age 62.4 [SD = 14.2] years), 86.7%, 11.7%, and 1.6% resided in metropolitan, urban, and rural areas, respectively; 72.5% were White, 17.0% Black, 5.8% Hispanic, and 2.7% Asian. Overall, 20.3% used palliative care, with a significant increase from 15.6% in 2004-2005 to 24.5% in 2008-2019 (7.0% increase per year; p-value for trend <0.001). In urban areas, 23.3% received palliative care, compared to 21.0% in rural and 19.9% in metropolitan areas (p < 0.001). After covariate adjustment, patients residing in rural (AOR = 0.84; 95% CI: 0.73-0.98) or metropolitan (AOR = 0.85, 95% CI: 0.80-0.89) areas had lower odds of having used palliative care than those in urban areas.
Conclusions: In this national, racially diverse sample of patients with metastatic breast cancer, the utilization of palliative care services increased over time, though remained suboptimal. Further, our findings highlight rural-urban disparities in palliative care use and suggest the potential need to promote these services while addressing geographic access inequities for this patient population.
abstract_id: PUBMED:12881259
A good death in Uganda: survey of needs for palliative care for terminally ill people in urban areas. Objective: To identify the palliative care needs of terminally ill people in Uganda.
Design: Descriptive cross sectional study.
Setting: Home care programmes in and around Kampala that look after terminally ill people in their homes.
Participants: 173 terminally ill patients registered with the home care programmes.
Results: Most of the participants had either HIV/AIDS or cancer or both; 145 were aged under 50 years, and 107 were women. Three main needs were identified: the control or relief of pain and other symptoms; counselling; and financial assistance for basic needs such as food, shelter, and school fees for their children. The preferred site of care was the home, though all these people lived in urban areas with access to healthcare services within 5 km of their homes.
Conclusion: A "good death" in a developing country occurs when the dying person is being cared for at home, is free from pain or other distressing symptoms, feels no stigma, is at peace, and has their basic needs met without feeling dependent on others.
abstract_id: PUBMED:20665415
Medical and psychosocial care needs of cancer patients: a systematic review comparing urban and rural provisions Background And Objective: The psychological and oncological care needs of patients with cancer and an adequate structure for their medical care have so far been only marginally considered with regard to disparities in patients' residence (rural or urban). Even though there are thought to be such differences, for example with regard to existing care services and obvious specific care needs for patients in rural areas. This study addresses these issues in a systematic survey of the pertinent literature.
Methods: Publications in the last ten years dealing with identified problems were reviewed. A total of 27 studies met the criteria for analysis.
Results: Significant differences between medical care, psychosocial stress and the desired support were reported. Rural patients were more likely to be at a disadvantage compared with their urban counterparts with regard to medical care, being more often burdened cumulatively, and they strongly expressed the wish for psychological and oncological care. But the comparability of these results and transferring these findings to conditions in Germany proved difficult.
Conclusion: When investigating the demand for psycho-oncological care, one needs to be aware of potential differences between rural and urban areas. Hence, in order to reliably distinguish between rural and urban living areas, a set of concrete criteria which define rural and urban surroundings needs to be established.
abstract_id: PUBMED:33826747
Disparities in accessibility to evidence-based breast cancer care facilities by rural and urban areas in Bavaria, Germany. Background: Breast cancer (BC), which is most common in elderly women, requires a multidisciplinary and continuous approach to care. With demographic changes, the number of patients with chronic diseases such as BC will increase. This trend will especially hit rural areas, where the majority of the elderly live, in terms of comprehensive health care.
Methods: Accessibility to several cancer facilities in Bavaria, Germany, was analyzed with a geographic information system. Facilities were identified from the national BC guideline and from 31 participants in a proof-of-concept study from the Breast Cancer Care for Patients With Metastatic Disease registry. The timeframe for accessibility was defined as 30 or 60 minutes for all population points. The collection of address information was performed with different sources (eg, a physician registry). Routine data from the German Census 2011 and the population-based Cancer Registry of Bavaria were linked at the district level.
Results: Females from urban areas (n = 2,938,991 [ie, total of females living in urban areas]) had a higher chance for predefined accessibility to the majority of analyzed facilities in comparison with females from rural areas (n = 3,385,813 [ie, total number of females living in rural areas]) with an odds ratio (OR) of 9.0 for cancer information counselling, an OR of 17.2 for a university hospital, and an OR of 7.2 for a psycho-oncologist. For (inpatient) rehabilitation centers (OR, 0.2) and genetic counselling (OR, 0.3), women from urban areas had lower odds of accessibility within 30 or 60 minutes.
Conclusions: Disparities in accessibility between rural and urban areas exist in Bavaria. The identification of underserved areas can help to inform policymakers about disparities in comprehensive health care. Future strategies are needed to deliver high-quality health care to all inhabitants, regardless of residence.
abstract_id: PUBMED:23658633
Trend of urban-rural disparities in hospice utilization in Taiwan. Aims: The palliative care has spread rapidly worldwide in the recent two decades. The development of hospice services in rural areas usually lags behind that in urban areas. The aim of our study was to investigate whether the urban-rural disparity widens in a country with a hospital-based hospice system.
Methods: From the nationwide claims database within the National Health Insurance in Taiwan, admissions to hospices from 2000 to 2006 were identified. Hospices and patients in each year were analyzed according to geographic location and residence.
Results: A total of 26,292 cancer patients had been admitted to hospices. The proportion of rural patients among all patients increased over time, from 17.8% in 2000 to 25.7% in 2006. Although the number of beds and utilization in both urban and rural hospices expanded rapidly, the increasing trend in rural areas was more marked than that in urban areas. However, two-thirds (898/1,357) of rural patients were still admitted to urban hospices in 2006.
Conclusions: The gap in hospice utilization between urban and rural areas in Taiwan did not widen with time. There was room for improvement in ensuring a sufficient supply of rural hospices and efficient referral of rural patients.
abstract_id: PUBMED:26040484
Palliative care costs in Canada: A descriptive comparison of studies of urban and rural patients near end of life. Background: Significant gaps in the evidence base on costs in rural communities in Canada and elsewhere are reported in the literature, particularly regarding costs to families. However, it remains unclear whether the costs related to all resources used by palliative care patients in rural areas differ to those resources used in urban areas.
Aim: The study aimed to compare both the costs that occurred over 6 months of participation in a palliative care program and the sharing of these costs in rural areas compared with those in urban areas.
Design: Data were drawn from two prior studies performed in Canada, employing a longitudinal, prospective design with repeated measures.
Setting/participants: The urban sample consisted of 125 patients and 127 informal caregivers. The rural sample consisted of 80 patients and 84 informal caregivers. Most patients in both samples had advanced cancer.
Results: The mean total cost per patient was CAD 26,652 in urban areas, while it was CAD 31,018 in rural areas. The family assumed 20.8% and 21.9% of costs in the rural and urban areas, respectively. The rural families faced more costs related to prescription medication, out-of-pocket costs, and transportation while the urban families faced more costs related to formal home care.
Conclusion: Despite the fact that rural and urban families assumed a similar portion of costs, the distribution of these costs was somewhat different. Future studies would be needed to gain a better understanding of the dynamics of costs incurred by families taking care of a loved one at the end of life and the determinants of these costs in urban versus rural areas.
abstract_id: PUBMED:10133703
Border crossing for physician services: implications for controlling expenditures. In this article, the authors explore geographic border crossing for the use of Medicare physician services. Using data from the 1988 Part B Medicare Annual Data (BMAD) file, they find that there is substantial geographic variation across both States and urban and rural areas in border crossing to seek services. As might be expected, there is more border crossing among smaller geographic areas than among States. Predominantly rural areas tend to be major importers of services, but urban areas, on average, export services. Border crossing tends to be greater for high-technology services such as advanced imaging, cardiovascular surgery, and oncology procedures. These results suggest that expenditure-control policies applying to States or metropolitan areas should incorporate adjusters for patients' current geographic patterns of care.
abstract_id: PUBMED:8877576
Urban-based Native American cancer-control activities: services and perceptions. Background: Cancer has become a significant health concern in American Indian communities. Over the past several decades Native peoples have experienced significant increases in life expectancy and, with these gains, significant increases in cancer incidence and mortality. Limited data are available concerning cancer-control activities accessible to American Indian communities. Even less is known about control programs in place for American Indians resident in urban areas, where more that half of all Native peoples reside.
Methods: To ascertain the extent of available services and perceptions of health directors, a survey of all Indian-Health-Service-recognized urban clinics was undertaken.
Results: Results indicate that the cancer needs of American Indians resident in urban areas are not being adequately addressed. Only one-third of urban health directors reported perceived increases in cancer incidence and mortality rates. The directors ranked cancer fifth among seven health problems in terms of their clinics' commitment to addressing them. Findings from this study are juxtaposed with those obtained in a separate survey of reservation-based health directors.
Conclusions: Results indicate a need to develop more responsive cancer-control programs in Indian country and to sensitize researchers to other health needs of these communities.
abstract_id: PUBMED:35329094
Consequences of Structural Urbanism: Urban-Rural Differences in Cancer Patients' Use and Perceived Importance of Supportive Care Services from a 2017-2018 Midwestern Survey. Background: Structural inequities, in part, undergird urban-rural differences in cancer care. The current study aims to understand the potential consequences of structural inequities on rural and urban cancer patients' access to and perceived importance of supportive cancer care resources.
Methods: We used data collected from November 2017 to May 2018 from a larger cross-sectional needs assessment about patients' support needs, use of services, and perceptions at a Midwestern United States cancer center. Oncology patients received a study packet during their outpatient clinic visit, and interested patients consented and completed the questionnaires.
Results: Among the sample of 326 patients, 27% of the sample was rural. In adjusted logistic regression models, rural patients were less likely to report using any secondary support services (15% vs. 27%; OR = 0.43, 95%CI [0.22, 0.85], p = 0.02) and less likely than urban counterparts to perceive secondary support services as very important (51% vs. 64%; OR = 0.57, 95%CI [0.33, 0.94], p = 0.03).
Conclusion: Structural inequities likely contribute to the reduced access to, and lower perceived importance of, supportive care services observed among rural cancer patients. To eliminate persistent urban-rural disparities in cancer care, rural residents must have programs and policies that address both cancer care and structural inequities.
Answer: The study reported in abstract PUBMED:22869877 investigated whether area social factors are associated with the supply of health services required for the provision of guideline-recommended care for patients with breast cancer and colorectal cancer (CRC) in urban areas. The results showed no clear associations between area social factors and the supply of health services essential to the provision of guideline-recommended cancer care for patients with breast cancer and CRC in urban areas. The study concluded that the supply of types of health care required for the provision of guideline-recommended cancer care did not vary with markers of area socioeconomic disadvantage, suggesting that previously reported unfavorable breast cancer and CRC outcomes among individuals living in impoverished areas may have occurred despite an apparent adequate area health care supply.
Therefore, based on the findings of this study, it cannot be concluded that socially deprived urban areas have lesser supplies of cancer care services. |
Instruction: Do patients with advanced cognitive impairment admitted to hospitals with higher rates of feeding tube insertion have improved survival?
Abstracts:
abstract_id: PUBMED:22871537
Do patients with advanced cognitive impairment admitted to hospitals with higher rates of feeding tube insertion have improved survival? Context: Research is conflicting on whether receiving medical care at a hospital with more aggressive treatment patterns improves survival.
Objectives: The aim of this study was to examine whether nursing home residents admitted to hospitals with more aggressive patterns of feeding tube insertion had improved survival.
Methods: Using the 1999-2007 Minimum Data Set matched to Medicare claims, we identified hospitalized nursing home residents with advanced cognitive impairment who did not have a feeding tube inserted prior to their hospital admissions. The sample included 56,824 nursing home residents and 1773 acute care hospitals nationwide. Hospitals were categorized into nine groups based on feeding tube insertion rates and whether the rates were increasing, staying the same, or decreasing between the periods of 2000-2003 and 2004-2007. Multivariate logit models were used to examine the association between the hospital patterns of feeding tube insertion and survival among hospitalized nursing home residents with advanced cognitive impairment.
Results: Nearly one in five hospitals (N=366) had persistently high rates of feeding tube insertion. Being admitted to these hospitals with persistently high rates of feeding tube insertion was not associated with improved survival when compared with being admitted to hospitals with persistently low rates of feeding tube insertion. The adjusted odds ratios were 0.93 (95% confidence interval [CI]: 0.87, 1.01) and 1.02 (95% CI: 0.95, 1.09) for one-month and six-month posthospitalization survival, respectively.
Conclusion: Hospitals with more aggressive patterns of feeding tube insertion did not have improved survival for hospitalized nursing home residents with advanced cognitive impairment.
abstract_id: PUBMED:20145231
Hospital characteristics associated with feeding tube placement in nursing home residents with advanced cognitive impairment. Context: Tube-feeding is of questionable benefit for nursing home residents with advanced dementia. Approximately two-thirds of US nursing home residents who are tube fed had their feeding tube inserted during an acute care hospitalization.
Objective: To identify US hospital characteristics associated with higher rates of feeding tube insertion in nursing home residents with advanced cognitive impairment.
Design, Setting, And Patients: The sample included nursing home residents aged 66 years or older with advanced cognitive impairment admitted to acute care hospitals between 2000 and 2007. Rate of feeding tube placement was based on a 20% sample of all Medicare Claims files and was assessed in hospitals with at least 30 such admissions during the 8-year period. A multivariable model with the unit of the analysis being the hospital admission identified hospital-level factors independently associated with feeding tube insertion rates, including bed size, ownership, urban location, and medical school affiliation. Measures of each hospital's care practices for all patients with serious chronic illnesses were evaluated, including intensive care unit (ICU) use in the last 6 months of life, the use of hospice services, and the ratio of specialist to primary care physicians. Patient-level characteristics were also considered.
Main Outcome Measure: Endoscopic or surgical insertion of a gastrostomy tube during a hospitalization.
Results: In 2797 acute care hospitals with 280,869 admissions among 163,022 nursing home residents with advanced cognitive impairment, the rate of feeding tube insertion varied from 0 to 38.9 per 100 hospitalizations (mean [SD], 6.5 [5.3]; median [interquartile range], 5.3 [2.6-9.3]). The mean rate of feeding tube insertions per 100 admissions was 7.9 in 2000, decreasing to 6.2 in 2007. Higher insertion rates were associated with the following hospital features: for-profit ownership vs government owned (8.5 vs 5.5 insertions per 100 hospitalizations; adjusted odds ratio [AOR], 1.33; 95% confidence interval [CI], 1.21-1.46), larger size (>310 beds vs <101 beds: 8.0 vs 4.3 insertions per 100 hospitalizations; AOR, 1.48; 95% CI, 1.35-1.63), and greater ICU use in the last 6 months of life (highest vs lowest decile: 10.1 vs 2.9 insertions per 100 hospitalizations; AOR, 2.60; 95% CI, 2.20-3.06). These differences persisted after controlling for patient characteristics. Specialist to primary care ratio and hospice use were weakly or not associated with feeding tube placement.
Conclusion: Among nursing home residents with advanced cognitive impairment admitted to acute care hospitals, for-profit ownership, larger hospital size, and greater ICU use were associated with increased rates of feeding tube insertion, even after adjusting for patient-level characteristics.
abstract_id: PUBMED:23002947
Does feeding tube insertion and its timing improve survival? Objectives: To examine survival with and without a percutaneous endoscopic gastrostomy (PEG) feeding tube using rigorous methods to account for selection bias and to examine whether the timing of feeding tube insertion affected survival.
Design: Prospective cohort study.
Setting: All U.S. nursing homes (NHs).
Participants: Thirty-six thousand four hundred ninety-two NH residents with advanced cognitive impairment from dementia and new eating problems, studied between 1999 and 2007.
Measurements: Survival after development of the need for eating assistance and feeding tube insertion.
Results: Of the 36,492 NH residents (88.4% white, mean age 84.9, 87.4% with one feeding tube risk factor), 1,957 (5.4%) had a feeding tube inserted within 1 year of developing eating problems. After multivariate analysis correcting for selection bias with propensity score weights, no difference was found in survival between the two groups (adjusted hazard ratio (AHR) = 1.03, 95% confidence interval (CI) = 0.94-1.13). In residents who were tube-fed, the timing of PEG tube insertion relative to the onset of eating problems was not associated with survival after feeding tube insertion (AHR = 1.01, 95% CI = 0.86-1.20, persons with a PEG tube inserted within 1 month of developing an eating problem versus later (4 months) insertion).
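A minimal sketch of a propensity-score-weighted survival comparison of this kind, assuming simulated data and an inverse-probability-of-treatment weighting scheme (not the study's actual specification), is shown below:
    # Inverse-probability-of-treatment weighting followed by a weighted Cox model (simulated data)
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(3)
    n = 5000
    age = rng.normal(85, 6, size=n)
    swallow = rng.binomial(1, 0.5, size=n)                        # hypothetical risk factor
    p_tube = 1 / (1 + np.exp(-(-3 + 0.02 * (age - 85) + swallow)))
    tube = rng.binomial(1, p_tube)                                # insertion depends on covariates (selection)
    time = rng.exponential(scale=np.exp(1.5 - 0.3 * swallow))     # survival unaffected by the tube (true HR = 1)
    df = pd.DataFrame({"tube": tube, "age": age, "swallow": swallow, "time": time, "event": 1})

    ps = smf.logit("tube ~ age + swallow", data=df).fit(disp=False).predict(df)  # propensity scores
    df["iptw"] = np.where(df["tube"] == 1, 1 / ps, 1 / (1 - ps))                 # unstabilized IPTW weights

    cph = CoxPHFitter()
    cph.fit(df[["time", "event", "tube", "iptw"]], duration_col="time", event_col="event",
            weights_col="iptw", robust=True)
    cph.print_summary()               # hazard ratio (exp(coef)) for 'tube' expected to be close to 1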
Conclusion: Neither insertion of PEG tubes nor the timing of insertion affects survival.
abstract_id: PUBMED:19327073
Churning: the association between health care transitions and feeding tube insertion for nursing home residents with advanced cognitive impairment. Background: There is a tenfold variation across U.S. states in the prevalence of feeding tube use among elderly nursing home residents (NHR) with advanced cognitive impairment. The goal of this study was to examine whether regions with higher rates of health care transitions at the end of life are more likely to use feeding tubes in patients with severe cognitive impairment.
Methods: A retrospective cohort study of U.S. nursing home residents with advanced cognitive impairment. The incidence of feeding tube insertion was determined by Medicare Part A and B billing data. A count of the number of health care transitions in the last 6 months of life was determined for nursing home residents. A multivariate model examined the association between residing in a geographic region with higher rates of health care transitions and the insertion of a feeding tube in nursing home residents with advanced cognitive impairment.
Results: Hospital Referral Region (HRR) health care transitions varied from 192 (Salem, Oregon) to 509 per 100 decedents (Monroe, Louisiana) within the last 6 months of life. HRRs with higher transition rates had a higher incidence of feeding tube insertion (Spearman correlation = 0.58). Subjects residing in regions in the highest quintile of transition rates were 2.5 times (95% confidence interval [CI] 1.9-3.2) more likely to have a feeding tube inserted compared with those residing in the lowest quintile.
Conclusions: Regions with higher rates of care transitions among nursing home residents are also much more likely to have higher rates of feeding tube placement for patients with severe cognitive impairment, a population in whom benefit is unlikely.
abstract_id: PUBMED:32402137
Continuity of Hospital Care and Feeding Tube Use in Cognitively Impaired Hospitalized Persons. Objectives: Hospitalists are increasingly the attending physician for hospitalized patients, and the scheduling of their shifts can affect patient continuity. For dementia patients, the impact is unknown.
Design: Longitudinal study using physician billing claims between 2000 and 2014 to examine the association of continuity of care with the insertion of a feeding tube (FT).
Setting: US hospitals.
Participants: Between 2000 and 2014, 166,056 hospitalizations of patients with a prior nursing home stay, advanced cognitive impairment, and impairments in four or more activities of daily living (mean age = 84.2 years; 30.4% male; 81.0% white).
Measurements: Continuity of care measured at the hospital level with the Sequential Continuity Index (SECON; range = 0 to 100; higher score indicates higher continuity).
Results: Rates of a hospitalist acting as the attending physician increased from 9.6% in 2000 to 22.6% in 2010, whereas a primary care physician with a predominant outpatient focus acting as the attending physician decreased from 50.3% in 2000 to 12.6% in 2014. Post-2010, a mixture of physician specialties increased from 55.5% to 66.4%, with a reduction in hospitalists from 22.6% (2010) to 14.1% (2013). Continuity of care decreased over time, with SECON dropping from 63.0 to 43.5. Adjusting for patient baseline risk factors, a nonlinear association was observed between SECON and FT insertion. Using cubic splines in the multivariate logistic regression model, hospitals in which the SECON score dropped from 82 to 23 had an adjusted risk ratio (ARR) of FT insertion of 1.48 (95% confidence interval [CI] = 1.34-1.63); hospitals in which SECON dropped from 51 to 23 had an ARR of FT insertion of 1.38 (95% CI = 1.27-1.50).
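The spline-based logistic model mentioned above can be sketched as follows; the simulated data, the choice of four spline degrees of freedom, and the risk contrast are illustrative assumptions rather than the study's specification:
    # Logistic regression with a cubic spline for the continuity index (SECON), on simulated data
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    n = 20000
    secon = rng.uniform(0, 100, size=n)                             # hypothetical continuity scores
    age = rng.normal(84, 7, size=n)
    lin = -3.0 + 0.9 * np.exp(-secon / 40) + 0.01 * (age - 84)      # risk rises nonlinearly as continuity falls
    ft = rng.binomial(1, 1 / (1 + np.exp(-lin)))
    df = pd.DataFrame({"ft": ft, "secon": secon, "age": age})
    fit = smf.logit("ft ~ cr(secon, df=4) + age", data=df).fit(disp=False)  # patsy natural cubic spline basis
    # adjusted risk contrast between low and high continuity via average predicted probabilities
    rr = fit.predict(df.assign(secon=23.0)).mean() / fit.predict(df.assign(secon=82.0)).mean()
    print(f"predicted risk ratio, SECON 23 vs 82: {rr:.2f}")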
Conclusion: Hospitalized dementia patients in hospitals in which continuity of care was lower had higher rates of FT insertions. Newer models of care are needed to enhance care continuity and thus ensure treatment consistent with likely outcomes of care and goals of care. J Am Geriatr Soc 68:1852-1856, 2020.
abstract_id: PUBMED:19370678
Enteral tube feeding for older people with advanced dementia. Background: The use of enteral tube feeding for patients with advanced dementia who have poor nutritional intake is common. In one US survey 34% of 186,835 nursing home residents with advanced cognitive impairment were tube fed. Potential benefits or harms of this practice are unclear.
Objectives: To evaluate the outcome of enteral tube nutrition for older people with advanced dementia who develop problems with eating and swallowing and/or have poor nutritional intake.
Search Strategy: The Specialized Register of the Cochrane Dementia and Cognitive Improvement Group (CDCIG), The Cochrane Library, MEDLINE, EMBASE, PsycINFO, CINAHL and LILACS were searched in April 2008. Citation checking was undertaken. Where it was not possible to accept or reject a citation, its full text was obtained for further evaluation.
Selection Criteria: Randomized controlled trials (RCTs), controlled clinical trials, controlled before-and-after studies, and interrupted time series studies that evaluated the effectiveness of enteral feeding via a nasogastric tube or via a tube passed by percutaneous endoscopic gastrostomy (PEG) were planned to be included. In addition, controlled observational studies were included. The study population comprised adults aged 50 and over (either sex) with a diagnosis of primary degenerative dementia made according to validated diagnostic criteria such as DSM-IV or ICD-10 (APA 1994; WHO 1993), with advanced cognitive impairment defined by a recognised and validated tool or by clinical assessment, and who had poor nutritional intake and/or developed problems with eating and swallowing. Where data were limited, we also considered studies in which the majority of participants had dementia.
Data Collection And Analysis: Data were independently extracted and assessed by one reviewer and checked by a second; it was planned that, in the case of any disagreement or discrepancy, the data would be reviewed by a third reviewer. Where information was lacking, we attempted to contact the authors. It was planned that meta-analysis would be considered for RCTs with comparable key characteristics. The primary outcomes were survival and quality of life (QoL).
Main Results: No RCTs were identified. Seven observational controlled studies were identified. Six assessed mortality. The other study assessed nutritional outcomes. There was no evidence of increased survival in patients receiving enteral tube feeding. None of the studies examined QoL and there was no evidence of benefit in terms of nutritional status or the prevalence of pressure ulcers.
Authors' Conclusions: Despite the very large number of patients receiving this intervention, there is insufficient evidence to suggest that enteral tube feeding is beneficial in patients with advanced dementia. Data are lacking on the adverse effects of this intervention.
abstract_id: PUBMED:9040301
The risk factors and impact on survival of feeding tube placement in nursing home residents with severe cognitive impairment. Background: The provision of artificial enteral nutrition to an aged person with severe cognitive impairment is a complex dilemma in the long-term care setting.
Objective: To determine the risk factors and impact on survival of feeding tubes in nursing home residents with advanced cognitive impairment.
Methods: We conducted a cohort study with 24-month follow-up using Minimum Data Set resident assessments on 1386 nursing home residents older than 65 years with recent progression to severe cognitive impairment in the state of Washington. Residents within this population who underwent feeding tube placement were identified. Clinical characteristics and survival for a period of 24 months were compared for residents who were and were not tube fed.
Results: Among the residents with recent progression to severe cognitive impairment, 9.7% underwent placement of a feeding tube. Factors independently associated with feeding tube placement included age younger than 87 years (odds ratio [OR], 1.85; 95% confidence interval [CI], 1.25-2.78), aspiration (OR, 5.46; 95% CI, 2.66-11.20), swallowing problems (OR, 3.00; 95% CI, 1.81-4.97), pressure ulcer (OR, 1.64; 95% CI, 1.23-2.95), stroke (OR, 2.12; 95% CI, 1.17-2.62), less baseline functional impairment (OR, 2.07; 95% CI, 1.27-3.36), no do-not-resuscitate order (OR, 3.03; 95% CI, 1.92-4.85), and no dementia (OR, 2.17; 95% CI, 1.43-3.22). Survival did not differ between groups of residents with and without feeding tubes even after adjusting for independent risk factors for feeding tube placement.
Conclusions: There are specific risk factors associated with feeding tube placement in nursing home residents with severe cognitive impairment. However, there is no survival benefit compared with similar residents who are not tube fed. These prognostic data are important for health care providers, families, and patients making decisions regarding enteral nutritional support in long-term care.
abstract_id: PUBMED:12534849
Nursing home characteristics associated with tube feeding in advanced cognitive impairment. Objectives: To identify nursing homes factors associated with the use of tube feeding in advanced cognitive impairment.
Design: Descriptive study.
Setting: The On-line Survey Certification of Automated Records (OSCAR) was used to obtain facility characteristics from 1,057 licensed nursing homes in six states from 1995 to 1996.
Participants: Residents aged 65 and older with advanced cognitive impairment who had a feeding tube placed over a 1-year period were identified using the Minimum Data Set.
Measurements: Nursing home characteristics independently associated with feeding tube placement were determined.
Results: Having a full-time speech therapist on staff, more licensed nurses and fewer nursing assistants were independently associated with greater use of tube feeding in severely cognitively impaired residents. Other features associated with tube feeding included larger facility size, higher proportion of Medicaid beds, absence of an Alzheimer's disease unit, pressure ulcers in 10% or more of residents, and a higher proportion of residents lacking advance directives and with total functional dependency.
Conclusions: Assessment by a speech therapist, staffing ratios, advance directives, fiscal considerations, and specialized dementia units are potentially modifiable factors in nursing homes that may influence the practice of tube feeding in advanced cognitive impairment.
abstract_id: PUBMED:27273351
Person-centered Feeding Care: A Protocol to Re-introduce Oral Feeding for Nursing Home Patients with Tube Feeding. Background: Although the literature on nursing home (NH) patients with tube feeding (TF) has focused primarily on the continuation vs. discontinuation of TF, the reassessment of these patients for oral feeding has been understudied. Re-assessing patients for oral feeding may be better received by families and NH staff than approaches focused on stopping TF, and may provide an opportunity to address TF in less cognitively impaired patients as well as those with end-stage conditions. However, the literature contains little guidance on a systematic interdisciplinary team approach to the oral feeding reassessment of patients with TF, who are admitted to NHs.
Methods: This project had two parts that were conducted in one 170-bed intermediate/skilled, Medicare-certified NH in Honolulu, Hawai'i. Part 1 consisted of a retrospective observational study of characteristics of TF patients versus non-tube fed patients at NH admission (2003-2006) and longitudinal follow-up (through death or 6/30/2011) with usual care of the TF patients for outcomes of: feeding and swallowing reassessment, goals of care reassessment, feeding status (TF and/or per oral (PO) feedings), and hospice status. Part 2 involved the development of an interdisciplinary TF reassessment protocol through working group discussions and a pilot test of the protocol on a new set of patients admitted with TF from 2011-2014.
Results: Part 1: Of 238 admitted patients, 13.4% (32/238) had TF. Prior stroke and lack of DNR status were associated with an increased likelihood of TF. Of the 32 patients with TF at NH admission, 15 could communicate and interact (mild, moderate, or no cognitive impairment with prior stroke or pneumonia), while 17 were nonverbal and/or bedbound patients (advanced cognitive impairment or terminal disease). In the more cognitively intact group, 9/15 (60%) were never reassessed for tolerance of oral diets and 10/15 (66.7%) remained with TF without any oral feeding until death. Of the end-stage group, 13/17 (76.5%) did not have goals of care reassessed and remained with TF without oral feeding until death. Part 2: The protocol pilot project included all TF patients admitted to the facility in 2011-2014 (N=33). Of those who were more cognitively intact (n=22), 21/22 (95.5%) had swallowing reassessed, 11/22 (50%) resumed oral feedings, but 11 (50%) failed reassessment and continued exclusive TF. Of those with end-stage disease (n=11), 100% had goals of care reassessed and 9 (81.8%) families elected individualized oral feeding (with or without TF).
Conclusion: Using findings from our retrospective study of usual care, our NH's interdisciplinary team developed and pilot-tested a protocol that successfully reintroduced oral feedings to tube-fed NH patients who previously would not have resumed oral feeding.
abstract_id: PUBMED:33171167
Prevalence of advanced heart failure and use of palliative care in admitted patients: Findings from the EPICTER study. Introduction And Aim: Palliative care in patients with advanced heart failure is strongly recommended by Clinical Practice Guidelines. We aimed to calculate the prevalence of advanced heart failure in admitted patients, to describe their management, and to analyse the factors that influence their referral to specialised palliative care.
Patients And Methods: Cross-sectional, multicentre study that consecutively included patients admitted for heart failure in 74 Spanish hospitals. If they met criteria for advanced heart failure, their treatment, complications and procedures were recorded.
Results: A total of 3153 patients were included. Of them, 739 (23%) met criteria for advanced heart failure. They were more likely to be women, to be older, and to have a history of anaemia, chronic kidney disease, and cognitive impairment. For their management, furosemide infusions (30%) and vasodilators (21%) were used. Refractory symptoms were treated with opioids (47%) and benzodiazepines (44%). Palliative care was provided only in the last hours of life in 48% of them. A multidisciplinary approach involving palliative care specialists was sought in 15% of these patients. Treatment with furosemide infusions, an advanced New York Heart Association functional class, meeting advanced HF criteria, and the presence of cancer were associated with referral to specialised palliative care.
Conclusions: Almost one in four patients admitted with HF met criteria of advanced disease. They were older and had more comorbidities. Specialist palliative care services were involved in only a minority of patients, mainly those who were highly symptomatic or had cancer.
Answer: No, patients with advanced cognitive impairment admitted to hospitals with higher rates of feeding tube insertion do not have improved survival. Studies have shown that being admitted to hospitals with persistently high rates of feeding tube insertion was not associated with improved survival when compared with being admitted to hospitals with persistently low rates of feeding tube insertion. The adjusted odds ratios for one-month and six-month posthospitalization survival were not significantly different, indicating no survival benefit (PUBMED:22871537). Additionally, research has found that neither the insertion of percutaneous endoscopic gastrostomy (PEG) tubes nor the timing of insertion affects survival in nursing home residents with advanced cognitive impairment (PUBMED:23002947). Furthermore, a study on the risk factors and impact on survival of feeding tube placement in nursing home residents with severe cognitive impairment also concluded that there is no survival benefit compared with similar residents who are not tube fed (PUBMED:9040301). Overall, the evidence suggests that more aggressive patterns of feeding tube insertion in hospitals do not lead to improved survival outcomes for this patient population. |
Instruction: Does an alkaline environment prevent the development of bisphosphonate-related osteonecrosis of the jaw?
Abstracts:
abstract_id: PUBMED:24368141
Does an alkaline environment prevent the development of bisphosphonate-related osteonecrosis of the jaw? An experimental study in rats. Objective: To investigate the preventive effect of locally applied sodium bicarbonate on bisphosphonate-related osteonecrosis of the jaw (BRONJ).
Study Design: Thirty-six Sprague-Dawley rats were divided into 4 groups. Animals in group I received 0.1 mg/kg sterile saline 3 times per week for 8 weeks. Groups II, III, and IV received intraperitoneal zoledronate injection in the same manner with the same frequency and duration. The right first molar tooth was extracted in groups III and IV. One mL 8.4% sodium bicarbonate (SB) was applied to the extraction socket at the time of extraction in group IV. The effect of locally applied SB as an alkalizing agent was evaluated by histomorphometric analysis.
Results: BRONJ was observed in none of the animals in the control groups, 67% of the animals in the tooth extraction group, and none of the animals in the local SB application group (P < .01).
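As a worked illustration of how such a between-group difference in BRONJ incidence could be tested, the sketch below applies Fisher's exact test to a hypothetical 2 x 2 table (9 animals per group assumed, with counts chosen only to match the reported percentages; the study's own statistical procedure is not stated here):
    # Fisher's exact test comparing BRONJ incidence between two groups (hypothetical counts)
    import numpy as np
    from scipy.stats import fisher_exact

    # rows: extraction only vs extraction + sodium bicarbonate; columns: BRONJ yes / no
    table = np.array([[6, 3],    # about 67% of 9 animals affected
                      [0, 9]])   # no animals affected
    odds_ratio, p_value = fisher_exact(table)
    print(f"two-sided Fisher exact p = {p_value:.3f}")   # below 0.01 for these counts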
Conclusions: Administration of locally applied SB had positive effects on the prevention of BRONJ in animals, but further studies are required to verify the effectiveness of this form of treatment before its use in humans.
abstract_id: PUBMED:20728033
Serologic bone markers for predicting development of osteonecrosis of the jaw in patients receiving bisphosphonates. Purpose: Osteonecrosis of the jaw is a well-documented side effect of bisphosphonate (BP) use. Attempts have recently been made to predict the development of bisphosphonate-related osteonecrosis of the jaw (BRONJ). We prospectively investigated the predictive value of serum levels of C-terminal telopeptide of collagen I (CTX), bone-specific alkaline phosphatase, and parathyroid hormone for the development of BRONJ.
Patients And Methods: Data on the demographics, comorbidities, and BP treatment were collected from 78 patients scheduled for dentoalveolar surgery. Of the 78 patients, 51 had been treated with oral BPs and 27 had been treated with frequent intravenous infusions of BPs. Blood samples for CTX, bone-specific alkaline phosphatase, and parathyroid hormone measurements were taken preoperatively. Surgery was performed conservatively, and antibiotic medications were prescribed for 7 days.
Results: Of the 78 patients, 4 patients taking oral BPs (7.8%) and 14 receiving intravenous BPs (51.8%) developed BRONJ. A CTX level less than 150 pg/mL was significantly associated with BRONJ development, with an increased odds ratio of 5.268 (P = .004). The bone-specific alkaline phosphatase levels were significantly lower in patients taking oral BPs who developed BRONJ. The parathyroid hormone levels were similar in patients who did and did not develop BRONJ.
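To illustrate how an odds ratio with a confidence interval for a threshold marker such as CTX < 150 pg/mL can be derived from a 2 x 2 table, a short sketch with hypothetical cell counts (chosen only to be consistent with 18 BRONJ cases among 78 patients, not the study's data) is given below:
    # Odds ratio and 95% CI for BRONJ by a CTX threshold, from a hypothetical 2 x 2 table
    import numpy as np
    from statsmodels.stats.contingency_tables import Table2x2

    # rows: CTX < 150 pg/mL vs CTX >= 150 pg/mL; columns: BRONJ yes / no (hypothetical counts)
    table = Table2x2(np.array([[11, 14],
                               [7, 46]]))
    lo, hi = table.oddsratio_confint()
    print(f"OR = {table.oddsratio:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")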
Conclusion: The incidence of BRONJ after oral surgery involving bone is greater among patients receiving frequent, intravenous infusions of BPs than among patients taking oral BPs. Although the measurement of serum levels of CTX is not a definitive predictor of the development of BRONJ, it might have an important role in the risk assessment before oral surgery.
abstract_id: PUBMED:24783891
Bisphosphonates-associated osteonecrosis of the jaw: the role of gene-environment interaction. Bisphosphonates (BPN) are widely used in the clinic to treat metastatic cancer and osteoporosis, thus representing a problem not only for patients but also for workers involved in their preparation and administration. A similar exposure occurred years ago in match-making workers, who underwent bone alterations similar to those consequent to BPN exposure. Osteonecrosis of the jaw (ONJ) is a main adverse effect related to BPN administration, which is performed in millions of patients worldwide for osteoporosis and cancer therapy, thus representing an emerging problem in public health. In susceptible patients, BPN induce severe, progressive, and irreversible degeneration of facial bones, resulting in avascular ONJ often triggered by dental surgery. BPN-induced ONJ occurs in subjects depending on lifestyle factors of both environmental and endogenous origins. Exogenous risk factors include cigarette smoke, alcohol consumption, bacterial infections, and cyclosporine therapy. Endogenous risk factors include systemic diseases such as diabetes or hypertension and adverse polymorphisms of genes involved in metabolism (CYPs, MTHFR), thrombosis (Factor V, Prothrombin), and detoxification (MDR). Available molecular findings provide evidence that ONJ is related to risk factors associated with environmental mutagenesis and gene-environment interactions. These findings may be useful for identifying susceptible subjects by molecular analyses in order to prevent the occurrence of ONJ.
abstract_id: PUBMED:23317355
Experimental development of bisphosphonate-related osteonecrosis of the jaws in rodents. Osteonecrosis of the jaw (ONJ) following the use of bisphosphonates has become of increased interest in the scientific community, due in particular to its as-yet-unsolved pathogenesis. An experimental model of ONJ was induced in normal male rats [alendronate (ALN); 1 mg/Kg/day; n = 10] and matched controls (saline solution; n = 10). After 60 days of drug treatment, all animals were subjected to extractions of the left first lower molars and were euthanized at 3 and 28 days postsurgery. The following analyses were performed: (i) descriptive and quantitative (scores) histological evaluation, (ii) stereometry of distal sockets and (iii) biochemical measurement of C-telopeptide cross-linked collagen type I (CTX) and bone-specific alkaline phosphatase (BALP). The results showed that 28 days postsurgery the animals treated with ALN had areas of exposed and necrotic bone, associated with significant infection, especially in the interalveolar septum area and crestal regions, compared with controls. The levels of CTX, BALP and bone volume, as well as the degrees of inflammation and vascularization, were significantly reduced in these animals. Therefore, analysis of the data presented suggests that ALN therapy is associated with the development of osteonecrosis in the jaws of rodents after tooth extraction.
abstract_id: PUBMED:22330331
Serum N-telopeptide and bone-specific alkaline phosphatase levels in patients with osteonecrosis of the jaw receiving bisphosphonates for bone metastases. Purpose: Oversuppression of bone turnover can be a critical factor in the pathogenesis of osteonecrosis of the jaw (ONJ). We investigated N-telopeptide of type I collagen (NTX) and bone-specific alkaline phosphatase (BAP) as potential predictors of ONJ onset.
Patients And Methods: Patients with ONJ and available stored serum were identified retrospectively from the institutional databases. Four approximate points were examined: point of ONJ diagnosis and 12, 6, and 1 month before the diagnosis. NTX and BAP were measured using enzyme-linked immunosorbent assays and examined as possible predictors of ONJ.
Results: From March 1998 to September 2009, we identified 122 patients with ONJ. Of these, 56 (46%) had one or more serum samples available. Overall, 55 patients (98%) received bisphosphonates. Using the exact dates, no obvious patterns in either NTX or BAP were noted. Similarly, using the ordinal points, no evidence of suppression of NTX or BAP over time was seen. The consecutive median NTX values were 8.0 nmol/L (range 3.8 to 32.9) at 12 months before ONJ; 9.5 nmol/L (range 4.7 to 42.7) at 6 months; 9.5 nmol/L (range 4.5 to 24.6) at 1 month, and 10.4 nmol/L (range 4.4 to 32.5) at the ONJ diagnosis. The median BAP values were 18.0 U/L (range 7.0 to 74) at 12 months before ONJ; 18.0 U/L (range 4.0 to 134) at 6 months; 14.0 U/L (range 4.0 to 132) at 1 month, and 18.0 U/L (range 0.7 to 375) at the ONJ diagnosis. Only 2 patients (4%) had NTX and 17 (30%) had BAP below the normal range at the ONJ diagnosis.
Conclusions: In the present large retrospective study, no trends were seen in the NTX and BAP levels before the ONJ diagnosis.
abstract_id: PUBMED:27556684
Experimental osteonecrosis: development of a model in rodents administered alendronate. The main objective of this study was to induce bisphosphonate-related osteonecrosis of the jaws in a rodent model. Adult male Holtzman rats were assigned to one of two experimental groups to receive alendronate (AL; 1 mg/kg/week; n = 6) or saline solution (CTL; n = 6). After 60 days of drug therapy, all animals were subjected to first lower molar extraction, and 28 days later, animals were euthanized. All rats treated with alendronate developed osteonecrosis, presenting as ulcers and necrotic bone, associated with a significant infection process, especially at the inter-alveolar septum area and crestal regions. The degree of vascularization, the levels of C-telopeptide cross-linked collagen type I and bone-specific alkaline phosphatase, as well as the bone volume were significantly reduced in these animals. Furthermore, on radiographic analysis, animals treated with alendronate presented evident sclerosis of the lamina dura of the lower first molar alveolar socket associated with decreased radiographic density in this area. These findings indicate that the protocol developed in the present study opens new perspectives and could be a good starting model for future property design.
abstract_id: PUBMED:32331240
Calcium Phosphate Ceramics Can Prevent Bisphosphonate-Related Osteonecrosis of the Jaw. Bisphosphonate-associated osteonecrosis of the jaw (BRONJ), a post-surgical non-healing wound condition, is one of the most common side effects in patients treated with nitrogen-containing bisphosphonates. Its pathophysiology has been related to suppression of bone turnover, impaired soft tissue healing, and infection. Biphasic calcium phosphates (BCP) are used as a drug delivery vehicle and as a bone substitute in surgical wounds. Due to their capacity to adsorb zoledronate, it was hypothesized that these compounds might have a protective effect on the soft tissues in BRONJ wounds. To address this hypothesis, a reproducible in vivo model of BRONJ in Wistar rats was used. This model directly relates chronic bisphosphonate administration with the development of osteonecrosis of the jaw after tooth extraction. BCP granules were placed in the alveolus immediately after tooth extraction in the test group. The animals were evaluated through nuclear medicine, radiology, macroscopic observation, and histologic analysis. Encouragingly, calcium phosphate ceramics were able to limit zoledronate toxicity in vivo and to favor healing, which was evidenced by medical imaging (nuclear medicine and radiology), macroscopically, and through histology. The studied therapeutic option presented itself as a potential solution to prevent the development of maxillary osteonecrosis.
abstract_id: PUBMED:34997043
Osteonecrosis development by tooth extraction in zoledronate treated mice is inhibited by active vitamin D analogues, anti-inflammatory agents or antibiotics. Invasive dental treatment such as tooth extraction following treatment with strong anti-bone resorptive agents, including bisphosphonates and denosumab, reportedly promotes osteonecrosis of the jaw (ONJ) at the extraction site, but strategies to prevent ONJ remain unclear. Here we show that in mice, administration of either active vitamin D analogues, antibiotics or anti-inflammatory agents can prevent ONJ development induced by tooth extraction during treatment with the bisphosphonate zoledronate. Specifically, tooth extraction during treatment with zoledronate induced osteonecrosis in mice, but administration of either 1,25(OH)2D3 or ED71, both active vitamin D analogues, significantly antagonized osteonecrosis development, even under continuous zoledronate treatment. 1,25(OH)2D3 or ED71 administration also significantly inhibited osteocyte apoptosis induced by tooth extraction and bisphosphonate treatment. Administration of either active vitamin D analogue significantly inhibited elevation of serum inflammatory cytokine levels in mice in response to injection of lipopolysaccharide, an infection mimetic. Furthermore, administration of either anti-inflammatory or antibiotic reagents significantly blocked ONJ development following tooth extraction and zoledronate treatment. These findings suggest that administration of active vitamin D, anti-inflammatory agents or antibiotics could prevent ONJ development induced by tooth extraction in patients treated with zoledronate.
abstract_id: PUBMED:28387378
Elevation of pro-inflammatory cytokine levels following anti-resorptive drug treatment is required for osteonecrosis development in infectious osteomyelitis. Various conditions, including bacterial infection, can promote osteonecrosis. For example, following invasive dental therapy with anti-bone resorptive agents, some patients develop osteonecrosis in the jaw; however, pathological mechanisms underlying these outcomes remain unknown. Here, we show that administration of anti-resorptive agents such as the bisphosphonate alendronate accelerates osteonecrosis promoted by infectious osteomyelitis. Potent suppression of bone turnover by these types of agents is considered critical for osteonecrosis development; however, using mouse models we found that acceleration of bone turnover by teriparatide injection did not prevent osteonecrosis but rather converted osteoclast progenitors to macrophages expressing inflammatory cytokines, which were required for osteonecrosis development. In fact, we demonstrate that TNFα-, IL-1α/β- or IL-6-deficient mice as well as wild-type mice administered a TNFα-inhibitor were significantly resistant to development of osteonecrosis accompanying infectious osteomyelitis, even under bisphosphonate treatment. Our data provide new insight into mechanisms underlying osteonecrosis and suggest new ways to prevent it.
abstract_id: PUBMED:27109708
Effect of Clodronate on Antigenic Profile, Growth, and Differentiation of Osteoblast-Like Cells. Purpose: To evaluate the role of osteoblasts in bisphosphonate-related osteonecrosis of the jaw (BRONJ) by studying the effects of different concentrations of clodronate, a non-nitrogen-containing bisphosphonate, on osteoblast growth, differentiation, and antigenic profile.
Materials And Methods: Osteoblast-like cells (MG63) were incubated in culture medium with different doses of clodronate. Their proliferative capacity was determined with a spectrophotometric technique (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium assay). Flow cytometry was used to study the antigenic profile. Cell differentiation was evaluated by nodule formation and alkaline phosphatase (ALP) activity was measured by spectrophotometric assay.
Results: Clodronate had a significant stimulatory effect on osteoblast-like cell (MG63) proliferation (P < .05). A significant decrease in the expression of CD54, CD80, CD86, and HLA-DR membrane antigens versus controls was observed after 24 hours of treatment with the different clodronate doses assayed (P < .05). A significant decrease (P = .004) in ALP activity was found after 24 hours of treatment with the lowest dose (10(-9) mol/L), and a significant decrease in calcium deposition was found after 15 and 21 days of treatment (P < .05).
Conclusion: Clodronate increases the proliferation of MG63 osteoblast-like cells and decreases their differentiation capacity, generally at low doses, and modulates the expression of costimulatory molecules associated with immune function. Clodronate exerts its effect on osteoblasts by altering their physiology and impairing their repair capacity, which could be related to the development of BRONJ. However, further research is warranted to elucidate fully the mechanisms by which bisphosphonates can produce this disease.
Answer: An experimental study in rats investigated the preventive effect of locally applied sodium bicarbonate, an alkalizing agent, on bisphosphonate-related osteonecrosis of the jaw (BRONJ). The study found that local application of sodium bicarbonate to the extraction socket at the time of tooth extraction resulted in none of the animals in the treatment group developing BRONJ, compared to 67% of the animals in the tooth extraction group without sodium bicarbonate application. This suggests that an alkaline environment created by sodium bicarbonate may have positive effects on the prevention of BRONJ in animals (PUBMED:24368141). However, the study concludes that further research is required to verify the effectiveness of this treatment before it can be recommended for use in humans. |
Instruction: Does tobacco smoking influence the occurrence of hand eczema?
Abstracts:
abstract_id: PUBMED:19067707
Does tobacco smoking influence the occurrence of hand eczema? Background: Tobacco smoking is known to influence various inflammatory skin diseases and an association between tobacco smoking and hand eczema has been proposed in some studies.
Objectives: To examine a possible association between reported current tobacco smoking and the occurrence of hand eczema.
Subjects And Methods: Previously collected questionnaire data on the occurrence of hand eczema in three occupational cohorts and corresponding controls from the general population were studied. The questionnaires used included questions on 1-year prevalence of hand eczema and questions on smoking habits. For one occupational group, hairdressers and their controls, information on amount of smoking was obtained. Information on age, sex and history of atopy was also available.
Results: In total, answers regarding smoking and hand eczema were obtained from 13,452 individuals. Out of 3493 smokers, 437 (12.5%) reported hand eczema compared with 1294 out of 9959 nonsmokers (13.0%) (P = 0.51). With regard to the number of cigarettes smoked, 22.6% of the hairdressers smoking more than 10 cigarettes per day reported hand eczema compared with 17.4% of those smoking 0-10 cigarettes per day (P = 0.01). Corresponding figures for the controls were 14.5% and 11.7%, respectively (P = 0.06).
Conclusions: No clear association was found between 1-year prevalence of hand eczema and smoking. Heavy smoking, more than 10 cigarettes per day, may give a slightly increased risk of hand eczema. Further studies with information on the amount of tobacco consumption and on possible confounders are needed to evaluate smoking as a risk factor for hand eczema.
abstract_id: PUBMED:26140658
Tobacco smoking and hand eczema - is there an association? Background: Numerous risk factors have been suggested for hand eczema. This systematic review evaluates the association between tobacco smoking and hand eczema.
Objective: To review the literature systematically on the association between smoking and hand eczema.
Methods: The PubMed and EMBASE databases were searched up to 27 January 2015 for articles on the association between tobacco smoking and hand eczema, including human studies in English and German only. Experimental studies, studies on tobacco allergy, case reports, reviews and studies on second-hand smoking were excluded.
Results: Twenty articles were included. Among studies in occupational settings, three of seven found a statistically significant positive association between tobacco smoking and hand eczema prevalence rate, as did four of eight population-based studies. The association was stronger for studies in occupational settings than for population-based studies. No studies reported tobacco to be a clear protective factor for hand eczema. Two of five studies regarding severity found a positive association between smoking and hand eczema severity.
Conclusion: Overall, the data indicate that smoking may cause an increased frequency of hand eczema, particularly in high-risk occupations. However, data from studies controlling for other risk factors are conflicting, and few prospective studies are available. Studies controlling for other risk factors are needed, and information regarding the diagnosis of subclasses of hand eczema, as well as severity, may be important.
abstract_id: PUBMED:20716220
Tobacco smoking and hand eczema: a population-based study. Background: Tobacco smoking has been proposed to promote hand eczema.
Objectives: To examine the association between tobacco smoking and hand eczema and to investigate a possible dose-response relation.
Methods: A national environmental health survey was performed in 2007. A questionnaire was mailed to 43,905 individuals and responses were obtained from 25,851 (59%). Questions on 1-year prevalence of hand eczema and on previous and current smoking were included. Respondents were asked to report number of cigarettes per day and to provide information on history of atopy and frequency of hand exposure to water.
Results: In total, answers regarding smoking and hand eczema were obtained from 25,428 individuals. Of regular daily smokers, 10·0% reported hand eczema vs. 9·1% of nonsmokers (P = 0·0951). A history of atopy showed the strongest influence on the occurrence of hand eczema: prevalence proportion ratio (PPR) 3·46. The PPR for hand eczema among individuals smoking > 15 cigarettes per day was 1·25 and 1·40 in uni- and multivariate analysis, respectively. Age, history of atopy, sex and water exposure were found to be confounders but not effect modifiers. A dose-response relation between level of smoking and 1-year prevalence of hand eczema was revealed with a PPR of 1·05 (P < 0·001) for the continuous variable of smoking habits, indicating a significantly increased prevalence of hand eczema among individuals with higher consumption of tobacco.
Conclusions: An association between heavy smoking and hand eczema was confirmed. It is important to consider the level of exposure, as a dose-response relation was revealed, and to be aware of confounding factors.
abstract_id: PUBMED:24909920
Association between tobacco smoking and prognosis of occupational hand eczema: a prospective cohort study. Background: Hand eczema (HE) is a common occupational skin disease. Tobacco smoking is known to be associated with adverse cutaneous effects. However, its influence on the prognosis of occupational HE has not yet been studied.
Objectives: To evaluate relations between smoking status, severity and prognosis of occupational HE in patients taking part in an interdisciplinary tertiary individual prevention programme (TIP).
Methods: In a prospective, multicentre, cohort study 1608 patients with occupational HE taking part in a TIP were recruited and followed up for 3 years. The clinical and self-reported outcome data of smokers and nonsmokers were compared.
Results: Nonsmokers and smokers were equally distributed. During the TIP, the average self-reported daily cigarette consumption and the severity of HE decreased significantly (P < 0·01). However, at all time points HE was significantly more severe in smokers than in nonsmokers. This association was not dependent on the self-reported number of cigarettes smoked daily. Smokers had significantly more days of absence from work due to occupational HE than nonsmokers in the year before the TIP (P < 0·01) and in the following year (P = 0·02). After the TIP, smokers reported significantly more often that they had to give up their occupation (P = 0·02) than nonsmokers.
Conclusions: The severity of occupational HE is increased in smokers. Tobacco smoking is associated with a higher number of days of absence from work and with not staying in the workforce owing to occupational HE. Thus, smoking confers a worse prognosis and interferes with the outcome of prevention programmes.
abstract_id: PUBMED:27709631
Associations between lifestyle factors and hand eczema severity: are tobacco smoking, obesity and stress significantly linked to eczema severity? Background: It has been suggested that lifestyle factors such as smoking, overweight and stress may influence the prevalence and severity of hand eczema.
Objectives: To investigate the association between lifestyle factors and hand eczema severity in a cohort of patients with work-related hand eczema.
Methods: Individuals with work-related hand eczema notified in the period between June 2012 and November 2013 were included in this questionnaire-based cross-sectional study. Participants responded to a questionnaire including questions on lifestyle factors, as well as a photographic guide for assessment of severity of hand eczema and questions on quality of life.
Results: A total of 773 individuals (546 women and 227 men) responded to the questionnaire and were included in the study. A strong association was found between tobacco smoking and hand eczema severity (p = 0.003), whereas no significant association was found for body weight and stress. Other factors linked to severe eczema were male sex and older age (p = 0.04 and p = 0.01, respectively), and wet work (p = 0.08).
Conclusion: The data from the present study strongly support an association between smoking and hand eczema severity. However, owing to the cross-sectional design of the study, no conclusion on causation can be drawn.
abstract_id: PUBMED:25650777
Association between smoking and hand dermatitis--a systematic review and meta-analysis. Tobacco smoking is known to influence various inflammatory skin diseases. A systematic review with a meta-analysis was conducted to analyse a possible association between the lifestyle factor tobacco smoking and hand dermatitis. We performed a systematic review using the MEDLINE, Embase and Cochrane Central Register databases. Our search was limited to English and German language, human-subject studies published between January 1, 1980 and December 31, 2013. A total of 43 articles were identified from the initial search, and after taking into account exclusion criteria, only three studies remained investigating the risk factors for hand eczema in the general and in high-risk populations (e.g. bakers, hairdressers, dental technicians). The extracted data were pooled and analysed by standard statistical methods. The studies meeting inclusion criteria consisted of one cohort study and two cross-sectional studies based on a total of 4,113 subjects with hand dermatitis and 34,875 subjects without hand dermatitis. While one of the studies had reported a significant association between hand dermatitis and smoking, the meta-analysis did not confirm this finding (OR 0.99; 95% CI 0.88-1.11). However, heterogeneity across studies was high (I(2) = 72%). Our meta-analysis did not show tobacco smoking to be a risk factor for hand dermatitis. However, these results depend mainly on two large studies from one country. From present data, it cannot be excluded that smoking may influence the course of hand dermatitis. Even though smoking does not seem to be associated with hand dermatitis, it may still negatively influence the course of the disease.
abstract_id: PUBMED:35980390
Risk factors of hand eczema: A population-based study among 900 subjects. Background: Many risk factors such as atopic dermatitis (AD) have shown to associate with hand eczema (HE). However, studies concerning other atopic diseases, parental or longitudinal risk factors of HE are scarce.
Objectives: To examine the association between HE and atopic diseases, parental factors, environmental factors (keeping animals, exposure to moulds) and lifestyle factors (obesity, tobacco smoking, alcohol consumption and physical activity) at population level.
Methods: Subjects belonging to the Northern Finland Birth Cohort 1966 Study (NFBC1966) (n = 6830) answered a comprehensive health questionnaire. The data was completed with parental information.
Results: HE was reported in 900 (13.3%) individuals. All atopic diseases, parental allergy, female gender and obesity increased the risk of HE whereas physical activity decreased the risk of HE. A statistically significant association was not found between HE and tobacco smoking or alcohol consumption.
Conclusions: All atopic diseases, not only AD, seem to have influence on the presence of HE. In addition, parental and environmental factors associated with HE.
abstract_id: PUBMED:27081262
Smoking and Hand Dermatitis in the United States Adult Population. Background: Hand dermatitis is a common chronic relapsing skin disease resulting from a variety of causes, including endogenous predisposition and environmental exposures to irritants and allergens. Lifestyle factors such as smoking have been implicated in hand dermatitis.
Objective: To evaluate the association between tobacco exposure and hand dermatitis using the 2003~2004 National Health and Nutrition Examination Survey (NHANES) database.
Methods: Data were retrieved and analyzed from 1,301 participants, aged 20~59 years, from the 2003~2004 NHANES questionnaire study who completed health examination and blood tests. Diagnosis of hand dermatitis was based on standardized photographs of the dorsal and palmar views of the hands read by two dermatologists.
Results: There were 38 diagnosed cases of active hand dermatitis out of the 1,301 study participants (2.9%). Heavy smokers (>15 g tobacco daily) were 5.11 times more likely to have active hand dermatitis (odds ratio [OR], 5.11; 95% confidence interval [CI], 1.39~18.88; p=0.014). Those with serum cotinine >3 ng/ml were also more likely to have active hand dermatitis, compared with those with serum cotinine ≤3 ng/ml (OR, 2.50; 95% CI, 1.26~4.95; p=0.007). After adjusting for confounding factors such as age, atopic diathesis, occupational groups, and physical activity, the association between tobacco exposure and active hand dermatitis remained significant.
Conclusion: Smoking has a significant association with the presence of active hand dermatitis. It is important to consider smoking cessation as part of management of hand dermatitis.
abstract_id: PUBMED:19919628
The effect of tobacco smoking and alcohol consumption on the prevalence of self-reported hand eczema: a cross-sectional population-based study. Background: Hand eczema is a prevalent disorder that leads to high health care costs as well as a decreased quality of life. Important risk factors include atopic dermatitis, contact allergy and wet work whereas the role of null mutations in the filaggrin gene complex remains to be clarified. It has been debated whether life-style factors such as tobacco smoking and alcohol consumption are associated with hand eczema.
Objectives: The current study aimed to investigate whether self-reported hand eczema was associated with smoking and alcohol consumption in the general population.
Methods: Between June 2006 and May 2008, a cross-sectional study was performed in the general population in Copenhagen, the capital of Denmark. A random sample of 7931 subjects aged 18-69 years old was invited to participate in a general health examination including a questionnaire; 3471 (44%) participated. Data were analysed with logistic regression analyses and associations were expressed as odds ratios (ORs) with 95% confidence intervals (CI).
Results: The prevalence of hand eczema was higher among previous smokers (OR = 1.13; CI = 0.90-1.40), current light smokers (OR = 1.51; CI = 1.14-2.02) and current heavy smokers (OR = 1.38; CI = 0.99-1.92) compared with never-smokers.
Conclusions: Tobacco smoking was positively associated with hand eczema among adults from the general population in Denmark. Apparently, current light smokers (< 15 g daily) had a higher prevalence of hand eczema than current heavy smokers (> 15 g daily) but this needs to be reconfirmed. Alcohol consumption was not associated with hand eczema.
abstract_id: PUBMED:25405274
Smoking is associated with combined allergic and irritant hand eczema, contact allergies and hyperhidrosis. Background: The pathogenesis of chronic hand eczema (CHE) is multifactorial. Certain life-style factors have been suggested as potential triggers of the disease, among them tobacco smoking. However, the few studies addressing the influence of smoking on CHE have been more or less limited to occupational hand eczema.
Objective: The aim of our study was to investigate the association of smoking status with clinical features like contact sensitization, irritant exposure, atopy and hyperhidrosis in a thoroughly characterized cohort of CHE patients.
Methods: A total of 153 patients with CHE underwent clinical examination and an interview about their history and course of disease.
Results: Significantly more smokers suffered from combined allergic and irritant hand eczema (25/86) than non-smokers (7/67; P < 0.05). Nickel sensitization and palmar hyperhidrosis tended to be more frequent among smokers than among non-smokers (nickel: 17/86 smokers, 6/67 non-smokers; hyperhidrosis: 21/84 smokers, 8/64 non-smokers). Among the smoking CHE patients with palmar hyperhidrosis, a significant proportion had relevant contact allergies (16/21, P < 0.05).
Conclusion: Our results reveal a predominance of smokers among patients with combined allergic and irritant hand eczema as well as with hyperhidrosis and relevant contact sensitizations.
Answer: The influence of tobacco smoking on the occurrence of hand eczema has been investigated in several studies, with varying results. Some studies have found a positive association between tobacco smoking and the prevalence of hand eczema, particularly in high-risk occupations (PUBMED:26140658). A population-based study revealed a dose-response relation between the level of smoking and the 1-year prevalence of hand eczema, indicating a significantly increased prevalence among individuals with higher consumption of tobacco (PUBMED:20716220). Another study found that heavy smoking, defined as more than 10 cigarettes per day, may give a slightly increased risk of hand eczema (PUBMED:19067707).
Furthermore, the severity of occupational hand eczema has been reported to be increased in smokers, with smokers experiencing more days of absence from work due to hand eczema and reporting that they had to give up their occupation more often than nonsmokers (PUBMED:24909920). A strong association between tobacco smoking and hand eczema severity was also supported by another study (PUBMED:27709631).
However, a systematic review and meta-analysis did not confirm tobacco smoking as a risk factor for hand dermatitis, although it suggested that smoking may negatively influence the course of the disease (PUBMED:25650777). Another study using the NHANES database found a significant association between heavy smoking and active hand dermatitis, even after adjusting for confounding factors (PUBMED:27081262).
In contrast, a population-based study among 900 subjects did not find a statistically significant association between tobacco smoking and hand eczema (PUBMED:35980390), and another cross-sectional study reported that alcohol consumption was not associated with hand eczema, while tobacco smoking was positively associated with the condition (PUBMED:19919628).
Overall, while there is evidence to suggest that tobacco smoking may be associated with an increased frequency and severity of hand eczema, particularly in certain populations and with heavier smoking, the data are not entirely consistent, and further research controlling for other risk factors is needed to clarify the relationship (PUBMED:26140658; PUBMED:20716220; PUBMED:19067707; PUBMED:24909920; PUBMED:27709631; PUBMED:25650777; PUBMED:27081262; PUBMED:35980390; PUBMED:19919628). |
Instruction: Pediatric Liver Transplantation Across the ABO Blood Group Barrier: Is It an Obstacle in the Modern Era?
Abstracts:
abstract_id: PUBMED:26247556
The Role of Liver Sinusoidal Endothelial Cells in Induction of Carbohydrate Reactive B Cells Tolerance Through the Programmed Death 1/Programmed Death Ligand 1 Pathway. Background: A spontaneous tolerance of B cells responding to blood group antigens frequently develops in ABO-incompatible pediatric liver transplantation (LT). Liver sinusoidal endothelial cells (LSECs), which exclusively express blood group antigens in the liver, possess a capacity to induce alloantigen-specific tolerance. In this study, we elucidated the role of LSECs in the tolerance induction of blood group antigen-reactive B cells after ABO-incompatible LT using mice that lack galactose-α(1,3)galactose (Gal) epitopes resembling blood group carbohydrate antigens.
Methods: Using adoptive transfer of LSECs from wild-type (WT) C57BL/6J mice to congenic α1,3-galactosyltransferase gene-knockout (GalT) mice, we established orthotopic WT → GalT LSEC chimeric mice. Anti-Gal antibody (Ab) production was evaluated after immunization of the WT → GalT LSEC chimeric mice with Gal-expressing rabbit red blood cells (RBCs).
Results: Adoptive transfer of LSECs isolated from WT mice via the portal vein resulted in persistent engraftment of Gal-expressing LSECs in congenic GalT mouse livers. Only when the GalT mice were splenectomized before LSEC inoculation did the WT → GalT LSEC chimeras lose the ability to produce anti-Gal Abs. The administration of blocking monoclonal Abs (mAbs) against programmed death ligand 1 to the splenectomized WT → GalT LSEC chimeras resulted in the recovery of anti-Gal Ab production.
Conclusions: These findings suggest that LSECs take a part in tolerization of immature but not mature B cells specifically for Gal. Furthermore, the programmed death 1/programmed death ligand 1 pathway likely plays a crucial role in the mechanisms underlying spontaneous tolerization of B cells responding to ABO-blood group antigens in LT.
abstract_id: PUBMED:3132966
Presence of an inhibitor of glycosyltransferase activity in a patient following an ABO incompatible liver transplant. The contribution of the liver to plasma ABO glycosyltransferase activity has been studied in a group O individual transplanted with a liver from a group B donor. The B transferase activity present in the post-transplantation plasma was negligible. However, a potent B transferase inhibitor, absent from the pretransplantation plasma, was present after transplantation. The inhibitor was present in the excluded fraction following Sephadex G-25 gel filtration, but was retained by a protein A-Sepharose column, suggesting that it was an IgG antibody. This inhibitor was also effective in reducing A transferase activity.
abstract_id: PUBMED:22168139
Thrombocytopenia after pig-to-baboon liver xenotransplantation: where do platelets go? Background: In baboons with orthotopic pig liver xenografts, profound thrombocytopenia was observed within 1 h after reperfusion. Assessment of the fate of platelets may shed light on the underlying mechanisms leading to thrombocytopenia and may allow preventive therapies to be introduced.
Methods: Platelet-white blood cell (WBC) aggregation was studied in two baboons that received orthotopic liver xenografts from α1,3-galactosyltransferase gene-knockout pigs transgenic for human CD46 (GTKO/CD46). Percentages of CD42a-positive platelet aggregates with WBC-subtypes were determined by flow cytometry, and absolute numbers (per mm(3) ) were calculated. Platelet aggregates in the liver xenografts were identified by immunofluorescence and electron microscopy. Mean platelet volume (MPV) was determined before and after transplantation.
Results: After pig liver reperfusion, profound thrombocytopenia was associated with aggregation of platelets with WBC-subtypes. Increasing aggregation of platelets with WBC-subtypes was detected throughout the post-transplant period until the recipient was euthanized. Significant negative correlation was found between platelet counts in the blood and aggregation of platelets with monocytes (P < 0.01) and neutrophils (P < 0.01), but not with lymphocytes. MPV remained within the normal range. Two hours after reperfusion, platelet and fibrin deposition were already detected in the liver xenografts by immunofluorescence and by electron microscopy.
Conclusions: Following liver xenotransplantation, the early disappearance of platelets from the circulation was at least in part due to their aggregation with circulating WBC, which may augment their deposition in the liver xenograft and native lungs. Prevention of platelet aggregation with monocytes and neutrophils is likely beneficial in reducing their subsequent sequestration in the liver xenograft and native organs.
abstract_id: PUBMED:25130043
Increased transfusion-free survival following auxiliary pig liver xenotransplantation. Background: Pig to baboon liver xenotransplantation typically results in severe thrombocytopenia and coagulation disturbances, culminating in death from hemorrhage within 9 days, in spite of continuous transfusions. We studied the contribution of anticoagulant production and clotting pathway deficiencies to fatal bleeding in baboon recipients of porcine livers.
Methods: By transplanting liver xenografts from α1,3-galactosyltransferase gene-knockout (GalT-KO) miniature swine donors into baboons as auxiliary organs, leaving the native liver in place, we provided the full spectrum of primate clotting factors and allowed in vivo mixing of porcine and primate coagulation systems.
Results: Recipients of auxiliary liver xenografts develop severe thrombocytopenia, comparable to recipients of conventional orthotopic liver xenografts and consistent with hepatic xenograft sequestration. However, baboons with both pig and native livers do not exhibit clinical signs of bleeding and maintain stable blood counts without transfusion for up to 8 consecutive days post-transplantation. Instead, recipients of auxiliary liver xenografts undergo graft failure or die of sepsis, associated with thrombotic microangiopathy in the xenograft, but not the native liver.
Conclusion: Our data indicate that massive hemorrhage in the setting of liver xenotransplantation might be avoided by supplementation with primate clotting components. However, coagulation competent hepatic xenograft recipients may be predisposed to graft loss related to small vessel thrombosis and ischemic necrosis.
abstract_id: PUBMED:20041862
Impact of thrombocytopenia on survival of baboons with genetically modified pig liver transplants: clinical relevance. A lack of deceased human donor livers leads to a significant mortality in patients with acute-on-chronic or acute (fulminant) liver failure or with primary nonfunction of an allograft. Genetically engineered pigs could provide livers that might bridge the patient to allotransplantation. Orthotopic liver transplantation in baboons using livers from alpha1,3-galactosyltransferase gene-knockout (GTKO) pigs (n = 2) or from GTKO pigs transgenic for CD46 (n = 8) were carried out with a clinically acceptable immunosuppressive regimen. Six of 10 baboons survived for 4-7 days. In all cases, liver function was adequate, as evidenced by tests of detoxification, protein synthesis, complement activity and coagulation parameters. The major problem that prevented more prolonged survival beyond 7 days was a profound thrombocytopenia that developed within 1 h after reperfusion, ultimately resulting in spontaneous hemorrhage at various sites. We postulate that this is associated with the expression of tissue factor on platelets after contact with pig endothelium, resulting in platelet and platelet-peripheral blood mononuclear cell(s) aggregation and deposition of aggregates in the liver graft, though we were unable to confirm this conclusively. If this problem can be resolved, we would anticipate that a pig liver could provide a period during which a patient in liver failure could be successfully bridged to allotransplantation.
abstract_id: PUBMED:31435990
Immortalization of porcine hepatocytes with a α-1,3-galactosyltransferase knockout background. Background: In vivo pig liver xenotransplantation preclinical trials appear to have poor efficiency compared to heart or kidney xenotransplantation because of xenogeneic rejection, including coagulopathy, and particularly thrombocytopenia. In contrast, ex vivo pig liver (wild type) perfusion systems have been proven to be effective in "bridging" liver failure patients until subsequent liver allotransplantation, and transgenic (human CD55/CD59) modifications have even prolonged the duration of pig liver perfusion. Despite the fact that hepatocyte cell lines have also been proposed for extracorporeal blood circulation in conditions of acute liver failure, porcine hepatocyte cell lines, and the GalT-KO background in particular, have not been developed and applied in this field. Herein, we established immortalized wild-type and GalT-KO porcine hepatocyte cell lines, which can be used for artificial liver support systems, cell transplantation, and even in vitro studies of xenotransplantation.
Methods: Primary hepatocytes extracted from GalT-KO and wild-type pigs were transfected with SV40 LT lentivirus to establish immortalized GalT-KO porcine hepatocytes (GalT-KO-hep) and wild-type porcine hepatocytes (WT). Hepatocyte biomarkers and function-related genes were assessed by immunofluorescence, periodic acid-Schiff staining, indocyanine green (ICG) uptake, biochemical analysis, ELISA, and RT-PCR. Furthermore, the tumorigenicity of immortalized cells was detected. In addition, a complement-dependent cytotoxicity (CDC) assay was performed with GalT-KO-hep and WT cells. Cell death and viability rates were assessed by flow cytometry and CCK-8 assay.
Results: GalT-KO and wild-type porcine hepatocytes were successfully immortalized and maintained the characteristics of primary porcine hepatocytes, including albumin secretion, ICG uptake, urea and glycogen production, and expression of hepatocyte marker proteins and specific metabolic enzymes. GalT-KO-hep and WT cells were confirmed as having no tumorigenicity. In addition, GalT-KO-hep cells showed less apoptosis and more viability than WT cells when exposed to complement and xenogeneic serum.
Conclusions: Two types of immortalized cell lines of porcine hepatocytes with GalT-KO and wild-type backgrounds were successfully established. GalT-KO-hep cells exhibited higher viability and injury resistance against a xenogeneic immune response.
abstract_id: PUBMED:28714241
Cytokine profiles in Tibetan macaques following α-1,3-galactosyltransferase-knockout pig liver xenotransplantation. Background: Pig-to-nonhuman primate orthotopic liver xenotransplantation is often accompanied by thrombocytopenia and coagulation disorders. Furthermore, the release of cytokines can trigger cascade reactions of coagulation and immune attacks within transplant recipients. To better elucidate the process of inflammation in liver xenograft recipients, we utilized a modified heterotopic auxiliary liver xenotransplantation model for xeno-immunological research. We studied the cytokine profiles and the relationship between cytokine levels and xenograft function after liver xenotransplantation.
Methods: Appropriate donor and recipient matches were screened using complement-dependent cytotoxicity assays. Donor liver grafts from α1,3-galactosyltransferase gene-knockout (GTKO) pigs or GTKO pigs additionally transgenic for human CD47 (GTKO/CD47) were transplanted into Tibetan macaques via two different heterotopic auxiliary liver xenotransplantation procedures. The cytokine profiles, hepatic function, and coagulation parameters were monitored during the clinical course of xenotransplantation.
Results: Xenograft blood flow was stable in recipients after heterotopic auxiliary transplantation. A Doppler examination indicated that the blood flow speed was faster in the hepatic artery (HA) and hepatic vein (HV) of xenografts subjected to the modified Sur II (HA-abdominal aorta+HV-inferior vena cava) procedure than in those subjected to our previously reported Sur I (HA-splenic artery+HV-left renal vein) procedure. Tibetan macaques receiving liver xenografts did not exhibit severe coagulation disorders or immune rejection. Although the recipients did suffer from a rapid loss of platelets, this loss was mild. In blood samples dynamically collected after xenotransplantation (post-Tx), dramatic increases in the levels of monocyte chemoattractant protein 1, interleukin (IL)-8, granulocyte-macrophage colony-stimulating factor, IL-6, and interferon gamma-induced protein 10 were observed at 1 hour post-Tx, even under immunosuppression. We further confirmed that the elevation in individual cytokine levels was correlated with the onset of graft damage. Finally, the release of cytokines might contribute to leukocyte infiltration in the xenografts.
Conclusion: Here, we established a modified auxiliary liver xenotransplantation model resulting in near-normal hepatic function. Inflammatory cytokines might contribute to early damage in liver xenografts. Controlling the systemic inflammatory response of recipients might prevent early post-Tx graft dysfunction.
abstract_id: PUBMED:23078060
Immunobiology of liver xenotransplantation. Pigs are currently the preferred species for future organ xenotransplantation. With advances in the development of genetically modified pigs, clinical xenotransplantation is becoming closer to reality. In preclinical studies (pig-to-nonhuman primate), the xenotransplantation of livers from pigs transgenic for human CD55 or from α1,3-galactosyltransferase gene-knockout pigs+/- transgenic for human CD46, is associated with survival of approximately 7-9 days. Although hepatic function, including coagulation, has proved to be satisfactory, the immediate development of thrombocytopenia is very limiting for pig liver xenotransplantation even as a 'bridge' to allotransplantation. Current studies are directed to understand the immunobiology of platelet activation, aggregation and phagocytosis, in particular the interaction between platelets and liver sinusoidal endothelial cells, hepatocytes and Kupffer cells, toward identifying interventions that may enable clinical application.
abstract_id: PUBMED:22642260
Potential factors influencing the development of thrombocytopenia and consumptive coagulopathy after genetically modified pig liver xenotransplantation. Upregulation of tissue factor (TF) expression on activated donor endothelial cells (ECs) triggered by the immune response (IR) has been considered the main initiator of consumptive coagulopathy (CC). In this study, we aimed to identify potential factors in the development of thrombocytopenia and CC after genetically engineered pig liver transplantation in baboons. Baboons received a liver from either an α1,3-galactosyltransferase gene-knockout (GTKO) pig (n = 1) or a GTKO pig transgenic for CD46 (n = 5) with immunosuppressive therapy. TF exposure on recipient platelets and peripheral blood mononuclear cell (PBMCs), activation of donor ECs, platelet and EC microparticles, and the IR were monitored. Profound thrombocytopenia and thrombin formation occurred within minutes of liver reperfusion. Within 2 h, circulating platelets and PBMCs expressed functional TF, with evidence of aggregation in the graft. Porcine ECs were negative for expression of P- and E-selectin, CD106, and TF. The measurable IR was minimal, and the severity and rapidity of thrombocytopenia were not alleviated by prior manipulation of the IR. We suggest that the development of thrombocytopenia/CC may be associated with TF exposure on recipient platelets and PBMCs (but possibly not with activation of donor ECs). Recipient TF appears to initiate thrombocytopenia/CC by a mechanism that may be independent of the IR.
abstract_id: PUBMED:18383106
Liver xenografts for the treatment of acute liver failure: clinical and experimental experience and remaining immunologic barriers. A critical element restricting the application of liver transplantation is the shortage of human deceased donor organs. Xenotransplantation using pig organs might be a solution to this shortage. Although the problems that still require resolution include the immunologic barrier, the potential risk of transferring infectious agents with the transplanted organ, and uncertainty about whether the transplanted organ will function satisfactorily in the human environment, recent progress in the genetic manipulation of pigs has led to the prospect that clinical xenografting, at least as a bridge to allotransplantation, may be possible in the foreseeable future. Experience with clinical auxiliary and orthotopic liver xenotransplantation and experimental liver xenotransplantation in nonhuman primate and other large animal models is reviewed, and the remaining immunologic problems are discussed. Evidence suggests that, in patients with hepatic failure, the pig liver may be less susceptible to antibody-mediated injury than other pig organs, such as the heart or kidney. Pig Kupffer cells and other macrophages will recognize and phagocytose primate red blood cells, but this problem should be overcome by pretransplant depletion of macrophages from the organ-source pig. From the evidence currently available, it does not seem unduly optimistic to anticipate that a liver from an alpha1,3-galactosyltransferase gene-knockout pig would survive at least long enough to function as a successful bridge to allotransplantation.
Answer: Pediatric liver transplantation across the ABO blood group barrier is not necessarily an obstacle in the modern era, as there is evidence that a spontaneous tolerance of B cells responding to blood group antigens frequently develops in ABO-incompatible pediatric liver transplantation (LT). Liver sinusoidal endothelial cells (LSECs), which express blood group antigens in the liver, have the capacity to induce alloantigen-specific tolerance. This tolerance induction is mediated through the programmed death 1/programmed death ligand 1 (PD-1/PD-L1) pathway, which plays a crucial role in the mechanisms underlying spontaneous tolerization of B cells responding to ABO-blood group antigens in LT (PUBMED:26247556).
Furthermore, the presence of an inhibitor of glycosyltransferase activity has been observed in a patient following an ABO-incompatible liver transplant, suggesting that the body can adapt to the presence of ABO-incompatible liver grafts (PUBMED:3132966).
In addition, advancements in genetic engineering of pigs, such as the development of α1,3-galactosyltransferase gene-knockout (GalT-KO) pigs, have provided valuable insights into the immunobiology of liver xenotransplantation, which can be relevant to overcoming ABO barriers in human transplantation (PUBMED:23078060, PUBMED:22642260, PUBMED:18383106).
However, it is important to note that while ABO incompatibility may not be an insurmountable obstacle, there are still significant challenges associated with liver transplantation, such as thrombocytopenia and coagulation disturbances, which can affect the survival and success of the transplant (PUBMED:22168139, PUBMED:25130043, PUBMED:20041862, PUBMED:31435990, PUBMED:28714241).
In summary, with the current understanding of immune tolerance mechanisms, the presence of natural inhibitors, and the advancements in genetic engineering, ABO incompatibility is becoming less of an obstacle in pediatric liver transplantation. However, careful management and monitoring are still required to address other transplantation-related challenges. |
Instruction: Does obesity have detrimental effects on IVF treatment outcomes?
Abstracts:
abstract_id: PUBMED:26285703
Does obesity have detrimental effects on IVF treatment outcomes? Background: The aim of this study was to investigate the influence of body mass index (BMI) on the in vitro fertilization (IVF) treatment outcomes in a cohort of women undergoing their first IVF, using an intracytoplasmic sperm injection (ICSI).
Methods: This retrospective cohort study included 298 cycles from women younger than 38 years old undergoing IVF-ICSI at a university infertility clinic. The treatment cycles were divided into three groups according to the BMI of the women involved: normal weight (18.5 ≤ BMI < 25 kg/m(2), 164 cycles), overweight (25 ≤ BMI < 30 kg/m(2), 70 cycles), and obese (BMI ≥ 30 kg/m(2), 64 cycles). The underweight women (BMI < 18.5 kg/m(2)) were not included in the analysis due to small sample size (n = 22). The patient characteristics and IVF-ICSI treatment outcomes were compared between the BMI groups.
Results: The total gonadotropin dose (p <0.001) and duration of stimulation (p = 0.008) were significantly higher in the obese group when compared to the normal BMI group. There were no significant differences across the BMI categories for the other IVF-ICSI cycle outcomes measured, including the number of retrieved oocytes, mature oocytes, embryos suitable for transfer, proportion of oocytes fertilized, and cycle cancellation rates (p >0.05 for each). Additionally, clinical pregnancy, spontaneous abortion, and the ongoing pregnancy rates per transfer were found to be comparable between the normal weight, overweight, and obese women (p >0.05 for each).
Conclusion: Obese women might require a significantly higher dose of gonadotropins and longer stimulation durations, without greatly affecting the pregnancy outcomes.
abstract_id: PUBMED:37553621
Effects of body mass index on IVF outcomes in different age groups. Background: Herein, we aimed to analyse the effects of body mass index (BMI) on the treatment outcomes of in vitro fertilisation (IVF) in a cohort of women undergoing their first IVF cycle.
Methods: A total of 2311 cycles from 986 women undergoing their first IVF/intracytoplasmic sperm injection cycle with fresh/frozen embryo transfer between January 2018 and December 2021 at the Center of Reproductive Medicine, Shuguang Hospital affiliated to Shanghai University of Traditional Chinese Medicine, were considered in this retrospective cohort study. First, the included patients were classified into four groups based on their BMI: underweight (BMI < 18.5 kg/m2, 78 patients), normal weight (18.5 ≤ BMI < 24 kg/m2, 721 patients), overweight (24 ≤ BMI < 28 kg/m2, 147 patients), and obese (BMI ≥ 28 kg/m2, 40 patients). The IVF outcomes included the Gn medication days; Gn dosage; number of retrieved oocytes, mature oocytes, fertilized oocytes, cleavages, and available embryos and high-quality embryos; implantation rate; clinical pregnancy rate and live birth rate. Next, all the obtained data were segregated into three different subgroups according to the patient age: < 30 years, 30-38 years and > 38 years; the IVF pregnancy outcomes were compared among the groups.
Results: Compared with the other three groups, the underweight group had a higher number of fertilized oocytes, cleavages and available embryos, required fewer Gn medication days, and required a lower Gn dosage. There was no difference in the number of retrieved oocytes and mature oocytes among the groups. Moreover, compared with the women aged 30-38 years in the overweight group, those in the normal weight group had a significantly higher implantation rate, clinical pregnancy rate and live birth rate (p = 0.013, OR 1.75; p = 0.033, OR 1.735; p = 0.020, OR 1.252, respectively). The clinical pregnancy rate was also significantly higher in those aged 30-38 years in the normal weight group than in the obese group (p = 0.036, OR 4.236).
Conclusions: Although the BMI can greatly affect the pregnancy outcomes of women aged 30-38 years, it has almost no effects on the outcomes of younger or older women.
abstract_id: PUBMED:26980590
The inflammatory markers in polycystic ovary syndrome: association with obesity and IVF outcomes. Objective: To investigate the inflammatory markers in polycystic ovary syndrome (PCOS) and associations of these markers with obesity and in vitro fertilization (IVF) outcomes.
Methods: A total of 292 women underwent IVF procedure either with PCOS (n = 146) or without PCOS (n = 146, age, and body mass index (BMI) matched controls) were included in the study. All patients were classified according to BMI levels (normal weight: NW, BMI <25 kg/m(2) and obese: OB, BMI ≥25 kg/m(2)). The inflammatory markers were leukocyte count, neutrophil/lymphocyte ratio (NLR), platelet/lymphocyte ratio (PLR), mean platelet volume (MPV).
Results: The BMI of PCOS patients was positively correlated with leukocyte count, neutrophil count, lymphocyte count and MPV (p < 0.05), but negatively correlated with NLR and PLR (p < 0.05). Both NLR and PLR increased significantly in PCOS (p < 0.001). PLR increased significantly in NW-PCOS compared with NW-controls and OB-PCOS. MPV values increased only in OB-PCOS subjects. The logistic regression analyses showed that MPV was an independent variable affecting the clinical pregnancy rate (CPR) in PCOS (p = 0.000; OR 0.1; CI 0.06-0.2).
Conclusions: NLR and PLR were significantly increased in all PCOS subjects compared with the BMI-matched controls. Although PLR was decreased by adiposity, PLR increased in NW-PCOS. These results support the hypothesis that PCOS is a chronic inflammatory process independent of obesity. MPV levels were independently associated with CPR in PCOS. Further prospective studies concerning inflammation and IVF outcomes in PCOS are needed.
abstract_id: PUBMED:35042510
Female dietary patterns and outcomes of in vitro fertilization (IVF): a systematic literature review. Background: Infertility affects up to 15% of couples. In vitro fertilization (IVF) treatment has modest success rates and some factors associated with infertility and poor treatment outcomes are not modifiable. Several studies have assessed the association between female dietary patterns, a modifiable factor, and IVF outcomes with conflicting results. We performed a systematic literature review to identify female dietary patterns associated with IVF outcomes, evaluate the body of evidence for potential sources of heterogeneity and methodological challenges, and offer suggestions to minimize heterogeneity and bias in future studies.
Methods: We performed systematic literature searches in EMBASE, PubMed, CINAHL, and Cochrane Central Register of Controlled Trials for studies with a publication date up to March 2020. We excluded studies limited to women who were overweight or diagnosed with PCOS. We included studies that evaluated the outcome of pregnancy or live birth. We conducted an initial bias assessment using the SIGN 50 Methodology Checklist 3.
Results: We reviewed 3280 titles and/or titles and abstracts. Seven prospective cohort studies investigating nine dietary patterns fit the inclusion criteria. Higher adherence to the Mediterranean diet, a 'profertility' diet, or a Dutch 'preconception' diet was associated with pregnancy or live birth after IVF treatment in at least one study. However, causation cannot be assumed. Studies were potentially hindered by methodological challenges (misclassification of the exposure, left truncation, and lack of comprehensive control for confounding) with an associated risk of bias. Studies of the Mediterranean diet were highly heterogeneous in findings, study population, and methods. The remaining dietary patterns have only been examined in single and relatively small studies.
Conclusions: Future studies with rigorous and more uniform methodologies are needed to assess the association between female dietary patterns and IVF outcomes. At the clinical level, findings from this review do not support recommending any single dietary pattern for the purpose of improving pregnancy or live birth rates in women undergoing IVF treatment.
abstract_id: PUBMED:25664123
Effect of overweight/obesity on IVF-ET outcomes in Chinese patients with polycystic ovary syndrome. The purpose of this study was to investigate the impact of body mass index (BMI) on the outcomes of IVF/ICSI treatment cycles in Chinese patients with polycystic ovary syndrome (PCOS). Women with PCOS (n = 128) and tubal factor (n = 128) underwent a conventional long GnRH agonist suppressive protocol. Women with PCOS had significantly more oocytes retrieved (P < 0.05) and available embryos (P < 0.05) compared with patients with tubal infertility. No significant differences were observed in clinical pregnancy rate, miscarriage rate and live birth rate between the two groups. Patients were further divided into two subgroups. In total, 49 patients in the PCOS group and 19 patients in the tubal factor group were overweight or obese (BMI ≥ 24 kg/m2). Lean women (BMI < 24 kg/m2) with PCOS showed a higher clinical pregnancy rate (P < 0.05). The live birth rate and miscarriage rate were also higher in lean PCOS women, but the differences were not significant. Similar clinical outcomes of IVF/ICSI success were achieved between the two subgroups of tubal factor patients. In conclusion, lean PCOS patients obtained a higher clinical pregnancy rate compared with overweight/obese PCOS patients in Chinese populations.
abstract_id: PUBMED:29982477
Severe maternal morbidity in women with high BMI in IVF and unassisted singleton pregnancies. Study Question: Is there a synergistic risk of severe maternal morbidity (SMM) in overweight/obese women who conceived by IVF compared to normal-weight women without IVF?
Summary Answer: SMM was more common in IVF pregnancies, and among overweight/obese women, but we did not detect a synergistic effect of both factors.
What Is Known Already: While much is known about the impact of overweight and obesity on success rates after IVF, there is less data on maternal health outcomes.
Study Design, Size, Duration: This is a population-based cohort study of 114 409 singleton pregnancies with conceptions dating from 11 January 2013 until 10 January 2014 in Ontario, Canada. The data source was the Canadian Assisted Reproductive Technologies Register (CARTR Plus) linked with the Ontario birth registry (BORN Information System).
Participants/materials, Setting, Methods: We included women who delivered at ≥20 weeks gestation, and excluded those younger than 18 years or with twin pregnancies. Women were classified according to the mode of conception (IVF or unassisted) and according to pre-pregnancy BMI (high BMI (≥25 kg/m2) or low-normal BMI (<25 kg/m2)). The main outcome was SMM, a composite of serious complications using International Classification of Diseases, 10th revision (ICD-10) codes. Secondary outcomes were gestational hypertension, pre-eclampsia, gestational diabetes and cesarean delivery. Adjusted risk ratios (aRR) with 95% CI were estimated using log binomial regression, adjusted for maternal age, parity, education, income and baseline maternal comorbidity.
Main Results And The Role Of Chance: Of 114 409 pregnancies, 1596 (1.4%) were IVF conceptions. Overall, 41.2% of the sample had high BMI, which was similar in IVF and non-IVF groups. We observed 674 SMM events (rate: 5.9 per 1000 deliveries). IVF was associated with an increased risk of SMM (rate 11.3/1000; aRR 1.89, 95% CI: 1.06-3.39). High BMI was modestly associated with SMM (rate 7.0/1000; aRR 1.23, 95% CI: 1.04-1.45). There was no interaction between the two factors (P = 0.22). We noted supra-additive effects of high BMI and IVF on the risk of pre-eclampsia and gestational diabetes, but not gestational hypertension or cesarean delivery.
Limitations, Reasons For Caution: We were unable to assess outcomes according to reason for treatment. Type II error (beta ~25%) may affect our results.
Wider Implications Of The Findings: Our results support previous data indicating a greater risk of SMM in IVF pregnancies, and among women with high BMI. However, these factors do not interact. Overweight and obese women who seek treatment with IVF should be counseled about pregnancy risks. The decision to proceed with IVF should be based on clinical judgment after considering an individual's chance of success and risk of complications.
Study Funding/competing Interest(s): This study was supported by the Research Institute of the McGill University Health Centre (grant 6291) and also supported by the Trio Fertility (formerly Lifequest) Research Fund. The authors report no competing interests.
Trial Registration Number: Not applicable.
abstract_id: PUBMED:36593463
The influence of male and female overweight/obesity on IVF outcomes: a cohort study based on registration in Western China. Background: Overweight/obesity can affect fertility, increase the risk of pregnancy complications, and affect the outcome of assisted reproductive technology (ART). However, due to confounding factors, the accuracy and uniformity of published findings on IVF outcomes have been disputed. This study aimed to assess the effects of both male and female body mass index (BMI), individually and in combination, on IVF outcomes.
Methods: This retrospective cohort study included 11,191 couples undergoing IVF. Per the Chinese BMI standard, the couples were divided into four groups: normal; female overweight/obesity; male overweight/obesity; and combined male and female overweight/obesity. The IVF outcomes of the four groups were compared and analysed.
Results: Regarding the 6569 first fresh IVF-ET cycles, compared with the normal weight group, the female overweight/obesity and combined male/female overweight/obesity groups had much lower numbers of available embryos and high-quality embryos (p < 0.05); additionally, the fertilization (p < 0.001) and normal fertilization rates (p < 0.001) were significantly decreased in the female overweight/obesity group. The combined male/female overweight/obesity group had significant reductions in the available embryo (p = 0.002), high-quality embryo (p = 0.010), fertilization (p = 0.001) and normal fertilization rates (p < 0.001); however, neither male nor female overweight/obesity nor their combination significantly affected the clinical pregnancy rate (CPR), live birth rate (LBR) or abortion rate (p > 0.05).
Conclusion: Our findings support the notion that overweight/obesity does not influence pregnancy success; however, we found that overweight/obesity affects the fertilization rate and embryo number and that there are sex differences.
abstract_id: PUBMED:24829525
Obstetric complications in women with IVF conceived pregnancies and polycystic ovarian syndrome. Polycystic ovarian syndrome (PCOS) is often accompanied by infertility that necessitates ovulation induction using clomiphene citrate, gonadotropins or even in vitro fertilization (IVF). These treatment methods are known to increase the incidence of multiple pregnancies as well as some negative consequences, including a rise in the risk for gestational diabetes mellitus, pre-eclampsia, etc., Furthermore, pregnancies established after IVF carry an increased risk for maternal complications. However, the increased risk of developing adverse obstetric complications has been suggested to occur independently of obesity as well as in populations without assisted reproductive techniques. Many studies have been performed to study the effect of PCOS on pregnancy and the effect of pregnancy on PCOS. The hormonal milieu that is exaggerated in PCOS women is quite well understood at the biochemical and genetic levels. The maternal and neonatal outcomes of PCOS women who have undergone in vitro fertilization-embryo transfer (IVF-ET) have not been widely studied till date. This review aims to evaluate the current evidence regarding adverse obstetric outcomes of PCOS women undergoing IVF-ET. The rationale of this review is to study whether the adverse obstetric outcomes are increased in PCOS women in general, or particularly in those PCOS women who are undergoing IVF-ET. It is also important to analyze via a literature review whether the increased adverse outcomes are due to infertility in general or PCOS per se. An attempt has been made to give evidence regarding preventive strategies for obstetric complications in PCOS women who have undergone IVF-ET.
abstract_id: PUBMED:36959962
Body mass index is negatively associated with a good perinatal outcome after in vitro fertilization among patients with polycystic ovary syndrome: a national study. Objective: To evaluate the association between body mass index (BMI) and good perinatal outcomes after in vitro fertilization (IVF) among women with polycystic ovary syndrome (PCOS).
Design: Retrospective cohort study using 2012-2015 Society for Assisted Reproductive Technology Clinic Outcomes Reporting System data.
Setting: Fertility clinics.
Patients: To identify patients most likely to have PCOS, we included women with a diagnosis of ovulation disorder and serum antimüllerian hormone >4.45 ng/mL. Exclusion criteria included age ≥ 41 years, secondary diagnosis of diminished ovarian reserve, preimplantation genetic testing, and missing BMI or primary outcome data.
Interventions: None.
Main Outcome Measures: Good perinatal outcome, defined as a singleton live birth at ≥ 37 weeks with birth weight ≥ 2,500 g and ≤ 4,000 g.
Results: The analysis included 9,521 fresh, autologous IVF cycles from 8,351 women. Among women with PCOS, the proportion of cycles with a good perinatal outcome was inversely associated with BMI: underweight 25.1%, normal weight 22.7%, overweight 18.9%, class I 18.4%, class II 14.9%, and class III or super obesity 12.2%. After adjusting for confounders, women in the highest BMI category had 51% reduced odds of a good perinatal outcome compared with normal weight women (adjusted odds ratio 0.49, 95% confidence interval 0.36-0.67).
Conclusions: Among women with PCOS undergoing fresh, autologous IVF, the odds of a good perinatal outcome decline with increasing BMI. Women with PCOS should be counseled that the odds of achieving a good perinatal outcome decrease as their weight increases.
abstract_id: PUBMED:37810820
Young obese patients may benefit from GnRH-a long protocol contributing to higher implantation rate and live birth rate of fresh IVF-ET cycles. Introduction: Obesity has detrimental influences on women reproductive health. There is little experience in optimizing controlled ovarian hyperstimulation (COH) protocols to treat Chinese obese patients who are undergoing in vitro fertilization and embryo transfer (IVF-ET) therapy.
Methods: The clinical outcome differences were retrospectively analyzed among obese patients who received gonadotrophin-releasing hormone agonist (GnRH-a), GnRH antagonist (GnRH-ant), micro dose GnRH-a (mGnRH-a) and GnRH-a long protocol in IVF-ET cycle at Chengdu Jinjiang Hospital for Women's and Children's Health from January 2014 to December 2019.
Results: The transplantation rate of the GnRH-a long protocol group (59.1%) was higher than that of the GnRH-ant (25.9%) and mGnRH-a (36.7%) groups. The total live birth rate of the GnRH-a long protocol group (46.2%) was higher than that of the GnRH-a group (25.9%) and GnRH-ant group (40.3%). The total number of frozen embryos in the GnRH-ant group was higher than in the other groups (P < 0.05). After adjusting for confounding factors, the logistic regression analysis showed that the GnRH-a long protocol group had higher probabilities of biochemical pregnancy, clinical pregnancy, and live birth than the GnRH-a protocol group. The Gn dose in the mGnRH-a group was higher than in the other three groups. For both singleton and twin births, neonatal outcomes were similar among the four groups, including premature birth rate, Apgar score, newborn weight, and length.
Conclusion: For young obese patients undergoing IVF-ET, the GnRH-a long protocol for COH gives better pregnancy outcomes.
Answer: The relationship between obesity and IVF treatment outcomes is complex and varies across different studies. Some studies suggest that obesity may require adjustments in treatment protocols but does not necessarily lead to poorer pregnancy outcomes. For instance, one study found that obese women might require a significantly higher dose of gonadotropins and longer stimulation durations during IVF treatment, but this did not greatly affect pregnancy outcomes such as clinical pregnancy, spontaneous abortion, and ongoing pregnancy rates per transfer when compared to normal-weight women (PUBMED:26285703).
However, other studies indicate that obesity can have a negative impact on certain IVF outcomes. For example, in women aged 30-38 years, BMI was found to significantly affect pregnancy outcomes, with normal-weight women having higher implantation rates, clinical pregnancy rates, and live birth rates compared to overweight women (PUBMED:37553621). Additionally, in women with polycystic ovary syndrome (PCOS), higher BMI was negatively associated with good perinatal outcomes after IVF, with the odds of a good perinatal outcome declining with increasing BMI (PUBMED:36959962).
In contrast, another study reported that neither male nor female overweight/obesity nor their combination significantly affected the clinical pregnancy rate, live birth rate, or abortion rate, although overweight/obesity did affect the fertilization rate and embryo number (PUBMED:36593463). Similarly, a study on Chinese patients with PCOS found that lean PCOS patients obtained higher clinical pregnancy rates compared with overweight/obese PCOS patients (PUBMED:25664123).
Furthermore, a study on young obese patients suggested that using a specific controlled ovarian hyperstimulation protocol, the GnRH-a long protocol, could contribute to higher implantation rates and live birth rates in fresh IVF-ET cycles (PUBMED:37810820).
Overall, while some studies suggest that obesity does not significantly impact key IVF outcomes such as clinical pregnancy and live birth rates, others indicate that obesity, particularly in certain subgroups like women with PCOS or in specific age ranges, can have a detrimental effect on IVF outcomes, including perinatal outcomes and the quality and number of embryos. It is important to note that individual studies may have limitations, and further research is needed to fully understand the impact of obesity on IVF treatment outcomes. |
Instruction: Cross-modal enhancement of speech detection in young and older adults: does signal content matter?
Abstracts:
abstract_id: PUBMED:21478751
Cross-modal enhancement of speech detection in young and older adults: does signal content matter? Objective: The purpose of the present study was to examine the effects of age and visual content on cross-modal enhancement of auditory speech detection. Visual content consisted of three clearly distinct types of visual information: an unaltered video clip of a talker's face, a low-contrast version of the same clip, and a mouth-like Lissajous figure. It was hypothesized that both young and older adults would exhibit reduced enhancement as visual content diverged from the original clip of the talker's face, but that the decrease would be greater for older participants.
Design: Nineteen young adults and 19 older adults were asked to detect a single spoken syllable (/ba/) in speech-shaped noise, and the level of the signal was adaptively varied to establish the signal-to-noise ratio (SNR) at threshold. There was an auditory-only baseline condition and three audiovisual conditions in which the syllable was accompanied by one of the three visual signals (the unaltered clip of the talker's face, the low-contrast version of that clip, or the Lissajous figure). For each audiovisual condition, the SNR at threshold was compared with the SNR at threshold for the auditory-only condition to measure the amount of cross-modal enhancement.
Results: Young adults exhibited significant cross-modal enhancement with all three types of visual stimuli, with the greatest amount of enhancement observed for the unaltered clip of the talker's face. Older adults, in contrast, exhibited significant cross-modal enhancement only with the unaltered face.
Conclusions: Results of this study suggest that visual signal content affects cross-modal enhancement of speech detection in both young and older adults. They also support a hypothesized age-related deficit in processing low-contrast visual speech stimuli, even in older adults with normal contrast sensitivity.
abstract_id: PUBMED:24198805
Cross-modal signatures in maternal speech and singing. We explored the possibility of a unique cross-modal signature in maternal speech and singing that enables adults and infants to link unfamiliar speaking or singing voices with subsequently viewed silent videos of the talkers or singers. In Experiment 1, adults listened to 30-s excerpts of speech followed by successively presented 7-s silent video clips, one from the previously heard speaker (different speech content) and the other from a different speaker. They successfully identified the previously heard speaker. In Experiment 2, adults heard comparable excerpts of singing followed by silent video clips from the previously heard singer (different song) and another singer. They failed to identify the previously heard singer. In Experiment 3, the videos of talkers and singers were blurred to obscure mouth movements. Adults successfully identified the talkers and they also identified the singers from videos of different portions of the song previously heard. In Experiment 4, 6- to 8-month-old infants listened to 30-s excerpts of the same maternal speech or singing followed by exposure to the silent videos on alternating trials. They looked longer at the silent videos of previously heard talkers and singers. The findings confirm the individuality of maternal speech and singing performance as well as adults' and infants' ability to discern the unique cross-modal signatures. The cues that enable cross-modal matching of talker and singer identity remain to be determined.
abstract_id: PUBMED:34307767
Cross-modal effects in speech perception. Speech research during recent years has moved progressively away from its traditional focus on audition toward a more multisensory approach. In addition to audition and vision, many somatosenses including proprioception, pressure, vibration and aerotactile sensation are all highly relevant modalities for experiencing and/or conveying speech. In this article, we review both long-standing cross-modal effects stemming from decades of audiovisual speech research as well as new findings related to somatosensory effects. Cross-modal effects in speech perception to date are found to be constrained by temporal congruence and signal relevance, but appear to be unconstrained by spatial congruence. Far from taking place in a one-, two- or even three-dimensional space, the literature reveals that speech occupies a highly multidimensional sensory space. We argue that future research in cross-modal effects should expand to consider each of these modalities both separately and in combination with other modalities in speech.
abstract_id: PUBMED:35989307
Cross-modal functional connectivity supports speech understanding in cochlear implant users. Sensory deprivation can lead to cross-modal cortical changes, whereby sensory brain regions deprived of input may be recruited to perform atypical function. Enhanced cross-modal responses to visual stimuli observed in auditory cortex of postlingually deaf cochlear implant (CI) users are hypothesized to reflect increased activation of cortical language regions, but it is unclear if this cross-modal activity is "adaptive" or "mal-adaptive" for speech understanding. To determine if increased activation of language regions is correlated with better speech understanding in CI users, we assessed task-related activation and functional connectivity of auditory and visual cortices to auditory and visual speech and non-speech stimuli in CI users (n = 14) and normal-hearing listeners (n = 17) and used functional near-infrared spectroscopy to measure hemodynamic responses. We used visually presented speech and non-speech to investigate neural processes related to linguistic content and observed that CI users show beneficial cross-modal effects. Specifically, an increase in connectivity between the left auditory and visual cortices-presumed primary sites of cortical language processing-was positively correlated with CI users' abilities to understand speech in background noise. Cross-modal activity in auditory cortex of postlingually deaf CI users may reflect adaptive activity of a distributed, multimodal speech network, recruited to enhance speech understanding.
abstract_id: PUBMED:31414363
Cross-modal correspondences in sine wave: Speech versus non-speech modes. The present study aimed to investigate whether or not the so-called "bouba-kiki" effect is mediated by speech-specific representations. Sine-wave versions of naturally produced pseudowords were used as auditory stimuli in an implicit association task (IAT) and an explicit cross-modal matching (CMM) task to examine cross-modal shape-sound correspondences. A group of participants trained to hear the sine-wave stimuli as speech was compared to a group that heard them as non-speech sounds. Sound-shape correspondence effects were observed in both groups and tasks, indicating that speech-specific processing is not fundamental to the "bouba-kiki" phenomenon. Effects were similar across groups in the IAT, while in the CMM task the speech-mode group showed a stronger effect compared with the non-speech group. This indicates that, while both tasks reflect auditory-visual associations, only the CMM task is additionally sensitive to associations involving speech-specific representations.
abstract_id: PUBMED:33192386
Electrophysiological Dynamics of Visual Speech Processing and the Role of Orofacial Effectors for Cross-Modal Predictions. The human brain generates predictions about future events. During face-to-face conversations, visemic information is used to predict upcoming auditory input. Recent studies suggest that the speech motor system plays a role in these cross-modal predictions, however, usually only audio-visual paradigms are employed. Here we tested whether speech sounds can be predicted on the basis of visemic information only, and to what extent interfering with orofacial articulatory effectors can affect these predictions. We registered EEG and employed N400 as an index of such predictions. Our results show that N400's amplitude was strongly modulated by visemic salience, coherent with cross-modal speech predictions. Additionally, N400 ceased to be evoked when syllables' visemes were presented backwards, suggesting that predictions occur only when the observed viseme matched an existing articuleme in the observer's speech motor system (i.e., the articulatory neural sequence required to produce a particular phoneme/viseme). Importantly, we found that interfering with the motor articulatory system strongly disrupted cross-modal predictions. We also observed a late P1000 that was evoked only for syllable-related visual stimuli, but whose amplitude was not modulated by interfering with the motor system. The present study provides further evidence of the importance of the speech production system for speech sounds predictions based on visemic information at the pre-lexical level. The implications of these results are discussed in the context of a hypothesized trimodal repertoire for speech, in which speech perception is conceived as a highly interactive process that involves not only your ears but also your eyes, lips and tongue.
abstract_id: PUBMED:33147691
The Cross-Modal Suppressive Role of Visual Context on Speech Intelligibility: An ERP Study. The efficacy of audiovisual (AV) integration is reflected in the degree of cross-modal suppression of the auditory event-related potentials (ERPs, P1-N1-P2), while stronger semantic encoding is reflected in enhanced late ERP negativities (e.g., N450). We hypothesized that increasing visual stimulus reliability should lead to more robust AV integration and enhanced semantic prediction, reflected in suppression of auditory ERPs and an enhanced N450, respectively. EEG was acquired while individuals watched and listened to clear and blurred videos of a speaker uttering intact or highly intelligible degraded (vocoded) words and made binary judgments about word meaning (animate or inanimate). We found that intact speech evoked larger negativity between 280 and 527 ms than vocoded speech, suggestive of more robust semantic prediction for the intact signal. For visual reliability, we found that greater cross-modal ERP suppression occurred for clear than blurred videos prior to sound onset and for the P2 ERP. Additionally, the later semantic-related negativity tended to be larger for clear than blurred videos. These results suggest that the cross-modal effect is largely confined to suppression of early auditory networks with a weak effect on networks associated with semantic prediction. However, the semantic-related visual effect on the late negativity may have been tempered by the vocoded signal's high reliability.
abstract_id: PUBMED:37631757
Global Guided Cross-Modal Cross-Scale Network for RGB-D Salient Object Detection. RGB-D saliency detection aims to accurately localize salient regions using the complementary information of a depth map. Global contexts carried by the deep layer are key to salient objection detection, but they are diluted when transferred to shallower layers. Besides, depth maps may contain misleading information due to the depth sensors. To tackle these issues, in this paper, we propose a new cross-modal cross-scale network for RGB-D salient object detection, where the global context information provides global guidance to boost performance in complex scenarios. First, we introduce a global guided cross-modal and cross-scale module named G2CMCSM to realize global guided cross-modal cross-scale fusion. Then, we employ feature refinement modules for progressive refinement in a coarse-to-fine manner. In addition, we adopt a hybrid loss function to supervise the training of G2CMCSNet over different scales. With all these modules working together, G2CMCSNet effectively enhances both salient object details and salient object localization. Extensive experiments on challenging benchmark datasets demonstrate that our G2CMCSNet outperforms existing state-of-the-art methods.
abstract_id: PUBMED:35902451
Are auditory cues special? Evidence from cross-modal distractor-induced blindness. A target that shares features with preceding distractor stimuli is less likely to be detected due to a distractor-driven activation of a negative attentional set. This transient impairment in perceiving the target (distractor-induced blindness/deafness) can be found within vision and audition. Recently, the phenomenon was observed in a cross-modal setting involving an auditory target and additional task-relevant visual information (cross-modal distractor-induced deafness). In the current study, consisting of three behavioral experiments, a visual target, indicated by an auditory cue, had to be detected despite the presence of visual distractors. Multiple distractors consistently led to reduced target detection if cue and target appeared in close temporal proximity, confirming cross-modal distractor-induced blindness. However, the effect on target detection was reduced compared to the effect of cross-modal distractor-induced deafness previously observed for reversed modalities. The physical features defining cue and target could not account for the diminished distractor effect in the current cross-modal task. Instead, this finding may be attributed to the auditory cue acting as an especially efficient release signal of the distractor-induced inhibition. Additionally, a multisensory enhancement of visual target detection by the concurrent auditory signal might have contributed to the reduced distractor effect.
abstract_id: PUBMED:23386124
Cross-modal prediction in speech depends on prior linguistic experience. The sight of a speaker's facial movements during the perception of a spoken message can benefit speech processing through online predictive mechanisms. Recent evidence suggests that these predictive mechanisms can operate across sensory modalities, that is, vision and audition. However, to date, behavioral and electrophysiological demonstrations of cross-modal prediction in speech have considered only the speaker's native language. Here, we address a question of current debate, namely whether the level of representation involved in cross-modal prediction is phonological or pre-phonological. We do this by testing participants in an unfamiliar language. If cross-modal prediction is predominantly based on phonological representations tuned to the phonemic categories of the native language of the listener, then it should be more effective in the listener's native language than in an unfamiliar one. We tested Spanish and English native speakers in an audiovisual matching paradigm that allowed us to evaluate visual-to-auditory prediction, using sentences in the participant's native language and in an unfamiliar language. The benefits of cross-modal prediction were only seen in the native language, regardless of the particular language or participant's linguistic background. This pattern of results implies that cross-modal visual-to-auditory prediction during speech processing makes strong use of phonological representations, rather than low-level spatiotemporal correlations across facial movements and sounds.
Answer: Yes, signal content does matter in cross-modal enhancement of speech detection in both young and older adults. A study examining the effects of age and visual content on cross-modal enhancement of auditory speech detection found that visual signal content affects the degree of enhancement observed. Young adults exhibited significant cross-modal enhancement with all three types of visual stimuli tested—an unaltered video clip of a talker's face, a low-contrast version of the same clip, and a mouth-like Lissajous figure—with the greatest enhancement observed for the unaltered clip of the talker's face. In contrast, older adults exhibited significant cross-modal enhancement only with the unaltered face. This suggests that as visual content diverges from the original clip of the talker's face, the enhancement decreases, and this decrease is greater for older participants. The results support the hypothesis of an age-related deficit in processing low-contrast visual speech stimuli, even in older adults with normal contrast sensitivity (PUBMED:21478751). |
Instruction: A medical-psychiatric unit in a general hospital: effective combined somatic and psychiatric care?
Abstracts:
abstract_id: PUBMED:24472335
A medical-psychiatric unit in a general hospital: effective combined somatic and psychiatric care? Objective: To study the effectiveness of combined integral somatic and psychiatric treatment in a medical-psychiatric unit (MPU).
Design: Retrospective case-note study.
Method: The case notes of all patients admitted to the MPU at the VU Medical Center, Amsterdam, in 2011 were analysed. Data on reasons for referral and somatic and psychiatric diagnoses were collected. Using a global clinical assessment scale and the Health of the Nation Outcome Scales (HoNOS), data on psychiatric symptomatology and limitations, behavioural problems, social problems and limitations associated with physical health problems were collected on both admission and discharge. In this way the effect of the admission period on various problems was determined.
Results: In 2011 there were 139 admissions to the MPU with a wide variation of somatic and psychiatric diagnoses. The average admission period was 9 days. Global clinical evaluation of the treatment goals set for somatic and psychiatric conditions showed that more than 90% and 85% of the treatment goals, respectively, were completely achieved. HoNOS scores showed a reduction in severity of both psychiatric and somatic problems. The total HoNOS score was significantly reduced by nearly 3.5 points, a large effect size.
Conclusion: The MPU has succeeded in its goal to deliver integral care to a very diverse group of patients with somatic and psychiatric co-morbidities. It is able to offer care to a vulnerable patient group for whom, given the complexity of their somatic and behavioural problems, treatment on a non-integrated unit presumably could not have been delivered, or could not have been delivered adequately.
abstract_id: PUBMED:7390150
Liaison psychiatry: a model for medical care on a general hospital psychiatric unit. Liaison psychiatry is traditionally practiced on the medical and surgical floors of the general hospital. The need for liaison psychiatry on the inpatient psychiatric unit, as opposed to its usual setting, was realized when the medical care requirements of hospitalized psychiatric patients were assessed. In many general hospitals this medical care is provided by a psychiatrist in consultation with medical and surgical colleagues. Over a three-year period at the Medical Center Hospital of Vermont, 563 medical/surgical consultations were provided to the inpatient psychiatric unit. To utilize these consultations most effectively, the role best suited for the psychiatrist was that of liaison-consultee. Case examples are used to demonstrate the effectiveness of employing liaison skills in the treatment of somatic problems on the inpatient psychiatric unit. The educational implications of learning the liaison model in this context are discussed.
abstract_id: PUBMED:496137
Specificity of function of a psychiatric unit in a general hospital (author's transl) The recent creation of a psychiatric unit in the hospital Broussais gives rise to reflection on what, according to the authors, could constitute the originality of such a treatment unit. It should not be used to satisfy all the requests for psychiatric care within the hospital itself, nor act in parallel with the specialized psychiatric departments established as such in psychiatric or general hospitals. The presence, however, of a group of psychiatrists working as a team in a general hospital, and having several beds for their own autonomous use, should enable the first approach to be made, under the best conditions, to pathological disorders produced by psychological disturbances which are not immediately recognized by the patient: somatic manifestations of anxiety or depression, psychosomatic disorders, behavioural problems such as alcoholism, or even suicide attempts. This clinical function cannot be dissociated from the daily therapeutic approach of the health team or from the conception of medical psychology training based on the true needs of general practitioners. It is also the starting point for a close collaboration with physicians and basic scientists in the field of medical research.
abstract_id: PUBMED:7568651
The pediatric medical-psychiatric unit in a psychiatric hospital. Interest in the development of pediatric medical-psychiatric units continues to grow, driven by clinical, financial, and interdisciplinary considerations. While virtually all of the pediatric medical-psychiatric units reported in the literature to date have arisen in the pediatric setting, there are considerations that may encourage the development of such programs in the psychiatric setting. The authors report on the development and characteristics of a pediatric medical-psychiatric specialty inpatient unit developed in a psychiatric hospital. Advantages and disadvantages of the psychiatric hospital setting are considered in light of cumulative experience.
abstract_id: PUBMED:3802003
A psychiatric intensive care unit in a general hospital setting. A twelve month period of the functioning of a psychiatric intensive care unit in a general hospital is reviewed. The unit has actually been functioning for about three and a half years. Although there were growing pains initially, the unit has become an integral part of the psychiatric inpatient service. It serves to provide intensive care to acutely ill patients and provides a safe, secure environment. It also reduces the number of disturbed patients on the two general units. Very disturbed patients are expertly managed by the staff and a surprisingly low percentage of patients have to be transferred to the local mental hospital under certification. The experience demonstrates that a psychiatric intensive care unit based on a general medical intensive care unit model can function well with benefit both to patients and staff.
abstract_id: PUBMED:3967821
A psychiatric unit becomes a psychiatric-medical unit: administrative and clinical implications. Increasing awareness of the frequent concurrence of medical and psychiatric illness has led to a resurgence of interest in psychiatric-medical units. This paper describes the conversion of a 19-bed general hospital psychiatric unit to a psychiatric-medical model. The conversion involved hiring a full-time chief and changing priorities for elective admission, but did not involve major changes in staffing; community-based psychiatrists continued to admit the majority of patients. Arrangements were made for medical house staff coverage of emergent medical problems, while daily medical care remained in the hands of the psychiatrists and their private medical consultants. In the year following the transition, numerous patients with combined acute medical and psychiatric illness not treatable in the previous model were accepted and successfully treated. Quantitative study of annual statistics from the pre- and posttransition years revealed the following: The average age of patients increased from 46 to 54 years. The proportion of patients over 65 increased from 19% to 34.9%. The proportion of patients with identified concurrent medical diagnoses increased from 54.7% to 69.1%. Dispositions to nursing homes and chronic care facilities decreased from 10.5% to 8.9%. Length of stay increased from 19.3 to 23.1 days. The average daily hospital bill for psychiatric inpatients rose by 24.6%, compared with a hospital-wide average increase of 16.3%. Although the change in model appeared to offer effective treatment to previously underserved patients, it implied a significant shifting of patients and of costs. Administrative implications are discussed, and a list of preconditions for a successful conversion is suggested.
abstract_id: PUBMED:7250916
The mixed medical-psychiatric unit: an alternative approach to inpatient psychiatric care. In a 256-bed general hospital, psychiatric patients are cared for on a single unit with medical patients. The unit, developed in collaboration with a state university's medical school, has been employed successfully in the teaching of psychiatric residents and medical students and has provided benefits to both the psychiatric and medical patients. A normalizing effect on the disturbed behavior of psychiatric patients has been observed and has been attributed to the presence of nonpsychiatric patients, and the treatment of medical patients on the unit has been humanized through greater nurse-patient interaction and the provision of activity programs and recreational facilities. The experience indicates that a wide range of psychiatric patients can be cared for in a mixed setting and that such a setting fosters continuity of care. The limitations of the unit and the policies and conditions necessary for its operation are discussed.
abstract_id: PUBMED:18807328
Palliative care in a psychiatric-somatic care unit Patients with severe psychiatric and somatic disorders may require admission to a combined psychiatric-somatic care unit. These units provide specialised psychiatric and somatic care as well as palliative care. This is illustrated by two case reports. A 51-year-old man with a malignant brain tumour was admitted to our psychiatric-somatic care unit after threatening his wife and children. He was aggressive and confused. Seizures were suspected and palliative care was needed. Within a few weeks his condition deteriorated. He died 1 day after terminal sedation had been initiated. A 78-year-old woman was admitted to receive daily electroconvulsive treatment (ECT) for depression with catatonia. The ECT had to be interrupted repeatedly due to comorbid infections and complications. She died 3 days after palliative care was initiated.
abstract_id: PUBMED:7583489
Treatment of Hassidic Jewish patients in a general hospital medical-psychiatric unit. A combined medical-psychiatric inpatient unit at a general medical center in Brooklyn, New York, provides inpatient psychiatric treatment to members of the Orthodox Jewish sect of the Lubavitcher Hassidim, who are generally reluctant to accept treatment in traditional psychiatric inpatient settings. The unit's biological treatment model, which emphasizes long-term maintenance on psychotropic medication, monthly postdischarge pharmacotherapeutic visits, lowered expectations, and minimal use of expressive psychotherapy, is compatible with the Hassidic community's view of mental illness. A practicing Orthodox Jewish psychiatrist on the unit staff sensitizes other staff members to religious and cultural issues in treating Hassidic patients.
abstract_id: PUBMED:4034890
The medical-psychiatric unit: a novel psychosomatic approach. A retrospective chart review of 600 general hospital patients, of whom 200 had been admitted to a medical-psychiatric unit, 200 to a conventional psychiatric unit and 200 to medical-surgical units, and for whom psychiatric consultation had been requested, disclosed significant differences in the patient populations of these services. The data suggest that consultation psychiatry and medical-psychiatric units serve two different subgroups of patients with combined physical and psychiatric illness. A comprehensive model of clinical psychosomatic medicine encompassing consultation psychiatry, liaison psychiatry, medical-psychiatric units and after-care medical-psychiatric clinics, is proposed.
Answer: The effectiveness of combined somatic and psychiatric care in a medical-psychiatric unit (MPU) within a general hospital setting has been studied and demonstrated in various contexts. A retrospective case-note study of patients admitted to the MPU at the VU Medical Center in Amsterdam in 2011 showed that the unit was successful in delivering integral care to a diverse group of patients with somatic and psychiatric comorbidities. The study found that more than 90% of somatic and 85% of psychiatric treatment goals were completely achieved, and there was a significant reduction in the severity of both psychiatric and somatic problems as measured by the Health of the Nation Outcome Scales (HoNOS) (PUBMED:24472335).
The concept of a medical-psychiatric unit is not new, and various models have been implemented over the years. For instance, liaison psychiatry has been practiced on inpatient psychiatric units, where the medical care requirements of hospitalized psychiatric patients are assessed and managed by a psychiatrist in consultation with medical and surgical colleagues. This approach has been shown to be effective in treating somatic problems on the inpatient psychiatric unit (PUBMED:7390150).
The specificity of function of a psychiatric unit in a general hospital has been discussed, emphasizing the importance of a team of psychiatrists working in a general hospital to address pathological disorders produced by psychological disturbances that manifest somatically, such as psychosomatic disorders, behavioral problems, and suicide attempts (PUBMED:496137).
Additionally, pediatric medical-psychiatric units have been developed in psychiatric hospitals, offering advantages and disadvantages in this setting based on cumulative experience (PUBMED:7568651). Similarly, psychiatric intensive care units in general hospitals have been reviewed, demonstrating benefits to both patients and staff by providing intensive care to acutely ill patients in a safe, secure environment (PUBMED:3802003).
The conversion of a general hospital psychiatric unit to a psychiatric-medical model has been described, with the transition leading to the successful treatment of patients with combined acute medical and psychiatric illness (PUBMED:3967821). Moreover, the mixed medical-psychiatric unit model has been employed successfully in a general hospital, providing benefits to both psychiatric and medical patients and fostering continuity of care (PUBMED:7250916). |
Instruction: Breast and cervical cancer screening practices among disabled women aged 40-75: does quality of the experience matter?
Abstracts:
abstract_id: PUBMED:18788985
Breast and cervical cancer screening practices among disabled women aged 40-75: does quality of the experience matter? Background: Women with disabilities (WWD) face significant barriers accessing healthcare, which may affect rates of routine preventive services. We examined the relationship between disability status and routine breast and cervical cancer screening among middle-aged and older unmarried women and the differences in reported quality of the screening experience.
Methods: Data were from a 2003-2005 cross-sectional survey of 630 unmarried women in Rhode Island, 40-75 years of age, stratified by marital status (previously vs. never married) and partner gender (women who partner with men exclusively [WPM] vs. women who partner with women exclusively or with both women and men [WPW]).
Results: WWD were more likely than those without a disability to be older, have a high school education or less, have household incomes <$30,000, be unemployed, and identify as nonwhite. In addition, WWD were less likely to report having the mammogram or Pap test procedure explained and more likely to report that the procedures were difficult to perform. After adjustment for important demographic characteristics, we found no differences in cancer screening behaviors by disability status. However, the quality of the cancer screening experience was consistently and significantly associated with likelihood of routine cancer screening.
Conclusions: Higher quality of cancer screening experience was significantly associated with likelihood of having routine breast and cervical cancer screening. Further studies should explore factors that affect quality of the screening experience, including facility characteristics and interactions with medical staff.
abstract_id: PUBMED:32723342
Self-reported breast and cervical cancer screening practices among women in Ghana: predictive factors and reproductive health policy implications from the WHO study on global AGEing and adult health. Background: Breast and cervical cancers constitute the two leading causes of cancer deaths among women in Ghana. This study examined breast and cervical screening practices among adult and older women in Ghana.
Methods: Data from a population-based cross-sectional study with a sample of 2749 women were analyzed from the study on global AGEing and adult health conducted in Ghana between 2007 and 2008. Binary and multivariable ordinal logistic regression analyses were performed to assess the association between socio-demographic factors, breast and cervical screening practices.
Results: We found that 12.0% and 3.4% of adult women had ever had pelvic screening and mammography, respectively. Also, 12.0% of adult women had either one of the screenings, while only 1.8% had both. Age, ever schooled, ethnicity, income quantile, father's education, mother's employment and chronic disease status were associated with the uptake of both screening practices.
Conclusion: Nationwide cancer awareness campaigns and education should target women to improve health seeking behaviours regarding cancer screening, diagnosis and treatment. Incorporating cancer screening as a benefit package under the National Health Insurance Scheme can reduce financial barriers for breast and cervical screening.
abstract_id: PUBMED:17188212
Disability and receipt of clinical preventive services among women. Background: More individuals are surviving catastrophic injuries and living longer with persistent disability; however, their receipt of clinical preventive services is not well understood as compared with those without disabilities given the dual focus of care on both primary prevention and the prevention of secondary complications related to their disabilities.
Methods: Longitudinal analyses of the 1999-2002 Medical Expenditure Panel Survey (MEPS). The study sample consisted of 3,183 community-dwelling women aged 51-64 years who were followed for 2 full years. Women with disabilities were defined as having reported any limitation in any area of activity of daily living in 2 years. Recommended clinical preventive services were defined as receiving the following at the recommended intervals: colorectal, cervical, and breast cancer screening; cholesterol screening; and influenza immunization. Chi-square tests and multiple logistic regressions were used to examine variations in the use of clinical preventive services.
Results: Overall, 23% of the women in the study (n = 835) were disabled. Disabled women were less likely to receive mammography and Pap smears within the recommended intervals. In contrast, disabled women were more likely to receive influenza immunization, cholesterol screening, and colorectal screening within the recommended intervals. Among the disabled, usual source of care and health insurance remained significant predictors of receipt of clinical preventive services across all types.
Conclusions: Disabled women were less likely to receive some of the cancer screening services, suggesting a need for targeted interventions to promote breast cancer and cervical cancer screening. Increased access to health care insurance and health care providers may also help.
abstract_id: PUBMED:35017369
Screening practices for breast and cervical cancer and associated factors, among rural women in Vellore, Tamil Nadu. Background: Population-based coverage of breast and cervical cancer screening in the community is inadequately reported in India. This study assessed screening rates, awareness, and other factors affecting screening among rural women aged 25-60 years in Vellore, Tamil Nadu.
Methods: Women aged 25-60 years, from five randomly selected villages of a rural block were included in this cross-sectional study in Vellore, Tamil Nadu. Households were selected by systematic random sampling, followed by simple random sampling of eligible women in the house. A semi-structured questionnaire was used to assess screening practices, awareness, and other factors related to cervical and breast cancer.
Results: Although 43.8% and 57.9% were aware of the availability of screening for cervical and breast cancer, respectively, screening rates were only 23.4% (95% confidence interval [CI]: 18.4-28.4%) and 16.2% (95% CI: 11.9-20.5%), respectively. Adequate knowledge (score of ≥50%) of breast cancer was found in only 5.9%, compared with 27.2% for cervical cancer. Only 16.6% of women had ever attended any health education program on cancer. Exposure to health education (breast screening odds ratio [OR]: 6.89, 95% CI: 3.34-14.21; cervical screening OR: 6.92, 95% CI: 3.42-14.00) and adequate knowledge (breast OR: 4.69, 95% CI: 1.55-14.22; cervix OR: 3.01, 95% CI: 1.59-5.68) were independently associated with cancer screening.
Conclusion: Awareness and screening rates for breast and cervical cancer are low among rural women in Tamil Nadu, a south Indian state with comparatively good health indices, with health education being an important factor associated with screening practices.
abstract_id: PUBMED:9422006
Breast and cervical cancer screening among women with physical disabilities. Objective: This article reports findings from the National Study of Women with Physical Disabilities about rates of screening for breast and cervical cancer and factors associated with regular screening in a large sample of women with a variety of physical disabilities and a comparison group of women without disabilities.
Design: Case-comparison study using a written survey. Data were analyzed using measures of central tendency, chi-square analysis, logistic regression, and risk estimation using odds ratios.
Setting: General community.
Participants: A total of 843 women, 450 with disabilities and 393 of their able-bodied friends, aged 18 to 65, who completed the written questionnaire. The most common primary disability type was spinal cord injury (26%), followed by polio (18%), neuromuscular disorders (12%), cerebral palsy (10%), multiple sclerosis (10%), and joint and connective tissue disorders (8%). Twenty-two percent had severe functional limitations, 52% had moderate disabilities, and 26% had mild disabilities.
Main Outcome Measures: Outcomes were measured in terms of frequency of pelvic exams and mammograms.
Results: Women with disabilities tend to be less likely than women without disabilities to receive pelvic exams on a regular basis, and women with more severe functional limitations are significantly less likely to do so. No significant difference was found between women with and without disabilities, regardless of severity of functional limitation, in receiving mammograms within the past 2 years. Perceived control emerged as a significant enhancement factor for mammograms and marginally for pelvic exams. Severity of disability was a significant risk factor for noncompliance with recommended pelvic exams, but not mammograms. Race was a significant risk factor for not receiving pelvic exams, but not mammograms. Household income and age did not reach significance as risk factors in either analysis.
Conclusions: Women with physical disabilities are at a higher risk for delayed diagnosis of breast and cervical cancer, primarily for reasons of environmental, attitudinal, and information barriers. Future research should focus on the subpopulations that were not surveyed adequately in this study, women with disabilities who have low levels of education or income, or who are of minority status.
abstract_id: PUBMED:11817921
Breast and cervical cancer screening practices among Hispanic women in the United States and Puerto Rico, 1998-1999. Background: Results from recent studies suggest that Hispanic women in the United States may underuse cancer screening tests and face important barriers to screening.
Methods: We examined the breast and cervical cancer screening practices of Hispanic women in 50 states, the District of Columbia, and Puerto Rico from 1998 through 1999 by using data from the Behavioral Risk Factor Surveillance System.
Results: About 68.2% (95% confidence interval [CI] = 66.3 to 70.1%) of 7,253 women in this sample aged 40 years or older had received a mammogram in the past 2 years. About 81.4% (95% CI = 80.3 to 82.5%) of 12,350 women aged 18 years or older who had not undergone a hysterectomy had received a Papanicolaou test in the past 3 years. Women with lower incomes and those with less education were less likely to be screened. Women who had seen a physician in the past year and those with health insurance coverage were much more likely to have been screened. For example, among those Hispanic women aged 40 years or older who had any health insurance coverage (n = 6,063), 72.7% (95% CI 70.7-74.6%) had had a mammogram in the past 2 years compared with only 54.8% (95% CI 48.7-61.0%) of women without health insurance coverage (n = 1,184).
Conclusions: These results underscore the need for continued efforts to ensure that Hispanic women who are medically underserved have access to cancer screening services.
abstract_id: PUBMED:12115366
Breast and cervical carcinoma screening practices among women in rural and nonrural areas of the United States, 1998-1999. Background: Prior studies have suggested that women living in rural areas may be less likely than women living in urban areas to have had a recent mammogram and Papanicolaou (Pap) test and that rural women may face substantial barriers to receiving preventive health care services.
Methods: The authors examined both breast and cervical carcinoma screening practices of women living in rural and nonrural areas of the United States from 1998 through 1999 using data from the Behavioral Risk Factor Surveillance System. The authors limited their analyses of screening mammography and clinical breast examination to women aged 40 years or older (n = 108,326). In addition, they limited their analyses of Pap testing to women aged 18 years or older who did not have a history of hysterectomy (n = 131,813). They divided the geographic areas of residence into rural areas and small towns, suburban areas and smaller metropolitan areas, and larger metropolitan areas.
Results: Approximately 66.7% (95% confidence interval [CI] = 65.8% to 67.6%) of women aged 40 years or older who resided in rural areas had received a mammogram in the past 2 years, compared with 75.4% of women living in larger metropolitan areas (95% CI = 74.9% to 75.9%). About 73.0% (95% CI = 72.2% to 73.9%) of women aged 40 years or older who resided in rural areas had received a clinical breast examination in the past 2 years, compared with 78.2% of women living in larger metropolitan areas (95% CI = 77.8% to 78.7%). About 81.3% (95% CI = 80.6% to 82.0%) of 131,813 rural women aged 18 years or older who had not undergone a hysterectomy had received a Pap test in the past 3 years, compared with 84.5% of women living in larger metropolitan areas (95% CI = 84.1% to 84.9%). The differences in screening across rural and nonrural areas persisted in multivariate analysis (P < 0.001).
Conclusions: These results underscore the need for continued efforts to provide breast and cervical carcinoma screening to women living in rural areas of the United States.
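Both of the BRFSS-based abstracts above report screening proportions with 95% confidence intervals. As a rough illustration of where such an interval comes from, the sketch below computes a simple normal-approximation (Wald) interval for a proportion; the respondent count is hypothetical, and the published survey estimates are design-weighted rather than computed this simply.

import math

def wald_ci(p_hat, n, z=1.96):
    # Simple unweighted normal-approximation 95% CI for a proportion.
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# Hypothetical example: 667 of 1,000 rural respondents reporting a recent mammogram.
print(wald_ci(0.667, 1000))  # roughly (0.638, 0.696)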
abstract_id: PUBMED:17324096
Barriers and missed opportunities in breast and cervical cancer screening among women aged 50 and over, New York City, 2002. Objectives: Breast and cervical cancer screening are both routinely recommended for women. However, data are sparse on factors associated with joint screening behaviors. Our objective was to describe the factors associated with receiving both, one, or neither screening test among women aged ≥50.
Methods: Using data from the New York City Community Health Survey (NYC CHS), we compared the characteristics of women > age 50 (n = 2059) who missed (1) a Pap smear only, (2) mammography only, or (3) both screening procedures with the characteristics of women who received both tests. Analyses were performed using multiple logistic regression.
Results: Seventy-three percent of women had both screening tests, 6.7% needed a Pap smear only, 10% missed mammography only, and 10% missed both tests. After multiple logistic regression, missing a Pap smear only was more likely among women > 70 years compared with younger women and among women from Queens than from Manhattan. Missing mammography only was more common among women not reporting a personal doctor than among those with a doctor and among uninsured women relative to the privately insured. Missing both tests was more common among women > 74 years, current smokers compared with never smokers, women without a personal doctor, and the uninsured. This was less common among women from the Bronx than women from Manhattan and among racial/ethnic minorities compared with non-Hispanic white women.
Conclusions: The predictors of each screening outcome appear to be qualitatively different. Changes in provider practices and targeted education may improve Pap smear screening rates, whereas policy initiatives and increased access for the uninsured may raise mammography rates. To achieve optimal preventive care, coscreening should be considered.
abstract_id: PUBMED:11177163
Breast and cervical screening for women in the United Kingdom. Two fully fledged National Health Service cancer screening programmes are currently available in the United Kingdom: breast and cervical screening for women. Breast screening was introduced for women aged 50 years and older, following the publication of the Forrest report in 1986. It has recently been calculated that the breast screening programme in England and Wales has been responsible for around a third of the fall in the death rate from breast cancer among women aged between 55 and 69 years. The cervical screening programme did not have such a clear start and began rather haphazardly in the 1960s. The programme now prevents around 2000 cases of cervical cancer and saves around 1300 lives each year. The duty of the health professional is no longer seen as being to persuade a woman to accept screening, but rather to provide her with accurate information about screening and to assist her in understanding it.
abstract_id: PUBMED:9790660
Use of cervical and breast cancer screening among women with and without functional limitations--United States, 1994-1995. The national health objectives for 2000 include increasing to at least 85% the proportion of all women aged ≥18 years who have received a Papanicolaou test within the preceding 3 years and increasing to at least 80% the proportion of women aged ≥40 years who have ever had a mammogram. However, national data on breast and cervical cancer screening specifically for women with disabilities is limited. During 1994-1995, CDC, 12 other federal agencies, and one foundation sponsored a disability survey as a special supplement to the National Health Interview Survey (NHIS). In 1994, questions on breast and cervical cancer screening were included in the NHIS Health Promotion/Disease Prevention Year 2000 Objectives Supplement. This report provides the findings of an analysis of these linked data, which indicate that women with functional limitations (FLs) were less likely than women without FLs to have had a Pap test within the previous 3 years, and women aged ≥65 years with three or more FLs were less likely to have ever had a mammogram compared with similarly aged women with no limitations.
Answer: The quality of the cancer screening experience is significantly associated with the likelihood of having routine breast and cervical cancer screening among disabled women aged 40-75. Women with disabilities (WWD) were found to be less likely to report having the mammogram or Pap test procedure explained and more likely to report that the procedures were difficult to perform. Despite these challenges, after adjusting for demographic characteristics, there were no differences in cancer screening behaviors by disability status. However, the quality of the cancer screening experience was consistently and significantly associated with the likelihood of routine cancer screening. This suggests that improving the quality of the screening experience could potentially increase the rates of routine breast and cervical cancer screening among disabled women (PUBMED:18788985). |
Instruction: Do the Y-set and double-bag systems reduce the incidence of CAPD peritonitis?
Abstracts:
abstract_id: PUBMED:11158410
Do the Y-set and double-bag systems reduce the incidence of CAPD peritonitis? A systematic review of randomized controlled trials. Background: Peritonitis is the most frequent serious complication of continuous ambulatory peritoneal dialysis (CAPD). It has a major influence on the number of patients switching from CAPD to haemodialysis and has probably restricted the wider acceptance and uptake of CAPD as an alternative mode of dialysis. This systematic review sought to determine if modifications of the transfer set (Y-set or double-bag systems) used in CAPD exchanges are associated with a reduction in peritonitis and an improvement in other relevant outcomes.
Methods: Based on a comprehensive search strategy, we undertook a systematic review of randomized or quasi-randomized controlled trials comparing double-bag and/or Y-set CAPD exchange systems with standard systems, or comparing double-bag with Y-set systems, in patients with end-stage renal disease (ESRD) treated with CAPD. Only published data were used. Data were abstracted by a single investigator onto a standard form and subsequently entered into Review Manager 4.0.4. Its statistical package, Metaview 3.1, calculated an odds ratio (OR) for dichotomous data and a (weighted) mean difference for continuous data with 95% confidence intervals.
Results: Twelve eligible trials with a total of 991 randomized patients were identified. In trials comparing either the Y-set or double-bag systems with the standard systems, significantly fewer patients (133/363 vs 158/263; OR 0.33, 95% CI 0.24-0.46) experienced peritonitis and the number of patient-months on CAPD per episode of peritonitis was consistently greater. When the double-bag systems were compared with the Y-set systems significantly fewer patients experienced peritonitis (44/154 vs 66/138; OR 0.44, 95% CI 0.27-0.71) and the number of patient-months on CAPD per episode of peritonitis was also greater.
Conclusions: Double-bag systems should be the preferred exchange systems in CAPD.
abstract_id: PUBMED:11406068
Double bag or Y-set versus standard transfer systems for continuous ambulatory peritoneal dialysis in end-stage renal disease. Background: Peritonitis is the most frequent serious complication of continuous ambulatory peritoneal dialysis (CAPD). It has a major influence on the number of patients switching from CAPD to haemodialysis and has probably restricted the wider acceptance and uptake of CAPD as an alternative mode of dialysis.
Objectives: This systematic review sought to determine if modifications of the transfer set (Y-set or double bag systems) used in CAPD exchanges are associated with a reduction in peritonitis and an improvement in other relevant outcomes.
Search Strategy: A broad search strategy was employed which attempted to identify all RCTs or quasi-RCTs relevant to the management of end-stage renal disease (ESRD). Five electronic databases were searched (Medline 1966-1999, EMBASE 1984-1999, CINAHL 1982-1996, BIOSIS 1985-1996 and the Cochrane Library), authors of included studies and relevant biomedical companies were contacted, reference lists of identified RCTs and relevant narrative reviews were screened and Kidney International 1980-1997 was hand searched.
Selection Criteria: Randomised or quasi-randomised controlled trials comparing double bag, Y-set and standard CAPD exchange systems in patients with ESRD.
Data Collection And Analysis: Data were abstracted by a single investigator onto a standard form and subsequently entered into Review Manager 4.0.4. Odds Ratio (OR) for dichotomous data and a (Weighted) Mean Difference (WMD) for continuous data were calculated with 95% confidence intervals (95% CI).
Main Results: Twelve eligible trials with a total of 991 randomised patients were identified. In trials comparing either the Y-set or double bag systems with the standard systems significantly fewer patients (OR 0.33, 95% CI 0.24 to 0.46) experienced peritonitis and the number of patient-months on CAPD per episode of peritonitis were consistently greater. When the double bag systems were compared with the Y-set systems significantly fewer patients experienced peritonitis (OR 0.44, 95% CI 0.27 to 0.71) and the numbers of patient-months on CAPD/ episode of peritonitis were also greater.
Reviewer's Conclusions: Double bag systems should be the preferred exchange systems in CAPD.
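The two reviews above express dichotomous outcomes as odds ratios with 95% confidence intervals. As a rough illustration of what that statistic represents (not a reproduction of the meta-analysis, which pools trial-level estimates with Review Manager), the sketch below computes a crude odds ratio and an approximate Wald confidence interval from a single hypothetical 2×2 table; the counts are invented for illustration only.

import math

def crude_odds_ratio(events_a, total_a, events_b, total_b, z=1.96):
    # Crude OR (group A vs group B) with a Wald 95% CI on the log scale.
    a, b = events_a, total_a - events_a   # group A: events / non-events
    c, d = events_b, total_b - events_b   # group B: events / non-events
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts, not taken from any of the trials above:
# 40 of 150 patients with peritonitis on a disconnect system vs 70 of 150 on a standard system.
print(crude_odds_ratio(40, 150, 70, 150))  # OR < 1 favours the disconnect system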
abstract_id: PUBMED:1680408
Divergent etiologies of CAPD peritonitis in integrated double bag and traditional systems? Thirty-four patients on integrated double bag systems (IDBS) without disinfectants were compared with 33 patients on traditional single bag systems for incidence, probability of remaining free of peritonitis, type, and association of peritonitis (PE). In another 13 patients, the influence of a change in bag system was analyzed. On IDBS, the probability of remaining free of PE was 59% at 12 months and the incidence of PE was 0.44/year (1/27.3 months) while on traditional systems the probability of remaining PE free was 29% (p = 0.03) and the incidence was 1.06/year (1/11.3 months). The switch from single bag systems to IDBS decreased the incidence from 1.7/year to 0.7/year. The distribution of microbes that caused peritonitis on IDBS was different from patients on traditional systems, where the causative microbes were mainly coagulase negative Staphylococci. The use of IDBS decreased considerably the occurrence of infections caused by these skin bacteria. Some associated disorder (e.g., exit site infection, dental infection, broken transfer set) was found significantly more often in patients on IDBS. In conclusion, IDBS affect both the occurrence and type of PE by diminishing effectively intraluminal contamination by skin bacteria. Thus, other sources of infection have a proportionally greater significance.
abstract_id: PUBMED:25117423
Double bag or Y-set versus standard transfer systems for continuous ambulatory peritoneal dialysis in end-stage kidney disease. Background: Peritonitis is the most frequent serious complication of continuous ambulatory peritoneal dialysis (CAPD). It has a major influence on the number of patients switching from CAPD to haemodialysis and has probably restricted the wider acceptance and uptake of CAPD as an alternative mode of dialysis.This is an update of a review first published in 2000.
Objectives: This systematic review sought to determine if modifications of the transfer set (Y-set or double bag systems) used in CAPD exchanges are associated with a reduction in peritonitis and an improvement in other relevant outcomes.
Search Methods: We searched the Cochrane Renal Group's Specialised Register through contact with the Trials Search Co-ordinator. Studies contained in the Specialised Register are identified through search strategies specifically designed for CENTRAL, MEDLINE and EMBASE. Date of last search: 22 October 2013.
Selection Criteria: Randomised controlled trials (RCTs) or quasi-RCTs comparing double bag, Y-set and standard peritoneal dialysis (PD) exchange systems in patients with end-stage kidney disease.
Data Collection And Analysis: Data were abstracted by a single investigator onto a standard form and analysed by Review Manager. Analysis was by a random effects model and results were expressed as risk ratio (RR) or mean difference (MD) with 95% confidence intervals (CI).
Main Results: Twelve eligible trials with a total of 991 randomised patients were identified. Despite the large total number of patients, few trials covered the same interventions, small numbers of patients were enrolled in each trial and the methodological quality was suboptimal. Y-set and twin-bag systems were superior to conventional spike systems (7 trials, 485 patients, RR 0.64, 95% CI 0.53 to 0.77) in preventing peritonitis in PD.
Authors' Conclusions: Disconnect systems should be the preferred exchange systems in CAPD.
abstract_id: PUBMED:9853278
Prevention of peritonitis with disconnect systems in CAPD: a randomized controlled trial. The Mexican Nephrology Collaborative Study Group. Background: Recently, disconnect systems for CAPD that are associated with a reduced frequency of peritonitis have been introduced. Our objective was to compare the incidence of peritonitis using three current CAPD systems in a high-risk population with low educational and socioeconomic levels, and high prevalence of malnutrition.
Methods: In a prospective controlled trial, 147 patients commencing CAPD were randomly assigned to one of three groups: 29 to the conventional, 57 to the Y-set, and 61 to the twin bag systems. The number of peritonitis episodes was registered, and patients were followed up for an average of 11.3 months.
Results: The average peritonitis-free interval for the conventional group was 6.1 months, for the Y system was 12.0 months, and for the twin bag was 24.8 months (P < 0.001). By multivariate analysis, the only factor associated with peritonitis was the CAPD system. Peritonitis-related hospitalization was 5.3 +/- 2.0, 2.7 +/- 1.0, and 1.5 +/- 0.9 days/patient/year in the conventional, Y system, and twin bag groups, respectively. The cost per bag was similar for the conventional and Y system, but higher for the twin bag. However, the total costs of treatment (pesos/patient/year) were lower for twin bag (62,159 for the conventional, 70,275 for the Y system, and 54,387 for the twin bag), due to the lower peritonitis incidence and associated hospitalizations.
Conclusions: Y system and twin bag use was associated with a reduction of 50 and 75% peritonitis incidence, respectively, in patients on CAPD. The cost of the twin bag was actually lower, because of savings from a decreased usage of antibiotics and fewer hospitalizations.
abstract_id: PUBMED:9251369
'O' set connector system in CAPD. 51 CAPD patients (age 55.5 ± 14.5 years, 35 male, 16 female) using the 'O' set were studied retrospectively during the period January 1993 to April 1995. The etiology of ESRD was diabetic nephropathy in 25 (49%) and other causes in 26 (51%). The total duration of observation on the 'O' set was 553 patient-months, with a mean duration of 10.8 ± 6.1 months. 24 patients (47%) developed a total of 30 episodes of peritonitis, an incidence of one episode per 18.4 patient-months. The organisms responsible for peritonitis were Gram-positive in 6 (20%), Gram-negative in 3 (10%), fungal in 1 (3.3%), mycobacterial in 1 (3.3%), eosinophilic in 1 (3.3%), sterile in 12 (40%) and unknown in 6 (20%). Two patients with bacterial peritonitis and one patient with tuberculous peritonitis died, while the rest responded favourably to antibiotics. 13 (52%) diabetic patients and 11 (42%) non-diabetic patients had peritonitis (p = NS), and the peritonitis rates in diabetics and non-diabetics were 18.3 and 18.6 patient-months per episode, respectively (p = NS). Exit-site infection was seen in 5 patients (10%) (Staph aureus in 4, Enterococci in 1) and all responded to antibiotic therapy. 7 patients had a total of 10 episodes of symptomatic accidental intraperitoneal sodium hypochlorite instillation; none had any long-term adverse effects. The 'O' set procedure was performed by the patient in 10 (20%) cases and by others in 41 (80%) cases, with peritonitis rates of 18.5 and 18.4 patient-months per episode, respectively (p = NS). The annual cost of CAPD using the 'O' set, Y-bag and twin bag was Rs. 1,50,000, 2,10,000 and 3,72,000 respectively, and the cost of maintenance haemodialysis was Rs. 1,36,800 per annum. The cost of CAPD using the 'O' set was comparable to that of maintenance haemodialysis. The 'O' set connector system in CAPD was found to be safe, cost-effective and efficient.
abstract_id: PUBMED:32068361
Double purse-string craft around the inner cuff: a new technique for an immediate start of CAPD. Background: In order to minimize the risk of leakage and displacement, international guidelines recommend that catheter insertion should be performed at least 2 weeks before beginning CAPD. However, the optimal duration of the break-in period is not yet defined.
Methods: From January 2011 to December 2018, 135 PD catheter insertions in 125 patients (90 men and 35 women, mean age 62.02 ± 16.7) were performed in our centre with the double purse-string technique. Seventy-seven straight double-cuffed Tenckhoff catheters were implanted semi-surgically on the midline below the umbilicus by a trocar, and 58 were surgically implanted through the rectus muscle. In all patients CAPD was started within 24 hours of catheter placement, without a break-in procedure. We recorded all mechanical and infective catheter-related complications during the first 3 months after initiation of CAPD, as well as catheter survival rates.
Results: During the first 3 months, the overall incidence of peri-catheter leakage, catheter dislocation, peritonitis and exit-site infection was 2.96% (4/135), 1.48% (2/135), 10.3% (14/135) and 2.96% (4/135), respectively. No bleeding events, bowel perforations or hernia formations were reported. Catheter survival censored for death, kidney transplantation, loss of ultrafiltration and patient inability was 74.7% at 48 months. There was no difference in the incidence of any mechanical or infectious complication, or in catheter survival, between the semi-surgical and surgical groups.
Conclusions: The double purse-string technique allows an immediate start of CAPD with both semi-surgical and surgical catheter implantation, and is a safe and feasible approach in all patients referred for peritoneal dialysis.
abstract_id: PUBMED:8959630
Twin- versus single-bag disconnect systems: infection rates and cost of continuous ambulatory peritoneal dialysis. Although twin-bag disconnect fluid-transfer systems for continuous ambulatory peritoneal dialysis (CAPD) have a lower rate of catheter-related infection than single-bag systems, their greater monetary purchase cost has prevented universal adoption. Therefore, a single-center randomized study was performed in 63 adult patients to compare the efficiency and total cost of Freeline Solo (FS, twin-bag) and Basic Y (BY, single-bag) systems. Patients were new to CAPD (N = 39), or had a new CAPD catheter, or had had no episodes of peritonitis or exit-site infection in the previous 12 months (N = 24). Total follow-up was 631 patient months (pt.mon), and 53 patients were still on the trial at its termination. Patients rated FS as easier to use than BY (P < 0.001). Peritonitis occurred on 23 occasions in 12 out of 30 patients using BY, and on seven occasions in five of 33 patients using FS. Time to first infection was less with BY than FS (hazard ratio, 2.4; 95% confidence interval (CI), 1.0 to 5.3; P < 0.04). Cumulative incidence of peritonitis was 1 per 14.0 pt.mon with BY and 1 per 46.5 pt.mon with FS (odds ratio, 3.6; 95% CI 1.5 to 8.5; P = 0.004). Length of hospitalization for peritonitis or exit-site infection was 98 days in six patients with BY, versus 17 days in two patients with FS. With BY, four catheters were removed because of infection, but none with FS (P < 0.05). With BY, the total cost of infection was $AUD127,079 ($5033 per pt.yr) versus $19,250 ($704 per pt.yr) with FS, which offset the higher purchase cost of FS. The total cost of CAPD was $AUD956 per pt.yr less with FS than BY. In conclusion, the higher purchase cost of the FS twin-bag system is more than offset by savings from its lower incidence of peritonitis.
abstract_id: PUBMED:7408242
A new technique of continuous ambulatory peritoneal dialysis (CAPD): double-bag system for freedom to the patient and significant reduction of peritonitis. A new technical approach has been described for the treatment of patients on CAPD. The new approach is based on the use of a two-bag system and closed connection by a connector to the Tenckhoff catheter. After each dialyzate exchange the patient discards the entire system and is completely bag-free, unlike the Oreopoulos CAPD technique. Using this new procedure we have markedly reduced the incidence of peritonitis and we have made the patient free from carrying an uncomfortable burden during the interval of solution exchange.
abstract_id: PUBMED:2663035
CAPD disconnect systems. Increasingly complex devices to carry out exchanges in continuous ambulatory peritoneal dialysis (CAPD) have failed to improve on the peritonitis rates of the Italian 'Y' system which uses simple flushing along with antiseptic to sterilise the connectors. Other systems based on similar principles have produced satisfactory peritonitis rates while offering patients the ability to dispense with most of the tubing and the bag between exchanges. The enthusiasm of patients for the principle of disconnection has led to a rapid increase in the use of these systems. CAPD disconnect systems offer nephrologists reasonable peritonitis rates and patients further freedom combined with convenience and safety.
Answer: Yes, the Y-set and double-bag systems have been associated with a reduction in the incidence of CAPD peritonitis. Systematic reviews and randomized controlled trials have shown that these modified transfer sets are superior to standard systems in preventing peritonitis in patients undergoing continuous ambulatory peritoneal dialysis (CAPD) for end-stage renal disease (ESRD).
Specifically, a systematic review of twelve eligible trials with a total of 991 randomized patients found that significantly fewer patients experienced peritonitis when using either the Y-set or double-bag systems compared to standard systems. The odds ratio (OR) for experiencing peritonitis was 0.33 with a 95% confidence interval (CI) of 0.24 to 0.46, indicating a substantial reduction in risk. Furthermore, when comparing double-bag systems directly with Y-set systems, the double-bag systems were found to be even more effective, with significantly fewer patients experiencing peritonitis (OR 0.44, 95% CI 0.27 to 0.71) (PUBMED:11158410, PUBMED:11406068).
Additional studies have supported these findings. For instance, a study comparing integrated double bag systems (IDBS) without disinfectants to traditional single bag systems found that IDBS significantly reduced the incidence of peritonitis and changed the distribution of causative microbes, with fewer infections caused by skin bacteria (PUBMED:1680408). Another randomized controlled trial reported that the use of Y system and twin bag was associated with a reduction of 50% and 75% in peritonitis incidence, respectively (PUBMED:9853278).
Moreover, disconnect systems, which include Y-set and twin-bag systems, have been recommended as the preferred exchange systems in CAPD due to their effectiveness in preventing peritonitis (PUBMED:25117423). The use of these systems has also been associated with cost savings due to decreased usage of antibiotics and fewer hospitalizations (PUBMED:8959630).
In summary, the evidence suggests that the Y-set and double-bag systems do reduce the incidence of CAPD peritonitis, leading to improved patient outcomes and potentially lower healthcare costs. |
Instruction: The capture of visible debris by distal cerebral protection filters during carotid artery stenting: Is it predictable?
Abstracts:
abstract_id: PUBMED:15944592
The capture of visible debris by distal cerebral protection filters during carotid artery stenting: Is it predictable? Objectives: Neurologic complications during carotid artery stenting (CAS) are most clearly associated with embolization of visible debris. Distal filter devices may provide cerebral protection by capturing clinically significant debris. However, they increase procedural time and expense and have their own set of complications. The current study was undertaken to identify the clinical factors predictive for the presence or absence of visible debris captured by distal filter devices during CAS.
Methods: Patients undergoing CAS with use of a distal filter device (n = 279) were prospectively entered into an investigational carotid registry. Recorded variables were classified as patient-, lesion-, or procedure-related. The filter was assessed for visible debris in each case. The odds ratio (OR) and 95% confidence interval (CI) were determined for each variable to predict visible debris. The ability of each variable to predict the absence of visible debris was assessed by calculating the individual negative predictive value (NPV).
Results: Visible debris was present in 169 filters (60.3%). There was an increased risk of visible debris found with several variables (OR, 95% CI): hypertension (2.9, 1.7 to 5.2), hypercholesterolemia (2.3, 1.4 to 3.9), stent diameter >9 mm (16.6, 9.0 to 30.0), and any neurologic event (4.2, 1.5 to 9.9). The NPV failed to exceed 0.80 (80%) for any variable. The NPV of the variables with a significantly elevated OR was as follows: hypertension (0.60), hypercholesterolemia (0.52), stent diameter >9 mm (0.75), and any neurologic event (0.38).
Conclusions: Several clinical variables are associated with the presence of visible debris captured by distal filter devices. The current study failed to identify any variables capable of consistently predicting the absence of visible debris. These findings support the routine rather than the selective use of cerebral protection during CAS.
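The decisive statistic in this abstract is the negative predictive value: the probability that a filter contains no visible debris given that a candidate risk factor is absent. A minimal sketch of that calculation is shown below; the counts are hypothetical and chosen only to illustrate why an NPV in the range reported here (0.38 to 0.75) is too low to justify withholding protection.

def negative_predictive_value(true_negatives, false_negatives):
    # NPV = P(no debris | risk factor absent) = TN / (TN + FN)
    return true_negatives / (true_negatives + false_negatives)

# Hypothetical example: among 100 patients without hypertension,
# 60 filters held no visible debris (true negatives) and 40 did (false negatives).
print(negative_predictive_value(60, 40))  # 0.60 -> debris still found in 40% of "low-risk" cases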
abstract_id: PUBMED:24378246
Combination of flow reversal and distal filter for cerebral protection during carotid artery stenting. Background: Carotid artery stenting (CAS) with distal filter protection allows continuous cerebral perfusion, although it is associated with a greater risk of cerebral ischemic complications than other protection systems. To reduce cerebral ischemic complications, CAS was performed under combined cerebral protection using both flow reversal (FR) and a distal filter.
Methods: Fifty-six stenoses of 52 patients were treated with CAS using the combined protection of FR and a distal filter, with intermittent occlusion of both the common carotid artery (CCA) and the external carotid artery. The blood flow was reversed into the guiding catheter to the central venous system via an external filter, which collected the debris. Clinical outcomes, the rates of capturing visible debris, and new ischemic signals on diffusion-weighted magnetic resonance imaging (DWI-MRI) were evaluated.
Results: The overall technical success rate was 92.9% (52/56). Successful stent deployment was achieved in 100% (56/56) of the cases. No procedural-related emboli causing a neurologic deficit were observed. In 38.5% (20/52) of the cases, visible debris were captured by only the external filter, and in 17.3% (9/52), visible debris were captured by both external and distal filters. In no case was visible debris noted in only the distal filter. New ischemic signals on DWI-MRI were detected in 9.6% (5/52). The 30-day myocardial infarction, stroke, and death rates were 0%.
Conclusions: The additional use of a distal filter captures emboli in 17.3% of cases, and because the occlusion is only intermittent, the procedure is potentially applicable even in those who cannot tolerate prolonged balloon occlusion of the CCA.
abstract_id: PUBMED:22944567
Significance of combining distal filter protection and a guiding catheter with temporary balloon occlusion for carotid artery stenting: clinical results and evaluation of debris capture. Background: Carotid artery stenting (CAS) with distal filter protection allows for continuous cerebral perfusion. However, this procedure has been reported to be associated with a greater risk of debris migrating into the cerebral arteries. To improve the extent of debris capture, we used a guiding catheter with temporary balloon occlusion and temporary aspiration from the common carotid artery.
Methods: Eighty-one stenoses were treated with CAS using distal filter protection; simple distal filter protection (conventional group, n = 50) or distal filter protection with temporary proximal flow control and blood aspiration was performed using a 9-F guiding catheter with a temporary balloon occlusion positioned at the common carotid artery (proximal occlusion group, n = 31). Clinical outcomes, rates of capturing visible debris, and new ischemic signals on diffusion-weighted magnetic resonance imaging (DWI) were evaluated.
Results: Events involving procedure-related emboli causing neurological deficits occurred in 6.0% (3/50) and 3.2% (1/31) of patients in the conventional and proximal occlusion groups, respectively (P = 1.0). The rates of visible debris capture using the distal filter were 64.0% (32/50) and 29.0% (9/31) in the conventional and proximal occlusion groups, respectively, being significantly lower in the proximal occlusion group (P < 0.01). New ischemic signals on DWI were detected in 44.0% (22/50) and 12.9% (4/31) of cases in the conventional and proximal occlusion groups, respectively, being significantly lower in the proximal occlusion group (P < 0.01).
Conclusions: Combining distal filter protection and a guiding catheter with temporary balloon occlusion in CAS significantly reduced visible debris captured by the distal filter and occurrence of small postprocedural cerebral infarctions detected by DWI.
abstract_id: PUBMED:11823652
Cerebral protection during carotid artery stenting: collection and histopathologic analysis of embolized debris. Background And Purpose: Histopathologic analysis was performed to better understand quantity, particle size, and composition of embolized debris collected in protection filters during carotid artery stent implantation.
Methods: Elective carotid stent implantation with the use of a distal filter protection was attempted in 38 consecutive lesions (36 patients) of the internal carotid artery presenting >70% diameter stenosis (mean, 82.1 ± 11.1%). Mean age of the patients was 70.7 ± 7.7 years; 75% were men, and 50% of patients had previous neurological symptoms.
Results: In 37 lesions (97.4%) it was possible to position the filter device, and in all lesions a stent was successfully implanted. The only complication occurring in the hospital and during the 30-day follow-up was 1 death due to acute myocardial infarction. Neurological sequelae did not occur. Histomorphometric analysis was performed on the filters. Presence of debris was detected in 83.7% of filters. The mean surface area of the polyurethane membrane filter covered with material was 53.2 ± 19.8%. Particle size ranged from 1.08 to 5043.5 µm (mean, 289.5 ± 512 µm) in the major axis and 0.7 to 1175.3 µm (mean, 119.7 ± 186.7 µm) in the minor axis. Collected debris consisted predominantly of thrombotic material, foam cells, and cholesterol clefts.
Conclusions: By the use of distal protection filters during carotid artery stenting, it was possible to collect particulate debris potentially leading to distal vessel occlusion in a high percentage of cases. Qualitative analysis of embolized material showed debris dislocated during the percutaneous intervention from atheromatous plaques.
abstract_id: PUBMED:27865698
Immunohistochemical Analysis of Debris Captured by Filter-Type Distal Embolic Protection Devices for Carotid Artery Stenting. Background: Little is known about the micro-debris captured in filter-type distal embolic protection devices (EPD) used for carotid stenting (CAS). This study aimed to determine the histological and immunohistochemical characteristics of such debris by using a new liquid-based cytology (LBC) technique.
Methods: Fifteen patients who underwent CAS using a filter-type distal EPD (FilterWire EZ; Boston Scientific, Marlborough, MA, USA) were included in the study. After gross inspection of each recovered filter device, micro-debris were collected using a new LBC technique (SurePath; TriPath Imaging, Inc., Burlington, NC). Histological and immunohistochemical analysis of the recovered debris was performed. The pre- and postoperative brain magnetic resonance imaging and neurological status of each patient were also reviewed.
Results: No patient developed ipsilateral symptomatic stroke due to a thromboembolic event. All 15 patients (100%) had microscopically identifiable debris in the filters, whereas gross inspection detected visible debris only in 5 patients (33.3%). Histological analysis revealed various types of structural components in an advanced atheromatous plaque, including fragments of fibrous cap, calcified plaque, smooth muscle cells, and necrotic tissue fragment infiltrated with monocytes and macrophages.
Conclusions: Filter-type EPDs may contribute to reducing the risk of CAS-related embolic events by capturing micro-debris even when gross inspection of the recovered filter shows no visible debris in the device.
abstract_id: PUBMED:29862882
Retinal artery occlusion during carotid artery stenting with distal embolic protection device. Retinal artery occlusion associated with carotid artery stenosis is well known. Although it can also occur at the time of carotid artery stenting, retinal artery occlusion via the collateral circulation of the external carotid artery is rare. We encountered two cases of retinal artery occlusion that were thought to be caused by an embolus from the external carotid artery during carotid artery stenting with a distal embolic protection device for the internal carotid artery. A 71-year-old man presented with central retinal artery occlusion after carotid artery stenting using the Carotid Guardwire PS and a 77-year-old man presented with branch retinal artery occlusion after carotid artery stenting using the FilterWire EZ. Because additional new cerebral ischaemic lesions were not detected in either case by postoperative diffusion-weighted magnetic resonance imaging, it was highly likely that the debris that caused retinal artery occlusion passed through not the internal carotid artery but collaterals to retinal arteries from the external carotid artery, which was not protected by a distal embolic protection device. It is suggested that a distal protection device for the internal carotid artery alone cannot prevent retinal artery embolisation during carotid artery stenting and protection of the external carotid artery is important to avoid retinal artery occlusion.
abstract_id: PUBMED:25312877
Pathology of embolic debris in carotid artery stenting. Background: The relationship between magnetic resonance (MR) plaque imaging and the pathology of distal embolic debris is unknown. We aimed to evaluate the relationship between the pathology of embolic debris in the embolic filter during carotid artery stenting (CAS), MR plaque imaging, and new ischemic lesions on diffusion-weighted imaging (DWI).
Method: We prospectively reviewed the 36 patients who underwent CAS using a filter-type embolic protection device. Pathology of debris was categorized into thrombosis, inflammatory cells, elastic fiber, and calcification. We compared the clinical parameters, MR plaque imaging, and pathological characteristics of the embolic debris retained in the filter during CAS on univariate analysis.
Results: Eleven patients had new lesions on DWI and 25 patients did not. All DWI-high lesions were identified in the middle cerebral artery territory on the affected side. Embolic debris was microscopically confirmed in 28 patients (78%): thrombosis in 11 (31%), inflammatory cells in 13 (36%), elastic fiber in 12 (33%), and calcification in 9 (25%). The proportions of asymptomatic carotid stenosis, intra-operative bradycardia/hypotension, and inflammatory cells in the debris were significantly higher in patients with new DWI-high lesions. There was no significant relationship between the pathological characteristics of the distal embolic debris and MR plaque imaging.
Conclusions: Our study showed that new DWI-high lesions might be influenced by the type of debris in the filter. The need for future studies that specifically examine the association between the pathology of debris, MR plaque imaging findings, and new DWI-high lesions during CAS is emphasized.
abstract_id: PUBMED:14723572
Filter devices for cerebral protection during carotid angioplasty and stenting. The risk of embolization during carotid artery stenting (CAS) has been the foremost reason for the cautious acceptance of this percutaneous alternative to carotid endarterectomy. To address this issue, numerous embolic protection devices are being evaluated as an adjunct to CAS for neuroprotection. Among the 3 main categories of these devices, distal filters, which trap embolic debris while maintaining distal cerebral perfusion, have attracted the most corporate interest. This review focuses on the emerging field of embolic protection filters for use in CAS.
abstract_id: PUBMED:24627316
Technique and clinical evidence of neuroprotection in carotid artery stenting. Carotid artery stenting has been advocated as an effective alternative to carotid endarterectomy. Periprocedural embolization of debris during endovascular treatment of carotid artery disease may result in neurological deficit. Different strategies are being developed and evaluated for their ability to minimize the clinical embolic risk. Distal filter devices, proximal and distal balloon occlusion systems are increasingly used in carotid artery stenting, because they seem to be safe and effective in preventing distal embolization, according to several uncontrolled studies. However the use of embolic protection devices is a subject of controversy and no data on their benefit are available from randomized controlled multi-center trials. The technique and clinical evidence of cerebral protection systems during carotid angioplasty and stenting for stroke prevention are reviewed.
abstract_id: PUBMED:27869549
Carotid artery stenting with double cerebral embolic protection in asymptomatic patients - a diffusion-weighted MRI controlled study. Background: The aim of this study was to compare the simultaneous double-protection method (proximal balloon plus distal filter) with distal-filter protection or proximal-balloon protection alone in asymptomatic patients during carotid artery stenting.
Patients And Methods: 119 consecutive patients were investigated for carotid artery stentings in the extracranial internal carotid artery with the use of distal filters (n = 41, 34.4 %), proximal balloon (MoMa) protection (n = 40, 33.6 %) or double protection (n = 38, 31.9 %). Magnetic resonance imaging (MRI) was performed on all patients before the procedure, and control diffusion-weighted MRI (DW-MRI) was obtained within 24-48 h after the procedure. Procedural data, complications, success rate, major adverse cardiovascular events, and MRI findings were collected.
Results: New cerebral high-intensity (HI) lesions were observed in 47 (39.4 %) patients. HI lesions were observed in 22 (53.6 %), 15 (37.5 %), and 10 (26.3 %) of the patients with distal filters, proximal protection, and double protection, respectively (p = 0.004). The average number of HI lesions on DW-MRI was 1.80 in the distal-filter group, 0.90 in the proximal-balloon group, and 0.55 in the double-protection group (p < 0.001). Procedure and fluoroscopy times were slightly longer in the double-protection group compared to the distal- or proximal-protection groups (p = 0.001).
Conclusions: The double (proximal plus distal) cerebral embolic protection technique is safe and effective for minimizing the risk of cerebral embolization, even in patients with asymptomatic carotid artery stenosis, despite slightly longer procedure and fluoroscopy times.
Answer: The capture of visible debris by distal cerebral protection filters during carotid artery stenting (CAS) is not entirely predictable. A study that aimed to identify clinical factors predictive of the presence or absence of visible debris captured by distal filter devices during CAS found that visible debris was present in 60.3% of filters. Several clinical variables were associated with the presence of visible debris, including hypertension, hypercholesterolemia, stent diameter greater than 9 mm, and any neurologic event. However, the negative predictive value (NPV) of these variables failed to exceed 80%, indicating that none of the variables could consistently predict the absence of visible debris. This supports the routine rather than selective use of cerebral protection during CAS, as no reliable predictors for the absence of debris were identified (PUBMED:15944592).
Other studies have explored different methods of cerebral protection during CAS to reduce the risk of cerebral ischemic complications. One study combined flow reversal (FR) and a distal filter for cerebral protection, finding that visible debris was captured by only the external filter in 38.5% of cases and by both external and distal filters in 17.3% of cases. No procedural-related emboli causing a neurologic deficit were observed, suggesting that this combined approach could be beneficial (PUBMED:24378246). Another study combined distal filter protection with a guiding catheter with temporary balloon occlusion, which significantly reduced visible debris captured by the distal filter and the occurrence of small postprocedural cerebral infarctions detected by diffusion-weighted magnetic resonance imaging (DWI) (PUBMED:22944567).
Histopathologic analysis of embolized debris collected in protection filters during CAS showed that a high percentage of cases had particulate debris potentially leading to distal vessel occlusion. The debris consisted predominantly of thrombotic material, foam cells, and cholesterol clefts (PUBMED:11823652). Immunohistochemical analysis of debris captured by filter-type distal embolic protection devices (EPD) revealed that micro-debris were captured even when gross inspection showed no visible debris, indicating that EPDs may reduce the risk of CAS-related embolic events (PUBMED:27865698). |
Instruction: Do vaccinations affect the clinical course of systemic necrotising vasculitis?
Abstracts:
abstract_id: PUBMED:27214210
Do vaccinations affect the clinical course of systemic necrotising vasculitis? A prospective observational web-based study. Objectives: To estimate the impact of vaccinations, infections and traumatic life events on the disease activity of a web-based cohort of systemic necrotising vasculitis (SNV) patients.
Methods: Adults diagnosed with SNV self-reported vaccinations, infectious episodes and traumatic life events every 3 months during follow-up on a secure dedicated website. Participants reported information on disease activity assessed with 3 scores: the French version of the Medical Outcome Study Short Form-36 (SF-36), the visual numerical scale for Patient Global Assessment (PGA) and the modified Disease Extent Index (mDEI).
Results: Between December 2005 and October 2008, 145 participants (mean ± SD age 53±13 years; 57% males) were included. Mean follow-up was 445±325 days. SNVs were distributed as follows: 46% granulomatosis with polyangiitis (Wegener's), 22% eosinophilic granulomatosis with polyangiitis (Churg-Strauss), 18% polyarteritis nodosa and 8% microscopic polyangiitis. During follow-up, 94 vaccinations, 57 acute infectious episodes and 274 traumatic life events were reported. In univariate and multivariate analyses, only traumatic life events were significantly associated with decreased SF-36 mental and physical component scores. No significant SF-36, PGA and mDEI scores variations were reported during the 3 months following acute infectious episode or vaccine administration.
Conclusions: No significant clinical impact of vaccinations on SNV activity was found in this prospective observational study.
abstract_id: PUBMED:10655988
ELISA is the superior method for detecting antineutrophil cytoplasmic antibodies in the diagnosis of systemic necrotising vasculitis. Background: Antineutrophil cytoplasmic antibodies (ANCA) have been used as a diagnostic marker for systemic necrotising vasculitis, a disease classification which includes Wegener granulomatosis, microscopic and classic polyarteritis nodosa, and Churg Strauss disease.
Objective: To compare the diagnostic value of the two methods for detecting these antibodies--immunofluorescence and enzyme linked immunosorbent assay (ELISA)--with respect to biopsy proven active systemic necrotising vasculitis in a clinically relevant population.
Methods: A prospective study to ascertain the patient's diagnosis at the time of each of the 466 requests for ANCA received at one laboratory over a nine month period, and allocate each to one of five diagnostic groups: active and inactive biopsy proven systemic necrotising vasculitis, suspected systemic necrotising vasculitis, low probability systemic necrotising vasculitis, and not systemic necrotising vasculitis.
Results: ELISA was superior to immunofluorescence in the diagnosis of systemic necrotising vasculitis because it was less likely to detect other diseases. This was reflected in its specificity of 97% and positive predictive value of 73%, compared with 90% and only 50% for immunofluorescence (p = 0.0006 and p = 0.013, respectively). ELISA had a negative predictive value of 98% which was not significantly different to immunofluorescence. ELISA was technically superior.
Conclusions: ELISA is the superior method of ANCA detection in the diagnosis of systemic necrotising vasculitis and should be used in conjunction with a compatible clinical picture and histological evidence.
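Specificity and positive predictive value carry the comparison in this abstract. The sketch below shows how these metrics are derived from a 2×2 test-versus-disease table and why a modest gain in specificity (90% to 97%) can raise the PPV substantially when active vasculitis is uncommon among tested patients. The counts are an illustrative split of 466 requests assuming roughly 10% prevalence and 70% sensitivity; they are not the study's actual data.

def test_metrics(tp, fp, fn, tn):
    # Standard diagnostic-test metrics from a 2x2 table.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# Hypothetical ELISA-like test (specificity ~97%) vs immunofluorescence-like test (specificity ~90%):
print(test_metrics(tp=33, fp=13, fn=14, tn=406))  # PPV ~0.72
print(test_metrics(tp=33, fp=42, fn=14, tn=377))  # PPV ~0.44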
abstract_id: PUBMED:32476929
Necrotising sarcoid granulomatosis. A rare granulomatous disease. Introduction: Necrotizing sarcoid granulomatosis (NSG) is a very rare disease of unknown etiology characterized by sarcoid-like granulomas, vasculitis and necrosis in pulmonary and extrapulmonary localizations.
Case report: We describe a case of a 34-year-old Caucasian male with fever, pleural pain, and nodular pulmonary opacities on chest radiograph. Histological examination of the lung tissue confirmed NSG. Diagnostically, infectious causes, vasculitis, and malignancy were excluded. A tendency to partial regression was observed, without the need for corticosteroid treatment.
Conclusion: NSG is a rare disease which must be distinguished from other systemic diseases including vasculitides. The key to diagnosis, emphasized in our paper, is the histopathological finding. The course of NSG is similar to sarcoidosis. Corticosteroids are considered the treatment of choice, but the disease exhibits a tendency towards spontaneous regression. (Sarcoidosis Vasc Diffuse Lung Dis 2018; 35: 395-398).
abstract_id: PUBMED:7910276
Chronic parvovirus B19 infection and systemic necrotising vasculitis: opportunistic infection or aetiological agent? We describe three patients who had infection with human parvovirus B19 in association with new-onset systemic necrotising vasculitis syndromes, two with features of polyarteritis nodosa and one with features of Wegener's granulomatosis. Chronic B19 infection, lasting 5 months to more than 3 years, was shown by enzyme immunoassay for IgG and IgM antibodies to B19 and polymerase chain reaction for B19 DNA in serum and tissue samples. The patients had atypical serological responses to the B19 infection, although none had a recognisable immunodeficiency disorder. Treatment with corticosteroids and cyclophosphamide did not control vasculitis. Intravenous immunoglobulin (IVIG) therapy led to rapid improvement of the systemic vasculitis manifestations, clearing of the chronic parvovirus infection, and long-term remission. These observations suggest an aetiological relation between parvovirus B19 infection and systemic necrotising vasculitis in these patients and indicate a potentially curative role for IVIG in such disorders.
abstract_id: PUBMED:32975813
Recurrent cutaneous necrotising eosinophilic vasculitis. Recurrent cutaneous necrotising eosinophilic vasculitis (RCNEV) is a rare disease that was first described in 1994. We report a case of RCNEV treated with corticosteroids, together with 18 cases that we identified in the literature. Our review of the literature shows that RCNEV was frequently identified in middle-aged females from Asia and usually presents as erythematous to purpuric papuloplaques, angio-oedema on the extremities, as well as peripheral eosinophilia. Histopathologically, RCNEV is characterised by exclusively eosinophilic infiltration around the vascular plexus, the absence of leukocytoclasis and fibrinoid degeneration of vascular walls. Although RCNEV responds to corticosteroid treatment, relapses have occurred during dose tapering. We also discuss the mechanisms of vascular destruction, the differential diagnosis and steroid-sparing therapies for RCNEV.
abstract_id: PUBMED:9456014
Systemic vasculitis associated with seronegative spondylarthropathy (Reiter's syndrome). A first case of Reiter's syndrome developing a severe systemic necrotising vasculitis is reported. After a disease course with major complications, aggressive consistent immuno-suppressive treatment was successful.
abstract_id: PUBMED:24854375
Cytomegalovirus-related necrotising vasculitis mimicking Henoch-Schönlein syndrome. Viral vasculitides have been previously reported in the literature, the role of infections in their pathogenesis ranging from direct cause to trigger event. Here we report the case of a 3-year-old immunocompetent girl who developed a systemic vasculitis leading to ileal perforation, mimicking a full-blown picture of Henoch-Schönlein purpura. High-dosage steroid treatment was started, with good response. The anatomopathological examination of the resected gastrointestinal tract showed features of necrotising vasculitis and cytomegalovirus (CMV)-related inclusion bodies in the endothelial cells, with a direct correlation to vascular damage. The causative role of viral infection was revealed by the presence of CMV DNA in the patient's blood and a positive IgG titer against the virus. Steroid therapy was then tapered: the patient achieved clinical remission, which still persists after a six-month follow-up. Our report suggests that CMV vasculitis is probably more frequent than previously thought, even in immunocompetent patients, with a protean clinical presentation, mimicking other types of vasculitides.
abstract_id: PUBMED:21087724
Pleuropulmonary manifestations of necrotising vasculitis. The pleuropulmonary manifestations of necrotising vasculitis are frequent and polymorphic. While extrapulmonary signs and the presence of antineutrophil cytoplasmic antibodies are helpful for diagnosing a flare of vasculitis, pleuropulmonary symptoms should also prompt consideration of infection or of iatrogenic effects induced by immunosuppressive treatments.
abstract_id: PUBMED:31162033
Abdominal adipose tissue predicts major cardiovascular events in systemic necrotising vasculitides. Objectives: Cardiovascular (CV) events are highly prevalent in systemic necrotising vasculitides (SNV). Visceral/subcutaneous adipose tissue (VAT/SAT) ratio has been shown to be associated with CV events in various diseases. We aimed to assess the relevance of abdominal adipose tissue measurement to predict major CV events (MCVEs) in SNV.
Methods: Patients with SNV were successively included in a longitudinal study assessing MCVEs and other sequelae. Dual x-ray absorptiometry was performed to evaluate abdominal adipose tissue. Patients were prospectively followed for MCVEs, defined as myocardial infarction, unstable angina, stroke, arterial revascularisation and/or hospitalisation for or death from CV causes.
Results: One hundred and twenty consecutive SNV patients were included and analysed (54 males, mean age 53±18 years). High CV risk was found in 28 (23.3%) patients. In univariate analysis, age, male gender, VDI, VAT/SAT ratio and serum troponin level were significantly associated with high CV risk, whereas age and VAT/SAT ratio remained independently associated with high CV risk. Variables associated with high tertile of VAT/SAT ratio included age and metabolic risk factors. After median follow-up of 42 months, 19 (16%) patients experienced MCVEs. Hazard ratios for incident MCVEs compared with 1st tertile of VAT/SAT ratio were 7.22 (1.02-51.3; p=0.048) and 9.90 (3.15-31.2; p=0.0002) in the 2nd and 3rd tertile, respectively.
Conclusions: Abdominal visceral adipose tissue is a reliable surrogate marker of CV risk and predicts incident MCVEs in SNV patients. Abdominal adipose tissue should be probably evaluated routinely in these patients to assess CV risk.
abstract_id: PUBMED:16466454
Necrotising cryptococcal vasculitis in an HIV-negative woman. An 84-year-old woman with a past medical history significant for haemolytic anaemia necessitating chronic steroid therapy presented with skin induration and erythema initially thought to be cellulitis. After a rapid progression of symptoms and failure to respond to antibiotics, a clinical diagnosis of necrotising vasculitis was made. Necrotising cryptococcal vasculitis was confirmed by histopathology and culture.
Answer: According to the findings from a prospective observational web-based study, vaccinations do not significantly affect the clinical course of systemic necrotising vasculitis (SNV). In this study, adults diagnosed with SNV self-reported information on vaccinations, infectious episodes, and traumatic life events every 3 months during follow-up on a secure dedicated website. The disease activity was assessed using three scores: the French version of the Medical Outcome Study Short Form-36 (SF-36), the visual numerical scale for Patient Global Assessment (PGA), and the modified Disease Extent Index (mDEI). The results indicated that during the follow-up period, no significant variations in SF-36, PGA, and mDEI scores were reported in the 3 months following acute infectious episodes or vaccine administration. Only traumatic life events were significantly associated with decreased SF-36 mental and physical component scores. Therefore, the study concluded that there was no significant clinical impact of vaccinations on SNV activity (PUBMED:27214210). |
Instruction: Should arm span or height be used in calculating the BMI for the older people?
Abstracts:
abstract_id: PUBMED:25523902
Should arm span or height be used in calculating the BMI for the older people? Preliminary results. Aims And Objectives: To consider using arm span rather than height for calculating the body mass index, as a parameter that offers greater long-term stability, for the nutritional assessment of persons aged over 65 years.
Background: The body mass index presents certain drawbacks for the nutritional screening of older people suffering malnutrition or at risk of malnutrition, due to the anthropometric changes that occur with increasing age, especially the progressive loss of height.
Design: Observational, cross-sectional study, using nonprobabilistic convenience sampling, with anthropometric measurements and nutritional screening in older men and women, divided into two groups: (1) aged 65-75 years and (2) aged over 75 years.
Methodology: Height and arm span were measured to calculate two separate indices of body mass: body mass index (weight/height) and body mass index.1 (weight/arm span). Nutritional screening was conducted using the Mini Nutritional Assessment Short-Form, which includes the body mass index as an anthropometric measure.
Results: Our results reveal statistically significant differences between the two indices, for the sample analysed. Body mass index.1 classifies a larger number of older people as suffering malnutrition and fewer as being at nutritional risk. When this new index is used, there is a displacement of the subjects at risk, thus increasing the number considered at risk of malnutrition and in need of appropriate therapeutic intervention. Therefore, the use of body mass index.1 would enable more people suffering malnutrition, who would otherwise remain untreated, to receive appropriate care.
Conclusions: As arm span, as an anthropometric measure, remains unchanged over time, it could be used instead of height, as an alternative index (body mass index.1) to the conventional body mass index. Further research is needed to determine the association between body mass index.1 and clinical status parameters to determine optimum cut-off points.
Relevance To Clinical Practice: This study describes the greater stability of body mass index.1 with respect to body mass index for nutritional screening, and the resulting benefits for nutritional monitoring and intervention for older people.
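As a concrete illustration of the contrast drawn in this abstract, the short sketch below computes the conventional BMI (weight divided by squared height, in kg/m²) next to an arm-span-based variant ("BMI.1", with arm span substituted for height in the same formula). This is a minimal sketch rather than code from the study: the patient values and the <20 kg/m² screening flag are assumptions chosen for illustration only, not the MNA-SF categories used by the authors.

```python
def bmi(weight_kg: float, stature_m: float) -> float:
    """Body mass index computed from any stature measure (kg/m^2)."""
    return weight_kg / stature_m ** 2

# Hypothetical older patient: height has decreased with age, arm span has not.
weight_kg = 48.0      # assumed weight, for illustration only
height_m = 1.52       # current (age-reduced) standing height
arm_span_m = 1.60     # arm span, used as a proxy for earlier-life stature

bmi_height = bmi(weight_kg, height_m)     # conventional BMI
bmi_armspan = bmi(weight_kg, arm_span_m)  # "BMI.1": arm span in place of height

# Illustrative under-nutrition flag only (not the MNA-SF categories themselves).
for label, value in [("BMI (height)", bmi_height), ("BMI.1 (arm span)", bmi_armspan)]:
    flag = "at risk of under-nutrition" if value < 20 else "not flagged"
    print(f"{label}: {value:.1f} kg/m^2 -> {flag}")
```

Because arm span exceeds the age-reduced height, the arm-span-based index comes out lower, so more older people fall below a given under-nutrition threshold, which is consistent with the abstract's finding that body mass index.1 classifies more subjects as malnourished.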
abstract_id: PUBMED:29735129
The Use of Arm Span as a Substitute for Height in Calculating Body Mass Index (BMI) for Spine Deformity Patients. Objective: To compare arm span and height in body mass index (BMI) calculation in patients with spinal curvature and investigate their impact on interpretation of BMI.
Study Design: Prospective case-control cohorts.
Summary Of Background Data: The BMI value is based on weight to height ratio. Spine deformity patients experience height loss and its use in calculating BMI is likely to produce errors. A surrogate for height should therefore be sought in BMI determination.
Methods: Ninety-three spine deformity patients were matched with 64 normal children. Anthropometric values (height, arm span, and weight) and spinal curve were obtained. BMIs using arm span and height were calculated, and statistical analysis performed to assess the relationship between BMI/height and BMI/arm span in both groups as well as the relationship between these values and Arm Span to Height difference (Delta AH).
Results: There were 46 males and 47 females, the average age was 15.5 years in Group 1 versus 33 males and 31 females, average age 14.8 years in Group 2. Major scoliosis in Group 1 averaged 125.7° (21° to 252°). The extreme curves show vertebral transposition, with overlapping segments making it more than 180°. A logistic regression showed that there was linearity in BMI scores (R2 = 0.97) for both arm span and height (R2 = 0.94) in group 2 patients. For group 1 patients there was a significant difference in the BMI values when comparing BMI/arm span versus BMI/height (p < .0001). Mean BMI values using height was overstated by 2.8 (18.6%). The threshold at which BMI score must be calculated using arm span as opposed to the height (Delta AH) was determined to be 3 cm.
Conclusions: Spine deformity patients experience height loss, which can impact their true BMI values thereby giving an erroneous impression of their nutritional status. The arm span should be used in patients with Delta AH >3 cm to properly assess nutritional status.
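The decision rule suggested by this abstract, using arm span in place of height for BMI whenever arm span exceeds height by more than 3 cm (Delta AH > 3 cm), can be sketched as below. The patient values are hypothetical and the sketch only illustrates the rule; it is not the study's protocol.

```python
def stature_for_bmi(height_cm: float, arm_span_cm: float, threshold_cm: float = 3.0) -> float:
    """Return the stature measure to use for BMI: arm span when it exceeds
    height by more than the threshold (Delta AH > 3 cm), otherwise height."""
    return arm_span_cm if (arm_span_cm - height_cm) > threshold_cm else height_cm

def bmi(weight_kg: float, stature_cm: float) -> float:
    return weight_kg / (stature_cm / 100.0) ** 2

# Hypothetical spine-deformity patient with marked height loss.
weight_kg, height_cm, arm_span_cm = 45.0, 148.0, 158.0

chosen = stature_for_bmi(height_cm, arm_span_cm)
print(f"Delta AH = {arm_span_cm - height_cm:.0f} cm -> BMI computed with {chosen:.0f} cm")
print(f"BMI using height:   {bmi(weight_kg, height_cm):.1f} kg/m^2")
print(f"BMI using arm span: {bmi(weight_kg, arm_span_cm):.1f} kg/m^2")
```

With these assumed values the height-based BMI overstates the arm-span-based BMI by roughly 2.5 kg/m², the same direction of error the abstract reports for deformity patients.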
abstract_id: PUBMED:29415827
BMI calculation in older people: The effect of using direct and surrogate measures of height in a community-based setting. Background & Aims: There is currently no consensus on which measure of height should be used in older people's body mass index (BMI) calculation. Most estimates of nutritional status include a measurement of body weight and height which should be reliable and accurate, however at present several different methods are used interchangeably. BMI, a key marker in malnutrition assessment, does not reflect age-related changes in height or changes in body composition such as loss of muscle mass or presence of oedema. The aim of this pilot study was to assess how the use of direct and surrogate measures of height impacts on BMI calculation in people aged ≥75 years.
Methods: A cross-sectional study of 64 free-living older people (75-96 yrs) quantified height by two direct measurements, current height (HC), and self-report (HR) and surrogate equations using knee height (HK) and ulna length (HU). BMI calculated from current height measurement (BMIC) was compared with BMI calculated using self-reported height (BMIR) and height estimated from surrogate equations for knee height (BMIK) and ulna length (BMIU).
Results: Median difference of BMIC-BMIR was 2.31 kg/m2. BMIK gave the closest correlation to BMIC. The percentage of study participants identified at increased risk of under-nutrition (BMI < 20 kg/m2) varied depending on which measure of height was used to calculate BMI; from 5% (BMIC), 7.8% (BMIK), 12.5% (BMIU), to 14% (BMIR) respectively.
Conclusions: The results of this pilot study in a relatively healthy sample of older people suggest that interchangeable use of current and reported height in people ≥75 years can introduce substantial significant systematic error. This discrepancy could impact nutritional assessment of older people in poor health and lead to misclassification during nutritional screening if other visual and clinical clues are not taken into account. This could result in long-term clinical and cost implications if individuals who need nutrition support are not correctly identified. A consensus is required on which method should be used to quantify height in older people to improve accuracy of nutritional assessment and clinical care.
abstract_id: PUBMED:32210546
Diagnosis of Presarcopenia Using Body Height and Arm Span for Postmenopausal Osteoporosis. Purpose: Sarcopenia and osteoporosis are both serious health problems in postmenopausal women. The Asia Working Group for Sarcopenia recommends using the skeletal muscle index (SMI), which is height-adjusted appendicular skeletal muscle mass (ASMM). However, loss of height has been shown to be a common clinical finding in patients with osteoporosis. This study examined the prevalence of presarcopenia using height and arm span, which is a predictor of height, and investigated the diagnostic accuracy for presarcopenia.
Methods: A total of 55 post-menopausal osteoporotic patients aged 62-95 years underwent bioelectrical impedance analysis (BIA) for ASMM measurement and dual-energy X-ray absorptiometry (DXA) scan for bone mineral density (BMD). Anthropometric measurements, including height, weight, and arm span were taken, and body mass index (BMI), SMI, and arm span-adjusted SMI (Arm span SMI) were calculated. Presarcopenia was defined as SMI or Arm span SMI <5.7 kg/m2 in this study.
Results: The prevalence of presarcopenia was 27.3% and 38.2% evaluated by SMI and Arm span SMI, respectively. The prevalence of presarcopenia was higher when evaluated by Arm span SMI than by SMI. In the presarcopenia group diagnosed only by Arm span SMI (n=11), the arm span-height difference was significantly higher (p<0.001) and the percentage of young adult mean (YAM) femoral neck-BMD was significantly lower (p=0.013) compared to the normal group diagnosed by both SMI and Arm span-SMI (n=29).
Conclusion: These results indicated that Arm span SMI might be useful for the diagnosis of sarcopenia in patients with severe osteoporosis and kyphosis.
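A minimal sketch of the two indices compared in this abstract follows. It assumes that "Arm span SMI" is computed exactly like the conventional SMI but with arm span substituted for height in the denominator (appendicular skeletal muscle mass divided by squared stature), and it uses the <5.7 kg/m² cut-off stated above; the patient values are hypothetical.

```python
def skeletal_muscle_index(asmm_kg: float, stature_m: float) -> float:
    """Appendicular skeletal muscle mass divided by squared stature (kg/m^2)."""
    return asmm_kg / stature_m ** 2

PRESARCOPENIA_CUTOFF = 5.7  # kg/m^2, the cut-off stated in the abstract

# Hypothetical postmenopausal patient with kyphotic height loss.
asmm_kg, height_m, arm_span_m = 13.5, 1.50, 1.58

for label, stature in [("SMI (height)", height_m), ("Arm span SMI", arm_span_m)]:
    value = skeletal_muscle_index(asmm_kg, stature)
    status = "presarcopenia" if value < PRESARCOPENIA_CUTOFF else "normal"
    print(f"{label}: {value:.2f} kg/m^2 -> {status}")
```

A patient with kyphotic height loss can sit above the cut-off on height-adjusted SMI yet below it on arm-span-adjusted SMI, which is the pattern behind the higher presarcopenia prevalence reported with Arm span SMI.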
abstract_id: PUBMED:18297563
Measuring body mass index (BMI) in nursing home residents: the usefulness of measurement of arm span. Objective: To study whether arm span can be used as substitute for measurement of height in nursing home patients for calculating body mass index (BMI).
Design: Explanatory observational study.
Setting: Assessment of 35 nursing home residents admitted to long-term stay in a nursing home.
Main Outcome Measures: Correlation between measured height and arm span and of BMI based on both measures.
Results: Measured height and arm span, and BMI calculated from either measure, were significantly correlated, r(s)=0.75, p <0.001 and r(s)=0.89, p <0.001, respectively. The ratios between measured height and arm span, and between BMIs based on height or arm span, are close to 1, but the dispersion is rather large.
Conclusion: Arm span is a reliable substitute for measurement of height in nursing home patients. In persons with severe height reduction, arm-span-based BMI is probably more accurate than conventional height-based BMI.
abstract_id: PUBMED:29923545
Relationship between height and arm span of elderly persons in an urban colony of New Delhi. Anthropometric changes take place with increasing age. Progressive loss of height makes it difficult to use height for calculation of body mass index in nutritional screening of elderly persons. There is a need to find alternative methods that could be used as proxy measurements of height in this group. The aim was to assess the relationship between height and arm span among elderly persons. A community-based cross-sectional study was conducted among elderly persons in an urban colony of Delhi. Height and arm span of persons aged 60 years and above (n = 711) were measured according to standard methods. The correlation between arm span and height was calculated. The mean arm span was seen to be more than the mean height in all age-groups and both sexes. There was a linear relationship between height and arm-span in all age-groups. There was a strong correlation between arm span and height in all age groups. Arm span could be used instead of height as an alternative in the conventional body mass index in elderly persons.
abstract_id: PUBMED:30477567
Developing an equation for estimating body height from linear body measurements of Ethiopian adults. Background: Measurement of erect height in older people, hospitalized and bedridden patients, and people with skeletal deformity is difficult. As a result, using body mass index for assessing nutritional status is not valid. Height estimated from linear body measurements such as arm span, knee height, and half arm span has been shown to be a useful surrogate measure of stature. However, the relationship between linear body measurements and stature varies across populations, implying the need for the development of a population-specific prediction equation. The objective of this study was to develop a formula that predicts height from arm span, half arm span, and knee height for Ethiopian adults and assess its agreement with measured height.
Methods: A cross-sectional study was conducted from March 15 to April 21, 2016 in Jimma University among a total of 660 (330 females and 330 males) subjects aged 18-40 years. A two-stage sampling procedure was employed to select study participants. Data were collected using interviewer-administered questionnaire and measurement of anthropometric parameters. The data were edited and entered into Epi Data version 3.1 and exported to SPSS for windows version 20 for cleaning and analyses. Linear regression model was fitted to predict height from knee height, half arm span, and arm span. Bland-Altman analysis was employed to see the agreement between actual height and predicted heights. P values < 0.05 was used to declare as statistically significance.
Results: On multivariable linear regression analyses after adjusting for age and sex, arm span (β = 0.63, p < 0.001, R2 = 87%), half arm span (β = 1.05, p < 0.001, R2 = 83%), and knee height (β = 1.62, p < 0.001, R2 = 84%) predicted height significantly. The Bland-Altman analyses showed a good agreement between measured height and predicted height using all the three linear body measurements.
Conclusion: The findings imply that in the context where height cannot be measured, height predicted from arm span, half arm span, and knee height is a valid proxy indicator of height. Arm span was found to be the best predictor of height. The prediction equations can be used to assess the nutritional status of hospitalized and/or bedridden patients, people with skeletal deformity, and elderly population in Ethiopia.
abstract_id: PUBMED:33681051
Correlation between the arm-span and the standing height among males and females of the Khasi tribal population of Meghalaya state of North-Eastern India. Introduction: The estimation of relationship between the arm span and the standing height has been an important tool in anthropometric measurements especially in cases where direct measurement of stature is not possible.
Objective: To find the relationship between the arm-span and the standing height of both males and females in the Khasi tribal population of Meghalaya.
Materials And Methods: The study involved 400 numbers (272 males and 128 females) of healthy human volunteer subjects belonging to Khasi tribe of Meghalaya. The standing height and arm-span were measured for each individual and analyzed.
Result: Of the 400 healthy volunteers, 272 (68%) were males and 128 (32%) were females, with ages ranging from 25 to 45 years. Height and arm span in males (159.68 ± 4.12 cm and 166.30 ± 4.27 cm, respectively) were found to be significantly (p < 0.001) higher than in females (149.96 ± 3.04 cm and 155.77 ± 3.13 cm, respectively). The Pearson correlation coefficient (r) between height (cm) and arm span (cm) showed a significant positive correlation in both male (r = 0.988, P < 0.001) and female (r = 0.991, P < 0.001) study subjects. The regression equation was Height = 1.060 + 0.954 (Arm span); R2 = 0.976; SEE = 0.646 for males. For female subjects the regression equation was Height = 0.150 + 0.962 (Arm span); R2 = 0.983; SEE = 0.400.
Conclusion: Arm-span can be used as one of the most reliable parameters, in both males and females, for obtaining the stature of an individual as an alternative to height.
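The regression equations reported in this abstract can be applied directly; the sketch below transcribes them and, as a rough check, feeds in the reported mean arm spans to confirm the predictions land close to the reported mean heights. Anything beyond the published coefficients (the function name, the idea of using the sample means as inputs) is illustrative.

```python
def predict_height_khasi(arm_span_cm: float, sex: str) -> float:
    """Height (cm) predicted from arm span using the sex-specific equations
    reported in the abstract (Khasi adults aged 25-45 years)."""
    if sex == "male":
        return 1.060 + 0.954 * arm_span_cm   # R^2 = 0.976, SEE = 0.646
    return 0.150 + 0.962 * arm_span_cm       # R^2 = 0.983, SEE = 0.400

# Rough check against the reported sample means.
print(f"{predict_height_khasi(166.30, 'male'):.1f} cm")    # ~159.7 vs reported mean 159.68
print(f"{predict_height_khasi(155.77, 'female'):.1f} cm")  # ~150.0 vs reported mean 149.96
```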
abstract_id: PUBMED:36866981
Correlation between anthropometric measurements of height and arm span in Indonesian children aged 7-12 years: a cross-sectional study. Background: Height is an anthropometric measurement that serves as the most constant indicator of growth. In certain circumstances, arm span can be used as an alternative to height measurements. This study aims to analyze the correlation between anthropometric measurements of height and arm span in children aged 7-12 years.
Methods: A cross-sectional study was carried out from September to December 2019 in six elementary schools in Bandung. Children aged 7-12 years were recruited with a multistage cluster random sampling method. Children with scoliosis, contractures, and stunting were excluded from the study. Height and arm span were measured by two pediatricians.
Results: A total of 1,114 children, comprising 596 boys and 518 girls, fulfilled the inclusion criteria. The ratio of height to arm span was 0.98-1.01. The regression equation used to predict height through measurement of arm span in male subjects was Height = 21.8623 + 0.7634 x Arm span (cm) + 0.0791 x age (month); R2 = 94%; standard error of estimate (SEE): 2.66 and that in female subjects was Height = 21.2395 + 0.7779 x Arm span (cm) + 0.0701 x age (month); R2 = 95.4%; SEE: 2.39. The predicted height and the average actual height were not significantly different. There is a strong correlation between height and arm span in children aged 7-12 years.
Conclusions: Arm span can be used to predict the actual height of children aged 7-12 years and as an alternative measurement for growth.
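Similarly, the age-and-arm-span equations reported for these children can be transcribed as a small helper; the example child (120 months old with a 135 cm arm span) is hypothetical and serves only to show how the published equations are used.

```python
def predict_height_child(arm_span_cm: float, age_months: int, sex: str) -> float:
    """Height (cm) predicted from arm span and age using the equations
    reported in the abstract (Indonesian children aged 7-12 years)."""
    if sex == "boy":
        return 21.8623 + 0.7634 * arm_span_cm + 0.0791 * age_months  # R^2 = 94%,   SEE 2.66
    return 21.2395 + 0.7779 * arm_span_cm + 0.0701 * age_months      # R^2 = 95.4%, SEE 2.39

# Hypothetical 10-year-old (120 months) with a 135 cm arm span.
print(f"boy:  {predict_height_child(135.0, 120, 'boy'):.1f} cm")
print(f"girl: {predict_height_child(135.0, 120, 'girl'):.1f} cm")
```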
abstract_id: PUBMED:26395228
The relationships between height and arm span, mid-upper arm and waist circumferences and sum of four skinfolds in Ellisras rural children aged 8-18 years. Objective: Height is required for the assessment of growth and nutritional status, as well as for predictions and standardization of physiological parameters. To determine whether arm span, mid-upper arm and waist circumferences and sum of four skinfolds can be used to predict height, the relationships between these anthropometric variables were assessed among Ellisras rural children aged 8-18 years.
Design: The following parameters were measured according to the International Society for the Advancement of Kinathropometry: height, arm span, mid-upper arm circumference, waist circumference and four skinfolds (suprailiac, subscapular, triceps and biceps). Associations between the variables were assessed using Pearson correlation coefficients and linear regression models.
Setting: Ellisras Longitudinal Study (ELS), Limpopo Province, South Africa.
Subjects: Boys (n 911) and girls (n 858) aged 8-18 years.
Results: Mean height was higher than arm span, with differences ranging from 4 cm to 11·5 cm between boys and girls. The correlation between height and arm span was high (ranging from 0·74 to 0·91) with P<0·001. The correlation between height and mid-upper arm circumference, waist circumference and sum of four skinfolds was low (ranging from 0·15 to 0·47) with P<0·00 among girls in the 15-18 years age group.
Conclusions: Arm span was found to be a good predictor of height. The sum of four skinfolds was significantly associated with height in the older age groups for girls, while waist circumference showed a negative significant association in the same groups.
Answer: The use of arm span as a substitute for height in calculating BMI for older people is supported by several studies due to the progressive loss of height with age, which can lead to inaccuracies in BMI calculations when using height. The arm span, as an anthropometric measure, remains unchanged over time and could be used instead of height to calculate an alternative index (body mass index.1) to the conventional BMI (PUBMED:25523902). This alternative index could enable more accurate nutritional screening and intervention for older people, as it classifies a larger number of older individuals as suffering from malnutrition compared to the conventional BMI (PUBMED:25523902).
Additionally, in patients with spinal curvature, the use of arm span for BMI calculation is recommended, especially when the difference between arm span and height (Delta AH) is greater than 3 cm, as spine deformity patients experience height loss which can impact their true BMI values (PUBMED:29735129). Similarly, in a community-based setting, the use of direct and surrogate measures of height showed that interchangeable use of current and reported height can introduce significant systematic error in BMI calculation, impacting nutritional assessment (PUBMED:29415827).
For postmenopausal osteoporotic patients, arm span-adjusted SMI (Arm span SMI) was found to be useful for the diagnosis of sarcopenia, indicating that arm span might be a better measure for assessing conditions related to muscle mass and bone density (PUBMED:32210546). In nursing home residents, arm span has been shown to be a reliable substitute for height measurement, and it may provide a more accurate BMI in persons with severe height reduction (PUBMED:18297563).
Furthermore, studies in various populations, including elderly persons in New Delhi (PUBMED:29923545), Ethiopian adults (PUBMED:30477567), the Khasi tribal population (PUBMED:33681051), Indonesian children (PUBMED:36866981), and Ellisras rural children (PUBMED:26395228), have demonstrated a strong correlation between height and arm span, suggesting that arm span can be used as a reliable parameter for estimating stature when direct measurement of height is not feasible.
In conclusion, for older people, arm span appears to be a more stable and reliable measure than height for calculating BMI, particularly in the context of nutritional assessment and monitoring.
Instruction: Prader-Willi syndrome large deletion on two brothers. Is this the exception that confirm the rule?
Abstracts:
abstract_id: PUBMED:11424049
Prader-Willi syndrome large deletion on two brothers. Is this the exception that confirm the rule? Introduction: Prader-Willi syndrome (PWS), a neuroendocrine disorder, can be due to a large paternally derived chromosome deletion of 15q11-13, to maternal uniparental disomy (UPD), or to an imprinting centre (IC) mutation; among this last group, five families with an inherited microdeletion encompassing SNRPN have been described, and in these families a typical large deletion was excluded. Families with more than one child with PWS due to a classic large deletion have not been published.
Clinical Cases: We report on a family with three children, two of whom had typical clinical findings of PWS: mental retardation, hypogonadism, hypotonia, hyperphagia and obesity, together with strabismus and synophrys; reduced fetal movement was noted during pregnancy. FISH probes (SNRPN and D15S10), methylation-specific PCR (MPCR), Southern blot and microsatellite markers confirmed in the PWS brothers a large deletion encompassing at least the region between D15S63 and GABRA5.
Conclusions: No previously described cases in the literature reviewed report PWS in brothers due to a classical deletion. Possible reasons for the recurrence in this family include chance, germinal mosaicism, abnormalities at the gonadal level, or environmental factors such as occupational hydrocarbon exposure in fathers of PWS patients, as has been reported by different authors. The latter might be an explanation, as the father had worked from the age of 17, for over 12 years, with paints at shipyards, exposed to hydrocarbons and other mutagenic substances. We consider it important to bear this case in mind when giving genetic counseling.
abstract_id: PUBMED:7611294
Molecular characterization of two proximal deletion breakpoint regions in both Prader-Willi and Angelman syndrome patients. Prader-Willi syndrome (PWS) and Angelman syndrome (AS) are distinct mental retardation syndromes caused by paternal and maternal deficiencies, respectively, in chromosome 15q11-q13. Approximately 70% of these patients have a large deletion of approximately 4 Mb extending from D15S9 (ML34) through D15S12 (IR10). To further characterize the deletion breakpoints proximal to D15S9, three new polymorphic microsatellite markers were developed that showed observed heterozygosities of 60%-87%. D15S541 and D15S542 were isolated from YAC A124A3 containing the D15S18 (IR39) locus. D15S543 was isolated from a cosmid cloned from the proximal right end of YAC 254B5 containing the D15S9 (ML34) locus. Gene-centromere mapping of these markers, using a panel of ovarian teratomas of known meiotic origin, extended the genetic map of chromosome 15 by 2-3 cM toward the centromere. Analysis of the more proximal S541/S542 markers on 53 Prader-Willi and 33 Angelman deletion patients indicated two classes of patients: 44% (35/80) of the informative patients were deleted for these markers (class I), while 56% (45/80) were not deleted (class II), with no difference between PWS and AS. In contrast, D15S543 was deleted in all informative patients (13/48) or showed the presence of a single allele (in 35/48 patients), suggesting that this marker is deleted in the majority of PWS and AS cases. These results confirm the presence of two common proximal deletion breakpoint regions in both Prader-Willi and Angelman syndromes and are consistent with the same deletion mechanism being responsible for paternal and maternal deletions. One breakpoint region lies between D15S541/S542 and D15S543, with an additional breakpoint region being proximal to D15S541/S542.
abstract_id: PUBMED:27184501
Two patients with chromosome 22q11.2 deletion presenting with childhood obesity and hyperphagia. Chromosome 22q11.2 deletion syndrome is a clinically heterogeneous condition of intellectual disability, parathyroid and thyroid hypoplasia, palatal abnormalities, cardiac malformations and psychiatric symptoms. Hyperphagia and childhood obesity are widely reported in Prader-Willi Syndrome (PWS), but there is only one previous report of this presentation in chromosome 22q11.2 deletion syndrome. We describe two further cases of chromosome 22q11.2 deletion syndrome in which hyperphagia and childhood obesity were the presenting features. This may be a manifestation of obsessive behaviour secondary to some of the psychiatric features commonly seen in chromosome 22q11.2 deletion syndrome. Serious complications may result from hyperphagia and childhood obesity; therefore, early recognition and intervention are crucial. Due to the similar clinical presentation of these two patients to patients with PWS, it is suggested that the hyperphagia seen here should be managed in a similar way to how it is managed in PWS.
abstract_id: PUBMED:8522319
Genotype-phenotype correlation in a series of 167 deletion and non-deletion patients with Prader-Willi syndrome. A total of 167 patients with Prader-Willi syndrome (PWS) were studied at the clinical and molecular level. Diagnosis was confirmed by the PW71 methylation test. Quantitative Southern blot hybridizations with a probe for the small nuclear ribonucleoprotein N were performed to distinguish between patients with a deletion (116 patients or 69.5%) and patients without a deletion (51 patients or 30.5%). These two types of patients differed with respect to the presence of hypopigmentation, which was more frequent in patients with a deletion (52%) than in patients without (23%), and to average birth weight of females and males, which was lower in patients with a deletion than in patients without. Newborns with PWS had a lower birth weight and length at term, but normal head circumference in comparison with a control group. This finding aids the identification of the neonatal phenotype. In addition, our data confirm an increased maternal age in the non-deletion group.
abstract_id: PUBMED:7977469
Comparison of high resolution chromosome banding and fluorescence in situ hybridization (FISH) for the laboratory evaluation of Prader-Willi syndrome and Angelman syndrome. The development of probes containing segments of DNA from chromosome region 15q11-q13 provides the opportunity to confirm the diagnosis of Prader-Willi syndrome (PWS) and Angelman syndrome (AS) by fluorescence in situ hybridization (FISH). We have evaluated FISH studies and high resolution chromosome banding studies in 14 patients referred to confirm or rule out PWS and five patients referred to confirm or rule out AS. In four patients (three from the PWS category and 1 from the AS group) chromosome analysis suggested that a deletion was present but FISH failed to confirm the finding. In one AS group patient, FISH identified a deletion not detectable by high resolution banding. Review of the clinical findings in the discrepant cases suggested that the FISH results were correct and high resolution findings were erroneous. Studies with a chromosome 15 alpha satellite probe (D15Z) on both normal and abnormal individuals suggested that incorrect interpretation of chromosome banding may occasionally be attributable to alpha satellite polymorphism but other variation of 15q11-q13 chromosome bands also contributes to misinterpretation. We conclude that patients who have been reported to have a cytogenetic deletion of 15q11-q13 and who have clinical findings inconsistent with PWS and AS should be re-evaluated by molecular genetic techniques.
abstract_id: PUBMED:23856564
A case of an atypically large proximal 15q deletion as cause for Prader-Willi syndrome arising from a de novo unbalanced translocation. We describe an 11 month old female with Prader-Willi syndrome (PWS) resulting from an atypically large deletion of proximal 15q due to a de novo 3;15 unbalanced translocation. The 10.6 Mb deletion extends from the chromosome 15 short arm and is not situated in a region previously reported as a common distal breakpoint for unbalanced translocations. There was no deletion of the reciprocal chromosome 3q subtelomeric region detected by either chromosomal microarray or FISH. The patient has hypotonia, failure to thrive, and typical dysmorphic facial features for PWS. The patient also has profound global developmental delay consistent with an expanded, more severe, phenotype.
abstract_id: PUBMED:8884080
Interstitial 6q deletion and Prader-Willi-like phenotype. A third case of an interstitial deletion of the long arm of chromosome 6 with clinical features mimicking Prader-Willi syndrome (PWS) is presented. Although preliminary clinical evaluation in each case suggested PWS, further review revealed that the features in all three cases are not completely compatible with the characteristic findings in Prader-Willi syndrome. Furthermore, the deletions in the three cases do not show a consistent region of overlap. Consequently, no particular band or region in 6q can be defined as associated with obesity. However, our findings confirm the suggestion of Villa et al. in 1995, that individuals with a PWS phenotype who are cytogenetically and molecularly negative for a deletion of 15q11-q13 should be examined for a deletion of 6q.
abstract_id: PUBMED:24750553
Growth patterns of patients with 1p36 deletion syndrome. 1p36 deletion syndrome is one of the most common subtelomeric deletion syndromes. Obesity is frequently observed in patients with this syndrome. Thus, it is important to evaluate the growth status of an individual patient. For this purpose, we accumulated recorded growth data from 44 patients with this syndrome and investigated the growth patterns of patients. Most of the patients showed weight parameters within normal limits, whereas a few of these patients showed intrauterine growth delay and microcephaly. The length of the patients after birth was under the 50th centile in most patients. Many patients showed poor weight gain after birth, and only two female patients were overweight. These findings indicate two different phenotypes of the 1p36 deletion syndrome. The overweight patients with 1p36 deletion started excessive weight gain after two years of life. This characteristic of the patients with 1p36 deletion syndrome is similar to Prader-Willi syndrome.
abstract_id: PUBMED:3319293
Prader-Willi syndrome in two siblings: one with normal karyotype, one with a terminal deletion of distal Xq. Two sibs, a 13-year-old girl and an 11-year-old girl, with typical clinical features of the Prader-Willi syndrome (PWS) are reported. High-resolution chromosome analysis showed the normal karyotype in the elder sister, and 46,X,del(X)(pter→q26.1:) in the younger sister. But an interstitial deletion of 15q was not detected in either of the cases. PWS is most probably an etiologically heterogeneous syndrome consisting of two subgroups, with partial deletion and non-deletion of chromosome 15, respectively.
abstract_id: PUBMED:8723064
Deletion of small nuclear ribonucleoprotein polypeptide N (SNRPN) in Prader-Willi syndrome detected by fluorescence in situ hybridization: two sibs with the typical phenotype without a cytogenetic deletion in chromosome 15q. The small nuclear ribonucleoprotein polypeptide N (SNRPN) gene is regarded as one of the candidates for Prader-Willi syndrome (PWS). We describe two sibs with typical PWS presenting deletion of SNRPN detected by fluorescence in situ hybridization (FISH). Neither a cytogenetically detectable 15q12 deletion nor a deletion for the D15S11, D15S10, and GABRB3 cosmid probes were found in either patient. This implies a smaller deletion limited to the PWS critical region. FISH with a SNRPN probe will permit analysis of PWS patients with limited deletions not detectable with other probes.
Answer: Prader-Willi syndrome (PWS) is typically caused by a large paternally derived chromosome deletion of 15q11-13, maternal uniparental disomy (UPD), or an imprinting mutation (IC). However, the occurrence of PWS due to a classic large deletion in siblings is not commonly reported in the literature. The case described in PUBMED:11424049 is notable because it involves a family with three children, two of whom have PWS with a confirmed large deletion at least of the area comprising between D15S63 and GABRA5. This is an unusual finding, as previously described cases with inherited microdeletions encompassing SNRPN did not show a typical large deletion, and families with more than one child with PWS by classic large deletion have not been published.
The rarity of this occurrence suggests that the case of the two brothers with PWS due to a classical deletion might be an exception rather than the rule. Possible explanations for the recurrence in this family could include random chance, germinal mosaicism, or environmental factors such as exposure to hydrocarbons and other mutagenic substances, as the father had a history of working with paints at shipyards (PUBMED:11424049).
The presence of two common proximal deletion breakpoint regions in both PWS and Angelman syndrome patients, as described in PUBMED:7611294, supports the idea that deletions in the 15q11-q13 region can occur in different ways, leading to these syndromes. However, the specific case of two siblings with a large deletion causing PWS is not typical and represents a rare occurrence in the spectrum of genetic anomalies associated with PWS. Therefore, the case of the two brothers with a large deletion causing PWS could be considered an exception that confirms the general rule that such occurrences are uncommon.
Instruction: Is cataract surgery cost-effective among older patients with a low predicted probability for improvement in reported visual functioning?
Abstracts:
abstract_id: PUBMED:17063129
Is cataract surgery cost-effective among older patients with a low predicted probability for improvement in reported visual functioning? Introduction: Although cataract surgery has been demonstrated to be effective and cost-effective, 5% to 20% of patients do not benefit functionally from the procedure. This study examines the cost-effectiveness of cataract surgery versus watchful waiting in a subgroup of patients who had less than a 30% predicted probability of reporting improvements in visual function after surgery.
Methods: Randomized trial (first eye surgery vs. watchful waiting) of 250 patients who, based on a cataract surgery index (CSI), were felt to have less than a 30% probability of reporting improvements in visual functioning after surgery. Cost was estimated using monthly resource utilization surveys and Medicare billing and payment data. Effectiveness was evaluated at 6 months using the Activities of Daily Vision Scale (ADVS) and the Health Utilities Index, Mark 3 (HUI3).
Results: In terms of overall utility, the incremental cost-effectiveness of surgery was $38,288/QALY. In the subgroup of patients with a CSI score > 11 (< 20% probability of improvement), the cost-effectiveness of cataract surgery was $53,500/QALY. Sensitivity analysis demonstrated that this population of patients may often not derive a utility benefit from surgery.
Conclusion: Cataract surgery is cost-effective even in a subpopulation of patients with a lower (< 30%) predicted probability of reporting improved visual functioning after surgery. There may be a subgroup of patients, CSI > 11, for whom a strategy of watchful waiting may be equally effective and considerably less expensive.
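The $/QALY figures in this abstract are incremental cost-effectiveness ratios: the extra cost of surgery over watchful waiting divided by the extra quality-adjusted life years gained. The sketch below shows the calculation in its simplest form; the cost and utility inputs are invented to reproduce the order of magnitude reported above and are not the trial's actual data.

```python
def icer(cost_new: float, cost_old: float, qaly_new: float, qaly_old: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Invented inputs (surgery vs watchful waiting) chosen only to land near the
# reported order of magnitude; not the trial's actual cost or HUI3 data.
ratio = icer(cost_new=3500.0, cost_old=1200.0, qaly_new=0.62, qaly_old=0.56)
print(f"ICER = ${ratio:,.0f} per QALY")
```

In the trial itself, costs came from resource-utilization surveys and Medicare data and utilities from the HUI3, so the real calculation rests on far richer inputs than this two-number sketch.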
abstract_id: PUBMED:28129000
Dry Eye Symptoms, Patient-Reported Visual Functioning, and Health Anxiety Influencing Patient Satisfaction After Cataract Surgery. Purpose: To evaluate how patient satisfaction after cataract surgery is associated with postoperative visual acuity, visual functioning, dry eye signs and symptoms, health anxiety, and depressive symptoms.
Patients And Methods: Fifty-four patients (mean age: 68.02 years) were assessed 2 months after uneventful phacoemulsification; 27 were unsatisfied with their postoperative results and 27 were satisfied. They completed the following questionnaires: Visual Function Index-14 (VF-14), Ocular Surface Disease Index (OSDI), Shortened Health Anxiety Inventory (SHAI), and Shortened Beck Depression Inventory. Testing included logarithm of the Minimum Angle of Resolution (logMAR) uncorrected visual acuity (UCVA) and best-corrected visual acuity (BCVA), dry eye tests (tear meniscus height and depth measured by spectral optical coherence tomography, tear film break-up time (TBUT), ocular surface staining, Schirmer 1 test, and meibomian gland dysfunction grading).
Results: Postoperative UCVA, BCVA, and the dry eye parameters - except TBUT - showed no statistically significant difference between the two groups (p > 0.130). However, the VF-14 scores, the OSDI scores, and the SHAI scores were significantly worse in the unsatisfied patient group (p < 0.002). No significant correlations were found between visual acuity measures and visual functioning (r < 0.170, p > 0.05). However, the VF-14 scores correlated with the OSDI scores (r = -0.436, p < 0.01) and the OSDI scores correlated with the SHAI scores (r = 0.333, p < 0.05). Multiple logistic regression revealed an adjusted association between patient satisfaction and dry eye symptoms (odds ratio = 1.46, 95% CI = 1.02-2.09, p = 0.038) and visual functioning (odds ratio = 0.78, 95% CI = 0.60-1.0, p = 0.048).
Conclusions: Our results suggest that patient-reported visual functioning, dry eye symptoms, and health anxiety are more closely associated with patients' postoperative satisfaction than with the objective clinical measures of visual acuity or the signs of dry eye.
abstract_id: PUBMED:7257737
Visual functioning in cataract patients. Methods of measuring and results. An Index for measuring Visual Functioning on the basis of self-assessment is presented and evaluated clinically and statistically. The construct validity and the reliability are shown to be sufficient. The 'Visual Functioning Index' has been applied to a group of cataract patients. In bilateral cataract patients visual functioning is, of course, correlated to visual acuity in the best eye, but this correlation is not straightforward. So, for the assessment of visual impairment both visual acuity and visual functioning must be measured. Application of the Index to a group of monaphakic cataract patients indicates a good improvement of visual function obtained even by first cataract extraction. Ideally cataract surgery should be performed before total visual functioning and social integration are severely damaged. On this basis about 20% of the patients in this study should have been operated on at an earlier stage, which among other things suggests a need for additional surgical capacity.
abstract_id: PUBMED:33923803
Rasch Validation of the VF-14 Scale of Vision-Specific Functioning in Greek Patients. The Visual Functioning-14 (VF-14) scale is the most widely employed index of vision-related functional impairment and serves as a patient-reported outcome measure in vision-specific quality of life. The purpose of this study is to rigorously examine and validate the VF-14 scale on a Greek population of ophthalmic patients employing Rasch measurement techniques. Two cohorts of patients were sampled in two waves. The first cohort included 150 cataract patients and the second 150 patients with other ophthalmic diseases. The patients were sampled first while pending surgical or other corrective therapy and two months after receiving therapy. The original 14-item VF-14 demonstrated poor measurement precision and disordered response category thresholds. A revised eight-item version, the VF-8G ('G' for 'Greek'), was tested and confirmed for validity in the cataract research population. No differential functioning was reported for gender, age, and underlying disorder. Improvement in the revised scale correlated with improvement in the mental and physical component of the general health scale SF-36. In conclusion, our findings support the use of the revised form of the VF-14 for assessment of vision-specific functioning and quality of life improvement in populations with cataracts and other visual diseases than cataracts, a result that has not been statistically confirmed previously.
abstract_id: PUBMED:36141761
Factors Influencing Visual Improvement after Phacoemulsification Surgery among Malaysian Cataract Patients. Blindness and visual impairment are part of the global burden of eye disease, with cataract being one of the leading causes of blindness. This study aimed to determine the factors affecting visual acuity (VA) improvement among cataract patients after phacoemulsification surgery in Malaysia. Cataract patients aged over 18 who underwent phacoemulsification surgery between January 2014 and December 2018 were included in this retrospective cohort study. Patients' sociodemographic, comorbidities, surgical, and related complication factors were extracted from the National Eye Database. The outcome was measured by the difference in visual acuity before and after the operation and was categorized as "improved", "no change", and "worse". A total of 180,776 patients were included in the final analysis. Multinomial logistic regression analysis showed "no changes in VA" was significantly higher in patients aged less than 40 years old (OR: 1.66; 95% CI: 1.22, 2.26), patients with ocular comorbidities (OR: 1.65; 95% CI: 1.53, 1.77), patients who had undergone surgery lasting more than 60 min (OR: 1.39; 95% CI: 1.14, 1.69), patients who had surgery without an intraocular lens (IOL) (OR: 1.64; 95% CI: 1.20, 2.26), and patients with postoperative complications (OR: 8.76; 95% CI: 8.13, 9.45). Worsening VA was significantly higher among male patients (OR: 1.11; 95% CI: 1.01, 1.22), patients who had ocular comorbidities (OR: 1.76; 95% CI: 1.59, 1.96), patients who had undergone surgery lasting more than 60 min (OR: 1.94; 95% CI: 1.57, 2.41), patients who had surgery without an IOL (OR: 2.03; 95% CI: 1.48, 2.80), and patients with postoperative complications (OR: 21.46; 95% CI: 19.35, 23.80). The factors impacting "no changes" in and "worsening" of VA after cataract surgery were the following: older age, male gender, ethnicity, ocular comorbidities, surgeon grade, absence of IOL, intraoperative complication, and postoperative problems.
abstract_id: PUBMED:33091324
Risk factors for self-reported cataract symptoms, diagnosis, and surgery uptake among older adults in India: Findings from the WHO SAGE data. Visual impairments have a substantial impact on the well-being of older people, but their impact among older adults in low- and middle-income countries is under-researched. We examined risk factors for self-reported cataract symptoms, diagnosis, and surgery uptake in India.
Cross-sectional data from the nationally representative WHO SAGE survey (2007-2008) for India were analysed. We focused on a sub-sample of 6558 adults aged 50+, applying descriptive statistics and logistic regression.
Nearly 1-in-5 respondents self-reported diagnosed cataracts, more than three-fifths (62%; n = 3879) reported cataract symptoms, and over half (51.8%) underwent surgery. Increasing age, self-reported diabetes, arthritis, low visual acuity, and moderate or severe vision problems were factors associated with self-reported diagnosed cataracts. Odds of cataract symptoms were higher with increasing age and among those with self-reported arthritis, depressive symptoms, low visual acuity, and with moderate or severe vision problems. Odds of cataract surgery were also higher with increasing age, self-reported diabetes, depressive symptoms, and among those with low visual acuity.
A public health approach of behavioural modification, well-structured national outreach eye care services, and inclusion of local basic eye care services are recommended.
abstract_id: PUBMED:25160890
Cognitive speed of processing training in older adults with visual impairments. Purpose: To examine whether older adults with vision impairment differentially benefit from cognitive speed of processing training (SPT) relative to healthy older adults.
Methods: Secondary data analyses were conducted from a randomised trial on the effects of SPT among older adults. The effects of vision impairment as indicated by (1) near visual acuity, (2) contrast sensitivity, (3) self-reported cataracts and (4) self-reported other eye conditions (e.g., glaucoma, macular degeneration, diabetic retinopathy, optic neuritis, and retinopathy) among participants randomised to either SPT or a social- and computer-contact control group were assessed. The primary outcome was Useful Field of View Test (UFOV) performance.
Results: Mixed repeated-measures ancovas demonstrated that those randomized to SPT experienced greater baseline to post-test improvements in UFOV performance relative to controls (p's < 0.001), regardless of impairments in near visual acuity, contrast sensitivity or presence of cataracts. Those with other eye conditions significantly benefitted from training (p = 0.044), but to a lesser degree than those without such conditions. Covariates included age and baseline measures of balance and depressive symptoms, which were significantly correlated with baseline UFOV performance.
Conclusions: Among a community-based sample of older adults with and without visual impairment and eye disease, the SPT intervention was effective in enhancing participants' UFOV performance. The analyses presented here indicate the potential for SPT to enhance UFOV performance among a community-based sample of older adults with visual impairment and potentially for some with self-reported eye disease; further research to explore this area is warranted, particularly to determine the effects of eye diseases on SPT benefits.
abstract_id: PUBMED:32634010
Prevalence and causes of visual impairment among older persons living in low-income old age homes in Durban, South Africa. Background: Visual impairment (VI) increases with age and has been reported to be more prevalent among older adults living in old age homes than in the general population.
Aim: To determine the prevalence and causes of VI among older adults living in low-income old age homes in Durban, South Africa.
Setting: This study was conducted at low-income old age homes in Durban.
Methods: This cross-sectional study of 118 residents aged 60 years and older collected socio-demographic data and presenting visual acuities (VAs) for each eye and binocularly. Anterior segment eye examinations were conducted with a penlight torch and a portable slit-lamp, while posterior segment evaluation was conducted with direct and indirect ophthalmoscopy. Objective and subjective refractions were performed, and the best-corrected distance and near VAs were measured in each eye. VI was defined as presenting VA < 6/18 and included moderate VI (< 6/18-6/60), severe VI (< 6/60-3/60) and blindness (< 6/120).
Results: The mean age of the participants was 73.3 years; 80.5% were females and 19.5% males. The prevalence of VI and blindness was 63.6%. Optical correction significantly reduced the prevalence of VI and blindness by 19.5% (p < 0.05). The main causes of non-refractive VI and blindness were cataract (54.5%), posterior segment disorders (25.5%) and corneal opacities (20%).
Conclusion: The prevalence of VI and blindness is high among residents in low-income old age homes living in Durban. Refractive correction and surgical cataract intervention can significantly reduce the burden of VI and blindness among the elderly residents.
abstract_id: PUBMED:38197047
Prevalence of Visual Impairment and Associated Factors Among Older Adults in Southern Ethiopia, 2022. Background: Visual impairment is a functional limitation of the eye brought on by a disorder or disease that can make it more difficult to carry out daily tasks. Visual impairment causes a wide range of public health, social, and economic issues, particularly in developing nations, where more than 90% of the world's visually impaired people reside. Although many studies conducted in Ethiopia relate to this topic, they have focused on childhood visual impairment.
Objectives: To assess the prevalence and factors associated with visual impairment among older adults.
Methodology: A community-based cross-sectional study design was conducted in Arba Minch Zuria District. Systematic sampling technique was employed to select 655 adults aged 40 and above. Data were gathered through face-to-face interviews and visual acuity measurements, and SPSS version 25 was used for analysis. Bivariate and multivariate logistic regression analyses were performed to identify factors associated with visual impairment.
Results: The overall prevalence of visual impairment was found to be 36.95% (95% CI=33.2-40.8%). Factors associated with higher odds of visual impairment included age 51-60 years (AOR=2.37, 95%CI=1.29-4.44), age 61 and above (AOR=8.9, 95%CI=4.86-16.3), low wealth index (AOR=1.81, 95%CI: 1.14-3.2), being divorced or widowed (AOR=4.67, 95%CI: 2.77-7.86), no formal education (AOR=14.28, 95%CI: 2.82-71.46), and not using eyeglasses (AOR=3.94, 95%CI: 1.65-9.40). The most likely causes of visual impairment were found to be refractive error and cataract.
Conclusions And Recommendations: The prevalence of visual impairment among the study population was relatively high, and more than three-fifths of participants had unilateral visual impairment. Age, marital status, occupation, educational status, wealth index, and not wearing prescribed eyeglasses were significantly associated with visual impairment. Refractive error is the leading cause of visual impairment. Raising awareness of spectacle use and expanding cataract surgery coverage are urgently needed in this area.
abstract_id: PUBMED:21908584
Visual functioning and quality of life under low luminance: evaluation of the German Low Luminance Questionnaire. Purpose: To validate the German-translated Low Luminance Questionnaire (LLQ), a vision-related quality of life scale assessing mainly mesopic and scotopic functioning, and to determine the relationship between the severity of vision impairment, ocular conditions, and low luminance-related visual functioning.
Methods: In all, 274 participants, 184 patients with visual acuity <6/12 or a long-standing symptomatic eye condition and 90 controls, were recruited from an outpatient clinic at a German eye hospital. Participants underwent a clinical examination and completed the German LLQ and VF-14 scales. The validity and psychometric properties of the scales were assessed using Rasch analysis exploring key indices, such as instrument unidimensionality, discriminant ability, and targeting of item difficulty to patient ability. Multivariate analyses of low luminance functioning were adjusted for conventional visual functioning (VF-14 scores).
Results: The 30-item German LLQ initially displayed poor fit to the Rasch model. Following Rasch-guided iterative adjustments to the scale, a 23-item LLQ emerged as a valid and unidimensional scale. Visual functioning under low luminance consistently declined with worsening vision loss. Compared with patients with no vision impairment, those with mild or moderate/severe vision impairment recorded significantly poorer low luminance functioning scores (mean change, -6.33 and -16.62; P = 0.032 and P < 0.001, respectively). Age-related macular degeneration and cataract were independently associated with low luminance visual functioning, as was worse self-reported health.
Conclusions: Low luminance functioning is considerably compromised in visually impaired patients even at the mild spectrum of visual acuity loss. Additionally, the impact of age-related macular degeneration and cataract on patients' low luminance functioning is substantially independent of vision impairment.
Answer: Cataract surgery has been found to be cost-effective even among a subgroup of older patients with a low predicted probability for improvement in reported visual functioning. A study examining the cost-effectiveness of cataract surgery versus watchful waiting in patients with less than a 30% predicted probability of reporting improvements in visual function after surgery found that the incremental cost-effectiveness of surgery was $38,288 per quality-adjusted life year (QALY). However, in a subgroup of patients with a cataract surgery index (CSI) score greater than 11, indicating less than a 20% probability of improvement, the cost-effectiveness of cataract surgery was $53,500 per QALY. Sensitivity analysis suggested that this population of patients may often not derive a utility benefit from surgery, indicating that for some patients with a CSI score greater than 11, a strategy of watchful waiting may be equally effective and considerably less expensive (PUBMED:17063129).
Instruction: Should sentinel lymph node mapping be performed for colorectal cancer?
Abstracts:
abstract_id: PUBMED:25350306
Possibly improved treatment of colorectal cancer by sentinel lymph node mapping. The prognosis of colorectal cancer depends on radical surgical intervention. Chemotherapy has improved survival in patients with advanced disease. A considerable proportion of patients undergoing radical surgery will subsequently relapse. Adjuvant chemotherapy is reserved for patients with lymph node metastases, which is why undetected malignant lymph nodes result in understaging and exclusion from the possible benefit of adjuvant chemotherapy. With sentinel lymph node mapping it may be possible to detect and resect more malignant lymph nodes and perhaps even avoid extensive resections.
abstract_id: PUBMED:18023573
The application of sentinel lymph node mapping in colon cancer. Lymph node status is the most important prognostic factor for colorectal carcinoma. Complete lymph node dissection has historically been an integral part of the surgical treatment of these diseases. Sentinel lymph node mapping is a newer technology that allows selective removal of the first node draining a tumor. Sentinel node mapping is well accepted for the management of breast carcinoma and cutaneous melanoma, and has resulted in reduced morbidity without adversely affecting survival. Sentinel node mapping is currently being investigated for treatment of colorectal cancers. Recent studies show promise for incorporating the sentinel node mapping technique for treatment of several gastrointestinal malignancies.
abstract_id: PUBMED:31754853
Performance of Indocyanine green for sentinel lymph node mapping and lymph node metastasis in colorectal cancer: a diagnostic test accuracy meta-analysis. Background: Indocyanine green has been widely employed as a safe and easy technique for sentinel lymph node mapping in different types of cancer. Nonetheless, the use of indocyanine green has not been fully implemented due to the heterogeneous results found in published studies. Thus, the objective of this meta-analysis is to evaluate the overall performance of indocyanine green for sentinel lymph node mapping and node metastasis in patients undergoing colorectal cancer surgery.
Methods: An extensive systematic search was performed to identify relevant studies in English and Spanish with no time limit restrictions. For the meta-analysis, a hierarchical summary receiver operating characteristic (HSROC) curve was constructed, and quantitative data synthesis was performed using random effects models. Specificity, sensitivity, positive, and negative likelihood ratios were obtained from the corresponding HSROC. Between-study heterogeneity was visually evaluated using a Galbraith plot, and publication bias was quantified using Deeks' method.
Results: A total of 11 studies were included for analysis. The pooled detection rate for sentinel lymph node mapping was 91% (80-98%). Covariates significantly influencing the pooled detection rate were having colon cancer (estimate: 1.3001; 1.114 to 1.486; p < 0.001) and the usage of a laparoscopic approach (estimate: 1.3495; 1.1029 to 1.5961; p < 0.001). The performance of indocyanine green for the detection of metastatic lymph nodes yielded an area under the ROC curve of 66.5%, sensitivity of 64.3% (51-76%), and specificity of 65% (36-85%).
Conclusions: Indocyanine green for the detection of sentinel lymph node mapping demonstrates better accuracy when used in colonic cancer and by a laparoscopic approach. Nevertheless, its overall performance for the detection of lymph node metastasis is poor.
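To illustrate why the pooled sensitivity (64.3%) and specificity (65%) reported above amount to poor discriminative performance, a short Python sketch converts them into positive and negative likelihood ratios; only the two pooled values from the abstract are used, the rest is the standard formula.

```python
sens, spec = 0.643, 0.65   # pooled values reported in the abstract

lr_pos = sens / (1 - spec)   # how much a positive result raises suspicion
lr_neg = (1 - sens) / spec   # how much a negative result lowers it

print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")
# LR+ ≈ 1.84 and LR- ≈ 0.55: both close to 1, so a positive or negative
# ICG result barely shifts the pre-test probability of nodal metastasis.
```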
abstract_id: PUBMED:12808612
Sentinel lymph node mapping in colorectal cancer. Background: Ultrastaging, by serial sectioning combined with immunohistochemical techniques, improves detection of lymph node micrometastases. Sentinel lymph node mapping and retrieval provides a representative node(s) to facilitate ultrastaging. The impact on staging of carcinoma of the colon and rectum in all series emphasizes the importance of this technique in cancer management. Now the challenge is to determine the biological relevance and prognostic implications.
Methods: The electronic literature (1966 to present) on sentinel node mapping in carcinoma of the colon and rectum was reviewed. Further references were obtained by cross-referencing from key articles.
Results: Lymphatic mapping appears to be readily applicable to colorectal cancer and identifies those lymph nodes most likely to harbour metastases. Sentinel node mapping carries a false-negative rate of approximately 10 per cent in larger studies, but will also potentially upstage a proportion of patients from node negative to node positive following the detection of micrometastases. The prognostic implication of these micrometastases requires further evaluation.
Conclusion: Further follow-up to assess the prognostic significance of micrometastases in colorectal cancer is required before the staging benefits of sentinel node mapping can have therapeutic implications.
abstract_id: PUBMED:15232693
Sentinel lymph node biopsy in colorectal carcinoma. Lymph node status, an important prognostic factor in colon and rectal cancer, is affected by the selection and number of lymph nodes examined and by the quality of histopathological assessment. The multitude of influences is accompanied by an elevated risk of quality alterations. Sentinel lymph node biopsy (SLNB) is currently under investigation for its value in improving determination of the nodal status. Worldwide, data on 800 to 1000 patients from about 20 relatively small studies are available, focusing more on colon than on rectal cancer patients. SLNB may be of clinical value for the subset of patients who are initially node-negative after H&E staining but reveal small micrometastases or isolated tumor cells in the SLN after intensified histopathological workup. If further studies confirm that these patients benefit from adjuvant therapy, the method may have an important effect on the therapy and prognosis of colon cancer patients as well. Another potential application could be the determination of the nodal status after endoscopic excision of early cancer to avoid bowel resection and lymphonodectomy.
abstract_id: PUBMED:21410041
Validation and feasibility of ex vivo sentinel lymph node "mapping" by methylene blue in colorectal cancer. Background/aims: There are currently divided opinions about the usefulness of sentinel lymph node mapping in colorectal carcinoma. This technique can potentially be useful in determining the volume of resection, limiting the number of analyzed lymph nodes to the sentinel nodes, and re-staging when metastases are detected in the sentinel lymph node. The aim of this study was to examine the feasibility of postoperative sentinel lymph node detection (hereinafter referred to as ex vivo sentinel lymph node mapping) in patients with colorectal carcinoma.
Methodology: The clinical study included a total of 58 patients. Thirteen patients were intraoperatively excluded. Ex vivo sentinel lymph node mapping by methylene blue was used in this study to detect the lymphatic micrometastases. Lymph node preparations were also stained with hematoxylin eosin, followed by immunohistochemical staining of serial sections.
Results: The ex vivo sentinel lymph node technique was performed in 45 patients and was successful in 41/45 (91.1%). A mean of 22.9 lymph nodes (range: 11 to 43) and 1.7 sentinel lymph nodes (range: 0 to 4) were resected and stained. Sentinel lymph node staining was negative in 15/45 patients (33.3% false-negative results).
Conclusions: Limited histopathology analysis by ex vivo sentinel lymph node mapping cannot replace a complete histological analysis of all resected lymph nodes.
abstract_id: PUBMED:23080027
Lymph node staging in gastrointestinal cancer. Combination of methylene blue-assisted lymph node dissection and ex vivo sentinel lymph node mapping. Histopathological lymph node staging is of crucial importance for prognosis estimation and therapy stratification in gastrointestinal cancer. However, the recommended numbers of lymph nodes that should be evaluated are often not reached in routine practice. Methylene blue-assisted lymph node dissection was introduced as a new, simple and efficient technique to improve lymph node harvest in gastrointestinal cancer. This method is inexpensive, causes no delay and needs no toxic substances. All studies performed revealed a highly significant improvement in lymph node harvest in comparison to the conventional technique. Moreover, this technique can be combined with a new ex vivo sentinel lymph node mapping that, for the first time, is based on histological sentinel lymph node detection. The success rate of this method is similar to conventional techniques and it enables efficient application of extended investigation methods, such as immunohistochemistry or the polymerase chain reaction.
abstract_id: PUBMED:16770539
Sentinel lymph node mapping with GI cancer. Precise evaluation of lymph node status is one of the most important factors in determining clinical outcome in treating gastro-intestinal (GI) cancer. Sentinel lymph node (SLN) mapping clearly has become highly feasible and accurate in staging GI cancer. The lunchtime symposium focused on the present status of SLN mapping for GI cancer. Dr. Kitigawa proposed a new strategy using sentinel node biopsy for esophageal cancer patients with clinically early stage disease. Dr. Uenosono reported on whether the SLN concept is applicable for gastric cancer through his analysis of more than 180 patients with cT1-2, N0 tumors. The detection rate was 95%, the false negative rate of lymph node metastasis including micro-metastasis was 4%, and accuracy was 99% in gastric cancer patients with cT1N0. Dr. Bilchik recommended the best technique for identifying SLNs in colorectal cancer: a combination of radiotracer and blue dye method, emphasizing that this technique will become increasingly popular because of the SLN concept, with improvement in staging accuracy. He stressed that this novel procedure offers the potential for significant upstaging of GI cancer. Dr. Saha emphasized that SLN mapping for colorectal cancer is highly successful and accurate in predicting the presence or absence of nodal disease with a relatively low incidence of skip metastases. It provided the "right nodes" to the pathologists for detailed analysis for appropriate staging and treatment with adjuvant chemotherapy. Although more evidence from large-scale multicenter clinical trials is required, SLN mapping may be very useful for individualizing multi-modal treatment for esophageal cancer and might be widely acceptable even for GI cancer.
abstract_id: PUBMED:21805419
Intraoperative sentinel lymph node mapping in patients with colon cancer: study of 38 cases. Background/aims: Sentinel lymph node mapping has become a cornerstone of oncologic surgery because it is a proven method for identifying nodal disease in melanoma and breast cancer. In addition, it can ameliorate the surgical morbidity secondary to lymphadenectomy. However, experience with sentinel lymph node mapping for carcinoma of the colon and other visceral malignancies is limited. The purpose of this study was to evaluate the feasibility and reliability of in vivo sentinel lymph node mapping in patients with colon cancer.
Methods: In the period March 2004 through June 2009, 38 patients underwent curative surgery for colon cancer. Thirty-eight patients with intraperitoneal colon tumors undergoing resection were studied prospectively. Sentinel lymph nodes were identified as the first blue-stained node(s) after in vivo peritumoral injection of Isosulfan blue dye.
Results: Detection of sentinel lymph nodes was successful in 36 out of 38 colon cancer patients. In 94.7% of the patients with colon cancer, at least one sentinel lymph node was found. No patients had a false negative sentinel node. The sensitivity was 100%, with a negative predictive value of 100%. Aberrant lymphatic drainage was not identified in any patient.
Conclusion: The technique of in vivo sentinel lymph node mapping is technically feasible with high sensitivity, high negative predictive value, and high accuracy. Although sentinel lymph node mapping did not alter the surgical management of colon cancer, it does make possible a more focused and cost-effective pathologic evaluation of nodal disease.
abstract_id: PUBMED:16555150
Ex vivo sentinel lymph node mapping in colorectal carcinoma. Objective: To investigate the feasibility of ex vivo sentinel lymph node (SLN) mapping with methylene blue staining and its clinical value for predicting regional lymph node metastasis in colorectal cancer.
Methods: Methylene blue (1 ml) was injected submucosally around the tumor immediately after resection. After 2-5 minutes, the first blue-dyed lymph nodes, sentinel lymph nodes (SLNs), were harvested for pathological examination, and compared with the pathological results of other lymph nodes.
Results: Of the total 32 patients, 57 SLNs were successfully identified in 30 patients, with an average of 1.9 nodes per person. The successful labeling rate was 93.8% (30/32). Among the 13 patients with positive SLNs, there were 5 patients with positive non-SLNs and 8 patients with negative non-SLNs. Among the 17 patients with negative SLNs, there were 15 patients with negative non-SLNs and 2 patients with positive non-SLNs. The accuracy of SLNs for regional lymph node metastasis was 93.3% (28/30), the false negative rate was 11.8% (2/17), and the specificity was 100% (13/13).
Conclusions: Ex vivo sentinel lymph node mapping with methylene blue staining in colorectal carcinoma is technically feasible and can effectively reflect the metastatic status of regional lymph nodes.
Answer: Sentinel lymph node (SLN) mapping is a technique that has been investigated for its potential to improve the treatment of colorectal cancer. The status of lymph nodes is a critical prognostic factor for colorectal carcinoma, and complete lymph node dissection has historically been part of the surgical treatment for these diseases (PUBMED:18023573). SLN mapping allows for the selective removal of the first node draining a tumor, which could potentially reduce morbidity without adversely affecting survival, as seen in breast carcinoma and cutaneous melanoma (PUBMED:18023573).
The use of SLN mapping in colorectal cancer aims to detect and resect more malignant lymph nodes, possibly avoiding extensive resections and improving the detection of micrometastases that could lead to upstaging of the disease and influence adjuvant chemotherapy decisions (PUBMED:25350306; PUBMED:12808612). This could be particularly beneficial for patients who would otherwise be understaged and excluded from the benefits of adjuvant chemotherapy (PUBMED:25350306).
Studies have shown that SLN mapping can be feasible and accurate in predicting the presence or absence of nodal disease in colorectal cancer, with high sensitivity and negative predictive value (PUBMED:21805419). However, the overall performance of Indocyanine green, a dye used for SLN mapping, for the detection of lymph node metastasis is considered poor, with better accuracy when used in colonic cancer and by a laparoscopic approach (PUBMED:31754853).
Despite the potential benefits, there are challenges and limitations to the widespread adoption of SLN mapping in colorectal cancer. For instance, the prognostic implication of micrometastases detected through SLN mapping requires further evaluation (PUBMED:12808612), and the false-negative rate of approximately 10% in larger studies indicates that the technique is not infallible (PUBMED:12808612). Additionally, ex vivo SLN mapping cannot replace a complete histological analysis of all resected lymph nodes (PUBMED:21410041).
In conclusion, while SLN mapping shows promise for improving the staging and treatment of colorectal cancer, further research and validation are needed to fully understand its prognostic significance and to refine the technique for consistent and reliable use in clinical practice.
Instruction: Prior endoscopy in patients with newly diagnosed celiac disease: a missed opportunity?
Abstracts:
abstract_id: PUBMED:23361572
Prior endoscopy in patients with newly diagnosed celiac disease: a missed opportunity? Background: Celiac disease (CD) is under-diagnosed in the United States, and factors related to the performance of endoscopy may be contributory.
Aim: To identify newly diagnosed patients with CD who had undergone a prior esophagogastroduodenoscopy (EGD) and examine factors contributing to the missed diagnosis.
Methods: We identified all patients aged ≥ 18 years whose diagnosis of CD was made by endoscopy with biopsy at our institution (n = 316), and searched the medical record for a prior EGD. We compared patients with a prior EGD to those without a prior EGD with regard to age at diagnosis and gender, and enumerated the indications for EGD.
Results: Of the 316 patients diagnosed by EGD with biopsy at our center, 17 (5%) had previously undergone EGD. During the prior non-diagnostic EGD, a duodenal biopsy was not performed in 59% of the patients, and ≥ 4 specimens (the recommended number) were submitted in only 29% of the patients. On the diagnostic EGD, ≥ 4 specimens were submitted in 94%. The mean age at diagnosis of those with missed/incident CD was 53.1 years, slightly older than those diagnosed with CD on their first EGD (46.8 years, p = 0.11). Both groups were predominantly female (missed/incident CD: 65 vs. 66%, p = 0.94).
Conclusions: Among 17 CD patients who had previously undergone a non-diagnostic EGD, non-performance of duodenal biopsy during the prior EGD was the dominant feature. Routine performance of duodenal biopsy during EGD for the indications of dyspepsia and reflux may improve CD diagnosis rates.
abstract_id: PUBMED:34138763
Panintestinal capsule endoscopy in patients with celiac disease. Introduction: Capsule endoscopy has proven its utility in diagnosing villous atrophy and lymphoma in patients with celiac disease. Recently, a novel capsule endoscopy system was introduced which enables the examination of the small and large bowel. So far, it has not been evaluated in patients with celiac disease.
Objective: The primary objective of this study was to evaluate the novel panintestinal capsule endoscopy system in patients with celiac disease.
Methods: Eleven patients with histologically proven celiac disease (Marsh 0-IV), who underwent a panintestinal capsule endoscopy between March 2018 and April 2019 at our institution, were included in this retrospective single-center study. All patients performed standard bowel preparation prior to the examination. Diagnostic yield, safety and therapeutic impact were analyzed. In addition, the correlation between capsule endoscopy findings and the histology of the duodenal mucosa was assessed.
Results: Panintestinal capsule endoscopy was feasible and produced an acceptable visualization quality in all cases. Concordance of capsule endoscopy findings with the Marsh classification showed a good correlation (r = 0.8). No lymphomas were detected. Evaluation of the colon revealed diminutive polyps (median size 4 mm) in 18% of patients.
Conclusions: The novel panintestinal capsule endoscopy system shows a fair correlation with the Marsh classification in patients with celiac disease. It is also capable of identifying colon polyps. Therefore, the novel panintestinal capsule endoscopy system can be considered for patients with celiac disease and an indication for capsule endoscopy.
abstract_id: PUBMED:24595045
Coeliac patients are undiagnosed at routine upper endoscopy. Background And Aims: Two out of three patients with Coeliac Disease (CD) in Australia are undiagnosed. This prospective clinical audit aimed to determine how many CD patients would be undiagnosed if duodenal biopsy had only been performed if the mucosa looked abnormal or the patient presented with typical CD symptoms.
Methods: All eligible patients presenting for upper gastrointestinal endoscopy (OGD) in a regional center from 2004-2009 underwent prospective analysis of presenting symptoms and duodenal biopsy. Clinical presentations were defined as either Major Clinical Indicators (CI) for duodenal biopsy (diarrhea, weight loss, iron deficiency, CD family history or positive celiac antibodies [Ab]) or Minor CIs (atypical symptoms). Newly diagnosed CD patients had follow-up celiac antibody testing.
Results: Thirty-five (1.4%) new cases of CD were identified in the 2,559 patients biopsied at upper endoscopy. Almost a quarter (23%) of cases presented with atypical symptoms. There was an inverse relationship between presentation with Major CIs and increasing age (<16, 16-59 and >60 years: 100%, 81% and 50% respectively, p = 0.03); 28% of newly diagnosed CD patients were aged over 60 years. Endoscopic appearance was a useful diagnostic tool in only 51% (18/35) of CD patients. Coeliac antibodies were positive in 34/35 CD patients (sensitivity 97%).
Conclusions: Almost one quarter of new cases of CD presented with atypical symptoms and half of the new cases had unremarkable duodenal mucosa. At least 10% of new cases of celiac disease are likely to be undiagnosed at routine upper endoscopy, particularly patients over 60 years who more commonly present atypically. All new CD patients could be identified in this study by performing pre-operative celiac antibody testing on all patients presenting for OGD and proceeding to biopsy only positive antibody patients and those presenting with either Major CI or abnormal duodenal mucosa for an estimated cost of AUS$4,629 and AUS$3,710 respectively.
abstract_id: PUBMED:28412572
Quantitative analysis of patients with celiac disease by video capsule endoscopy: A deep learning method. Background: Celiac disease is one of the most common diseases in the world. Capsule endoscopy is an alternative way to visualize the entire small intestine without invasiveness to the patient. It is useful to characterize celiac disease, but hours are need to manually analyze the retrospective data of a single patient. Computer-aided quantitative analysis by a deep learning method helps in alleviating the workload during analysis of the retrospective videos.
Method: Capsule endoscopy clips from 6 celiac disease patients and 5 controls were preprocessed for training. Frames with a large field of opaque extraluminal fluid or air bubbles were removed automatically using a pre-selection algorithm. The frames were then cropped and intensity-corrected prior to frame rotation in the proposed new method. GoogLeNet was trained with these frames. Capsule endoscopy clips from 5 additional celiac disease patients and 5 additional control patients were then used for testing. The trained GoogLeNet was able to distinguish the frames from capsule endoscopy clips of celiac disease patients vs controls. Quantitative measurement with evaluation of confidence was developed to assess the severity level of pathology in the subjects.
Results: Relying on the evaluation confidence, GoogLeNet achieved 100% sensitivity and specificity for the testing set. A t-test confirmed that the evaluation confidence significantly distinguished celiac disease patients from controls. Furthermore, it was found that the evaluation confidence may also relate to the severity level of small bowel mucosal lesions.
Conclusions: A deep convolutional neural network was established for quantitative measurement of the existence and degree of pathology throughout the small intestine, which may improve computer-aided clinical techniques to assess mucosal atrophy and other etiologies in real-time with videocapsule endoscopy.
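The training pipeline above (frame pre-selection, cropping, rotation, and fine-tuning GoogLeNet as a binary frame classifier) is only summarized in the abstract; the sketch below is a generic transfer-learning approximation of that idea in PyTorch with recent torchvision, where the folder layout, preprocessing details, and training schedule are assumptions rather than the authors' actual code.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical folder layout: frames/train/{celiac,control}/*.png
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.RandomRotation(180),          # frame rotation, as described in the abstract
    transforms.CenterCrop(224),              # crop to the network's input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("frames/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Pretrained GoogLeNet with its final classifier replaced for the 2-class task.
model = models.googlenet(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):                       # assumed, short fine-tuning schedule
    for frames, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()
        optimizer.step()
```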
abstract_id: PUBMED:24976712
Capsule endoscopy: current practice and future directions. Capsule endoscopy (CE) has transformed investigation of the small bowel providing a non-invasive, well tolerated means of accurately visualising the distal duodenum, jejunum and ileum. Since the introduction of small bowel CE thirteen years ago a high volume of literature on indications, diagnostic yields and safety profile has been presented. Inclusion in national and international guidelines has placed small bowel capsule endoscopy at the forefront of investigation into suspected diseases of the small bowel. Most commonly, small bowel CE is used in patients with suspected bleeding or to identify evidence of active Crohn's disease (CD) (in patients with or without a prior history of CD). Typically, CE is undertaken after upper and lower gastrointestinal flexible endoscopy has failed to identify a diagnosis. Small bowel radiology or a patency capsule test should be considered prior to CE in those at high risk of strictures (such as patients known to have CD or presenting with obstructive symptoms) to reduce the risk of capsule retention. CE also has a role in patients with coeliac disease, suspected small bowel tumours and other small bowel disorders. Since the advent of small bowel CE, dedicated oesophageal and colon capsule endoscopes have expanded the fields of application to include the investigation of upper and lower gastrointestinal disorders. Oesophageal CE may be used to diagnose oesophagitis, Barrett's oesophagus and varices but reliability in identifying gastroduodenal pathology is unknown and it does not have biopsy capability. Colon CE provides an alternative to conventional colonoscopy for symptomatic patients, while a possible role in colorectal cancer screening is a fascinating prospect. Current research is already addressing the possibility of controlling capsule movement and developing capsules which allow tissue sampling and the administration of therapy.
abstract_id: PUBMED:29886766
Capsule endoscopy for patients with coeliac disease. Introduction: Coeliac disease is an autoimmune mediated condition in response to gluten. A combination of innate and adaptive immune responses results in villous shortening in the small bowel (SB) that can be morphologically picked up on capsule endoscopy. It is the only imaging modality that can provide mucosal views of the entire SB, while histology is generally limited to the proximal SB. Radiological modalities are not designed to pick up changes in villous morphology. Areas covered: In this review, we provide a comprehensive analysis on the justified use of small bowel capsule endoscopy (SBCE) in the assessment of patients with coeliac disease; compare SBCE to histology, serology, and symptomatology; and provide an overview on automated quantitative analysis for the detection of coeliac disease. We also provide insight into future work on SBCE in relation to coeliac disease. Expert commentary: SBCE has opened up new avenues for the diagnosis and monitoring of patients with coeliac disease. However, larger studies with new and established coeliac disease patients and with greater emphasis on morphological features on SBCE are required to better define the role of SBCE in the setting of coeliac disease.
abstract_id: PUBMED:20552401
The role of capsule endoscopy in suspected celiac disease patients with positive celiac serology. Background: Endomysial antibody (EMA) and tissue transglutaminase (tTG) antibody testing is used to screen subjects with suspected celiac disease. However, the traditional gold standard for the diagnosis of celiac disease is histopathology of the small bowel. As villous atrophy may be patchy, duodenal biopsies could potentially miss the abnormalities. Capsule endoscopy can obtain images of the whole small intestine and may be useful in the early diagnosis of celiac disease.
Aims: To evaluate suspected celiac disease patients who have positive celiac serology and normal duodenal histology and to determine, with capsule endoscopy, whether these patients have any endoscopic markers of celiac disease.
Methods: Twenty-two subjects with positive celiac serology (EMA or tTG) were prospectively evaluated. Eight of the subjects had normal duodenal histology and 14 had duodenal histology consistent with celiac disease. All subjects underwent capsule endoscopy. Endoscopic markers of villous atrophy such as loss of mucosal folds, scalloping, mosaic pattern, and visible vessels were assessed.
Results: Eight subjects with normal duodenal histology had normal capsule endoscopy findings. In the 14 subjects with duodenal histology that was consistent with celiac disease, 13 had celiac disease changes seen at capsule endoscopy. One subject with normal capsule endoscopy findings showed Marsh IIIc on duodenal histology. Using duodenal histology as the gold standard, capsule endoscopy had a sensitivity of 93%, specificity of 100%, PPV of 100%, and NPV of 89% in recognizing villous atrophy.
Conclusions: Capsule endoscopy is useful in the detection of villous abnormalities in untreated celiac disease. Patients with positive celiac serology (EMA or tTG) and normal duodenal histology are unlikely to have capsule endoscopy markers of villous atrophy.
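The reported sensitivity, specificity, PPV, and NPV follow from the 2x2 counts given in the abstract (13 of 14 histology-positive subjects detected, all 8 histology-normal subjects negative on capsule endoscopy); a small Python sketch reproduces that arithmetic.

```python
tp, fn = 13, 1   # histology-positive subjects: capsule endoscopy detected 13 of 14
tn, fp = 8, 0    # histology-normal subjects: all 8 had a normal capsule study

sensitivity = tp / (tp + fn)   # 13/14 ≈ 0.93
specificity = tn / (tn + fp)   # 8/8  = 1.00
ppv = tp / (tp + fp)           # 13/13 = 1.00
npv = tn / (tn + fn)           # 8/9  ≈ 0.89

for name, value in [("sensitivity", sensitivity), ("specificity", specificity),
                    ("PPV", ppv), ("NPV", npv)]:
    print(f"{name}: {value:.0%}")
```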
abstract_id: PUBMED:33743925
Indications, Contraindications, and Considerations for Video Capsule Endoscopy. Video capsule endoscopy is indicated in a broad range of clinical settings, most commonly in evaluating suspected small bowel bleeding. It is also useful in diagnosing Crohn's disease and monitoring patients with known Crohn's. Video capsule endoscopy has a role in evaluating patients with refractory celiac disease symptoms and in surveying patients with polyposis syndromes. The only absolute contraindication to video capsule endoscopy is luminal gastrointestinal tract obstruction. Despite manufacturer statement, video capsule endoscopy can be used safely in patients with implantable cardiac devices including pacemakers, defibrillators, and ventricular assist devices.
abstract_id: PUBMED:34468093
The video capsule endoscopy. Video capsule endoscopy allows exploration of the part of the small intestine that is unreachable by standard bidirectional endoscopy. It requires a specific device and bowel preparation, but it is an outpatient examination with acceptable patient tolerance. There are several indications, including obscure gastrointestinal bleeding, iron deficiency anemia, assessment of Crohn's disease extension and phenotype, hereditary polyposis and coeliac disease. It does not itself offer therapeutic options, so assisted enteroscopy is required if a lesion is detected. The use of a dissolvable Patency Capsule lowers the risk of mechanical intestinal occlusion if a stenosis is suspected. Colon capsule endoscopy is very promising for colorectal cancer screening and follow-up of inflammatory bowel diseases, but it is not currently validated or recommended.
abstract_id: PUBMED:31094178
Capsule endoscopy for diagnosis of celiac disease. In this review we analyzed the guidelines for diagnosis and management of celiac disease, as well as the recent studies published on this issue. Capsule endoscopy could be used in patients unwilling or unable to undergo conventional endoscopy, in patients who have discordant results between serological and histopathological investigation, and in patients with nonresponsive or refractory celiac disease.
Answer: The abstract from PUBMED:23361572 indicates that prior endoscopy in patients with newly diagnosed celiac disease can indeed represent a missed opportunity for earlier diagnosis. The study found that among 316 patients diagnosed with celiac disease (CD) by endoscopy with biopsy at their institution, 17 (5%) had previously undergone an esophagogastroduodenoscopy (EGD) that did not lead to a diagnosis. Notably, during the prior non-diagnostic EGD, a duodenal biopsy was not performed in 59% of these patients, and the recommended number of ≥4 specimens were submitted in only 29% of the cases. In contrast, during the diagnostic EGD, ≥4 specimens were submitted in 94% of cases. The study suggests that routine performance of duodenal biopsy during EGD, especially for indications such as dyspepsia and reflux, may improve the diagnosis rates of CD.
This missed opportunity is further supported by the findings in PUBMED:24595045, which reported that almost one quarter of new cases of CD presented with atypical symptoms and half of the new cases had unremarkable duodenal mucosa. The study suggested that at least 10% of new cases of CD are likely to be undiagnosed at routine upper endoscopy, particularly in patients over 60 years who more commonly present atypically.
These findings underscore the importance of considering CD in patients undergoing EGD, even when they do not have typical symptoms or duodenal mucosal changes, and highlight the potential for improving CD diagnosis rates through more consistent biopsy practices during endoscopy.
Instruction: Lipomas after blunt soft tissue trauma: are they real?
Abstracts:
abstract_id: PUBMED:17553055
Lipomas after blunt soft tissue trauma: are they real? Analysis of 31 cases. Background: Soft tissue trauma and lipomas are common occurrences in surgical practice. Lipomas are defined as benign tumours of adipose tissue with so far unexplained pathogenesis and aetiology. A link between preceding blunt soft tissue trauma at the site of the tumour and the formation of lipomas has been described earlier. These soft tissue tumours have been named 'post-traumatic lipomas'.
Objectives: In a retrospective review, to analyse all patients with benign adipose tissue tumours treated at our institution between August 2001 and January 2007.
Methods: All cases were reviewed regarding medical history, magnetic resonance imaging findings, intraoperative findings, clinical chemistry and histology.
Results: In 170 patients presenting with lipomas, 34 lipomas in 31 patients were identified as post-traumatic. The mean ± SD age of the patients with post-traumatic lipomas was 52 ± 14.5 years. The mean time elapsed between soft tissue trauma and lipoma formation was 2.0 years (range 0.5-5). Twenty-five of the 31 patients reported an extensive and slowly resolving haematoma after blunt tissue trauma at the site of lipoma formation. The mean ± SD body mass index was 29.0 ± 7.6 kg/m². Fourteen of 31 patients presented with an elevated partial thromboplastin time. Eleven of 34 lipomas were found on the upper extremities, five on the lower extremities, 13 on the trunk, and two on the face. All tumours were located subcutaneously, superficial to the musculofascial system. Thirty-three lipomas were removed by surgical excision and one by liposuction following an incisional biopsy. Histological examination revealed capsulated and noncapsulated benign adipose tissue in all 34 tumours.
Conclusions: The existence of a pathogenic link between blunt soft tissue trauma and the formation of post-traumatic lipomas is still controversial. Two potential mechanisms are discussed. Firstly, the formation of so-called post-traumatic 'pseudolipomas' may result from a prolapse of adipose tissue through fascia induced by direct impact. Alternatively, lipoma formation may be explained as a result of preadipocyte differentiation and proliferation mediated by cytokine release following soft tissue damage after blunt trauma and haematoma formation.
abstract_id: PUBMED:15841381
Posttraumatic pseudolipoma: MRI appearances. The goal of this study was to describe the MRI characteristics of posttraumatic pseudolipomas. Ten patients with previous history of blunt trauma or local surgery were investigated with MRI at the level of their deformity. The etiology was blunt trauma in eight patients and postoperative trauma in two. For all patients medical documentation, in the form of clinical history and physical examination, confirmed that a visible hematoma was present acutely at the same location following the injury and that the contour deformity subsequently appeared. All patients underwent liposuction. Preoperative bilateral MRI examinations were performed on all patients. The mean clinical follow-up was 17.8 months. MRI examinations were interpreted in consensus by two experienced musculoskeletal radiologists with attention to fatty extension (subcutaneous fatty thickness and anatomical extension), asymmetry compared with the asymptomatic side, the presence or absence of fibrous septae or nonfatty components, and patterns of contrast enhancement. Ten posttraumatic pseudolipomas were identified. Clinically, they showed as subcutaneous masses with the consistency of normal adipose tissue. Their locations were the abdomen (n=1), hip (n=1), the upper thigh (n=6), the knee (n=1), and the ankle (n=1). On MRI examinations, using the contralateral side as a control, pseudolipomas appeared as focal fatty masses without a capsule or contrast enhancement. Posttraumatic pseudolipomas may develop at a site of blunt trauma or surgical procedures often antedated by a soft tissue hematoma. Characteristic MRI findings are unencapsulated subcutaneous fatty masses without contrast enhancement.
abstract_id: PUBMED:12077703
MR imaging of soft-tissue masses of the foot. As with other parts of the musculoskeletal system, the soft tissues of the foot can be affected by a wide variety of pathologic entities including trauma, congenital abnormalities, infections, and neoplastic disorders. While plain radiographs are usually the initial examination for evaluation of pathology, magnetic resonance imaging (MRI) is critical to evaluate for abnormalities within the ligaments, tendons, and other nonosseous structures within the foot. The constellation of clinical and MRI findings often allows a relatively specific diagnosis to be rendered. This article discusses both benign and malignant processes within the soft tissues of the foot and presents their characteristic imaging findings with MRI.
abstract_id: PUBMED:37572150
Nodular cystic fat necrosis: a distinctive rare soft-tissue mass. We report the case of a 34-year-old female who was evaluated for a right lower extremity soft-tissue mass, found to be a large cystic lesion bound by fibrous tissue containing innumerable, freely mobile nodules of fat. Her presentation suggested the diagnosis of nodular cystic fat necrosis (NCFN), a rare entity that likely represents a morphological subset of fat necrosis potentially caused by vascular insufficiency secondary to local trauma. Her lesion was best visualized using MRI, which revealed characteristic imaging features of NCFN including nodular lipid-signal foci that suppress on fat-saturated sequences, intralesional fluid with high signal intensity on T2-weighted imaging, and a contrast-enhancing outer capsule with low signal intensity on T1-weighted imaging. Ultrasound imaging offered the advantage of showing mobile hyperechogenic foci within the anechoic cystic structure, and the lesion was otherwise visualized on radiography as a nonspecific soft-tissue radiopacity. She was managed with complete surgical excision with pathologic evaluation demonstrating, similar to the radiologic features, innumerable free-floating, 1-5 mm, smooth, nearly uniform spherical nodules of mature fat with widespread necrosis contained within a thick fibrous pseudocapsule. Follow-up imaging revealed no evidence of remaining or recurrent disease on postoperative follow-up MRI. The differential diagnosis includes lipoma with fat necrosis, lipoma variant, atypical lipomatous tumor, and a Morel-Lavallée lesion. There is overlap in the imaging features between fat necrosis and both benign and malignant adipocytic tumors, occasionally making this distinction based solely on imaging findings challenging. To our knowledge, this is the largest example of NCFN ever reported.
abstract_id: PUBMED:20546224
Bilateral tibialis anterior muscle herniation simulating a soft tissue tumour in a young amateur football player. Muscle herniation is a focal protrusion of muscle tissue through a defect in the deep fascial layer. Anterior tibial muscle is the most commonly affected muscle of the lower extremities because its fascia is the most vulnerable to trauma. Clinically it is characterized by asymptomatic or painful, skin-coloured, soft, subcutaneous nodules of various size depending on the position. The diagnosis is usually made clinically based on its typical manifestations, but ultrasonographic examination is useful for detecting the fascial defect and excluding other conditions caused by soft tissue tumours such as lipomas, angiolipomas, fibromas, schwannomas or varicosities. Although this entity is not rare, it has been less well documented in the dermatological literature. We report a case of bilateral tibialis anterior muscle herniation mimicking a soft tissue tumour in a young amateur football player.
abstract_id: PUBMED:19223256
Post-traumatic pseudolipomas--a review and postulated mechanisms of their development. Post-traumatic pseudolipomas develop in areas of the body that have been subjected to acute, severe, blunt trauma and chronic trauma. This study aimed to review the literature for reports of post-traumatic pseudolipomas on Medline and identify the possible mechanisms of their development. In the literature, 124 such cases were identified relating to case reports and case series; of these, 98 occurred in females and 26 in males. The majority of the cases occurred secondary to severe, acute, blunt trauma. The initial postulated mechanisms for development of post-traumatic pseudolipomas were anatomically and mechanically based. Recently, it was shown that there is a close relationship between inflammation and adipogenesis. Blunt trauma results in an inflammatory process. We postulate that post-traumatic pseudolipoma development occurs as a result of inflammatory triggers and an optimal local milieu at the site of development by making an analogy to an in vivo murine tissue engineering model for neo-adipogenesis.
abstract_id: PUBMED:17058061
Diagnosis and treatment of posttraumatic pseudolipomas. A retrospective analysis. Background: Both trauma and lipomas are frequently encountered in day-to-day clinical practice. Although lipomas are defined both clinically and pathologically as benign fatty tissue tumours, their aetiology is still not clear.
Methods: In this study 19 patients with 23 posttraumatic lipomas were analysed retrospectively with reference to ultrasound and MRI diagnosis, history, laboratory results and histopathological investigations.
Results: The mean age of the patients was 50.5 years (±15.5). The causative soft tissue trauma dated back an average of 2.6 years. When the histories were taken, 16 of the 19 patients reported vast, slow-resorbing posttraumatic haematomas. Nine of the 23 lipomas were sited in the upper extremity, 3 in the lower extremity, 9 in the trunk and 2 in the face. All were located epifascially. In 22 cases the lipomas were excised, and in 1 case the lipoma was removed by liposuction. Histological examination demonstrated capsulated benign fatty tissue tumours in 19 cases and uncapsulated benign fatty tissue tumours in 4. The average body mass index (BMI) was 29 kg/m². Removal of the tumour resulted in a good aesthetic result in all patients.
Conclusions: The link between a blunt soft tissue injury and the development of a posttraumatic lipoma is still the subject of controversy; there are two mechanisms that seem more likely than any others proposed: (1) the "pseudolipoma" as the result of a prolapse of fatty tissue as an immediate result of trauma and (2) the development of a lipoma by way of differentiation of pre-adipocytes mediated by cytokines released by a posttraumatic haematoma. There appears to be a correlation between an increased partial thromboplastin time (PTT) and the development of posttraumatic lipomas. The generalised increase in the volume of body fat documented by the elevated BMI supports the idea that lipomas arise from the prolapse of adipose tissue. However, there is no single mechanistic explanation for the development of posttraumatic lipomas. They are probably caused by multiple factors and not by isolated pathological mechanisms.
abstract_id: PUBMED:18826615
Giant inframuscular lipoma disclosed 14 years after a blunt trauma: a case report. Introduction: Lipoma is the most frequent benign tumor of the soft tissue. This lesion is often asymptomatic except in cases of enormous masses compressing nervous-vascular structures. Although the diagnosis is mostly clinical, imaging tools are useful to confirm the adipose nature of the lesion and to define its anatomic border. Sometimes, lipomas may be the result of a previous trauma, such as in this patient.
Case Presentation: A 45-year-old man presented at our institution with a giant hard firm mass in the upper external quadrant of the right buttock disclosed after a weight loss diet. Subsequent magnetic resonance imaging showed a giant adipose mass developed beneath the large gluteal muscle and among the fibers of the medium and small gluteal muscles. When questioned on his medical history, the patient reported a blunt trauma of the lower back 14 years earlier. He underwent surgery and histological examination confirmed a giant lipoma.
Conclusion: Lipomas might result from a previous trauma. It is hypothesized that the trigger mechanism is activated by cytokines and growth factors released after the trauma. We herein present an exceptional case of a giant post-traumatic lipoma which caused painful compression of the right sciatic nerve.
abstract_id: PUBMED:12614411
Posttraumatic lipoma: analysis of 10 cases and explanation of possible mechanisms. Background: Trauma and lipomas are among the most frequently encountered occurrences in routine clinical practice. Although lipomas are well-known fatty tumors both clinically and pathologically, the precise etiology is still unknown. Generally, posttraumatic lipomas are known as "pseudolipoma," which describes herniation of deeper fat through Scarpa's layer secondary to trauma. Here we present 10 patients with lipomas secondary to blunt trauma at different anatomical sites.
Objectives: To correlate trauma and lipoma relationships and to discuss the possible pathogenetic mechanism by reviewing literature.
Methods: Ten patients (12 lipomas) after blunt trauma were presented, and data of patients were reviewed. Ultrasonography and/or nuclear magnetic resonance were employed for diagnosis in addition to physical examination. All tumors were verified by histopathologic examinations. Patients were followed for a minimum of 6 months.
Results: The average age was 34. Four of the lesions (12 altogether) were located on an upper extremity, 5 on a lower extremity, 2 on the trunk, and 1 on the neck. Excision of tumors and primary closure were performed in 92% of the lesions, and only one liposuction was performed. Aesthetic results were achieved in all patients. There were no complications and recurrences.
Conclusion: The effect of blunt trauma on fat tissue may be explained by different theories. We summarized the possible mechanisms into two groups according to our observations and review of the literature: the first relates to mature adipocytes and is mainly a mechanical effect, and the second is differentiation of preadipocytes into lipoma driven by promoting factors. We speculate that only traumas that serve as a cause of fat necrosis may trigger the formation of a lipoma, and local inflammation secondary to fat necrosis may affect adipocytes and promote new formation of lipoma.
abstract_id: PUBMED:17175794
Tumoral, quasitumoral and pseudotumoral lesions of the superficial and somatic soft tissue: new entities and new variants of old entities recorded during the last 25 years. Part XII: appendix. In an eleven part series published in Pathologica, we have presented various tumoral, quasitumoral and pseudotumoral lesions of the superficial and somatic soft tissue (ST), which emerged as new entities or as variants of established entities during the last quarter of a century. Detailed clinicomorphological and differential diagnostic features of approximately sixty entities were chosen on the basis of their clinical significance and morphologic distinctiveness. The series included fibrous and myofibroblastic tumors (e.g. solitary fibrous tumor, high grade classic and pigmented dermatofibrosarcoma protuberans, inflammatory myofibroblastic tumor and myofibrosarcomas), fibromyxoid and fibrohistiocytic neoplasms (e.g., Evans' tumor, phosphaturic mesenchymal tumor, inflammatory myxohyaline tumor), special adipocytic/vascular/and smooth muscle lesions (e.g., chondroid lipoma, Dabska's tumor, ST hemangioblastoma, lipoleiomyosarcoma), epithelioid mesenchymal malignancies of diverse lineages (e.g., epithelioid liposarcoma, proximal-type epithelioid sarcoma, neuroendocrine extraskeletal chondromyxoid sarcoma), ST Ewing's tumor and peripheral nerve sheath tumors (perineuriomas and pigmented and rosetting tumors of the schwannoma/neurofibroma group), extranodal dendritic or histiocytic proliferative processes (follicular dendritic cell sarcoma, Rosai-Dorfman disease, Castleman's disease, and plexiform xanthomatous tumor), and tumors with myoepithelial differentiation. The section devoted to selected pseudotumoral entities considered representatives of the hamartoma group (neural fibrolipomatous hamartoma, ectopic hamartomatous thymoma, rudimentary meningocele), metabolic diseases (amyloid tumor, nephrogenic fibrosing dermopathy, tophaceous pseudogout, pseudoinfiltrative parathyromatosis), stromal tissue reactions to trauma (fibroosseous pseudotumors of digits) and infections (bacillary angiomatosis), and normal organs (glomus coccygeum). To conclude the descriptive phase, supplementary material has now been collected and appended in an attempt to provide a quick digest of essential knowledge both for comparison and differential diagnosis. The data have been tailored to synthesize diverse sources, integrating clinical elements and references to articles that previously appeared in Part I ("Introduction"), Part II ("The List and Review of New Entities") and Parts III to XI ("Excerpta"). At the very least we hope this final part ("Appendix") will provide the reader with a useful tabular organization of ST lesions and a reference resource.
Answer: The existence of lipomas following blunt soft tissue trauma is a subject of ongoing debate in the medical community. The phenomenon in which benign tumors of adipose tissue, known as lipomas, appear after an incident of blunt soft tissue trauma has been reported, and such tumors are referred to as 'post-traumatic lipomas' (PUBMED:17553055). These lipomas are sometimes called 'pseudolipomas' because they may result from the prolapse of adipose tissue through fascia due to direct impact, or they may develop as a result of preadipocyte differentiation and proliferation mediated by cytokine release following soft tissue damage and hematoma formation (PUBMED:17553055; PUBMED:17058061).
In a retrospective review of 170 patients presenting with lipomas, 34 lipomas in 31 patients were identified as post-traumatic, with a mean time of 2.0 years between the trauma and lipoma formation (PUBMED:17553055). MRI characteristics of posttraumatic pseudolipomas show them as focal fatty masses without a capsule or contrast enhancement (PUBMED:15841381). The development of post-traumatic pseudolipomas has been postulated to occur due to inflammatory triggers and an optimal local milieu at the site of development, drawing an analogy to an in vivo murine tissue engineering model for neo-adipogenesis (PUBMED:19223256).
Other studies have also reported cases of posttraumatic lipomas, suggesting that trauma may serve as a cause of fat necrosis, which may trigger the formation of lipomas. Local inflammation secondary to fat necrosis may affect adipocytes and promote new formation of lipoma (PUBMED:12614411). A case of a giant inframuscular lipoma disclosed 14 years after a blunt trauma supports the hypothesis that cytokines and growth factors released after the trauma may activate the trigger mechanism for lipoma formation (PUBMED:18826615).
In conclusion, while the pathogenic link between blunt soft tissue trauma and the formation of post-traumatic lipomas is still controversial, there is evidence to suggest that such lipomas are real and may develop due to various mechanisms related to the body's response to trauma and subsequent healing processes.
Instruction: Does bone density of the greater tuberosity change in patients over 70?
Abstracts:
abstract_id: PUBMED:24373688
Does bone density of the greater tuberosity change in patients over 70? Introduction: There are no published studies on bone density of the greater tuberosity of the humerus, which could influence the stability of reinsertion by suture anchors. The goal of our study was to determine the influence of age, gender and the type of tear on the quality of bone in the greater tuberosity.
Methodology: Ninety-eight patients over the age of 60 were included, 41 without a rotator cuff tear and 57 with an isolated stage 1 or 2 supraspinatus tear and fatty infiltration (FI) ≤ 2. The areas of measurement included cancellous bone located under the cortex of the greater tuberosity. Measurements were obtained either across from the tear or from the middle facet of the greater tuberosity if the cuff was not torn. We measured average, maximum and minimum bone density and the standard deviation (SD) in each region with Osirix software.
Results: The two groups were similar for age (73), investigated side and mean densities (0.282 g/cm² vs 0.210 g/cm²). Age over 70 was a predictive factor for osteoporosis of the greater tuberosity whether or not a rotator cuff tear was present (P<0.0001). There was less trabecular bone in women with cuff tears (P=0.009). Stage 2 cuff retraction was predictive of osteoporosis of the greater tuberosity (P=0.0001).
Conclusion: This is the first study in the literature to evaluate bone density of the greater tuberosity in relation to the presence or not of a rotator cuff tear in an elderly population. Female gender, age over 70 and stage 2 cuff retraction are factors responsible for osteoporosis of the greater tuberosity of the humeral head. The osteoporosis is not severe, and normally the quality of bone of the greater tuberosity should not limit stability of suture anchors.
Level Of Evidence: 3.
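The density measurements described in the Methodology (average, maximum, minimum, and SD within a cancellous region of interest) are summary statistics over the pixels inside an ROI mask. The study used Osirix; the snippet below is only a generic NumPy equivalent with a synthetic image and a placeholder rectangular ROI, not the study's actual workflow.

```python
import numpy as np

# Hypothetical inputs: a 2D calibrated density image (density units per pixel) and a
# boolean mask outlining the cancellous ROI under the greater-tuberosity cortex.
density_image = np.random.default_rng(0).uniform(0.1, 0.4, size=(128, 128))
roi_mask = np.zeros((128, 128), dtype=bool)
roi_mask[40:70, 50:90] = True            # placeholder rectangular ROI

roi_values = density_image[roi_mask]

stats = {
    "mean": roi_values.mean(),
    "max": roi_values.max(),
    "min": roi_values.min(),
    "sd": roi_values.std(ddof=1),        # sample standard deviation
}
print({name: round(float(value), 3) for name, value in stats.items()})
```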
abstract_id: PUBMED:33038499
Bone resorption of the greater tuberosity after open reduction and internal fixation of complex proximal humeral fractures: fragment characteristics and intraoperative risk factors. Hypothesis And Background: In complex proximal humeral fractures, bone resorption of the greater tuberosity is sometimes observed after open reduction and internal fixation (ORIF). However, this has not been well characterized, and risk factors for resorption are not completely understood. We aimed (1) to identify the risk factors associated with bone resorption of the greater tuberosity and (2) to quantify the geometric and bone density characteristics associated with bone resorption using 3-dimensional computed tomography models in complex proximal humeral fractures treated with ORIF.
Methods: We identified a retrospective cohort of 136 patients who underwent ORIF of 3- or 4-part proximal humeral fractures; greater tuberosity resorption developed after ORIF in 30 of these patients. We collected demographic, fracture-related, and surgery-related characteristics and performed multivariable logistic regression analysis to identify factors independently associated with the development of greater tuberosity resorption. Furthermore, we identified 30 age- and sex-matched patients by use of propensity score matching to perform quantitative fragment-specific analysis using 3-dimensional computed tomography models. After the fragment of the greater tuberosity was identified, the number of fragments, the relative fragment volume to the humeral head, and the relative bone density to the coracoid process were calculated. Measurements were compared between matched case-control groups.
Results: We found that an unreduced greater tuberosity (odds ratio [OR], 10.9; P < .001), inadequate medial support at the calcar (OR, 15.0; P < .001), and the use of an intramedullary fibular strut (OR, 4.5; P = .018) were independently associated with a higher risk of bone resorption. Quantitative fragment-specific analysis showed that greater tuberosities with a larger number of fragments (5 ± 2 vs. 3 ± 2, P = .021), smaller fragments (9.9% ± 3.8% vs. 18.6% ± 4.7%, P < .001), and fragments with a lower bone density (66.4% ± 14.3% vs. 88.0% ± 18.4%, P = .001) had higher rates of resorption.
Discussion And Conclusion: An unreduced greater tuberosity or inadequate medial support increases the risk of greater tuberosity resorption, as do a larger number of fracture fragments, smaller fragments, and lower bone density. Additionally, fibular strut grafting is an independent risk factor for tuberosity resorption. Further study is needed, but alternatives to strut grafting such as femoral head allograft may warrant serious consideration.
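The odds ratios quoted above come from a multivariable logistic regression of resorption status on fracture- and surgery-related predictors. A minimal sketch of that type of analysis with statsmodels is shown below; the data frame, predictor names, and coefficients are synthetic stand-ins, since the study data are not available.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 136                                   # cohort size reported in the abstract
df = pd.DataFrame({
    "unreduced_gt": rng.integers(0, 2, n),        # hypothetical binary predictors
    "no_medial_support": rng.integers(0, 2, n),
    "fibular_strut": rng.integers(0, 2, n),
})
# Synthetic outcome loosely tied to the predictors, for illustration only.
logit = -2.0 + 1.5 * df["unreduced_gt"] + 1.8 * df["no_medial_support"] + 0.9 * df["fibular_strut"]
df["resorption"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["unreduced_gt", "no_medial_support", "fibular_strut"]])
fit = sm.Logit(df["resorption"], X).fit(disp=False)

odds_ratios = np.exp(fit.params)          # exponentiated coefficients = odds ratios
conf_int = np.exp(fit.conf_int())         # 95% confidence intervals on the OR scale
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```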
abstract_id: PUBMED:36093094
Menopause-related cortical loss of the humeral head region mainly occurred in the greater tuberosity. Aims: Proximal humerus fractures are commonly observed in postmenopausal women. The goal of this study was to investigate menopause-related changes in cortical structure of the humeral head.
Materials And Methods: Clinical computed tomography (CT) scans of 75 healthy women spanning a wide range of ages (20-72 years) were analyzed. For each subject, cortical bone mapping (CBM) was applied to create a color three-dimensional (3D) thickness map for the proximal humerus. Nine regions of interest (ROIs) were defined in three walls of the humeral head. Cortical parameters, including the cortical thickness (CTh), cortical mass surface density (CM), and the endocortical trabecular density (ECTD), were measured.
Results: Compared to premenopausal women, postmenopausal women were characterized by significantly lower CTh and CM values in the lateral part of the greater tuberosity. Similar changes were found only in ROI 4, but not in ROIs 5-6, in the lesser tuberosity. Linear regression analysis revealed that the CTh and CM values of ROIs 1, 3, and 4 were negatively associated with age. These results showed that menopause-related loss in CTh and CM occurred mainly in the greater tuberosity, in addition to the proximal part of the lesser tuberosity. The trabecular bone variable, measured as ECTD, showed a notably lower value in ROIs 1-9 in the postmenopausal vs. premenopausal group. Inverse linear associations between ECTD and age were found in ROIs 2, 3, 5, 6, 7, and 9, indicating no site-specific differences in endocortical trabecular bone loss between the greater and lesser tuberosity.
Conclusions: Menopause-related cortical loss of the humeral head mainly occurred in the lateral part of the greater tuberosity. The increased rate of humeral bone loss in the greater tuberosity may contribute materially to complex proximal humerus fractures.
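The age associations reported here are simple linear regressions of a cortical parameter on age. A short scipy sketch with synthetic values standing in for the measured cortical thickness illustrates the calculation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
age = rng.uniform(20, 72, size=75)                   # age range of the cohort
# Synthetic cortical thickness (mm) that thins slightly with age, for illustration only.
cth_roi1 = 1.9 - 0.008 * age + rng.normal(0, 0.1, size=75)

result = stats.linregress(age, cth_roi1)
print(f"slope = {result.slope:.4f} mm/year, r = {result.rvalue:.2f}, p = {result.pvalue:.3g}")
# A negative slope with a small p-value corresponds to the inverse association
# between cortical thickness and age described in the abstract.
```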
abstract_id: PUBMED:21420321
Bone density of the greater tuberosity is decreased in rotator cuff disease with and without full-thickness tears. Background: Despite the high prevalence of rotator cuff disease in the aging adult population, the basic mechanisms initiating the disease are not known. It is known that changes occur at both the bone and tendon after rotator cuff tears. However, no study has focused on early or "pretear" rotator cuff disease states. The purpose of this study was to compare the bone mineral density of the greater tuberosity in normal subjects with that in subjects with impingement syndrome and full-thickness rotator cuff tears.
Materials And Methods: Digital anteroposterior shoulder radiographs were obtained for 3 sex- and age-matched study groups (men, 40-70 years old): normal asymptomatic shoulders (control), rotator cuff disease without full-thickness tears (impingement), and full-thickness rotator cuff tears (n = 39 per group). By use of imaging software, bone mineral densities were determined for the greater tuberosity, the greater tuberosity cortex, the greater tuberosity subcortex, and the cancellous region of the humeral head.
Results: The bone mineral density of the greater tuberosity was significantly higher for the normal control subjects compared with subjects with impingement or rotator cuff tears. No differences were found between the two groups of patients with known rotator cuff disease. The greater tuberosity cortex and greater tuberosity subcortex outcome measures were similar.
Conclusion: Bone mineral changes are present in the greater tuberosity of shoulders with rotator cuff disease both with and without full-thickness tears. The finding of focal diminished bone mineral density of the greater tuberosity in the absence of rotator cuff tears warrants further investigation.
abstract_id: PUBMED:37670197
Greater tuberosity medial malposition: does it affect shoulder abductor moment? Purpose: The detrimental effect of greater tuberosity malposition on functional scores is well known. Superior or posterior malposition exceeding 5 mm leads to excessive loading of the deltoid. However, the significance of medialization of the greater tuberosity due to the compressive effect of locking plate fixation, especially in fractures with metaphyseal dead space, has not been emphasized. It is hypothesized that this condition may shorten the rotator cuff moment arm and consequently impair functional scores.
Methods: Between 2012 and 2018, 52 patients with proximal humerus fractures treated with locking plate fixation were included in the study; their mean age was 65.28 years (range, 40-85). Cephalodiaphyseal angle, greater tuberosity displacement, patient-reported outcomes, and Constant-Murley scores were evaluated.
Results: The mean Constant-Murley score was 78.76 (range, 38-100). According to patient-reported outcomes, 39 excellent, five good, two fair, and six poor results were observed. Avascular necrosis with screw migration was detected in five cases, while one patient experienced implant insufficiency along with varus deformity. The greater tuberosity was positioned between 6 mm posterior-superior and -13 mm medial. Significant medial malposition was observed in three patients, with -9, -12, and -13 mm of medialization, respectively. The cephalodiaphyseal angle was 139.30 degrees (range, 120-150 degrees) and showed a weak correlation with the functional score. Greater tuberosity medialization also showed a weak correlation with the Constant-Murley score. The values exhibiting deviation were associated with low patient-reported outcomes and functional scores. On examination of greater tuberosity displacement values, Neer type 3 and 4 fractures differed significantly from Neer type 2 fractures on the Kruskal-Wallis test.
Conclusions: Medial impaction of the greater tuberosity may decrease functional scores, similar to superior or posterior malposition. Medialization of the greater tuberosity should be considered a potential cause of shortening of the rotator cuff's abductor moment arm.
abstract_id: PUBMED:30170571
Missed fractures of the greater tuberosity. Background: Fractures of the greater tuberosity may result from a variety of mechanisms. Missed injury remains a persistent problem, both from a clinical and a medico-legal point of view. Few studies on this topic are available in the literature. We present the clinical and radiological findings of a consecutive series of 17 patients who were diagnosed with, and managed for, undisplaced greater tuberosity fractures.
Methods: A retrospective study of a consecutive series of 17 patients who sustained an occult greater tuberosity fracture was performed. All patients sustained a traumatic occult greater tuberosity fracture and underwent shoulder radiographs within 5 days of trauma, which were reported as negative by a consultant radiologist. All patients received a standard assessment using MRI (Magnetic Resonance Imaging) scans. Each patient was evaluated for arm dominance, trauma history, duration and type of symptoms, and post-treatment Oxford Shoulder Score.
Results: At the final follow-up, the mean OSS (Oxford Shoulder Score) was 38.3 (range 17-46; SD 9.11). Three patients required a glenohumeral joint injection for post-traumatic pain and stiffness, and three patients required subacromial decompression for post-traumatic impingement.
Conclusions: Though undisplaced greater tuberosity fracture can be managed non-operatively with good results, patients with persistent post-traumatic shoulder pain, tenderness and limitation of shoulder function warrant investigation with MRI to identify occult fractures. Prompt identification of these fractures can facilitate patient treatment and counselling, avoiding a source of patient dissatisfaction and litigation.
abstract_id: PUBMED:28500457
An alternative technique for greater tuberosity fractures: use of the mesh plate. Introduction: Isolated greater tuberosity (GT) fractures (AO 11-A1) tend to occur in the younger patient population and are poorly managed by most precontoured proximal humerus locking plates. The goal of this study was to identify and assess an alternative treatment strategy for greater tuberosity fractures.
Materials And Methods: A retrospective review of all cases of isolated greater tuberosity fractures treated with a 2.4/2.7 mesh plate (Synthes) between 2010 and 2015 was conducted. Patient demographics, operative reports, and clinical notes were reviewed. The time to radiographic union was assessed. Clinical outcomes were retrieved from patients at their follow-up visits or via mailed Disabilities of the Arm, Shoulder, Hand (DASH) questionnaires.
Results: Ten patients with isolated GT fractures treated with mesh plating were identified with an average age of 47.1 years. The average radiographic follow-up was 7.2 months and the average clinical follow-up was 8.0 months. The mean time to union was 8.5 weeks. Two patients underwent elective hardware removal. The mean DASH at final follow-up was 28.2 (±22.4), while the mean DASH work was 13.6 (±19.1).
Conclusion: We have identified a viable alternative treatment option for the surgical management of isolated greater tuberosity fractures using a mesh plate that can be contoured to the patient's anatomy. Surgeons should be aware of this option for select patients.
abstract_id: PUBMED:26403880
Bilateral luxatio erecta with greater tuberosity fracture: a case report. Bilateral shoulder dislocation with greater tuberosity fracture and luxatio erecta are each rare in themselves, with only a few reports of each. We report an unusual case of posttraumatic bilateral symmetrical shoulder dislocation involving luxatio erecta with greater tuberosity fracture in a young male. To our knowledge, this is the first case of symmetrical bilateral shoulder dislocation with greater tuberosity fracture involving luxatio erecta reported from the Indian subcontinent.
abstract_id: PUBMED:27298847
Osteotomy and Re-fixation for treatment of Malunited Greater Tuberosity of Humerus. Introduction: Most greater tuberosity fractures of the humerus can be treated successfully with either surgical or conservative methods, and good results can be expected if the fracture heals anatomically. However, a displaced or improperly reduced fracture can lead to a symptomatic malunion.
Case Presentation: A 36-year-old man with a malunited greater tuberosity fracture presented to us 18 months after injury and percutaneous screw fixation, and 9 months after screw removal. He complained of severe restriction of shoulder movements. A good result was achieved after corrective open reduction and fixation of the greater tuberosity with rotator cuff repair.
Conclusion: Open reduction, re-fixation and soft tissue reconstruction can give excellent results in cases of malunited greater tuberosity fractures even as late as 20 months after trauma.
abstract_id: PUBMED:24684914
Surgical management of isolated greater tuberosity fractures of the proximal humerus. Because the greater tuberosity is the insertion site of the posterior superior rotator cuff, fractures can have a substantial impact on functional outcome. Isolated fractures should not inadvertently be trivialized. Thorough patient evaluation is required to make an appropriate treatment decision. In most cases surgical management is considered when there is displacement of 5 mm or greater. Although reduction of displaced greater tuberosity fractures has traditionally been performed with open techniques, arthroscopic techniques are now available. The most reliable techniques of fixation of the greater tuberosity incorporate the rotator cuff tendon bone junction rather than direct bone-to-bone fixation.
Answer: Yes, bone density of the greater tuberosity does change in patients over 70. A study found that age over 70 was a predictive factor for osteoporosis of the greater tuberosity, regardless of whether a rotator cuff tear was present (PUBMED:24373688). Additionally, female gender and stage 2 cuff retraction were identified as factors responsible for osteoporosis of the greater tuberosity of the humeral head in the elderly population (PUBMED:24373688). Another study on menopause-related changes in the cortical structure of the humeral head reported that postmenopausal women had significantly lower cortical thickness and cortical mass surface density in the lateral part of the greater tuberosity compared to premenopausal women, indicating that menopause-related cortical loss mainly occurred in the greater tuberosity (PUBMED:36093094).
Instruction: Are "generalized" seizures truly generalized?
Abstracts:
abstract_id: PUBMED:19470419
Is benign myoclonic epilepsy of infancy truly idiopathic and generalized? Benign myoclonic epilepsy of infancy is recognized as a generalized and idiopathic epilepsy by the International League Against Epilepsy. Unprovoked and reflex seizures have been reported in these patients. We describe a child diagnosed with benign myoclonic epilepsy of infancy, whose strictly unilateral and localized reflex myoclonias broaden the clinical spectrum of this idiopathic and generalized epileptic syndrome, and raise interrogations about its underlying pathophysiological mechanisms. [Published with video sequences].
abstract_id: PUBMED:35706911
Genetic generalized epilepsy and generalized onset seizures with focal evolution (GOFE). "Generalized Onset with Focal Evolution" (GOFE) is an underrecognized seizure type defined by an evolution from generalized onset to focal activity during the same ictal event. We aimed to discuss the electroclinical aspects of GOFE and to emphasize its link with Genetic Generalized Epilepsy (GGE). Patients were identified retrospectively over 10 years using the video-EEG database of the Epilepsy Unit of Strasbourg University Hospital. GOFE was defined, as previously reported, from an EEG point of view as an evolution from generalized onset to focal activity during the same ictal event. Three male patients with GOFE were identified among 51 patients with recorded tonic-clonic seizures. Ages at seizure onset were 13, 20, and 22 years. Focal clinical features (asymmetric motor phenomenology) could be identified. EEG showed generalized interictal discharges with focal evolution of variable localization. Four seizures were recorded, characterized by 2-3 s of generalized abnormalities followed by focal (parieto-occipital or frontal) discharges. Seizures were initially uncontrolled on lamotrigine, but all patients had a good outcome with valproate monotherapy. We emphasize that GOFE presents many similarities with GGE. Recognition of the GOFE entity is of therapeutic interest, as it helps avoid misdiagnosis of focal epilepsy and the consequent inappropriate use of narrow-spectrum anti-seizure medication.
abstract_id: PUBMED:15571515
Are "generalized" seizures truly generalized? Evidence of localized mesial frontal and frontopolar discharges in absence. Purpose: To determine whether specific regions of cerebral cortex are activated at the onset and during the propagation of absence seizures.
Methods: Twenty-five absence seizures were recorded in five subjects (all women; age 19-58 years) with primary generalized epilepsy. To improve spatial resolution, all studies were performed with dense-array, 256-channel scalp EEG. Source analysis was conducted with equivalent dipole (BESA) and smoothed linear inverse (LORETA) methods. Analyses were applied to the spike components of each spike-wave burst in each seizure, with sources visualized with standard brain models.
Results: For each patient, the major findings were apparent on inspection of the scalp EEG maps and waveforms, and the two methods of source analysis gave generally convergent results. The onset of seizures was typically associated with activation of discrete, often unilateral areas of dorsolateral frontal or orbital frontal lobe. Consistently across all seizures, the negative slow wave was maximal over frontal cortex, and the spike that appeared to follow the slow wave was highly localized over frontopolar regions of orbital frontal lobe. In addition, sources in dorsomedial frontal cortex were engaged for each spike-wave cycle. Although each patient showed unique features, the absence seizures of all patients showed rapid, stereotyped evolution to engage both mesial frontal and orbital frontal cortex sources during the repeating cycles of spike-wave activity.
Conclusions: These data suggest that absence seizures are not truly "generalized," with immediate global cortical involvement, but rather involve selective cortical networks, including orbital frontal and mesial frontal regions, in the propagation of ictal discharges.
abstract_id: PUBMED:36190316
"Generalized-to-focal" epilepsy: stereotactic EEG and high-frequency oscillation patterns Objective: We aimed to clarify the pathophysiology of epilepsy involving seizures with apparently generalized onset, progressing to focal ictal rhythm through stereotactic EEG (SEEG) implantation, recording, stimulation and high-frequency oscillation (HFO) analysis.
Methods: We identified two patients with seizures with bilateral electrographic onset evolving to focal ictal rhythm, who underwent SEEG implantation. Patients had pre-surgical epilepsy work-up, including prolonged video scalp EEG, brain MRI, PET, ictal/interictal SPECT, MEG, and EEG-fMRI prior to SEEG implantation.
Results: Both patients had childhood-onset seizures involving behavioural arrest and left versive head and eye deviation, evolving to bilateral tonic-clonic convulsions. Seizures were electrographically preceded by diffuse, bilateral 3-Hz activity resembling absence seizures. Both had suspected focal lesions based on neuroimaging, including 3T MRI and voxel-based post-processing in one patient. Electrode stimulation did not elicit any habitual electroclinical seizures. HFO analysis showed bilateral focal regions with high fast-ripple rates.
Significance: "Generalized-to-focal" seizures may occur due to a diffuse, bilateral epileptic network; however, both patients showed ictal evolution from a generalized pattern to a single dominant focus, which may explain why the focal aspect of their seizures had a consistent clinical semiology. Patients such as these may have a unique form of generalized epilepsy, but focal/multifocal cerebral abnormalities are also a possibility.
abstract_id: PUBMED:24370318
Carbamazepine treatment of generalized tonic-clonic seizures in idiopathic generalized epilepsy. Purpose: Evaluate the efficacy of carbamazepine in the treatment of idiopathic generalized epilepsy (IGE).
Method: The response of five patients with IGE, who experienced primarily generalized tonic-clonic seizures which were refractory to multiple antiepileptic drugs, is reported.
Results: Carbamazepine controlled multiple seizure types and did not induce or increase the frequency of myoclonic or absence seizures in these patients. Many family members also responded favorably to carbamazepine.
Conclusion: Carbamazepine can be used with caution as an alternative treatment option for refractory IGE, especially in cases in which the main seizure type is generalized tonic-clonic.
abstract_id: PUBMED:28874317
Generalized paroxysmal fast activity in EEG: An unrecognized finding in genetic generalized epilepsy. Objective: To study generalized paroxysmal fast activity (GPFA) in patients with genetic generalized epilepsy (GGE).
Introduction: GPFA is an electroencephalographic (EEG) finding in patients with symptomatic generalized epilepsy consisting of 15-25 Hz, bifrontally predominant generalized fast activity seen predominantly in sleep. Historically, GPFA has been linked to epileptic encephalopathy with drug-resistant epilepsy and intellectual disability. However, GPFA has rarely been described as an atypical finding in patients with GGE without negative prognostic implication. We report the cognitive profile and seizure characteristics of seven patients with GGE and GPFA.
Methods: The Vanderbilt EMU and EEG reports were searched for the keywords "idiopathic generalized epilepsy", "GPFA", and "generalized spike and wave discharges (GSWD)". We reviewed the EEG tracings and the electronic medical records of patients thus identified. The seizure type, frequency, neurological work-up, clinical profile, and imaging data were recorded.
Results: Awake and sleep states were captured on the EEGs of all patients. On EEG tracing review, six patients were confirmed to have GSWD and GPFA; one patient had GPFA but no GSWD. All patients had normal cognitive function. Four had a normal brain MRI and one a normal head CT (two were never imaged). None of the patients had tonic seizures. The main seizure type was generalized tonic-clonic seizures (GTCS) in five patients and absence seizures in two. Age at onset of epilepsy ranged from 4 to 24 years. The mean GTC seizure frequency at the time of EEG was 3; two patients were seizure-free on two antiepileptic drugs (AEDs).
Conclusions: GPFA can be an unrecognized electrographic finding in patients with genetic generalized epilepsy. While GPFA remains an important diagnostic EEG feature for epileptic encephalopathy (Lennox-Gastaut syndrome) it is not specific for this diagnosis. Thus, GPFA may have a spectrum of variable phenotypic expression. The finding of GPFA is not necessarily indicative of unfavorable outcome.
abstract_id: PUBMED:35356452
Generalized Fast Discharges Along the Genetic Generalized Epilepsy Spectrum: Clinical and Prognostic Significance. Objective: To investigate the electroclinical characteristics and the prognostic impact of generalized fast discharges in a large cohort of genetic generalized epilepsy (GGE) patients studied with 24-h prolonged ambulatory electroencephalography (paEEG).
Methods: This retrospective multicenter cohort study included 202 GGE patients. The occurrence of generalized paroxysmal fast activity (GPFA) and generalized polyspike train (GPT) was reviewed. GGE patients were classified as having idiopathic generalized epilepsy (IGE) or another GGE syndrome (namely perioral myoclonia with absences, eyelid myoclonia with absences, epilepsy with myoclonic absences, generalized epilepsy with febrile seizures plus, or GGE without a specific epilepsy syndrome) according to recent classification proposals.
Results: GPFA/GPT was found in overall 25 (12.4%) patients, though it was significantly less frequent in IGE compared with other GGE syndromes (9.3 vs. 25%, p = 0.007). GPFA/GPT was found independently of seizure type experienced during history, the presence of mild intellectual disability/borderline intellectual functioning, or EEG features. At multivariable analysis, GPFA/GPT was significantly associated with drug resistance (p = 0.04) and with a higher number of antiseizure medications (ASMs) at the time of paEEG (p < 0.001) and at the last medical observation (p < 0.001). Similarly, GPFA/GPT, frequent/abundant generalized spike-wave discharges during sleep, and a higher number of seizure types during history were the only factors independently associated with a lower chance of achieving 2-year seizure remission at the last medical observation. Additionally, a greater number of GPFA/GPT discharges significantly discriminated between patients who achieved 2-year seizure remission at the last medical observation and those who did not (area under the curve = 0.77, 95% confidence interval 0.57-0.97, p = 0.02).
Conclusion: We found that generalized fast discharges were more common than expected in GGE patients when considering the entire GGE spectrum. In addition, our study highlighted that GPFA/GPT could be found along the entire GGE continuum, though their occurrence was more common in less benign GGE syndromes. Finally, we confirmed that GPFA/GPT was associated with difficult-to-treat GGE, as evidenced by the multivariable analysis and the higher ASM load during history.
abstract_id: PUBMED:37714126
Idiopathic generalized epilepsies Idiopathic generalized epilepsies (IGE) are an age-dependent group of epilepsies, a subgroup of the genetic generalized epilepsies (GGE), with distinct electro-clinical features and polygenic inheritance. Four syndromes comprise the IGEs: childhood absence epilepsy (CAE), juvenile absence epilepsy (JAE), juvenile myoclonic epilepsy (JME), and epilepsy with generalized tonic-clonic seizures alone. They are clinically characterized by one or a combination of absence, myoclonic, tonic-clonic, or myoclonic-tonic-clonic seizures, with a common electroencephalographic pattern of 2.5-5.5 Hz generalized spike-wave discharges activated by hyperventilation or photic stimulation. They generally have a good prognosis for seizure control and do not evolve into an epileptic encephalopathy. There is frequent clinical overlap among the first three syndromes, which may evolve from one into another; the probability and age of remission vary for each. About 80% respond to broad-spectrum anti-seizure drugs such as valproic acid, but seizures may worsen with sodium-channel or GABAergic blockers. Development is typically normal; however, these epilepsies are frequently associated with mood disorders, attention-deficit/hyperactivity disorder (ADHD), and learning disabilities, but not with cognitive deficits. Recognition of this group of IGEs is important for the appropriate use of resources, avoiding unnecessary studies, providing adequate guidance on prognosis, and offering optimal treatment.
abstract_id: PUBMED:36990364
Thalamocortical circuits in generalized epilepsy: Pathophysiologic mechanisms and therapeutic targets. Generalized epilepsy affects 24 million people globally; at least 25% of cases remain medically refractory. The thalamus, with widespread connections throughout the brain, plays a critical role in generalized epilepsy. The intrinsic properties of thalamic neurons and the synaptic connections between populations of neurons in the nucleus reticularis thalami and thalamocortical relay nuclei help generate different firing patterns that influence brain states. In particular, transitions from tonic firing to highly synchronized burst firing mode in thalamic neurons can cause seizures that rapidly generalize and cause altered awareness and unconsciousness. Here, we review the most recent advances in our understanding of how thalamic activity is regulated and discuss the gaps in our understanding of the mechanisms of generalized epilepsy syndromes. Elucidating the role of the thalamus in generalized epilepsy syndromes may lead to new opportunities to better treat pharmaco-resistant generalized epilepsy by thalamic modulation and dietary therapy.
abstract_id: PUBMED:26619379
Generalized onset seizures with focal evolution (GOFE) - A unique seizure type in the setting of generalized epilepsy. Purpose: We report clinical and electrographic features of generalized onset seizures with focal evolution (GOFE) and present arguments for the inclusion of this seizure type in the seizure classification.
Methods: The adult and pediatric Epilepsy Monitoring Unit databases at Vanderbilt Medical Center and Children's Hospital were screened to identify generalized onset seizures with focal evolution. We reviewed medical records for epilepsy characteristics, epilepsy risk factors, MRI abnormalities, neurologic examination, antiepileptic medications before and after diagnosis, and response to medications. We also reviewed ictal and interictal EEG tracings, as well as video-recorded semiology.
Results: Ten patients were identified, 7 males and 3 females. All of the patients developed generalized epilepsy in childhood or adolescence (ages 3-15 years). Generalized onset seizures with focal evolution developed years after onset in 9 patients, with a semiology concerning for focal seizures or nonepileptic events. Ictal discharges had a generalized onset on EEG, described as either generalized spike-and-wave and/or polyspike-and-wave discharges, or generalized fast activity. This electrographic activity then evolved to focal rhythmic activity most commonly localized to one temporal or frontal region; five patients had multiple seizures evolving to focal activity in different regions of both hemispheres. The predominant interictal epileptiform activity included generalized spike-and-wave and/or polyspike-and-wave discharges in all patients. Taking into consideration all clinical and EEG data, six patients were classified with genetic (idiopathic) generalized epilepsy, and four were classified with structural/metabolic (symptomatic) generalized epilepsy. All of the patients had modifications to their medications following discharge, with three becoming seizure-free and five responding with >50% reduction in seizure frequency.
Conclusion: Generalized onset seizures may occasionally have focal evolution with semiology suggestive of focal seizures, leading to a misdiagnosis of focal onset. This unique seizure type may occur with genetic as well as structural/metabolic forms of epilepsy. The identification of this seizure type may help clinicians choose appropriate medications, avoiding narrow spectrum agents known to aggravate generalized onset seizures.
Answer: The concept of "generalized" seizures being truly generalized is challenged by several studies that suggest a more complex interaction between generalized and focal seizure activities. For instance, a study on benign myoclonic epilepsy of infancy, which is considered a generalized and idiopathic epilepsy, reported a case with strictly unilateral and localized reflex myoclonias, indicating that the clinical spectrum of this syndrome may not be entirely generalized (PUBMED:19470419). Similarly, the phenomenon of "Generalized Onset with Focal Evolution" (GOFE) has been described, where seizures begin with a generalized onset but evolve to focal activity during the same ictal event, suggesting a link with Genetic Generalized Epilepsy (GGE) and highlighting the importance of recognizing GOFE to avoid misdiagnosis and inappropriate treatment (PUBMED:35706911).
Further evidence comes from a study that used dense-array EEG to analyze absence seizures, which are typically considered generalized. The study found that the onset of seizures was associated with activation of discrete areas of the frontal lobe, and the propagation of ictal discharges involved selective cortical networks, including orbital frontal and mesial frontal regions, rather than immediate global cortical involvement (PUBMED:15571515). This suggests that absence seizures may not be truly generalized.
Additionally, stereotactic EEG (SEEG) and high-frequency oscillation (HFO) analysis in patients with "generalized-to-focal" seizures revealed that these seizures might involve a diffuse bilateral epileptic network, but also showed ictal evolution from a generalized pattern to a single dominant focus, which could explain the consistent clinical semiology of the focal aspect of their seizures (PUBMED:36190316).
Moreover, generalized paroxysmal fast activity (GPFA), an EEG finding historically linked to epileptic encephalopathy, has been reported in patients with genetic generalized epilepsy without negative prognostic implications, indicating that GPFA can be an unrecognized electrographic finding in GGE (PUBMED:28874317, PUBMED:35356452).
In summary, while seizures in generalized epilepsy syndromes are traditionally considered to involve the entire brain simultaneously, emerging evidence suggests that there may be focal elements or specific networks involved in what are termed "generalized" seizures, indicating a more nuanced understanding of the pathophysiology of these seizures (PUBMED:37714126, PUBMED:36990364, PUBMED:26619379). |