Instruction: An evaluation of the variation in the rates of occupational accidents in Turkey with trend analysis methodology. Are occupational accidents really diminishing?
Abstracts:
abstract_id: PUBMED:15951882
An evaluation of the variation in the rates of occupational accidents in Turkey with trend analysis methodology. Are occupational accidents really diminishing? Objective: To show whether the rate of occupational accidents is decreasing in Turkey.
Methods: Data on the incidence of and deaths due to occupational accidents in Turkey during the period 1970-2000 were obtained and evaluated at the Department of Public Health, Hacettepe University, Ankara, Turkey. The data were collected in January 2003. Occupational accident rates were analyzed in terms of morbidity, mortality, and case fatality. The change in each variable over the years was tested by trend analysis methodology. A regression analysis was performed on 3-point moving averages of the data.
Results: Morbidity decreased significantly over the 30-year period (p<0.01), with a slope of -0.003. The correlation of the morbidity trend with year was 0.995 (p<0.01), and the model explained 0.991 of the total variance. Mortality likewise declined over the years (slope 0.00001, p<0.01); the model correlation was 0.950 (p<0.01), explaining 0.903 of the total variance. The trend analysis model was significant for both the constant and the slope (p<0.01). For case fatality, the slope over the years was 0.00029 (p<0.01); the model correlation was 0.934 (p<0.01), explaining 0.872 of the total variance.
Conclusion: The main reason for the decrease in occupational accidents was probably unreported accidents that do not cause injury. The reasons for underreporting of minor accidents should be investigated, and this should be taken into consideration when planning occupational accident prevention programs.
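The trend-analysis recipe described in this abstract (smooth the annual rates with a 3-point moving average, then regress on year) is simple enough to sketch. The snippet below is illustrative only: the rates are synthetic placeholders, not the 1970-2000 study data, and an ordinary least-squares fit stands in for whatever statistical package the authors used.

```python
# Hedged sketch of trend analysis on 3-point moving averages (synthetic data).
import numpy as np

years = np.arange(1970, 2001)                # the study's 31-year window
rates = np.linspace(0.12, 0.03, years.size)  # invented morbidity rates

# Smooth year-to-year noise with a 3-point moving average before fitting.
smoothed = np.convolve(rates, np.ones(3) / 3, mode="valid")
mid_years = years[1:-1]                      # center year of each 3-point window

# Ordinary least squares: the slope estimates the annual change in the rate.
slope, intercept = np.polyfit(mid_years, smoothed, 1)
r = np.corrcoef(mid_years, smoothed)[0, 1]
print(f"slope = {slope:.5f}/year, r = {r:.3f}, R^2 = {r * r:.3f}")
```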
abstract_id: PUBMED:35377266
An investigation of the effect of the COVID-19 (SARS-CoV-2) pandemic on occupational accidents (Tokat-Turkey). The aims of this study were to compare the incidence of occupational accidents during one-year periods before and during the COVID-19 pandemic, and to determine in which sectors occupational accidents occurred and what types of injuries were sustained in the population of Tokat, Turkey. A retrospective review was made of the records of Tokat State Hospital for patients injured in occupational accidents between 12.03.2019 and 11.03.2021. The patients were classified according to age, gender, sector, accident type, trauma localization and type, time of the accident, and outcome of the injuries. Of 608 patients injured in occupational accidents, 384 (63.2%) were injured in the period before the pandemic and 224 (36.8%) during the pandemic (p < 0.001). Most work-related injuries occurred in the industry sector (n = 287; 47.2%; p < 0.001). Occupational accidents increased in the service sector (p < 0.001) but decreased in other sectors. Increases in the health (p < 0.001) and transportation (p < 0.05) subsectors drove a general increase in the service sector despite decreases in its other subsectors (p > 0.05). In the current study, the increases in injuries in the transportation sector (attributable to motor courier accidents), in the health sector, and during pandemic quarantines were notable. It was considered that this narrow-scoped study could serve as a pilot for more comprehensive studies on measures to prevent occupational accidents in future pandemics.
abstract_id: PUBMED:34970941
Cause-responsibility analysis of occupational accidents in an automotive company. The aim of this study was to determine the root causes of accidents and the responsibility rates of the parties involved in them. For this purpose, 20 important accidents at an automotive company were selected, and the root causes, the parties involved, and their respective responsibility rates were determined by 10 experts, based on classification into 11 Tripod Beta basic risk factors and on the occupational accident tree analysis (OATA) and occupational accident component analysis (OACA) techniques. The results revealed that, among the defects in the management system, the organizational system's defects had the greatest impact on the occurrence of occupational accidents. By modifying about half of the basic risk factors, 80% of occupational accidents could be controlled; by focusing on the monitoring and design units, the company's accidents could be reduced by up to 50%.
abstract_id: PUBMED:34615447
Occupational accidents of emergency medicine residents in Turkey. Objectives. Healthcare workers face many biologic, chemical, physical, and psychosocial hazards and risks in their work environment. Our research aimed to examine the types and frequency of occupational accidents to which emergency medicine residents (EMRs) in Turkey were exposed in the last 12 months, along with their notification status and predisposing factors. Methods. This research is a national, multicenter, online descriptive survey study. Participants' descriptive features, the characteristics of occupational accidents they were exposed to in the last 12 months, and their use of personal protective equipment (PPE) were examined. Results. We found that 215 EMRs were exposed to 1919 occupational accidents in the last 12 months, and only 287 of these accidents were reported. All participants had at least one occupational accident in the previous 12 months. PPE was not used in 37.9% and 44% of biologic and chemical transmission accidents, respectively. The frequency of PPE use by EMRs when needed was 60%, 19%, 19%, 8%, 15%, and 4% for examination gloves, surgical masks, respirators, goggles, gowns, and face shields, respectively. Conclusion. The actual number of occupational accidents was considerably higher than the number reported, and PPE use among EMRs was lower than it should be.
abstract_id: PUBMED:27517346
Developing techniques for cause-responsibility analysis of occupational accidents. The aim of this study was to specify the causes of occupational accidents and to determine the social responsibility and roles of the groups involved in work-related accidents. The study develops an occupational accident causes tree, an occupational accident responsibility tree, and an occupational accident component-responsibility analysis worksheet; on this basis, it develops cause-responsibility analysis (CRA) techniques and, to test them, analyzes 100 fatal/disabling occupational accidents in the construction setting, randomly selected from all work-related accidents in Tehran, Iran, over a 5-year period (2010-2014). The main result is two techniques for CRA: occupational accident tree analysis (OATA) and occupational accident component analysis (OACA), used in parallel to determine the responsible groups and their responsibility rates. From the results, we find that the management group of construction projects bears 74.65% of the responsibility for work-related accidents. The developed techniques are well suited to occupational accident investigation/analysis, especially for determining a detailed list of tasks, responsibilities, and their rates, and are therefore useful for preventing work-related accidents by focusing on the responsible groups' duties.
abstract_id: PUBMED:33687311
A study of the shift in fatal construction work-related accidents during 2012-2019 in Turkey. Objectives. In terms of working conditions, the construction sector has one of the highest numbers of occupational accidents and diseases in the world. According to the 'Communiqué on Occupational Hazard Classes on Occupational Health and Safety' related to Occupational Health and Safety Law No. 6331, the construction sector is classified as 'very dangerous works'. Methods. Occupational accidents that occurred between 2012 and 2019 were examined according to occupational groups, working environments, and related variables. Feature importance and Kendall, Pearson, and Spearman correlations were used for the analysis. Results. The analysis determined that a high proportion of fatal accidents in the construction sector in Turkey are caused by falls from height. Examination of the correlation values showed that the 'accident type' variable had a negative relationship with 'injured part of the body' and a positive relationship with 'accident environment'. Conclusion. A total of 51% of the 3517 fatal accidents examined occurred in the construction of buildings. As in many countries, most deaths in the construction sector in Turkey are caused by falls from height (41.6%). The statistics show that, despite the relevant regulations, the construction sector in Turkey still has a weak safety culture.
abstract_id: PUBMED:36495027
Investigation of non-fatal occupational accidents and their causes in Turkish shipyards. This study reports new data on 1028 non-fatal occupational accidents dated between January 2010 and April 2015, analyzed with the analytical hierarchy process (AHP) technique. A comprehensive survey was conducted at four shipyards in the Tuzla, Istanbul and Yalova regions of Turkey, using a workplace questionnaire appropriate for the AHP technique. The results indicated that inadequate safety equipment and protective clothing, unsuitable use of machines and tools, and disobeying occupational health and safety (OHS) procedures were the most common risk factors for the accidents; hence, preventive measures could be identified by analyzing non-fatal accident data. After these measures were identified, occupational safety professionals in the shipbuilding industry were asked to rank their priority, and the AHP method was used to evaluate the results.
abstract_id: PUBMED:14642872
Corporate cost of occupational accidents: an activity-based analysis. The systematic accident cost analysis (SACA) project was carried out during 2001 by The Aarhus School of Business and PricewaterhouseCoopers Denmark, with financial support from The Danish National Working Environment Authority. It focused on developing and testing a method for evaluating the occupational accident costs of companies, for use by occupational health and safety professionals. The method was tested in nine Danish companies within three different industry sectors, and the costs of 27 selected occupational accidents in these companies were calculated. One of the main conclusions is that the SACA method could be used in all of the companies without revisions. The evaluation of accident cost showed that two thirds of the costs of occupational accidents are visible in the Danish corporate accounting systems reviewed, while one third is hidden from management view. The highest cost of occupational accidents, for a company with 3,600 employees, was estimated at approximately US$682,000. The paper includes an introduction to accident cost analysis in companies, a presentation of the SACA project methodology and the SACA method itself, a short overview of some of the results of the SACA project, and a conclusion. Further information about the project is available at http://www.asb.dk/saca.
abstract_id: PUBMED:25317524
Trends in incidence and mortality due to occupational accidents in Brazil, 1998-2008. The objective was to evaluate trends in incidence and mortality due to occupational accidents in Brazil from 1998 to 2008. This was a time-trend series study that included cases of occupational accidents recorded in official Federal government statistics. The authors calculated annual percentage changes (APC) in incidence and mortality rates with the Joinpoint method using the calendar year as a regressor variable. There was a significant downward trend in incidence rates of occupational accidents, and the same trend was observed in typical occupational accidents. However, the number of cases increased during this period. There was a statistically significant upward trend in the incidence and number of cases of commuting accidents. The number of deaths and mortality rates showed a downward trend. Several factors may have contributed to the decline in incidence and mortality rates for occupational accidents, including improvement in working conditions, a shift in the economy from industry to services, underreporting of occupational accidents, and outsourcing of services. The increase in commuting accidents suggests the influence of violence in urban areas.
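The annual percentage change (APC) reported by Joinpoint-style analyses such as this one is derived from a log-linear regression of the rate on calendar year: APC = (e^slope - 1) x 100. A minimal sketch with invented rates, not the Brazilian data:

```python
# Hedged sketch of the APC calculation from a log-linear fit (synthetic rates).
import numpy as np

years = np.arange(1998, 2009)
rates = 20.0 * 0.96 ** (years - 1998)   # invented series declining ~4% per year

slope, _ = np.polyfit(years, np.log(rates), 1)
apc = (np.exp(slope) - 1.0) * 100.0
print(f"APC = {apc:.2f}% per year")     # about -4% for this synthetic series
```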
abstract_id: PUBMED:14605434
A significant outcome of work life: occupational accidents in a developing country, Turkey. Occupational accident rates are high in the developing world and are among the crucial indicators of work life. Data on serious occupational accidents over three years in the Aegean Region of Turkey were analyzed in order to characterize the working life situation. The data for 1995-97 were obtained from The Social Insurance Institution. Variables were defined for workers, the work environment, and working conditions for descriptive analysis, and the relationships between these variables and the outcomes of the occupational accidents were analyzed. The fatality risk was 2.6 times higher for mining and construction than for the production sector (p<0.001), and 1.6 times higher for evening and night shifts than for day shifts (p<0.01). Mortality from being struck by an object, falls, motor vehicle accidents, and electrocution was significantly higher than from being cut or caught in, under, or between objects (OR 3.5, 8.7, 23.1, and 26.1, respectively; p<0.01). In conclusion, it was suggested that, in the context of globalization, occupational health problems in the developing world should be explored intensively in order to establish effective prevention programs.
Answer: The evaluation of the variation in the rates of occupational accidents in Turkey using trend analysis methodology reveals mixed results. On one hand, a study conducted over the period 1970-2000 showed that the morbidity and mortality rates due to occupational accidents have significantly decreased over the last 30 years (PUBMED:15951882). This suggests that occupational accidents may be diminishing. However, the study also raises concerns about the underreporting of minor accidents, which could skew the data and imply that the decrease in reported accidents might not fully represent the actual situation.
In contrast, another study focused on the impact of the COVID-19 pandemic on occupational accidents in Tokat, Turkey, found that there was a significant decrease in occupational accidents during the pandemic compared to the period before the pandemic (PUBMED:35377266). This decrease was not uniform across sectors, with increases in the service sector, particularly in health and transportation due to motor courier accidents and pandemic quarantines.
Further analysis in different contexts, such as an automotive company, revealed that organizational system defects had a significant impact on the occurrence of occupational accidents, and that by addressing certain risk factors, a substantial reduction in accidents could be achieved (PUBMED:34970941). Similarly, emergency medicine residents in Turkey were found to be exposed to a high number of occupational accidents, with a considerable discrepancy between the actual number and those reported, indicating underreporting (PUBMED:34615447).
In the construction sector, which is considered very dangerous, there was a high rate of fatal accidents caused by falls from height, suggesting a weak safety culture despite regulations (PUBMED:33687311). Additionally, in the shipbuilding industry, inadequate safety equipment and non-compliance with occupational health and safety procedures were identified as common risk factors for accidents (PUBMED:36495027).
Overall, while there is evidence of a decrease in occupational accidents in some studies, other research points to persistent risks and underreporting issues. The actual trend may be influenced by various factors, including changes in reporting practices, economic shifts, and specific industry challenges. Therefore, it is not conclusively clear whether occupational accidents are truly diminishing across the board in Turkey.
Instruction: Do medical homes increase medication adherence for persons with multiple chronic conditions?
Abstracts:
abstract_id: PUBMED:25517069
Do medical homes increase medication adherence for persons with multiple chronic conditions? Background: Medications are an integral component of management for many chronic conditions, and suboptimal adherence limits medication effectiveness among persons with multiple chronic conditions (MCC). Medical homes may provide a mechanism for increasing adherence among persons with MCC, thereby enhancing management of chronic conditions.
Objective: To examine the association between medical home enrollment and adherence to newly initiated medications among Medicaid enrollees with MCC.
Research Design: Retrospective cohort study comparing Community Care of North Carolina medical home enrollees to nonenrollees using merged North Carolina Medicaid claims data (fiscal years 2008-2010).
Subjects: Among North Carolina Medicaid-enrolled adults with MCC, we created separate longitudinal cohorts of new users of antidepressants (N=9303), antihypertensive agents (N=12,595), oral diabetic agents (N=6409), and statins (N=9263).
Measures: Outcomes were the proportion of days covered (PDC) on treatment medication each month for 12 months and a dichotomous measure of adherence (PDC>0.80). Our primary analysis utilized person-level fixed effects models. Sensitivity analyses included propensity score and person-level random-effect models.
Results: Compared with nonenrollees, medical home enrollees exhibited higher PDC by 4.7, 6.0, 4.8, and 5.1 percentage points for depression, hypertension, diabetes, and hyperlipidemia, respectively (P's<0.001). The dichotomous adherence measure showed similar increases, with absolute differences of 4.1, 4.5, 3.5, and 4.6 percentage points, respectively (P's<0.001).
Conclusions: Among Medicaid enrollees with MCC, adherence to new medications is greater for those enrolled in medical homes.
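The proportion of days covered (PDC) used as the outcome in this study is a standard pharmacy-claims metric: the share of days in an observation window on which the patient held a supply of the medication, with overlapping fills not double-counted. A minimal sketch, using hypothetical fill dates rather than any actual claims:

```python
# Hedged sketch of a PDC computation; fills below are hypothetical examples.
from datetime import date, timedelta

def pdc(fills, start, days=365):
    """Fraction of days in [start, start+days) covered by medication supply.

    fills: list of (fill_date, days_supply) tuples; overlapping days count once.
    """
    covered = set()
    for fill_date, supply in fills:
        for d in range(supply):
            offset = (fill_date + timedelta(days=d) - start).days
            if 0 <= offset < days:
                covered.add(offset)
    return len(covered) / days

fills = [(date(2009, 1, 1), 90), (date(2009, 4, 15), 90), (date(2009, 9, 1), 90)]
value = pdc(fills, start=date(2009, 1, 1))
print(f"PDC = {value:.2f}, adherent = {value > 0.80}")  # the study's cutoff
```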
abstract_id: PUBMED:32780539
Changes in chronic medication adherence, costs, and health care use after a cancer diagnosis among low-income patients and the role of patient-centered medical homes. Background: Approximately 40% of patients with cancer also have another chronic medical condition. Patient-centered medical homes (PCMHs) have improved outcomes among patients with multiple chronic comorbidities. The authors first evaluated the impact of a cancer diagnosis on chronic medication adherence among patients with Medicaid coverage and, second, whether PCMHs influenced outcomes among patients with cancer.
Methods: Using linked 2004 to 2010 North Carolina cancer registry and claims data, the authors included Medicaid enrollees who were diagnosed with breast, colorectal, or lung cancer who had hyperlipidemia, hypertension, and/or diabetes mellitus. Using difference-in-difference methods, the authors examined adherence to chronic disease medications as measured by the change in the percentage of days covered over time among patients with and without cancer. The authors then further evaluated whether PCMH enrollment modified the observed differences between those patients with and without cancer using a differences-in-differences-in-differences approach. The authors examined changes in health care expenditures and use as secondary outcomes.
Results: Patients newly diagnosed with cancer who had hyperlipidemia experienced a 7-percentage point to 11-percentage point decrease in the percentage of days covered compared with patients without cancer. Patients with cancer also experienced significant increases in medical expenditures and hospitalizations compared with noncancer controls. Changes in medication adherence over time between patients with and without cancer were not determined to be statistically significantly different by PCMH status. Some PCMH patients with cancer experienced smaller increases in expenditures (diabetes) and emergency department use (hyperlipidemia) but larger increases in their inpatient hospitalization rates (hypertension) compared with non-PCMH patients with cancer relative to patients without cancer.
Conclusions: PCMHs were not found to be associated with improvements in chronic disease medication adherence, but were associated with lower costs and emergency department visits among some low-income patients with cancer.
abstract_id: PUBMED:26454560
Predicting medication adherence in multiple sclerosis using telephone-based home monitoring. Background: Poor medication adherence exerts a substantial negative impact on the health and well-being of individuals with multiple sclerosis (MS). Improving adherence rates requires a proactive approach of frequent and ongoing monitoring; however, this can be difficult to achieve within traditional, reactive health care systems that generally emphasize acute care services. Telephone-based home monitoring may circumvent these barriers and facilitate optimal care coordination and management for individuals with MS and other chronic illnesses.
Objective: The current study evaluated the utility of a one-item, telephone-administered measure of adherence expectations as a prospective predictor of medication adherence across a six month period among individuals with MS.
Methods: As part of a longitudinal study, Veterans with MS (N = 89) who were receiving medical services through the Veterans Health Administration completed monthly telephone-based interviews for six months.
Results: Using mixed model regression analyses, adherence expectations predicted adherence after adjusting for demographic, illness-related, and psychosocial factors (B = -5.54, p < .01).
Conclusions: Brief, telephone-based assessments of adherence expectations may represent an easy and efficient method for monitoring medication use among individuals with MS. The results offer an efficient method to detect and provide support for individuals who may benefit from interventions to promote medication adherence.
abstract_id: PUBMED:25686809
Serving persons with severe mental illness in primary care-based medical homes. Objective: Primary care-based medical homes are rapidly disseminating through populations with chronic illnesses. Little is known about how these models affect the patterns of care for persons with severe mental illness who typically receive much of their care from mental health specialists. This study examined whether enrollment in a primary care medical home alters the patterns of care for Medicaid enrollees with severe mental illness.
Methods: The authors conducted a retrospective secondary data analysis of medication adherence, outpatient and emergency department visits, and screening services used by adult Medicaid enrollees with diagnoses of schizophrenia (N=7,228), bipolar disorder (N=13,406), or major depression (N=45,000) as recorded in North Carolina Medicaid claims from 2004-2007. Participants not enrolled in a medical home (control group) were matched by propensity score to medical home participants on the basis of demographic characteristics and comorbidities. Those dually enrolled in Medicare were excluded.
Results: Results indicate that medical home enrollees had greater use of both primary and specialty mental health care, better medication adherence, and reduced use of the emergency department. Better rates of preventive lipid and cancer screening were found only for persons with major depression.
Conclusions: Enrollment in a medical home was associated with substantial changes in patterns of care among persons with severe mental illness. These changes were associated with only a modest set of incentives, suggesting that medical homes can have large multiplier effects in primary care of persons with severe mental illness.
abstract_id: PUBMED:34609933
Impact of a Statewide Multi-Payer Patient-Centered Medical Home Program on Antihypertensive Medication Adherence. Evidence suggests that the patient-centered medical home (PCMH) model of primary care improves management of chronic disease, but there is limited research contrasting this model's effect when financed by a single payer versus multiple payers, and among patients with different types of health insurance. This study evaluates the impact of a statewide medical home demonstration, the Maryland Multi-Payer PCMH Program (MMPP), on adherence to antihypertensive medication therapy relative to non-PCMH primary care and to the PCMH model when financed by a single payer. The authors used a difference-in-differences analytic design to analyze changes in medication possession ratio for antihypertensive medications among Medicaid-insured and privately insured non-elderly adult patients attributed to primary care practices in the MMPP ("multi-payer PCMHs"), medical homes in Maryland that participated in a regional PCMH program funded by a single private payer ("single-payer PCMHs"), and non-PCMH practices in Maryland. Comparison sites were matched to multi-payer PCMHs using propensity scores based on practice characteristics, location, and aggregated provider characteristics. Multi-payer PCMHs performed better on antihypertensive medication adherence for both Medicaid-insured and privately insured patients relative to single-payer PCMHs. Statistically significant effects were not observed consistently until the second year of the demonstration. There were negligible differences in outcome trends between multi-payer medical homes and matched non-PCMH practices. Findings indicate that health care delivery innovations may yield superior population health outcomes under multi-payer financing compared to when such initiatives are financed by a single payer.
abstract_id: PUBMED:32004857
Multiple modality approach to assess adherence to medications across time in Multiple Sclerosis. Background: Medication adherence is especially challenging in a chronic condition such as Relapsing Multiple Sclerosis (RMS). Medication adherence among persons with MS (PwMS) is usually assessed via a single measure, mostly electronic pharmacy records.
Objectives: Assess medication adherence in multiple modes across time among PwMS; examine consistency across time and associations between measures.
Methods: PwMS (N = 194) were surveyed prospectively at three time points (baseline, 6 and 12 months later) and their health records and medication claims were retrospectively obtained. Adherence score was based on medication possession ratio (MPR) and two patient-reported outcome (PRO) measures. Electronic monitoring devices assessing medication adherence were also initiated.
Results: When compared with medical records containing prescription changes, the MPR of each nonadherent PwMS was found to underestimate adherence. The MPR fell between the two PROs in identifying nonadherence, and agreement between the measures and across time was moderate (kappa ranged 0.37-0.42). Electronic monitoring devices were not adopted by patients. The adherence score indicated adherence of 66% and 64.9% at Time 1 and Time 2, respectively, with 21.1% of PwMS nonadherent at both time points. Adherence did not vary significantly by DMT type.
Conclusions: Being a dynamic behavior, medication adherence should be repeatedly monitored by using multiple modalities and focused on in clinician-patient encounters, especially in chronic diseases such as MS, which requires long-term treatments. Applying PROs in monitoring medication adherence would facilitate implementation of Participatory Medicine and patient-centered strategies in MS care.
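The moderate agreement noted above (kappa 0.37-0.42) refers to Cohen's kappa between pairs of binary adherence classifications. A self-contained sketch with invented 0/1 labels, not the study's data:

```python
# Hedged sketch of Cohen's kappa for two binary adherence measures.
def cohen_kappa(a, b):
    """Kappa for two equal-length lists of 0/1 labels."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    p_a, p_b = sum(a) / n, sum(b) / n                # rates of label 1
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)     # chance agreement
    return (observed - expected) / (1 - expected)

mpr_adherent = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # e.g., MPR >= 0.80 (invented)
pro_adherent = [1, 0, 0, 1, 0, 1, 1, 1, 1, 0]  # e.g., PRO cutoff (invented)
print(f"kappa = {cohen_kappa(mpr_adherent, pro_adherent):.2f}")  # ~0.35 here
```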
abstract_id: PUBMED:29445630
Effects of First Diagnosed Diabetes Mellitus on Medical Visits and Medication Adherence in Korea. Background: The National Health Insurance Service (NHIS) conducted a screening test to detect chronic diseases such as hypertension and diabetes in Korea. This study evaluated the effects of health screening for DM on pharmacological treatment.
Methods: Data from the qualification records and the 2012 General Health Screening, insurance claims from medical institutions from January 2009 to December 2014, and the diabetic case management program, all extracted from the NHIS administrative system, were used. A total of 16,068 subjects were included. The rate of visits to medical institutions, the medication possession ratio, and the rate of medication adherence were used as the study indices.
Results: The rate of visits to medical institutions was 39.7%. The percentage of patients who received a prescription for a diabetes mellitus medication from a doctor was 80.9%, the medication possession ratio was 70.8%, and the rate of medication adherence was 57.8%.
Conclusion: The rate of visits, the medication possession ratio, and the rate of medication adherence for DM medication were not high. To increase them, the NHIS should foster an environment in which medical institutions and DM patients can each fulfill their respective roles.
abstract_id: PUBMED:27842386
Association Between Patient-Centered Medical Homes and Adherence to Chronic Disease Medications: A Cohort Study. Background: Despite the widespread adoption of patient-centered medical homes into primary care practice, the evidence supporting their effect on health care outcomes has come primarily from geographically localized and well-integrated health systems.
Objective: To assess the association between medication adherence and medical homes in a national patient and provider population, given the strong ties between adherence to chronic disease medications and health care quality and spending.
Design: Retrospective cohort study.
Setting: Claims from a large national health insurer.
Patients: Patients initiating therapy with common medications for chronic diseases (diabetes, hypertension, and hyperlipidemia) between 2011 and 2013.
Measurements: Medication adherence in the 12 months after treatment initiation was compared among patients cared for by providers practicing in National Committee for Quality Assurance-recognized patient-centered medical homes and propensity score-matched control practices in the same Primary Care Service Areas. Linear mixed models were used to examine the association between medical homes and adherence.
Results: Of 313,765 patients meeting study criteria, 18,611 (5.9%) received care in patient-centered medical homes. Mean rates of adherence were 64% among medical home patients and 59% among control patients. Among 4660 matched control and medical home practices, medication adherence was significantly higher in medical homes (2.2% [95% CI, 1.5% to 2.9%]). The association between medical homes and better adherence did not differ significantly by disease state (diabetes, 3.0% [CI, 1.5% to 4.6%]; hypertension, 3.2% [CI, 2.2% to 4.2%]; hyperlipidemia, 1.5% [CI, 0.6% to 2.5%]).
Limitation: Clinical outcomes related to medication adherence were not assessed.
Conclusion: Receipt of care in a patient-centered medical home is associated with better adherence, a vital measure of health care quality, among patients initiating treatment with medications for common high-cost chronic diseases.
Primary Funding Source: CVS Health.
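For intuition about effect sizes like the 2.2-percentage-point difference above, a back-of-envelope contrast of the two raw adherence rates can be computed with a simple Wald interval. Note that the study itself used linear mixed models on propensity-matched practices, which this sketch deliberately ignores; the counts below are reconstructed from the reported 64% vs. 59% means and the stated group sizes, so they are approximations.

```python
# Hedged sketch: Wald 95% CI for a difference of two independent proportions.
import math

def diff_ci(x1, n1, x2, n2, z=1.96):
    """Point estimate and Wald CI for p1 - p2."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d, d - z * se, d + z * se

# ~18,611 medical-home patients (~64% adherent) vs. ~295,154 controls (~59%).
d, lo, hi = diff_ci(11911, 18611, 174141, 295154)
print(f"unadjusted difference = {d:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```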
abstract_id: PUBMED:31573463
Medication adherence for persons with spinal cord injury and dysfunction from the perspectives of healthcare providers: A qualitative study. Context: People with spinal cord injury and dysfunction (SCI/D) often take multiple medications (i.e. polypharmacy) to manage secondary health complications and multiple chronic conditions. Numerous healthcare providers are often involved in clinical care, increasing the risk of fragmented care, problematic polypharmacy, and conflicting health advice. These providers can play a crucial role in assisting patients with medication self-management to improve medication adherence. Design: A qualitative study involving telephone interviews, following a semi-structured guide that explored healthcare providers' conceptualization of factors impacting medication adherence for persons with SCI/D. The interviews were transcribed and analyzed descriptively and interpretively using a constant comparative process with the assistance of data display matrices. Analysis was guided by an ecological model of medication adherence. Setting and participants: Thirty-two healthcare providers from Canada, with varying clinical expertise. Intervention: Not Applicable. Outcome measures: Not Applicable. Results: Providers identified several factors that impact medication adherence for persons with SCI/D, which were grouped into micro (medication and patient-related), meso- (provider-related) and macro- (health system-related) factors. Medication-related factors included side effects, effectiveness, safety, and regimen complexity. Patient-specific factors included medication knowledge, preferences/expectations/goals, severity and type of injury, cognitive function/mental health, time since injury, and caregiver support. Provider-related factors included knowledge/confidence and trust. Health system-related factors included access to healthcare and access to medications. While providers were able to identify several factors influencing medication adherence, micro-level factors were the most frequently discussed. Conclusion: Findings from this study indicate that strategies to optimize medication adherence for persons with SCI/D should be multi-faceted.
abstract_id: PUBMED:26832226
'The lesser of two evils…' - views of persons with rheumatoid arthritis on medication adherence: a qualitative study. Objective: This study aimed to explore medication adherence among adherent and non-adherent persons suffering from rheumatoid arthritis (RA). A special focus was put on the reasons accounting for successful medication adherence and on potential barriers or facilitating factors.
Design: A qualitative study with semi-structured interviews was conducted. Eighteen participants were recruited through stratified purposive sampling according to their medication adherence level. Interviews were analysed by interpretative phenomenological analysis.
Results: Medication adherence behaviour was described on a continuum ranging from non-adherent to adherent. Participants' current adherence level was represented as a result of inner negotiations between a variety of influential factors and the successful application of a range of strategies. The influential factors were: experiences with medication, outcome expectations, knowledge of therapeutic options, the traits 'openness' and 'conscientiousness', belief in medical progress, characteristics of the medication, level of trust in one's physician, and perceived autonomy. Facilitating strategies were: establishing routines, using social support and the deliberate suppression of information about potential adverse events.
Conclusion: The experience of and the reasons for medication (non-)adherence from the perspective of people with RA were explored comprehensively. Participants' ongoing negotiations between adherence and non-adherence emerged as a key finding with implications for health service providers.
Answer: Yes, medical homes do increase medication adherence for persons with multiple chronic conditions. A study examining the association between medical home enrollment and adherence to newly initiated medications among Medicaid enrollees with multiple chronic conditions (MCC) found that adherence to new medications was greater for those enrolled in medical homes. Specifically, medical home enrollees exhibited higher proportions of days covered (PDC) by 4.7, 6.0, 4.8, and 5.1 percentage points for depression, hypertension, diabetes, and hyperlipidemia, respectively, compared with nonenrollees (PUBMED:25517069).
Additionally, another study on the impact of a cancer diagnosis on chronic medication adherence among Medicaid patients found that patient-centered medical homes (PCMHs) were associated with lower costs and emergency department visits among some low-income patients with cancer, although they were not found to be associated with improvements in chronic disease medication adherence (PUBMED:32780539).
Furthermore, a study on the effects of first diagnosed diabetes mellitus in Korea found that the visiting rate, medication possession ratio, and rate of medication adherence for diabetes medication were not high, suggesting that more support is needed to increase these rates (PUBMED:29445630).
In the context of a national patient and provider population, a retrospective cohort study found that receipt of care in a patient-centered medical home is associated with better adherence to medications for common high-cost chronic diseases (PUBMED:27842386).
These findings collectively suggest that medical homes can play a significant role in improving medication adherence among individuals with multiple chronic conditions, although the impact may vary depending on the specific context and patient population.
Instruction: Is pyloric function preserved in pylorus-preserving pancreaticoduodenectomy?
Abstracts:
abstract_id: PUBMED:19088935
Antecolic gastrointestinal reconstruction with pylorus dilatation. Does it improve delayed gastric emptying after pylorus-preserving pancreaticoduodenectomy? Objective: Our study focuses on the prevention of delayed gastric emptying (DGE) after pancreaticoduodenectomy using an alternative reconstruction procedure.
Method: Forty consecutive patients underwent a typical pylorus-preserving pancreaticoduodenectomy (PPPD) with antecolic reconstruction in a two-year period (January 2002 until January 2004), while a similar group of 40 consecutive patients underwent PPPD with application of pyloric dilatation between January 2004 and January 2006. Early and late complications were compared between the two groups.
Results: DGE occurred significantly more often in the group of patients treated by the classical PPPD technique (nine patients, 22%) than in those operated on with the addition of the pyloric dilatation technique (two patients, 5%) (p<0.05). The incidence of other complications did not differ significantly between the two groups.
Conclusions: The application of dilatation may decrease the incidence of DGE after PPPD and facilitate earlier hospital discharge.
abstract_id: PUBMED:21861144
Pancreaticoduodenectomy versus pylorus-preserving pancreaticoduodenectomy: the clinical impact of a new surgical procedure; pylorus-resecting pancreaticoduodenectomy. Pylorus-preserving pancreaticoduodenectomy (PpPD) has been performed increasingly for periampullary tumors as a modification of conventional pancreaticoduodenectomy (PD) with antrectomy. Five randomized controlled trials (RCTs) and two meta-analyses have been performed to compare PD with PpPD. The results of these trials have shown that the two procedures were equally effective concerning morbidity, mortality, quality of life (QOL), and survival, although the length of surgery and blood loss were significantly lower for PpPD than for PD in one RCT and in the two meta-analyses. Delayed gastric emptying (DGE) is the major postoperative complication after PpPD. One of the pathogeneses of DGE after PpPD is thought to be denervation or devascularization around the pyloric ring. Therefore, one RCT was performed to compare PpPD with pylorus-resecting pancreaticoduodenectomy (PrPD; a new PD surgical procedure that resects only the pyloric ring and preserves nearly all of the stomach), concerning the incidence of DGE. The results clarified that the incidence of DGE was 4.5% after PrPD and 17.2% after PpPD, which was a significant difference. Several RCTs of surgical or postoperative management techniques have been performed to reduce the incidence of DGE. One RCT for surgical techniques clarified that the antecolic route for duodenojejunostomy significantly reduced the incidence of DGE compared with the retrocolic route. Two RCTs examining postoperative management showed that the administration of erythromycin after PpPD reduced the incidence of DGE.
abstract_id: PUBMED:9537720
Is pyloric function preserved in pylorus-preserving pancreaticoduodenectomy? Objective: To assess the function of the pylorus after pylorus-preserving pancreaticoduodenectomy (PPPD) done for periampullary or pancreatic cancer.
Design: Prospective, observational controlled clinical study.
Setting: Teaching hospital, Italy.
Subjects: 17 patients who had undergone PPPD, and 15 healthy control subjects.
Investigations: Endoscopy to check for gastritis and marginal ulcers, and 24-hour pH monitoring and 99mTc-HIDA scintigraphy to detect jejunogastric reflux. Scintigraphy was also used to evaluate gastric and jejunal transit after a solid meal labelled with 99mTc colloid sulphur.
Main Outcome Measures: Signs of delayed gastric emptying, jejunogastric reflux and gastric outlet obstruction in the short and long term.
Results: In the early postoperative period, only 1 patient had delayed gastric emptying. In the long term, two patients had symptoms of dyspepsia, and 8/11 showed alkaline reflux with a persistent gastric pH greater than 4 for more than 12 hours; 3 had histological signs of gastritis. There was no difference in gastric emptying compared with controls, but three patients had a prolonged emptying time (T1/2 greater than 85 minutes). Endoscopy findings correlated with pH monitoring results.
Conclusions: After PPPD, most patients have abnormal pyloric function, but it is clinically evident in only a small proportion.
abstract_id: PUBMED:17153459
Pylorotomy in pylorus-preserving pancreaticoduodenectomy. Background/aims: The incidence of delayed gastric emptying after pylorus-preserving pancreaticoduodenectomy has been reported to be 30% to 70%.
Methodology: Between January 1996 and December 2002, 43 patients underwent pylorus-preserving pancreaticoduodenectomy, involving pylorotomy, in the First Department of Surgery, Kinki University School of Medicine. The first step in pylorotomy is to cut the duodenal stump obliquely. The next is incision of the pyloric sphincter along its inferior aspect. The incidences of postoperative complications and changes in body weight were collated retrospectively.
Results: Delayed gastric emptying was observed in 4 patients (9.3%). However, this complication did not last more than 1 month in any patient. Two patients (4.7%) developed reflux esophagitis 1 month after surgery, but this complication had resolved by 6 months. Weight gain was noted beginning 3 months after surgery.
Conclusions: Pylorus-preserving pancreaticoduodenectomy involving pylorotomy may reduce the incidence of delayed gastric emptying and preserve the long-term quality of life more than similar procedures.
abstract_id: PUBMED:26011210
Our contrivances to diminish complications after pylorus-preserving pancreaticoduodenectomy. The objective of this study is to diminish postoperative complications after pylorus-preserving pancreaticoduodenectomy. Pylorus-preserving pancreaticoduodenectomy is still associated with major complications, especially leakage at the pancreatojejunostomy and delayed gastric emptying. Traditional pylorus-preserving pancreaticoduodenectomy was performed in group A, while the novel procedure, an antecolic vertical duodenojejunostomy and internal pancreatic drainage with omental wrapping, was performed in group B (n = 40 each). We compared the following characteristics between the two groups: operation time, blood loss, time required before removal of the nasogastric tube and resumption of food intake, length of hospital stay, and postoperative complications. The novel procedure required less operating time and was associated with less blood loss (both P < 0.0001). Group B also showed earlier nasogastric tube removal and resumption of food intake, shorter hospital stays, and fewer postoperative complications (all P < 0.0001). The novel procedure appears to be a safe and effective alternative to traditional pancreaticoduodenectomy techniques.
abstract_id: PUBMED:16610023
Surgical anatomy of the innervation of pylorus in human and Suncus murinus, in relation to surgical technique for pylorus-preserving pancreaticoduodenectomy. Aim: To clarify the innervation of the antro-pyloric region in humans from a clinico-anatomical perspective.
Methods: The stomach, duodenum, and surrounding structures were dissected in 10 cadavers and immersed in a 10 mg/L solution of alizarin red S in ethanol to stain the peripheral nerves. The distribution details were studied under a binocular microscope to confirm the innervation of the above areas. Similarly, innervation in 10 Suncus murinus was examined using whole-mount immunohistochemistry.
Results: The innervation of the pyloric region in humans involved three routes: One arose from the anterior hepatic plexus via the route of the suprapyloric/supraduodenal branch of the right gastric artery; the second arose from the anterior and posterior gastric divisions, and the third originated from the posterior-lower region of the pyloric region, which passed via the infrapyloric artery or retroduodenal branches and was related to the gastroduodenal artery and right gastroepiploic artery. For Suncus murinus, results similar to those in humans were observed.
Conclusion: There are three routes of innervation of the pyloric region in humans, of which the route along the right gastric artery is the most important for preserving pyloric innervation; preserving this artery in pylorus-preserving pancreaticoduodenectomy (PPPD) maintains more than 80% of that innervation. However, the route of the infrapyloric artery should not be disregarded. This route is related to several arteries (the right gastroepiploic and gastroduodenal arteries), and preserving these arteries is advantageous for preserving pyloric innervation in PPPD. The nerves of Latarjet also play an important role in maintaining innervation of the antro-pyloric region in PPPD, which explains why pyloric function is not impaired in some patients even when the right gastric artery is dissected or damaged during PPPD.
abstract_id: PUBMED:16455453
Method of pyloric reconstruction and impact upon delayed gastric emptying and hospital stay after pylorus-preserving pancreaticoduodenectomy. Preservation of the pylorus at the time of pancreaticoduodenectomy has been associated with equal oncological outcomes when compared to the classical Whipple operation. Multiple studies have demonstrated that pylorus-preserving pancreaticoduodenectomy (PPPD) has equal or superior outcomes regarding quality of life when compared with the traditional Whipple operation, but many studies have suggested a higher incidence of delayed gastric emptying (DGE). DGE prolongs hospital stay, and its association with PPPD has hampered its adoption by many pancreatic surgery centers. We describe a novel surgical technique for the prevention of delayed gastric emptying following pylorus-preserving pancreaticoduodenectomy. The technique of pyloric dilatation appears to decrease the incidence of delayed gastric emptying and facilitates earlier hospital discharge, when compared with standard pylorus preserving pancreaticoduodenectomy.
abstract_id: PUBMED:22024087
Effect of pyloric dilatation on gastric emptying after pylorus-preserving pancreaticoduodenectomy. Background/aims: Pylorus-preserving pancreaticoduodenectomy (PPPD) is the standard treatment for periampullary and pancreatic head tumors. Delayed gastric emptying (DGE) is the most common complication (occurring in 15-45% of cases); while not life threatening, it impairs patient recovery and prolongs the hospital stay after PPPD. The precise pathomechanism of DGE is still unclear. The aim of this study was to evaluate whether pyloric dilatation performed at the time of PPPD could improve gastric emptying.
Methodology: Forty patients who underwent PPPD for pancreatic or periampullary lesions between January 1999 and July 2004 were included in this study. In twenty patients, mechanical dilatation of the pylorus after duodenal transection was performed (PPPD+PD group), while in the other twenty, PPPD was not followed by pyloric dilatation (PPPD group). The incidence of DGE, as well as of other complications, was analyzed. Delayed gastric emptying was defined as gastric stasis requiring nasogastric intubation for more than 4 postoperative days (POD), or the inability to tolerate a regular diet on the 8th POD.
Results: Delayed gastric emptying occurred in seven (35%) out of the 20 patients in the PPPD group, while none of the 20 patients in the PPPD+PD group developed DGE.
Conclusions: Pyloric dilatation reduces DGE after PPPD enabling patients to return sooner to a normal diet.
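The DGE definition used in this study is fully operational, so it can be transcribed directly as a predicate. The function below is just that transcription for illustration, not code from the study:

```python
# Hedged sketch of the study's DGE definition as a boolean predicate.
def delayed_gastric_emptying(ng_tube_days: int, regular_diet_by_pod8: bool) -> bool:
    """True if nasogastric intubation exceeded 4 postoperative days,
    or a regular diet was not tolerated by the 8th postoperative day."""
    return ng_tube_days > 4 or not regular_diet_by_pod8

print(delayed_gastric_emptying(ng_tube_days=3, regular_diet_by_pod8=True))   # False
print(delayed_gastric_emptying(ng_tube_days=6, regular_diet_by_pod8=True))   # True
```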
abstract_id: PUBMED:29725396
Pylorus-preserving pancreaticoduodenectomy versus standard pancreaticoduodenectomy in the treatment of duodenal papilla carcinoma. It is not known whether pylorus-preserving pancreaticoduodenectomy (PPPD) is as effective as standard pancreaticoduodenectomy (SPD) in the treatment of duodenal papilla carcinoma (DPC). A retrospective cohort study was undertaken to compare the results of these two procedures. Clinical data, histological findings, short-term results, survival, and quality of life of all patients who had undergone surgery for primary DPC between January 2003 and February 2010 were analyzed. According to the inclusion criteria and the surgical methods, 116 patients were divided into the PPPD group (n=43) and the SPD group (n=73). There were no significant differences in various indices, including surgery duration, extent of intraoperative hemorrhage, and postoperative pathological indices. The incidences of postoperative complications, including pancreatic fistula and delayed gastric emptying, were also similar between the two groups (20.9% vs. 21.9%, P=0.900; and 11.6% vs. 5.4%, P=0.402). Long-term survival and quality of life were identical after a median follow-up of 45.6 months (range, 4-144 months). Within 6 months, there was less loss of appetite following the pylorus-preserving procedure (26.9 vs. 49.3; P=0.003). The procedures were equally effective for the treatment of DPC. PPPD offers minor advantages in the early postoperative period but not in the long term.
abstract_id: PUBMED:37697996
From Classic Whipple to Pylorus Preserving Pancreaticoduodenectomy and Ultimately to Pylorus Resecting - Stomach Preserving Pancreaticoduodenectomy: A Review. Pancreaticoduodenectomy is the procedure of choice for benign or malignant tumors of the periampullary region. Preservation of the pylorus has become the most widely used approach during pancreaticoduodenectomy among the majority of centers worldwide that specialize in the surgical treatment of pancreatic cancer. The factors behind this preference are shorter operation times, less intraoperative blood loss, lower technical difficulty, and broadly similar short- and long-term outcomes compared with the classic Whipple. However, there is a notable trend in the literature highlighting an increased incidence of delayed gastric emptying following pylorus-preserving pancreaticoduodenectomy. Among other factors, pyloric dysfunction attributable to the surgical maneuvers has been implicated in the etiology of this complication. In an attempt to overcome this limitation of the pylorus-preserving procedure, pylorus-resecting pancreaticoduodenectomy with preservation of the stomach was proposed. In theory, pylorus-resecting pancreaticoduodenectomy could maintain the advantages of organ-sparing surgery while guaranteeing more seamless gastric emptying. To date, only three RCTs have evaluated this approach, with only one reporting results in favor of pylorus-resecting pancreaticoduodenectomy with regard to the incidence of delayed gastric emptying. Further well-designed prospective randomized studies are needed for an accurate assessment of the true role of each of these surgical alternatives in the treatment of pancreatic cancer.
Answer: Pyloric function after pylorus-preserving pancreaticoduodenectomy (PPPD) is not consistently preserved. A study assessing the function of the pylorus after PPPD found that most patients had abnormal pyloric function, but it was clinically evident in only a small proportion. In the early postoperative period, only one patient had delayed gastric emptying, but in the long term, symptoms of dyspepsia and alkaline reflux were observed in some patients, with a few showing histological signs of gastritis. There was no significant difference in gastric emptying compared with controls, but some patients had prolonged emptying time (PUBMED:9537720).
Additionally, delayed gastric emptying (DGE) is a major postoperative complication after PPPD, with the incidence reported to be between 30% to 70% in some studies. Various techniques have been explored to reduce the incidence of DGE, such as pyloric dilatation, which has been shown to reduce the incidence of DGE and facilitate earlier hospital discharge (PUBMED:19088935, PUBMED:16455453, PUBMED:22024087). Another study found that pylorotomy in PPPD may reduce the incidence of DGE and preserve long-term quality of life more than similar procedures (PUBMED:17153459).
The surgical anatomy of the innervation of the pylorus is also relevant to the preservation of pyloric function. There are three routes of innervation of the pyloric region in humans, and preserving these routes, particularly the route of the right gastric artery, is important for maintaining pyloric function in PPPD (PUBMED:16610023).
In summary, while pyloric function may not be fully preserved in all patients undergoing PPPD, certain surgical techniques and considerations can help reduce the incidence of DGE and potentially preserve pyloric function to some extent. |
Instruction: Inflammation on the cervical papanicolaou smear: evidence for infection in asymptomatic women?
Abstracts:
abstract_id: PUBMED:24204103
Inflammation on the cervical papanicolaou smear: evidence for infection in asymptomatic women? Background: The significance of the possible presence of infection on the Pap smear of asymptomatic women based on cytological criteria is practically unknown.
Materials And Methods: A total of 1117 asymptomatic nonpregnant women had Pap smear tests and vaginal as well as cervical cultures completed (622 with and 495 without inflammation on the Pap smear).
Results: Of the 622 women with inflammation on the Pap test, 251 (40.4%) had negative cultures (normal flora present), while 371 (59.6%) had positive cultures with various pathogens. In contrast, the group of women without inflammation on the Pap test displayed a significantly higher percentage of negative cultures (67.1%, P < 0.001) and a lower percentage of positive cultures (32.9%, P < 0.001). Bacterial vaginosis was the most frequent diagnosis in both groups and was significantly more common in the group with inflammation on the Pap smear than in the group without (P < 0.02).
Conclusions: A report of inflammatory changes on the cervical Pap smear cannot be used to reliably predict the presence of a genital tract infection, especially in asymptomatic women. Nevertheless, the isolation of different pathogens in about 60% of the women with inflammation on the Pap smear cannot be overlooked and must be regarded with concern.
abstract_id: PUBMED:1397815
Inflammation on the cervical Papanicolaou smear: the predictive value for infection in asymptomatic women. Background: The clinical significance of inflammation on the cervical Papanicolaou (Pap) smear of asymptomatic women is unknown. This study assessed the possible association between inflammation on Pap smears with the presence of cervical/vaginal pathogens.
Methods: A questionnaire was given to 290 asymptomatic women seen for routine gynecologic examination, including Pap smear, in a primary care setting. The women were tested for the presence of Candida species, Trichomonas vaginalis, Gardnerella vaginalis, Neisseria gonnorrhoeae, and Chlamydia trachomatis.
Results: Recovery of Chlamydia and Trichomonas was more frequent in women with inflammation on Pap smear than in women without inflammation, but the positive predictive value of inflammation was only 7% for Chlamydia and 14% for Trichomonas. Seventy-one percent of the women with inflammation had no evidence of any of the organisms. After a 6-month follow-up period, women with inflammation on Pap smear were no more likely than their matched counterparts without inflammation to return for a clinic visit with symptoms of vaginitis.
Conclusions: In this study, inflammation on Pap smear had a relatively low predictive value for the presence of vaginal pathogens in asymptomatic women.
abstract_id: PUBMED:2486964
Detection of endocervical chlamydia infections by comparing the Papanicolaou staining test and direct immunofluorescence. The association of infection with Chlamydia trachomatis and cytologic changes on the Papanicolaou smear was examined in 453 sexually active postmenarchal women attending the cytology service for a routine Papanicolaou smear. We described inflammatory and epithelial cell patterns that permit the detection of groups of women, with and without cervicitis, at high risk for cervical chlamydial infection. Infection was confirmed by direct immunofluorescence using monoclonal antibodies. Ninety-five of the 453 women had cervicitis (20.9%); chlamydial inclusions were noted by Papanicolaou in 26 patients with cervicitis and in 61 without cervicitis. Direct staining with fluorescein-conjugated monoclonal antibodies demonstrated elementary bodies of C. trachomatis in 42/453 women; 24 had cervicitis and 18 did not. One of two patients whose cervical smears showed chlamydial inclusions reported as "changes suggestive of chlamydial infection" by Papanicolaou was confirmed by immunofluorescence. We calculated the efficacy of the Papanicolaou smear as a diagnostic technique: the sensitivity was 0.27, the specificity was 0.80, and the predictive value of a positive test was 0.29. For comparison, with immunofluorescence the sensitivity was 0.25, the specificity 0.94, and the positive predictive value 0.57. Using the epithelial changes interpreted as inflammatory, we obtained the highest sensitivity with both tests, 0.76 for Papanicolaou and 0.90 for immunofluorescence, with specificity near 100% for both; cytology tended to be more efficient in identifying women without infection than in identifying those with infection.
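For readers unfamiliar with the screening statistics quoted in the abstracts above (sensitivity, specificity, and predictive values), the sketch below shows how they are derived from a 2x2 confusion matrix. The cell counts are hypothetical illustrations chosen to roughly echo the reported figures, not the study's raw data.

```python
# Minimal sketch: diagnostic-test metrics from a 2x2 table.
# Counts below are hypothetical, not the published data.

def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard screening statistics from true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),  # P(test positive | infected)
        "specificity": tn / (tn + fp),  # P(test negative | not infected)
        "ppv": tp / (tp + fp),          # P(infected | test positive)
        "npv": tn / (tn + fn),          # P(not infected | test negative)
    }

if __name__ == "__main__":
    for name, value in screening_metrics(tp=11, fp=27, fn=31, tn=384).items():
        print(f"{name}: {value:.2f}")
```

With these hypothetical counts the sensitivity comes out near 0.26 and the positive predictive value near 0.29, in the range reported for the Papanicolaou smear above; the same four-count arithmetic underlies the negative predictive value of 0.74 quoted in PUBMED:10730380.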
abstract_id: PUBMED:3069248
Cervical chlamydia trachomatis and mycoplasmal infections in women with abnormal Papanicolaou smears. In a series of 2,346 Papanicolaou-stained smears from women undergoing routine gynaecological examination, 39 showed cytomorphological signs of inflammation suggesting Chlamydia trachomatis infection (Papanicolaou class II or III). The 39 smears were studied microbiologically by the direct-immunofluorescence test and cell culture to see whether chlamydial infection correlated with the presence of Mycoplasma hominis and Ureaplasma urealyticum. The results were compared with the cytological and colposcopic findings. C. trachomatis was cultured from 56.41% of the 39 smears and isolated by the direct-immunofluorescence test in 51.28%. M. hominis was detected in 35.89% and U. urealyticum in 25.54%. Though all three organisms coexisted in 10.25% of the smears, C. trachomatis and M. hominis in 15.38%, and C. trachomatis and U. urealyticum in 2.56%, no valid conclusions could be drawn from their association. The study did, however, indicate that vacuolated cells and cells with "bubbly" cytoplasm are also common in other infections seen in PAP-test smears and do not necessarily warrant a diagnosis of C. trachomatis, whereas Gupta-type intracellular inclusion bodies do.
abstract_id: PUBMED:20949462
Clinicopathological study of Papanicolaou (Pap) smears for diagnosing cervical infections. Cervical infections are not uncommon in our population, especially in young and sexually active women. One thousand samples from married women aged between 20 and 70 years were studied by conventional Papanicolaou smears. These samples were examined in the Department of Pathology, King Edward Medical University, Lahore, from January 2007 to June 2009. Only cases without (pre)neoplastic cytology were included. Six types of infections were diagnosed cytologically. The overall frequency of normal, inadequate, neoplastic, and infective smears was 50%, 1.8%, 10.2%, and 38.3%, respectively. Most of the patients (67%) were in the reproductive age group, with a mean age of 34.7 ± 2.6 years. The commonest clinical sign, seen in 354/383 (92%) of cases, was vaginal discharge, and the commonest symptom, seen in 349/383 (91%), was pruritus vulvae. Among the infective smears, the cytologic diagnosis in 290 cases (75.7%) was nonspecific inflammation. Most of these 290 smears contained clue cells (indicating Gardnerella infection) and lacked lactobacilli; such smears predominate in patients suffering from bacterial vaginosis (BV). Twenty-eight smears (7.3%) were positive for Trichomonas vaginalis, 27 (7%) showed the koilocytic change pathognomonic of human papilloma virus infection, and 25 (6.5%) were positive for fungal infection. Seven cases (1.8%) were diagnosed as herpes simplex virus infection. Finally, there were six cases (1.5%) of atrophic vaginitis. We conclude that the cervical smear is well suited to diagnosing cervical infections. It is clear that Gardnerella, known to be associated with bacterial vaginosis, is a major problem in our Pakistani population.
abstract_id: PUBMED:26644228
Bacterial vaginosis and inflammatory response showed association with severity of cervical neoplasia in HPV-positive women. Vaginal infections may affect susceptibility to and clearance of human papillomavirus (HPV) infection, and chronic inflammation has been linked to carcinogenesis. This study aimed to evaluate the association of bacterial vaginosis (BV) and inflammatory response (IR) with the severity of cervical neoplasia in HPV-infected women. HPV DNA was amplified using PGMY09/11 primers, and genotyping was performed using a reverse line blot hybridization assay in 211 cervical samples from women submitted to excision of the transformation zone. The bacterial flora was assessed in Papanicolaou-stained smears, and positivity for BV was defined as ≥ 20% clue cells. An inflammatory response was considered present at ≥ 30 neutrophils per field at 1000× magnification. Age over 29 years (OR: 1.91, 95% CI 1.06-3.45), infection by types 16 and/or 18 (OR: 1.92, 95% CI 1.06-3.47), single or multiple infections associated with types 16 and/or 18 (OR: 1.92, 95% CI 1.06-3.47), BV (OR: 3.54, 95% CI 1.62-7.73), and IR (OR: 6.33, 95% CI 3.06-13.07) were associated with severity of cervical neoplasia (CIN 2 or worse diagnoses), while not smoking showed a protective effect (OR: 0.51, 95% CI 0.26-0.98). After controlling for confounding factors, BV (OR: 3.90, 95% CI 1.64-9.29) and IR (OR: 6.43, 95% CI 2.92-14.15) maintained their association with the severity of cervical neoplasia. Bacterial vaginosis and inflammatory response were independently associated with severity of cervical neoplasia in HPV-positive women, which seems to suggest that the microenvironment relates to the natural history of cervical neoplasia.
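The odds ratios and 95% confidence intervals reported above are commonly obtained with the Woolf (log) method from a 2x2 exposure-by-outcome table; a small sketch follows, using invented counts rather than the study's data.

```python
# Illustrative odds-ratio calculation with a Woolf (log-method) 95% CI.
# The 2x2 counts are invented for demonstration purposes only.
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """a/b = exposed with/without outcome; c/d = unexposed with/without outcome."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

print(odds_ratio_ci(a=40, b=30, c=50, d=91))  # e.g. OR ~2.43 with its 95% CI
```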
abstract_id: PUBMED:7840040
Preventing cervical cancer: the role of the Bethesda system. The Papanicolaou smear is a well-established component of preventive health protocols for women. The purpose of this screening tool is to detect precursor lesions of invasive cervical carcinoma; however, the natural progression of these lesions is unclear, and it currently is not possible to determine which of the many dysplastic findings have carcinogenic potential. Furthermore, disagreement exists concerning the time frame for the malignant transformation of dysplastic cervical lesions. Despite these concerns, cervical screening has been credited with reducing morbidity and mortality from invasive cervical carcinoma in certain populations, and almost all family physicians provide this service to their female patients. The Bethesda system of cytopathologic reporting (introduced in 1988 and revised in 1991) is designed to improve communication between pathologists and clinicians. Compared with other taxonomies, the Bethesda system allows for distinction between changes associated with inflammation and infection and those reflecting squamous cell atypia and dysplasia.
abstract_id: PUBMED:6589928
Chlamydial infection in Papanicolaou-stained cervical smears. Infection by Chlamydia trachomatis was frequently observed in routine cytologic smears studied for cancer detection. Seventy-three smears from 187 Chlamydia-positive cases seen in a two-and-one-half-year period were reviewed to establish the relationship between C. trachomatis infection and the incidence of metaplastic and dysplastic cells. An inflammatory process with metaplastic cells was found in 72.6% of the smears and dysplastic cells in 16.4%. The Papanicolaou stain gave enough detail not only for the identification of the inclusions but also for the relation of different types of inclusions to the different stages of the life cycle of the microorganism.
abstract_id: PUBMED:1462789
Does evidence of inflammation on Papanicolaou smears of pregnant women predict preterm labor and delivery? Background: Preterm delivery is the most common cause of neonatal morbidity and mortality in the United States. There is evidence that cervicovaginal infection could predispose to preterm labor. This study explored a possible association of evidence of inflammation on an otherwise normal Papanicolaou smear obtained during pregnancy with subsequent preterm labor and preterm delivery.
Methods: Using a retrospective matched cohort design, we studied women who gave birth to live singleton infants at the University of Missouri Hospital and Clinics during a 21-month period. Papanicolaou smears were obtained from 1 to 8 months before delivery and were interpreted in the same cytopathology laboratory. Data pertaining to outcome variables and potential confounding variables were collected from hospital charts.
Results: Incidence rates were 14.4 percent for labor < 37 weeks' gestation (preterm labor), 12.3 percent for hospitalization for preterm labor, 9.9 percent for delivery < 37 weeks (preterm delivery), 2.6 percent for delivery < 34 weeks, and 7.5 percent for birth weight < 2500 g. On univariate and multivariate analyses, there were no significant differences in any outcome between the 293 women with inflammation and the 284 women without inflammation on Papanicolaou smear. Results were unchanged when the analysis was limited to the 412 women who received no antibiotics during pregnancy. Among the 38 women with a history of preterm labor or preterm delivery, those with cervical inflammation had a higher rate of preterm labor than those without inflammation.
Conclusions: In the sample as a whole, there was little evidence that findings of inflammation on Papanicolaou smear constituted a risk factor for preterm labor or preterm delivery. The data suggest that inflammation could be associated with an increased risk in a subgroup of women at higher risk by virtue of their obstetric history.
abstract_id: PUBMED:10730380
The correlation between the study data on the Papanicolaou cytological smear and the data on direct immunofluorescence for C. trachomatis. A study was carried out to examine the relationship between the clinical data, the data from the Papanicolaou smear, and direct immunofluorescence for C. trachomatis in 70 women with infectious cervicitis. The prevalence of chlamydial infection in the study population was 27%. The study showed that one third of the women in whom the Papanicolaou smear revealed inflammatory changes might be expected to be infected with C. trachomatis. The probability of not having infection when no inflammatory changes were observed in the Papanicolaou smear was 0.74 (negative predictive value), which means that more than two thirds of the women without inflammatory changes on the cytology smear will not be infected with C. trachomatis.
Answer: The presence of inflammation on a cervical Papanicolaou (Pap) smear in asymptomatic women does not reliably predict the presence of a genital tract infection. Studies have shown that while there is an association between inflammation on Pap smears and the presence of certain pathogens, the predictive value for infection is relatively low. For instance, one study found that the positive predictive value of inflammation on a Pap smear was only 7% for Chlamydia and 14% for Trichomonas, with 71% of women with inflammation having no evidence of any of the tested organisms (PUBMED:1397815). Another study reported that about 60% of women with inflammation on the Pap smear had positive cultures for different pathogens, but 40.4% had negative cultures, indicating normal flora (PUBMED:24204103).
Furthermore, the detection of Chlamydia trachomatis infections using the Pap smear showed a sensitivity of 0.27 and a specificity of 0.80, indicating that while the Pap smear can identify some infections, it is not highly sensitive (PUBMED:2486964). In another study, 56.41% of smears with signs of inflammation suggesting Chlamydia trachomatis infection were confirmed by culture, and 51.28% by direct-immunofluorescence test (PUBMED:3069248).
In summary, while inflammation on a Pap smear in asymptomatic women can indicate the presence of an infection, it is not a definitive diagnostic tool for infections. A significant proportion of women with inflammation on their Pap smear may not have a detectable infection, and other diagnostic methods may be necessary to confirm the presence of pathogens (PUBMED:1397815; PUBMED:24204103; PUBMED:2486964; PUBMED:3069248). |
Instruction: Does the American College of Surgeons NSQIP-Pediatric Accurately Represent Overall Patient Outcomes?
Abstracts:
abstract_id: PUBMED:35926308
Are Kids More Than Just Little Adults? A Comparison of Surgical Outcomes. Introduction: While complication rates have been well described using the National Surgical Quality Improvement Program (NSQIP) and National Surgical Quality Improvement Program-Pediatric registries, there have been no direct comparisons of outcomes between adults and children. Our objective was to describe differences in postoperative outcomes between children and adults undergoing common surgical procedures.
Methods: Using data from 2013 to 2017, we identified patients undergoing laparoscopic appendectomy, laparoscopic cholecystectomy, thyroidectomy, and colectomy. Propensity score matching on gender, race, American Society of Anesthesiologists class, surgical indication, and procedure type was performed. Outcomes included surgical site infection (SSI), readmission rates, mortality/serious morbidity, and hospital length of stay, and were analyzed using the χ2 test and Student's t-test, with statistical significance defined as P < 0.05.
Results: We matched 79,866 patients from 812 hospitals. Compared to adults, children had higher rates of SSI following appendectomy (4.12% versus 1.40%, P < 0.01) and cholecystectomy (0.96% versus 0.66%, P = 0.04), readmission following appendectomy (4.26% versus 2.47%, P < 0.01), and longer length of stay in all procedures. In adults, 30-day mortality/serious morbidity was higher for all procedures.
Conclusions: Compared to adults, children demonstrate unique surgical complication and outcome profiles. Quality improvement efforts such as SSI prevention bundles and enhanced recovery protocols used in adults should be expanded to children.
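As a rough illustration of the propensity score matching used in the methods above, here is a minimal sketch of 1:1 greedy nearest-neighbor matching on a logistic-regression propensity score; the DataFrame layout, column handling, and greedy strategy are assumptions for illustration, not the registry's actual pipeline.

```python
# Minimal 1:1 propensity-score matching sketch (greedy nearest neighbor,
# without replacement). Assumes a pandas DataFrame with a binary group
# column (1 = child, 0 = adult, say) and categorical/numeric covariates.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_pairs(df: pd.DataFrame, group_col: str, covariates: list) -> pd.DataFrame:
    X = pd.get_dummies(df[covariates], drop_first=True).to_numpy(dtype=float)
    y = df[group_col].to_numpy()
    # Propensity score: estimated P(group == 1 | covariates).
    ps = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    treated, controls = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    used, pairs = set(), []
    for i in treated:
        # Closest unused control on the propensity score.
        j = min((c for c in controls if c not in used),
                key=lambda c: abs(ps[c] - ps[i]), default=None)
        if j is None:
            break
        used.add(j)
        pairs.append((i, j))
    return df.iloc[[k for pair in pairs for k in pair]]
```

Outcomes in the matched sample can then be compared with the χ2 test or t-test, as in the abstract.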
abstract_id: PUBMED:34801250
The influence of decreasing variable collection burden on hospital-level risk-adjustment. Background: Risk-adjustment is a key feature of the American College of Surgeons National Surgical Quality Improvement Program-Pediatric (NSQIP-Ped). Risk-adjusted model variables require meticulous collection and periodic assessment. This study presents a method for eliminating superfluous variables using the congenital malformation (CM) predictor variable as an example.
Methods: This retrospective cohort study used NSQIP-Ped data from January 1st to December 31st, 2019 from 141 hospitals to compare six risk-adjusted mortality and morbidity outcome models with and without CM as a predictor. Model performance was compared using C-index and Hosmer-Lemeshow (HL) statistics. Hospital-level performance was assessed by comparing changes in outlier statuses, adjusted quartile ranks, and overall hospital performance statuses between models with and without CM inclusion. Lastly, Pearson correlation analysis was performed on log-transformed ORs between models.
Results: Model performance was similar with removal of CM as a predictor. The difference between C-index statistics was minimal (≤ 0.002). Graphical representations of model HL-statistics with and without CM showed considerable overlap and only one model attained significance, indicating minimally decreased performance (P = 0.058 with CM; P = 0.044 without CM). Regarding hospital-level performance, minimal changes in the number and list of hospitals assigned to each outlier status, adjusted quartile rank, and overall hospital performance status were observed when CM was removed. Strong correlation between log-transformed ORs was observed (r ≥ 0.993).
Conclusions: Removal of CM from NSQIP-Ped has minimal effect on risk-adjusted outcome modelling. Similar efforts may help balance optimal data collection burdens without sacrificing highly valued risk-adjustment in the future.
Level Of Evidence: Level II prognosis study.
abstract_id: PUBMED:27666656
Development and Evaluation of the American College of Surgeons NSQIP Pediatric Surgical Risk Calculator. Background: There is an increased desire among patients and families to be involved in the surgical decision-making process. A surgeon's ability to provide patients and families with patient-specific estimates of postoperative complications is critical for shared decision making and informed consent. Surgeons can also use patient-specific risk estimates to decide whether or not to operate and what options to offer patients. Our objective was to develop and evaluate a publicly available risk estimation tool that would cover many common pediatric surgical procedures across all specialties.
Study Design: American College of Surgeons NSQIP Pediatric standardized data from 67 hospitals were used to develop a risk estimation tool. Surgeons enter 18 preoperative variables (demographics, comorbidities, procedure) that are used in a logistic regression model to predict 9 postoperative outcomes. A surgeon adjustment score is also incorporated to adjust for any additional risk not accounted for in the 18 risk factors.
Results: A pediatric surgical risk calculator was developed based on 181,353 cases covering 382 CPT codes across all specialties. It had excellent discrimination for mortality (c-statistic = 0.98), morbidity (c-statistic = 0.81), and 7 additional complications (c-statistic > 0.77). The Hosmer-Lemeshow statistic and graphic representations also showed excellent calibration.
Conclusions: The ACS NSQIP Pediatric Surgical Risk Calculator was developed using standardized and audited multi-institutional data from the ACS NSQIP Pediatric, and it provides empirically derived, patient-specific postoperative risks. It can be used as a tool in the shared decision-making process by providing clinicians, families, and patients with useful information for many of the most common operations performed on pediatric patients in the US.
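A toy version of the calculator's core design described above — a logistic model mapping preoperative variables to a probability of a postoperative outcome, with discrimination summarized by the c-statistic (equivalent to the AUROC) — might look like the sketch below; the data are simulated, and this is not the published model.

```python
# Simulated sketch of a logistic-regression risk model with c-statistic
# evaluation. The 18 predictors mirror the abstract's variable count;
# everything else (coefficients, prevalence, sample size) is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_predictors = 5000, 18
X = rng.normal(size=(n_patients, n_predictors))
true_logit = -3.0 + X @ rng.normal(scale=0.5, size=n_predictors)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]          # patient-specific predicted risk
print(f"c-statistic: {roc_auc_score(y_te, risk):.3f}")
```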
abstract_id: PUBMED:34175114
Validation study of the ACS NSQIP surgical risk calculator for two procedures in Japan. Introduction: The ACS NSQIP Surgical Risk Calculator (SRC) assesses risk to support goal-concordant care. While it accurately predicts US outcomes, its performance internationally is unknown. This study evaluates SRC accuracy in predicting mortality following low anterior resection (LAR) and pancreaticoduodenectomy (PD) in NSQIP patients and accuracy retention when applied to native Japanese patients (National Clinical Database, NCD).
Methods: NSQIP (41,260 LAR; 15,114 PD) and NCD cases (61,220 LAR; 27,901 PD) from 2015 to 2017 were processed through the SRC mortality model. Country-specific calibration and discrimination were assessed with and without an intercept correction applied to the Japanese data.
Results: The SRC exhibited acceptable calibration for LAR and PD when applied to NSQIP data but miscalibration for NCD data. A simple correction to the model intercept, motivated by lower mortality rates in the Japanese data, successfully remediated the miscalibration.
Conclusions: The SRC may inaccurately predict surgical risk when applied to the native Japanese population. An intercept correction method is suggested when miscalibration is encountered; it is simple to implement and may permit effective international use of the SRC.
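The intercept correction described above amounts to shifting every prediction on the logit scale until the mean predicted risk matches the target population's observed event rate, while leaving the model's slopes untouched. A minimal sketch, assuming a vector of predictions and an observed rate are available, follows; it is not the SRC's published correction.

```python
# Recalibration-in-the-large sketch: solve for the logit-scale shift that
# makes the mean predicted risk equal the observed event rate.
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit, logit

def intercept_corrected(pred_risk: np.ndarray, observed_rate: float) -> np.ndarray:
    lp = logit(np.clip(pred_risk, 1e-9, 1 - 1e-9))
    # Find the shift s such that mean(expit(lp + s)) equals the observed rate.
    shift = brentq(lambda s: expit(lp + s).mean() - observed_rate, -10.0, 10.0)
    return expit(lp + shift)

# Example: predictions averaging ~3% mortality, target registry observing 1%.
preds = np.random.default_rng(1).beta(1, 30, size=10_000)
print(intercept_corrected(preds, observed_rate=0.01).mean())  # ~0.01
```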
abstract_id: PUBMED:38144509
The Accuracy of the NSQIP Universal Surgical Risk Calculator Compared to Operation-Specific Calculators. Objective: To compare the performance of the ACS NSQIP "universal" risk calculator (N-RC) to operation-specific RCs.
Background: Resources have been directed toward building operation-specific RCs because of an implicit belief that they would provide more accurate risk estimates than the N-RC. However, operation-specific calculators may not provide sufficient improvements in accuracy to justify the costs in development, maintenance, and access.
Methods: For the N-RC, a cohort of 5,020,713 NSQIP patient records were randomly divided into 80% for machine learning algorithm training and 20% for validation. Operation-specific risk calculators (OS-RC) and OS-RCs with operation-specific predictors (OSP-RC) were independently developed for each of 6 operative groups (colectomy, whipple pancreatectomy, thyroidectomy, abdominal aortic aneurysm (open), hysterectomy/myomectomy, and total knee arthroplasty) and 14 outcomes using the same 80%/20% rule applied to the appropriate subsets of the 5M records. Predictive accuracy was evaluated using the area under the receiver operating characteristic curve (AUROC), the area under the precision-recall curve (AUPRC), and Hosmer-Lemeshow (H-L) P values, for 13 binary outcomes, and mean squared error for the length of stay outcome.
Results: The N-RC was found to have greater AUROC (P = 0.002) and greater AUPRC (P < 0.001) compared to the OS-RC. No other statistically significant differences in accuracy, across the 3 risk calculator types, were found. There was an inverse relationship between the operation group sample size and magnitude of the difference in AUROC (r = -0.278; P = 0.014) and in AUPRC (r = -0.425; P < 0.001) between N-RC and OS-RC. The smaller the sample size, the greater the superiority of the N-RC.
Conclusions: While operation-specific RCs might be assumed to have advantages over a universal RC, their reliance on smaller datasets may reduce their ability to accurately estimate predictor effects. In the present study, this tradeoff between operation specificity and accuracy in estimating the effects of predictor variables favors the N-RC, though the clinical impact is likely to be negligible.
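Several of the abstracts above judge calibration with the Hosmer-Lemeshow statistic; a compact, generic decile-of-risk implementation is sketched below (a textbook version, not the registries' analysis code).

```python
# Generic Hosmer-Lemeshow calibration test: bin patients by deciles of
# predicted risk, then compare observed vs. expected events with a
# chi-square statistic on g - 2 degrees of freedom.
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y: np.ndarray, risk: np.ndarray, g: int = 10):
    edges = np.quantile(risk, np.linspace(0.0, 1.0, g + 1))
    bins = np.clip(np.searchsorted(edges, risk, side="right") - 1, 0, g - 1)
    stat = 0.0
    for b in range(g):
        mask = bins == b
        n = mask.sum()
        if n == 0:
            continue
        observed, expected = y[mask].sum(), risk[mask].sum()
        stat += (observed - expected) ** 2 / (expected * (1 - expected / n) + 1e-12)
    return stat, chi2.sf(stat, g - 2)

rng = np.random.default_rng(0)
risk = rng.uniform(0.01, 0.5, size=2000)
y = rng.binomial(1, risk)              # perfectly calibrated by construction
print(hosmer_lemeshow(y, risk))        # expect a non-significant p-value
```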
abstract_id: PUBMED:21238651
Pediatric American College of Surgeons National Surgical Quality Improvement Program: feasibility of a novel, prospective assessment of surgical outcomes. Purpose: The American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP) provides validated assessment of surgical outcomes. This study reports initiation of an ACS NSQIP Pediatric at 4 children's hospitals.
Methods: From October 2008 to June 2009, 121 data variables were prospectively collected for 3315 patients, including 30-day outcomes and tailoring the ACS NSQIP methodology to children's surgical specialties.
Results: Three hundred seven postoperative complications/occurrences were detected in 231 patients representing 7.0% of the study population. Of the patients with complications, 175 (75.7%) had 1, 39 (16.9%) had 2, and 17 (7.4%) had 3 or more complications. There were 13 deaths (0.39%) and 14 intraoperative occurrences (0.42%) detected. The most common complications were infection, 105 (34%) (SSI, 54; sepsis, 31; pneumonia, 13; urinary tract infection, 7); airway/respiratory events, 27 (9%); wound disruption, 18 (6%); neurologic events, 8 (3%) (nerve injury, 4; stroke/vascular event, 2; hemorrhage, 2); deep vein thrombosis, 3 (<1%); renal failure, 3 (<1%); and cardiac events, 3 (<1%). Current sampling captures 17.5% of cases across institutions with unadjusted complication rates ranging from 6.8% to 10.2%. Completeness of data collection for all variables exceeded 95% with 98% interrater reliability and 87% of patients having full 30-day follow-up.
Conclusion: These data represent the first multiinstitutional prospective assessment of specialty-specific surgical outcomes in children. The ACS NSQIP Pediatric is poised for institutional expansion and future development of risk-adjusted models.
abstract_id: PUBMED:30374815
Combining Surgical Outcomes and Patient Experiences to Evaluate Hospital Gastrointestinal Cancer Surgery Quality. Background: Assessments of surgical quality should consider both surgeon and patient perspectives simultaneously. Focusing on patients undergoing major gastrointestinal cancer surgery, we sought to characterize hospitals, and their patients, on both these axes of quality.
Methods: Using the American College of Surgeons' National Surgical Quality Improvement Program registry, hospitals were profiled on a risk-adjusted composite measure of death or serious morbidity (DSM) generated from patients who underwent colectomy, esophagectomy, hepatectomy, pancreatectomy, or proctectomy for cancer between January 1, 2015 and December 31, 2016. These hospitals were also profiled using the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey. Highest-performing hospitals on both quality axes, and their respective patients, were compared to the lowest-performing hospitals.
Results: Overall, 60,526 patients underwent their cancer operation at 530 hospitals. There were 38 highest- and 48 lowest-performing hospitals. The correlation between quality axes was poor (ρ = 0.10). Compared to the lowest-performing hospitals, the highest-performing hospitals were more often NCI-designated cancer centers (29.0% vs. 4.2%, p = 0.002) and cared for a lower proportion of Medicaid patients (0.14 vs. 0.23, p < 0.001). Patients who had their operations at the lowest- versus highest-performing hospitals were more often black (17.2% vs. 8.4%, p < 0.001), Hispanic (8.3% vs. 3.5%, p < 0.001), functionally dependent (3.8% vs. 0.9%, p < 0.001), and not admitted from home (4.4% vs. 2.4%, p < 0.001).
Conclusions: Hospital performance varied when assessed by both risk-adjusted surgical outcomes and patient experiences. In this study, poor-performing hospitals appeared to be disproportionately serving disadvantaged and minority cancer patients.
abstract_id: PUBMED:29482831
Sustained culture and surgical outcome improvement. Background: A focus on the culture of safety and patient outcomes continues to grow in importance. Several initiatives targeted at individual deficits have been described but few institutions have shown the effect of a global change in culture on patient outcomes.
Methods: Patient care perception was assessed using Safety Attitudes Questionnaire (SAQ) by Pascal Metrics®. A change in culture was initiated, followed by implementation of initiatives targeting communication and patient safety. ACS-NSQIP data was analyzed to assess outcomes during the period of improved culture.
Results: Our institution had poor outcomes as measured by ACS-NSQIP data and several deficiencies in our culture score. Both statistically improved after initiative implementation. A difference in mean culture score across time (p < 0.001 to 0.031) was seen from 2013 to 2015, while the number of NSQIP odds ratios falling in the 'exemplary' category increased.
Conclusion: Our results demonstrate an improvement in both culture and outcomes from 2013 to 2015, suggesting a correlation between culture and surgical outcomes.
abstract_id: PUBMED:37075656
The past, present and future of ACS NSQIP-Pediatric: Evolution from a quality registry to a comparative quality performance platform. Quality and process improvement (QI/PI) in children's surgical care require reliable data across the care continuum. Since 2012, the American College of Surgeons' (ACS) National Surgical Quality Improvement Program-Pediatric (NSQIP-Pediatric) has supported QI/PI by providing participating hospitals with risk-adjusted, comparative data regarding postoperative outcomes for multiple surgical specialties. To advance this goal over the past decade, iterative changes have been introduced to case inclusion and data collection, analysis and reporting. New datasets for specific procedures, such as appendectomy, spinal fusion for scoliosis, vesicoureteral reflux procedures, and tracheostomy in children less than 2 years old, have incorporated additional risk factors and outcomes to enhance the clinical relevance of data, and resource utilization to consider healthcare value. Recently, process measures for urgent surgical diagnoses and surgical antibiotic prophylaxis variables have been developed to promote timely and appropriate care. While a mature program, NSQIP-Pediatric remains dynamic and responsive to meet the needs of the surgical community. Future directions include introduction of variables and analyses to address patient-centered care and healthcare equity.
abstract_id: PUBMED:25881789
Patterns of care among patients undergoing hepatic resection: a query of the National Surgical Quality Improvement Program-targeted hepatectomy database. Background: The American College of Surgeons recently added liver-specific variables to the National Surgical Quality Improvement Program (NSQIP). We sought to use these variables to define patterns of care, as well as characterize perioperative outcomes among patients undergoing hepatic resection.
Methods: The American College of Surgeons-NSQIP database was queried for all patients undergoing hepatic resection between January 1, 2013 and December 31, 2013 (n = 2448). Liver-specific variables were summarized.
Results: Preoperatively, 11.3% of patients had hepatitis B, hepatitis C, or both, whereas 9.2% had cirrhosis. The indication for hepatic resection was benign (20.8%) or malignant (74.2%) disease. Among patients with a malignant indication, metastatic disease (47.3%) was more common than primary liver cancer (26.9%). Preoperative treatment included neoadjuvant chemotherapy (25.5%), portal vein embolization (2.1%), and intra-arterial therapy (0.9%). At surgery, most patients underwent an open hepatic resection (70.7%), whereas 21.4% and 1.1% underwent a laparoscopic or robotic procedure, respectively. The Pringle maneuver was used in 27.7% of patients. While 6.5% of patients had a concomitant hepaticojejunostomy, 10.1% had a concurrent ablation. An operative drain was placed in nearly half of patients (46.5%; minor resection: 42.0% versus major resection: 53.4%; P < 0.001). Among the entire cohort, bile leak (7.3%; minor resection: 4.9% versus major resection: 10.9%; P < 0.001) and liver insufficiency and/or failure (3.8%; minor resection: 1.9% versus major resection: 6.9%; P < 0.001) were relatively uncommon. A subset of patients (9.5%) did experience major liver-specific complications that required intervention (drainage of collection and/or abscess: 38.4%; stenting for biliary obstruction and/or leak: 21.2%; biloma drainage: 18.4%).
Conclusions: In addition to standard variables, the new inclusion of liver-specific variables provides a unique opportunity to study NSQIP outcomes and practice patterns among patients undergoing hepatic resection.
Answer: The American College of Surgeons National Surgical Quality Improvement Program-Pediatric (ACS NSQIP-Pediatric) is designed to provide reliable data for quality and process improvement in children's surgical care. It offers participating hospitals risk-adjusted, comparative data on postoperative outcomes across multiple surgical specialties (PUBMED:37075656). The program has evolved from a quality registry to a comparative quality performance platform, with iterative changes introduced to case inclusion, data collection, analysis, and reporting to enhance the clinical relevance of data and consider healthcare value (PUBMED:37075656).
The ACS NSQIP Pediatric Surgical Risk Calculator, developed using standardized and audited multi-institutional data from the ACS NSQIP Pediatric, provides empirically derived, patient-specific postoperative risks. It has shown excellent discrimination for mortality, morbidity, and additional complications, indicating its utility in shared decision-making processes (PUBMED:27666656).
However, the accuracy of the NSQIP Pediatric in representing overall patient outcomes can vary. For instance, a study comparing surgical outcomes between children and adults found that children had unique surgical complication and outcome profiles, suggesting that quality improvement efforts such as SSI prevention bundles and enhanced recovery protocols used in adults should be expanded to children (PUBMED:35926308). Additionally, efforts to reduce the data collection burden by eliminating superfluous variables, such as the congenital malformation (CM) predictor, have shown minimal effect on risk-adjusted outcome modeling, indicating that the NSQIP-Pediatric can maintain its accuracy even with streamlined data collection (PUBMED:34801250).
Moreover, the NSQIP Universal Surgical Risk Calculator has been compared to operation-specific calculators, and the universal calculator was found to perform as well as or better than the operation-specific ones, suggesting that the NSQIP can accurately represent patient outcomes across a range of procedures (PUBMED:38144509).
In conclusion, the ACS NSQIP Pediatric appears to accurately represent overall patient outcomes, providing valuable data for quality improvement and informed decision-making in pediatric surgery. However, it is important to continuously evaluate and refine the program to ensure it remains responsive to the needs of the surgical community and the patients it serves. |
Instruction: Orthotopic heart transplantation hemodynamics: does atrial preservation improve cardiac output after transplantation?
Abstracts:
abstract_id: PUBMED:8803753
Orthotopic heart transplantation hemodynamics: does atrial preservation improve cardiac output after transplantation? Background: We have described an alternative technique for orthotopic heart transplantation (bicaval Wythenshawe technique) which maintains the right and left atrial anatomy and contractility.
Methods: Fifty patients were randomized into two groups: group A (n = 25), who underwent orthotopic heart transplantation using the bicaval Wythenshawe technique, and group B (n = 25), who underwent the conventional (Lower and Shumway) technique of orthotopic heart transplantation. We compared the cardiac output (measured by the thermodilution technique) with atrial activation (AAI pacing) to the cardiac output without atrial activity (VVI pacing) in both groups to identify any beneficial hemodynamic effects. All patients were studied in the first and second weeks after transplantation. The inaccuracies in comparing cardiac output measurements caused by differing loading conditions, inotropic state, and systemic vascular resistance were eliminated by using each patient as his or her own control.
Results: The difference between cardiac output measured with atrial pacing and with ventricular pacing was 1.42 +/- 0.44 L/min in group A, compared with 0.32 +/- 0.4 L/min in group B (p = 0.001, Wilcoxon signed-rank test). The atrial contribution to cardiac output was 30% +/- 12% (mean +/- SD) in group A, compared with 7% +/- 9% in group B. The mean stroke volume in group A was higher in sinus rhythm (65 +/- 19.2 ml) and with atrial pacing (62 +/- 17.7 ml) than with ventricular pacing (49.17 +/- 16.43 ml), p = 0.001. In group B, no statistically significant difference was found between stroke volumes measured with atrial (47.71 +/- 6.23 ml) and ventricular pacing (46.9 +/- 6.35 ml).
Conclusions: We conclude that the bicaval technique of orthotopic heart transplantation preserves the atrial kick and its contribution to cardiac output early after transplantation.
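The within-patient design above — each patient's cardiac output measured under atrial (AAI) and ventricular (VVI) pacing, with paired differences tested by the Wilcoxon signed-rank test — can be illustrated with simulated data as follows; the numbers are invented to echo, not reproduce, the reported magnitudes.

```python
# Simulated paired comparison of cardiac output with vs. without atrial
# activation, analyzed as in the abstract (Wilcoxon signed-rank test).
# All values are synthetic; n = 25 matches the per-group sample size.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)
n = 25
co_vvi = rng.normal(4.0, 0.8, size=n)            # CO without atrial kick, L/min
co_aai = co_vvi + rng.normal(1.4, 0.4, size=n)   # bicaval-like gain of ~1.4 L/min

stat, p = wilcoxon(co_aai, co_vvi)
atrial_pct = ((co_aai - co_vvi) / co_aai * 100).mean()
print(f"Wilcoxon signed-rank p = {p:.4f}; mean atrial contribution = {atrial_pct:.0f}%")
```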
abstract_id: PUBMED:16202067
hDAF porcine cardiac xenograft maintains cardiac output after orthotopic transplantation into baboon--a perioperative study. Background: Only limited data are available on the physiological functional compatibility of cardiac xenografts after orthotopic pig to baboon transplantation (oXHTx). Thus we investigated hemodynamic parameters including cardiac output (CO) before and after oXHTx.
Methods: Orthotopic xenogeneic heart transplantation from nine hDAF-transgenic piglets to baboons was performed. We used femoral arterial thermodilution for invasive assessment of CO and stroke volume.
Results: Baseline CO of the baboons after induction of anesthesia was 1.36 (1.0-1.9) l/min. Thirty to 60 min after termination of cardiopulmonary bypass, CO of the cardiac xenograft was significantly increased to 1.72 (1.3-2.1) l/min (P < 0.01). The stroke volumes of the baboon heart before transplantation and of the cardiac xenograft were comparable [14.9 (11-26) vs. 11.8 (10-23) ml]. Thus the higher CO was achieved by an increase in heart rate after oXHTx [75.0 (69-110) vs. 140.0 (77-180)/min; P < 0.01]. Despite the increased CO, oxygen delivery was reduced [256 (251-354) vs. 227 (172-477) ml/min; P < 0.01] due to the inevitable hemodilution during cardiopulmonary bypass and the blood loss caused by the surgical procedures.
Conclusion: Our results demonstrate that in the early phase after orthotopic transplantation of hDAF pig hearts into baboons, cardiac function of the donor heart is maintained and exceeds baseline CO. However, in the early intraoperative phase this was possible only with inotropic substances and vasopressors, owing to the inevitable blood loss and the dilution caused by priming of the bypass circuit.
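The abstract's observation that the higher post-transplant output was rate-driven follows from the identity CO = heart rate × stroke volume; the quick check below uses values near the reported medians and is an approximation, not the study's calculation.

```python
# CO (L/min) = heart rate (beats/min) x stroke volume (mL) / 1000.
def cardiac_output_l_per_min(hr_per_min: float, sv_ml: float) -> float:
    return hr_per_min * sv_ml / 1000.0

print(cardiac_output_l_per_min(75, 14.9))    # ~1.1 L/min at baseline-like values
print(cardiac_output_l_per_min(140, 11.8))   # ~1.7 L/min: similar SV, ~doubled rate
```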
abstract_id: PUBMED:3149258
An experimental study on heart and lung preservation and transplantation: autoperfusion method and cardiac and pulmonary functions after transplantation. Heart-lung transplantation has provided long-term survival for patients with end-stage cardiopulmonary disease, yet many potential recipients cannot receive a transplant because of a paucity of satisfactory donors. One reason is the difficulty of prolonged heart-lung preservation, which has imposed a significant barrier to donor procurement. Using autoperfusion of the heart and lung, six-hour preservation was successfully achieved in six mongrel dogs, and adequate conditions for preservation were evaluated. (1) Glucose metabolism, (2) electrolytes, (3) acid-base balance, (4) pulmonary blood flow, (5) temperature, and (6) ventilation were considered important factors for prolonged preservation. Hearts and lungs preserved for six hours were transplanted into thirteen dogs in heterotopic and orthotopic models, and their functions were evaluated two hours after transplantation. Cardiac function was well preserved, and pathological changes in the cardiac muscle were minimal. Lung preservation, however, was less stable: PaO2 levels were variable, and pathological changes of the donor lungs, such as pulmonary edema and emphysema, were observed. Further studies are needed on lung preservation.
abstract_id: PUBMED:10930819
FK409 ameliorates ischemia-reperfusion injury in heart transplantation following 12-hour cold preservation. Objective: FK409 is the first spontaneous nitric oxide donor to increase plasma guanosine 3':5'-cyclic monophosphate. We designed this study to investigate whether the administration of FK409 during reperfusion ameliorated ischemia-reperfusion injury and enhanced post-transplant graft function in orthotopic heart transplantation following 12-hour cold preservation in a canine model.
Methods: We used 10 pairs of adult mongrel dogs weighing 9.5 to 13.5 kg. Following cardiac arrest using cardioplegia, we washed out the coronary vascular beds with cold University of Wisconsin solution, followed by 12-hour preservation. After preservation, we performed orthotopic transplantation. The experimental animals were divided into 2 groups. In the FK group (n = 5), FK409 (5 microg/kg/min) was administered intravenously, beginning 15 minutes before the onset of reperfusion and continuing for 45 minutes after reperfusion. In the control group (n = 5), saline vehicle was administered in the same manner. Two hours after transplantation, we assessed cardiac function, including cardiac output, left ventricular systolic pressure (LVP), and the maximum rates of rise and fall of LVP (+/-LV dP/dt), expressed as the recovery rate (%) relative to the cardiac function of the donor animal. We measured endothelin-1 levels in blood obtained from a catheter inserted into the coronary sinus 30, 60, and 120 minutes after reperfusion.
Results: Cardiac output was higher in the FK group than in the control group, but the difference was not significant (p = 0.08). Left ventricular systolic pressure and +/-LV dP/dt were significantly (p < 0.05) higher in the FK group than in the control group. Endothelin-1 levels were significantly (p < 0.05) lower in the FK group than in the control group 30 minutes after reperfusion. Transmission electron microscopy showed that the basal lamina of capillary vessels, glycogen granules, and mitochondrial structure were well-preserved in the FK group.
Conclusions: In orthotopic transplantation models, FK409 is effective in ameliorating ischemia-reperfusion injury following preservation and in enhancing post-transplant cardiac function.
abstract_id: PUBMED:11446505
Ischemic preconditioning and nicorandil pretreatment improve donor heart preservation. The present study investigated the effects of ischemic preconditioning (IPC) and nicorandil pretreatment on myocardial storage in a donor heart preservation model. Isolated rat hearts were separated into groups: group 1, non-preconditioned control group; group 2, 2.5 min of normothermic ischemia followed by 15 min of normothermic Langendorff perfusion (one IPC cycle); and group 3, 2 cycles of IPC. All hearts were subsequently stored in University of Wisconsin solution at 4 degrees C for 2, 4 and 6h, and the concentrations of high-energy phosphate metabolites were measured for each time point. Heart function parameters (aortic flow, coronary flow and cardiac output) were measured when the heart was reperfused following the 2, 4 or 6 h of preservation. The effects of nicorandil, an ATP-sensitive potassium channel opener, on heart function following preservation were also evaluated. Nicorandil was injected intravenously before heart harvesting. The results showed that the energy status was well preserved in the IPC groups. The 2-cycle IPC group showed better recovery of heart function following preservation. Pretreatment with nicorandil also improved functional recovery of the heart following preservation. The present study showed that IPC of the rat heart resulted in improved myocardial energy metabolism and functional recovery after hypothermic preservation, and that nicorandil has potential for pharmacological preconditioning in heart preservation for transplantation.
abstract_id: PUBMED:25132982
Preservation solutions for cardiac and pulmonary donor grafts: a review of the current literature. Hypothermic preservation of donor grafts is imperative to ameliorate ischemia-related cellular damage prior to organ transplantation. Numerous solutions are in existence, with widespread variability among transplant centers and no consensus regarding the optimal preservation solution. Here, we present a concise review of pertinent preservation studies involving cardiac and pulmonary allografts in an attempt to minimize the variability among institutions and potentially improve graft and patient survival. A biochemical comparison of common preservation solutions was undertaken with an emphasis on Euro-Collins (EC), University of Wisconsin (UW), histidine-tryptophan-ketoglutarate (HTK), Celsior (CEL), Perfadex (PER), Papworth, and Plegisol. An appraisal of the literature on the aforementioned preservation solutions in the setting of cardiac and pulmonary transplantation ensued. Available evidence supports UW solution as the preservation solution of choice for cardiac transplants, with encouraging outcomes relative to notable contenders such as CEL. Despite its success in the setting of cardiac transplantation, its use in pulmonary transplantation remains suboptimal, and improved outcomes may be seen with PER. Together, based on the literature, we suggest that the use of UW solution and PER for cardiac and pulmonary transplants, respectively, may improve transplant outcomes such as graft and patient survival.
abstract_id: PUBMED:17661768
Twenty-four hours postoperative results after orthotopic cardiac transplantation in swine. Background: In-vivo explants in pigs are well established for investigating myocardial function directly after transplantation. However, no functional data are available for longer periods after transplantation. We have established a pig model to investigate myocardial function 24 hours after orthotopic transplantation.
Materials And Methods: Orthotopic cardiac transplantations (HTx) in pigs were performed with a postoperative observation period of 24 hours (n = 6). To analyze myocardial function after transplantation, hemodynamic parameters (Swan-Ganz and impedance-catheter data) as well as tissue and blood samples were obtained. Regional myocardial blood flow (RMBF) was assessed using fluorescent microspheres.
Results: The impedance-catheter parameters demonstrated a preserved contractility in both ventricles 24 hours post-transplantation. In contrast, cardiac output 24 hours after HTx was diminished by 50% as compared to the preoperative value. Conversely, pulmonary vascular resistance increased significantly. The RMBF was increased in both ventricles. Metabolic and histological analyses indicate myocardial recovery 24 hours after HTx with no irreversible damage.
Conclusions: For the first time, we were able to establish a porcine model to investigate myocardial function 24 hours after heart transplantation. While the contractility of the transplanted hearts was well preserved, impaired cardiac output was accompanied by an increase in pulmonary vascular resistance. Using this clinically relevant model, improvements in human cardiac transplantation, and especially in post-transplant contractile dysfunction, could be investigated.
abstract_id: PUBMED:32841431
Cold non-ischemic heart preservation with continuous perfusion prevents early graft failure in orthotopic pig-to-baboon xenotransplantation. Background: Successful preclinical transplantations of porcine hearts into baboon recipients are required before commencing clinical trials. Despite years of research, over half of the orthotopic cardiac xenografts were lost during the first 48 hours after transplantation, primarily caused by perioperative cardiac xenograft dysfunction (PCXD). To decrease the rate of PCXD, we adopted a preservation technique of cold non-ischemic perfusion for our ongoing pig-to-baboon cardiac xenotransplantation project.
Methods: Fourteen orthotopic cardiac xenotransplantation experiments were carried out with genetically modified juvenile pigs (GGTA1- KO/hCD46/hTBM) as donors and captive-bred baboons as recipients. Organ preservation was compared according to the two techniques applied: cold static ischemic cardioplegia (IC; n = 5) and cold non-ischemic continuous perfusion (CP; n = 9) with an oxygenated albumin-containing hyperoncotic cardioplegic solution containing nutrients, erythrocytes and hormones. Prior to surgery, we measured serum levels of preformed anti-non-Gal-antibodies. During surgery, hemodynamic parameters were monitored with transpulmonary thermodilution. Central venous blood gas analyses were taken at regular intervals to estimate oxygen extraction, as well as lactate production. After surgery, we measured troponine T and serum parameters of the recipient's kidney, liver and coagulation functions.
Results: In porcine grafts preserved with IC, we found significantly depressed systolic cardiac function after transplantation which did not recover despite increasing inotropic support. Postoperative oxygen extraction and lactate production were significantly increased. Troponin T, creatinine, aspartate aminotransferase levels were pathologically high, whereas prothrombin ratios were abnormally low. In three of five IC experiments, PCXD developed within 24 hours. By contrast, all nine hearts preserved with CP retained fully preserved systolic function, none showed any signs of PCXD. Oxygen extraction was within normal ranges; serum lactate as well as parameters of organ functions were only mildly elevated. Preformed anti-non-Gal-antibodies were similar in recipients receiving grafts from either IC or CP preservation.
Conclusions: While standard ischemic cardioplegia solutions have been used with great success in human allotransplantation over many years, our data indicate that they are insufficient for preservation of porcine hearts transplanted into baboons: Ischemic storage caused severe impairment of cardiac function and decreased tissue oxygen supply, leading to multi-organ failure in more than half of the xenotransplantation experiments. In contrast, cold non-ischemic heart preservation with continuous perfusion reliably prevented early graft failure. Consistent survival in the perioperative phase is a prerequisite for preclinical long-term results after cardiac xenotransplantation.
abstract_id: PUBMED:1540062
Cardiac function and myocardial performance of 24-hour-preserved asphyxiated canine hearts. A method of 24-hour storage of asphyxiated canine hearts for orthotopic cardiac transplantation was studied to expand the geographical size of the donor pool. Left ventricular function of asphyxiated hearts preserved for 24 hours (group 1, n = 8) was compared with that of hearts donated on-site (group 2, n = 5). Group 1 donors were pretreated with verapamil hydrochloride, propranolol hydrochloride, and prostacyclin. The donor hearts were perfused with warm blood cardioplegia in situ after 10 minutes of asphyxiation and then perfused with cold crystalloid cardioplegia for 2 hours. The hearts were excised and stored in ice-cold University of Wisconsin solution for 22 hours. At orthotopic transplantation, coronary perfusion with warm blood cardioplegia was performed before the graft aorta was unclamped. Conventional cardiac variables (eg, cardiac output and maximum rate of rise of left ventricular pressure), myocardial performance, and diastolic compliance of grafted hearts were assessed 1 hour after weaning from bypass. All recipients in both groups were easily weaned from cardiopulmonary bypass without inotropic agents, and there were no significant differences in cardiac variables between the two groups. These results strongly suggest that cadaver hearts can be preserved for 24 hours with satisfactory cardiac function.
abstract_id: PUBMED:21364498
A recombinant human neuregulin-1 peptide improves preservation of the rodent heart after prolonged hypothermic storage. Background: Donor hearts are subjected to ischemia-reperfusion injury during transplantation. Recombinant human neuregulin (rhNRG)-1 peptide attenuates myocardial injury in various animal models of cardiomyopathy. Supplementing the organ-storage solution, Celsior (C), with glyceryl trinitrate (GTN) and cariporide improves cardiac preservation after hypothermic storage. We hypothesized that the addition of rhNRG-1 to C would improve cardiac preservation after hypothermic storage and provide incremental benefit in combination with GTN and cariporide.
Methods: An isolated working rat heart model was used. To assess the effect of rhNRG-1, hearts were stored for 6 hr at 4°C in C ± rhNRG-1 (14 nM). To assess the effect of using a combination of prosurvival kinase activators on cardiac preservation, the ischemic storage time was extended to 10 hr and hearts stored in C ± rhNRG-1 (14 nM) ± GTN (0.1 mg/mL) ± Cariporide (10 μM). Hearts were subsequently reperfused, cardiac function remeasured, and tissue collected for protein analysis and immunohistochemistry. Optimal timing of rhNRG-1 administration was also assessed.
Results: rhNRG-1 supplemented C improved functional recovery after 6 hr of storage (cardiac output recovery [mean ± SEM]: control 1.4% ± 0.6%; rhNRG-1+C 21.1% ± 7.9%; P<0.05). After 10-hr storage, no improvement in functional recovery was observed with rhNRG-1, GTN, or cariporide alone; however, GTN combined with cariporide did improve recovery (P<0.01), which was further enhanced by the addition of rhNRG-1 (P<0.01). Functional improvements were accompanied by increased phosphorylation of Akt, ERK1/2, STAT3, and GSK-3β and reduced cleaved caspase-3 (P<0.01).
Conclusions: rhNRG-1 given together with other activators of prosurvival pathways improves preservation of the rat heart and shows promise for increasing the cold-ischemic life of donor hearts in transplantation.
Answer: Yes, atrial preservation does improve cardiac output after orthotopic heart transplantation. A study comparing the bicaval Wythenshawe technique, which maintains the right and left atrial anatomy and contractility, with the conventional Lower and Shumway technique found that the bicaval technique preserved the atrial kick and its contribution to cardiac output early after transplantation. The difference in cardiac output with atrial pacing versus ventricular pacing was significantly higher in the group that underwent the bicaval technique (group A) compared to the conventional technique (group B). The percentage of atrial contribution to the cardiac output in group A was 30% compared to only 7% in group B. Additionally, the mean stroke volume in group A was higher in sinus rhythm and atrial pacing compared with ventricular pacing, whereas in group B, no statistical difference was found between stroke volumes measured with atrial or ventricular pacing (PUBMED:8803753). |
Instruction: Is there an association between ephedra and heart failure?
Abstracts:
abstract_id: PUBMED:15704057
Is there an association between ephedra and heart failure? a case series. Background: Ephedra is a sympathomimetic commonly used for the purposes of athletic performance enhancement and weight loss. It is known to be associated with gastrointestinal and psychiatric manifestations. We report here on 6 cases of dilated cardiomyopathy associated with ephedra use.
Methods And Results: Over a period of 18 months, 6 patients attending our outpatient department with new onset heart failure were noted to have exposure to ephedra. The case record was reviewed and detailed clinical and echocardiographic data were extracted. All 6 patients (4 males) had left ventricular dysfunction at presentation (mean ejection fraction 20 +/- 5%) and were treated with conventional heart failure pharmacotherapy. All patients discontinued ephedra use as advised. New York Heart Association class improved from class III in 5 patients (class II in 1 patient) to class I, within a median of 6 months (range 3-96). Ejection fraction improved to a mean of 47 +/- 6%.
Conclusions: Ephedra may be associated with left ventricular systolic dysfunction. Withdrawal of this agent, in conjunction with proven pharmacotherapy, results in a significant improvement in functional status and left ventricular ejection fraction. We recommend specific enquiry into the use of over-the-counter supplements, particularly ephedra and its derivatives, when being evaluated with heart failure symptoms. These cases illustrate the potential risk of ephedra and provide additional support for the recent decision to ban this supplement.
abstract_id: PUBMED:14742827
Ephedra-associated cardiomyopathy. Objective: To report 2 cases of cardiomyopathy associated with use of dietary supplements containing ephedra. Case Summaries: A 19-year-old white man presented to the emergency department (ED) complaining of exertional shortness of breath and episodic chest pain radiating to the left arm. Left heart catheterization revealed no significant coronary artery disease, a dilated left ventricle, and global hypokinesis. He was discharged home 5 days after admission on standard therapies for heart failure, but died 5 weeks later. A 21-year-old white man presented to the ED with recurrent chest pain and was diagnosed with myopericarditis. An echocardiogram showed global hypokinesis with an ejection fraction of 40-50%. He was treated for myopericarditis with standard therapies for heart failure. An objective causality assessment probability scale indicated a possible adverse drug reaction between ephedra use and cardiomyopathy in these 2 patients. Both cases have been reported to MedWatch.
Discussion: Ephedrine is a potent sympathomimetic agent with direct and indirect effects on adrenergic receptors, causing increases in heart rate, blood pressure, cardiac output, and vascular resistance. The adverse effects of adrenergic stimulation in cardiomyopathy are well known and include direct and indirect myocyte toxicity.
Conclusions: It is well documented that ephedra, through its sympathomimetic effects, can cause a range of cardiovascular toxicities including myocarditis, arrhythmias, myocardial infarction, cardiac arrest, and sudden death.
abstract_id: PUBMED:17351165
The effect of ephedra and high fat dieting: a cause for concern! A case report. The increased incidence of obesity in the world has resulted in more and more people attempting to lose weight through a variety of diets. Many of these diets employ caloric reduction through the elimination of certain food groups. These diets may initially be associated with weight loss (including water weight), but follow-up reports show high drop-out rates, proinflammatory changes that can precipitate heart disease, and weight regain after the diets are stopped. Efforts to use prescription anorexic medications have been associated with valvular disease and other health concerns. Dissatisfaction with the medical community and a subsequent increase in the availability of information on the Internet are only two of the reasons why people are looking to alternative medicine to assist with health care issues. This includes the use of herbal supplements for appetite suppression. A review of the literature reveals several problems with some of these supplements, including ephedra: potentially serious adverse effects such as dysrhythmias, heart failure, myocardial infarction, changes in blood pressure, and death have occurred. Unfortunately, one half of all patients experiencing a myocardial infarction have total cholesterol levels below 150 mg/dL and/or no prior cardiac symptoms. This means that, unless markers of inflammation and coronary perfusion are looked for, the inflammatory changes that can precipitate myocardial infarction may go unnoticed by conventional testing until cardiac injury occurs. The following case presentation shows how an individual with exertional dyspnea who was concerned about her weight was affected by both a low-carbohydrate diet and ephedra.
abstract_id: PUBMED:24333010
2,3,5,6-Tetramethylpyrazine of Ephedra sinica regulates melanogenesis and inflammation in a UVA-induced melanoma/keratinocyte co-culture system. Background: 2,3,5,6-Tetramethylpyrazine (TMP) is a known constituent of Ephedra sinica and has been used in the treatment of several disorders such as asthma, heart failure, rhinitis, and urinary incontinence. It has been reported that TMP inhibits melanoma metastasis and suppresses VEGF-mediated angiogenesis.
Objective: To confirm the inhibitory activity of TMP against melanogenic proteins in a UVA-induced melanoma/keratinocyte co-culture system.
Methods: Melanin content, cell viability and the release of cytokines such as TNFα, IL-1β, IL-8 and GM-CSF were measured by ELISA. In addition, the expression of TRP1, MITF and MAPK signaling proteins was evaluated by Western blotting.
Results: TMP decreased both melanogenic factors (TRP1, MITF, and MAPK) and inflammatory factors (TNFα, IL-1β, IL-8, and GM-CSF) implicated in skin cancer and inflammation.
Conclusion: These findings suggest that TMP is a potent candidate for the regulation of melanogenesis.
abstract_id: PUBMED:35052354
Causal Association between Periodontal Diseases and Cardiovascular Diseases. Observational studies have revealed that dental diseases such as periodontitis and dental caries increase the risk of cardiovascular diseases (CVDs). However, the causality between periodontal disease (PD) and CVDs is still not clarified. In the present study, two-sample Mendelian randomization (MR) studies were carried out to assess the association between genetic liability for periodontal diseases (dental caries and periodontitis) and major CVDs, including coronary artery disease (CAD), heart failure (HF), atrial fibrillation (AF), and stroke-including ischemic stroke as well as its three main subtypes-based on large-scale genome-wide association studies (GWASs). Our two-sample MR analyses did not provide evidence for dental caries and periodontitis as the causes of cardiovascular diseases; sensitivity analyses, including MR-Egger analysis and weighted median analysis, also supported this result. Gene functional annotation and pathway enrichment analyses indicated the common pathophysiology between cardiovascular diseases and periodontal diseases. The associations from observational studies may be explained by shared risk factors and comorbidities instead of direct consequences. This also suggests that addressing the common risk factors-such as reducing obesity and improving glucose tolerance-could benefit both conditions.
abstract_id: PUBMED:36523355
J-shaped association between serum albumin levels and long-term mortality of cardiovascular disease: Experience in National Health and Nutrition Examination Survey (2011-2014). Background: Cardiovascular disease (CVD) is a constellation of heart, brain, and peripheral vascular diseases with common soil hypothesis of etiology, and its subtypes have been well-established in terms of the albumin-mortality association. However, the association between albumin and the mortality of CVD as a whole remains poorly understood, especially the non-linear association. We aimed to investigate the association of albumin levels with long-term mortality of CVD as a whole.
Materials And Methods: This study included all CVD patients who participated in the National Health and Nutrition Examination Survey (NHANES 2011-2014). CVD was defined as coronary heart disease, stroke, heart failure, or any combination of two or three of these diseases. Serum albumin was partitioned into tertiles: tertile 1, <4.1; tertile 2, 4.1-4.3; and tertile 3, >4.3 g/dl. A Cox proportional hazards model was used to assess the association between serum albumin levels and CVD mortality. Restricted cubic spline (RCS) curves were used to explore the non-linear relationship.
Results: A total of 1,070 patients with CVD were included in the analysis, of which 156 deaths occurred during a median 34 months of follow-up. On a continuous scale, per 1 g/dl albumin decrease was associated with an adjusted HR (95% CI) of 3.85 (2.38-6.25). On a categorical scale, as compared with tertile 3, the multivariable adjusted hazard ratio (95% CI) was 1.42 (0.74-2.71) for the tertile 2, and 2.24 (1.20-4.16) for the tertile 1, respectively, with respect to mortality. RCS curve analysis revealed a J-shaped association between albumin and CVD mortality.
Conclusion: A J-shaped association between low serum albumin levels and increased long-term mortality of CVD has been revealed. The implications of this J-shaped association for CVD prevention and treatment deserve further study.
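As a concrete illustration of the hazard-ratio arithmetic reported above (an adjusted HR of 3.85 per 1 g/dl albumin decrease), the sketch below fits a Cox proportional hazards model with the lifelines Python library and converts the fitted coefficient into a per-unit-decrease hazard ratio. The data frame, column names, and numbers are invented for illustration; this is not the NHANES analysis itself.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic follow-up data: time in months, death indicator, albumin in g/dl.
df = pd.DataFrame({
    "months":  [34, 12, 40, 8, 29, 36, 22, 18],
    "died":    [0, 1, 0, 1, 1, 0, 0, 1],
    "albumin": [4.4, 3.8, 4.2, 3.6, 4.1, 3.9, 4.3, 4.0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")

beta = cph.params_["albumin"]          # log-hazard change per 1 g/dl *increase*
hr_per_unit_decrease = np.exp(-beta)   # flip the sign for a 1 g/dl decrease
print(hr_per_unit_decrease)
```

A real analysis would add the adjustment covariates and, as in the abstract, restricted cubic splines to relax the linearity assumption.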
abstract_id: PUBMED:22015653
Association of angiotensin-converting enzyme I/D polymorphism with heart failure: a meta-analysis. Heart failure (HF) is a complex clinical syndrome and is thought to have a genetic basis. Numerous case-control studies have investigated the association between heart failure and polymorphisms in candidate genes. Most studies focused on the angiotensin-converting enzyme insertion/deletion (ACE I/D) polymorphism; however, the results were inconsistent because of small studies and heterogeneous samples. The objective was to assess the association between the ACE I/D polymorphism and HF. We performed a meta-analysis of all case-control studies that evaluated the association between the ACE I/D polymorphism and HF in humans. Studies were identified in the PUBMED and EMBASE databases, reviews, and reference lists of relevant articles. Two reviewers independently assessed the studies. Seventeen case-control studies with a total of 5576 participants were included in the meta-analysis, including 2453 cases with HF and 3123 controls. The heterogeneity between studies was significant. No association was found under any of the four genetic models (D vs. I, DD vs. ID and II, DD and ID vs. II, DD vs. ID). Subgroup analyses for ischemic HF (IHF) and HF because of dilated cardiomyopathy (DHF) also showed no significant association between the ACE I/D polymorphism and HF. No significant association between the ACE I/D polymorphism and risk of HF was found in this meta-analysis. Future studies should focus on large-scale prospective and case-control studies designed to investigate gene-gene and gene-environment interactions to shed light on the genetics of HF.
abstract_id: PUBMED:37920978
Evolution of Value in American College of Cardiology/American Heart Association Clinical Practice Guidelines. Background: In January 2014, the American College of Cardiology/American Heart Association released a policy statement arguing for the inclusion of cost-effectiveness analysis (CEA) and value assessments in clinical practice guidelines. It is unclear whether subsequent guidelines changed how they incorporated such concepts.
Methods: We analyzed guidelines of cardiovascular disease subconditions with a guideline released before and after 2014. We counted the words (total and per page) for 8 selected value- or CEA-related terms and compared counts and rates of terms per page in the guidelines before and after 2014. We counted the number of recommendations with at least 1 reference to a CEA or a CEA-related article to compare the ratios of such recommendations to all recommendations before and after 2014. We looked for the inclusion of the value assessment system recommended by the writing committee of the American College of Cardiology/American Heart Association policy statement of 2014.
Results: We analyzed 20 guidelines of 10 different cardiovascular disease subconditions. Seven of the 10 cardiovascular disease subconditions had guidelines with a greater term per page rate after 2014 than before 2014. Across all 20 guidelines, the proportion of recommendations with at least 1 reference to a CEA changed from 0.44% to 1.99% (P<0.01). The proportion of recommendations with at least 1 reference to a CEA-related article changed from 1.02% to 3.34% (P<0.01). Only 3 guidelines used a value assessment system.
Conclusions: The proportion of recommendations with at least 1 reference to a CEA or CEA-related article was low before and after 2014 for most of the subconditions, however, with substantial variation in this finding across the guidelines included in our analysis. There is a need to organize existing CEA information better and produce more policy-relevant CEAs so guideline writers can more easily make recommendations that incentivize high-value care and caution against using low-value care.
abstract_id: PUBMED:22125304
Genomewide association studies in cardiovascular disease--an update 2011. Background: Genomewide association studies have led to an enormous boost in the identification of susceptibility genes for cardiovascular diseases. This review aims to summarize the most important findings of recent years.
Content: We have carefully reviewed the current literature (PubMed search terms: "genome wide association studies," "genetic polymorphism," "genetic risk factors," "association study" in connection with the respective diseases, "risk score," "transcriptome").
Summary: Multiple novel genetic loci for such important cardiovascular diseases as myocardial infarction, hypertension, heart failure, stroke, and hyperlipidemia have been identified. Given that many novel genetic risk factors lie within hitherto-unsuspected genes or influence gene expression, these findings have inspired discoveries of biological function. Despite these successes, however, only a fraction of the heritability for most cardiovascular diseases has been explained thus far. Forthcoming techniques such as whole-genome sequencing will be important to close the gap of missing heritability.
abstract_id: PUBMED:36800181
Association of Hypertensive Disorders of Pregnancy With Future Cardiovascular Disease. Importance: Hypertensive disorders in pregnancy (HDPs) are major causes of maternal and fetal morbidity and are observationally associated with future maternal risk of cardiovascular disease. However, observational results may be subject to residual confounding and bias.
Objective: To investigate the association of HDPs with multiple cardiovascular diseases.
Design, Setting, And Participants: A genome-wide genetic association study using mendelian randomization (MR) was performed from February 16 to March 4, 2022. Primary analysis was conducted using inverse-variance-weighted MR. Mediation analyses were performed using a multivariable MR framework. All studies included patients predominantly of European ancestry. Female-specific summary-level data from FinnGen (sixth release).
Exposures: Uncorrelated (r2<0.001) single-nucleotide variants (SNVs) were selected as instrumental variants from the FinnGen consortium summary statistics for exposures of any HDP, gestational hypertension, and preeclampsia or eclampsia.
Main Outcomes And Measures: Genetic association estimates for outcomes were extracted from genome-wide association studies of 122 733 cases for coronary artery disease, 34 217 cases for ischemic stroke, 47 309 cases for heart failure, and 60 620 cases for atrial fibrillation.
Results: Genetically predicted HDPs were associated with a higher risk of coronary artery disease (odds ratio [OR], 1.24; 95% CI, 1.08-1.43; P = .002); this association was evident for both gestational hypertension (OR, 1.08; 95% CI, 1.00-1.17; P = .04) and preeclampsia/eclampsia (OR, 1.06; 95% CI, 1.01-1.12; P = .03). Genetically predicted HDPs were also associated with a higher risk of ischemic stroke (OR, 1.27; 95% CI, 1.12-1.44; P = 2.87 × 10-4). Mediation analysis revealed a partial attenuation of the effect of HDPs on coronary artery disease after adjustment for systolic blood pressure (total effect OR, 1.24; direct effect OR, 1.10; 95% CI, 1.02-1.08; P = .02) and type 2 diabetes (total effect OR, 1.24; direct effect OR, 1.16; 95% CI, 1.04-1.29; P = .008). No associations were noted between genetically predicted HDPs and heart failure (OR, 0.97; 95% CI, 0.76-1.23; P = .79) or atrial fibrillation (OR, 1.11; 95% CI, 0.65-1.88; P = .71).
Conclusions And Relevance: The findings of this study provide genetic evidence supporting an association between HDPs and higher risk of coronary artery disease and stroke, which is only partially mediated by cardiometabolic factors. This supports classification of HDPs as risk factors for cardiovascular disease.
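The inverse-variance-weighted (IVW) estimator named in the design section has a simple closed form: each variant's ratio estimate (beta_outcome / beta_exposure) is averaged with weights proportional to the precision of its outcome association. The sketch below shows that arithmetic on invented per-SNP summary statistics; it is the standard IVW formula, not a reproduction of the FinnGen analysis.

```python
import numpy as np

# Invented per-SNP summary statistics, for illustration only.
beta_exposure = np.array([0.12, 0.08, 0.15, 0.10])     # SNP -> exposure (e.g., HDP liability)
beta_outcome  = np.array([0.030, 0.014, 0.041, 0.022]) # SNP -> outcome (e.g., CAD)
se_outcome    = np.array([0.010, 0.009, 0.012, 0.011]) # SE of the outcome associations

# Weighted average of the per-SNP ratio estimates (standard IVW estimator).
weights  = beta_exposure**2 / se_outcome**2
beta_ivw = np.sum(weights * beta_outcome / beta_exposure) / np.sum(weights)
se_ivw   = np.sqrt(1.0 / np.sum(weights))

print("OR:", np.exp(beta_ivw))
print("95% CI:", np.exp(beta_ivw - 1.96 * se_ivw), np.exp(beta_ivw + 1.96 * se_ivw))
```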
Answer: Yes, there is an association between ephedra and heart failure. Several case reports and studies have indicated a potential link between the use of ephedra, a sympathomimetic commonly used for athletic performance enhancement and weight loss, and the development of heart failure. For instance, a case series reported six patients who developed new-onset heart failure and were found to have been exposed to ephedra. Upon discontinuation of ephedra and treatment with conventional heart failure pharmacotherapy, these patients showed significant improvement in functional status and left ventricular ejection fraction (PUBMED:15704057). Additionally, two cases of cardiomyopathy associated with ephedra use were reported, with one of the patients dying five weeks after discharge (PUBMED:14742827). Ephedra's sympathomimetic effects can cause a range of cardiovascular toxicities, including myocarditis, arrhythmias, myocardial infarction, cardiac arrest, and sudden death, due to its direct and indirect effects on adrenergic receptors (PUBMED:14742827). Furthermore, a case report highlighted the potential serious adverse effects of ephedra, such as dysrhythmias, heart failure, myocardial infarction, changes in blood pressure, and death (PUBMED:17351165). These findings support the notion that ephedra may be associated with left ventricular systolic dysfunction and other serious cardiovascular events, including heart failure. |
Instruction: Intercondylar Roof Inclination Angle: Is It a Risk Factor for ACL Tears or Tibial Spine Fractures?
Abstracts:
abstract_id: PUBMED:26327400
Intercondylar Roof Inclination Angle: Is It a Risk Factor for ACL Tears or Tibial Spine Fractures? Background: The relationship between the angle of inclination of the intercondylar roof [roof inclination angle (RIA)] and likelihood of knee injury has not been previously investigated in children.
Methods: Twenty-five skeletally immature patients with a tibial spine fracture were age matched (±1 y) and sex matched with 25 patients with an anterior cruciate ligament (ACL) tear and with 50 control knees (2 for each patient). Demographic and diagnostic information was collected, and radiographic measurements were performed on notch and lateral radiographs of the knee.
Results: Patients with a tibial spine fracture had an increased RIA compared with controls and patients with an ACL tear. Patients with ACL tears had a steeper notch roof, as indicated by a decreased RIA when compared with controls and patients with tibial spine fractures.
Conclusions: Our results demonstrated that a decreased RIA was associated with ACL tear and that an increased RIA was associated with tibial spine fracture.
Level Of Evidence: Level III-prognostic.
abstract_id: PUBMED:23221829
Relationship of the intercondylar roof and the tibial footprint of the ACL: implications for ACL reconstruction. Background: Debate exists on the proper relation of the anterior cruciate ligament (ACL) footprint with the intercondylar notch in anatomic ACL reconstructions. Patient-specific graft placement based on the inclination of the intercondylar roof has been proposed. The relationship between the intercondylar roof and native ACL footprint on the tibia has not previously been quantified.
Hypothesis: No statistical relationship exists between the intercondylar roof angle and the location of the native footprint of the ACL on the tibia.
Study Design: Case series; Level of evidence, 4.
Methods: Knees from 138 patients with both lateral radiographs and MRI, without a history of ligamentous injury or fracture, were reviewed to measure the intercondylar roof angle of the femur. Roof angles were measured on lateral radiographs. The MRI data of the same knees were analyzed to measure the position of the central tibial footprint of the ACL (cACL). The roof angle and tibial footprint were evaluated to determine if statistical relationships existed.
Results: Patients had a mean ± SD age of 40 ± 16 years. Average roof angle was 34.7° ± 5.2° (range, 23°-48°; 95% CI, 33.9°-35.5°), and it differed by sex but not by side (right/left). The cACL was 44.1% ± 3.4% (range, 36.1%-51.9%; 95% CI, 43.2%-45.0%) of the anteroposterior length of the tibia. There was only a weak correlation between the intercondylar roof angle and the cACL (R = 0.106). No significant differences arose between subpopulations of sex or side.
Conclusion: The tibial footprint of the ACL is located in a position on the tibia that is consistent and does not vary according to intercondylar roof angle. The cACL is consistently located between 43.2% and 45.0% of the anteroposterior length of the tibia. Intercondylar roof-based guidance may not predictably place a tibial tunnel in the native ACL footprint. Use of a generic ACL footprint to place a tibial tunnel during ACL reconstruction may be reliable in up to 95% of patients.
abstract_id: PUBMED:36961530
Meniscal injuries in skeletally immature children with tibial eminence fractures. Systematic review of literature. Purpose: Although the mechanisms of injury are similar to those of ACL rupture in adults, publications dealing with meniscal lesions resulting from fractures of the intercondylar eminence in children are much rarer. The main objective was to measure the frequency of meniscal lesions associated with tibial eminence fractures in children. The second question was whether the diagnostic method used for meniscal tears is associated with the reported frequencies of total lesions, total meniscal lesions, and total entrapments.
Methods: A comprehensive literature search was performed using PubMed and Scopus. Articles were eligible for inclusion if they reported data on intercondylar tibial fracture, or tibial spine fracture, or tibial eminence fracture, or intercondylar eminence fracture. Article selection was performed in accordance with the PRISMA guidelines.
Results: In total, 789 studies were identified by the literature search. At the end of the process, 26 studies were included in the final review. This systematic review identified an 18.1% rate of meniscal tears and a 20.1% rate of meniscal or IML entrapments during intercondylar eminence fractures. The proportion of total entrapments was significantly different between groups (17.8% in the arthroscopy group vs. 6.2% in the MRI group; p < .0001). Also, we found 20.9% total associated lesions in the arthroscopy group vs. 26.1% in the MRI group (p = .06).
Conclusion: Although the incidence of meniscal injuries in children's tibial eminence fractures is lower than that in adult ACL ruptures, pediatric meniscal tears and entrapments need to be searched for systematically. MRI does not appear to provide additional information about the entrapment risk if arthroscopic treatment is performed. However, pretreatment MRI provides important information about concomitant injuries, such as meniscal tears, and should be mandatory if conservative (orthopaedic) treatment is chosen. MRI modalities have yet to be specified to improve the diagnosis of soft-tissue entrapments.
Study Design: Systematic review of the literature REGISTRATION: PROSPERO N° CRD42021258384.
abstract_id: PUBMED:33438924
What's New in the Management of Pediatric Anterior Cruciate Ligament Tears and Tibial Spine Fractures. As the number of pediatric and adolescent patients participating in sports continues to increase, so too does the incidence of anterior cruciate ligament (ACL) tears in this population. There is increasing research on pediatric and adolescent ACL tears; hundreds of articles on the topic have been published in the past few years alone. It is important to highlight the most pertinent information in the past decade. In discussing pediatric ACL tears, it is also important to review tibial spine fractures. These injuries are rightfully grouped together because tibial spine fractures often occur with a mechanism of injury similar to that of ACL tears, but typically in a younger age group. Because management is different, understanding the similarities and differences between the two pathologies is important. Recent updates on the epidemiology, diagnosis, management, and outcomes of both pediatric ACL tears and tibial spine fractures need to be reviewed.
abstract_id: PUBMED:35752828
The most economical arthroscopic suture fixation for tibial intercondylar eminence avulsion fracture without any implant. Background: Avulsion fracture of the tibial intercondylar eminence is a rare injury that mainly occurs in adolescents aged 8-14 years and in those with immature bones. The surgical techniques in common use may cause severe surgical trauma, affect knee joint function and be accompanied by serious complications. In this study, we describe an all-inside, all-epiphyseal arthroscopic suture fixation technique used to treat a tibial intercondylar eminence fracture in one patient.
Methods: ETHIBOND EXCEL coated braided polyester sutures were used for fixation. Three ETHIBOND sutures were passed through the ACL at the 2, 6 and 10 o'clock positions of the ACL footprint, and a cinch-knot loop was made with each. Under the guidance of an ACL tibial locator, three corresponding tibial tunnels were drilled with K-wires at the 2, 6 and 10 o'clock positions of the fracture bed, and the two ends of each suture were pulled out through the tunnels with double-folded steel wires. After reduction of the tibial eminence, the three sutures were tightened and tied over the medial aspect of the tibial tubercle.
Results: After surgical treatment with this method and a standard postoperative protocol, our patient's range of motion (ROM), stability, and functional scores all improved significantly.
Conclusion: This three-point suture technique provides suitable reduction and stable fixation and is appropriate for patients with all types of avulsion fractures of the tibial intercondylar eminence.
abstract_id: PUBMED:31803786
Suture Versus Screw Fixation of Tibial Spine Fractures in Children and Adolescents: A Comparative Study. Background: Tibial spine fractures involve an avulsion injury of the anterior cruciate ligament (ACL) at the intercondylar eminence, typically in children and adolescents. Displaced fractures are commonly treated with either suture or screw fixation.
Purpose: To investigate differences in various outcomes between patients treated with arthroscopic suture versus screw fixation for tibial spine avulsion fractures in one of the largest patient cohorts in the literature.
Study Design: Cohort study; Level of evidence, 3.
Methods: A search of medical records was performed with the goal of identifying all type 2 and type 3 tibial spine avulsion fractures surgically treated between 2000 and 2014 at a pediatric hospital. All patients had a minimum of 12 months clinical follow-up, suture or screw fixation only, and no major concomitant injury.
Results: There were 68 knees in 67 patients meeting criteria for analysis. There were no differences with regard to postsurgical arthrofibrosis (P = .59), ACL reconstruction (P = .44), meniscal procedures (P = .85), instability (P = .49), range of motion (P = .51), return to sport (P >.999), or time to return to sport (P = .11). Elevation of the repaired fragment on postoperative imaging was significantly greater in the suture group (5.4 vs 3.5 mm; P = .005). Postoperative fragment elevation did not influence surgical outcomes. The screw fixation group had more reoperations (13 vs 23; P = .03), a larger number of reoperations for implant removal (3 vs 22; P < .001), and nearly 3 times the odds of undergoing reoperation compared with suture patients (odds ratio, 2.9; P = .03).
Conclusion: Clinical outcomes between suture and screw fixation were largely equivalent in our patients. Postoperative fragment elevation does not influence surgical outcomes. Consideration should be given for the greater likelihood of needing a second operation, planned or unplanned, after screw fixation.
abstract_id: PUBMED:34800136
Incidence of anterior tibial spine fracture among skiers does not differ with age. Purpose: Injury to the anterior cruciate ligament (ACL) is common in alpine skiing in the form of either an intra-substance ACL tear or anterior tibial spine fracture (ATSF). Anterior tibial spine fractures are typically reported in children. However, several case reports describe these injuries in adults while skiing. The purpose of this study is to describe the sport specific incidence of ATSF in alpine skiing.
Methods: The study was conducted over a 22-year period. Skiers who suffered an ATSF were identified and radiographs were reviewed to confirm the diagnosis. Additionally, control data from intra-substance ACL injury groups were collected. The incidence of these injuries in children, adolescents, and adults (grouped as ages 0-10, 11-16, and 17+ years old, respectively) was evaluated and the risk factors for ATSF versus ACL tear were determined.
Results: There were 1688 intra-substance ACL and 51 ATSF injuries. The incidence of intra-substance ACL injury was greater in adults (40.0 per 100,000 skier days) compared to the adolescent (15.4 per 100,000) and child (1.1 per 100,000) age groups. In contrast, the incidence of ATSF was similar in the adult (0.9 per 100,000), adolescent (1.9 per 100,000), and child (1.9 per 100,000) age groups. Loose ski boot fit was identified as a risk factor for ATSF.
Conclusion: The incidence of ATSF in alpine skiers is similar among all age groups. However, the incidence of intra-substance ACL injuries is far greater in adult skiers compared to adolescents and children. Risk factors for ATSF relate to compliance between the foot/ankle and the ski boot.
Level Of Evidence: III.
abstract_id: PUBMED:30547044
Do Tibial Eminence Fractures and Anterior Cruciate Ligament Tears Have Similar Outcomes? Background: Avulsion fractures involving the tibial eminence are considered equivalent in etiology to anterior cruciate ligament (ACL) tears; however, there are limited data comparing the outcomes of adolescent patients undergoing surgical fixation of a tibial eminence fracture (TEF) with those undergoing ACL reconstruction.
Purpose: To compare the clinical outcomes, subsequent ACL injury rates, and activity levels between adolescent patients who underwent TEF fixation with patients with midsubstance ACL tears who required acute reconstruction.
Study Design: Cohort study; Level of evidence, 3.
Methods: This study included a group of patients with TEFs treated with surgical fixation matched to a group of similar patients with ACL tears treated with reconstruction between the years 2001 and 2015. Data regarding the initial injury, surgical intervention, ACL/ACL graft injury rates, and physical examination findings were recorded. Clinical and functional outcomes were obtained using a physical examination, the International Knee Documentation Committee (IKDC) subjective score, the Lysholm score, and the Tegner activity score.
Results: Sixty patients with a mean follow-up of 57.7 months (range, 24-206 months) were included; 20 patients (11 male, 9 female; mean age, 11.9 years [range, 7-15 years]) who underwent surgical fixation for a TEF were matched to a group of 40 patients (23 male, 17 female; mean age, 12.5 years [range, 8-15 years]) who underwent reconstruction for ACL tears. The TEF group demonstrated significantly lower postoperative IKDC scores (TEF group, 94.0; ACL group, 97.2; P = .04) and Lysholm scores (TEF group, 92.4; ACL group, 96.9; P = .02). The TEF group returned to sport 119 days sooner (P < .01), but there was no difference in postoperative Tegner scores (TEF group, 7.3; ACL group, 7.6; P = .16). The TEF group demonstrated increased postoperative anterior laxity (P = .02) and a higher rate of postoperative arthrofibrosis (P = .04). There was no difference in subsequent ACL injuries (P = .41).
Conclusion: Both groups demonstrated quality outcomes at a minimum 2-year follow-up. Patients with TEFs demonstrated lower mean clinical outcome scores compared with patients with ACL tears, but the differences were less than reported minimal clinically important difference values. Additionally, the TEF group experienced more postoperative anterior laxity and had a higher rate of postoperative arthrofibrosis. There was no difference in the rate of subsequent ACL injuries. The TEF group returned to sport sooner than the ACL group, but the postoperative activity levels were similar.
abstract_id: PUBMED:27159316
Delayed Anterior Cruciate Ligament Reconstruction in Young Patients With Previous Anterior Tibial Spine Fractures. Background: Avulsion fractures of the anterior tibial spine in young athletes are injuries similar to anterior cruciate ligament (ACL) injuries in adults. Sparse data exist on the association between anterior tibial spine fractures (ATSFs) and later ligamentous laxity or injuries leading to ACL reconstruction.
Purpose: To better delineate the incidence of delayed instability or ACL ruptures requiring delayed ACL reconstruction in young patients with prior fractures of the tibial eminence.
Study Design: Case series; Level of evidence, 4.
Methods: We identified 101 patients between January 1993 and January 2012 who sustained an ATSF and who met inclusion criteria for this study. All patients had been followed for at least 2 years after the initial injury and were included for analysis after completion of a questionnaire via direct contact, mail, and/or telephone. If patients underwent further surgical intervention and/or underwent later ACL reconstruction, clinical records and operative reports pertaining to these secondary interventions were obtained and reviewed. Differences between categorical variables were assessed using the Fisher exact test. The association between time to revision ACL surgery and fracture type was assessed by Kaplan-Meier plots. The association between need for revision ACL surgery and age, sex, and mechanism of injury was assessed using logistic regression.
Results: Nineteen percent of all patients evaluated underwent delayed ACL reconstruction after a previous tibial spine fracture on the ipsilateral side. While there were a higher proportion of ACL reconstructions in type II fractures, there was not a statistically significant difference in the number of patients within each fracture group who went on to undergo later surgery (P = .29). Further, there was not a significant association between fracture type, sex, or mechanism of injury as it related to the progression to later ACL reconstruction. However, there was a significant association between age at the time of injury and progression to later ACL reconstruction (P = .02). For every year increase in age at the time of injury, the odds of going on to undergo delayed ACL reconstruction were greater by a factor of 1.3 (95% CI, 1.1-1.6).
Conclusion: Although an ATSF is a relatively rare injury, our cohort of patients suggests that a subset of young patients with all types of tibial spine fractures will require later ACL reconstruction. There is a need to counsel patients that a delayed ACL rupture is a potential risk after an ATSF, especially as children approach skeletal maturity. Further patient follow-up and prospective studies are required.
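The abstract's finding that each additional year of age multiplied the odds of later ACL reconstruction by 1.3 is the exponentiated slope of a logistic regression. The sketch below, on simulated data, shows where such a per-year odds ratio comes from; statsmodels is assumed, and nothing here reproduces the study's actual data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(7, 16, size=n)       # age at time of tibial spine fracture
true_logit = -6.0 + 0.26 * age         # exp(0.26) ~= 1.3 per year, by construction
later_acl = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X = sm.add_constant(age)
fit = sm.Logit(later_acl, X).fit(disp=0)

print("OR per year of age:", np.exp(fit.params[1]))
lo, hi = fit.conf_int()[1]             # 95% CI on the log-odds scale
print("95% CI:", np.exp(lo), np.exp(hi))
```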
abstract_id: PUBMED:31480110
Treatment and evolution of chronic avulsion of the anterior tibial spine in the pediatric age. Tibial spine avulsion fractures (also called tibial eminence fractures) are bony avulsions of the anterior cruciate ligament (ACL) from its insertion in the intercondylar eminence. They are most commonly seen in children aged eight to 12 years, because the ACL is more resistant than the bone and the physis. Furthermore, the union between the epiphysis and the ACL is very strong due to its collagen fibers. The Meyers and McKeever classification defines three types: type I, non-displaced; type II, partially displaced; and type III, completely displaced. This classification is very important for treatment. Arthroscopic treatment is the gold standard for displaced fractures. Association with other intraarticular injuries is frequent, and the anterior horn of the medial meniscus or the transverse meniscal ligament is often trapped within the fracture site. A miniarthrotomy may still be necessary in fractures that are irreducible by arthroscopic means. Chronic fractures in children are difficult to find in the literature. Their treatment is similar to that of acute cases, but includes debridement of the scar tissue and repair of intraarticular injuries. Loss of extension is the main problem found in these patients, although acceptable results may be achieved. In this article we present a seven-year-old boy with a chronic tibial spine avulsion, the treatment and the outcome.
Answer: The Intercondylar Roof Inclination Angle (RIA) has been investigated as a potential risk factor for knee injuries, specifically Anterior Cruciate Ligament (ACL) tears and tibial spine fractures. Research has demonstrated that a decreased RIA is associated with ACL tears, while an increased RIA is associated with tibial spine fractures in skeletally immature patients. This suggests that the angle of inclination of the intercondylar roof may indeed be a risk factor for these types of injuries (PUBMED:26327400).
However, it is important to note that the relationship between the intercondylar roof and the native ACL footprint on the tibia does not show a strong correlation. This implies that while the RIA may be a risk factor, it does not necessarily predict the location of the ACL footprint on the tibia, which is consistent across individuals regardless of the RIA. Therefore, using the intercondylar roof angle as a guide for graft placement in ACL reconstruction may not reliably place a tibial tunnel in the native ACL footprint (PUBMED:23221829).
In summary, the inclination angle of the intercondylar roof appears to be a risk factor for ACL tears and tibial spine fractures, with different angles being associated with each type of injury in children. However, the RIA does not have a strong relationship with the position of the ACL footprint on the tibia, which is an important consideration in ACL reconstruction surgery. |
Instruction: Do patients want to disclose psychological problems to GPs?
Abstracts:
abstract_id: PUBMED:16055471
Do patients want to disclose psychological problems to GPs? Background: GPs are an accessible health care provider for most patients with mental disorders and are gatekeepers to specialist care. The extent to which patients consider their primary care team as relevant to their mental health problems needs to be explored.
Objectives: To explore reasons why patients choose not to disclose psychological problems to GPs, and to discuss the implications for the provision of primary mental health care.
Methods: A cross-sectional survey of consecutive patients attending general practices in New Zealand (part of the MaGPIe study). Patients were screened using the GHQ-12 and a stratified sample participated in a structured in-depth interview to assess their psychological health. Non-disclosure of psychological problems was explored. GPs assessed patients' psychological health using a 5-point scale of severity.
Results: Seventy GPs (90%) and 775 patients (70%) participated. Overall, 29.8% of all patients and 36.9% of patients with current symptoms reported non-disclosure of self-perceived psychological problems. Younger patients, those consulting more frequently and those with greater psychiatric disability were more likely to report non-disclosure. The most frequently given reasons were beliefs that a GP is not the 'right' person to talk to (33.8%) or that mental health problems should not be discussed at all (27.6%).
Conclusions: Interventions such as screening and GP education may be ineffective in improving primary mental health care unless accompanied by educational programmes for the general public to increase mental health literacy, de-stigmatise mental illness and increase awareness of general practice as an appropriate and effective source of health care.
abstract_id: PUBMED:17382508
The workload of general practitioners does not affect their awareness of patients' psychological problems. Objective: To investigate whether general practitioners (GPs) with a higher workload are less inclined to encourage their patients to disclose psychological problems, and are less aware of their patients' psychological problems.
Methods: Data from 2095 videotaped consultations from a representative selection of 142 Dutch GPs were used. Multilevel regression analyses were performed with the GPs' awareness of the patient's psychological problems and their communication as outcome measures, the GPs' workload as a predictor, and GP and patient characteristics as confounders.
Results: GPs' workload is not related to their awareness of psychological problems and hardly related to their communication, except for the finding that a GP with a subjective experience of a lack of time is less patient-centred. Showing eye contact or empathy and asking questions about psychological or social topics are associated with more awareness of patients' psychological problems.
Conclusion: Patients' feelings of distress are more important for GPs' communication and their awareness of patients' psychological problems than a long patient list or busy moment of the day. GPs who encourage the patient to disclose their psychological problems are more aware of psychological problems.
Practice Implications: We recommend that attention is given to all the communication skills required to discuss psychological problems, both in the consulting room and in GPs' training. Additionally, attention for gender differences and stress management is recommended in GPs' training.
abstract_id: PUBMED:16105369
The workload of GPs: consultations of patients with psychological and somatic problems compared. Background: GPs report that patients' psychosocial problems play a part in 20% of all consultations. GPs state that these consultations are more time-consuming and the perceived burden on the GP is higher.
Aim: To investigate whether GPs' workload in consultations is related to psychological or social problems of patients.
Design Of Study: A cross-sectional national survey in general practice, conducted in the Netherlands from 2000-2002.
Setting: One hundred and four general practices in the Netherlands.
Method: Videotaped consultations (n = 1392) of a representative sample of 142 GPs were used. Consultations were categorised in three groups: consultations with a diagnosis in the International Classification of Primary Care chapter P 'psychological' or Z 'social' (n = 138), a somatic diagnosis but with a psychological background according to the GP (n = 309), or a somatic diagnosis and background (n = 945). Workload measures were consultation length, number of diagnoses and GPs' assessment of sufficiency of patient time.
Results: Consultations in which patients' mental health problems play a part (as a diagnosis or in the background) take more time and involve more diagnoses, and the GP is more heavily burdened with feelings of insufficiency of patient time. In consultations with a somatic diagnosis but psychological background, GPs more often experienced a lack of time compared to consultations with a psychological or social diagnosis.
Conclusion: Consultations in which the GP notices psychosocial problems make heavier demands on the GP's workload than other consultations. Patients' somatic problems that have a psychological background induce the highest perceived burden on the GP.
abstract_id: PUBMED:23281962
Psychological and social problems in primary care patients - general practitioners' assessment and classification. Objective: To estimate the frequency of psychological and social classification codes employed by general practitioners (GPs) and to explore the extent to which GPs ascribed health problems to biomedical, psychological, or social factors.
Design: A cross-sectional survey based on questionnaire data from GPs. Setting: Danish primary care.
Subjects: 387 GPs and their face-to-face contacts with 5543 patients.
Main Outcome Measures: GPs registered consecutive patients on registration forms including reason for encounter, diagnostic classification of main problem, and a GP assessment of biomedical, psychological, and social factors' influence on the contact.
Results: The GP-stated reasons for encounter largely overlapped with their classification of the managed problem. Using the International Classification of Primary Care (ICPC-2-R), GPs classified 600 (11%) patients with psychological problems and 30 (0.5%) with social problems. Both codes for problems/complaints and specific disorders were used as the GP's diagnostic classification of the main problem. Two problems (depression and acute stress reaction/adjustment disorder) accounted for 51% of all psychological classifications made. GPs generally emphasized biomedical aspects of the contacts. Psychological aspects were given greater importance in follow-up consultations than in first-episode consultations, whereas social factors were rarely seen as essential to the consultation.
Conclusion: Psychological problems are frequently seen and managed in primary care and most are classified within a few diagnostic categories. Social matters are rarely considered or classified.
abstract_id: PUBMED:15778235
The workload of GPs: patients with psychological and somatic problems compared. Background: GPs state that patients with mental problems make heavy demands on their available time. To what extent these perceived problems correspond with reality needs more investigation.
Objectives: To investigate the effect of patients with psychological or social diagnoses on GPs' workload, expressed in time investments.
Methods: Data were derived of a cross-sectional National Survey in General Practice, conducted in The Netherlands in 2000-2002. For a year, all patient contacts with a representative sample of 104 general practices were registered. Patients diagnosed with one or more diagnoses in ICPC (International Classification of Primary Care) chapter 'Psychological' or 'Social' (n = 37,189) were compared to patients with only somatic diagnoses (n = 189,731). A subdivision was made in diagnoses depression, anxiety, sleeping disorders, stress problems, problems related to work or partner and 'other psychological or social problems'. Workload measures are the consultation frequency, number of diagnoses and episodes of illness of the patients involved.
Results: Patients in all categories of psychological or social problems had almost twice as many contacts with their general practice as patients with only somatic problems. They received more diagnoses and presented more episodes of illness. Patients with psychological or social diagnoses also contacted their general practice about their somatic problems more frequently, compared to patients with only somatic problems.
Conclusion: Patients with psychological or social problems make heavy demands on the GP's workload, for the greater part due to the increase in somatic problems presented.
abstract_id: PUBMED:28914563
Patients with psychological ICPC codes in primary care; a case-control study investigating the decade before presenting with problems. Background: Recognizing patients with psychological problems can be difficult for general practitioners (GPs). Use of information collected in electronic medical records (EMR) could facilitate recognition.
Objectives: To assess relevant EMR parameters in the decade before patients present with psychological problems.
Methods: Exploratory case-control study assessing EMR parameters of 58 228 patients recorded between 2013 and 2015 by 54 GPs. We compared EMR parameters recorded before 2014 of patients who presented with psychological problems in 2014 with those who did not.
Results: In 2014, 2406 patients presented with psychological problems. Logistic regression analyses indicated that having registrations of the following statistically significant parameters increased the chances of presenting with psychological problems in 2014: prior administration of a depression severity questionnaire (odds ratio (OR): 3.3); fatigue/sleeping (OR: 1.6), neurological (OR: 1.5), rheumatic (OR: 1.5) and substance abuse problems (OR: 1.5); prescriptions of opioids (OR: 1.3), antimigraine preparations (OR: 1.5), antipsychotics (OR: 1.7), anxiolytics (OR: 1.4), hypnotics and sedatives (OR: 1.4), antidepressants (OR: 1.7), and antidementia drugs (OR: 2.1); treatment with minimal interventions (OR: 2.2) and physical exercise (OR: 3.3), referrals to psychology (OR: 1.5), psychiatry (OR: 1.6), and psychosocial care (OR: 2.1); double consultations (OR: 1.2), telephone consultations (OR: 1.1), and home visits (OR: 1.1).
Conclusion: This study demonstrates that possible indications of psychological problems can be identified in EMR. Many EMR parameters of patients presenting with psychological problems were different compared with patients who did not.
abstract_id: PUBMED:15296555
What constructs do GPs use when diagnosing psychological problems? Background: The mismatch between general practice and psychiatric diagnosis of psychological problems has been frequently reported.
Aims: To identify which items from the 28-item general health questionnaire (GHQ-28) best predicted general practitioners' (GPs') own assessments of morbidity and the proportion of time spent in consultations on psychological problems.
Design Of Study: Cross-sectional survey.
Setting: General practice in southeast London.
Method: Eight hundred and five consultations were carried out by 47 GPs, during which patients completed the 28-item GHQ, and doctors independently assessed the degree of psychological disturbance and the proportion of the consultation spent on psychological problems. Data from the consultations were entered into a stepwise multiple regression to determine the best GHQ-item predictors of GP judgements.
Results: GPs' assessments of the degree of psychological disturbance were best predicted using only seven GHQ items, and their perceptions of the proportion of time spent on psychological problems were predicted by only four items. Items were drawn predominantly from the 'anxiety and insomnia' and 'severe depression' sub-scales, ignoring the 'somatic' and 'social dysfunction' dimensions.
Conclusion: In diagnosing psychological disturbance GPs ignore major symptom areas that psychiatrists judge important.
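To make the abstract's "stepwise multiple regression" concrete, here is a minimal forward-selection sketch on synthetic GHQ-style item scores, using AIC as the entry criterion. The item names, the generating model, and the criterion are all assumptions for illustration, not the authors' procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_items, n_obs = 28, 805
items = pd.DataFrame(rng.integers(0, 4, size=(n_obs, n_items)),
                     columns=[f"ghq{i+1}" for i in range(n_items)])
# Hypothetical GP rating driven by a handful of items plus noise.
rating = items[["ghq2", "ghq5", "ghq9"]].sum(axis=1) + rng.normal(0, 1.5, n_obs)

selected, remaining = [], list(items.columns)
current_aic = np.inf
while remaining:
    # AIC of the model obtained by adding each candidate item in turn.
    aics = {c: sm.OLS(rating, sm.add_constant(items[selected + [c]])).fit().aic
            for c in remaining}
    best = min(aics, key=aics.get)
    if aics[best] >= current_aic:
        break                      # no candidate improves the fit; stop
    current_aic = aics[best]
    selected.append(best)
    remaining.remove(best)

print(selected)                    # typically recovers ghq2, ghq5, ghq9
```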
abstract_id: PUBMED:15717912
A comparison of GPs and nurses in their approach to psychological disturbance in primary care consultations. It has frequently been reported that GPs fail to diagnose many of the psychological problems that present to them. It also appears that practice nurses working in primary care show similar diagnostic 'failings'. This study extends these observations by reporting the psychiatric diagnostic practices of GPs and nurses working in the same settings across six general practices. After each consultation the health professional involved assessed the degree of psychological morbidity and the amount of time they had spent attending to this problem. The health professionals' assessment was compared with the score from a General Health Questionnaire completed by the patient. Analysis of 1646 consultations revealed that GPs saw patients with more psychological problems than nurses. Nurses, however, spent significantly more time dealing with their psychological workload than their GP colleagues, after allowing for the fact that they saw fewer patients in this category. This observation raises the question of whether this use of a scarce time resource in the consultation is appropriate.
abstract_id: PUBMED:26504283
The influence of horseback riding training on the physical function and psychological problems of stroke patients. [Purpose] The purpose of this study was to determine the influence of horseback riding training on the physical function and psychological problems of stroke patients. [Subjects and Methods] Thirty stroke patients were divided evenly into an experimental group and a control group. Both groups carried out neurodevelopmental treatment. The experimental group additionally performed mechanical horseback riding training for 30 minutes a day, 5 days a week, for 6 weeks. Physical function was evaluated using the Berg Balance Scale (BBS) and the Timed Up and Go Test (TUGT). Psychological problems were assessed using the Beck Depression Inventory (BDI). In order to compare differences within groups between before and after the experiment, the paired t test was conducted. In order to compare differences between groups before and after the experiment, the independent t test was conducted. [Results] In the experimental group, the BBS, TUGT, and BDI showed significant improvements after the intervention. The experimental group's BBS, TUGT, and BDI post-intervention changes were significantly better than those observed in the control group. [Conclusion] According to our results, horseback riding training has a positive effect on the physical function and psychological problems of stroke patients.
abstract_id: PUBMED:24926278
The effect of a career choice guidance on self-reported psychological problems. Late adolescents with career choice problems often have psychological problems as well. The starting point of this study was career choice counselors' question of whether potential clients with both career choice problems and psychological problems could be accepted into a career choice intervention, or whether it was better to advise them to seek help for their psychological problems first. We investigated whether a successful career choice intervention reduced psychological problems, and whether this program was equally effective in participants with low and with high levels of psychological problems. Participants were 45 Dutch students (age 17-24) with career choice problems. They had above-average levels of self-reported psychological problems before the start of the intervention. These problems decreased significantly following the intervention. With regard to vocational commitment development, the intervention was equally effective for participants with low or average and with (very) high levels of psychological problems before the start of the intervention.
Answer: Patients have mixed attitudes towards disclosing psychological problems to general practitioners (GPs). A study conducted in New Zealand found that a significant proportion of patients choose not to disclose psychological problems to their GPs. Specifically, 29.8% of all patients and 36.9% of those with current symptoms reported non-disclosure. The reasons for non-disclosure included beliefs that a GP is not the 'right' person to talk to (33.8%) or that mental health problems should not be discussed at all (27.6%) (PUBMED:16055471). This suggests that while some patients may be willing to discuss psychological issues with their GPs, there is a substantial number who do not see GPs as the appropriate avenue for such discussions, possibly due to stigma or lack of awareness about the role of GPs in managing mental health.
To improve the situation, the study suggests that interventions such as screening and GP education may be ineffective unless they are accompanied by educational programmes for the general public to increase mental health literacy, de-stigmatise mental illness, and increase awareness of general practice as an effective source of health care for psychological problems (PUBMED:16055471).
In summary, while some patients are open to discussing psychological problems with their GPs, a significant number prefer not to, indicating a need for public education and changes in perception regarding the role of GPs in mental health care. |
Instruction: Does a dedicated program for young breast cancer patients affect the likelihood of fertility preservation discussion and referral?
Abstracts:
abstract_id: PUBMED:27212696
Does a dedicated program for young breast cancer patients affect the likelihood of fertility preservation discussion and referral? Purpose: To assess whether a dedicated program for young breast cancer patients, including a nurse navigator, improves the frequency of: a) fertility discussion documentation and b) fertility preservation (FP) referrals.
Methods: A retrospective chart review and prospective survey were undertaken of breast cancer patients diagnosed at age 40 or younger between 2011 and 2013 who received adjuvant or neo-adjuvant chemotherapy at two academic cancer centers in Toronto, Canada. The Odette Cancer Centre (OCC) has a dedicated program for young breast cancer patients while Princess Margaret Cancer Centre (PM) does not. Patient demographics, tumor pathology, treatment and fertility discussion documentation prior to systemic chemotherapy administration were extracted from patient records. Prospective surveys were administered to the same cohort to corroborate data collected.
Results: Eighty-one patient charts were reviewed at each of OCC and PM. Forty-seven patients at OCC and 49 at PM returned surveys, for response rates of 58% and 60%, respectively. Chart reviews demonstrated no difference in the frequency of fertility discussion documentation (78% versus 75% for OCC and PM, p = 0.71); however, surveys demonstrated higher rates of recall of fertility discussion at OCC (96% versus 80%, p = 0.02). A greater proportion of women were offered FP referrals at OCC, as observed in chart reviews (56% versus 41%, p = 0.09) and surveys (73% versus 51%, p = 0.04). Time to initiation of chemotherapy did not differ between women who underwent FP and those who did not.
Conclusion: A dedicated program for young breast cancer patients is associated with a higher frequency of FP referrals without delaying systemic therapy.
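For readers wanting to reproduce the flavor of comparisons like "96% versus 80%, p = 0.02", the sketch below runs a 2x2 test on counts reconstructed approximately from the reported respondent numbers (47 and 49). The abstract does not state which test the authors used; Fisher's exact test is shown as one common choice, so the p-value will not match exactly.

```python
from scipy.stats import fisher_exact

# Approximate counts: recalled vs. did not recall a fertility discussion.
occ = [45, 2]    # ~96% of 47 survey respondents at OCC
pm  = [39, 10]   # ~80% of 49 survey respondents at PM
odds_ratio, p_value = fisher_exact([occ, pm])
print(odds_ratio, p_value)
```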
abstract_id: PUBMED:25549654
If you did not document it, it did not happen: rates of documentation of discussion of infertility risk in adolescent and young adult oncology patients' medical records. Purpose: The adolescent and young adult (AYA) population is underserved because of unique late-effect issues, particularly future fertility. This study sought to establish rates of documentation of discussion of risk of infertility, fertility preservation (FP) options, and referrals to fertility specialists in AYA patients' medical records at four cancer centers.
Methods: All centers reviewed randomized records within the top four AYA disease sites (breast, leukemia/lymphoma, sarcoma, and testicular). Eligible records included those of patients: diagnosed in 2011, with no prior receipt of gonadotoxic therapy; age 18 to 45 years; with no multiple primary cancers; and for whom the record was not a second opinion. Quality Oncology Practice Initiative methods were used to evaluate documentation of discussion of risk of infertility, discussion of FP options, and referral to a fertility specialist.
Results: Of 231 records, 26% documented infertility risk discussion, 24% documented FP option discussion, and 13% documented referral to a fertility specialist. Records were less likely to contain evidence of infertility risk and FP option discussions for female patients (P = .030 and .004, respectively) and those with breast cancer (P = .021 and < .001, respectively). Records for Hispanic/Latino patients were less likely to contain evidence of infertility risk discussion (P = .037). Records were less likely to document infertility risk discussion, FP option discussion, and fertility specialist referral for patients age ≥ 40 years (P < .001, < .001, and .002, respectively) and those who already had children (all P < .001).
Conclusion: The overall rate of documentation of discussion of FP is low, and results show disparities among specific groups. Although greater numbers of discussions may be occurring, there is a need to create interventions to improve documentation.
abstract_id: PUBMED:34650912
Factors Associated With the Discussion of Fertility Preservation in a Cohort of 1,357 Young Breast Cancer Patients Receiving Chemotherapy. Purpose: Female breast cancer (BC) patients exposed to gonadotoxic chemotherapy are at risk of future infertility. There is evidence of disparities in the discussion of fertility preservation for these patients. The aim of the study was to identify factors influencing the discussion of fertility preservation (FP).
Material And Methods: We analyzed consecutive BC patients treated by chemotherapy at Institut Curie from 2011-2017 and aged 18-43 years at BC diagnosis. The discussion of FP was classified in a binary manner (discussion/no discussion), based on mentions present in the patient's electronic health record (EHR) before the initiation of chemotherapy. The associations between FP discussion and the characteristics of patients/tumors and healthcare practitioners were investigated by logistic regression analysis.
Results: The median age of the 1357 patients included in the cohort was 38.7 years, and median tumor size was 30.3 mm. The distribution of BC subtypes was as follows: 702 luminal BCs (58%), 241 triple-negative breast cancers (TNBCs) (20%), 193 HER2+/HR+ (16%) and 81 HER2+/HR- (6%). All patients received chemotherapy in a neoadjuvant (n=611, 45%) or adjuvant (n=744, 55%) setting. A discussion of FP was mentioned for 447 patients (33%). Earlier age at diagnosis (discussion: 34.4 years versus no discussion: 40.5 years), nulliparity (discussion: 62% versus no discussion: 38%), and year of BC diagnosis were the patient characteristics significantly associated with the mention of FP discussion. Surgeons and female physicians were the most likely to mention FP during the consultation before the initiation of chemotherapy (discussion: 22% and 21%, respectively). The likelihood of FP discussion increased significantly over time, from 15% in 2011 to 45% in 2017. After multivariate analysis, FP discussion was significantly associated with younger age, number of children before BC diagnosis, physicians' gender and physicians' specialty.
Conclusion: FP discussion rates are low and are influenced by patient and physician characteristics. There is therefore room for improvement in the promotion and systematization of FP discussion.
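The multivariate analysis described in this abstract is a binary logistic regression of the documented-discussion indicator on patient and physician characteristics. Below is a minimal sketch of that kind of model, not the authors' code: the dataframe, its column names, and the simulated effect sizes are hypothetical stand-ins that merely mimic the reported direction of effects (discussion more likely for younger, nulliparous patients).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; the real study analysed 1,357 EHR records.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "age": rng.uniform(18, 43, n).round(1),
    "n_children": rng.integers(0, 4, n),
    "physician_sex": rng.choice(["F", "M"], n),
    "specialty": rng.choice(["surgery", "medical_oncology"], n),
})
# Simulate the reported pattern: documented discussion is more likely
# for younger patients with fewer children.
logit = 3.0 - 0.12 * df["age"] - 0.5 * df["n_children"]
df["fp_discussed"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = smf.logit(
    "fp_discussed ~ age + n_children + C(physician_sex) + C(specialty)",
    data=df,
).fit(disp=False)
print(np.exp(model.params))  # odds ratios for each factor
```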
abstract_id: PUBMED:28776607
"Joven & Fuerte": Program for Young Women with Breast Cancer in Mexico - Initial Results. Despite the high rates of breast cancer among young Mexican women, their special needs and concerns have not been systematically addressed. To fulfill these unsatisfied demands, we have developed "Joven & Fuerte: Program for Young Women with Breast Cancer in Mexico," the first program dedicated to the care of young breast cancer patients in Latin America, which is taking place at the National Cancer Institute of Mexico and the two medical facilities of the Instituto Tecnológico y de Estudios Superiores de Monterrey. The program was created to optimize the complex clinical and psychosocial care of these patients, enhance education regarding their special needs, and promote targeted research, as well as to replicate this program model in other healthcare centers across Mexico and Latin America. From November 2013 to February 2017, the implementation of the "Joven & Fuerte" program has delivered specialized care to 265 patients, through the systematic identification of their particular needs and the provision of fertility, genetic, and psychological supportive services. Patients and families have engaged in pedagogic activities and workshops and have created a motivated and empowered community. The program developed and adapted the first educational resources in Spanish dedicated for young Mexican patients, as well as material for healthcare providers. As for research, a prospective cohort of young breast cancer patients was established to characterize clinicopathological features and psychosocial effects at baseline and during follow-up, as a guide for the development of specific cultural interventions addressing this vulnerable group. Eventually, it is intended that the program's organization and structure can reach national and international interactions and serve as a platform for other countries.
abstract_id: PUBMED:35001240
Surgeon and Patient Reports of Fertility Preservation Referral and Uptake in a Prospective, Pan-Canadian Study of Young Women with Breast Cancer. Background: Prompt referral by their surgeon enables fertility preservation (FP) by young women with breast cancer (YWBC) without treatment delay. Following a FP knowledge intervention, we evaluated surgeon and patient reports of fertility discussion, FP referral offer and uptake, and FP choices and reasons for declining FP among patients enrolled in the Reducing Breast Cancer in Young Women, prospective pan-Canadian study.
Methods: Between September 2015 and December 2020, 1271 patients were enrolled at 31 sites. For each patient, surgeons were sent a questionnaire inquiring whether: (1) fertility discussion was initiated by the surgical team; (2) FP referral was offered; (3) referral was accepted; a reason was requested for any "no" response. Patients were surveyed about prediagnosis fertility plans and postdiagnosis oncofertility management.
Results: Surgeon questionnaires were completed for 1068 (84%) cases. Fertility was discussed with 828 (84%) and FP consultation offered to 461 (47%) of the 990 YWBC with invasive disease. Among the 906 responding YWBC, referral was offered to 220 (82%) of the 283 (33%) with invasive disease who stated that they had definitely/probably not completed childbearing prediagnosis. Of these, 133 (47%) underwent FP. The two most common reasons for not choosing FP were cost and unwillingness to delay treatment.
Conclusions: Although the rates of surgeon fertility discussion and FP referral were higher than in most reports, likely due to our previous intervention, further improvement is desirable. FP should be offered to all YWBC at diagnosis, regardless of perceived childbearing intent. Cost remains an important barrier to FP uptake.
abstract_id: PUBMED:29985734
Measuring the Impact of an Adolescent and Young Adult Program on Addressing Patient Care Needs. Purpose: We aimed to evaluate the effectiveness of an adult-based adolescent and young adult (AYA) cancer program by assessing patient satisfaction and whether programming offers added incremental benefit beyond primary oncology providers (POP) to address their needs.
Methods: A modified validated survey was used to ask two questions: (1) rate on a 10-point Likert scale their level of satisfaction with the information provided to them by their POP, and (2) did the AYA consult provide added value beyond that of their POP. Young people at PM were recruited over two separate time points spaced 1 year apart. Descriptive statistics were used to report demographics and survey responses. Differences in demographics between cohorts 1 and 2 were compared using Student's t-tests.
Results: Participants were an average of 31 years (range 15-39) of age; (Cohort 1 = 137; Cohort 2 = 130) and were dominated by diagnoses of leukemia, lymphoma, and breast cancer. More patients had a consultation with the AYA program in 2016 (Cohort 2 = 55/130, 42%) compared to 2015 (Cohort 1 = 34/137, 25%, p = 0.026). Mean satisfaction scores (±SD) with information provided by POP in AYA domains in both cohorts combined were highest among (1) cancer information (8.09 ± 2.22), (2) social supports (7.45 ± 2.52), and (3) school/work (7.42 ± 2.88). When evaluating the incremental benefit of the AYA-dedicated team, statistically significant added value was perceived in 5/10 domains, including school/work (p < 0.001), social supports (p < 0.001), physical appearance (p = 0.009), sexual health (p = 0.01), and fertility (p < 0.001).
Conclusions: Participants were satisfied with the information provided by their POP and still declared incremental added benefit of the AYA program. Cancer centers should continue to advocate for AYA focused programming with ongoing evaluation.
abstract_id: PUBMED:36756160
Knowledge, attitudes, and behaviors toward fertility preservation in patients with breast cancer: A cross-sectional survey of physicians. Background: Fertility is an important issue for young women with breast cancer, but studies about physicians' knowledge, attitudes, and practices toward fertility preservation (FP) are largely based on Western populations and do not reflect recent international guidelines for FP. In this international study, we aimed to assess the knowledge, attitudes, and practices of physicians from South Korea, other Asian countries, and Latin America toward FP in young women with breast cancer, and identify the related barriers.
Methods: The survey was conducted anonymously among physicians from South Korea, other Asian countries, and Latin America involved in breast cancer care between November 2020 and July 2021. Topics included knowledge, attitudes, and perceptions toward FP; practice behaviors; barriers; and participant demographics. We grouped related questions around two main themes-discussion with patients about FP, and consultation and referral to a reproductive endocrinologist. We analyzed the relationships between main questions and other survey items.
Results: A total of 151 physicians completed the survey. Most participants' overall knowledge about FP was good. More than half of the participants answered that they discussed FP with their patients in most cases, but that personnel to facilitate discussions about FP and the provision of educational materials were limited. A major barrier was time constraints in the clinic (52.6%). Discussion, consultations, and referrals were more likely to be performed by surgeons who primarily treated patients with operable breast cancer (FP discussion odds ratio [OR]: 2.90; 95% confidence interval [CI]: 1.24-6.79; FP consultation and referral OR: 2.98; 95% CI: 1.14-7.74). Participants' knowledge and attitudes about FP were significantly associated with discussion, consultations, and referrals.
Conclusion: Physicians from South Korea, other Asian countries, and Latin America are knowledgeable about FP and most perform practice behaviors toward FP well. Physicians' knowledge and favorable attitudes are significantly related to discussion with patients, as well as consultation with and referral to reproductive endocrinologists. However, there are also barriers, such as limitations to human resources and materials, suggesting a need for a systematic approach to improve FP for young women with breast cancer.
abstract_id: PUBMED:25069500
Referral for fertility preservation counselling in female cancer patients. Study Question: What changes can be detected in fertility preservation (FP) counselling (FPC) over time and what are the determinants associated with the referral of newly diagnosed female cancer patients, aged 0-39 years, to a specialist in reproductive medicine for FPC?
Summary Answer: Although the absolute number of patients receiving FPC increased over time, only 9.8% of all potential patients (aged 0-39 years) were referred in 2011 and referral disparities were found with respect to patients' age, cancer diagnosis and healthcare provider-related factors.
What Is Known Already: Referral rates for FPC prior to the start of gonadotoxic cancer treatment are low. Determinants associated with low referral and referral disparities have been identified in previous studies, although there are only scarce data on referral practices and determinants for FPC referral in settings with reimbursement of FP(C).
Study Design, Size, Duration: We conducted a retrospective observational and questionnaire study in a Dutch university hospital. Data on all female cancer patients counselled for FP in this centre (2001-2013), as well as all newly diagnosed female cancer patients aged 0-39 years in the region (2009-2011) were collected.
Participants/materials, Setting, Methods: Data were retrieved from medical records (FPC patients), cancer incidences reported by the Dutch Cancer Registry (to calculate referral percentages) and referring professionals (to identify reasons for the current referral behaviour).
Main Results And The Role Of Chance: In 2011, a total of 9.8% of the patients were referred for FPC. Patients aged 20-29 years or diagnosed with breast cancer or lymphoma were referred more frequently compared with patients under the age of 20 years or patients diagnosed with other malignancies. The absolute numbers of patients receiving FPC increased over time. Healthcare provider-related determinants for low referral were not starting a discussion about fertility-related issues, not knowing where to refer a patient for FPC and not collaborating with patients' associations.
Limitations, Reasons For Caution: Actual referral rates may slightly differ from our estimation as there may have been patients who did not wish to receive FPC. Sporadically, patients might have been directly referred to other regions or may have received ovarian transposition without FPC. By excluding skin cancer patients, we will have underestimated the group of women who are eligible for FPC as this group also includes melanoma patients who might have received gonadotoxic therapy.
Wider Implications Of The Findings: The low referral rates and referral disparities reported in the current study indicate that there are opportunities to improve referral practices. Future research should focus on the implementation and evaluation of interventions to improve referral practices, such as information materials for patients at oncology departments, discussion prompts or methods to increase the awareness of physicians and patients of FP techniques and guidelines.
Study Funding/competing Interests: This work was supported by the Radboud university medical center and the Radboud Institute for Health Sciences. The authors have declared no conflicts of interest with respect to this work.
Trial Registration Number: Not applicable.
abstract_id: PUBMED:23443036
pynk : Breast Cancer Program for Young Women. CONSIDER THIS SCENARIO: A 35-year-old recently married woman is referred to a surgeon because of a growing breast lump. After a core biopsy shows cancer, she undergoes mastectomy for a 6-cm invasive lobular cancer that has spread to 8 axillary nodes. By the time she sees the medical oncologist, she is told that it is too late for a fertility consultation, and she receives a course of chemotherapy. At clinic appointments, she seems depressed and admits that her husband has been less supportive than she had hoped. After tamoxifen is started, treatment-related sexuality problems and the probability of infertility contribute to increasing strain on the couple's relationship. Their marriage ends two years after the woman's diagnosis.Six years after her diagnosis, this woman has completed all treatment, is disease-free, and is feeling extremely well physically. However, she is upset about being postmenopausal, and she is having difficulty adopting a child as a single woman with a history of breast cancer. Could this woman and her husband have been offered additional personalized interventions that might have helped them better cope with the breast cancer diagnosis and the effects of treatment?Compared with their older counterparts, young women with breast cancer often have greater and more complex supportive care needs. The present article describes the goals, achievements, and future plans of a specialized interdisciplinary program-the first of its kind in Canada-for women 40 years of age and younger newly diagnosed with breast cancer. The program was created to optimize the complex clinical care and support needs of this population, to promote research specifically targeting issues unique to young women, and to educate the public and health care professionals about early detection of breast cancer in young women and about the special needs of those women after their diagnosis.
abstract_id: PUBMED:30676852
Patterns of Referral for Fertility Preservation Among Female Adolescents and Young Adults with Breast Cancer: A Population-Based Study. Purpose: To assess the fertility preservation (FP) referral rates and patterns of newly diagnosed breast cancer in female adolescent and young adult (AYA) patients.
Methods: Women aged 15-39 years with newly diagnosed breast cancer in Ontario from 2000 to 2017 were identified using the Ontario Cancer Registry. Exclusion criteria included prior sterilizing procedure, health insurance ineligibility, and prior infertility or cancer diagnosis. Women with a gynecology consult between cancer diagnosis and chemotherapy commencement with the billed infertility diagnostic code (ICD-9 628) were used as a surrogate for FP referral. The effect of age, parity, year of cancer diagnosis, staging, income, region, neighborhood marginalization, and rurality on referral status was investigated.
Results: A total of 4452 patients aged 15-39 with newly diagnosed breast cancer met the inclusion criteria. Of these women, 178 (4.0%) were referred to a gynecologist with a billing code of infertility between cancer diagnosis and initiation of chemotherapy. Older age, prior parity, and advanced disease were inversely correlated with referral. Referral rates also varied regionally: patients treated in the south-east and south-west Local Health Integration Networks (LHINs) had the highest probability of referral, and patients covered by north LHINs had the lowest (central LHIN as reference). General surgeons accounted for 36.5% of all referrals, the highest percentage of all specialists. Referral rates significantly increased over time from 0.4% in 2000 to 10.7% in 2016.
Conclusion: FP referral rates remain low and continue to be influenced by patient demographics and prognosis. These findings highlight the need for further interdisciplinary coordination in addressing the fertility concerns of AYA with newly diagnosed breast cancers.
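The reported rise in referral rates (0.4% in 2000 to 10.7% in 2016) is the kind of trend that can be tested with a binomial regression of referral status on diagnosis year. The sketch below uses synthetic yearly counts, not the Ontario registry data, purely to illustrate the method.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic yearly counts on roughly the cohort's scale (~4,452 patients).
years = np.arange(2000, 2017)
n_per_year = np.full(years.size, 260)
rates = np.linspace(0.004, 0.107, years.size)  # illustrative linear rise
referred = np.random.default_rng(1).binomial(n_per_year, rates)

# Binomial GLM: endog given as (successes, failures), year as predictor.
X = sm.add_constant(years - 2000)
fit = sm.GLM(
    np.column_stack([referred, n_per_year - referred]),
    X,
    family=sm.families.Binomial(),
).fit()
print(fit.params)   # a positive year coefficient indicates rising odds
print(fit.pvalues)
```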
Answer: Yes, a dedicated program for young breast cancer patients does affect the likelihood of fertility preservation (FP) discussion and referral. The study conducted at the Odette Cancer Centre (OCC), which has a dedicated program for young breast cancer patients, showed that there was a higher frequency of FP referrals compared to the Princess Margaret Cancer Centre (PM), which does not have such a program. The chart reviews demonstrated no significant difference in the frequency of fertility discussion documentation between the two centers, but surveys indicated a higher recall of fertility discussions at OCC (96% vs. 80% at PM). Additionally, a greater proportion of women were offered FP referrals at OCC, as observed in both chart reviews and surveys. Importantly, the time to initiation of chemotherapy did not differ between women who underwent FP and those who did not, indicating that the dedicated program facilitated FP referrals without delaying treatment (PUBMED:27212696).
This finding is consistent with the broader literature, which indicates that the overall rate of documentation of discussion of FP is low, and there are disparities among specific groups. Dedicated programs and interventions are needed to improve documentation and ensure that discussions about FP are happening consistently (PUBMED:25549654). Moreover, factors such as patient and physician characteristics influence FP discussion rates, suggesting that there is room for improvement in promoting and systematizing FP discussions (PUBMED:34650912).
Programs like "Joven & Fuerte" in Mexico have also shown that addressing the specific needs of young breast cancer patients through dedicated programs can lead to the systematic identification of patients' needs and the provision of fertility, genetic, and psychological supportive services (PUBMED:28776607). Similarly, the "pynk: Breast Cancer Program for Young Women" in Canada was created to optimize the complex clinical care and support needs of young women with breast cancer, including fertility concerns (PUBMED:23443036).
In conclusion, dedicated programs for young breast cancer patients are associated with a higher likelihood of FP discussion and referral, which is crucial for addressing the fertility concerns of this patient population without delaying their cancer treatment. |
Instruction: The intestinal effects of bran-like plastic particles: is the concept of 'roughage' valid after all?
Abstracts:
abstract_id: PUBMED:9222725
The intestinal effects of bran-like plastic particles: is the concept of 'roughage' valid after all? Objective: The mechanisms by which dietary fibre exerts its laxative action are not fully understood. Studies using sliced plastic tubing as a fibre substitute showed a decrease in both small and large bowel transit time. The significance of these studies is hard to interpret. We set out to compare the effects on intestinal function of wheat bran with plastic flakes similar in size and flaky shape to wheat bran (and devoid of plasticizers).
Design And Methods: Volunteers consumed coarse wheat bran then, after a washout period, plastic flakes of the same size and shape as the bran. Before and after each intervention whole-gut transit time (WGTT), defecation frequency, stool form, stool water content, stool beta-glucuronidase activity and dietary intake were assessed.
Results: Twenty-nine volunteers consumed a mean of 27.1 g of raw wheat bran and 24 g of plastic flakes a day. Baseline WGTT, interdefecatory intervals (IDI), stool form, weight, output, water content, and beta-glucuronidase were similar before both interventions. Both led to a decrease in mean faecal beta-glucuronidase activity, median WGTT (bran 25.8%, plastic 28.6%) and IDI (bran 23.3%, plastic 25.0%). Both also increased stool form score (bran 28.6%, plastic 21.2%) and stool output (bran 67.1%, plastic 79.0%). Stool water content only rose with wheat bran (72%-75%, P = 0.014).
Conclusion: Overall, plastic 'pseudobran' was as effective at altering colonic function as wheat bran at a similar dosage but with fewer particles. The mechanism is not by increased faecal water. Reduction in enzyme activity with plastic flakes suggests that the plastic led to qualitative and, probably, beneficial changes in the bacterial flora or their metabolic processes. The concept of roughage deserves to be revived.
abstract_id: PUBMED:22949864
Effects of rice bran oil on the intestinal microbiota and metabolism of isoflavones in adult mice. This study examined the effects of rice bran oil (RBO) on mouse intestinal microbiota and urinary isoflavonoids. Dietary RBO affects intestinal cholesterol absorption. Intestinal microbiota seem to play an important role in isoflavone metabolism. We hypothesized that dietary RBO changes the metabolism of isoflavonoids and intestinal microbiota in mice. Male mice were randomly divided into two groups: those fed a 0.05% daidzein with 10% RBO diet (RO group) and those fed a 0.05% daidzein with 10% lard control diet (LO group) for 30 days. Urinary amounts of daidzein and dihydrodaidzein were significantly lower in the RO group than in the LO group. The ratio of equol/daidzein was significantly higher in the RO group (p < 0.01) than in the LO group. The amount of fecal bile acids was significantly greater in the RO group than in the LO group. The composition of cecal microbiota differed between the RO and LO groups. The occupation ratios of Lactobacillales were significantly higher in the RO group (p < 0.05). Significant positive correlation (r = 0.591) was observed between the occupation ratios of Lactobacillales and fecal bile acid content of two dietary groups. This study suggests that dietary rice bran oil has the potential to affect the metabolism of daidzein by altering the metabolic activity of intestinal microbiota.
abstract_id: PUBMED:30619170
Combination of Clostridium butyricum and Corn Bran Optimized Intestinal Microbial Fermentation Using a Weaned Pig Model. Experimental manipulation of the intestinal microbiota influences the health of the host and is a common application for synbiotics. Here, Clostridium butyricum (C. butyricum, C.B) combined with corn bran (C.B + Bran) was taken as the synbiotic application in a weaned pig model to investigate its regulation of intestinal health over 28 days postweaning. Growth performance, fecal short chain fatty acids (SCFAs) and the bacterial community were evaluated at day 14 and day 28 of the trial. Although the C.B + Bran treatment had no significant effect on growth performance (P > 0.05), it optimized the composition of the intestinal bacteria, mainly represented by increased acetate-producing bacteria and decreased pathogens. Microbial fermentation in the intestine showed a shift from low acetate and isovalerate production on day 14 to enhanced acetate production on day 28 in the C.B + Bran treatment. Thus, C.B and corn bran promoted intestinal microbial fermentation and optimized the microbial community for pigs at an early age. These findings provide perspectives on the advantages of synbiotics as a new approach for effective utilization of corn bran.
abstract_id: PUBMED:30111703
Effects of Oat Bran on Nutrient Digestibility, Intestinal Microbiota, and Inflammatory Responses in the Hindgut of Growing Pigs. Oat bran has drawn great attention within human research for its potential role in improving gut health. However, research regarding the impact of oat bran on nutrient utilization and intestinal functions in pigs is limited. The purpose of this study was to investigate the effects of oat bran on nutrient digestibility, intestinal microbiota, and inflammatory responses in the hindgut of growing pigs. Twenty-six growing pigs were fed either a basal diet (CON) or a basal diet supplemented with 10% oat bran (OB) in a 28-day feeding trial. Results showed that the digestibility of dietary gross energy, dry matter, organic matter, and crude protein was lower in the OB group compared to the CON group on day 14, but no differences were observed between the two groups on day 28. In the colon, the relative abundances of operational taxonomic units (OTUs) associated with Prevotella, Butyricicoccus, and Catenibacterium were higher, while those associated with Coprococcus and Desulfovibrio were lower, in the OB group compared to the CON group. Oat bran decreased mRNA expression of caecal interleukin-8 (IL-8), as well as colonic IL-8, nuclear factor-κB (NF-κB), and tumor necrosis factor-α (TNF-α), in the pigs. In summary, oat bran treatment for 28 days did not affect dietary nutrient digestibility, but promoted the growth of cellulolytic bacteria and ameliorated inflammatory reactions in the hindgut of growing pigs.
abstract_id: PUBMED:33561964
Effects of a Rice Bran Dietary Intervention on the Composition of the Intestinal Microbiota of Adults with a High Risk of Colorectal Cancer: A Pilot Randomised-Controlled Trial. Rice bran exhibits chemopreventive properties that may help to prevent colorectal cancer (CRC), and a short-term rice bran dietary intervention may promote intestinal health via modification of the intestinal microbiota. We conducted a pilot, double-blind, randomised placebo-controlled trial to assess the feasibility of implementing a long-term (24-week) rice bran dietary intervention in Chinese subjects with a high risk of CRC, and to examine its effects on the composition of their intestinal microbiota. Forty subjects were randomised into the intervention group (n = 19) or the control group (n = 20). The intervention participants consumed 30 g of rice bran over 24-h intervals for 24 weeks, whilst the control participants consumed 30 g of rice powder on the same schedule. High rates of retention (97.5%) and compliance (≥91.3%) were observed. No adverse effects were reported. The intervention significantly enhanced the intestinal abundance of Firmicutes and Lactobacillus, and tended to increase the Firmicutes/Bacteroidetes ratio and the intestinal abundance of Prevotella_9 and the health-promoting Lactobacillales and Bifidobacteria, but had no effect on bacterial diversity. Overall, a 24-week rice bran dietary intervention was feasible, and may increase intestinal health by inducing health-promoting modification of the intestinal microbiota. Further larger-scale studies involving a longer intervention duration and multiple follow-up outcome assessments are recommended.
abstract_id: PUBMED:35498986
The effects of dietary fibers from rice bran and wheat bran on gut microbiota: An overview. Whole grain is the primary food providing abundant dietary fibers (DFs) in the human diet. DFs from rice bran and wheat bran have been well documented to modulate the gut microbiota. This review aims to summarize the physicochemical properties and digestive behaviors of DFs from rice bran and wheat bran and their effects on the host gut microbiota. The physicochemical properties of DFs are closely related to their fermentability and digestive behaviors. DFs from rice bran and wheat bran modulate specific bacteria and promote SCFA-producing bacteria to maintain host health. Moreover, their metabolites stimulate the production of mucus-associated bacteria to enhance the intestinal barrier and regulate the immune system. They also reduce the levels of related inflammatory cytokines and regulate Treg activation. Therefore, DFs from rice bran and wheat bran can serve as prebiotics, and diets rich in whole grain may be a biotherapeutic strategy for human health.
abstract_id: PUBMED:27649240
Current Hypothesis for the Relationship between Dietary Rice Bran Intake, the Intestinal Microbiota and Colorectal Cancer Prevention. Globally, colorectal cancer (CRC) is the third most common form of cancer. The development of effective chemopreventive strategies to reduce CRC incidence is therefore of paramount importance. Over the past decade, research has indicated the potential of rice bran, a byproduct of rice milling, in CRC chemoprevention. This was recently suggested to be partly attributable to modification in the composition of intestinal microbiota when rice bran was ingested. Indeed, previous studies have reported changes in the population size of certain bacterial species, or microbial dysbiosis, in the intestines of CRC patients and animal models. Rice bran intake was shown to reverse such changes through the manipulation of the population of health-promoting bacteria in the intestine. The present review first provides an overview of evidence on the link between microbial dysbiosis and CRC carcinogenesis and describes the molecular events associated with that link. Thereafter, there is a summary of current data on the effect of rice bran intake on the composition of intestinal microbiota in human and animal models. The article also highlights the need for further studies on the inter-relationship between rice bran intake, the composition of intestinal microbiota and CRC prevention.
abstract_id: PUBMED:34070845
Fermented Rice Bran Supplementation Prevents the Development of Intestinal Fibrosis Due to DSS-Induced Inflammation in Mice. Fermented rice bran (FRB) is known to protect mice intestines against dextran sodium sulfate (DSS)-induced inflammation; however, the restoration of post-colitis intestinal homeostasis using FRB supplementation is currently undocumented. In this study, we observed the effects of dietary FRB supplementation on intestinal restoration and the development of fibrosis after DSS-induced colitis. DSS (1.5%) was introduced in the drinking water of mice for 5 days. Eight mice were sacrificed immediately after the DSS treatment ended. The remaining mice were divided into three groups, comprising the following diets: control, 10% rice bran (RB), and 10% FRB-supplemented. Diet treatment was continued for 2 weeks, after which half the population of mice from each group was sacrificed. The experiment was continued for another 3 weeks before the remaining mice were sacrificed. FRB supplementation could reduce the general observation of colitis and production of intestinal pro-inflammatory cytokines. FRB also increased intestinal mRNA levels of anti-inflammatory cytokine, tight junction, and anti-microbial proteins. Furthermore, FRB supplementation suppressed markers of intestinal fibrosis. This effect might have been achieved via the canonical Smad2/3 activation and the non-canonical pathway of Tgf-β activity. These results suggest that FRB may be an alternative therapeutic agent against inflammation-induced intestinal fibrosis.
abstract_id: PUBMED:32182669
Defatted Rice Bran Supplementation in Diets of Finishing Pigs: Effects on Physiological, Intestinal Barrier, and Oxidative Stress Parameters. Rice bran is a waste product with low cost and high fiber content, giving it an added advantage over corn and soybean meal, which have to be purchased and always at a relatively higher cost. Under the background of increased attention to sustainable agriculture, it is significant to find alternative uses for this byproduct. A total of 35 finishing pigs were allotted to five dietary treatments: a control group with basal diet and four experimental diets where corn was equivalently substituted by 7%, 14%, 21%, and 28% defatted rice bran (DFRB), respectively. With increasing levels of DFRB, the neutrophil to lymphocyte ratio (NLR) linearly decreased (p < 0.05). In the jejunum, the mRNA level of nuclear factor erythroid-2 related factor-2 (Nrf2) exhibited a quadratic response (p < 0.01) with incremental levels of DFRB. In the colon, the mRNA levels of mucin 2 (MUC2), Nrf2, and NAD(P)H: quinone oxidoreductase 1 (NQO1) were upregulated (linear, p < 0.05) and heme oxygenase-1 (HO-1) was upregulated (linear, p < 0.01). Overall, using DFRB to replace corn decreased the inflammatory biomarkers of serum and showed potential function in modulating the intestinal barrier by upregulating the mRNA expression levels of MUC2 and downregulating that of Nrf2, NQO1, and HO-1 in the colon.
abstract_id: PUBMED:10219832
Roughage revisited: the effect on intestinal function of inert plastic particles of different sizes and shape. The mechanisms by which dietary fiber exerts its laxative action are not fully understood. Finely grinding wheat bran reduces its effect. Inert plastic particles are equipotent to bran if they consist of flakes or sliced tubing. It is not known whether altering the size or shape of inert particles alters their effect on intestinal function. In a randomized crossover study, 18 volunteers swallowed 24 g/day of plastic as branlike flakes or as small granules for 10-12 days with a two-week washout period between interventions. Whole-gut transit time (WGTT), orocecal transit time (OCTT), defecation frequency, stool form, stool water content, stool pH, and dietary intake were assessed. The plastic flakes caused a 24% (P < 0.001) reduction in WGTT and a 19% (P = 0.002) fall in OCTT. Resultant and appropriate changes in stool form, interdefecatory intervals (IDI), and stool weight were seen. The small granules did not cause any significant change in WGTT or OCTT, although IDI did decrease and stool output and stool form score increased. The stimulant effect of solid particles in the intestinal lumen upon transit time is influenced by the morphology of the particles.
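In a crossover design like this one, each volunteer serves as their own control, so the natural analysis is a paired comparison of transit times. The sketch below uses synthetic numbers, not the study data, to show how the within-subject percent change in WGTT and a Wilcoxon signed-rank test could be computed.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(2)
baseline = rng.normal(60, 12, 18)                   # hours; 18 volunteers
flakes = baseline * rng.normal(0.76, 0.08, 18)      # ~24% reduction in WGTT
granules = baseline * rng.normal(0.99, 0.08, 18)    # essentially no change

for name, treated in [("flakes", flakes), ("granules", granules)]:
    stat, p = wilcoxon(baseline, treated)           # paired signed-rank test
    change = 100 * np.median((treated - baseline) / baseline)
    print(f"{name}: median WGTT change {change:+.1f}%, p = {p:.3f}")
```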
Answer: The concept of 'roughage' in the context of dietary fiber and its effects on intestinal function has been revisited through studies that have used inert plastic particles as a substitute for dietary fiber. One study compared the effects of wheat bran with plastic flakes similar in size and shape to wheat bran and found that both decreased whole-gut transit time (WGTT) and interdefecatory intervals (IDI) and increased stool form score and stool output. However, only wheat bran increased stool water content, suggesting that the mechanism of action for the plastic 'pseudobran' did not involve increased fecal water. The study concluded that plastic flakes were as effective at altering colonic function as wheat bran at a similar dosage, indicating that the concept of roughage, which refers to the indigestible portion of food that aids in intestinal transit, may still be valid (PUBMED:9222725).
Additional studies have explored the effects of different types of bran and their impact on the intestinal microbiota and metabolism. For instance, rice bran oil was found to affect the metabolism of isoflavonoids and intestinal microbiota in mice, with changes in the composition of cecal microbiota and a significant positive correlation between the occupation ratios of Lactobacillales and fecal bile acid content (PUBMED:22949864). Similarly, a combination of Clostridium butyricum and corn bran optimized intestinal microbial fermentation in a weaned pig model (PUBMED:30619170), and oat bran was shown to promote the growth of cellulolytic bacteria and ameliorate inflammatory reactions in the hindgut of growing pigs (PUBMED:30111703).
Furthermore, a rice bran dietary intervention in adults with a high risk of colorectal cancer was found to be feasible and may increase intestinal health by inducing health-promoting modification of the intestinal microbiota (PUBMED:33561964). The effects of dietary fibers from rice bran and wheat bran on gut microbiota were summarized, highlighting their role in modulating specific bacteria and promoting short-chain fatty acids-producing bacteria to maintain host health (PUBMED:35498986). Rice bran intake was also suggested to modify the composition of intestinal microbiota and potentially contribute to colorectal cancer prevention (PUBMED:27649240), and fermented rice bran supplementation was observed to prevent the development of intestinal fibrosis due to inflammation in mice (PUBMED:34070845). |
Instruction: Is laparoscopic radical prostatectomy after transurethral prostatectomy appropriated?
Abstracts:
abstract_id: PUBMED:26177871
Laparoscopic radical prostatectomy after previous transurethral resection of prostate using a catheter balloon inflated in prostatic urethra: Oncological and functional outcomes from a matched pair analysis. Objectives: To explore the surgical, oncological and functional outcomes of laparoscopic radical prostatectomy in patients who have undergone transurethral resection of the prostate, using a catheter balloon inflated in the prostatic urethra.
Methods: A total of 25 patients were randomly assigned to the no balloon previous transurethral resection of the prostate laparoscopic radical prostatectomy group (n = 12) and the with balloon previous transurethral resection of the prostate laparoscopic radical prostatectomy group (n = 13). Two matched pairs analyses were carried out to identify the 12 (control A) and 13 (control B) surgery-naïve patients. The outcomes were compared between the groups with previous transurethral resection of the prostate (no balloon previous transurethral resection of the prostate laparoscopic radical prostatectomy and with balloon previous transurethral resection of the prostate laparoscopic radical prostatectomy groups) and the controls. The rate of intra- and postoperative complications was assessed. The International Consultation on Incontinence Questionnaire-Urinary Incontinence Short Form and the International Index of Erectile Function 5 were used for symptoms evaluation.
Results: Mean blood loss was higher in patients who had undergone transurethral resection of the prostate, with a statistically nonsignificant reduction in blood loss in the with balloon previous transurethral resection of the prostate laparoscopic radical prostatectomy group. The no balloon previous transurethral resection of the prostate laparoscopic radical prostatectomy group had a longer operative time compared with both the with balloon previous transurethral resection of the prostate laparoscopic radical prostatectomy and control A groups (P < 0.05). The International Index of Erectile Function 5 showed a significant difference between the no balloon previous transurethral resection of the prostate laparoscopic radical prostatectomy group and its control group; the International Consultation on Incontinence Questionnaire showed a statistically significant difference (P < 0.05) between the no balloon previous transurethral resection of the prostate laparoscopic radical prostatectomy and control A groups.
Conclusion: The use of a catheter balloon inflated in the prostatic urethra seems to facilitate laparoscopic radical prostatectomy in patients with previous transurethral resection of the prostate, ultimately reducing the rate of perioperative complications. These findings warrant further investigation on a larger case series with a longer follow up.
abstract_id: PUBMED:17048423
Is laparoscopic radical prostatectomy after transurethral prostatectomy appropriated? Objective: To evaluate the appropriateness and morbidity of laparoscopic radical prostatectomy (LRP) in patients who had previous transurethral prostatectomy (TURP).
Material And Method: From February 2005 to February 2006, 27 patients with clinically localized prostate cancer underwent LRP with the same technique by a single surgeon. Nineteen patients were diagnosed with transrectal ultrasound-guided biopsy (TRUSBX) and eight patients were diagnosed with TURP. Operative data and pathological outcomes were compared between the two groups.
Results: Mean operative time and blood loss in the TRUSBX group were 233 minutes and 610 ml, while those in the TURP group were 251 minutes and 812 ml, respectively. These were not significantly different (all p values > 0.1). There were no significant complications or mortality in either group. LRP could achieve a high free-margin rate. Of 19 patients with pathologically localized disease, 17 (89.4%) had free margins. Free margins were found in 12 of 14 patients (85.7%) in the TRUSBX group and in all patients in the TURP group.
Conclusion: LRP is appropriate for prostate cancer patients with previous TURP. LRP after TURP did not have higher morbidity than LRP after TRUSBX and did not compromise the free-margin rate.
abstract_id: PUBMED:21206663
Laparoscopic radical prostatectomy. Millions of men are diagnosed annually with prostate cancer worldwide. With the advent of PSA screening, there has been a shift toward the detection of early prostate cancer, and there are increased numbers of men with asymptomatic, organ-confined disease. Laparoscopic radical prostatectomy is the latest well-accepted treatment that patients can select. We review the surgical technique and the oncologic and functional outcomes of the most current large series of laparoscopic radical prostatectomy published in English. Positive margin rates range from 2.1-6.9% for pT2a, 9.9-20.6% for pT2b, 24.5-42.3% for pT3a, and 22.6-54.5% for pT3b. Potency rates after bilateral nerve-sparing laparoscopic radical prostatectomy range from 47.1% to 67%. Continence rates at 12 months range from 83.6% to 92%.
abstract_id: PUBMED:31436793
Patient-reported outcomes after open radical prostatectomy, laparoscopic radical prostatectomy and permanent prostate brachytherapy. Objective: To assess patient-reported outcomes after open radical prostatectomy, laparoscopic radical prostatectomy and permanent prostate brachytherapy.
Methods: patient-reported outcomes were evaluated using Expanded Prostate Cancer Index Composite scores at baseline and 1, 3, 6, 12 and 36 months after treatment, respectively, using differences from baseline scores.
Results: Urinary function was the same in the three groups at baseline, but was worse after surgery than after permanent prostate brachytherapy until 12 months; at 36 months it was similar after open radical prostatectomy and permanent prostate brachytherapy, and better after both than after laparoscopic radical prostatectomy. Urinary bother was significantly worse at 1 month after surgery, but better after open radical prostatectomy than after permanent prostate brachytherapy and laparoscopic radical prostatectomy at 3 months, after which symptoms improved gradually in all groups. Obstructive/irritative symptoms were worse after permanent prostate brachytherapy than after open radical prostatectomy at 36 months, and worse after laparoscopic radical prostatectomy until 6 months. Urinary incontinence was worse after surgery, particularly at 1 month. This symptom returned to the baseline level at 12 months after open radical prostatectomy, but recovery after laparoscopic radical prostatectomy was slower. Bowel function after permanent prostate brachytherapy was significantly worse than after surgery at 1 month and this continued until 6 months. Bowel bother was slightly worse at 3 and 6 months after permanent prostate brachytherapy compared to these time points after surgery.
Conclusion: Urinary function and bother were worst after laparoscopic radical prostatectomy, especially in the early postoperative phase, whereas urinary obstructive/irritative symptom, bowel function and bother were worse after permanent prostate brachytherapy. These findings are useful and informative for the treatment of patients with prostate cancer.
abstract_id: PUBMED:24912809
Pitfalls of robot-assisted radical prostatectomy: a comparison of positive surgical margins between robotic and laparoscopic surgery. Objectives: To compare the surgical outcomes of laparoscopic radical prostatectomy and robot-assisted radical prostatectomy, including the frequency and location of positive surgical margins.
Methods: The study cohort comprised 708 consecutive male patients with clinically localized prostate cancer who underwent laparoscopic radical prostatectomy (n = 551) or robot-assisted radical prostatectomy (n = 157) between January 1999 and September 2012. Operative time, estimated blood loss, complications, and positive surgical margins frequency were compared between laparoscopic radical prostatectomy and robot-assisted radical prostatectomy.
Results: There were no significant differences in age or body mass index between the laparoscopic radical prostatectomy and robot-assisted radical prostatectomy patients. Prostate-specific antigen levels, Gleason sum and clinical stage of the robot-assisted radical prostatectomy patients were significantly higher than those of the laparoscopic radical prostatectomy patients. Robot-assisted radical prostatectomy patients suffered significantly less bleeding (P < 0.05). The overall frequency of positive surgical margins was 30.6% (n = 167; 225 sites) in the laparoscopic radical prostatectomy group and 27.5% (n = 42; 58 sites) in the robot-assisted radical prostatectomy group. In the laparoscopic radical prostatectomy group, positive surgical margins were detected in the apex (52.0%), anterior (5.3%), posterior (5.3%) and lateral regions (22.7%) of the prostate, as well as in the bladder neck (14.7%). In the robot-assisted radical prostatectomy patients, they were observed in the apex, anterior, posterior, and lateral regions of the prostate in 43.0%, 6.9%, 25.9% and 15.5% of patients, respectively, as well as in the bladder neck in 8.6% of patients.
Conclusions: Positive surgical margin distributions after robot-assisted radical prostatectomy and laparoscopic radical prostatectomy are significantly different. The only disadvantage of robot-assisted radical prostatectomy is the lack of tactile feedback. Thus, the robotic surgeon needs to take this into account to minimize the risk of positive surgical margins.
abstract_id: PUBMED:17561162
Surgical outcomes for men undergoing laparoscopic radical prostatectomy after transurethral resection of the prostate. Purpose: We reviewed outcomes for men with a history of transurethral prostate resection who underwent laparoscopic radical prostatectomy for prostate cancer.
Materials And Methods: Between January 26, 1998 and December 2006, 3,061 men underwent laparoscopic radical prostatectomy at our institution. A retrospective review showed that 119 had a history of transurethral prostate resection. These men were compared to randomized matched controls with regard to operative and postoperative outcomes. The matching criteria used to randomly select patients were clinical stage, preoperative prostate specific antigen and biopsy Gleason score.
Results: Mean +/- SD age in the groups with and without transurethral prostate resection was 66.2 +/- 5.6 and 60.7 +/- 7.0 years, respectively (p <0.01). Mean estimated blood loss, transfusion rate, pathological prostate volume and reoperation rate were statistically similar between the groups. Mean length of stay for the groups with and without transurethral prostate resection was 6.5 +/- 3.0 and 5.29 +/- 2.3 days, respectively (p <0.01). Mean operative time for the groups with and without transurethral prostate resection was 179 +/- 44 and 171 +/- 38 minutes, respectively (p = 0.02). Positive margins were seen in 21.8% and 12.6% of the patients with and without transurethral prostate resection, respectively (p = 0.02). A total of 64 complications were seen in patients with a history of transurethral prostate resection compared to 34 in those without such a history (p <0.01).
Conclusions: We report that patients with a history of transurethral prostate resection who undergo laparoscopic radical prostatectomy have worse outcomes with respect to operative time, length of stay, positive margin rate and overall complication rate. This subset of patients should be made aware of these potential risks before undergoing laparoscopic radical prostatectomy.
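The length-of-stay difference reported above can be checked directly from the published summary statistics with a two-sample t-test. This is a sketch under stated assumptions: both groups are taken as n = 119, and the paper's actual test may have differed.

```python
from scipy.stats import ttest_ind_from_stats

# Length of stay: 6.5 +/- 3.0 days (prior TURP) vs 5.29 +/- 2.3 days (controls).
t_stat, p_value = ttest_ind_from_stats(
    mean1=6.5, std1=3.0, nobs1=119,
    mean2=5.29, std2=2.3, nobs2=119,
)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.01, consistent with the report
```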
abstract_id: PUBMED:24791781
Health-related quality of life in the first year after laparoscopic radical prostatectomy compared with open radical prostatectomy. Objective: To assess health-related quality of life in the first year after laparoscopic radical prostatectomy compared with that after open radical prostatectomy.
Methods: The subjects were 105 consecutive patients with localized prostate cancer treated with laparoscopic radical prostatectomy between January 2011 and June 2012. Health-related quality of life was evaluated using the International Prostate Symptom Score, Medical Outcome Study 8-Items Short Form Health Survey (SF-8) and Expanded Prostate Cancer Index Composite at baseline and 1, 3, 6 and 12 months after surgery. Comparisons were made with data for 107 consecutive patients treated with open radical prostatectomy between October 2005 and July 2007.
Results: The International Prostate Symptom Score change was similar in each group. The laparoscopic radical prostatectomy group had a better baseline Medical Outcome Study 8-Items Short Form Health Survey mental component summary score and a better Medical Outcome Study 8-Items Short Form Health Survey physical component summary score at 1 month after surgery. In Expanded Prostate Cancer Index Composite, obstructive/irritative symptoms did not differ between the groups, but urinary incontinence was worse until 12 months after surgery and particularly severe after 1 month in the laparoscopic radical prostatectomy group. The rate of severe urinary incontinence was much higher in the laparoscopic radical prostatectomy group in the early period. Urinary bother was worse in the laparoscopic radical prostatectomy group at 1 and 3 months, but did not differ between the groups thereafter. Urinary function and bother were good after nerve sparing procedures and did not differ between the groups. Bowel and sexual function and bother were similar in the two groups.
Conclusion: Urinary function in the first year after laparoscopic radical prostatectomy is worse than that after open radical prostatectomy.
abstract_id: PUBMED:17033208
Laparoscopic radical prostatectomy in patients following transurethral resection of the prostate. Objectives: Previous transurethral resection of the prostate (TURP) was reported to impose difficulties during open radical prostatectomy. We describe our experience in laparoscopic radical prostatectomy (LRP) following transurethral resection of the prostate.
Patients And Methods: The series included 35 patients: 22 patients underwent transperitoneal LRP (tpLRP) and 13 underwent extraperitoneal LRP (epLRP). The minimal interval between TURP and laparoscopy was 3 months. Patients' charts were reviewed for their preoperative characteristics, intraoperative difficulties and complications, and outcome.
Results: Patients' mean age was 67.5+/-4.4 years. Twelve patients were cT1a,b and 23 patients were cT1c/T2. Twenty-two patients underwent tpLRP and 13 underwent epLRP. No statistical difference was found between the preoperative characteristics and the pathological results of cT1a,b vs. T1c/cT2 patients, or tpLRP vs. epLRP patients. Thirty-three procedures were completed laparoscopically and 2 were converted to open surgery. Perioperative complications included two leaking anastomoses, prolonged lymph drainage in 1 case, atelectasis (n=1) and duodenal ulcer (n=1). Twelve positive margins were noted, half of them in pT2 tumors. The mean follow-up was 28.5 months. Twenty-five of 35 patients had more than 12 months of follow-up. Among them, 19 patients (76%) were completely continent and 6 (24%) reported mild stress incontinence.
Conclusions: Although LRP following TURP is sometimes more technically difficult, simple modifications in the operative strategy help facilitate surgery. LRP following TURP favorably compares to open radical prostatectomy after TURP and laparoscopy in non-TURP patients.
abstract_id: PUBMED:26649093
Laparoscopic radical prostatectomy and resection of rectum performed together: first experience. Introduction: Laparoscopy is an increasingly used approach in the surgical treatment of rectal cancer and prostate cancer. The anatomical proximity of the two organs is the main reason to consider performing both procedures simultaneously.
Aim: To present our first experience of laparoscopic rectal resection and radical prostatectomy, performed simultaneously, in 3 patients.
Material And Methods: The first patient was diagnosed with locally advanced rectal cancer and tumor infiltration of the prostate and seminal vesicles. The other 2 patients were diagnosed with tumor duplication. Surgery in the first patient started with laparoscopic prostatectomy, except for the division of the prostate from the rectal wall. The next steps were resection of the rectum, extralevator amputation of the rectum, and vesicourethral anastomosis. In the other patients, resection of the rectum, followed by radical prostatectomy, was performed.
Results: The median follow-up was 12 months. The median operation time was 4 h 40 min, with blood loss of 300 ml. The operations and postoperative course were without incident in the case of 2 patients. However, 1 patient had stercoral peritonitis and a vesicorectal fistula in the early postoperative stage. Sigmoidostomy and postponed ureteroileal conduit were carried out. All patients were in oncologic remission.
Conclusions: Combined laparoscopic rectal resection and radical prostatectomy is a viable option for selected patients with locally advanced rectal cancer or tumor duplication. The procedures were completed without complications in 2 out of 3 patients.
abstract_id: PUBMED:36187528
Versatility of 3D laparoscopy for radical prostatectomy: A single tertiary cancer center experience. Objective: The objective of this study is to compare our institutional outcomes of 3D laparoscopic radical prostatectomy with those of open radical prostatectomy in terms of functional and oncological outcomes.
Methods: This is a retrospective study of patients who underwent radical prostatectomy during the period January 2016 to September 2019 at our institute. Out of 49 patients who underwent radical prostatectomy, 23 were done by open approach and 25 were operated by 3D laparoscopy. One patient was lost to follow-up and was excluded from the study. Data were collected from medical records, and functional evaluation was done by telephonic interview. Data analysis was done by SPSS software to calculate overall and disease-free survival.
Results: Patients in the laparoscopic arm had less blood loss, postoperative pain, hospital stay and wound-related morbidity, although they had a longer operating time. Functional outcomes in terms of erectile dysfunction and incontinence were almost similar in the open and 3D laparoscopic approaches. No statistically significant difference was observed for overall survival or disease-free survival. All shortcomings of the laparoscopic arm improved as our experience with 3D laparoscopic prostatectomy increased. The outcomes of 3D laparoscopic radical prostatectomy were comparable to previously published data on robotic radical prostatectomy.
Conclusions: 3D laparoscopic radical prostatectomy (LRP) is a feasible technique with similar oncological and functional outcomes and better perioperative outcomes compared with open radical prostatectomy (ORP). Being cost-effective with comparable outcomes, it is a suitable alternative to robotic radical prostatectomy (RRP) in resource-limited settings.
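To make the survival comparison concrete: overall and disease-free survival of the kind reported above are typically estimated with Kaplan-Meier curves and compared with a log-rank test. A minimal sketch follows, using the third-party lifelines library and invented toy follow-up data; the study's individual patient data are not available from the abstract (the authors used SPSS).

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented follow-up times (months) and death indicators for the two arms;
# the abstract gives no individual patient data.
t_open = np.array([10, 24, 36, 40, 44, 48])
e_open = np.array([1, 0, 1, 0, 0, 0])
t_lap = np.array([12, 20, 34, 42, 46, 48])
e_lap = np.array([0, 1, 0, 0, 0, 0])

# Kaplan-Meier estimate of overall survival in one arm
kmf = KaplanMeierFitter()
kmf.fit(t_open, event_observed=e_open, label="open RP")

# Log-rank comparison of the two arms
res = logrank_test(t_open, t_lap, event_observed_A=e_open, event_observed_B=e_lap)
print(f"log-rank p = {res.p_value:.3f}")  # a large p mirrors 'no significant difference'
```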
Answer: Laparoscopic radical prostatectomy (LRP) after transurethral resection of the prostate (TURP) is considered appropriate based on the findings from the provided abstracts. Several studies have evaluated the surgical outcomes, oncological results, and functional outcomes of LRP in patients with a history of TURP.
One study found that LRP after TURP did not have higher morbidity than LRP after transrectal ultrasound-guided biopsy (TRUSBX) and did not compromise the free margin rate, suggesting that LRP is a suitable option for prostate cancer patients with previous TURP (PUBMED:17048423). Another study reported that using a catheter balloon inflated in the prostatic urethra during LRP in patients with previous TURP seems to facilitate the procedure, ultimately reducing the rate of perioperative complications (PUBMED:26177871).
However, it is important to note that some studies have reported that patients with a history of TURP who undergo LRP may experience worse outcomes with respect to operative time, length of stay, positive margin rate, and overall complication rate compared to those without a history of TURP (PUBMED:17561162). Despite these challenges, simple modifications in the operative strategy can help facilitate surgery, and LRP following TURP favorably compares to open radical prostatectomy after TURP and laparoscopy in non-TURP patients (PUBMED:17033208).
In summary, while LRP after TURP may present some technical difficulties and potentially worse outcomes in certain aspects, it is still considered an appropriate and viable surgical option for patients with a history of TURP. The appropriateness of LRP in this context is supported by its ability to achieve similar oncological outcomes and the potential for reduced morbidity when compared to other surgical approaches. |
Instruction: Is incidental Meckel's diverticulum resected safely?
Abstracts:
abstract_id: PUBMED:18815602
Is incidental Meckel's diverticulum resected safely? Aim: Meckel's diverticulum (MD) is the most common congenital anomaly of the small intestine. The majority of MD cases are discovered incidentally. On the other hand, there is disagreement about the management of incidentally discovered asymptomatic MD. The aim of the study was to compare the clinicopathologic characteristics of incidentally found and symptomatic cases of MD, and to compare morbidity and mortality in symptomatic and asymptomatic patients.
Patients And Methods: Records of patients whose MD was resected at our institution between 1989 and 2004 were reviewed and 76 patients were found. The patients were divided into two groups. The incidental group included patients in whom the MDs were found incidentally during the course of laparotomy performed for reasons not related to the diverticular complications. The symptomatic group included patients who presented with complications related to the MDs. We compared the clinicopathologic characteristics of the patients between the two groups.
Results: The incidental group included 40 patients (34 males) and the symptomatic group included 36 patients (30 males). There was no significant difference between the two groups with respect to age, gender, APACHE scores, postoperative complications, and hospital stay. There were two deaths in the symptomatic group. There was a significant correlation between operative mortality and APACHE II scores.
Conclusions: Resection of incidentally found MD is not associated with increased operative morbidity or mortality.
abstract_id: PUBMED:15593467
Meckel's diverticulum: comparison of incidental and symptomatic cases. Although Meckel's diverticulum is the commonest congenital gastrointestinal anomaly, there is still debate concerning the proper management of asymptomatic diverticula. Records of all patients whose Meckel's diverticulum was resected at our hospitals between 1990 and 2002 were reviewed. Clinical characteristics, mode of presentations, and management for all patients were analyzed. Meckel's diverticula were resected in 68 patients. Patients were divided into two groups: the incidental group included 40 patients (24 males) in whom the diagnosis of diverticula was incidental. The symptomatic group included 28 patients (20 males) who presented with diverticulum-related complications. Preoperative diagnosis was possible in only four cases. In four patients from the symptomatic group, Meckel's diverticula were found and left untouched during a previous laparotomy. There was no significant difference between the two groups with respect to gender (p = 0.48). Patients in the symptomatic group were significantly younger than patients in the incidental group (p = 0.002). The diverticula in the symptomatic group tended to be longer (p = 0.001) with a narrower base (p = 0.001) than the diverticula in the incidental group. A diameter of ≤2 cm was significantly associated with more complications (p = 0.01). Heterotopic tissue was significantly more frequent in the symptomatic group than the incidental group (p = 0.01). There was no significant difference in the morbidity rate between the two groups (p = 0.71), and there was no mortality in either group. Preoperative diagnosis of Meckel's diverticulum is difficult and should be kept in mind in cases of acute abdomen. Resection of incidentally found diverticula is not associated with increased operative morbidity or mortality.
abstract_id: PUBMED:32923303
Does an Incidental Meckel's Diverticulum Warrant Resection? Meckel's diverticulum (MD) is the most common gastrointestinal malformation. The management of symptomatic Meckel's diverticulum has been decidedly resection; the management of incidental Meckel's diverticulum, by comparison, has been fraught with controversy. For this systematic literature review, PubMed, PubMed Central (PMC), and MEDLINE were searched. The search phrase utilized was "Meckel Diverticulum/Surgery [Mesh]" and resection incidental. The search was completed on July 18, 2020 and was limited to 1980 until the day of the search. Searches resulted in 62 initial articles on PubMed. On initial screening, 23 of these articles met the criteria. The references of these 23 articles were screened for relevant studies, yielding a total of 31 studies, all of which were assessed for quality. Four articles made a recommendation for no resection. Twelve studies made a recommendation for resection. Ten studies concluded that resection should be completed in the presence of risk factors. Lastly, five studies made no clear recommendation. In recent literature, there has been a shift towards resection for all or in those with high-risk factors. In the future, it will be necessary for researchers to determine if resection is recommended for all patients with incidental MD or only in those with risk factors. If only in those with risk factors, it will be important that research is completed to create evidence-based guidelines to support the risk factors.
abstract_id: PUBMED:36072199
Incidental Meckel's Diverticulum With Neuroendocrine Tumor. Meckel's diverticulum (MD), the most common congenital disease of the small bowel, commonly presents with symptoms of painless rectal bleeding and intestinal obstruction. The treatment of symptomatic MD involves resection of the lesion regardless of patient age; however, the excision of asymptomatic and incidentally identified MDs in adults remains controversial. On one hand, the complications arising from MDs decrease with age, leading to a lower benefit-to-risk ratio for prophylactic resection. On the other hand, malignancies, such as neuroendocrine tumors, may arise over time from untreated MDs. This can lead to poor prognostic complications, such as liver or lymph node metastases. In this case report, we describe an incidental Meckel's diverticulum discovered during an exploratory laparotomy for acute sigmoid diverticulitis in an adult male. Later biopsy findings showed the lesion to contain a grade 1 neuroendocrine tumor. Based on our literature review findings, resection of the incidental Meckel's diverticulum was a reasonable approach given the low complication risks of the procedure and the possibility of malignant transformation and progression.
abstract_id: PUBMED:28769478
MECKEL'S DIVERTICULUM - REVISITED. Twenty-five cases of Meckel's diverticulum were studied between 1985-1995. Eight of these were symptomatic and in the remaining 17 it was an incidental finding. The symptomatic patients presented with intestinal obstruction (5 cases), perforated peritonitis (2 cases) and intussusception (1 case). All cases of acute appendicitis were also subjected to a search for Meckel's diverticulum. Of the 25 Meckel's diverticula encountered, 22 were resected and in 3 patients it was left in situ. Both patients with perforated Meckel's diverticulum showed ectopic gastric mucosa. Complications occurred only after surgery for symptomatic Meckel's diverticulum. All patients undergoing incidental diverticulectomy had a smooth and uncomplicated recovery.
abstract_id: PUBMED:31930430
The Many Faces of Meckel's Diverticulum: Update on Management in Incidental and Symptomatic Patients. Purpose Of Review: Meckel's diverticulum may be detected incidentally or present with symptoms from infancy and to old age. The presentation may be acute, with several complications associated with the condition. We aim to review the many faces with which a Meckel's diverticulum may present, either symptomatically or as an incidental finding.
Recent Findings: Due to its rarity, recent studies mainly include small retrospective series or case reports. Emphasis in the recent literature is on clinical presentation, the pathology of symptomatic cases, management options and risks of neoplasia. Symptoms are mainly caused by obstruction, bleeding or diverticulitis. Cross-sectional imaging is unspecific, although capsule endoscopy is reported of use in case series. Meckel's diverticulum presents with clinical features that are age-specific. Complicated Meckel's diverticulum is treated by resection. Optimal treatment of incidental cases remains debated. Meckel's diverticulum usually stays asymptomatic, and decision-making for management should be based on patient-specific factors. Use of minimal invasive techniques mandates refinement of the optimal treatment.
abstract_id: PUBMED:17368293
Calcified Meckel's diverticulum: an unusual incidental finding during laparoscopy. During a cholecystectomy for stones, an unusual lesion mimicking a neoplasm was found in a 40-year-old man. The lesion was resected using an endoscopic stapler, and the histologic diagnosis was Meckel's diverticulum with chronic inflammation and calcification of the diverticular wall. It is possible that the diverticulum had been responsible for abdominal pain in this patient, in whom it had an atypical appearance.
abstract_id: PUBMED:34774270
Management of Incidentalomas. Incidental findings are common in the evaluation of surgical patients. Understanding the appropriate assessment and management of these frequent occurrences is important for the provision of comprehensive quality care. This review details the epidemiology, considerations, and recommendations for management of common incidental manifestations in surgical patients, including Meckel diverticulum, adrenal incidentaloma, thyroid nodule, solitary pulmonary nodule, small bowel intussusception, gallstones, and incidental appendectomy.
abstract_id: PUBMED:15902057
Surgical management of incidental Meckel's diverticulum: the necessity of obtaining informed consent. The surgical approach to an incidentally found Meckel's diverticulum (MD) is still a source of debate: some support systematic search and surgical resection, whereas others suggest leaving the asymptomatic diverticulum in situ. A wide literature review and some malpractice claims related to resection of diverticular formations (2 cases are described herein) lead us to deem that removal of an asymptomatic MD may be an excess of surgical indication for an entity that cannot in itself be defined as pathological; in any case, informed consent must be obtained before incidental resection. Consent to the removal of an MD must always be requested for laparotomies in which the surgeon foresees the possibility of removing a diverticulum, should one be found. Finally, a simple informed consent form covering an incidentally found MD, to be submitted to the patient, is proposed.
abstract_id: PUBMED:24884768
Incidental finding of carcinoid tumor on Meckel's diverticulum: case report and literature review, should prophylactic resection be recommended? Meckel's diverticulum (MD) is the most common congenital anomaly of the gastrointestinal tract and is caused by incomplete obliteration of the vitelline duct during intrauterine life. MD affects less than 2% of the population. In most cases, MD is asymptomatic and the estimated average complication risk of MD carriers, which is inversely proportional to age, ranges between 2% and 4%. The most common MD-related complications are gastrointestinal bleeding, intestinal obstruction and acute phlogosis. Excision is mandatory in the case of symptomatic diverticula regardless of age, while surgical treatment for asymptomatic diverticula remains controversial. According to the majority of studies, the incidental finding of MD in children is an indication for surgical resection, while the management of adults is not yet unanimous. In this case report, we describe the prophylactic resection of an incidentally detected MD, which led to the removal of an occult mucosal carcinoid tumor. In the literature, the association of MD and carcinoid tumor is reported as a rare finding. Even though experts are divided on the strategy for an incidental finding of MD in adult patients during surgery performed for other reasons, we recommend prophylactic excision in order to avoid any further risk.
Answer: Yes, incidental Meckel's diverticulum can be resected safely. Studies have shown that resection of incidentally found Meckel's diverticulum is not associated with increased operative morbidity or mortality. A comparison of clinicopathologic characteristics between incidentally found and symptomatic cases of Meckel's diverticulum revealed no significant difference in postoperative complications and hospital stay between the two groups, and there were no deaths in the incidental group (PUBMED:18815602). Another study also reported no significant difference in morbidity rate between incidental and symptomatic groups, and there was no mortality in either group (PUBMED:15593467). Furthermore, patients undergoing incidental diverticulectomy had a smooth and uncomplicated recovery (PUBMED:28769478).
The management of incidental Meckel's diverticulum has been debated, but recent literature suggests a shift towards resection for all or in those with high-risk factors (PUBMED:32923303). Resection of an incidental Meckel's diverticulum is considered a reasonable approach given the low complication risks of the procedure and the possibility of malignant transformation and progression, as evidenced by a case report of an incidental Meckel's diverticulum with a neuroendocrine tumor (PUBMED:36072199).
Overall, the evidence suggests that incidental Meckel's diverticulum can be resected safely, and the decision-making for management should be based on patient-specific factors (PUBMED:31930430). However, it is important to obtain informed consent before the incidental resection, as the removal of an asymptomatic Meckel's diverticulum may not always be necessary (PUBMED:15902057). |
Instruction: Use of Ethanol in the Trans-Arterial Lipiodol Embolization (TAELE) of Intermediated-Stage HCC: Is This Safer than Conventional Trans-Arterial Chemo-Embolization (c-TACE)?
Abstracts:
abstract_id: PUBMED:26110810
Use of Ethanol in the Trans-Arterial Lipiodol Embolization (TAELE) of Intermediated-Stage HCC: Is This Safer than Conventional Trans-Arterial Chemo-Embolization (c-TACE)? Purpose: To evaluate safety and efficacy of Trans-Arterial Ethanol-Lipiodol Embolization (TAELE) compared with conventional Trans-Arterial Chemo-Embolization (cTACE) in the treatment of small intermediate-HCC (BCLC-Stage B).
Materials And Methods: A random sample of 87 patients (37.93% male; 62.07% female; age range, 36-86 years) with documented small intermediate-HCC and treated with TAELE (a 1:1 mixture of ethanol and Lipiodol) or cTACE (a mixture of 50 mg epirubicin and 5 cc Lipiodol) were retrospectively studied in an institutional review board approved protocol. The two procedures were compared with the χ2-test, χ2-test with Yates correction, McNemar's exact test, ANOVA test and log-rank test.
Results: TAELE and cTACE therapies were performed in 45 and 42 patients, respectively. Thirty days after the procedure, Multi-Detector Computed Tomography (MDCT) showed no significant difference in the number of patients with partial and complete response between the two groups (p-value = 0.958), according to mRECIST. In contrast, significant differences were found in tumor devascularization, lesion reduction and post-embolization syndrome occurrence (p-value = 0.0004, p-value = 0.0003 and p-value = 0.009, respectively). Similar survival was observed during the 36-month follow-up (p-value = 0.884).
Conclusion: Compared to cTACE, TAELE showed a better toxicity profile with similar 36-month survival and similar one-month anti-tumor effects, which makes it better tolerated by patients, especially in case of more than one treatment.
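As an illustration of the group comparison described above, the sketch below runs a chi-squared test on a 2x2 table of mRECIST outcomes. The counts are hypothetical, since the abstract reports only the pooled p-value; scipy is assumed to be available.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table of 30-day mRECIST outcomes; the abstract reports
# only the pooled p-value (0.958), not the per-group counts.
#                 complete  partial
table = [[20, 25],   # TAELE (n = 45)
         [19, 23]]   # cTACE (n = 42)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
```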
abstract_id: PUBMED:35421728
Trans-arterial therapy for Fibrolamellar carcinoma: A case report and literature review. Introduction: Fibrolamellar carcinoma (FLC) is a rare pathologically distinct primary liver cancer. Surgical resection is the only treatment associated with prolonged survival. Trans-arterial embolization (TAE), which is a recognised treatment for hepatocellular carcinoma has been used to treat FLC. We present a case and performed a literature review of patients with FLC treated with TAE.
Case Presentation: We present a 19-year-old female with a large, potentially resectable FLC which was initially treated with trans-arterial chemo-embolization (TACE) with drug-eluting beads. The TACE was followed by surgical resection. Histology confirmed tumour necrosis related to the previous TACE.
Discussion & Literature Review: We identified seven case reports and one case series of TAE for FLC. TAE was either used as a neo-adjuvant therapy to facilitate subsequent tumour resection or as a palliative treatment modality. We propose an algorithm for the treatment of FLC that includes TAE.
Conclusion: The rarity of FLC and the paucity of data precludes establishing clear evidence-based standards of care. We propose an algorithm for the treatment of FLC. The establishment of an international registry may facilitate the collection of better quality evidence.
abstract_id: PUBMED:25755602
Role of Transcatheter Intra-arterial Therapies for Hepatocellular Carcinoma. Transcatheter intra-arterial therapies play a vital role in the treatment of HCC due to the unique tumor vasculature. Evolution of techniques and newer efficacious modalities of tumor destruction have made these approaches popular. Various types of intra-arterial therapeutic options are currently available. These comprise bland embolization, trans-arterial chemotherapy, trans-arterial chemo-embolization with or without drug-eluting beads, and trans-arterial radio-embolization, which are elaborated in this review.
abstract_id: PUBMED:37663557
A presumed pathological complete response of ruptured hepatocellular carcinoma showing retained intratumoral blood flow after trans-arterial chemo-embolization. A 74-year-old woman with abdominal pain emergently visited our hospital in a shock status. After hemodynamic stabilization with intravenous fluid/albumin administration and blood transfusion, imaging showed a presumed perihepatic blood collection and a large intrahepatic tumor. Angiography showed a tumor stain in the liver and no active leakage of the contrast medium from the tumor. These findings led to the diagnosis of ruptured hepatocellular carcinoma (HCC) without active bleeding. The patient, therefore, was treated not with trans-arterial embolization (TAE) but with trans-arterial chemo-embolization (TACE) using 10 mg of epirubicin. Post-TACE images showed marked tumor shrinkage with retained intratumoral blood flow. Under the tentative diagnosis of a shrunken but viable HCC, the patient underwent laparoscopic segmentectomy for the HCC. Postoperative pathological study showed coagulative and lytic necrosis, intratumoral bleeding, hemosiderin deposits, massive collagen fiber, infiltration of inflammatory cells, and no viable cancer cells in the resected tumor. These pathological findings strongly suggested that the chemotherapeutic effect of epirubicin had brought about complete cancer cell death in the area not affected by TAE. Physicians should treat patients with ruptured HCC, especially those with stable hemodynamics, not with TAE but with TACE for a better clinical outcome. Oncologists should further note that a complete pathological response of HCC can be observed even in cases of retained intratumoral blood flow.
abstract_id: PUBMED:26604044
Refining prognosis after trans-arterial chemo-embolization for hepatocellular carcinoma. Background & Aims: To develop an individual prognostic calculator for patients with unresectable hepatocellular carcinoma (HCC) undergoing trans-arterial chemo-embolization (TACE).
Methods: Data from two prospective databases, regarding 361 patients who received TACE as first-line therapy (2000-2012), were reviewed in order to refine available prognostic tools and to develop a continuous individual web-based prognostic calculator. Patients with neoplastic portal vein invasion were excluded from the analysis. The model was built following a bootstrap resampling procedure aimed at identifying prognostic predictors and by carrying out a 10-fold cross-validation for accuracy assessment by means of Harrell's c-statistic.
Results: Number of tumours, serum albumin, serum total bilirubin, alpha-foetoprotein and maximum tumour size were selected as predictors of mortality following TACE with the bootstrap resampling technique. In the 10-fold cross-validation cohort, the model showed a Harrell's c-statistic of 0.649 (95% CI: 0.610-0.688), significantly higher than that of the Hepatoma Arterial-embolization Prognostic (HAP) score (0.589; 95% CI: 0.552-0.626; P = 0.001) and of the modified HAP-II score (0.611; 95% CI: 0.572-0.650; P = 0.005). Akaike's information criterion for the model was 2520; for the mHAP-II it was 2544 and for the HAP score it was 2554. A web-based calculator was developed for quick consultation at http://www.livercancer.eu/mhap3.html.
Conclusions: The proposed individual prognostic model can provide an accurate prognostic prediction for each patient with unresectable HCC following treatment with TACE without class stratification. The availability of an online calculator can help physicians in daily clinical practice.
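For readers unfamiliar with the accuracy measure used here: Harrell's c-statistic is the fraction of usable patient pairs in which the patient with the higher predicted risk dies first. A minimal hand-rolled sketch on invented toy data (not the study's) follows.

```python
import numpy as np

def harrell_c(time, event, risk):
    """Harrell's concordance index for right-censored survival data.

    A pair (i, j) is usable when the subject with the shorter follow-up
    had an observed event; it is concordant when that subject also has
    the higher predicted risk. Risk ties count as half-concordant.
    """
    concordant, usable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i] == 1:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / usable

# Toy data only: follow-up months, death indicator, model-predicted risk.
time = np.array([6, 12, 18, 24, 30, 36])
event = np.array([1, 1, 0, 1, 0, 0])
risk = np.array([0.9, 0.4, 0.5, 0.6, 0.3, 0.2])
print(f"c = {harrell_c(time, event, risk):.3f}")  # 0.818 here
```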
abstract_id: PUBMED:33672012
Comparison of Trans-Arterial Chemoembolization and Bland Embolization for the Treatment of Hepatocellular Carcinoma: A Propensity Score Analysis. No definitive conclusion has been reached about the role of chemotherapy as an adjunct to embolization in the treatment of hepatocellular carcinoma (HCC). We aim to compare the radiological response, toxicity and long-term outcomes of patients with hepatocellular carcinoma (HCC) treated by trans-arterial bland embolization (TAE) versus trans-arterial chemoembolization (TACE). We retrospectively included 265 patients with HCC treated by a first session of TACE or TAE in two centers. Clinical and biological features were recorded before treatment, and radiological response was assessed after the first treatment using modified Response Evaluation Criteria in Solid Tumors (mRECIST) criteria. Correlation between the treatment and overall, progression-free and transplantation-free survival was performed after adjustment using propensity score matching: 86 patients were treated by bland embolization and 179 patients by TACE, including 44 patients with drug-eluting beads and 135 with lipiodol TACE; 89.8% of patients were male, with a median age of 65 years. Cirrhosis was present in 90.9% of patients, with a Child-Pugh score of A in 84% of cases. After adjustment, no difference in the rate of adverse events, including liver failure, was observed between the two treatments. TACE was associated with a significant increase in complete radiological response (odds ratio (OR) = 8.5 (95% confidence interval (CI): 2.8-25.4)) but not in the overall response rate (OR = 2.2 (95% CI = 0.8-5.8)). No difference in terms of overall survival (p = 0.3905), progression-free survival (p = 0.4478) and transplantation-free survival (p = 0.9020) was observed between TACE and TAE. TACE was associated with a higher rate of complete radiological response but without any impact on overall radiological response, progression-free survival or overall survival compared to TAE.
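A generic sketch of the propensity-score adjustment named above: a logistic model estimates each patient's probability of receiving TACE from baseline covariates, and treated patients are matched to the nearest-scoring TAE controls. The covariates, the simulated treatment assignment, and the 1:1 greedy nearest-neighbour rule are all assumptions for illustration; the study's exact matching procedure is not described in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 265                                   # total cohort size in the study
X = rng.normal(size=(n, 3))               # stand-ins for baseline covariates
treated = rng.integers(0, 2, size=n)      # 1 = TACE, 0 = TAE (simulated)

# Propensity score: modelled probability of receiving TACE given covariates
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 1:1 greedy nearest-neighbour matching on the score, without replacement
controls = list(np.flatnonzero(treated == 0))
pairs = []
for i in np.flatnonzero(treated == 1):
    if not controls:
        break
    j = min(controls, key=lambda k: abs(ps[i] - ps[k]))
    pairs.append((i, j))
    controls.remove(j)
print(f"{len(pairs)} matched TACE/TAE pairs")
```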
abstract_id: PUBMED:38189812
The predictive value of sarcopenia and myosteatosis in trans-arterial (chemo)-embolization treated HCC patients. Background: We conducted a meta-analysis to provide evidence-based results for the predictive values of sarcopenia, skeletal muscle index, psoas muscle index and the myosteatosis regarding the impact of survival outcomes and tumor response in patients treated by trans-arterial (chemo)-embolization (TAE/TACE), thereby optimizing therapeutic strategies and maximizing clinical benefits for hepatocellular carcinoma patients.
Methods: Eligible studies were retrieved from PubMed, the Cochrane Library, EMBASE, and Google Scholar before June 19, 2023. We investigated the relationships between sarcopenia, SMI, PMI, myosteatosis, and the overall survival of TAE/TACE-treated hepatocellular carcinoma patients using pooled data.
Results: A total of 167 studies were collected and 12 studies were finally included for analysis. The meta-analysis showed that sarcopenia (HR: 1.46, 95% CI: 1.30-1.64, p < 0.001), skeletal muscle index (HR: 1.48, 95% CI: 1.29-1.69, p < 0.001), and psoas muscle index (HR: 1.45, 95% CI: 1.19-1.77, p < 0.001) were significantly related to a shorter OS in hepatocellular carcinoma patients treated by TAE/TACE. Sarcopenia significantly contributed to a lower objective response rate in TAE/TACE-treated hepatocellular carcinoma patients (OR: 0.80, 95% CI: 0.65-0.98, p = 0.032). However, there was no significant association between myosteatosis and overall survival (HR: 1.29, 95% CI: 0.74-2.25, p = 0.366). Sensitivity analyses supported the stability and reliability of these conclusions.
Conclusion: Sarcopenia, skeletal muscle index and psoas muscle index are significant prognostic predictors for TAE/TACE-treated hepatocellular carcinoma patients, whereas myosteatosis does not demonstrate a prognostic impact on the overall survival of TAE/TACE-treated hepatocellular carcinoma patients.
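As a sketch of how study-level hazard ratios such as those above are pooled, the code below applies fixed-effect inverse-variance weighting on the log-HR scale, back-calculating each study's standard error from its 95% CI. The per-study inputs are made up for illustration; the meta-analysis itself pooled 12 studies and may have used a random-effects model instead.

```python
import numpy as np

def pool_fixed_effect(hr, lo, hi):
    """Inverse-variance pooling of hazard ratios on the log scale.

    Each study's standard error is back-calculated from its 95% CI:
    se = (ln(hi) - ln(lo)) / (2 * 1.96).
    """
    y = np.log(hr)
    se = (np.log(hi) - np.log(lo)) / (2 * 1.96)
    w = 1.0 / se**2
    est = np.sum(w * y) / np.sum(w)
    est_se = np.sqrt(1.0 / np.sum(w))
    return np.exp(est), np.exp(est - 1.96 * est_se), np.exp(est + 1.96 * est_se)

# Made-up per-study HRs for sarcopenia (inputs are illustrative only)
hr = np.array([1.30, 1.60, 1.50])
lo = np.array([1.10, 1.20, 1.20])
hi = np.array([1.55, 2.10, 1.90])
pooled, ci_lo, ci_hi = pool_fixed_effect(hr, lo, hi)
print(f"pooled HR = {pooled:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")
```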
abstract_id: PUBMED:24512125
Trans-arterial chemo-embolization is safe and effective for elderly advanced hepatocellular carcinoma patients: results from an international database. Objective: Hepatocellular carcinoma (HCC) incidence among elderly patients is increasing. Trans-arterial chemo-embolization (TACE) prolongs survival in selected HCC patients. The safety and efficacy of TACE in elderly patients has not been extensively studied. The objective of this study was to assess the safety and efficacy of TACE in elderly patients (older than 75) with HCC.
Design: Combined HCC registries (Spain, Italy, China and Israel) and cohort design analysis of patients who underwent TACE for HCC.
Results: Five hundred and forty-eight patients diagnosed and treated between 1988 and 2010 were included in the analysis (China 197, Italy 155, Israel 102, and Spain 94). There were 120 patients (22%) older than 75 years and 47 patients (8.6%) older than 80. Median (95% CI) survival estimates were 23 (17-28), 21 (17-26) and 19 (15-23) months (P=0.14) among patients aged younger than 65, 65-75 and older than 75, respectively. An age above 75 years at diagnosis was not associated with worse prognosis, with a hazard ratio of 1.05 (95% CI 0.75-1.5), controlling for disease stage, sex, diagnosis year, HBV status and stratifying per database. No differences in complication rates were found between the age groups.
Conclusions: TACE is safe for patients older than 75 years. Results were similar over different eras and geographical locations. Though selection bias is inherent, the results suggest overall adequate selection of patients, given the similar outcomes among the different age groups.
abstract_id: PUBMED:28770315
Parameters for Stable Water-in-Oil Lipiodol Emulsion for Liver Trans-Arterial Chemo-Embolization. Purpose: Water-in-oil type and stability are important properties of Lipiodol emulsions used in conventional trans-arterial chemo-embolization. Our purpose is to evaluate the influence of 3 technical parameters on those properties.
Materials And Methods: The Lipiodol emulsions have been formulated by repetitive back-and-forth pumping of two 10-ml syringes through a 3-way stopcock. Three parameters were compared: Lipiodol/doxorubicin ratio (2/1 vs. 3/1), doxorubicin concentration (10 vs. 20 mg/ml) and speed of incorporation of doxorubicin in Lipiodol (bolus vs. incremental vs. continuous). The percentage of water-in-oil emulsion obtained and the duration until complete coalescence (stability) for water-in-oil emulsions were, respectively, evaluated with the drop-test and static light scattering technique (Turbiscan).
Results: Among the 48 emulsions formulated, 32 emulsions (67%) were water-in-oil. The percentage of water-in-oil emulsions obtained was significantly higher for incremental (94%) and for continuous (100%) injections compared to bolus injection (6%) of doxorubicin. Emulsion type was influenced neither by the Lipiodol/doxorubicin ratio nor by the doxorubicin concentration. The mean stability of water-in-oil emulsions was 215 ± 257 min. Emulsion stability was significantly longer when formulated using continuous compared to incremental injection (326 ± 309 vs. 96 ± 101 min, p = 0.018) and using a 3/1 compared to a 2/1 ratio of Lipiodol/doxorubicin (372 ± 276 vs. 47 ± 43 min, p < 0.0001). Stability was not influenced by the doxorubicin concentration.
Conclusion: Continuous and incremental injection of doxorubicin into the Lipiodol results in a highly predictable water-in-oil emulsion type and a significant increase in stability compared to bolus injection. A higher Lipiodol/doxorubicin ratio is also a critical parameter for emulsion stability.
abstract_id: PUBMED:28869805
Pre-operative trans-catheter arterial chemo-embolization increases hepatic artery thrombosis after liver transplantation - a retrospective study. Little is known about nonsurgical risk factors for hepatic artery thrombosis (HAT) after liver transplantation (LT). We determined risk factors for HAT occurring within 90 days post-LT and analysed the effect of HAT on graft and patient survival. Donor and recipient demographics, surgery-related data and outcomes in transplants complicated by thrombosis (HAT+) and their matched controls (HAT-) were compared. Risk factors were assessed by univariate logistic regression. Median (IQR) values are given. A total of 25 HAT occurred among 1035 adult LT (1/1997-12/2014) and 50 controls were manually matched. Donor and recipient demographics were similar. Pre-LT trans-catheter arterial chemo-embolization (TACE) was more frequent in HAT+ (HAT+ 20% vs. HAT- 4%, P = 0.037). HAT+ had longer implantation [HAT+ 88 min (76-108) vs. HAT- 77 min (66-93), P = 0.028] and surgery times [HAT+ 6.25 h (5.18-7.47) vs. HAT- 5.25 h (4.33-6.5), P = 0.001]. Early graft dysfunction and sepsis were more frequent in HAT+, and hospitalization was longer. TACE had the greatest odds ratio in the unadjusted analysis (OR: 6, 95% CI: 1.07-33.53, P = 0.03). All but seven grafts were lost after HAT (HAT+ 72% vs. HAT- 36%, P = 0.003); however, patient survival was unaffected (HAT+ 79.8% vs. HAT- 76%, P = 0.75). LT candidates undergoing TACE are at risk of developing HAT early after transplant.
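The unadjusted odds ratio quoted above can be reproduced from the reported percentages. With TACE in 20% of 25 HAT+ transplants (5 patients) and 4% of 50 HAT- controls (2 patients), the cross-product ratio with Woolf's confidence interval recovers OR = 6 (1.07-33.53); note the 2x2 counts are reconstructed, not taken from the paper.

```python
import math

# 2x2 counts reconstructed from the reported percentages:
# TACE in 20% of 25 HAT+ (a = 5) and 4% of 50 HAT- (c = 2) transplants.
a, b = 5, 20    # HAT+: TACE yes / TACE no
c, d = 2, 48    # HAT-: TACE yes / TACE no

odds_ratio = (a * d) / (b * c)                     # 6.0
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)      # Woolf's log-OR standard error
ci_lo = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.1f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")
# -> OR = 6.0 (95% CI 1.07-33.53), matching the reported values
```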
Answer: The study reported in abstract PUBMED:26110810 aimed to evaluate the safety and efficacy of Trans-Arterial Ethanol-Lipiodol Embolization (TAELE) compared with conventional Trans-Arterial Chemo-Embolization (cTACE) in the treatment of small intermediate-stage hepatocellular carcinoma (HCC). The study included a random sample of 87 patients who were treated with either TAELE, which is a mixture of Ethanol and Lipiodol, or cTACE, which is a mixture of Epirubicin and Lipiodol. The results showed no significant difference in the number of patients with partial and complete response between the two groups according to the modified Response Evaluation Criteria in Solid Tumors (mRECIST) one month after the procedure. However, significant differences were found in tumor-devascularization, lesion-reduction, and post-embolization syndrome occurrence, with TAELE showing better outcomes in these aspects. Additionally, similar survival rates were observed during a 36-month follow-up. The conclusion drawn from this study is that compared to cTACE, TAELE demonstrated a better toxicity profile with similar 36-month survival and similar one-month anti-tumor effects, making it better tolerated by patients, especially in the case of more than one treatment. Therefore, TAELE may be considered a safer option than cTACE for the treatment of small intermediate-stage HCC. |
Instruction: Should we use carbohydrate-deficient transferrin instead of gamma-glutamyltransferase for detecting problem drinkers?
Abstracts:
abstract_id: PUBMED:11106319
Should we use carbohydrate-deficient transferrin instead of gamma-glutamyltransferase for detecting problem drinkers? A systematic review and meta-analysis. Background: Carbohydrate-deficient transferrin (CDT) has been used as a test for excessive alcohol consumption in research, clinical, and medico-legal settings, but there remain conflicting data on its accuracy, with sensitivities ranging from <20% to 100%. We examined evidence of its benefit over a conventional and less expensive test, gamma-glutamyltransferase (GGT), and compared the accuracy of different CDT assay methods.
Methods: We performed a systematic review using summary ROC analysis of 110 studies prior to June 1998 on the use of CDT in the detection of alcohol dependence or hazardous/harmful alcohol use.
Results: We identified several potential sources of bias in studies. In studies examining CDT and GGT in the same subjects, subject characteristics were less likely to influence the comparison. In such paired studies, the original Pharmacia CDT assay was significantly more accurate than GGT, but the modified CDTect assay did not perform as well as the original and was not significantly better than GGT. The accuracy of the AXIS %CDT assay was statistically indistinguishable from modified CDTect. Several CDT assay methods appeared promising, in particular, liquid chromatography (chromatofocusing, HPLC, fast protein liquid chromatography) and isoelectric focusing, but there were insufficient paired studies from which to draw firm conclusions.
Conclusions: In studies published before June 1998, the results obtained with commercially available CDT assays were not significantly better than GGT as markers of excessive alcohol use in paired studies. Further high-quality studies comparing CDTect (modified) and other CDT assays with GGT in the same subjects are needed.
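For orientation, a summary ROC analysis of the kind named in this review's methods pools study-level sensitivity/specificity pairs; one simple variant (the Moses-Littenberg model) regresses the log diagnostic odds ratio on a threshold proxy. The sketch below uses invented per-study values, since the review's study-level data are not reproduced in the abstract, and this simple model is only one way such an analysis can be run.

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

# Invented per-study sensitivity/specificity pairs for a single assay;
# the review's actual study-level data are not given in the abstract.
sens = np.array([0.55, 0.70, 0.82, 0.60])
spec = np.array([0.95, 0.90, 0.85, 0.97])

D = logit(sens) - logit(1 - spec)   # log diagnostic odds ratio per study
S = logit(sens) + logit(1 - spec)   # proxy for the positivity threshold
b, a = np.polyfit(S, D, 1)          # Moses-Littenberg model: D = a + b*S

# Sensitivity of the summary curve at a fixed false-positive rate
fpr = 0.10
lt_tpr = (a + (1 + b) * logit(fpr)) / (1 - b)
print(f"summary sensitivity at 90% specificity: {1 / (1 + np.exp(-lt_tpr)):.2f}")
```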
abstract_id: PUBMED:8988962
Superiority of carbohydrate-deficient transferrin to gamma-glutamyltransferase in detecting relapse in alcoholism. Objective: The usefulness of carbohydrate-deficient transferrin is widely accepted in screening (male) population samples for heavy alcohol consumption, but its role in relapse detection is not convincingly established. The authors therefore compared the diagnostic value of carbohydrate-deficient transferrin with the commonly used gamma-glutamyltransferase in identifying relapsed alcoholics during outpatient aftercare.
Method: The patients were 101 male alcoholics who entered a 6-month rehabilitation program after hospital detoxification. Drinking status was assessed by means of self- and collateral reports obtained during regular contacts with the rehabilitation team; relapse was defined as consumption of any alcohol. Visits occurred weekly during month 1, biweekly during month 2, and every 4 weeks during months 3-6. At every visit a blood sample was taken for measurement of carbohydrate-deficient transferrin and gamma-glutamyltransferase.
Results: The proportion of men who reported relapse was 25.6% per scheduled contact on average. Positive predictive values indicated that relapse was identified with a 76.2% probability by carbohydrate-deficient transferrin values above the upper normal limit, in contrast to a 32.9% chance with gamma-glutamyltransferase. Carbohydrate-deficient transferrin was especially useful in detecting early relapses during the initial rehabilitation phase, when gamma-glutamyltransferase values had not normalized. Because of the longer half-life of gamma-glutamyltransferase, it had some value with a 4-week monitoring schedule in detecting new drinking episodes in alcoholics whose previous results had been normal.
Conclusions: Carbohydrate-deficient transferrin proved to be superior to gamma-glutamyltransferase in relapse detection in an outpatient care setting for alcoholics.
abstract_id: PUBMED:7978101
Carbohydrate-deficient transferrin as an alcohol marker among female heavy drinkers: a population-based study. Carbohydrate-deficient transferrin (CDT) has previously been reported to be an excellent marker of male alcoholics. Less is known of its efficiency among women, and especially of early-phase alcohol abuse in nonselected populations. The present population-based study examined the diagnostic value of CDT among consecutive women, including 13 teetotallers, 135 social drinkers (mean alcohol consumption 45 +/- 34 g/week), and 57 nonalcoholic heavy drinkers (197 +/- 97 g/week). Sixty-two women with a well-documented history of chronic alcoholism (942 +/- 191 g/week) were also studied, as well as 36 pregnant women used as a reference group. Eleven alcoholics were additionally followed through two weeks of abstinence. The CDT (containing part of isotransferrin with pI = 5.7, 5.8, and 5.9) was separated by anion exchange chromatography and assayed by radioimmunoassay. In the whole material, CDT correlated significantly with alcohol consumption (r = 0.43, p < 0.001) but not with conventional markers (gamma-glutamyltransferase, AST, ALT, and mean corpuscular volume). The CDT values of alcoholics (34 +/- 20 units/liter) were significantly (p < 0.001) higher than those of teetotallers (19 +/- 6 units/liter), social drinkers (20 +/- 6 units/liter), or pregnant women (16 +/- 3 units/liter). Heavy drinkers also had higher values (25 +/- 13 units/liter), but the difference did not reach statistical significance. The specificity of CDT was on the level of conventional markers when the cut-off value was increased from 26 to 29 units/liter. At a specificity of 95%, CDT found 19% of the heavy drinkers and 52% of the alcoholics; the best traditional marker, AST, with a specificity of 97%, found 7% and 56%, respectively.(ABSTRACT TRUNCATED AT 250 WORDS)
abstract_id: PUBMED:9660318
Carbohydrate-deficient transferrin and conventional alcohol markers as indicators for brief intervention among heavy drinkers in primary health care. Brief intervention is a promising treatment for heavy drinking. The present study examined the diagnostic value of carbohydrate-deficient transferrin (CDT), mean corpuscular volume (MCV), aspartate aminotransferase (AST), alanine aminotransferase (ALT), and gamma-glutamyltransferase (GGT) in detecting early-phase heavy drinkers for brief intervention treatment in primary health care. Laboratory data were collected from consecutive 20- to 60-year-old, early-phase heavy drinkers (329 males and 136 females), who were willing to undergo brief intervention treatment in five primary health care outpatient clinics. An elevated value of at least 1 of the 5 markers studied was found in 75% of the male and in 76% of the female heavy drinkers. The sensitivities of CDT, MCV, AST, ALT and GGT values were low; in men, respectively, 39%, 28%, 12%, 28%, and 33%, and in women 29%, 40%, 20%, 29%, and 34%. However, marker combinations, including CDT, reached a good level of sensitivity; the best triple combination (CDT or MCV or GGT) was positive in 69% of the men and 70% of the women. According to logistic regression, the age of the patient had an increasing effect on MCV, ALT and GGT. High body mass index increased all transaminases and decreased CDT and MCV. Smoking increased MCV and decreased AST. Thus, primary health care marker combinations, especially those including CDT, should be considered for the detection of early-phase heavy drinkers for brief intervention treatment.
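The gain from OR-combining markers reported above is easy to see numerically: under an independence assumption (mine, not the authors'), flagging on any of CDT, MCV or GGT at the quoted single-marker sensitivities in men gives 1 - (1 - 0.39)(1 - 0.28)(1 - 0.33) ≈ 0.71, close to the 69% observed for the triple combination. The toy simulation below illustrates this; real markers are correlated, so observed rates differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 329                       # male heavy drinkers in the study

# Simulated independent marker elevations at the reported single-marker
# sensitivities (independence is an assumption; real markers correlate).
cdt = rng.random(n) < 0.39
mcv = rng.random(n) < 0.28
ggt = rng.random(n) < 0.33

combo = cdt | mcv | ggt       # positive if ANY marker is elevated
print(f"CDT {cdt.mean():.2f}, MCV {mcv.mean():.2f}, GGT {ggt.mean():.2f}, "
      f"CDT-or-MCV-or-GGT {combo.mean():.2f}")   # expected around 0.71
```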
abstract_id: PUBMED:2571374
Carbohydrate deficient transferrin: a marker for alcohol abuse. Objective: To assess the value of serum carbohydrate deficient transferrin as detected by isoelectric focusing on agarose as an indicator of alcohol abuse.
Design: Coded analysis of serum samples taken from patients with carefully defined alcohol intake both with and without liver disease. Comparison of carbohydrate deficient transferrin with standard laboratory tests for alcohol abuse.
Setting: A teaching hospital unit with an interest in general medicine and liver disease.
Patients: 22 "self-confessed" alcoholics admitting to a daily alcohol intake of at least 80 g for a minimum of three weeks; 15 of the 22 self-confessed alcoholics admitted to hospital for alcohol withdrawal; 68 patients with alcoholic liver disease confirmed by biopsy attending outpatient clinics and claiming to be drinking less than 50 g alcohol daily; 47 patients with non-alcoholic liver disorders confirmed by biopsy; and 38 patients with disorders other than of the liver and no evidence of excessive alcohol consumption.
Intervention: Serial studies were performed on the 15 patients undergoing alcohol withdrawal in hospital. Main Outcome Measure: Determination of the relative value of techniques for detecting alcohol abuse.
Results: Carbohydrate deficient transferrin was detected in 19 of the 22 (86%) self confessed alcohol abusers, none of the 47 patients with non-alcoholic liver disease, and one of the 38 (3%) controls. Withdrawal of alcohol led to the disappearance of carbohydrate deficient transferrin at a variable rate, though in some subjects it remained detectable for up to 15 days. Carbohydrate deficient transferrin was considerably superior to the currently available conventional markers for alcohol abuse.
Conclusion: As the technique is fairly simple, sensitive, and inexpensive we suggest that it may be valuable in detecting alcohol abuse.
abstract_id: PUBMED:29958893
Determination of serum carbohydrate-deficient transferrin by a nephelometric immunoassay for differential diagnosis of alcoholic and non-alcoholic liver diseases. Background: Carbohydrate-deficient transferrin is a biological marker of excessive drinking. The aim of this study was to evaluate the diagnostic value of a direct nephelometric immunoassay for the differential diagnosis of alcoholic and non-alcoholic liver diseases in comparison with gamma glutamyl transferase.
Methods: Serum samples were obtained from 305 subjects, including 122 patients with alcoholic and 102 cases with non-alcoholic liver diseases. Serum levels of carbohydrate-deficient transferrin were expressed as a percentage of total transferrin.
Results: Serum % carbohydrate-deficient transferrin levels were significantly higher in patients with alcoholic than with non-alcoholic liver diseases. Carbohydrate-deficient transferrin had better specificity than gamma glutamyl transferase to differentiate between alcoholic and non-alcoholic liver diseases. There were 8 alcoholic liver disease patients with normal gamma glutamyl transferase levels, and carbohydrate-deficient transferrin was significantly elevated in 6 of them. On the other hand, there were 25 non-alcoholic liver disease patients with elevated gamma glutamyl transferase levels; their carbohydrate-deficient transferrin levels were within the reference intervals in all cases.
Conclusion: This simple carbohydrate-deficient transferrin immunoassay is useful to detect so-called gamma glutamyl transferase non-responding drinkers and also to exclude the possible role of excessive drinking in apparently non-alcoholic liver diseases. A large-scale prospective study is needed to further confirm the diagnostic utility of carbohydrate-deficient transferrin.
abstract_id: PUBMED:16799164
Comparison of the combined marker GGT-CDT and the conventional laboratory markers of alcohol abuse in heavy drinkers, moderate drinkers and abstainers. Aims: A combined index based on gamma-glutamyltransferase (GGT) and carbohydrate-deficient transferrin (CDT) measurements (GGT-CDT) has been recently suggested to improve the detection of excessive ethanol consumption. The aim of this work was to compare GGT-CDT with the conventional markers of alcohol abuse in individuals with a wide variety of alcohol consumption.
Methods: A cross-sectional and follow-up analysis was conducted in a sample of 165 heavy drinkers, consuming 40-540 g of ethanol per day, and 86 reference individuals who were either moderate drinkers (n = 51) or abstainers (n = 35).
Results: GGT-CDT (5.35 +/- 1.08) in the heavy drinkers was significantly higher than in the reference individuals (3.30 +/- 0.37). The sensitivity of GGT-CDT (90%) in correctly classifying heavy drinkers exceeded that of CDT (63%), GGT (58%), mean corpuscular volume (MCV) (45%), aspartate aminotransferase (AST) (47%), and alanine aminotransferase (ALT) (50%), being also essentially similar for alcoholics with (93%) or without (88%) liver disease. When comparing the data using either moderate drinkers or abstainers as reference population, the sensitivity of GGT-CDT, CDT, and ALT remained unchanged whereas the sensitivity of GGT, MCV, and AST was found to show variation.
Conclusions: GGT-CDT improves the sensitivity of detecting excessive ethanol consumption as compared with the traditional markers of ethanol consumption. These findings should be considered in the assessment of patients with alcohol use disorders.
abstract_id: PUBMED:11274018
Improved diagnostic classification of alcohol abusers by combining carbohydrate-deficient transferrin and gamma-glutamyltransferase. Background: Biochemical markers can provide objective evidence of high alcohol consumption. However, currently available markers have limitations in their diagnostic performance.
Methods: The diagnostic values of the most frequently used markers [carbohydrate-deficient transferrin (CDT), gamma-glutamyltransferase (GGT), aspartate aminotransferase, alanine aminotransferase, and mean corpuscular volume] were studied in an analysis of six different clinical studies (n = 1412) on alcohol abusers and social drinkers. The purpose of the analyses was to determine whether a combination of markers would improve the diagnosis of subjects.
Results: Discrimination between alcohol abusers and social drinkers, as measured by the areas under nonparametric ROC plots, was significantly better (P<0.001) for the new combined marker [gamma-CDT = 0.8·ln(GGT) + 1.3·ln(CDT)] than for any of the separate markers or a combination of CDT or GGT with other markers.
Conclusions: The combined variable gamma-CDT is a powerful tool to discriminate alcohol abusers from social drinkers and is recommended for clinical use.
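A worked example of the combined marker defined above: gamma-CDT = 0.8·ln(GGT) + 1.3·ln(CDT), classified against the reported cutoff of 6.5. The marker values fed in are hypothetical and assume the units of the assays used in the source studies.

```python
import math

def gamma_cdt(ggt, cdt):
    """Combined marker from the abstract: 0.8*ln(GGT) + 1.3*ln(CDT)."""
    return 0.8 * math.log(ggt) + 1.3 * math.log(cdt)

CUTOFF = 6.5  # reported to apply to both males and females

# Hypothetical marker values, assuming the units of the source assays
for ggt, cdt in [(25, 15), (80, 30)]:
    score = gamma_cdt(ggt, cdt)
    flag = "above cutoff" if score > CUTOFF else "below cutoff"
    print(f"GGT={ggt}, CDT={cdt}: gamma-CDT = {score:.2f} ({flag})")
```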
abstract_id: PUBMED:17728504
Carbohydrate-deficient transferrin as a marker of heavy drinking in Korean males. This study was performed to evaluate the usefulness of carbohydrate-deficient transferrin (CDT) as a marker of heavy drinking in Korean males. The subjects (143 Korean males) were classified into 2 groups according to the amount of drinking, moderate drinkers (72 individuals) who drank 14 drinks or less per week and heavy drinkers (71 individuals) who drank more than 14 drinks per week. Using %CDT, gamma glutamyl transferase (GGT), aspartate aminotransferase (AST), and alanine aminotransferase (ALT) as clinical markers for heavy drinking, sensitivity, specificity, positive and negative predictive values were investigated. Sensitivities of %CDT, GGT, AST, and ALT were 83.1%, 67.6%, 52.1% and 46.5%, respectively. Specificities were 63.9%, 45.8%, 72.2%, and 54.2%, respectively. Positive predictive values were 69.4%, 55.2%, 64.9%, and 50.0% respectively. Negative predictive values were 79.3%, 58.9%, 60.5%, and 50.6% respectively. The areas under the receiver operating characteristic curve (95% confidence interval) for %CDT, GGT, AST, and ALT were 0.823 (0.755-0.891), 0.578 (0.484-0.673), 0.622 (0.528-0.717), and 0.516 (0.420-0.613), respectively. CDT is considered as the most reliable marker for detecting heavy drinking in Korean males.
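The %CDT figures above can be cross-checked from a 2x2 table. Reconstructing counts from the reported percentages (59 of 71 heavy drinkers test positive, 46 of 72 moderate drinkers test negative) reproduces the published sensitivity, specificity and predictive values; the counts are inferred, not taken from the paper.

```python
# Counts inferred from the published percentages, not taken from the paper:
tp, fn = 59, 12   # 71 heavy drinkers: %CDT positive / negative
tn, fp = 46, 26   # 72 moderate drinkers: %CDT negative / positive

sensitivity = tp / (tp + fn)   # 0.831
specificity = tn / (tn + fp)   # 0.639
ppv = tp / (tp + fp)           # 0.694
npv = tn / (tn + fn)           # 0.793
print(f"sens {sensitivity:.3f}, spec {specificity:.3f}, "
      f"PPV {ppv:.3f}, NPV {npv:.3f}")
```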
abstract_id: PUBMED:11584153
Utility of a new assay for carbohydrate-deficient transferrin (Biorad %CDT TIA) to monitor abstinence during a treatment outcome study. Background: The ability to reliably detect heavy alcohol use is important in both clinical and research populations. The current study evaluates the utility of the newest method of measuring carbohydrate deficient transferrin (CDT) in monitoring the abstinence during a treatment outcome study.
Methods: Blood from 40 alcohol-dependent individuals was obtained at baseline and at weeks 4, 8, and 12 of treatment. Differences in percent of baseline GGT and %CDT levels were analyzed in people who remained abstinent throughout treatment (abstainers) and in those who consumed alcohol during treatment (drinkers).
Results: There was a significant decrease in the percent of baseline %CDT levels in the subjects who abstained at week 4 and a trend at weeks 8 and 12. Conversely, there were no significant differences in percent of baseline GGT levels between drinkers and abstainers at any time point.
Conclusions: Although small in nature, this study provides preliminary evidence for the use of the relatively new Biorad %CDT assay to monitor drinking status during treatment outcome studies. This study is also consistent with previously reported findings that GGT appears to be less sensitive than %CDT in detecting the consumption of alcohol. A larger trial focusing on sex differences in the utility of %CDT to monitor outcome would be of interest.
Answer: The decision to use carbohydrate-deficient transferrin (CDT) instead of gamma-glutamyltransferase (GGT) for detecting problem drinkers depends on various factors, including the context of use, the accuracy of the tests, and the specific populations being tested.
Several studies have shown that CDT can be a valuable marker for detecting excessive alcohol consumption. For instance, one study found that CDT was superior to GGT in detecting relapse in alcoholics during outpatient aftercare, particularly in identifying early relapses when GGT values had not normalized (PUBMED:8988962). Another study suggested that CDT is the most reliable marker for detecting heavy drinking in Korean males, with higher sensitivity compared to GGT and other conventional markers (PUBMED:17728504).
Moreover, a study that compared a combined index based on GGT and CDT measurements (GGT-CDT) with conventional markers found that GGT-CDT had a higher sensitivity in correctly classifying heavy drinkers than CDT, GGT, and other markers alone, indicating that a combination of markers might improve the detection of excessive ethanol consumption (PUBMED:16799164). Similarly, another study concluded that a combined variable gamma-CDT is a powerful tool to discriminate alcohol abusers from social drinkers and is recommended for clinical use (PUBMED:11274018).
However, a systematic review and meta-analysis of studies published before June 1998 indicated that the results obtained with commercially available CDT assays were not significantly better than GGT as markers of excessive alcohol use in paired studies (PUBMED:11106319). This suggests that while CDT may have certain advantages, it may not always outperform GGT.
In conclusion, while CDT has shown promise and may be superior to GGT in certain situations, such as relapse detection and in specific populations, the evidence is not unequivocal. The choice between CDT and GGT may depend on the specific clinical or research context, and a combination of markers may offer improved diagnostic accuracy. Further high-quality studies comparing different CDT assays with GGT in the same subjects are needed to provide more definitive guidance on the use of CDT over GGT for detecting problem drinkers. |
Instruction: Is Propranolol Safe and Effective for Outpatient Use for Infantile Hemangioma?
Abstracts:
abstract_id: PUBMED:26101993
Is Propranolol Safe and Effective for Outpatient Use for Infantile Hemangioma? A Prospective Study of 679 Cases From One Center in China. Background: The protocol for the treatment of infantile hemangioma with propranolol varies among different clinical centers.
Methods: Six hundred seventy-nine patients who were 1 to 12 months old were recruited in this prospective study to receive propranolol treatment. The response to propranolol therapy was classified into 4 levels. The results were primarily evaluated using color Doppler ultrasound examinations before and after propranolol treatment.
Results: The response was excellent in 176 (25.9%), good in 492 (72.5%), stable in 5 (0.7%), and poor in 6 (0.9%) of the patients. The mean age at the initiation of the therapy was 3.3 months (range, 1 to 10.9 months) and the mean duration of the therapy was 7.1 months (range, 3-17 months). The mean duration of the follow-up time after the discontinuation of the therapy was 5.3 months (range, 3-17 months). Regrowth of the hemangioma was observed in 92 cases (13.5%). Seventy-nine (11.6%) of the parents complained of their child's minor discomfort during the therapy.
Conclusions: Propranolol (2 mg/kg per day) may significantly reduce the size of a hemangioma. As an outpatient therapy, propranolol was found to be safe for Chinese children and to have minor side effects.
abstract_id: PUBMED:34287374
Safe and Effective Treatment of Intracranial Infantile Hemangiomas with Beta-Blockers. Infantile hemangiomas are common benign vascular tumors but are rarely found in an intracranial location. Our literature review identified 41 reported cases. There is no general consensus on management of these rare lesions and until recently, treatment was limited to surgery or pharmacological management with steroids or interferon. Although beta-blockers have been widely prescribed in the treatment of cutaneous infantile hemangiomas since 2008, their use in the treatment of intracranial infantile hemangiomas has been minimal. We present a case of infantile hemangioma affecting the right orbit, associated with intracranial extension, causing intermittent right facial nerve palsy. The patient achieved an excellent outcome following combined treatment with oral propranolol and topical timolol maleate 0.5%, with complete regression of the lesion by 4 months. We conclude that beta-blockers are a safe and effective treatment of intracranial infantile hemangiomas and can be employed as first-line management of these lesions.
abstract_id: PUBMED:21697036
Oral propranolol: an effective, safe treatment for infantile hemangiomas. Infantile hemangiomas (IH) are the most common childhood tumors. In 2008, Labreze reported the serendipitous effect of oral propranolol on hemangioma and since then it has overshadowed the use of other therapeutic modalities in the treatment of IH. The aim of this prospective, clinical study was to assess the efficacy and safety profile of oral propranolol at a fixed dose of 2 mg/kg in the treatment of 30 patients with problematic IH. Propranolol treatment continued for a duration of 2-14 months, and 60% of the patients (n=18) showed a final excellent response with complete resolution of the lesion (P<0.001). 20% (n=6) showed a good response with more than 50% reduction in the size of the IH. 16.6% (n=5) showed a fair response with less than 50% reduction in the size of the IH. Only one patient (3.3%) was resistant to treatment. Five patients (17.24%) showed evidence of rebound growth after cessation of therapy and responded well to re-treatment. We did not observe any side effects related to the oral propranolol. In conclusion, propranolol therapy at a fixed dose of 2 mg/kg, given in three equally divided doses, is a very safe and effective regimen in the treatment of IH.
abstract_id: PUBMED:22897120
Use of propranolol for treatment of infantile haemangiomas in an outpatient setting. Introduction: Propranolol has recently emerged as an effective drug treatment for infantile haemangiomas. The side effect profile of the drug and the safety of administering propranolol in outpatient settings in this age group remain uncertain. We report our experience with 200 infants and children prescribed propranolol to treat infantile haemangiomas, including 37 patients considered to have a poor response to treatment.
Method: Patients were prescribed propranolol (1 mg/kg/dose bd) as outpatients at the Vascular Anomalies Service at the Royal Children's Hospital, Melbourne.
Results: The median age at commencement was 4 months (range 5 days-7 years). Twenty patients were older than 12 months at commencement. The median duration of treatment was 8 months. About 80% of treated haemangiomas were on the face. Approximately 50% of patients were considered to have an excellent response, 30% to have a good response and 20% to have a poor response. All segmental facial haemangiomas responded well. In contrast, 25% of focal facial haemangiomas responded poorly. Sleep disturbance was the most common side effect. Gross motor abnormalities including delayed walking were observed in 13 patients.
Conclusion: Propranolol appears to be an effective treatment for infantile haemangiomas, particularly large segmental facial lesions. A poor response was seen in 20% of patients. Treatment has been provided in an outpatient setting without major complications and with excellent parental compliance. The side effect profile appears to be favourable, but further follow-up is required to identify unexpected long-term side effects.
abstract_id: PUBMED:35283595
A Clinicopathological Study to Assess the Role of Intralesional Sclerotherapy Following Propranolol Treatment in Infantile Hemangioma. Context: As propranolol has emerged as first-line therapy for problematic infantile hemangioma, the number of non-responders and partial responders to propranolol therapy is also increasing.
Aims: The study was conducted to evaluate the response of intralesional bleomycin, triamcinolone, and a combination of both as second line of treatment for the residual hemangioma following propranolol therapy.
Settings And Design: A prospective comparative study was conducted in patients who were either non-responders or partial responders to previous propranolol treatment.
Materials And Methods: The patients randomly received injection bleomycin, injection triamcinolone, or a combination of both bleomycin and triamcinolone. The response to treatment was recorded clinically using photographs. The pathological response was assessed by calculating pre-treatment and post-treatment microvessel density in biopsies of lesions from non-cosmetic sites using immunohistochemistry.
Statistical Analysis Used: The χ2 test was used to test the association between the variables. The utility of microvessel density (MVD) in predicting clinical response to therapy was assessed using a receiver operating characteristic (ROC) curve.
Results: Of the 134 patients, 42 received bleomycin, 44 received triamcinolone, and the remaining 48 were treated with a combination of both. The overall clinical response was better in the combination group than in the bleomycin group (P = 0.018) and the triamcinolone group (P = 0.0005) after 6 months of follow-up. There was no difference in clinical response between the triamcinolone and bleomycin groups. Change in MVD correlated with the clinical response.
Conclusion: The combination of bleomycin and triamcinolone is effective and safe for the treatment of residual hemangioma.
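As a side note on the statistical workflow described in this abstract, the χ2 test of association between treatment group and clinical response, and the ROC analysis of MVD as a predictor of response, could be run as in the sketch below. Every count and value here is invented for illustration and does not reproduce the study data.

    import numpy as np
    from scipy.stats import chi2_contingency
    from sklearn.metrics import roc_auc_score

    # Hypothetical 3x2 contingency table:
    # rows = bleomycin, triamcinolone, combination; cols = responder, non-responder.
    table = np.array([[25, 17],
                      [26, 18],
                      [40,  8]])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")

    # Hypothetical ROC: does the post-treatment drop in microvessel density
    # (MVD) discriminate clinical responders from non-responders?
    responder = np.array([1, 1, 0, 1, 0, 1, 0, 1])
    mvd_drop = np.array([12.0, 9.5, 3.1, 8.7, 4.0, 11.2, 2.5, 7.9])
    print("AUC =", roc_auc_score(responder, mvd_drop))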
abstract_id: PUBMED:30047346
Cost-effectiveness of treating infantile haemangioma with propranolol in an outpatient setting. Background: Infantile haemangioma is one of the most commonly known benign vascular tumours of infancy and childhood, having an incidence of 3-10%. Most lesions regress spontaneously; however, some may require treatment owing to their clinical and cosmetic effects. Propranolol has become the treatment of choice for infantile haemangioma, but treatment protocols are largely institutional based without any specific consensus guidelines. Our aim was to evaluate the cost-effectiveness of propranolol use as inpatient versus outpatient therapy.
Methods: A decision tree model was created depicting alternate strategies for initiating propranolol treatment on an inpatient versus outpatient basis, combined with the option of a pretreatment echocardiogram applied to both strategies. The cost analysis was based on treatment of haemangioma in patients who were born at term, had no chronic illnesses, had a non-life-threatening location of the haemangioma, and were not taking any other medications that could potentiate the side effects of propranolol. A sensitivity analysis was performed to evaluate the effect of the probability of side effects.
Results: The average cost incurred for inpatient treatment of infantile haemangioma was approximately $2603 for a single hospital day and increased to $2843 with the addition of an echocardiogram. The expected cost of treatment in the outpatient setting was $138, which increased to $828 after the addition of an echocardiogram.
Conclusion: Treating infantile haemangioma with propranolol is more cost-effective when initiated on an outpatient basis.
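To make the decision-analytic structure concrete, the sketch below computes expected costs from branch probabilities and costs. The dollar figures are those reported in the abstract; the side-effect probability and the cost of a side-effect-triggered admission are invented placeholders, since the abstract does not report them.

    # Expected-cost sketch for the two initiation strategies. Dollar values
    # come from the abstract; probabilities are hypothetical.
    COST_INPATIENT_DAY = 2603.0           # inpatient initiation, one hospital day
    COST_OUTPATIENT    = 138.0            # outpatient initiation
    ECHO_INPT          = 2843.0 - 2603.0  # echo add-on in the inpatient arm ($240)
    ECHO_OUTPT         = 828.0 - 138.0    # echo add-on in the outpatient arm ($690)
    P_ADMIT_FOR_SE     = 0.01             # assumed chance a side effect forces admission
    COST_SE_ADMISSION  = 2603.0           # assumed cost of that admission

    def expected_cost_outpatient(with_echo: bool) -> float:
        base = COST_OUTPATIENT + (ECHO_OUTPT if with_echo else 0.0)
        # Outpatient starts carry a small risk of a later admission.
        return base + P_ADMIT_FOR_SE * COST_SE_ADMISSION

    print("outpatient, no echo  :", expected_cost_outpatient(False))
    print("outpatient, with echo:", expected_cost_outpatient(True))
    print("inpatient, no echo   :", COST_INPATIENT_DAY)  # side effects managed during the stay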
abstract_id: PUBMED:22727954
Outpatient treatment of infantile hemangiomas with propranolol: a prospective study. Objective: To assess the safety and effectiveness of oral propranolol (OP) in the treatment of infantile hemangiomas.
Material And Method: We conducted a prospective study of infantile hemangiomas (IHs) treated with oral propranolol between October 2008 and March 2011. We included fast-growing IHs in the proliferative phase, IHs affecting vital structures, ulcerated IHs, and IHs that could cause functional or aesthetic problems after the proliferative phase. The patients received oral propranolol 2 mg/kg/d and were monitored on an outpatient basis. Response to treatment was assessed by volume reduction, lightening of color, improvement of symptoms, and parent satisfaction. Time of initial and peak response, as well as side effects and sequelae, were recorded.
Results: We analyzed 20 IHs in 17 girls and 3 boys. The main sites of involvement were around the eyes (20%), the nose (15%), the neck (15%), and the trunk (15%). Ninety percent of the hemangiomas were focal and in the proliferative phase. Treatment was started between the ages of 2 and 19 months, and the main reason for starting treatment was rapid growth (50% of cases). An initial response was observed in 70% of cases, and in only 2 of these did it take more than a month. Peak response occurred at 3 months. All the IHs responded to treatment; the response was excellent in 55% of cases, good in 35%, and minimal in 10%. The following factors were predictive of response: focal IH, proliferative phase, periorbital location, and ulceration. No serious side effects were observed.
Conclusion: Oral propranolol was clinically effective in reducing the volume and color of infantile hemangiomas, although the reduction was not complete and telangiectasia and scarring persisted after treatment. Oral propranolol also proved to be safe for use in outpatients.
abstract_id: PUBMED:22166728
Propranolol for infantile haemangiomas: initiating treatment on an outpatient basis. Introduction: Propranolol was recently discovered to be an effective treatment for infantile haemangiomas, and varying doses and monitoring regimens have been proposed. Adverse events, although uncommon, have been reported.
Materials And Methods: This was a retrospective chart review of infants with haemangiomas who were started on propranolol at a dose of 3 milligrams per kilogram per day on an outpatient basis. After a baseline cardiac evaluation including an electrocardiogram and an echocardiogram, treatment was initiated during 6 hours of observation.
Results: A total of 15 patients were identified; however, only 13 returned for at least one follow-up visit. This cohort was followed up for a median of 2.8 months with a range from 0.2 to 10.0. No hypotension, hypoglycaemia, bronchospasm, or clinically significant bradycardia occurred during treatment. All patients had clinical improvement of their haemangiomas.
Conclusions: This study suggests that initiating treatment during outpatient observation may be a reasonable alternative to inpatient admission. In addition, expensive testing may not be necessary during pre-treatment screening when the physical examination is normal.
abstract_id: PUBMED:25556828
Outpatient use of oral propranolol and topical timolol for infantile hemangiomas: survey results and comparison with propranolol consensus statement guidelines. Oral and topical β-blockers are used to treat infantile hemangiomas (IHs). Although a recent consensus report provided guidelines for the treatment of IH with propranolol, there are no standard guidelines for the use of topical timolol. The objectives of this study were to determine the current use of oral propranolol and topical timolol by pediatric dermatologists in an outpatient setting and to compare current propranolol use with published propranolol consensus guidelines. An electronic survey was sent to pediatric dermatologists in May and June 2013. One hundred forty-nine pediatric dermatologists responded to the survey, a 79% response rate. Of the respondents, 96% prescribed oral propranolol, but 75% did not follow consensus guidelines exactly; recommended history, physical examination, initial dose, and frequency varied. The dose of propranolol was usually titrated up to goal dose as recommended (89%). Fifty-six percent monitored vital signs in patients after the initial dose and 49% continued to monitor vital signs in their clinic after each dose escalation, which did not meet consensus guideline recommendations. Ninety-one percent reported using topical timolol for the treatment of IH and 66% responded they had used topical timolol in conjunction with oral propranolol to treat IH. The most common indication was superficial hemangiomas (97%). Most practitioners (74%) did not routinely monitor heart rate or blood pressure in infants treated with topical timolol. This study highlights the variability in prescribing and monitoring practices of physicians using propranolol for the treatment of IHs and demonstrates that topical timolol is commonly used alone and in conjunction with oral propranolol to treat IHs.
abstract_id: PUBMED:21402965
Use of propranolol in treating hemangiomas. Question: I see many children with infantile hemangiomas and have read about new therapeutic options such as propranolol. Is this medication effective and safe for treating hemangiomas in children?
Answer: Most infantile hemangiomas resolve spontaneously without any need for therapy. In many case series, propranolol has been shown to be effective and safe in treating hemangiomas that cause complications. Further studies are required to determine the optimal dose and duration of propranolol treatment for problematic hemangiomas.
Answer: Based on the provided abstracts, propranolol appears to be both safe and effective for outpatient use in the treatment of infantile hemangioma. A prospective study of 679 cases in China found that propranolol significantly reduced the size of hemangiomas with minor side effects, suggesting it is safe for Chinese children as an outpatient therapy (PUBMED:26101993). Another study reported an excellent outcome with the combined treatment of oral propranolol and topical timolol maleate for intracranial infantile hemangiomas, indicating that beta-blockers can be employed as first-line management for these lesions (PUBMED:34287374).
A clinical study assessing the efficacy and safety profile of oral propranolol at a fixed dose of 2 mg/kg in the treatment of problematic infantile hemangiomas showed that 60% of patients had complete resolution of the lesion, with no side effects related to the treatment, supporting its safety and effectiveness (PUBMED:21697036). Similarly, another report from the Royal Children's Hospital, Melbourne, indicated that propranolol is an effective treatment for infantile hemangiomas, especially large segmental facial lesions, with a favorable side effect profile when provided in an outpatient setting (PUBMED:22897120).
For cases where propranolol treatment alone was not sufficient, a study found that the combination of bleomycin and triamcinolone is effective and safe for the treatment of residual hemangioma (PUBMED:35283595). Additionally, a cost-effectiveness analysis showed that treating infantile hemangioma with propranolol is more cost-effective when initiated on an outpatient basis (PUBMED:30047346).
A prospective study concluded that oral propranolol was clinically effective in reducing the volume and color of infantile hemangiomas and proved to be safe for use in outpatients (PUBMED:22727954). Moreover, a retrospective chart review suggested that initiating treatment on an outpatient basis may be a reasonable alternative to inpatient admission, with no serious side effects observed (PUBMED:22166728).
Lastly, a survey among pediatric dermatologists revealed that, despite variability in prescribing and monitoring practices, the large majority prescribed oral propranolol and topical timolol for the treatment of infantile hemangiomas in an outpatient setting, reflecting broad acceptance of outpatient beta-blocker therapy in routine practice (PUBMED:25556828).
Instruction: Informal care of people with mental disorders: does the Austrian long-term care system provide adequate support?
Abstracts:
abstract_id: PUBMED:17555005
Informal care of people with mental disorders: does the Austrian long-term care system provide adequate support? Objective: The Austrian long-term care system covers all types of long-term chronic diseases and handicaps and is based on a payment for care scheme. The benefit is directed to care recipients, who are, in the outpatient sector, largely free in how to use it. As a result, the payment for care scheme also has a significant impact on the provision of informal care. The paper studies this impact for the particular case of informal care provided for mentally ill people.
Methods: The analysis is based on a questionnaire survey of persons providing informal care to mentally ill people in Austria. The data are compared with results from other studies on informal care provision in Austria.
Results: The study shows specific characteristics of informal long-term care for people with mental illness which is associated with specific burdens for the informal carers. Carers and care recipients benefit only to a moderate extent from public long-term care benefits and bear comparably high material and social costs.
Conclusions: The cash-oriented long-term care system in Austria offers only limited support for the particular case of informal care provided for mentally ill people. The long-term care system needs to be tailored to the special needs of both carer and care recipient in order to achieve the stated aim of self-determination and freedom of choice.
abstract_id: PUBMED:31417610
Insights into the system of care of the elderly with mental disorders from the perspective of informal caregivers in Lithuania. Background: Changes in the demographics and respective growth of life expectancy and social needs make informal caregiving crucial component of comprehensive health and social care network, which substantially contributes to the health and well-being of the elderly. The purpose of this paper is to understand the system of care of elderly patients with mental disorders from the perspective of informal caregivers in Lithuania.
Methods: We conducted five semi-structured focus group discussions with 31 informal caregivers attending to elderly patients with mental disorders. The data were audiotaped and transcribed verbatim. A thematic analysis was subsequently performed.
Results: Five thematic categories were established: (1) the current state of care-receivers: representation of the complexity of patients' physical and mental condition; (2) the current state of caregivers: lack of formal caregivers' integration as a team and inadequate formal involvement of informal caregivers; (3) basic care needs: the group's needs relating directly to the patient, care organisation and the caretaker; (4) the (non-)readiness of the existing system to respond to the needs for care: long-term care reliance on institutional services, lack of distinction between acute/immediate care and nursing, and lack of integration between the medical sector and the social care sector; (5) potential trends for further improvement of long-term care for the elderly with mental disorders.
Conclusions: Strengthening of the care network for elderly patients with mental disorders should cover more than a personalised and comprehensive assessment of the needs of patients and their caregivers. Comprehensive approaches, such as formalization of informal caregivers' role in the patient care management and planning, a more extensive range of available services and programs supported by diverse sources of funding, systemic developments and better integration of health and social care systems are essential for making the system of care more balanced.
abstract_id: PUBMED:33792016
Needs, Dilemmas, and Policy Suggestions Related to Patients With Mental Health Long-Term Care in Taiwan. The increase in life expectancy in Taiwan has increased the incidence of age-related problems among patients with mental illness. Therefore, the needs related to long-term care in mental health are significantly important. These needs include: (1) reducing stigmatization; (2) reducing the physical and economic burden of caregivers; (3) constructing a comprehensive, long-term care service system; and (4) developing assessment tools suitable to the long-term care of patients with mental illness. Moreover, six dilemmas in meeting long-term care needs were identified: (1) lack of a model of continuous care and of a platform for integrating hospital and community resources; (2) poor or inadequate service quality provided by certain community rehabilitation institutions; (3) the need for patient/family-centered care; (4) the persistence of stigma and misunderstanding; (5) the heavy burdens borne by family members providing long-term care; and (6) the disconnect between subsequent needs and the disability assessment system. Policy suggestions provided in this article include: (1) establish an inclusive platform for mental health long-term care information and resource integration; (2) construct long-term care centers for patients with mental health conditions; (3) train adequate manpower to provide long-term care services to these patients; and (4) promote community inclusiveness for these patients. To enter the era of long-term mental health care, government policy should target long-term care programs to the needs of patients with mental health conditions, including seamless integration of services into the long-term mental health care system and the care resources of community mental health, development of suitable assessment tools, and establishment of a multidisciplinary team of long-term care professionals to provide mental health care.
abstract_id: PUBMED:10185310
Current challenges to providing personalized care in a long term care facility. Long term care facilities are finding it increasingly difficult to deliver quality, personalized care to their clients. This is related to: the integration of frail, elderly residents with cognitive impairment and/or behavioural disorders; ethical dilemmas; settings that are not conducive to providing a stimulating and supportive atmosphere which would enhance care delivery; admission of residents with increasingly complex care needs without adequate funding and/or support services, and staff training needs. This paper defines how one organization intends to facilitate the changes required to improve the delivery of quality, personalized care.
abstract_id: PUBMED:31070867
Nurses' perceptions regarding providing psychological care for older residents in long-term care facilities: A qualitative study. Aims And Objectives: To explore nurses' perceptions regarding providing psychological health care for older residents in long-term care facilities (LTCFs).
Background: Loneliness and depressive symptoms are commonly observed among older residents living in LTCFs. Nurses are expected to provide holistic care including physical, psychological and social care for older residents in LTCFs to fulfil their needs. Therefore, understanding nurses' feelings and thoughts regarding providing care for older residents who feel lonely, sad, unhappy or depressed is important for delivering better care.
Design: A qualitative research design was employed. The Standards for Reporting Qualitative Research (SRQR) was used to enhance for reporting quality.
Methods: Purposive sampling and snowball sampling were applied in Northern Taiwan. One-to-one in-depth interviews were conducted using a semi-structured interview guide. Twenty-one nurses with a mean age of 38.4 years were interviewed. Content analysis was performed for data analysis.
Findings: Four themes were generated from the data: "insufficient psychological healthcare competency," "having a willing heart but not adequate support," "families playing an essential role in residents' mood" and "physical-oriented care model."
Conclusions: Long-term care facility nurses felt that they were not adequately prepared to care for older adults' psychological problems, either before their nursing career or during their practice. Unreasonable nurse-to-resident ratios and an absence of care consensus among healthcare providers can make nurses feel that they have a willing heart but not adequate support. Family members are essential to older residents' emotional status within the Taiwanese cultural context. The physical care evaluation indicators emphasised by LTCF accreditation have shaped the current care practice model.
Implications For Practice: This study provides valuable information for LTCF nurses, managers and directors to develop appropriate strategies to assist nurses in providing better psychological health care for older residents. Evaluation indicators required by LTCF accreditation in Taiwan must be re-examined at the earliest stage.
abstract_id: PUBMED:25175475
The Relationship of Care Time with Functional Status and Patient Characteristics among Patients in Long-term Care Hospitals. Objectives: The aim of this study was to investigate the functional status variables related to the care time provided by health professionals for patients in long-term care facilities.
Methods: The functional status of 1001 patients in 8 long-term care hospitals was examined with the Resident Assessment Instrument for Long-term Care Facility Version 2.0. The care time of health professionals for patients was calculated using data from a self-reported task survey of nurses, auxiliary nurses, private aides, doctors, physiotherapists and social workers.
Results: The average care time per diem was 240.6 minutes. The care times provided by doctors, nurses and private aides were 11.0, 71.0 and 139.5 minutes, respectively. The lower the function in activities of daily living (ADL) and the greater the symptoms of extensive services, special care and clinical complexity, the more care time was provided. On the contrary, the greater the symptoms of nursing rehabilitation, depression, cognitive disorder, behavior problems and psychiatric/mood disorder, the less care time was provided. Age and gender were not significantly related to the care time.
Conclusions: Developing a case-mix classification system for elderly long-term care patients may be helpful for both patients and health care providers. The variables of ADL, extensive services, special care and clinical complexity should be considered in the development of a case-mix system for long-term care patients in Korea.
abstract_id: PUBMED:20598194
Principles of good care for long-term care facilities. Background: The International Psychogeriatric Association Task Force on Mental Health Services in Long-Term Care Facilities aims to support and strengthen mental health services in the long-term care sector. The purpose of this paper is to identify broad principles that may underpin the drive towards meeting the mental health needs of residents of long-term care facilities and their families, as well as to enhance the overall delivery of residential care services.
Methods: Principles of good care are extrapolated from an analysis of international consensus documents and existing guidelines and discussed in relation to the research and practice literature.
Results: Although the attention to principles is limited, this review reveals an emerging consensus that: (1) residential care should be situated within a continuum of services which are accessible on the basis of need; (2) there should be an explicit focus on quality of care in long-term care facilities; and (3) quality of life for the residents of these facilities should be a primary objective. We take a broad perspective on the challenges associated with actualizing each of these principles, taking into consideration key issues for families, facilities, systems and societies.
Conclusions: Recommendations for practice, policy and advocacy to establish an internationally endorsed principles-based framework for the evolution and development of good mental health care within long-term care facilities are provided.
abstract_id: PUBMED:25035692
Integrating care for people with mental illness: the Care Programme Approach in England and its implications for long-term conditions management. Introduction: This policy paper considers what the long-term conditions policies in England and other countries could learn from the experience of the Care Programme Approach (CPA). The CPA was introduced in England in April 1991 as the statutory framework for people requiring support in the community for more severe and enduring mental health problems. The CPA approach is an example of a long-standing 'care co-ordination' model that seeks to develop individualised care plans and then attempt to integrate care for patients from a range of providers.
Policy Description: The CPA experience is highly relevant to both the English and international debates on the future of long-term conditions management where the agenda has focused on developing co-ordinated care planning and delivery between health and social care; to prioritise upstream interventions that promote health and wellbeing; and to provide for a more personalised service.
Conclusion: This review of the CPA experience suggests that there is the potential for better care integration for those patients with multiple or complex needs where a strategy of personalised care planning and pro-active care co-ordination is provided. However, such models will not reach their full potential unless a number of preconditions are met including: clear eligibility criteria; standardised measures of service quality; a mix of governance and incentives to hold providers accountable for such quality; and genuine patient involvement in their own care plans.
Implications: Investment and professional support to the role of the care co-ordinator is particularly crucial. Care co-ordinators require the requisite skills and competencies to act as a care professional to the patient as well as to have the power to exert authority among other care professionals to ensure multidisciplinary care plans are implemented successfully. Attention to inter-professional practice, culture, leadership and organisational development can also help crowd-in behaviours that promote integrated care.
abstract_id: PUBMED:33250796
The Active Recovery Triad Model: A New Approach in Dutch Long-Term Mental Health Care. Unlike developments in short-term clinical and community care, the recovery movement has not yet gained foothold in long-term mental health services. In the Netherlands, approximately 21,000 people are dependent on long-term mental health care and support. To date, these people have benefited little from recovery-oriented care, rather traditional problem-oriented care has remained the dominant approach. Based on the view that recovery is within reach, also for people with complex needs, a new care model for long-term mental health care was developed, the active recovery triad (ART) model. In a period of 2.5 years, several meetings with a large group of stakeholders in the field of Dutch long-term mental health care took place in order to develop the ART model. Stakeholders involved in the development process were mental health workers, policy advisors, managers, directors, researchers, peer workers, and family representatives. The ART model combines an active role for professionals, service users, and significant others, with focus on recovery and cooperation between service users, family, and professionals in the triad. The principles of ART are translated into seven crucial steps in care and a model fidelity scale in order to provide practical guidelines for teams implementing the ART model in practice. The ART model provides guidance for tailored recovery-oriented care and support to this "low-volume high-need" group of service users in long-term mental health care, aiming to alter their perspective and take steps in the recovery process. Further research should investigate the effects of the ART model on quality of care, recovery, and autonomy of service users and cooperation in the triad.
abstract_id: PUBMED:11432122
Long-term care in dementia: patients and caregivers. General principles of managing chronic, age-associated diseases apply as much to Alzheimer's disease (AD) and other late-life dementing disorders as they do to congestive heart failure or osteoarthritis. Beyond efforts to maintain residual tissue or organ function, important physician roles include promoting general well-being and helping patients and their caregivers adjust to disease-related limitations. Physicians provide essential information to patients and their families about the disease, its social and legal ramifications, and community resources to facilitate care. Therefore, physicians must be knowledgeable about broadly intersecting medical, legal, financial, and ethical issues surrounding the long-term management of AD and other dementias. The many challenges faced by patients with dementia and their caregivers over time underscore the need for an ongoing diagnostic and therapeutic alliance with primary care physicians. This article reviews salient aspects of long-term care for patients with AD and other dementias, highlighting the vital and varied roles of physicians in managing these chronic brain disorders.
Answer: The Austrian long-term care system, which is based on a payment for care scheme, offers only limited support for the particular case of informal care provided for mentally ill people. The system is cash-oriented and is directed to care recipients who are largely free in how to use it in the outpatient sector. However, the study shows that carers and care recipients benefit only to a moderate extent from public long-term care benefits and bear comparably high material and social costs. This indicates that the system needs to be tailored to the special needs of both carer and care recipient in order to achieve the stated aim of self-determination and freedom of choice (PUBMED:17555005). |
Instruction: Relocation of patients after cardiac surgery: is it worth the effort?
Abstracts:
abstract_id: PUBMED:38174167
Pacemaker Relocation for Radiation Against Overlapping Lung Cancer. We report a patient with an implanted pacemaker (PM) who developed lung cancer of the left upper lobe just behind the PM. The patient was an 81-year-old man with multiple comorbidities, and radiation was the only treatment option. The PM had to be relocated to avoid direct radiation exposure to the device. An epicardial pacing lead was implanted on the right ventricular epicardium, and the new generator was implanted in the abdomen. The patient was treated with a total of 62 Gy of radiotherapy for the lung cancer, achieving temporary shrinkage of the tumor. During the radiotherapy period, the PM functioned well without harmful events. When radiation therapy is needed in cases where the tumor overlaps the PM, relocation surgery using an epicardial pacing lead may be a useful option.
abstract_id: PUBMED:31272901
Role of autologous platelet-rich fibrin in relocation pharyngoplasty for obstructive sleep apnoea. The aim of this study was to investigate the efficacy of platelet-rich fibrin (PRF) in decreasing the incidence of wound breakdown in relocation pharyngoplasty performed for the treatment of obstructive sleep apnoea (OSA). This prospective clinical study included 30 OSA patients. They were divided into two groups according to a random table. One group underwent classic relocation pharyngoplasty as described by Li and Lee in 2009. The other group underwent relocation pharyngoplasty with the placement of PRF before suturing. The main outcomes measured during follow-up were the degree of postoperative pain (assessed using a visual analogue scale), wound dehiscence, and the time taken to return to a normal diet after surgery. There was a statistically significant difference in wound dehiscence, with less dehiscence in the PRF group (P=0.013). There was less pain on days 3, 5, and 10 postoperatively in the PRF group (P<0.001). The time taken to return to a normal diet was lower in the PRF group (P=0.001). There was a reduction in apnoea-hypopnoea index (AHI) at 6 months postoperative for all patients. PRF is a powerful bioactive tissue healing material that can provide an important option to decrease the incidence of palatal wound breakdown in relocation pharyngoplasty and in other palatal procedures.
abstract_id: PUBMED:36074950
Does Relocation of Lower Pole Stone During Retrograde Intrarenal Surgery Improve Stone-Free Rate? A Prospective Randomized Study. Purpose: The aim of this study was to compare the stone-free rate (SFR) of in situ treatment vs relocation and lithotripsy for lower pole stones of less than 2 cm following retrograde intrarenal surgery (RIRS). Methods: This prospective randomized study was undertaken from June 2019 to May 2020 in patients undergoing RIRS for lower pole renal stones less than 2 cm in diameter. Patients were randomized into two groups: in situ lithotripsy group and relocation lithotripsy group. The in situ lithotripsy group underwent laser lithotripsy for lower pole stones without relocation of the calculus, and the relocation lithotripsy group had their stones relocated to a favorable location using a tipless Nitinol basket, followed by laser lithotripsy. Laser lithotripsy was achieved using the holmium:YAG (Ho:YAG) laser (120 W) with a 200-μm laser fiber. A Double-J stent was placed in all patients at the end of the procedure. Patient demographics, stone characteristics, operative outcomes, and complications were evaluated. The SFR was determined at 1 month postoperatively with a kidney, ureter, and bladder radiograph (KUB) and ultrasound KUB. Results: Sixty-eight patients were included in the study: in situ group (n = 35) and relocation group (n = 33). The mean stone size and stone density were similar between the groups. The total operative duration, lasing duration, and total energy used were similar between the groups. At the 1-month follow-up, the complete SFR was 85.7% and 91% in the in situ lithotripsy and relocation lithotripsy groups, respectively (p = 0.506). Conclusions: Relocation followed by subsequent laser lithotripsy was associated with similar SFRs as with in situ laser lithotripsy for lower pole renal calculi less than 2 cm in diameter following RIRS using the Ho:YAG laser.
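As an aside, the reported stone-free proportions can be converted back into approximate counts (30/35 in situ vs 30/33 relocation) and compared with a small-sample test, as sketched below. The counts are reconstructed from the percentages, and Fisher's exact test is used here only for illustration; the abstract does not state which test produced p = 0.506.

    from scipy.stats import fisher_exact

    # Counts reconstructed from the reported stone-free rates:
    # [stone-free, residual stone]
    in_situ = [30, 5]      # 30/35 = 85.7%
    relocation = [30, 3]   # 30/33 = 90.9%

    odds_ratio, p_value = fisher_exact([in_situ, relocation])
    print(f"OR = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")  # clearly non-significant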
abstract_id: PUBMED:30706624
Relocation of inadequate resection margins in the wound bed during oral cavity oncological surgery: A feasibility study. Background: Specimen-driven intraoperative assessment of the resection margins provides immediate feedback if an additional excision is needed. However, relocation of an inadequate margin in the wound bed has shown to be difficult. The objective of this study is to assess a reliable method for accurate relocation of inadequate tumor resection margins in the wound bed after intraoperative assessment of the specimen.
Methods: During oral cavity cancer surgery, the surgeon placed numbered tags on both sides of the resection line in a pair-wise manner. After resection, one tag of each pair remained on the specimen and the other tag in the wound bed. Upon detection of an inadequate margin in the specimen, the tags were used to relocate this margin in the wound bed.
Results: The method was applied during 80 resections for oral cavity cancer. In 31 resections an inadequate margin was detected, and based on the paired tagging an accurate additional resection was achieved.
Conclusion: Paired tagging facilitates a reliable relocation of inadequate margins, enabling an accurate additional resection during the initial surgery.
abstract_id: PUBMED:23818489
Positional dependency and surgical success of relocation pharyngoplasty among patients with severe obstructive sleep apnea. Objective: To examine the effect of positional dependency on surgical success among patients with severe obstructive sleep apnea (OSA) following modified uvulopalatopharyngoplasty, known as relocation pharyngoplasty.
Study Design: Case series with planned data collection.
Setting: Tertiary referred center.
Subjects And Methods: Standard nocturnal polysomnography was used to compare the apnea-hypopnea index (AHI) in different sleep positions before and after relocation pharyngoplasty in 47 consecutive patients with severe OSA (AHI, 59.5 ± 18.2 events/hour; Epworth Sleepiness Scale [ESS] scores, 12.2 ± 4.4) who failed continuous positive airway pressure therapy. Positional (dependency) OSA was defined when the supine:non-supine AHI ratio was >2, otherwise it was defined as nonpositional OSA. Surgical success was defined as a ≥50% reduction in AHI and a postoperative AHI of ≤20 events/hour. Polysomnographic parameters, ESS, and surgical success following surgery were recorded.
Results: Of the 47 patients, 27 (57%) had positional OSA and 20 (43%) nonpositional OSA. The nonpositional OSA patients had higher AHI and ESS scores than the positional OSA patients (P = .002 and .104, respectively). Relocation pharyngoplasty significantly improved AHI and ESS scores in both positional and nonpositional OSA groups 6 months postoperatively (P < .05). The overall surgical success rate was 49%; however, positional OSA patients had a significantly higher success rate than nonpositional OSA patients (67% vs 25%, P = .008).
Conclusion: The presence of positional dependency at baseline was a favorable outcome predictor of surgical success among severe OSA patients undergoing relocation pharyngoplasty.
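The two outcome definitions used in this study translate directly into simple decision rules; a minimal sketch follows (the function names are illustrative, not the authors').

    def is_positional_osa(supine_ahi: float, non_supine_ahi: float) -> bool:
        """Positional OSA: supine AHI more than twice the non-supine AHI."""
        if non_supine_ahi == 0:
            return supine_ahi > 0  # edge case: all events occur supine
        return supine_ahi / non_supine_ahi > 2.0

    def is_surgical_success(pre_ahi: float, post_ahi: float) -> bool:
        """Success: >=50% reduction in AHI and post-operative AHI <= 20/h."""
        return post_ahi <= 20.0 and post_ahi <= 0.5 * pre_ahi

    # Example values (hypothetical patient near the cohort's baseline mean):
    print(is_positional_osa(supine_ahi=48.0, non_supine_ahi=15.0))  # True
    print(is_surgical_success(pre_ahi=59.5, post_ahi=18.0))         # True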
abstract_id: PUBMED:36836008
Postoperative Airway Management after Submandibular Duct Relocation in 96 Drooling Children and Adolescents. The aim of this study was to evaluate our institution's airway management and complications after submandibular duct relocation (SMDR). We analysed a historic cohort of children and adolescents who were examined at the Multidisciplinary Saliva Control Centre between March 2005 and April 2016. Ninety-six patients underwent SMDR for excessive drooling. We studied details of the surgical procedure, postoperative swelling and other complications. Ninety-six patients, 62 males and 34 females, were treated consecutively by SMDR. Mean age at time of surgery was 14 years and 11 months. The ASA physical status was 2 in most patients. The majority of children were diagnosed with cerebral palsy (67.7%). Postoperative swelling of the floor of the mouth or tongue was reported in 31 patients (32.3%). The swelling was mild and transient in 22 patients (22.9%), but profound swelling was seen in nine patients (9.4%). In 4.2% of the patients the airway was compromised. In general, SMDR is a well-tolerated procedure, but clinicians should be aware of swelling of the tongue and floor of the mouth, which may lead to a prolonged period of endotracheal intubation or a need for reintubation, which can be challenging. After extensive intra-oral surgery such as SMDR, we strongly recommend extended perioperative intubation, with extubation only after the airway has been checked and is secure.
abstract_id: PUBMED:38478164
Validating the Revised Mating Effort Questionnaire. The mating effort questionnaire (MEQ) is a multi-dimensional self-report instrument that captures factors reflecting individual effort in upgrading from a current partner, investment in a current partner, and mate seeking when not romantically paired. In the current studies, we sought to revise the MEQ so that it distinguishes between two facets of mate seeking - mate locating and mate attracting - to enable a more nuanced measurement and understanding of individual mating effort. Moreover, we developed additional items to better measure partner investment. In total, the number of items was increased from 12 to 26. In Study 1, exploratory factor analysis revealed that a four-factor solution, reflecting partner upgrading, mate locating, mate attracting, and partner investment, yielded the best fit. In Study 2, this structure was replicated using confirmatory factor analysis in an independent sample. Based on extant studies documenting the relationships between psychopathy, short-term mating effort, and sexual risk taking, a structural equation model (SEM) indicated that trait psychopathy positively predicted mate locating, mate attracting, and partner upgrading and negatively predicted partner investment. A separate SEM showed that partner upgrading positively predicted risky sexual behaviors, while partner upgrading and mate locating positively predicted acceptance of cosmetic surgery.
abstract_id: PUBMED:29989346
Effectiveness of submandibular duct relocation in 91 children with excessive drooling: A prospective cohort study. Objective: To evaluate the effectiveness of submandibular duct relocation (SMDR) in drooling children with neurological disorders.
Design: Prospective cohort study.
Setting: Academic Outpatient Saliva Control Clinic.
Participants: Ninety-one children suffering from moderate to severe drooling.
Main Outcome Measures: Direct observational drooling quotient (DQ; 0-100) and caretaker Visual Analogue Scale (VAS; 0-100). Secondary outcome measures were drooling severity (DS) and frequency rating scales.
Results: The DQ at baseline, 8 and 32 weeks postoperatively was 26.4, 12.3 and 10.8, respectively. The VAS score decreased from 80.1 at baseline to 28.3 and 37.0 at 8 and 32 weeks after surgery. Median DS at baseline, 8 and 32 weeks was 5, 3 and 4, whereas the median drooling frequency scores were 4, 2 and 2, respectively. Five children required prolonged intubation due to transient floor-of-the-mouth swelling, two of whom developed ventilator-associated pneumonia. Another child developed atelectasis with postoperative pneumonia. Two more children needed either tube feeding for 3 days because of postoperative eating difficulties or suprapubic catheterisation for urinary retention. Children aged 12 years or older (OR = 3.41; P = 0.03) and those with adequate stability and position of the head (OR = 2.84; P = 0.09) appeared to benefit most from treatment.
Conclusions: Submandibular duct relocation combined with excision of the sublingual glands appears to be relatively safe and effective in diminishing visible drooling in children with neurological disorders, particularly in children aged 12 years and older and those without a forward head posture.
abstract_id: PUBMED:26243027
Transverse Retropalatal Collapsibility Is Associated with Obstructive Sleep Apnea Severity and Outcome of Relocation Pharyngoplasty. Objective: The aim of this study was to investigate whether the retropalatal airway shape and collapsibility defined by awake nasopharyngoscopy with Müller's maneuver were associated with apnea-hypopnea index (AHI), positional dependency, and surgical outcome of relocation pharyngoplasty in patients with obstructive sleep apnea.
Study Design: Case series with planned data collection.
Setting: Tertiary referral center.
Subjects And Methods: A total of 45 obstructive sleep apnea patients were included who underwent conservative treatment (n = 13) or relocation pharyngoplasty (n = 32), and their baseline and postoperative polysomnographies and awake nasopharyngoscopies with Müller's maneuver were reviewed. Shape ratio (transverse diameter [TD] / longitudinal diameter [LD]) in the stationary and Müller's phases and collapsibility (ColTD and ColLD) of the airway at the level of the uvular base were measured with a picture archiving and communication system. Intra- and interrater reliabilities were assessed. Associations among nasopharyngoscopic measurements, AHI, positional dependency, and surgical success (defined as a reduction of AHI ≥50% and a postoperative AHI ≤20/h) were statistically analyzed.
Results: Reliability tests indicated substantial agreements of all nasopharyngoscopic measurements between raters and within raters. A higher baseline ColTD was significantly associated with an elevated AHI (r = 0.49, P = .001), whereas a lower postoperative ColTD was significantly related to surgical success (r = -0.38, P = .034). Nasopharyngoscopic findings were not statistically significantly correlated with positional dependency.
Conclusion: Transverse retropalatal collapsibility measured by awake nasopharyngoscopy with Müller's maneuver helps to predict individuals with moderate to severe sleep apnea and surgical outcome.
abstract_id: PUBMED:36259305
The Opinions and Practices of Saudi Arabian Dentists about Cervical Margin Relocation. Aim: The aim of this study was to assess the opinions and practices of different dentists about the cervical margin relocation (CMR) concept.
Materials And Methods: A total of 432 general dentists, advanced general dentists (AGDs), periodontists, restorative dentists, and prosthodontists practicing in Saudi Arabia were approached in person or asked to complete an electronic survey (Google Forms) assessing demographic data and their opinions and practice of CMR. Differences between groups were assessed using the Chi-squared test, and binary regression models were constructed to identify predictors of opinions and practice of CMR.
Results: About half of the surveyed dentists practiced CMR when indicated, but many felt that it represents an invasion of the biological width and might affect crown survival. Participants had several, often concurrent concerns about CMR. The opinions and practices of CMR were influenced by several factors including educational degree, work setting, country of clinical training, years of experience, and specialty.
Conclusion: The relatively conservative attitude of dentists towards CMR is justifiable, as it is a relatively new concept with little long-term clinical data. Clinical trials with sufficient follow-up periods are now needed to evaluate outcomes from CMR to provide further confidence to dentists to implement the procedure.
Clinical Significance: Deep proximal carious lesions extending subgingivally are a common clinical scenario. Assessing the opinions and practices of dentists towards conservative treatment with CMR provides the basis for encouraging practitioners to use the procedure both clinically and in clinical trials.
Answer: The abstracts provided do not directly address the question of relocating patients after cardiac surgery and whether it is worth the effort. However, they do discuss various relocation procedures in different medical contexts, which can offer some insights into the potential benefits and considerations of relocating patients in a broader sense.
For instance, one abstract discusses the relocation of a pacemaker in a patient with lung cancer to avoid direct radiation exposure during radiotherapy (PUBMED:38174167). This suggests that relocation can be a critical intervention to enable necessary treatments while protecting vital devices or organs from potential harm.
Another abstract describes the use of autologous platelet-rich fibrin in relocation pharyngoplasty for obstructive sleep apnoea, which was found to decrease the incidence of wound breakdown and postoperative pain, suggesting that certain relocation procedures can improve surgical outcomes (PUBMED:31272901).
In the context of retrograde intrarenal surgery, relocation of lower pole stones was compared with in situ treatment, and the study found similar stone-free rates, indicating that relocation might not always provide a significant advantage over other methods (PUBMED:36074950).
The relocation of inadequate resection margins during oral cavity oncological surgery was shown to be feasible and reliable, enabling accurate additional resection during the initial surgery (PUBMED:30706624). This indicates that relocation techniques can be crucial for ensuring complete surgical resection and potentially improving oncological outcomes.
While these abstracts discuss relocation in various medical procedures, they do not provide evidence specifically related to the relocation of patients after cardiac surgery. Therefore, based on the provided abstracts, it is not possible to conclusively determine whether patient relocation after cardiac surgery is worth the effort. Further research and context-specific evaluations would be necessary to answer this question accurately. |
Instruction: Can electronic fetal monitoring identify preterm neonates with cerebral white matter injury?
Abstracts:
abstract_id: PUBMED:15738008
Can electronic fetal monitoring identify preterm neonates with cerebral white matter injury? Objective: Although preterm delivery occurs in only 10% of all births, these infants are at high risk for cerebral white matter injury and constitute a third of all cerebral palsy cases. Our objective was to estimate if electronic monitoring can identify preterm fetuses diagnosed with brain injury during the neonatal period.
Methods: In this case-control study, 150 consecutive neonates with ultrasonography-diagnosed cerebral white matter injury were matched by gestational age within 7 days to 150 controls with normal head ultrasonograms. Tracings were retrieved for 125 cases (83%) and 121 controls (81%) and reviewed by 3 perinatologists blinded to outcome. Vaginal (64 cases, 72 controls) and cesarean deliveries (61 cases, 49 controls) were analyzed separately.
Results: There was no difference in baseline heart rate, tachycardia, bradycardia, short-term variability, accelerations, reactivity, number or types of decelerations, or bradycardic episodes between cases and controls in either the vaginal or cesarean delivery groups. For the 6 neonates with metabolic acidosis severe enough to increase the risk for long-term neurologic morbidity, there was a significantly increased occurrence of a baseline amplitude range less than 5 beats per minute; however, its positive predictive value for severe metabolic acidosis was only 7.7%. Increasing late decelerations were associated with decreasing umbilical arterial pH and base excess, but were not significantly different between the acidosis and control groups (1.0 ± 1.8 vs 0.55 ± 1.23 late decelerations per hour, P = .39).
Conclusion: Although decreased short-term variability and increased late decelerations are associated with decreasing umbilical arterial pH and base excess, electronic fetal monitoring is unable to identify preterm neonates with cerebral white matter injury.
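A brief arithmetic note on the 7.7% figure: positive predictive value is PPV = TP / (TP + FP), so a PPV of 7.7% means roughly one true case per 13 positive tracings. The decomposition below is hypothetical, chosen only to be consistent with the reported value.

    def ppv(true_positives: int, false_positives: int) -> float:
        """Positive predictive value = TP / (TP + FP)."""
        return true_positives / (true_positives + false_positives)

    # Hypothetical split: 1 acidotic neonate correctly flagged by reduced
    # baseline variability, 12 flagged neonates without severe acidosis.
    print(f"PPV = {ppv(1, 12):.1%}")  # -> 7.7%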
abstract_id: PUBMED:25844316
Stochastic process for white matter injury detection in preterm neonates. Preterm births are rising in Canada and worldwide. As clinicians strive to identify preterm neonates at greatest risk of significant developmental or motor problems, accurate predictive tools are required. Identifying the infants at highest risk will allow them to receive early developmental interventions, and will also enable clinicians to implement and evaluate new methods to improve outcomes. While severe white matter injury (WMI) is associated with adverse developmental outcome, more subtle injuries are difficult to identify and their association with later impairments remains unknown. Thus, our goal was to develop an automated method for detection and visualization of brain abnormalities in MR images acquired in very preterm born neonates. We have developed a technique to detect WMI in T1-weighted images acquired in 177 very preterm born infants (24-32 weeks gestation). Our approach uses a stochastic process that estimates the likelihood of intensity variations in nearby pixels, with small variations being more likely than large variations. We first detect the boundaries between normal and injured regions of the white matter. Following this, we use a measure of pixel similarity to identify WMI regions. Our algorithm is able to detect WMI in all of the images in the ground truth dataset, with some false positives in situations where the white matter region is not segmented accurately.
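The pipeline this abstract describes - a neighbourhood intensity likelihood, boundary detection, then similarity-based labelling of injured regions - can be illustrated at a toy level as follows. This is a generic 2-D reconstruction of the idea under assumed parameters (Gaussian likelihood, arbitrary thresholds), not the authors' implementation.

    import numpy as np

    def neighbour_likelihood(img: np.ndarray, sigma: float = 10.0) -> np.ndarray:
        """Likelihood of each pixel given its 4-neighbourhood: small intensity
        variations are more likely than large ones (Gaussian falloff)."""
        pad = np.pad(img.astype(float), 1, mode="edge")
        centre = pad[1:-1, 1:-1]
        diffs = [centre - pad[:-2, 1:-1], centre - pad[2:, 1:-1],
                 centre - pad[1:-1, :-2], centre - pad[1:-1, 2:]]
        return np.mean([np.exp(-(d ** 2) / (2 * sigma ** 2)) for d in diffs], axis=0)

    def detect_wmi(img: np.ndarray, wm_mask: np.ndarray,
                   like_thresh: float = 0.5, z_thresh: float = 2.0) -> np.ndarray:
        """Candidate WMI pixels: boundary pixels (low neighbourhood likelihood)
        plus T1-hyperintense pixels relative to normal white matter."""
        boundary = (neighbour_likelihood(img) < like_thresh) & wm_mask
        wm_vals = img[wm_mask].astype(float)
        z = (img - wm_vals.mean()) / (wm_vals.std() + 1e-9)
        hyperintense = (z > z_thresh) & wm_mask
        return boundary | hyperintense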
abstract_id: PUBMED:29890324
White matter injury in term neonates with congenital heart diseases: Topology & comparison with preterm newborns. Background: Neonates with congenital heart disease (CHD) are at high risk of punctate white matter injury (WMI) and impaired brain development. We hypothesized that WMI in CHD neonates occurs in a characteristic distribution that shares topology with preterm WMI and that lower birth gestational age (GA) is associated with larger WMI volume.
Objective: (1) To quantitatively assess the volume and location of WMI in CHD neonates across three centres. (2) To compare the volume and spatial distribution of WMI between term CHD neonates and preterm neonates using lesion mapping.
Methods: In 216 term born CHD neonates from three prospective cohorts (mean birth GA: 39 weeks), WMI was identified in 86 neonates (UBC: 29; UCSF: 43; UCZ: 14) on pre- and/or post-operative T1 weighted MRI. WMI was manually segmented and volumes were calculated. A standard brain template was generated. Probabilistic WMI maps (total, pre- and post-operative) were developed in this common space. Using these maps, WMI in the term CHD neonates was compared with that in preterm neonates: 58 at early-in-life (mean postmenstrual age at scan 32.2 weeks); 41 at term-equivalent age (mean postmenstrual age at scan 40.1 weeks).
Results: The total WMI volumes of CHD neonates across centres did not differ (p = 0.068): UBC (median = 84.6 mm3, IQR = 26-174.7 mm3); UCSF (median = 104 mm3, IQR = 44-243 mm3); UCZ (median = 121 mm3, IQR = 68-200.8 mm3). The spatial distribution of WMI in CHD neonates showed strong concordance across centres with predilection for anterior and posterior rather than central lesions. Predominance of anterior lesions was apparent on the post-operative WMI map relative to the pre-operative map. Lower GA at birth predicted an increasing volume of WMI across the full cohort (41.1 mm3 increase of WMI per week decrease in gestational age; 95% CI 11.5-70.8; p = 0.007), when accounting for centre and heart lesion. While WMI in term CHD and preterm neonates occurs most commonly in the intermediate zone/outer subventricular zone there is a paucity of central lesions in the CHD neonates relative to preterms.
Conclusions: WMI in term neonates with CHD occurs in a characteristic topology. The spatial distribution of WMI in term neonates with CHD reflects the expected maturation of pre-oligodendrocytes such that the central regions are less vulnerable than in the preterm neonates.
abstract_id: PUBMED:34456064
The dimensions of white matter injury in preterm neonates. White matter injury (WMI) represents a frequent form of parenchymal brain injury in preterm neonates. Several dimensions of WMI are recognized, with distinct neuropathologic features involving a combination of destructive and maturational anomalies. Hypoxia-ischemia is the main mechanism leading to WMI and adverse white matter development, which result from injury to the oligodendrocyte precursor cells. Inflammation might act as a potentiator for WMI. A combination of hypoxia-ischemia and inflammation is frequent in several neonatal comorbidities such as postnatal infections, necrotizing enterocolitis (NEC) and bronchopulmonary dysplasia, all known contributors to WMI. White matter injury is an important predictor of adverse neurodevelopmental outcomes. When WMI is detected on neonatal brain imaging, a detailed characterization of the injury (pattern of injury, severity and location) may enhance the ability to predict outcomes. This clinically-oriented review will provide an overview of the pathophysiology and imaging diagnosis of the multiple dimensions of WMI, will explore the association between postnatal complications and WMI, and will provide guidance on the significance of white matter anomalies for motor and cognitive development.
abstract_id: PUBMED:30179864
Human Umbilical Cord Blood Therapy Protects Cerebral White Matter from Systemic LPS Exposure in Preterm Fetal Sheep. Background: Infants born preterm following exposure to in utero inflammation/chorioamnionitis are at high risk of brain injury and life-long neurological deficits. In this study, we assessed the efficacy of early intervention umbilical cord blood (UCB) cell therapy in a large animal model of preterm brain inflammation and injury. We hypothesised that UCB treatment would be neuroprotective for the preterm brain following subclinical fetal inflammation.
Methods: Chronically instrumented fetal sheep at 0.65 gestation were administered lipopolysaccharide (LPS, 150 ng, O55:B5) intravenously over 3 consecutive days, followed by 100 million human UCB mononuclear cells 6 h after the final LPS dose. Controls were administered saline instead of LPS and cells. Ten days after the first LPS dose, the fetal brain and cerebrospinal fluid were collected for analysis of subcortical and periventricular white matter injury and inflammation.
Results: LPS administration increased microglial aggregate size, neutrophil recruitment, astrogliosis and cell death compared with controls. LPS also reduced total oligodendrocyte count and decreased mature myelinating oligodendrocytes. UCB cell therapy attenuated cell death and inflammation, and recovered total and mature oligodendrocytes, compared with LPS.
Conclusions: UCB cell treatment following inflammation reduces preterm white matter brain injury, likely mediated via anti-inflammatory actions.
abstract_id: PUBMED:36370872
Fetal heart rate evolution and brain imaging findings in preterm infants with severe cerebral palsy. Background: Cerebral palsy is more common among preterm infants than among full-term infants. Although there is still no clear evidence that fetal heart rate monitoring effectively reduces cerebral palsy incidence, it is helpful to estimate the timing of brain injury leading to cerebral palsy and the causal relationship with delivery based on the fetal heart rate evolution patterns. Understanding the relationship between the timing and the type of brain injury can help to identify preventive measures in obstetrical care.
Objective: This study aimed to examine the relationship between the timing of insults and the type of brain injury in preterm infants with severe cerebral palsy.
Study Design: This longitudinal study was based on a nationwide database for cerebral palsy. The data of infants with severe cerebral palsy (equivalent to levels 3-5 of the Gross Motor Function Classification System-Expanded and Revised), born between 2009 and 2014 at 28 to 33 weeks of gestation, were included. The intrapartum fetal heart rate evolution patterns were evaluated by 3 obstetricians blinded to clinical information other than gestational age at birth, and these were categorized after agreement by at least 2 of the 3 reviewers into (1) continuous bradycardia, (2) persistently nonreassuring (prenatal onset), (3) reassuring-prolonged deceleration, (4) Hon's pattern (intrapartum onset), (5) persistently reassuring (pre- or postnatal onset), and (6) unclassified. Infant brain magnetic resonance imaging findings at term-equivalent age were assessed by a pediatric neurologist blinded to the background details, except for gestational age at birth and corrected age at image acquisition, and these were categorized as (1) basal ganglia-thalamus, (2) white matter, (3) watershed cortex or subcortex, (4) stroke, (5) normal, and (6) unclassified based on the predominant site involved. The risk factors for the basal ganglia-thalamus group were compared with those of the combined white matter and watershed injuries group.
Results: Among 1593 infants with severe cerebral palsy, 231 were born at 28 to 33 weeks of gestation, and 140 met the eligibility criteria. Fetal heart rate evolution patterns were categorized as bradycardia (17% [24]); persistently nonreassuring (40% [56]); reassuring-prolonged deceleration (7% [10]); reassuring-Hon (6% [8]); persistently reassuring (7% [10]); and unclassified (23% [32]). Cerebral palsy was presumed to have an antenatal onset in 57% of infants and to have been caused by intrapartum insult in 13% of infants. Magnetic resonance imaging showed that 34% (n=48) of infants developed basal ganglia-thalamus-dominant brain injury. Of the remaining 92 infants, 43% (60) showed white matter injuries, 1% (1) showed watershed injuries, 4% (5) showed stroke, 1% (1) had normal findings, and 18% (25) had unclassified findings. Infants with continuous bradycardia (adjusted odds ratio, 1033.06; 95% confidence interval, 15.49-68,879.92) and persistently nonreassuring fetal heart rate patterns (61.20; 2.09-1793.12) had a significantly increased risk for basal ganglia-thalamus injury.
Conclusion: Severe cerebral palsy was presumed to have an antenatal onset in 57% of infants and to have been caused by intrapartum insult in only 13% of infants born at 28 to 33 weeks of gestation. Although the white matter-watershed injury was predominant in the study populations, severe acute hypoxia-ischemia may be an important prenatal etiology of severe cerebral palsy in preterm infants.
abstract_id: PUBMED:30232359
Association between Subcortical Morphology and Cerebral White Matter Energy Metabolism in Neonates with Congenital Heart Disease. Complex congenital heart disease (CHD) is associated with neurodevelopmental impairment, the mechanism of which is unknown. Cerebral cortical dysmaturation in CHD is linked to white matter abnormalities, including developmental vulnerability of the subplate, in relation to oxygen delivery and metabolism deficits. In this study, we report associations between subcortical morphology and white matter metabolism in neonates with CHD using quantitative magnetic resonance imaging (MRI) and spectroscopy (MRS). Multi-modal brain imaging was performed in three groups of neonates close to term-equivalent age: (1) term CHD (n = 56); (2) preterm CHD (n = 37) and (3) preterm control group (n = 22). Thalamic volume and cerebellar transverse diameter were obtained in relation to cerebral metrics and white matter metabolism. Short echo single-voxel MRS of parietal and frontal white matter was used to quantitate metabolites related to brain maturation (N-acetyl aspartate [NAA], choline, myo-inositol), neurotransmitter (glutamate), and energy metabolism (glutamine, citrate, creatine and lactate). Multi-variate regression was performed to delineate associations between subcortical morphological measurements and white matter metabolism controlling for age and white matter injury. Reduced thalamic volume, most pronounced in the preterm control group, was associated with increased citrate levels in all three groups in the parietal white matter. In contrast, reduced cerebellar volume, most pronounced in the preterm CHD group, was associated with reduced glutamine in parietal grey matter in both CHD groups. Single ventricle anatomy, aortic arch obstruction, and cyanotic lesion were predictive of the relationship between reduced subcortical morphometry and reduced GLX (particularly glutamine) in both CHD cohorts (frontal white matter and parietal grey matter). Subcortical morphological associations with brain metabolism were also distinct within each of the three groups, suggesting these relationships in the CHD groups were not directly related to prematurity or white matter injury alone. Taken together, these findings suggest that subplate vulnerability in CHD is likely relevant to understanding the mechanism of both cortical and subcortical dysmaturation in CHD infants. Future work is needed to link this potential pattern of encephalopathy of CHD (including the constellation of grey matter, white matter and brain metabolism deficits) to not only abnormal fetal substrate delivery and oxygen conformance, but also regional deficits in cerebral energy metabolism.
abstract_id: PUBMED:30840951
The Effect of Antenatal Betamethasone on White Matter Inflammation and Injury in Fetal Sheep and Ventilated Preterm Lambs. Antenatal administration of betamethasone (BM) is a common antecedent of preterm birth, but there is limited information about its impact on the acute evolution of preterm neonatal brain injury. We aimed to compare the effects of maternal BM in combination with mechanical ventilation on the white matter (WM) of late preterm sheep. At 0.85 of gestation, pregnant ewes were randomly assigned to receive intra-muscular (i.m.) saline (n = 9) or i.m. BM (n = 13). Lambs were delivered and unventilated controls (UVCSal, n = 4; UVCBM, n = 6) were humanely killed without intervention; ventilated lambs (VentSal, n = 5; VentBM, n = 7) were injuriously ventilated for 15 min, followed by conventional ventilation for 75 min. Cardiovascular and cerebral haemodynamics and oxygenation were measured continuously. The cerebral WM underwent assessment of inflammation and injury, and oxidative stress was measured in the cerebrospinal fluid (CSF). In the periventricular and subcortical WM tracts, the proportion of amoeboid (activated) microglia, the density of astrocytes, and the number of blood vessels with protein extravasation were higher in UVCBM than in UVCSal (p < 0.05 for all). During ventilation, tidal volume, mean arterial pressure, carotid blood flow, and oxygen delivery were higher in VentBM lambs (p < 0.05 vs. VentSal). In the subcortical WM, microglial infiltration was increased in the VentSal group compared to UVCSal. The proportion of activated microglia and protein extravasation was higher in the VentBM group compared to VentSal within the periventricular and subcortical WM tracts (p < 0.05). CSF oxidative stress was increased in the VentBM group compared to UVCSal, UVCBM, and VentSal groups (p < 0.05). Antenatal BM was associated with inflammation and vascular permeability in the WM of late preterm fetal sheep. During the immediate neonatal period, the increased carotid perfusion and oxygen delivery in BM-treated lambs was associated with increased oxidative stress, microglial activation and microvascular injury.
abstract_id: PUBMED:37719062
Oligodendrocyte Progenitor Cell Transplantation Ameliorates Preterm Infant Cerebral White Matter Injury in Rats Model. Background: Cerebral white matter injury (WMI) is the most common brain injury in preterm infants, leading to motor and developmental deficits often accompanied by cognitive impairment. However, there is no effective treatment. One promising approach for treating preterm WMI is cell replacement therapy, in which lost cells can be replaced by exogenous oligodendrocyte progenitor cells (OPCs).
Methods: This study developed a method to differentiate human neural stem cells (hNSCs) into human OPCs (hOPCs). The preterm WMI animal model was established in rats on postnatal day 3, and OLIG2+/NG2+/PDGFRα+/O4+ hOPCs were enriched and transplanted into the corpus callosum on postnatal day 10. Then, histological analysis and electron microscopy were used to detect lesion structure; behavioral assays were performed to detect cognitive function.
Results: Transplanted hOPCs survived and migrated throughout the major white matter tracts. Morphological differentiation of transplanted hOPCs was observed. Histological analysis revealed structural repair of lesioned areas. Re-myelination of the axons in the corpus callosum was confirmed by electron microscopy. The Morris water maze test revealed cognitive function recovery.
Conclusion: Our study showed that exogenous hOPCs could differentiate into CC1+ oligodendrocytes (OLs) in the brain of WMI rats, improving their cognitive functions.
abstract_id: PUBMED:37164491
Improving the interpretation of electronic fetal monitoring: the fetal reserve index. Electronic fetal monitoring, particularly in the form of cardiotocography, forms the centerpiece of labor management. Initially successfully designed for stillbirth prevention, there was hope to also include prediction and prevention of fetal acidosis and its sequelae. With the routine use of electronic fetal monitoring, the cesarean delivery rate increased from <5% in the 1970s to >30% at present. Most at-risk cases produced healthy babies, resulting in part from considerable confusion as to the differences between diagnostic and screening tests. Electronic fetal monitoring is clearly a screening test. Multiple attempts have aimed at enhancing its ability to accurately distinguish babies at risk of in utero injury from those who are not and to do this in a timely manner so that appropriate intervention can be performed. Even key electronic fetal monitoring opinion leaders admit that this goal has yet to be achieved. Our group has developed a modified approach called the "Fetal Reserve Index" that contextualizes the findings of electronic fetal monitoring by formally including the presence of maternal, fetal, and obstetrical risk factors and increased uterine contraction frequencies and breaking up the tracing into 4 quantifiable components (heart rate, variability, decelerations, and accelerations). The result is a quantitative 8-point metric, with each variable being weighted equally in version 1.0. In multiple previously published refereed papers, we have shown that in head-to-head studies comparing the fetal reserve index with the American College of Obstetricians and Gynecologists' fetal heart rate categories, the fetal reserve index more accurately identifies babies born with cerebral palsy and could also reduce the rates of emergency cesarean delivery and vaginal operative deliveries. We found that the fetal reserve index scores and fetal pH and base excess actually begin to fall earlier in the first stage of labor than was commonly appreciated, and the fetal reserve index provides a good surrogate for pH and base excess values. Finally, the last fetal reserve index score before delivery combined with early analysis of neonatal heart rate and acid/base balance shows that the period of risk for neonatal neurologic impairment can continue for the first 30 minutes of life and requires much closer neonatal observation than is currently being done.
Answer: No, electronic fetal monitoring is unable to identify preterm neonates with cerebral white matter injury. A study aimed at estimating if electronic fetal monitoring could identify preterm fetuses diagnosed with brain injury during the neonatal period found that there was no difference in various heart rate parameters between cases with cerebral white matter injury and controls with normal head ultrasonograms. The study concluded that despite associations between decreased short-term variability and increased late decelerations with decreasing umbilical arterial pH and base excess, electronic fetal monitoring could not effectively identify preterm neonates with cerebral white matter injury (PUBMED:15738008). |
Instruction: Lipoprotein particle subclass profiles among metabolically healthy and unhealthy obese and non-obese adults: does size matter?
Abstracts:
abstract_id: PUBMED:26277632
Lipoprotein particle subclass profiles among metabolically healthy and unhealthy obese and non-obese adults: does size matter? Objectives: No data regarding lipoprotein particle profiles in obese and non-obese metabolic health subtypes exist. We characterised lipoprotein size, particle and subclass concentrations among metabolically healthy and unhealthy obese and non-obese adults.
Methods: A cross-sectional sample of 1834 middle-aged Irish adults was classified as obese (BMI ≥30 kg/m²) or non-obese (BMI <30 kg/m²). Metabolic health was defined using three metabolic health definitions based on various cardiometabolic abnormalities including metabolic syndrome criteria, insulin resistance and inflammation. Lipoprotein size, particle and subclass concentrations were determined using nuclear magnetic resonance (NMR) spectroscopy.
Results: Lipoprotein profiling identified a range of adverse phenotypes among the metabolically unhealthy individuals, regardless of BMI and metabolic health definition, including increased numbers of small low density lipoprotein (LDL) (P < 0.001) and high density lipoprotein (HDL) particles (P < 0.001), large very low density lipoprotein (VLDL) particles (P < 0.001) and greater lipoprotein related insulin resistance (P < 0.001). The most significant predictors of metabolic health were lower numbers of large VLDL (ORs 2.72-3.13 and 2.49-3.86, P < 0.05 among obese and non-obese individuals, respectively) and small dense LDL particles (ORs 1.78-2.39 and 1.50-1.94, P < 0.05) and higher numbers of large LDL (ORs 1.82-2.66 and 2.84-3.27, P < 0.05) and large HDL particles (ORs 1.88-2.58 and 1.81-3.49, P < 0.05).
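For illustration, odds ratios of the kind reported above are the exponentiated coefficients of a logistic model. A minimal sketch, assuming hypothetical standardized particle counts rather than the study's NMR data:

    # Illustrative sketch: per-SD odds ratios for metabolic health from a
    # logistic model, analogous to the particle-concentration ORs reported
    # above. All variables and data here are synthetic placeholders.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    large_vldl = rng.normal(size=n)          # standardized particle counts
    small_ldl = rng.normal(size=n)
    logit_p = -0.5 - 0.8 * large_vldl - 0.5 * small_ldl
    healthy = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    X = sm.add_constant(np.column_stack([large_vldl, small_ldl]))
    fit = sm.Logit(healthy, X).fit(disp=0)
    odds_ratios = np.exp(fit.params)         # exp(coef) = odds ratio
    ci = np.exp(fit.conf_int())              # 95% confidence intervals
    print(odds_ratios, ci)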
Conclusions: Metabolically healthy adults displayed favourable lipoprotein particle profiles, irrespective of BMI and metabolic health definition. These findings underscore the importance of maintaining a healthy lipid profile in the context of overall cardiometabolic health.
abstract_id: PUBMED:32405361
The metabolome profiling of obese and non-obese individuals: Metabolically healthy obese and unhealthy non-obese paradox. Objectives: The molecular basis of "metabolically healthy obese" and "metabolically unhealthy non-obese" phenotypes is not fully understood. Our objective was to identify metabolite patterns differing in obese (metabolically healthy vs unhealthy (MHO vs MUHO)) and non-obese (metabolically healthy vs unhealthy (MHNO vs MUHNO)) individuals.
Materials And Methods: This case-control study was performed on 86 subjects stratified into four groups using anthropometric and clinical measurements: MHO (21), MUHO (21), MHNO (22), and MUHNO (22). Serum metabolites were profiled using nuclear magnetic resonance (NMR). Multivariate analysis was applied to uncover discriminant metabolites, and enrichment analysis was performed to identify underlying pathways.
Results: Significantly higher levels of glutamine, asparagine, alanine, L-glutathione reduced, 2-aminobutyrate, taurine, betaine, and choline, and lower level of D-sphingosine were observed in MHO group compared with MUHO. In comparison of MHNO and MUHNO groups, significantly lower levels of alanine, glycine, glutamine, histidine, L-glutathione reduced, and betaine, and higher levels of isoleucine, L-proline, cholic acid, and carnitine appeared in MUHNO individuals. Moreover, significantly affected pathways included amino acid metabolism, urea cycle and ammonia recycling in MUHO subjects and glutathione metabolism, amino acid metabolism, and ammonia recycling in MUHNO members.
Conclusion: A literature review suggested that the altered levels of most metabolites might be associated with insulin sensitivity and insulin resistance in MHO and MUHNO individuals, respectively. In addition, the abnormal amino acid metabolism and ammonia recycling involved in the unhealthy phenotypes (MUHO, MUHNO) might be associated with insulin resistance.
abstract_id: PUBMED:34947881
Association of Metabolically Healthy and Unhealthy Obesity Phenotype with Markers Related to Obesity, Diabetes among Young, Healthy Adult Men. Analysis of MAGNETIC Study. Adipose tissue secretes many regulatory factors called adipokines. Adipokines affect the metabolism of lipids and carbohydrates. They also influence the regulation of the immune system and inflammation. The current study aimed to evaluate the association between markers related to obesity, diabesity and adipokines and metabolically healthy and unhealthy obesity in young men. The study included 98 healthy participants. We divided participants into three subgroups based on body mass index and metabolic health definition: 49 metabolically healthy normal-weight patients, 27 metabolically healthy obese patients and 22 metabolically unhealthy obese patients. The 14 metabolic markers selected were measured in serum or plasma. The analysis showed associations between markers related to obesity, diabesity and adipokines in metabolically healthy and unhealthy obese participants. The decreased level of adipsin (p < 0.05) was only associated with metabolically healthy obesity, not with metabolically unhealthy obesity. The decreased level of ghrelin (p < 0.001) and increased level of plasminogen activator inhibitor-1 (p < 0.01) were only associated with metabolically unhealthy obesity, not with metabolically healthy obesity. The decreased level of adiponectin and increased levels of leptin, c-peptide, insulin and angiopoietin-like 3 protein were associated with metabolically healthy and unhealthy obesity. In conclusion, our data show that metabolically healthy obesity was more similar to metabolically unhealthy obesity in terms of the analyzed markers related to obesity and diabesity.
abstract_id: PUBMED:33440881
Association of Metabolically Healthy and Unhealthy Obesity Phenotypes with Oxidative Stress Parameters and Telomere Length in Healthy Young Adult Men. Analysis of the MAGNETIC Study. Obesity is a significant factor related to metabolic disturbances that can lead to metabolic syndrome (MetS). Metabolic dysregulation causes oxidative stress, which affects telomere structure. The current study aimed to evaluate the relationships between telomere length, oxidative stress and the metabolically healthy and unhealthy phenotypes in healthy young men. Ninety-eight participants were included in the study (49 healthy slim and 49 obese patients). Study participants were divided into three subgroups according to body mass index and metabolic health. Selected oxidative stress markers were measured in serum. Relative telomere length (rTL) was measured using quantitative polymerase chain reaction. The analysis showed associations between laboratory markers, oxidative stress markers and rTL in metabolically healthy and unhealthy participants. Total oxidation status (TOS), total antioxidant capacity (TAC) and rTL were significantly connected with metabolically unhealthy obesity. TAC was associated with metabolically healthy obesity. Telomeres shorten in patients with metabolic dysregulation related to oxidative stress and obesity linked to MetS. Further studies among young metabolically healthy and unhealthy individuals are needed to determine the pathways related to metabolic disturbances that cause oxidative stress that leads to MetS.
abstract_id: PUBMED:36313785
Sex differences in metabolically healthy and metabolically unhealthy obesity among Chinese children and adolescents. Objectives: To analyze sex differences in the prevalence of obesity phenotypes and their risk factors among children and adolescents aged 7-18 years in China.
Methods: We enrolled 15,114 children and adolescents aged 7-18 years into the final analysis. Obesity phenotypes were classified by body mass index (BMI) and metabolic status as metabolically healthy or unhealthy obesity. In addition, we collected data on four groups of possible influencing factors for obesity phenotypes through questionnaires: demographic, parental, early-life, and lifestyle indicators. Multinomial logistic regression analysis in a generalized linear mixed model (GLMM) was selected to estimate the odds ratio (OR) and 95% confidence interval (95% CI) for identifying risk factors and to control for the cluster effects of schools. More importantly, interaction terms of sex and each indicator were established to demonstrate the sex differences.
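As a rough illustration of this modelling step, a multinomial logit can be fitted and its coefficients exponentiated into odds ratios. The sketch below ignores the school-level random effects of the GLMM for brevity and uses hypothetical data:

    # Illustrative sketch of a multinomial outcome model (MHO/MUO vs. a
    # non-obese reference). Labels, predictors, and counts are synthetic.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 1000
    df = pd.DataFrame({
        "single_child": rng.binomial(1, 0.4, n),
        "parental_smoking": rng.binomial(1, 0.3, n),
    })
    # 0 = non-obese reference, 1 = MHO, 2 = MUO (hypothetical coding)
    df["phenotype"] = rng.choice([0, 1, 2], size=n, p=[0.8, 0.08, 0.12])

    X = sm.add_constant(df[["single_child", "parental_smoking"]])
    fit = sm.MNLogit(df["phenotype"], X).fit(disp=0)
    print(np.exp(fit.params))  # odds ratios relative to the reference class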
Results: The prevalences of metabolically healthy obesity (MHO), metabolically unhealthy obesity (MUO), metabolically healthy overweight and obesity (MHOO), and metabolically unhealthy overweight and obesity (MUOO) were 3.5%, 5.6%, 11.1%, and 13.0% respectively, with higher prevalence in boys (5.3% vs. 1.6%, 7.9% vs. 3.1%, 14.3% vs. 7.7%, 15.6% vs. 10.1%). In addition, younger ages, single children, parental smoking, parental history of diseases (overweight, hypertension, diabetes), caesarean, premature, and delayed delivery time, high birth weight, insufficient sleep time, and excessive screen time were considered as important risk factors of MHO and MUO among children and adolescents (p < 0.05). More notably, boys were at higher risk of MUO when they were single children (boys: OR = 1.56, 95% CI: 1.24-1.96; girls: OR = 1.12, 95% CI: 0.82-1.54), while girls were more sensitive to MUO with parental smoking (girls: OR = 1.34, 95% CI: 1.02-1.76; boys: OR = 1.16, 95% CI: 0.97-1.39), premature delivery (girls: OR = 3.11, 95% CI: 1.59-6.07; boys: OR = 1.22, 95% CI: 0.67-2.22), high birth weight (girls: OR = 2.45, 95% CI: 1.63-3.69; boys: OR = 1.28, 95% CI: 0.96-1.70), and excessive screen time (girls: OR = 1.47, 95% CI: 1.06-2.04; boys: OR = 0.97, 95% CI: 0.79-1.20), with significant interaction terms for sex difference (p for interaction < 0.05).
Conclusions: MHO and MUO are becoming prevalent among Chinese children and adolescents. Significant sex differences in the prevalence of obesity phenotypes as well as their environmental and genetic risk factors suggest it might be necessary to manage obesity phenotype problems from a sex perspective.
abstract_id: PUBMED:35287586
Metabolically healthy versus unhealthy obese phenotypes in relation to hypertension incidence; a prospective cohort study. Background: Although obesity increases the risk of hypertension, the effect of obesity based on metabolic status on the incidence of hypertension is not known. This study aimed to determine the association between obesity phenotypes including metabolically unhealthy obesity (MUO) and metabolically healthy obesity (MHO) and the risk of hypertension incidence.
Methods: We conducted a prospective cohort study on 6747 adults aged 35-65 from the Ravansar non-communicable diseases (RaNCD) study. Obesity was defined as a body mass index above 30 kg/m², and metabolically unhealthy status was defined as the presence of at least two metabolic disorders based on the International Diabetes Federation criteria. Obesity phenotypes were categorized into four groups: MUO, MHO, metabolically unhealthy non-obesity (MUNO), and metabolically healthy non-obesity (MHNO). Cox proportional hazards regression models were applied to analyze associations with hypertension incidence.
Results: The MHO (HR: 1.37; 95% CI: 1.03-1.86) and MUO phenotypes (HR: 2.44; 95% CI: 1.81-3.29) were associated with higher hypertension risk compared to MHNO. In addition, MUNO phenotype was significantly associated with risk of hypertension incidence (HR: 1.65; 95% CI: 1.29-2.14).
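Hazard ratios like these come from a Cox proportional hazards fit. A minimal sketch with the lifelines package, using synthetic follow-up data rather than the RaNCD cohort:

    # Illustrative sketch of the Cox model behind the reported HRs;
    # phenotype labels, times, and events below are randomly generated.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(2)
    n = 300
    phenotype = rng.choice(["MHNO", "MHO", "MUNO", "MUO"], size=n)
    df = pd.DataFrame({
        "time": rng.exponential(8, n),             # years of follow-up
        "event": rng.binomial(1, 0.3, n),          # hypertension onset
        "MHO": (phenotype == "MHO").astype(int),   # dummies vs. MHNO
        "MUNO": (phenotype == "MUNO").astype(int),
        "MUO": (phenotype == "MUO").astype(int),
    })

    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    print(cph.hazard_ratios_)  # exp(coef), analogous to the reported HRs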
Conclusions: Both metabolically healthy and unhealthy obesity increased the risk of incident hypertension. However, the increase in risk was greater for the metabolically unhealthy phenotype.
abstract_id: PUBMED:37233682
Metabolome Profiling and Pathway Analysis in Metabolically Healthy and Unhealthy Obesity among Chinese Adolescents Aged 11-18 Years. The underlying mechanisms of the development of unhealthy metabolic phenotypes in obese children and adolescents remain unclear. We aimed to screen the metabolomes of individuals with the unhealthy obesity phenotype and identify the potential metabolic pathways that could regulate various metabolic profiles of obesity in Chinese adolescents. A total of 127 adolescents aged 11-18 years old from China were investigated using a cross-sectional study. The participants were classified as having metabolically healthy obesity (MHO) or metabolically unhealthy obesity (MUO) based on the presence/absence of metabolic abnormalities defined by metabolic syndrome (MetS) and body mass index (BMI). Serum-based metabolomic profiling using gas chromatography-mass spectrometry (GC-MS) was undertaken on 67 MHO and 60 MUO individuals. ROC analyses showed that palmitic acid, stearic acid, and phosphate could predict MUO, and that glycolic acid, alanine, 3-hydroxypropionic acid, and 2-hydroxypentanoic acid could predict MHO (all p < 0.05) from selected samples. Five metabolites predicted MUO, 12 metabolites predicted MHO in boys, and only two metabolites predicted MUO in girls. Moreover, several metabolic pathways may be relevant in distinguishing the MHO and MUO groups, including the fatty acid biosynthesis, fatty acid elongation in mitochondria, propanoate metabolism, glyoxylate and dicarboxylate metabolism, and fatty acid metabolism pathways. Similar results were observed for boys except for phenylalanine, tyrosine and tryptophan biosynthesis, which had a high impact [0.098]. The identified metabolites and pathways could be efficacious for investigating the underlying mechanisms of the development of different metabolic phenotypes in obese Chinese adolescents.
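A single-metabolite ROC analysis of the kind used here reduces to ranking one concentration against the phenotype labels. A hedged sketch with synthetic levels (the group sizes mirror the 60/67 split above; the values do not come from the study):

    # Illustrative sketch: can one metabolite level separate MUO from MHO?
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(4)
    muo = rng.normal(1.2, 0.3, 60)          # hypothetical level in MUO
    mho = rng.normal(1.0, 0.3, 67)          # hypothetical level in MHO
    levels = np.concatenate([muo, mho])
    labels = np.concatenate([np.ones(60), np.zeros(67)])

    auc = roc_auc_score(labels, levels)
    fpr, tpr, thresholds = roc_curve(labels, levels)
    print(f"AUC = {auc:.2f}; Youden J cut-off = "
          f"{thresholds[np.argmax(tpr - fpr)]:.2f}")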
abstract_id: PUBMED:31496776
Prevalence and clinical characteristics of metabolically unhealthy obesity in an Iranian adult population. Purpose: The incidence of obesity is globally increasing and it is a predisposing factor for morbidity and mortality. This study assessed the prevalence of metabolically unhealthy (MU) individuals and its determinants according to body mass index (BMI).
Materials And Method: In our cross-sectional study, 891 persons aged 30 years or older participated. Participants were classified as obese (BMI ≥30 kg/m²), overweight (BMI 25-<30 kg/m²) and normal weight (BMI <25 kg/m²). Metabolic health status was defined using four existing cardio-metabolic abnormalities (elevated blood pressure, elevated serum concentrations of triglycerides and fasting glucose, and a low serum concentration of high-density lipoprotein cholesterol). Then, two phenotypes were defined: healthy (existence of 0-1 cardio-metabolic abnormalities) and unhealthy (presence of 2 or more cardio-metabolic abnormalities).
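The phenotype rule described in this Methods paragraph is easy to state as code. A sketch, with the caveat that the exact cut-offs below are conventional ATP III-style assumptions rather than values taken from the paper:

    # Illustrative sketch of the phenotype rule: metabolically unhealthy =
    # two or more of the four cardio-metabolic abnormalities. Thresholds
    # are assumed, conventional cut-offs (mg/dL, mmHg).
    def metabolic_phenotype(bmi, sbp, dbp, tg, glucose, hdl, male):
        abnormalities = sum([
            sbp >= 130 or dbp >= 85,      # elevated blood pressure
            tg >= 150,                    # elevated triglycerides
            glucose >= 100,               # elevated fasting glucose
            hdl < (40 if male else 50),   # low HDL cholesterol
        ])
        weight = "obese" if bmi >= 30 else "non-obese"
        health = "unhealthy" if abnormalities >= 2 else "healthy"
        return f"metabolically {health} {weight}"

    print(metabolic_phenotype(bmi=32, sbp=138, dbp=88, tg=180,
                              glucose=95, hdl=42, male=True))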
Results: Overall, 10.9% (95% confidence interval (CI): 8.8-13.0) and 7.2% (95% CI: 5.5-8.9) of participants were MU obese and metabolically healthy obese, respectively. The prevalence of MU was higher in overweight (55.6%; 95% CI: 50.6-60.6, p<0.001) and obese (60.2%; 95% CI: 52.8-67.6, p=0.001) subjects than in individuals with a normal weight (37.5%; 95% CI: 29.4-42.6). Multiple logistic regression analysis showed an association of a MU state with age and dyslipidaemia in the BMI subgroups and with female sex in the normal weight individuals.
Conclusion: The prevalence of a MU state increased with increasing BMI. Ageing and dyslipidaemia were associated with an unhealthy metabolic state in normal weight, overweight and obese subjects and with the female sex in normal weight subjects.
abstract_id: PUBMED:29196231
Metabolically healthy obese and unhealthy normal weight in Iranian adult population: Prevalence and the associated factors. Aims: The objective of this study was to determine the prevalence and associated factors of the metabolically unhealthy normal-weight and metabolically healthy obese phenotypes.
Methods: We analyzed data from a representative sample of 986 participants recruited from the adult population of northern Iran. Data were collected regarding demographic characteristics, lifestyle, body mass index, abdominal obesity measures, blood pressure, and lipid profiles. The participants were classified as metabolically healthy obese (MHO) and metabolically unhealthy normal-weight (MUNW). Metabolically unhealthy was defined as the presence of ≥2 non-obese components of metabolic syndrome based on ATP III criteria.
Results: The prevalence rates of MUNW and MHO were 17.2% and 15.1%, respectively. The mean age of metabolically unhealthy participants was significantly greater than that of metabolically healthy participants among both normal-weight and overweight/obese subjects (P=0.001). Multiple logistic regression analysis showed that, in normal-weight individuals, MUNW was significantly associated with the age groups of 50-59 years (OR = 3.83, 95% CI: 1.71-8.57) and 60-70 years (OR = 4.74, 95% CI: 1.79-12.54), compared with the age group of 20-29 years, as well as with current smoking. The metabolically healthy state in overweight/obese individuals was inversely associated with age 50-59 years (OR = 0.26, 95% CI: 0.13-0.54), age 60-70 years (OR = 0.15, 95% CI: 0.05-0.39) and higher WC (OR = 0.47, 95% CI: 0.31-0.72), but positively associated with female sex (OR = 1.74, 95% CI: 1.07-2.82).
Conclusion: Aging and smoking are significantly associated with metabolic abnormalities in normal-weight individuals, while in obese individuals aging and abdominal obesity are negatively, and female sex positively, associated with the metabolically healthy state.
abstract_id: PUBMED:30458177
Metabolically healthy versus metabolically unhealthy obesity. Obesity-related disease complications reduce life quality and expectancy and increase health-care costs. Some studies have suggested that obesity not always entails metabolic abnormalities and increased risk of cardiometabolic complications. Because of the lack of universally accepted criteria to identify metabolically healthy obesity (MHO), its prevalence varies widely among studies. Moreover, the prognostic value of MHO is hotly debated, mainly because it likely shifts gradually towards metabolically unhealthy obesity (MUO). In this review, we outline the differential factors contributing to the metabolic heterogeneity of obesity by discussing the behavioral, genetic, phenotypical, and biological aspects associated with each of the two metabolic phenotypes (MHO and MUO) of obesity and their clinical implications. Particular emphasis will be laid on the role of adipose tissue biology and function, including genetic determinants of body fat distribution, depot-specific fat metabolism, adipose tissue plasticity and, particularly, adipogenesis. Finally, the emerging role of gut microbiota in obesity and adipose tissue dysfunction as well as the search for novel biomarkers for the obesity-related metabolic traits and associated diseases will be briefly presented. A better understanding of the main determinants of a healthy metabolic status in obesity would allow promotion of this favorable condition by targeting the relevant pathways.
Answer: Lipoprotein particle subclass profiles do indeed differ between metabolically healthy and unhealthy individuals, irrespective of whether they are obese or non-obese. The study characterized lipoprotein size, particle, and subclass concentrations using nuclear magnetic resonance (NMR) spectroscopy in a cross-sectional sample of 1834 middle-aged Irish adults. The results indicated that metabolically unhealthy individuals, regardless of their BMI, exhibited a range of adverse lipoprotein profiles. This included increased numbers of small low-density lipoprotein (LDL) and high-density lipoprotein (HDL) particles, large very low-density lipoprotein (VLDL) particles, and greater lipoprotein-related insulin resistance. Conversely, metabolically healthy adults displayed favorable lipoprotein particle profiles, which were consistent across different BMI categories and definitions of metabolic health. The most significant predictors of metabolic health were lower numbers of large VLDL and small dense LDL particles, and higher numbers of large LDL and large HDL particles. These findings suggest that the size and concentration of lipoprotein particles are important factors in determining metabolic health, and that maintaining a healthy lipid profile is crucial for overall cardiometabolic health (PUBMED:26277632). |
Instruction: Assessing the experience in complex hepatopancreatobiliary surgery among graduating chief residents: is the operative experience enough?
Abstracts:
abstract_id: PUBMED:24953270
Assessing the experience in complex hepatopancreatobiliary surgery among graduating chief residents: is the operative experience enough? Introduction: Resident operative autonomy and case volume are associated with posttraining confidence and practice plans. Accreditation Council for Graduate Medical Education requirements for graduating general surgery residents are four liver and three pancreas cases. We sought to evaluate trends in resident experience and autonomy for complex hepatopancreatobiliary (HPB) surgery over time.
Methods: We queried the Accreditation Council for Graduate Medical Education General Surgery Case Log (2003-2012) for all cases performed by graduating chief residents (GCR) relating to the liver, pancreas, and biliary tract (HPB); simple cholecystectomy was excluded. Mean (±SD), median [10th-90th percentiles] and maximum case volumes were compared from 2003 to 2012 using R² for all trends.
Results: A total of 252,977 complex HPB cases (36% liver, 43% pancreas, 21% biliary) were performed by 10,288 GCR during the 10-year period examined (mean = 24.6 per GCR). Of these, 57% were performed during the chief year, whereas 43% were performed as postgraduate year 1-4. Only 52% of liver cases were anatomic resections, whereas 71% of pancreas cases were major resections. Total number of cases increased from 22,516 (mean = 23.0) in 2003 to 27,191 (mean = 24.9) in 2012. During this same time period, the percentage of HPB cases that were performed during the chief year decreased by 7% (liver: 13%, pancreas 8%, biliary 4%). There was an increasing trend in the mean number of operations (mean ± SD) logged by GCR on the pancreas (9.1 ± 5.9 to 11.3 ± 4.3; R² = .85) and liver (8.0 ± 5.9 to 9.4 ± 3.4; R² = .91), whereas those for the biliary tract decreased (5.9 ± 2.5 to 3.8 ± 2.1; R² = .96). Although the median number of cases [10th:90th percentile] increased slightly for both pancreas (7.0 [4.0:15] to 8.0 [4:20]) and liver (7.0 [4:13] to 8.0 [5:14]), the maximum number of cases performed by any given GCR remained stable for pancreas (51 to 53; R² = .18), but increased for liver (38 to 45; R² = .32). The median number of HPB cases that GCR performed as teaching assistants (TAs) remained at zero during this time period. The 90th percentile of cases performed as TA was less than two for both pancreas and liver.
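The R² trend statistics quoted above amount to a linear regression of yearly summary volumes on calendar year. An illustrative sketch with placeholder yearly means (not the actual case-log values):

    # Illustrative sketch: fit a linear trend to yearly mean case volumes
    # and report R², as in the case-log analysis above. The yearly values
    # below are hypothetical placeholders, not study data.
    from scipy.stats import linregress

    years = list(range(2003, 2013))
    mean_pancreas_cases = [9.1, 9.3, 9.6, 9.9, 10.2, 10.5, 10.7, 10.9, 11.1, 11.3]

    fit = linregress(years, mean_pancreas_cases)
    r_squared = fit.rvalue ** 2
    print(f"slope = {fit.slope:.3f} cases/year, R² = {r_squared:.2f}")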
Conclusion: Roughly one-half of GCR have performed fewer than 10 cases in each of the liver, pancreas, or biliary categories at time of completion of residency. Although the mean number of complex liver and pancreatic operations performed by GCR increased slightly, the median number remained low, and the number of TA cases was virtually zero. Most GCR are unlikely to be prepared to perform complex HPB operations.
abstract_id: PUBMED:25498881
The current state of hepatopancreatobiliary fellowship experience in North America. Aim: The face of hepatopancreatobiliary (HPB) training has changed over the past decade. The growth of focused HPB fellowships, which are vetted with a rigorous accreditation process through the Fellowship Council (FC), has established them as an attractive mode of training in HPB surgery. This study looks at the volumes of HPB cases performed during these fellowships in North America.
Methods: After approval by the FC research committee, data from all HPB fellowships that had 3 years' worth of complete fellow case log data were tabulated and reported (n = 12). For 2-year fellowships, the fellow logs were tabulated at the completion of both years. Those programs that had transplant experience (n = 9) were reported.
Results: Data for the current fellows' case numbers show that graduating fellows have a median of 26 biliary cases, 19 major liver cases (hemilivers), 28 other liver cases, 40 pancreaticoduodenectomies, 18 distal pancreatectomies, and 9 other pancreas cases. The programs that provided transplantation experience had 10 cases for each fellow.
Conclusion: This study validates that FC-accredited HPB fellowships provide a robust exposure to complex HPB surgery. Fellows completing these fellowships should be well versed in the management and surgical treatment of HPB patients.
abstract_id: PUBMED:9448124
Chief resident experience with laparoscopic cholecystectomy. Resident competence in both open and laparoscopic cholecystectomy (LC) has been a concern among general surgeons. Laparoscopic surgery was late in coming at many surgical residency programs in the United States, and many residents have graduated with limited experience in LC. We are chief residents who were fortunate enough to start our training when LC was first introduced at our institution in 1990. This report summarizes our experience with LC in our chief year, during which we performed LC on 147 patients. The average operating time was 37 minutes (range, 12-82 minutes). Six patients (4%) required conversion to an open procedure. There were three complications (2 postoperative cystic duct leaks and 1 intraoperative common bile duct injury) for an overall complication rate of 2%. There was no mortality. It is our conclusion that graduating chief residents with 5 years' exposure to LC may perform the procedure with a complication rate comparable to that reported in the current literature. Ensuring that graduating chief residents have adequate training in open cholecystectomy may become a more pressing issue in the near future.
abstract_id: PUBMED:37781938
Safe implementation of a minimally invasive hepatopancreatobiliary program, a narrative review and institutional experience. Laparoscopic and robotic-assisted approaches to hepatopancreatobiliary (HPB) operations have expanded worldwide. As surgeons and medical centers contemplate initiating and expanding minimally invasive surgical (MIS) programs for complex HPB surgical operations, there are many factors to consider. This review highlights the key components of developing an MIS HPB program and shares our recent institutional experience with the adoption and expansion of an MIS approach to pancreaticoduodenectomy.
abstract_id: PUBMED:31012042
Resident Operative Experience in Hepatopancreatobiliary Surgery: Exposing the Divide. Background: The Accreditation Council for Graduate Medical Education (ACGME) requires an experience in hepatopancreatobiliary (HPB) surgery as part of general surgery residency training. The composition of this experience, however, is unclear. We set out to evaluate current trends in the HPB experience of US general surgery residents.
Methods: National ACGME operative case logs from 1990 to 2016 were examined with a focus on the HPB operative domains. Time-trend analysis was performed using ANOVA and linear regression analysis.
Results: Median biliary, liver, and pancreatic operative volumes increased by 30%, 33%, and 27% over the 27-year study period (all p < 0.05). Both core and advanced HPB cases increased, but the rate of increase for core was four times greater than that of advanced. However, when cholecystectomy was excluded, this trend reversed such that HPB core operations decreased by 11 cases over the study period. Further analysis demonstrated that laparoscopic cholecystectomy comprised 90% of all biliary cases and 77% of all HPB cases for graduates in 2016. Finally, operative volume variability (the difference in case numbers between high- and low-volume residents) increased by 16%, 21%, and 73% for the biliary, liver, and pancreatic domains, respectively (all p < 0.05).
Conclusions: Despite increases in overall HPB operative volume, the HPB experience is changing for today's surgical trainees. Moreover, the HPB experience now consists largely of a single operation, the cholecystectomy. Awareness of these trends is important for surgical educators to facilitate adequate exposure to HPB surgery among general surgery residents.
abstract_id: PUBMED:24179574
Prevalence of stress hyperglycemia among hepatopancreatobiliary postoperative patients. Objective: The aim of this study was to determine the prevalence of stress hyperglycemia and its association with mortality among hepatopancreatobiliary postoperative patients.
Methods: A retrospective analysis was performed on 706 hepatopancreatobiliary postoperative patients from three Grade A hospitals in Hunan province between November 2011 and June 2012, examining the incidence of and risk factors for stress hyperglycemia.
Results: The incidence of stress hyperglycemia among the hepatopancreatobiliary postoperative patients was 34.28%. The incidence of stress hyperglycemia in patients undergoing pancreatic surgery, simple cholecystectomy, and biliary tract and liver surgery was 63.08%, 20.83% and 32.21%, respectively. Stress hyperglycemia was associated with the first postoperative glucose values, duration of surgery, anemia, and hypoproteinemia (P<0.05), but not with sex, weight, or previous history (P>0.05).
Conclusion: Stress hyperglycemia is common among emergency admissions, and these patients have a significantly higher mortality rate than other patients (P=0.001). The first postoperative blood glucose level, duration of surgery, anemia, and hypoproteinemia were risk factors for stress hyperglycemia.
abstract_id: PUBMED:37509355
Intraoperative Imaging in Hepatopancreatobiliary Surgery. Hepatopancreatobiliary surgery belongs to one of the most complex fields of general surgery. An intricate and vital anatomy is accompanied by the difficulty of distinguishing tumors from fibrosis and inflammation, identifying precise tumor margins, and detecting small, even disappearing, lesions on currently available imaging. The routine implementation of ultrasound use shifted the possibilities in the operating room, yet more precision is necessary to achieve negative resection margins. Modalities utilizing fluorescent-compatible dyes have proven their role in hepatopancreatobiliary surgery, although this is not yet a routine practice, as there are many limitations. Modalities, such as photoacoustic imaging or 3D holograms, are emerging but are mostly limited to preclinical settings. There is a need to identify and develop an ideal contrast agent capable of differentiating between malignant and benign tissue and to report on the prognostic benefits of implemented intraoperative imaging in order to navigate clinical translation. This review focuses on existing and developing imaging modalities for intraoperative use, tailored to the needs of hepatopancreatobiliary cancers. We will also cover the application of these imaging techniques to theranostics to achieve combined diagnostic and therapeutic potential.
abstract_id: PUBMED:22287361
Emergency airway management: training and experience of chief residents in otolaryngology and anesthesiology. Background: Resident training in emergency airway management is not well described. We quantified training and exposure to airway emergencies among graduating Otolaryngology-Head and Neck Surgery and Anesthesiology residents.
Methods: The method used for this study was a national web-based survey of chief residents.
Results: The response rate was 52% (otolaryngology) and 60% (anesthesiology). More otolaryngology residents rotated on anesthesiology than anesthesia residents on otolaryngology (33% vs 8%). More anesthesiology chiefs never performed an emergency surgical airway than otolaryngology (92% vs 18%). The most common self-rating of competency was "9," with 82% overall self-rating "8" or higher (10 = "totally competent").
Conclusion: Otolaryngology and anesthesiology emergency airway management experience/training is heterogeneous and nonstandardized. Many chief residents graduate with little exposure to airway emergencies, especially surgical airways. Resident confidence levels are high despite minimal experience. This high confidence-low experience dichotomy may reflect novice overconfidence and suggests the need for improved training methods.
abstract_id: PUBMED:24529805
Perceptions of graduating general surgery chief residents: are they confident in their training? Background: Debate exists within the surgical education community about whether 5 years is sufficient time to train a general surgeon, whether graduating chief residents are confident in their skills, why residents choose to do fellowships, and the scope of general surgery practice today.
Study Design: In May 2013, a 16-question online survey was sent to every general surgery program director in the United States for dissemination to each graduating chief resident (CR).
Results: Of the 297 surveys returned, 76% of CRs trained at university programs, 81% trained at 5-year programs, and 28% were going directly into general surgery practice. The 77% of CRs who had done >950 cases were significantly more comfortable than those who had done less (p < 0.0001). Only a few CRs were uncomfortable performing a laparoscopic colectomy (7%) or a colonoscopy (6%), and 80% were comfortable being on call at a Level I trauma center. Compared with other procedures, CRs were most uncomfortable with open common bile duct explorations (27%), pancreaticoduodenectomies (38%), hepatic lobectomies (48%), and esophagectomies (60%) (p < 0.00001). Of those going into fellowships, 67% said they truly had an interest in that specialty and only 7% said it was because they were not confident in their surgical skills.
Conclusions: Current graduates of general surgery residencies appear to be confident in their skills, including care of the trauma patient. Fellowships are being chosen primarily because of an interest in the subspecialty. General surgery residency no longer provides adequate training in esophageal or hepatopancreatobiliary surgery.
abstract_id: PUBMED:28602224
General surgeon management of complex hepatopancreatobiliary trauma at a level I trauma center. Background: The impact of general surgeons (GS) taking trauma call on patient outcomes has been debated. Complex hepatopancreatobiliary (HPB) injuries present a particular challenge and often require specialized care. We predicted no difference in the initial management or outcomes of complex HPB trauma between GS and trauma/critical care (TCC) specialists.
Materials And Methods: A retrospective review of patients who underwent operative intervention for complex HPB trauma from 2008 to 2015 at an ACS-verified level I trauma center was performed. Chart review was used to obtain variables pertaining to demographics, clinical presentation, operative management, and outcomes. Patients were grouped according to whether their index operation was performed by a GS or TCC provider and compared.
Results: 180 patients met inclusion criteria. The GS (n = 43) and TCC (n = 137) cohorts had comparable patient demographics and clinical presentations. Most injuries were hepatic (73.3% GS versus 72.6% TCC) and TCC treated more pancreas injuries (15.3% versus 13.3% for GS; P = 0.914). No significant differences were found in HPB-directed interventions at the initial operation (41.9% GS versus 56.2% TCC; P = 0.100), damage control laparotomy with temporary abdominal closure (69.8% versus 69.3%; P = 0.861), length of stay (LOS), septic complications or 30-day mortality (13.9% versus 10.2%; P = 0.497). TCC were more likely to place an intraabdominal drain than GS (52.6% versus 34.9%; P = 0.043).
Conclusions: We found no significant differences between GS and TCC specialists in initial operative management or clinical outcomes of complex HPB trauma. The frequent and proper use of damage control laparotomy likely contribute to these findings.
Answer: The operative experience in complex hepatopancreatobiliary (HPB) surgery among graduating chief residents appears to be insufficient. According to a study examining the Accreditation Council for Graduate Medical Education General Surgery Case Log from 2003 to 2012, roughly half of graduating chief residents (GCR) had performed fewer than 10 cases in each of the liver, pancreas, or biliary categories at the time of completion of residency (PUBMED:24953270). Although there was a slight increase in the mean number of complex liver and pancreatic operations performed by GCR, the median number remained low, and the number of cases performed as teaching assistants was virtually zero. This suggests that most GCR are unlikely to be prepared to perform complex HPB operations upon completion of their residency.
In contrast, fellows completing hepatopancreatobiliary fellowships accredited by the Fellowship Council (FC) in North America have a robust exposure to complex HPB surgery, with median case numbers indicating substantial experience across a range of HPB procedures (PUBMED:25498881). This indicates that additional fellowship training may be necessary for residents to feel well versed in the management and surgical treatment of HPB patients.
The overall trend in HPB operative experience for general surgery residents has changed, with the experience consisting largely of a single operation, the cholecystectomy. The variability in operative volume among residents has also increased, suggesting a divide in the level of experience (PUBMED:31012042).
In conclusion, the available evidence suggests that the operative experience in complex HPB surgery for graduating chief residents is not enough, and additional fellowship training may be necessary to ensure adequate preparation for independent practice in this field. |
Instruction: Comparison of olecranon plate fixation in osteoporotic bone: do current technologies and designs make a difference?
Abstracts:
abstract_id: PUBMED:21464739
Comparison of olecranon plate fixation in osteoporotic bone: do current technologies and designs make a difference? Objectives: The purpose of this study is to determine if recent innovations in olecranon plates have any advantages in stabilizing osteoporotic olecranon fractures.
Methods: Five olecranon plates (Acumed, Synthes-SS, Synthes-Ti, US Implants/ITS, and Zimmer) were implanted to stabilize a simulated comminuted fracture pattern in 30 osteoporotic cadaveric elbows. Specimens were randomized by bone mineral density per dual-energy x-ray absorptiometry scan. Three-dimensional displacement analysis was conducted to assess fragment motion through physiological cyclic arcs of motion and failure loading, which was statistically compared using one-way analysis of variance and Tukey honestly significant difference post hoc comparisons with a critical significance level of α = 0.05.
Results: Bone mineral density ranged from 0.546 g/cm² to 0.878 g/cm² with an average of 0.666 g/cm². All implants limited displacement of the fragments to less than 3 mm until sudden, catastrophic failure as the bone of the proximal fragment pulled away from the implant. The maximum load sustained by all osteoporotic specimens ranged from 1.6 kg to 6.6 kg with an average of 4.4 kg. There was no statistical difference between the groups in terms of cycles survived and maximum loads sustained.
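For reference, the one-way ANOVA with Tukey HSD comparison named in the Methods can be sketched as follows; the per-plate load samples are hypothetical stand-ins for the cadaveric measurements:

    # Illustrative sketch: compare maximum load across the five plate
    # designs with one-way ANOVA and Tukey HSD. Loads are synthetic.
    import numpy as np
    from scipy.stats import f_oneway
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(3)
    plates = ["Acumed", "Synthes-SS", "Synthes-Ti", "ITS", "Zimmer"]
    loads = {p: rng.normal(4.4, 1.2, 6) for p in plates}  # kg, 6 elbows each

    f_stat, p_val = f_oneway(*loads.values())
    print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")

    values = np.concatenate(list(loads.values()))
    groups = np.repeat(plates, 6)
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))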
Conclusions: Cyclic physiological loading of osteoporotic olecranon fracture fixation resulted in sudden, catastrophic failure of the bone-implant interface rather than in gradual implant loosening. Recent plate innovations such as locking plates and different screw designs and positions appear to offer no advantages in stabilizing osteoporotic olecranon fractures. Surgeons may be reassured that the current olecranon plates will probably adequately stabilize osteoporotic fractures for early motion in the early postoperative period, but not for heavy activities such as those that involve over 4 kg of resistance.
abstract_id: PUBMED:36726952
A novel internal fixation technique for the treatment of olecranon avulsion fracture. Objective: Tension band wiring and proximal ulnar plate fixation are commonly used fixation methods for olecranon fractures. However, they may not be suitable for repairing proximal olecranon avulsion fractures. In this study, we present a novel fixation technique for the treatment of proximal avulsion fractures, which is a T-shaped plate combined with a wire.
Materials And Methods: Between March 2016 and May 2020, surgery was performed on 16 patients with proximal olecranon avulsion fractures by using a T-shaped plate combined with a wire fixation at our hospital. The parameters followed were fracture healing time, elbow range of motion (ROM), related functional scores (the Mayo score and the DASH score), and complications related to internal fixation.
Results: The average follow-up period was 17 (14-21) months and fractures had healed in all patients included in the study, with an average fracture union time of 9.25 (8-12) weeks. No patient reported fixation failure, serious infection, or revision surgery. The average ROM of the elbow joint was 123° (120-135°). The Mayo score was excellent in 11 patients and good in 5. The average DASH score was 17.75 (12-24).
Conclusion: Olecranon avulsion fractures were fixed with a T-shaped steel plate combined with a steel wire, which permitted early functional exercise and achieved good final functional results. This method can provide stable fixation, especially in elderly patients with osteoporosis.
abstract_id: PUBMED:35928655
The Evolution of Olecranon Fractures and Its Fixation Strategies. Introduction: Olecranon fractures are a common fracture of the upper extremity. The primary aim was to investigate the evolution of olecranon fractures and fixation method over a period of 12 years. The secondary aim was to compare complication rates of Tension Band Wiring (TBW) and Plate Fixation (PF).
Materials And Methods: Retrospective Study for all patients with surgically treated olecranon fractures from 1 January 2005 to 31 December 2016 from a tertiary trauma center. Records review for demographic, injury characteristics, radiographic classification and configuration, implant choices and complications. Results grouped into three 4-year intervals, analyzed comparatively to establish significant trends over 12 years.
Results: 262 patients were identified. Demographically, there was an increasing mean age (48.7 to 58.9 years old, p value 0.004) and higher ASA scores (7.1% ASA 3 to 21.0% ASA 3, p value 0.001). Later fractures were more oblique (fracture angle 86.1-100.0 degrees, p value 0.001) and comminuted (Schatzker D type 10.4-30.0%, p value 0.025, single fracture line 94.0-66.0%, p value 0.001). For implant choice, there was a sharp increase in PF compared to TBW (PF 16.0% to PF 80.2%, p value 0.001). Complication-wise, TBW had higher rates of symptomatic implants, implant and bony failures, and implant removal.
Conclusion: Demographic and fracture characteristic trends suggest that olecranon fractures are exhibiting fragility fracture characteristics (older age, higher ASA scores, more unstable, oblique and comminuted olecranon fractures). Having a high index of suspicion would alert surgeons to consider use of advanced imaging, utilize appropriate fixation techniques and manage the underlying osteoporosis for secondary fracture prevention. Despite this, trends suggest a potential overutilization of PF particularly for stable fracture patterns and the necessary precaution should be exercised.
abstract_id: PUBMED:22025265
An off-loading triceps suture for augmentation of plate fixation in comminuted osteoporotic fractures of the olecranon. Comminuted osteoporotic olecranon fractures in the elderly are relatively common. Open reduction and internal plate fixation is a frequently used treatment option. In highly comminuted osteoporotic bone, fixation may be tenuous leading to an increased risk of fixation failure with loss of reduction and displacement of fracture fragments. The off-loading triceps suture technique is a load-sharing mechanism to decreases distraction forces caused by the extensor mechanism on comminuted osteopenic olecranon fracture fragments managed with plate fixation.
abstract_id: PUBMED:38491923
Triceps Tendon Reattachment Using Mini Plates and Screws After Failure of Olecranon Avulsion Fracture Fixation in Osteoporotic Bone: A Case Report. This is a case report of an 85-year-old woman with osteopenia who underwent olecranon avulsion fracture repair with supplemental triceps tendon repair following a fall on an outstretched arm. The initial procedure failed due to osteoporotic bone quality and an atraumatic disruption of the olecranon fracture fixation. The patient subsequently underwent further surgical intervention with an olecranon avulsion fracture excision and a novel triceps tendon repair technique using plate augmentation and fiber tape. Surgeons may consider this novel approach as an initial treatment for elderly patients with osteopenia or osteoporosis undergoing olecranon avulsion fracture fixation, to prevent the failure and consequent revision surgery.
abstract_id: PUBMED:28303286
Fractures of the olecranon. Objective: Fractures of the olecranon are the most common fractures of the elbow in adults. Due to the dislocating force of the triceps muscle, internal fixation is the treatment of choice.
Indications: All fractures of the olecranon without contraindications.
Contraindications: Infection and severe soft tissue damage.
Surgical Technique: Dorsal approach to the olecranon with the patient in a prone position. Open reduction and internal fixation with tension band wiring or plate fixation according to fracture pattern.
Postoperative Management: Treatment goal is early functional mobilization. No load bearing allowed for 6-8 weeks; full load bearing is allowed after fracture healing.
Results: The quality of published studies concerning the surgical treatment of olecranon fractures is poor. Published functional results are predominantly good and excellent. Hardware removal was often required.
abstract_id: PUBMED:33215127
Olecranon fixation with two bicortical screws. Aims: The aim of this study is to report the results of a case series of olecranon fractures and olecranon osteotomies treated with two bicortical screws.
Methods: Data was collected retrospectively for all olecranon fractures and osteotomies fixed with two bicortical screws between January 2008 and December 2019 at our institution. The following outcome measures were assessed; re-operation, complications, radiological loss of reduction, and elbow range of flexion-extension.
Results: Bicortical screw fixation was used to treat 17 olecranon fractures and ten osteotomies. The mean ages of patients treated for olecranon fracture and osteotomy were 48.6 years and 52.7 years, respectively. Overall, 18% of olecranon fractures were classified as Mayo type I, 71% type II, and 12% type III. No cases of fracture or osteotomy required operative re-intervention. There were two cases of loss of fracture reduction, both occurring in female patients ≥ 75 years of age with osteoporotic bone. In both cases, active extension and a functional range of movement were maintained, so the loss of reduction was managed non-operatively. For the fracture fixation cohort, at final follow-up mean elbow extension and flexion were -5° ± 5° and 136° ± 7°, with a mean arc of motion of 131° ± 11°.
Conclusion: This series has shown that patients regain near full range of elbow flexion-extension and complication rates are low following bicortical screw fixation of olecranon fractures and osteotomy.
abstract_id: PUBMED:33359398
Low-profile double plating of unstable osteoporotic olecranon fractures: a biomechanical comparative study. Background: In the treatment of unstable olecranon fractures, anatomically preshaped locking plates exhibit superior biomechanical results compared with tension band wiring. However, posterior plating (PP) still is accompanied by high rates of plate removal because of soft-tissue irritation and discomfort. Meanwhile, low-profile plates precontoured for collateral double plating (DP) are available and enable muscular soft-tissue coverage combined with angular-stable fixation. The goal of this study was to biomechanically compare PP with collateral DP for osteosynthesis of unstable osteoporotic fractures.
Methods: A comminuted displaced Mayo type IIB fracture was created in 8 osteoporotic pairs of fresh-frozen human cadaveric elbows. Pair-wise angular stable fixation was performed by either collateral DP or PP. Biomechanical testing was conducted as a pulling force to the triceps tendon in 90° of elbow flexion. Cyclical load changes between 10 and 300 N were applied at 4 Hz for 50,000 cycles. Afterward, the maximum load was raised by 0.02 N/cycle until construct failure, which was defined as displacement > 2 mm. Besides failure cycles and failure loads, modes of failure were analyzed.
Results: Following DP, a median endurance of 65,370 cycles (range, 2-83,121 cycles) was recorded, which showed no significant difference compared with PP, with 69,311 cycles (range, 150-81,938 cycles) (P = .263). Failure load showed comparable results as well, with 601 N (range, 300-949 N) after DP and 663 N (range, 300-933 N) after PP (P = .237). All PP constructs and 3 of 8 DP constructs failed by proximal fragment cutout, whereas 5 of 8 DP constructs failed by bony triceps avulsion.
Conclusion: Angular-stable DP showed comparable biomechanical stability to PP in unstable osteoporotic olecranon fractures under high-cycle loading conditions. Failure due to bony triceps avulsion following DP requires further clinical and biomechanical investigation, for example, on suture augmentation or different screw configurations.
abstract_id: PUBMED:31879494
Biomechanical properties of an intramedullary suture anchor fixation compared to tension band wiring in osteoporotic olecranon fractures- A cadaveric study. Introduction: The aim of the study is to compare three different fixation techniques for transverse olecranon repair in cadaveric osteoporotic bone: (1) current recommended AO tension band technique with K-wire fixation; (2) Suture anchor fixation and (3) Polyester suture fixation.
Methods: Evaluated with bone densitometry, 7 osteoporotic human elbow specimens were included in the study. A transverse olecranon fracture was reduced anatomically and fixed first using a K-wire tension band technique, second using two suture anchors, and third using polyester suture. Static simulations of the kinetics associated with active range of motion (AROM) and push-up-from-a-chair exercises were performed with cyclic loading using an Instron hydraulic testing apparatus. Fracture displacement was measured using videographic analysis. Failure was defined as 2 mm of fracture displacement.
Results: The biomechanical analysis found no statistical difference in displacements between the three fixation methods when testing AROM. In simulated push-up exercises, polyester suture fixation failed after 17 cycles and had significantly higher displacement compared to the other two methods. No difference between the K-Wire fixation versus Suture anchor fixation was observed, p = 0.162.
Conclusion: Suture anchor fixation might be a viable surgical treatment option for osteoporotic transverse elbow fractures in geriatric patients.
abstract_id: PUBMED:30315399
Olecranon fractures. Olecranon fractures represent a common elbow injury and are usually treated operatively. Depending on fracture morphology, conservative therapy can be considered in geriatric patients. When surgical treatment is performed, the anatomical features of the proximal ulna have to be taken into consideration. Tension band wiring is considered a standard procedure for simple fractures and provides good clinical results. Plate osteosynthesis is indicated for more complex fracture patterns. Nowadays, many surgeons prefer plate fixation over tension band wiring even for simple fractures as it leads to a more stable fixation. Both methods frequently require hardware removal for persisting soft tissue irritation. The use of modern low-profile plates could reduce the need for implant removal due to better soft tissue coverage. The available clinical data regarding low-profile plates are promising but limited.
Answer: The study by PUBMED:21464739 aimed to determine if recent innovations in olecranon plates offer any advantages in stabilizing osteoporotic olecranon fractures. The study compared five different olecranon plates and found that all implants limited displacement of the fragments to less than 3 mm until sudden, catastrophic failure occurred. There was no statistical difference between the groups in terms of cycles survived and maximum loads sustained, suggesting that recent plate innovations such as locking plates and different screw designs and positions appear to offer no advantages in stabilizing osteoporotic olecranon fractures. The conclusion was that while current olecranon plates may adequately stabilize osteoporotic fractures for early motion in the early postoperative period, they may not be suitable for heavy activities involving over 4 kg of resistance.
In contrast, a novel internal fixation technique using a T-shaped plate combined with a wire was presented in the study by PUBMED:36726952, which showed good results in the treatment of proximal olecranon avulsion fractures, particularly in elderly patients with osteoporosis. This method provided stable fixation and allowed for early functional exercise, achieving good final functional results.
The study by PUBMED:35928655 highlighted the evolution of olecranon fractures and fixation methods over 12 years, noting an increase in the use of plate fixation compared to tension band wiring. However, it also pointed out that despite the trend towards plate fixation, there might be an overutilization of this method, especially for stable fracture patterns.
Other studies, such as PUBMED:22025265, PUBMED:38491923, and PUBMED:33359398, explored additional techniques and augmentations to improve fixation in osteoporotic olecranon fractures, such as off-loading triceps sutures, triceps tendon reattachment using mini plates and screws, and low-profile double plating. These studies suggest that alternative methods and augmentations may be beneficial in certain cases, particularly in osteoporotic bone where traditional fixation methods may fail.
In summary, while current olecranon plate technologies and designs do not seem to offer significant advantages in stabilizing osteoporotic olecranon fractures under heavy loads, alternative fixation techniques and augmentations may provide better outcomes in specific scenarios, such as in elderly patients with osteoporosis or in cases of highly comminuted fractures (PUBMED:21464739, PUBMED:36726952, PUBMED:35928655, PUBMED:22025265, PUBMED:38491923, PUBMED:33359398). |
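As an illustrative aside (not part of the cited abstracts), the sketch below shows the form of the statistical comparison described in PUBMED:21464739: a one-way ANOVA across implant groups followed by Tukey honestly significant difference post hoc tests at α = 0.05. The failure-load values and group sizes are hypothetical placeholders, not data from the study.

# One-way ANOVA with Tukey HSD post hoc comparisons, mirroring the analysis
# named in PUBMED:21464739. All numbers below are hypothetical.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

acumed = [4.1, 5.0, 3.8, 4.6, 4.9, 4.2]      # hypothetical failure loads (kg)
synthes_ss = [4.4, 3.9, 5.1, 4.0, 4.7, 4.3]
zimmer = [3.7, 4.8, 4.5, 4.1, 5.2, 3.9]

f_stat, p_value = f_oneway(acumed, synthes_ss, zimmer)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

# Tukey honestly significant difference post hoc comparisons at alpha = 0.05
loads = np.concatenate([acumed, synthes_ss, zimmer])
groups = ["Acumed"] * 6 + ["Synthes-SS"] * 6 + ["Zimmer"] * 6
print(pairwise_tukeyhsd(loads, groups, alpha=0.05))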
Instruction: Effects of OG-VI, a nucleoside/nucleotide mixture, on stunned myocardium in dogs: is the adenosine A1 receptor involved?
Abstracts:
abstract_id: PUBMED:9225340
Limitation of stunning in dog myocardium by nucleoside and nucleotide mixture, OG-VI. OG-VI is a solution composed of 30 mM inosine, 30 mM sodium 5'-guanylate, 30 mM cytidine, 22.5 mM uridine, and 7.5 mM thymidine, intended for use in total parenteral nutrition. We examined the effect of OG-VI on myocardial contractile dysfunction during reperfusion after ischemia (myocardial stunning) in dogs. Pentobarbital-anesthetized dogs were subjected to 20-min left anterior descending coronary artery ligation followed by 30-min reperfusion. Saline, OG-VI or its constituents [inosine and sodium 5'-guanylate mixture (IG), and cytidine, uridine, and thymidine mixture (CUT)], or 5-amino-4-imidazole carboxamide riboside (AICAr) was infused at 0.1 mL/kg/min, starting 30 min before the ischemia. Contractile function was determined by ultrasonometry and assessed as % segment shortening (%SS). %SS was markedly decreased by ischemia and returned toward the pre-ischemic level after reperfusion, although the recovery was incomplete. The %SS was almost completely recovered by OG-VI and IG, and to a lesser extent by AICAr; CUT was ineffective. In the presence of 1 mg/kg of 8-cyclopentyl-1,3-dipropylxanthine (DPCPX, a selective adenosine A1 receptor antagonist), the cardioprotective effect of OG-VI on stunned myocardium was still observed. In conclusion, infusion of OG-VI improved myocardial contractile dysfunction in stunned myocardium. This effect was more potent than that of its constituents and AICAr. Adenosine A1 receptors are not involved in the mechanism.
abstract_id: PUBMED:9589188
Effects of OG-VI, a nucleoside/nucleotide mixture, on stunned myocardium in dogs: is the adenosine A1 receptor involved? Background: OG-VI is a solution composed of 30 mmol/l inosine, 30 mmol/l sodium 5'-guanylate, 30 mmol/l cytidine, 22.5 mmol/l uridine and 7.5 mmol/l thymidine; it limits myocardial stunning in dogs. We examined whether adenosine A1 receptors were involved in the mechanism of action of OG-VI.
Methods: Dogs anesthetized with pentobarbital were subjected to 20 min of left anterior descending coronary artery ligation followed by 30 min of reperfusion. Saline, OG-VI in several doses, adenosine or inosine was infused at 0.1 ml/kg/min, starting 30 min before the ischemia. In some experiments, 1 or 3 mg/kg 8-cyclopentyl-1,3-dipropylxanthine (DPCPX), a selective adenosine A1 receptor antagonist, was injected intravenously 15 min before the start of the OG-VI infusion. The percentage myocardial segment shortening (%SS) was measured by sonomicrometry. The tissue concentration of ATP was measured in the 30-min-reperfused hearts.
Results: In the saline group, %SS that had been decreased by ischemia returned toward pre-ischemic values after reperfusion, although the metabolic recovery was incomplete, with a low concentration of ATP. The %SS was almost completely restored by 12 and 1.2 μmol/kg/min OG-VI, but 0.4 μmol/kg/min was less effective. Administration of adenosine or inosine did not modify the changes in %SS during ischemia/reperfusion. Pretreatment with DPCPX worsened the recovery of %SS during reperfusion after ischemia in both the saline and the OG-VI groups. Infusion of DPCPX (3 mg/kg) with saline caused the animals to die shortly after the onset of ischemia. However, the enhancement of %SS recovery during OG-VI reperfusion was observed in the presence of DPCPX.
Conclusion: OG-VI improves the recovery of %SS during reperfusion after brief ischemia in a dose-dependent manner. This effect is not brought about by stimulation of adenosine A1 receptors.
abstract_id: PUBMED:7677544
Protective effects of adenosine in the reversibly injured heart. Background: There is substantial evidence that the nucleoside adenosine reduces postischemic ventricular dysfunction (ie, myocardial stunning). Studies performed in our laboratory have attempted to address the mechanism of adenosine-mediated protection of the reversibly injured heart.
Methods: Experiments were performed in isolated perfused rat and rabbit hearts and in in situ canine and porcine preparations. The role of adenosine A1 receptors was assessed by using adenosine A1 receptor agonists and antagonists, and by measuring interstitial fluid purine levels with the cardiac microdialysis technique.
Results: In isolated perfused hearts, treatment immediately before ischemia with adenosine and adenosine A1 receptor analogues significantly improved postischemic ventricular function, effects that were blocked by a selective adenosine A1 receptor antagonist. In in situ canine and porcine preparations, pretreatment with adenosine and an adenosine deaminase inhibitor increased preischemic interstitial fluid adenosine levels and attenuated regional myocardial stunning. Adenosine treatment was also associated with improved myocardial phosphorylation potential in isolated guinea pig hearts and in the in situ porcine preparation.
Conclusions: These results suggest that adenosine-induced attenuation of myocardial stunning is mediated via adenosine A1 receptor activation and enhancement of postischemic myocardial phosphorylation potential.
abstract_id: PUBMED:22325325
On-pump inhibition of es-ENT1 nucleoside transporter and adenosine deaminase during aortic crossclamping entraps intracellular adenosine and protects against reperfusion injury: role of adenosine A1 receptor. Objective: The inhibition of adenosine deaminase with erythro-9-(2-hydroxy-3-nonyl)adenine (EHNA) and the es-ENT1 transporter with p-nitro-benzylthioinosine (NBMPR) entraps myocardial intracellular adenosine during on-pump warm aortic crossclamping, leading to a complete recovery of cardiac function and adenosine triphosphate (ATP) during reperfusion. The differential role of entrapped intracellular and circulating adenosine in EHNA/NBMPR-mediated protection is unknown. Selective (8-cyclopentyl-1,3-dipropyl-xanthine) or nonselective [8-(p-sulfophenyl)theophylline] A1 receptor antagonists were used to block the adenosine A1-receptor contribution in EHNA/NBMPR-mediated cardiac recovery.
Methods: Anesthetized dogs (n = 45), instrumented to measure heart performance using sonomicrometry, were subjected to 30 minutes of warm aortic crossclamping and 60 minutes of reperfusion. Three boluses of the vehicle (series A) or 100 μM EHNA and 25 μM NBMPR (series B) were infused into the pump at baseline, before ischemia and before reperfusion. 8-Cyclopentyl-1,3-dipropyl-xanthine (10 μM) or 8-(p-sulfophenyl)theophylline (100 μM) was intra-aortically infused immediately after aortic crossclamping distal to the clamp in series A and series B. The ATP pool and nicotinamide adenine dinucleotide were determined using high-performance liquid chromatography.
Results: Ischemia depleted ATP in all groups by 50%. The adenosine/inosine ratios were more than 10-fold greater in series B than in series A (P < .001). ATP and function recovered in the EHNA/NBMPR-treated group (P < .05 vs control group). 8-Cyclopentyl-1,3-dipropyl-xanthine and 8-(p-sulfophenyl)theophylline partially reduced cardiac function in series A and B to the same degree but did not abolish the EHNA/NBMPR-mediated protection in series B.
Conclusions: In addition to the cardioprotection mediated by activation of the adenosine receptors by extracellular adenosine, EHNA/NBMPR entrapment of intracellular adenosine provided a significant component of myocardial protection despite adenosine A1 receptor blockade.
abstract_id: PUBMED:11605996
Protection of IB-MECA against myocardial stunning in conscious rabbits is not mediated by the A1 adenosine receptor. The goal of this study was to determine whether the protective effects of the A3AR agonist N6-(3-iodobenzyl)adenosine-5'-N-methylcarboxamide (IB-MECA) against myocardial stunning are mediated by the A1AR. Six groups of conscious rabbits underwent a sequence of six 4-minute coronary occlusion (O)/4-minute reperfusion (R) cycles for three consecutive days (days 1, 2, and 3). In vehicle-treated rabbits (group I), the recovery of systolic wall thickening (WTh) in the ischemic/reperfused region was markedly depressed on day 1, indicating the presence of severe myocardial stunning. On days 2 and 3, however, the recovery of systolic WTh was markedly accelerated, indicating the presence of late ischemic preconditioning (PC). When rabbits were pretreated with the A1AR agonist 2-chloro-N6-cyclopentyladenosine (CCPA, 100 microg/kg i.v.) or with IB-MECA (100 microg/kg i.v.) 10 min prior to the first sequence of O/R cycles on day 1 (group III and V, respectively), the recovery of systolic WTh was markedly accelerated compared to vehicle-treated animals (reflected as an approximately 48% decrease in the total deficit of systolic WTh). The magnitude of the protection afforded by adenosine receptor agonists was equivalent to that provided by late ischemic PC. Pre-treating rabbits with the A1AR antagonist N-0861 completely blocked both the hemodynamic and the cardioprotective effects of CCPA (group IV). However, the same dose of N-0861 did not block the cardioprotective actions of IB-MECA (group VI). Importantly, N-0861 did not influence the degree of myocardial stunning in the absence of PC (group II) and it did not block the development of late ischemic PC. Taken together, these results provide conclusive evidence that the cardioprotective effects of IB-MECA are not mediated via the A1AR, supporting the concept that activation of A3ARs prior to an ischemic challenge provides protection against ischemia/reperfusion injury.
abstract_id: PUBMED:8293574
Ca2+ preconditioning elicits a unique protection against the Ca2+ paradox injury in rat heart. Role of adenosine. Repeated Ca2+ depletion and repletion of short duration, termed Ca2+ preconditioning (CPC), is hypothesized to protect the heart from lethal injury after exposing it to the Ca2+ paradox (Ca2+ PD). Hearts were preconditioned with five cycles of Ca2+ depletion (1 minute) and Ca2+ repletion (5 minutes). These hearts were then subjected to Ca2+ PD, ie, one cycle of Ca2+ depletion (10 minutes) and Ca2+ repletion (10 minutes). Hearts subjected to the Ca2+ PD underwent rapid necrosis, and myocytes were severely injured. CPC hearts showed a remarkable preservation of cell structure; ie, 65% of the cells were normal in CPC hearts compared with 0% in the Ca2+ PD hearts. LDH release was significantly reduced in CPC hearts compared with Ca2+ PD hearts (2.45 +/- 0.18 and 8.02 +/- 0.7 U/min/g, respectively). ATP contents of CPC hearts were less depleted compared with the Ca2+ PD hearts (5.9 +/- 0.8 and 3.0 +/- 0.16 μmol/g dry weight, respectively). Addition of the adenosine A1 receptor agonist R-phenylisopropyl adenosine before and during Ca2+ PD provided protection similar to that in CPC hearts, whereas the nonselective adenosine A1 receptor antagonist, 8-(p-sulfophenyl)-theophylline, blocked the beneficial effects of CPC. CPC-mediated protection was aborted when hearts subjected to CPC were treated with pertussis toxin (the guanine nucleotide or G-protein inhibitor). The present study suggests that Ca2+ preconditioning confers significant protection against the lethal injury of Ca2+ PD in rat hearts. Cardioprotection appears to result from adenosine release during preconditioning and by Gi-protein-modulated mechanisms.
abstract_id: PUBMED:11133228
Adenosine A1 receptor activation reduces reactive oxygen species and attenuates stunning in ventricular myocytes. Reactive oxygen species (ROS) formation following brief periods of ischemia or hypoxia is thought to be the underlying cause of myocardial stunning. Adenosine A1 receptor activation prior to ischemia/hypoxia attenuates stunning, although the mechanism for this effect remains unknown. Isolated rat ventricular myocytes loaded with the ROS-sensitive indicator dichlorofluorescin were subjected to 30 min of glucose-free hypoxia followed by reoxygenation. Intracellular ROS increased approximately 175% (from pre-hypoxic levels) during reoxygenation while cell shortening decreased approximately 50%. In myocytes pretreated with the adenosine A1 agonist 2-chloro-N6-cyclopentyladenosine (CCPA), reoxygenation-induced ROS formation was attenuated by 40% and stunning was attenuated by 50% (compared to untreated myocytes). The mitochondrial KATP channel opener diazoxide mimicked the effects of CCPA. Pretreatment with the mitochondrial KATP channel blocker 5-hydroxydecanoate, or the non-selective KATP channel blocker glibenclamide, blocked the effects of CCPA. These results suggest that adenosine A1 receptor activation attenuates stunning by reducing ROS formation. These effects of A1 receptor activation appear to be dependent on the opening of KATP channels.
abstract_id: PUBMED:15271662
Adenosine A1/A2a receptor agonist AMP-579 induces acute and delayed preconditioning against in vivo myocardial stunning. The purpose of this study was to determine whether the adenosine A1/A2a receptor agonist AMP-579 induces acute and delayed preconditioning against in vivo myocardial stunning. Regional stunning was produced by 15 min of coronary artery occlusion and 3 h of reperfusion (RP) in anesthetized open-chest pigs. In acute protection studies, animals were pretreated with saline, low-dose AMP-579 (15 microg/kg iv bolus 10 min before ischemia), or high-dose AMP-579 (50 microg/kg iv at 14 microg/kg bolus + 1.2 microg.kg(-1).min(-1) for 30 min before coronary occlusion). The delayed preconditioning effects of AMP-579 were evaluated 24 h after administration of saline vehicle or high-dose AMP-579 (50 microg/kg iv). Load-insensitive contractility was assessed by measuring regional preload recruitable stroke work (PRSW) and PRSW area. Acute preconditioning with AMP-579 dose dependently improved regional PRSW: 129 +/- 5 and 100 +/- 2% in high- and low-dose AMP-579 groups, respectively, and 78 +/- 5% in the control group at 3 h of RP. Administration of the adenosine A1 receptor antagonist 8-cyclopentyl-1,3-dipropylxanthine (0.7 mg/kg) blocked the acute protective effect of high-dose AMP-579, indicating that these effects are mediated through A1 receptor activation. Delayed preconditioning with AMP-579 significantly increased recovery of PRSW area: 64 +/- 5 vs. 33 +/- 5% in control at 3 h of RP. In isolated perfused rat heart studies, kinetics of the onset and washout of AMP-579 A1 and A2a receptor-mediated effects were distinct compared with those of other adenosine receptor agonists. The unique nature of the adenosine agonist AMP-579 may play a role in its ability to induce delayed preconditioning against in vivo myocardial stunning.
abstract_id: PUBMED:8461527
Adenosine and the stunned heart. Adenosine is one agent under investigation as a therapeutic intervention of myocardial stunning. Adenosine caused numerous effects on the cardiovascular system through its interaction with A1 and A2 receptors. We investigated adenosine A1 receptor mediated mechanisms of cardiac protection in the stunned rat myocardium. Previous studies showed that both adenosine and R-phenylisopropyladenosine (PIA), an A1 receptor agonist, prolonged the time to onset of ischemic contracture in ischemic isolated rat hearts. Phenylaminoadenosine, an A2 receptor agonist, did not have any effect, while receptor antagonists blocked adenosine and PIA action. Direct attenuation of the effects of myocardial stunning was observed by altering levels of interstitial fluid adenosine. Our laboratory has shown that administration of erythro-9(2-hydroxy-3-nonyl) adenine (EHNA; an adenosine deaminase inhibitor) to dogs subjected to left anterior descending coronary artery (LAD) occlusion followed by reperfusion results in dramatic increases in ischemic levels of interstitial fluid adenosine and postischemic myocardial function. Using a similar model in dogs, we have shown that exogenous intracoronary adenosine (50 micrograms/kg per min) augmented postischemic recovery of function, as assessed by significant enhancement (p < 0.01) of systolic wall thickness (7.0 +/- 3.0 pretreatment vs -5.7 +/- 1.7 controls). These data support the role for an adenosine A1 receptor mediated mechanism for protection against myocardial stunning.
abstract_id: PUBMED:8319338
Glibenclamide antagonizes adenosine A1 receptor-mediated cardioprotection in stunned canine myocardium. Background: The main objective of the present study was to determine the role of adenosine in the development of myocardial stunning following multiple, brief periods of coronary artery occlusion as well as the subtype of adenosine receptor (A1 or A2) involved. A second objective was to determine if there was an interaction between the adenosine A1 receptor and the ATP-dependent K channel (KATP).
Methods And Results: The effects of the selective adenosine A1 receptor antagonist 8-cyclopentyl-1,3-dipropylxanthine (DPCPX) and agonist cyclopentyladenosine (CPA), the selective A2 receptor agonist CGS 21680, and the KATP channel blocker glibenclamide on myocardial stunning produced by repetitive coronary artery occlusions were studied in barbital-anesthetized dogs. Regional segment function was measured with sonomicrometry. Under control conditions, six 5-minute periods of coronary occlusion interspersed with 10-minute periods of reperfusion and ultimately followed by 2 hours of reperfusion produced regional segment dysfunction. Pretreatment with intravenous infusion of CPA (2.0 micrograms.kg-1.min-1) improved percent segment shortening throughout reperfusion, whereas pretreatment with DPCPX (1.0 mg/kg i.v. bolus) significantly worsened the recovery of postischemic contractile function. In contrast, neither DPCPX nor CPA had any effect on the recovery of contractile function when administered before the second coronary occlusion. Furthermore, pretreatment with CGS 21680 (0.2 micrograms.kg-1.min-1) did not affect the recovery of percent segment shortening. In addition, pretreatment with a low dose of glibenclamide (0.1 mg/kg) had no effect on percent segment shortening by itself but completely abolished the beneficial effect of CPA. Importantly, the effects of the various agents on percent segment shortening were independent of difference in systemic hemodynamics, collateral blood flow, or ischemic bed size.
Conclusions: These results suggest that stimulation of myocardial adenosine A1 receptors, particularly when induced by the initial coronary artery occlusion, is cardioprotective during repetitive, brief periods of coronary artery occlusion and that these beneficial actions may be partially mediated via a glibenclamide-sensitive mechanism, possibly opening of myocardial KATP channels.
Answer: The effects of OG-VI, a nucleoside/nucleotide mixture, on stunned myocardium in dogs have been studied to determine whether adenosine A1 receptors are involved in its mechanism of action. OG-VI is composed of inosine, sodium 5'-guanylate, cytidine, uridine, and thymidine. It has been observed to limit myocardial stunning in dogs (PUBMED:9225340; PUBMED:9589188).
In studies where dogs were anesthetized and subjected to left anterior descending coronary artery ligation followed by reperfusion, the infusion of OG-VI or its constituents was started before the ischemia. The myocardial contractile function was assessed by measuring the percentage of myocardial segment shortening (%SS). The results showed that %SS, which was decreased by ischemia, was almost completely restored by OG-VI and its inosine and sodium 5'-guanylate mixture (IG), but not by cytidine, uridine, and thymidine mixture (CUT). The cardioprotective effect of OG-VI on stunned myocardium was still observed even in the presence of 8-cyclopentyl-1,3-dipropylxanthine (DPCPX), a selective adenosine A1 receptor antagonist, indicating that adenosine A1 receptors are not involved in the mechanism of OG-VI's protective effect (PUBMED:9225340).
Further investigation confirmed that OG-VI improves the recovery of %SS during reperfusion after brief ischemia in a dose-dependent manner, and this effect is not mediated by stimulation of adenosine A1 receptors. Pretreatment with DPCPX worsened the recovery of %SS during reperfusion after ischemia in both the saline and the OG-VI groups. However, the enhancement of %SS recovery during OG-VI reperfusion was observed in the presence of DPCPX, reinforcing the conclusion that the effect of OG-VI is not brought about by stimulation of adenosine A1 receptors (PUBMED:9589188).
In contrast, other studies have shown that adenosine and adenosine A1 receptor activation can reduce postischemic ventricular dysfunction (myocardial stunning) through various mechanisms, including improved myocardial phosphorylation potential and reduction of reactive oxygen species formation (PUBMED:7677544; PUBMED:11133228). However, these findings do not apply to the effects of OG-VI, as the studies specifically examining OG-VI indicate that its beneficial effects on stunned myocardium are independent of adenosine A1 receptor activation. |
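To make the contractile endpoint in these studies concrete, the sketch below computes percent segment shortening (%SS) from end-diastolic and end-systolic segment lengths, the standard sonomicrometry definition. This is an illustrative aside, not part of the cited abstracts, and the segment lengths are hypothetical.

# Percent segment shortening (%SS), the contractile index reported in
# PUBMED:9225340 and PUBMED:9589188. EDL/ESL are end-diastolic and
# end-systolic segment lengths; the example values are hypothetical.
def percent_segment_shortening(edl_mm: float, esl_mm: float) -> float:
    """%SS = (EDL - ESL) / EDL * 100."""
    return (edl_mm - esl_mm) / edl_mm * 100.0

print(percent_segment_shortening(12.0, 10.2))  # ~15%: normal shortening
print(percent_segment_shortening(12.0, 11.5))  # ~4%: depressed ("stunned")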
Instruction: Diagnosis of diabetes using hemoglobin A1c: should recommendations in adults be extrapolated to adolescents?
Abstracts:
abstract_id: PUBMED:21195416
Diagnosis of diabetes using hemoglobin A1c: should recommendations in adults be extrapolated to adolescents? Objective: To compare test performance of hemoglobin A1c (HbA1c) for detecting diabetes mellitus/pre-diabetes for adolescents versus adults in the United States.
Study Design: Individuals were defined as having diabetes mellitus (fasting plasma glucose [FPG] ≥ 126 mg/dL; 2-hour plasma glucose (2-hr PG) ≥ 200 mg/dL) or pre-diabetes (100 ≤ FPG < 126 mg/dL; 140 ≤ 2-hr PG < 200 mg/dL). HbA1c test performance was evaluated with receiver operator characteristic (ROC) analyses.
Results: Few adolescents had undiagnosed diabetes mellitus (n = 4). When assessing FPG to detect diabetes, an HbA1c of 6.5% had sensitivity rates of 75.0% (30.1% to 95.4%) and 53.8% (47.4% to 60.0%) and specificity rates of 99.9% (99.5% to 100.0%) and 99.5% (99.3% to 99.6%) for adolescents and adults, respectively. Additionally, when assessing FPG to detect pre-diabetes, an HbA1c of 5.7% had sensitivity rates of 5.0% (2.6% to 9.2%) and 23.1% (21.3% to 25.0%) and specificity rates of 98.3% (97.2% to 98.9%) and 91.1% (90.3% to 91.9%) for adolescents and adults, respectively. ROC analyses suggested that HbA1c is a poorer predictor of diabetes mellitus (area under the curve, 0.88 versus 0.93) and pre-diabetes (FPG area under the curve 0.61 versus 0.74) for adolescents compared with adults. Performance was poor regardless of whether FPG or 2-hr PG measurements were used.
Conclusions: Use of HbA1c for diagnosis of diabetes mellitus and pre-diabetes in adolescents may be premature, until information from more definitive studies is available.
abstract_id: PUBMED:22443833
Using hemoglobin A1c for prediabetes and diabetes diagnosis in adolescents: can adult recommendations be upheld for pediatric use? The obesity epidemic has resulted in more young people having high-risk profiles for the development of type 2 diabetes. Screening to promote earlier diagnosis and treatment of type 2 diabetes is of significant importance, as untreated disease leads to metabolic, microvascular, and macrovascular complications. However, the choice of screening methodology in adolescents is controversial, and implementation of screening protocols is not uniform. Expert panels have recommended the use of glycated hemoglobin (A1c) for the diagnosis of prediabetes and diabetes, based on the facts that the A1c assay has technical advantages and correlates well with the risk of microvascular complications of diabetes. However, these recommendations are based strictly on data from adult studies and lack any input based on pediatric research. The pediatric research that has been published on the topic indicates that using adult cutoff points for A1c values to predict prediabetes or diabetes significantly underestimates the prevalence of these conditions in the pediatric and adolescent population. Therefore, we call for further investigation of the role of A1c for the diagnosis of prediabetes and diabetes in adolescents before its adoption as a principal diagnostic method in pediatric populations. We contend that a more comprehensive diabetes evaluation, along with A1c, remains necessary for screening adolescents at high risk for prediabetes and type 2 diabetes. Collaborative multicentered studies of prediabetes and type 2 diabetes in the obese pediatric population are especially needed to determine the A1c cutoff points, as well as other diagnostic measures, that best predict diabetes-related comorbid conditions later in life.
abstract_id: PUBMED:35315990
The remission phase in adolescents and young adults with newly diagnosed type 1 diabetes mellitus: prevalence, predicting factors and glycemic control during follow-up. Objective: There is little data about the remission phase in adolescents and young adults with newly diagnosed type 1 diabetes mellitus (T1D). The aims of this study were to determine the prevalence of remission and its predicting factors among adolescents and young adults with newly diagnosed T1D and to assess the association between remission and long-term glycemic control in this population.
Methods: This is a longitudinal and retrospective study including 128 type 1 diabetic patients aged between 12 and 30 years at diabetes onset. Clinical, biological and therapeutic features were collected at diagnosis and for 5 years after diagnosis. Remission was defined by an HbA1c < 6.5% with a daily insulin dose < 0.5 IU/kg/day.
Results: Twenty-three patients (18%) experienced a remission. The peak of remission prevalence was at 6 months after diabetes diagnosis. An insulin dose at discharge <0.8 IU/kg/day was independently associated with remission (p=0.03, adjusted OR [CI 95%] = 0.2 [0.1-0.9]). A low socioeconomic level was independently associated with non remission (p=0.02, adjusted OR [CI 95%] = 4.3 [1.3-14.3]). HbA1c was significantly lower during the first five years of follow-up in remitters. The daily insulin dose was significantly lower during the first four years of follow-up in remitters.
Conclusion: Occurrence of remission in adolescents and young adults with newly diagnosed T1D is associated with better glycemic control and lower insulin requirements during the first 5 years of follow-up. A lower initial dose of insulin was associated with a higher percentage of remission.
abstract_id: PUBMED:37667123
Burden of diabetes attributable to dietary cadmium exposure in adolescents and adults in China. At present, the health risk assessment of cadmium exposure has become a major focus of environmental health research. However, there is still a lack of systematic research on the burden of diabetes (DM) attributable to dietary cadmium exposure in adolescents and adults in China. Using the top-down method, the blood cadmium level (B-Cd) of Chinese adolescents and adults from 2001 to 2023 was combined with the relative risk (RR) of cadmium-induced diabetes to calculate the population attribution score (PAF). Subsequently, PAF was used to assess the disease burden (DB) of diabetes caused by cadmium exposure, expressed in disability adjusted life years (DALYs), and attribution analysis was carried out for cadmium exposure from different sources. The average blood cadmium concentration in Chinese adolescents and adults was 1.54 ± 1.13 µg/L, and the burden of DM attributable to cadmium exposure was 56.52 (44.81, 70.33) × 105 DALYs. The contribution rate of dietary cadmium exposure was 59.78%, and the burden of DM attributable to dietary cadmium exposure was 337.86 (267.85, 420.42) × 108 DALYs. In addition, the highest blood cadmium concentrations were found in Henan, Shanxi, and Jiangxi provinces, while the highest burden of DM attributable to cadmium exposure was found in Jiangsu, Henan, and Guangdong provinces. Cadmium exposure is a risk factor for DM, and we need to take comprehensive action to reduce the burden of DM attributable to dietary cadmium from health, economic, and social perspectives.
abstract_id: PUBMED:36546602
Greater Telehealth Use Results in Increased Visit Frequency and Lower Physician Related-Distress in Adolescents and Young Adults With Type 1 Diabetes. Background: Type one diabetes (T1D) management is challenging for adolescents and young adults (AYAs) due to physiological changes, psychosocial challenges, and increasing independence, resulting in increased diabetes distress and hemoglobin A1c (HbA1c). Alternative care models that engage AYAs and improve diabetes-related health outcomes are needed.
Methods: A 15-month study evaluated an adaptation of the Colorado Young Adults with T1D (CoYoT1) Care model. CoYoT1 Care includes person-centered care, virtual peer groups, and physician training delivered via telehealth. AYAs (aged 16-25 years) were partially randomized to CoYoT1 or standard care, delivered via telehealth or in-person. As the study was ending, the COVID-19 pandemic forced all AYAs to transition to primarily telehealth appointments. This secondary analysis compares changes in clinic attendance, T1D-related distress, HbA1c, and device use between those who attended more than 50% of diabetes clinic visits via telehealth and those who attended more sessions in-person throughout the course of the study.
Results: Out of 68 AYA participants, individuals (n = 39, 57%) who attended most (>50%) study visits by telehealth completed more diabetes care visits (3.3 visits) than those (n = 29, 43%) who primarily attended visits in-person (2.5 visits; P = .007). AYAs who primarily attended visits via telehealth maintained stable physician-related distress, while those who attended more in-person visits reported increases in physician-related distress (P = .03).
Conclusions: Greater usage of telehealth improved AYA engagement with their care, resulting in increased clinic attendance and reduced physician-related diabetes distress. A person-centered care model delivered via telehealth effectively meets the needs of AYAs with T1D.
abstract_id: PUBMED:34069897
Perceptions of Family-Level Social Factors That Influence Health Behaviors in Latinx Adolescents and Young Adults at High Risk for Type 2 Diabetes. Given that health behaviors occur within the context of familial social relationships, a deeper understanding of social factors that influence health behaviors in Latinx families is needed to develop more effective diabetes prevention programming. This qualitative study identified perceived family-level social factors that influence health behaviors in Latinx adolescents (12-16 years; N = 16) and young adults (18-24 years; N = 15) with obesity and explored differences in perceptions across sex and age. Participants completed an in-depth interview that was recorded, transcribed, and coded using thematic content analysis. Emergent themes central to health behaviors included: perceived parental roles and responsibilities, perceived family social support for health behaviors, and familial social relationships. Mom's role as primary caregiver and dad's role as a hard worker were seen as barriers to engaging in health behaviors among adolescent females and young adults, males and females. Adolescents perceived receiving more support compared to young adults and males perceived receiving more support compared to females. Health behaviors in both age groups were shaped through early familial social interactions around physical activity. These insights suggest that traditional gender roles, social support, and social interaction around health behaviors are critical components for family-based diabetes prevention programs in high-risk Latinx youth and young adults.
abstract_id: PUBMED:35498002
The Role of Urate in Cardiovascular Risk in Adolescents and Young Adults With Hypertension, Assessed by Pulse Wave Velocity. Background: Urate is increasingly recognized as a cardiovascular risk factor. It has been associated with hypertension, metabolic syndrome, obesity, chronic kidney disease and diabetes. Its prognostic role is less clear. The aim of our study was to evaluate the association between serum urate and pulse wave velocity, a measure of arterial stiffness in hypertensive adolescents and young adults.
Methods: 269 adolescents and young adults with hypertension were included in the study. In all participants, anthropometric, blood pressure, pulse wave velocity and serum urate measurements were made. Variables were compared between sexes, between participants with or without obesity, and between those with or without elevated urate.
Results: In multiple regression analysis with urate as the dependent variable, gender and diastolic pressure were found to be statistically significant. Differences in urate levels were found between boys and girls (p < 0.001) and between obese and non-obese participants (p < 0.001); however, pulse wave velocity did not differ between the hyper- and eu-uricemic groups (p = 0.162).
Conclusion: Associations between urate, gender, diastolic blood pressure and obesity were confirmed; however, no significant associations between pulse wave velocity and urate were detected.
abstract_id: PUBMED:29292842
Hemoglobin A1c and diagnosis of diabetes. The prevalence of diabetes is increasing markedly worldwide, especially in China. Hemoglobin A1c is an indicator of mean blood glucose concentrations and plays an important role in the assessment of glucose control and cardiovascular risk. In 2010, the American Diabetes Association included HbA1c ≥6.5% into the revised criteria for the diagnosis of diabetes. However, the debate as to whether HbA1c should be used to diagnose diabetes is far from being settled and there are still unanswered questions regarding the cut-off value of HbA1c for diabetes diagnosis in different populations and ethnicities. This review briefly introduces the history of HbA1c from discovery to diabetes diagnosis, key steps towards using HbA1c to diagnose diabetes, such as standardization of HbA1c measurements and controversies regarding HbA1c cut-off points, and the performance of HbA1c compared with glucose measurements in the diagnosis of diabetes.
abstract_id: PUBMED:26684497
Parental resolution and the adolescent's health and adjustment: The case of adolescents with type 1 diabetes. This study examines the association between parents' resolution of their adolescent child's diagnosis of type 1 diabetes and the health and mental adjustment of the adolescents themselves. Parents of 75 adolescents with type 1 diabetes were interviewed using the Reaction to Diagnosis Interview. Parents and adolescents completed questionnaires regarding the child's physical health, self-management of the disease, and behavioral and emotional problems. Physicians reported adolescents' HbA1c levels. Results showed that adolescents whose fathers were resolved with the diagnosis exhibited better diabetes self-management and adolescents whose mothers were resolved with the diagnosis exhibited fewer internalizing and externalizing problems. The findings highlight the different role of mothers and fathers in the treatment of adolescents with diabetes and provide a basis for clinical intervention that focuses not only on adolescent health, but also on parental state of mind regarding the resolution with the disease.
abstract_id: PUBMED:32550154
Initial experiences of adolescents and young adults with type 1 diabetes and high-risk glycemic control after starting flash glucose monitoring - a qualitative study. Purpose: This study explored early experiences with a flash glucose monitoring system among adolescents and young adults with type 1 diabetes and high-risk glycemic control.
Methods: Adolescents and young adults with high-risk glycemic control (HbA1c ≥ 75 mmol/mol (9.0%) in the previous 6 months) who had recently commenced on flash glucose monitoring as part of a trial took part in a semi-structured interview exploring their experiences with the technology. All interviews were recorded, transcribed and analyzed using an inductive approach.
Results: Fifteen interviews were conducted. Overall, participants enjoyed flash glucose monitoring and planned to continue using their system. Key findings included that flash glucose monitoring reduced diabetes management burden and increased glucose monitoring. Other impacts of flash glucose monitoring use included perceived improved mood and energy, increased capacity for physical activity and less parental conflict. While participants reported healthier glycemic control, their mean interstitial glucose level remained above the target range of 3.9-10.0 mmol/L (70-180 mg/dL) over the first month of flash glucose monitoring. Common challenges included premature sensor loss and decreased scanning over the first month of use.
Conclusions: Flash glucose monitoring may be an acceptable self-management tool to increase monitoring frequency in adolescents and young adults with type 1 diabetes and high-risk glycemic control, with the potential to improve long-term glycemic control. Initial support efforts should focus on practical strategies to prolong sensor wear and motivate frequent scanning as well as education on interpreting glucose data and making informed treatment decisions to maximize the benefits of this technology.
Answer: The diagnosis of diabetes using hemoglobin A1c (HbA1c) in adults may not be directly extrapolatable to adolescents. Studies have shown that using adult HbA1c cutoff points to predict prediabetes or diabetes significantly underestimates the prevalence of these conditions in the pediatric and adolescent population (PUBMED:22443833). Furthermore, research indicates that HbA1c is a poorer predictor of diabetes mellitus and pre-diabetes in adolescents compared with adults, with receiver operator characteristic (ROC) analyses suggesting lower predictive accuracy for adolescents (PUBMED:21195416). Therefore, it is recommended that more definitive studies be conducted to provide information on the use of HbA1c for the diagnosis of diabetes mellitus and pre-diabetes in adolescents before it is adopted as a principal diagnostic method in this age group (PUBMED:21195416).
Additionally, the debate on whether HbA1c should be used to diagnose diabetes is ongoing, with unanswered questions regarding the cut-off value of HbA1c for diabetes diagnosis in different populations and ethnicities (PUBMED:29292842). Given these considerations, it is suggested that a more comprehensive diabetes evaluation, along with HbA1c, remains necessary for screening adolescents at high risk for prediabetes and type 2 diabetes (PUBMED:22443833).
In conclusion, while HbA1c is a valuable tool for diagnosing diabetes in adults, its application in adolescents requires further investigation and potentially different recommendations to ensure accurate diagnosis and management of diabetes in this younger population. |
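As an illustrative aside (not part of the cited abstracts), the sketch below shows the sensitivity/specificity arithmetic underlying the test-performance figures in PUBMED:21195416. The 2x2 counts are hypothetical; they are chosen only so the adolescent point estimates (75.0% sensitivity, 99.9% specificity at an HbA1c cutoff of 6.5%) are reproduced, and the study's actual denominators are not given here.

# Sensitivity and specificity of an HbA1c cutoff against an FPG-defined
# reference standard, as in PUBMED:21195416. Counts are hypothetical.
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)

tp, fn, fp, tn = 3, 1, 1, 999  # hypothetical 2x2 table for adolescents
print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # 75.0%
print(f"specificity = {specificity(tn, fp):.1%}")  # 99.9%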
Instruction: Can combined 18F-FDG-PET and dynamic contrast-enhanced MRI predict behavior of desmoid tumors in patients with familial adenomatous polyposis?
Abstracts:
abstract_id: PUBMED:22965401
Can combined 18F-FDG-PET and dynamic contrast-enhanced MRI predict behavior of desmoid tumors in patients with familial adenomatous polyposis? Background: Desmoid tumors associated with familial adenomatous polyposis show variable behavior; about 10% grow relentlessly, resulting in severe morbidity or mortality. Investigations that could identify the minority of desmoid tumors that behave aggressively would allow these tumors to be treated early and spare the majority of patients who have more benign disease from unnecessary intervention.
Objective: The aim of this study was to investigate whether imaging the tumor metabolic-vascular phenotype by modern methods predicts growth.
Design: This is a prospective case series study.
Settings: The study was conducted at a tertiary center specializing in familial adenomatous polyposis and desmoid disease.
Patients: Nine patients with familial adenomatous polyposis (4 male, mean age 39 years) with desmoid tumor underwent 18F-FDG-PET and dynamic contrast-enhanced MRI. Standard MRI was repeated a year later to assess tumor growth.
Main Outcome Measures: The primary outcome measured was the correlation between 18F-FDG-PET and dynamic contrast-enhanced MRI parameters and subsequent desmoid growth.
Results: Failed intravenous access precluded dynamic contrast-enhanced MRI in 1 female patient. Thirteen desmoid tumors (4 intra-abdominal, 2 extra-abdominal, 7 abdominal wall; mean area, 68 cm²) were analyzed in the remaining 8 patients. Two patients died before follow-up MRI. Five tumors decreased in size, 3 increased in size, and 3 remained stable after a year. Significant correlation (Spearman rank correlation, significance at 5%) existed between maximum standardized uptake value and kep (r = -0.56, p = 0.04), but not with other vascular parameters (Ktrans (r = -0.47, p = 0.09); ve (r = -0.11, p = 0.72); integrated area under the gadolinium-time curve at 60 seconds (r = -0.47, p = 0.10)). There was no significant difference in the maximum standardized uptake value or dynamic contrast-enhanced MRI parameters (Ktrans, ve, kep, integrated area under the gadolinium-time curve at 60 seconds) between the tumors that grew or decreased in size or between the tumor sites. However, the vascular metabolic ratio (maximum standardized uptake value/Ktrans) was significantly different for tumor site (p = 0.001) and size (p = 0.001, 1-way ANOVA).
Limitations: This investigation is limited because of its exploratory nature and small patient numbers.
Conclusions: Although not predictive for tumor behavior, some correlations existed between dynamic contrast-enhanced MRI and 18F-FDG-PET parameters. Vascular metabolic ratio may provide further information on tumor behavior; however, this needs to be evaluated with further larger studies.
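As an illustrative aside (not part of the abstract), the sketch below shows the shape of the correlation analysis reported above: a Spearman rank correlation between SUVmax and the DCE-MRI parameter kep, plus the vascular metabolic ratio, which the abstract defines as SUVmax/Ktrans. All tumor values are hypothetical placeholders.

# Spearman correlation and vascular metabolic ratio (SUVmax / Ktrans),
# mirroring the analysis in PUBMED:22965401. Values are hypothetical.
from scipy.stats import spearmanr

suv_max = [2.1, 3.4, 1.8, 4.0, 2.7, 3.1, 2.4]         # hypothetical SUVmax
k_ep = [0.95, 0.60, 1.10, 0.42, 0.80, 0.66, 0.70]     # hypothetical kep (1/min)
k_trans = [0.25, 0.18, 0.30, 0.12, 0.22, 0.20, 0.27]  # hypothetical Ktrans

rho, p = spearmanr(suv_max, k_ep)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

vmr = [s / k for s, k in zip(suv_max, k_trans)]  # vascular metabolic ratio
print(["%.1f" % v for v in vmr])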
abstract_id: PUBMED:17709361
Uptake characteristics of fluorodeoxyglucose (FDG) in deep fibromatosis and abdominal desmoids: potential clinical role of FDG-PET in the management. In this preliminary report, we explore the uptake pattern of fluorodeoxyglucose (FDG) in fibromatosis and hypothesize the potential clinical role of FDG-positron emission tomography (PET) in the management of this benign but locally aggressive heterogeneous group of soft-tissue tumours. Five patients were studied (two men and three women, age range 23-35 years), among whom were three cases of deep musculoskeletal fibromatosis, one of abdominal fibromatosis (abdominal desmoid) associated with familial adenomatous polyposis (Gardner's syndrome) and one case of both deep musculoskeletal fibromatosis and abdominal desmoid. The FDG uptake in the lesions was heterogeneous in four cases and relatively homogeneous in one case. The uptake ranged from low to moderate grade with areas or foci of relatively avid FDG uptake. The maximum standardized uptake value (SUV(max)) observed was up to 4.7; the avidity probably related to the biological aggressiveness and tendency for recurrence, characteristic of fibromatosis. A dual-point FDG-PET carried out over four active foci in two cases registered an increase in SUV ranging from 6.93% to 25.85% (mean 19.28%). Treatment monitoring with chemotherapy was carried out in two cases: the reduction in FDG uptake was consistent with the histological evidence of fibrosis and reduction in mitosis. Hence, a baseline FDG-PET can serve a valuable role in monitoring the effect of systemic pharmacotherapy in patients with recurrent progressive disease after unsuccessful local-regional treatment. The findings in this report can be extrapolated and have implications for studying the utility of FDG-PET in defining aggressiveness, guiding biopsy and defining excision site in a large tumour and in monitoring therapy in fibromatosis.
abstract_id: PUBMED:22690287
Breast fibromatosis response to tamoxifen: dynamic MRI findings and review of the current treatment options. Breast fibromatosis is a rare entity responsible for 0.2% of all solid breast tumors. It has been associated with scars, pregnancy, implants, and familial adenomatous polyposis. We present an interesting case of breast fibromatosis in a 29-year-old woman which encroached upon her saline implant and subsequently filled its cavity once the implant was removed. The patient was put on tamoxifen therapy, and at 14-month follow-up there was a significant decrease in the size of the mass. Dynamic MRI images are offered for review and current treatment options are discussed.
abstract_id: PUBMED:22215881
Imaging assessment of desmoid tumours in familial adenomatous polyposis: is state-of-the-art 1.5 T MRI better than 64-MDCT? Objective: Desmoid tumour is a common extraintestinal manifestation of patients with familial adenomatous polyposis (FAP) who have undergone prophylactic colectomy. We aimed to determine whether MRI provides equivalent or better assessment of desmoid tumours than CT, the current first-line investigation.
Methods: Following ethics approval and informed consent, FAP patients with known desmoid tumour underwent contrast-enhanced 64-slice multidetector CT (MDCT) and 1.5 T MRI (incorporating T(1) weighted, T(2) weighted, short tau inversion-recovery and T(1) weighted with contrast, axial, sagittal and coronal sequences). The number, site, size, local extent, tumour signal intensity and desmoid-to-aorta enhancement ratio were analysed.
Results: MRI identified 23 desmoid tumours in 9 patients: 9 intra-abdominal desmoid (IAD) tumours, 10 abdominal wall desmoid (AWD) tumours and 4 extra-abdominal desmoid (EAD) tumours. CT identified only 21 desmoids; 1 EAD and 1 AWD were not identified. The two modalities were equivalent in terms of defining local extent of desmoid. Five IAD tumours involved the bowel, six caused ureteric compression and none compromised the proximal superior mesenteric artery. There was no difference in median desmoid size: 56.7 cm(2) (range 2-215 cm(2)) on MDCT and 56.3 cm(2) (3-215 cm(2)) on MRI (p=0.985). The mean MRI enhancement ratio, at 1.12 (standard deviation 0.43), was greater than the CT enhancement ratio, which was 0.48 (0.16) (p<0.0001). High signal intensity on T(2) MRI was associated with increased MRI enhancement ratio (p=0.006).
Conclusions: MRI is at least equivalent (and may be superior) to MDCT for the detection of desmoid tumours in FAP. Coupled with the advantage of avoiding radiation, it should be considered as the primary imaging modality for young FAP patients.
abstract_id: PUBMED:33322514
Desmoid Tumors Characteristics, Clinical Management, Active Surveillance, and Description of Our FAP Case Series. (1) Background: desmoid tumors (DTs) are common in patients with familial adenomatous polyposis (FAP). An active surveillance approach has been recently proposed as a valuable alternative to immediate treatment in some patients. However, no clear indication exists on which patients are suitable for active surveillance, how to establish the cut-off for an active treatment, and which imaging technique or predictive factors should be used during the surveillance period. (2) Results: We retrospectively analyzed 13 FAP patients with DTs. A surveillance protocol consisting of scheduled follow-up evaluations (depending on tumor location and tissue thickening) with abdominal computed tomography (CT) scan/magnetic resonance imaging (MRI) allowed prompt intervention in 3/11 aggressive intra-abdominal DTs, while sparing further interventions in the remaining cases, despite worrisome features detected in three patients. Moreover, we identified a possible predictive marker of tumor aggressiveness, i.e., the "average monthly growth rate" (AMGR), which could distinguish patients with very aggressive/life-threatening tumor behavior (AMGR > 0.5) who need immediate active treatment, from those with stable DTs (AMGR < 0.1) in whom follow-up assessments could be delayed. (3) Conclusion: surveillance protocols may be a useful approach for DTs. Further studies on larger series are needed to confirm the usefulness of periodic CT scan/MRI and the value of AMGR as a prognostic tool to guide treatment strategies.
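The abstract gives the AMGR decision cut-offs (>0.5 treat immediately, <0.1 delay follow-up) but not the exact formula. The Python sketch below assumes one plausible reading, fractional size change per month between two scans, and applies the published cut-offs; both the definition and the example values are assumptions for illustration only.

```python
# Minimal sketch of the AMGR triage rule from the abstract. The cut-offs
# come from the abstract; the fractional-growth-per-month definition of
# AMGR is an assumption, since the abstract does not state the formula.

def amgr(size_t0: float, size_t1: float, months: float) -> float:
    """Assumed definition: fractional tumor size change per month."""
    return (size_t1 - size_t0) / size_t0 / months

def triage(rate: float) -> str:
    if rate > 0.5:
        return "very aggressive: immediate active treatment"
    if rate < 0.1:
        return "stable: follow-up assessments could be delayed"
    return "intermediate: continue scheduled surveillance"

# Hypothetical tumour growing from 40 to 70 cm^2 over 2 months.
rate = amgr(40.0, 70.0, 2.0)
print(f"AMGR = {rate:.2f} -> {triage(rate)}")
```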
abstract_id: PUBMED:29564660
Diagnostic imaging and CEUS findings in a rare case of Desmoid-type fibromatosis. A case report. Desmoid-type fibromatosis (DF), also known as aggressive fibromatosis, is a locally aggressive benign fibroblastic neoplasm that can infiltrate or recur but cannot metastasize. It is rare, with an estimated annual incidence of two to four new cases per million people. Most DFs occur sporadically, but it may also be associated with the hereditary syndrome familial adenomatous polyposis. Treatment is necessary when the disease is symptomatic, especially in case of compression of critical structures. When possible, surgical resection is the treatment of choice; however, recurrence is common. Due to the high rate of recurrence, imaging plays an important role not only in diagnosis, but also in the management of DF. Although there are a number of studies describing CT and MRI findings of DF, there is no description of contrast-enhanced ultrasound findings.
abstract_id: PUBMED:19966610
Polishing the crystal ball: knowing genotype improves ability to predict desmoid disease in patients with familial adenomatous polyposis. Introduction: Desmoid disease occurs in one third of patients with familial adenomatous polyposis. Patients may be protected by changing surgical strategy. We designed a formula to predict desmoid risk and tested the value of adding genotype to the formula.
Methods: A desmoid risk factor was calculated by summing points awarded for gender (male = 1, female = 3), extracolonic manifestations (nil = 1, one = 2, >one = 3), and family history of desmoids (negative = 1, one relative = 2, more than one relative = 3). Performance of the score with and without genotype (5' of codon 1309 = 1, codons 1309-1900 = 2, 3' of codon 1900 = 3) was analyzed.
Results: There were 839 patients (138 desmoids) without genotype and 154 (30 desmoids) with genotype. The mean desmoid risk factor score of patients without desmoids (no genotype) was 4.7 (+/-1.4 SD) and for patients with desmoid the desmoid risk factor was 6.0 (+/-1.7, P < 0.001). Corresponding data for patients with genotype was 6.1 +/- 1.3 (no desmoids) and 8.4 +/- 1.8 with desmoids (P < 0.001). Of patients without genotype, 648 patients were at low risk and 9.9% had desmoid disease, 178 patients were at medium risk and 34% had desmoids, and 10 patients were at high risk and all had desmoids. Of those with genotype information, 83 patients were at low risk and 5% had desmoids, 52 patients were at medium risk and 21% had desmoids, and 18 patients were at high risk and 83% had desmoids.
Conclusion: The desmoid risk factor identifies patients with various levels of risk for developing desmoid disease, and can be used to plan surgical strategies designed to minimize desmoid risk.
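The scoring rule in this abstract is explicit enough to express directly in code. The minimal Python sketch below reproduces it as written; the low/medium/high band boundaries are not reported in the abstract, so only the raw score is returned, and the example patient is hypothetical.

```python
# Minimal sketch of the desmoid risk factor from the abstract: sum points
# for gender, extracolonic manifestations, family history, and optionally
# genotype. Band cut-offs for low/medium/high risk are not given in the
# abstract, so only the raw score is computed.

def desmoid_risk_factor(male: bool,
                        extracolonic_manifestations: int,
                        relatives_with_desmoids: int,
                        genotype_region: str | None = None) -> int:
    score = 1 if male else 3
    score += 1 if extracolonic_manifestations == 0 else (
        2 if extracolonic_manifestations == 1 else 3)
    score += 1 if relatives_with_desmoids == 0 else (
        2 if relatives_with_desmoids == 1 else 3)
    if genotype_region is not None:
        # 5' of codon 1309 = 1, codons 1309-1900 = 2, 3' of codon 1900 = 3
        score += {"5prime_1309": 1, "1309_1900": 2, "3prime_1900": 3}[genotype_region]
    return score

# Hypothetical patient: female, one extracolonic manifestation,
# one affected relative, mutation between codons 1309 and 1900.
print(desmoid_risk_factor(False, 1, 1, "1309_1900"))  # 3 + 2 + 2 + 2 = 9
```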
abstract_id: PUBMED:26275868
Long-term outcome of sporadic and FAP-associated desmoid tumors treated with high-dose selective estrogen receptor modulators and sulindac: a single-center long-term observational study in 134 patients. The aim of this study was to evaluate the outcome of long-term conservative treatment with sulindac and high-dose selective estrogen receptor modulators (SERMs) for sporadic and FAP-associated desmoid tumors. Desmoids are very rare tumors in the general population but occur frequently in FAP patients, being encountered in 23-38%. Treatment of desmoids remains highly controversial, since response cannot be predicted and they are prone to recurrence. This study included all desmoid patients who were treated and followed at our institution and had completed at least 1 year of treatment. Response was defined as stable size or regression of desmoid size between two CT or MRI scans. A total of 134 patients were included. 64 (47.8%) patients had a confirmed diagnosis of FAP; 69 (51.5%) patients were sporadic. Overall, 114 (85.1%) patients showed regressive or stable desmoid size. Patients with a previous history of multiple desmoid-related surgeries showed a less favorable response. The mean time to reach at least stable size was 14.9 (±9.1) months. After regression or stabilization, medication was tapered in 69 (60.5%) of the treated patients, with only one long-term recurrence after >10 years. The results of this study fortify the role of sulindac and high-dose SERMs as an effective and safe treatment for both sporadic and FAP-associated desmoid tumors. While invasive treatment frequently results in high recurrence rates, high morbidity and high mortality, this conservative treatment is successful in most patients. The recurrence rate is negligible, with no desmoid-related mortality in this large series. Therefore, surgical resection, especially for mesenteric desmoids, should be deferred in favor of this convincingly effective, well-tolerated regimen.
abstract_id: PUBMED:19908178
Current diagnosis and treatment of desmoid tumours in patients with familial adenomatous polyposis - the surgical view Based on a representative selection of relevant references, the aim of this study was to reflect the change in the algorithm for the surgical management of desmoid tumours (DT) in cases of accompanying familial adenomatous polyposis (FAP). The main focus is on the basics of differential treatment, including additional considerations on epidemiology, diagnosis, outcome and follow-up. DT are rare benign tumours that do not metastasise but tend to invade locally. In contrast to the general population, DT in patients with FAP are more common, show a different pattern of tumour sites and cause considerable morbidity and mortality. Most DT occur in the abdominal cavity and account for the majority of serious problems. Genetic disposition and hormonal factors, as well as prior surgical trauma, are considered causative for the development of DT. Characteristic symptoms are abdominal pain, nausea and vomiting, but DT may also present as an acute abdomen. CT scanning determines the localisation and extent of the tumour. Treatment includes various strategies of medication, surgical resection and radiation. Data concerning diagnostic and therapeutic procedures are based only on studies with small case series or case reports. Therefore, data from international multicentre studies are necessary for improving the prognosis and developing reliable and stringent guidelines.
abstract_id: PUBMED:35508462
Desmoid tumors located in the abdomen or associated with adenomatous polyposis: French intergroup clinical practice guidelines for diagnosis, treatment, and follow-up (SNFGE, FFCD, GERCOR, UNICANCER, SFCD, SFED, SFRO, ACHBT, SFR). Introduction: Desmoid tumor (DT) of the abdomen is a challenging and rare disease. The level of evidence available to guide its treatment is relatively low; however, recent publications of prospective studies have helped to refine its management.
Methods: This document is a summary of the French intergroup guidelines produced by all French medical and surgical societies involved in the management of DT located in the abdomen or associated with adenomatous polyposis. Recommendations are graded in four categories (A, B, C and D), according to the level of evidence found in the literature through January 2021.
Results: When the diagnosis of DT is suspected, a percutaneous biopsy should be performed when possible. A molecular analysis looking for pathogenic mutations of the CTNNB1 and APC genes should be systematically performed. When a somatic pathogenic variant of the APC gene is present, intestinal polyposis should be sought. Because of a high rate of spontaneous regression, non-complicated DT should first be managed with active surveillance, with MRI within 2 months of diagnosis to assess the dynamics of tumor growth. The treatment decision must be discussed in an expert center, favoring the less toxic treatments, which can include broad-spectrum tyrosine kinase inhibitors or conventional chemotherapy (methotrexate-vinblastine). Surgery, outside the context of an emergency, should only be considered for favorable locations in an expert center.
Conclusion: French guidelines for DT management were developed to help offer the best personalized therapeutic strategy in daily clinical practice, as the DT therapeutic landscape is growing more complex. Each individual case must be discussed within a multidisciplinary expert team.
Answer: The study described in abstract PUBMED:22965401 aimed to investigate whether imaging the tumor metabolic-vascular phenotype using modern methods such as 18F-FDG-PET and dynamic contrast-enhanced MRI could predict the growth of desmoid tumors in patients with familial adenomatous polyposis. The study was a prospective case series conducted at a tertiary center specializing in familial adenomatous polyposis and desmoid disease. It involved nine patients with familial adenomatous polyposis who underwent 18F-FDG-PET and dynamic contrast-enhanced MRI, with standard MRI repeated a year later to assess tumor growth.
The results showed that there was a significant correlation between maximum standardized uptake value (SUV) and k(ep) (r = -0.56, p = 0.04) from the 18F-FDG-PET and dynamic contrast-enhanced MRI parameters, but not with other vascular parameters. However, there was no significant difference in the maximum SUV or dynamic contrast-enhanced MRI parameters between the tumors that grew or decreased in size or between the tumor sites. Interestingly, the vascular metabolic ratio (maximum SUV/K(trans)) was significantly different for tumor site (p = 0.001) and size (p = 0.001, 1-way ANOVA).
The study concluded that although there were some correlations between dynamic contrast-enhanced MRI and 18F-FDG-PET parameters, these were not predictive for tumor behavior. The authors suggested that the vascular metabolic ratio may provide further information on tumor behavior, but emphasized that this needs to be evaluated with further larger studies due to the exploratory nature and small patient numbers of their investigation.
In summary, the combined use of 18F-FDG-PET and dynamic contrast-enhanced MRI showed some correlation with desmoid tumor behavior in patients with familial adenomatous polyposis, but these imaging techniques were not conclusively predictive of tumor behavior according to the study's findings. Further research with larger patient cohorts is needed to determine the potential predictive value of these imaging modalities. |
Instruction: Changes in labor regulations during economic crises: does deregulation favor health and safety?
Abstracts:
abstract_id: PUBMED:21483219
Changes in labor regulations during economic crises: does deregulation favor health and safety? Objectives: The regulatory changes in Korea during the national economic crisis 10 years ago and in the current global recession were analyzed to understand the characteristics of deregulation in labor policies.
Methods: Data for this study were derived from the Korean government's official database for administrative regulations and a government document reporting deregulation.
Results: A great deal of business-friendly deregulation took place during both economic crises. Occupational health and safety were the main targets of deregulation in both periods, and the regulation of employment promotion and vocational training was preserved relatively intact. The sector having to do with working conditions and the on-site welfare of workers was also deregulated greatly during the former economic crisis, but not in the current global recession.
Conclusions: Among the three main areas of labor policy, occupational health and safety was the most vulnerable to deregulation during Korea's economic crises. A probable reason for this is that the impact of deregulation on the health and safety of workers would not be immediately disclosed after the policy change.
abstract_id: PUBMED:37897178
Barriers to the implementation of occupational health and safety regulations in Lebanon. This study aims to explore the barriers that prevent the implementation of occupational health and safety regulations in Lebanon. A qualitative approach was adopted including a document analysis of the available legal documents pertaining to occupational health and safety at the national level and ten in-depth interviews with professionals in the field of occupational health and safety in Lebanon. Our findings show that the implementation of the occupational health and safety regulations in Lebanon is hindered by several barriers including the lack of a holistic legal framework, lack of promotion of a health and safety culture at work, insufficient number of labor inspectors, insufficient training for labor inspectors, lack of necessary tools and equipment, lack of an adequate documentation system, hierarchy within the Ministry of Labor, weak compliance, and the influence of the informal sector.
abstract_id: PUBMED:29169305
The Vulnerability of Occupational Health and Safety to Deregulation: The Weakening of Information Regulations during the Economic Crisis in Korea. This study was conducted to investigate the causes and consequences of the vulnerability of occupational health and safety (OHS) regulations to deregulation during a period of economic crisis in the Republic of Korea. Analysis of Korea's national regulation database revealed that the vulnerability of OHS regulations to deregulation was related to the fact that OHS policy included many regulations without direct deregulatory impacts on workers. The area most affected by this characteristic was information regulation, which provided a legal basis for the government's monitoring and inspection of OHS activities. The massive relaxation of information regulation has the potential to weaken government oversight and to tempt businesses to hide industrial accidents. Since changes in regulations without direct deregulatory impacts are not easily identifiable by workers, careful monitoring of deregulation is necessary to prevent policy impacts harmful to workers' health and safety.
abstract_id: PUBMED:23606055
Effects of social, economic, and labor policies on occupational health disparities. Background: This article introduces some key labor, economic, and social policies that historically and currently impact occupational health disparities in the United States.
Methods: We conducted a broad review of the peer-reviewed and gray literature on the effects of social, economic, and labor policies on occupational health disparities.
Results: Many populations such as tipped workers, public employees, immigrant workers, and misclassified workers are not protected by current laws and policies, including workers' compensation or Occupational Safety and Health Administration enforcement of standards. Local and state initiatives, such as living wage laws and community benefit agreements, as well as multiagency law enforcement contribute to reducing occupational health disparities.
Conclusions: There is a need to build coalitions and collaborations to command the resources necessary to identify, and then reduce and eliminate occupational disparities by establishing healthy, safe, and just work for all.
abstract_id: PUBMED:25261023
Organized labor and the origins of the Occupational Safety and Health Act. New Solutions is republishing this 1991 article by Robert Asher, which reviews the history of organized labor's efforts in the United States to secure health and safety protections for workers. The 1877 passage of the Massachusetts factory inspection law and the implementation of primitive industrial safety inspection systems in many states paralleled labor action for improved measures to protect workers' health and safety. In the early 1900s labor was focusing on workers' compensation laws. The New Deal expanded the federal government's role in worker protection, supported at least by the Congress of Industrial Organizations (CIO), but challenged by industry and many members of the U.S. Congress. The American Federation of Labor (AFL) and the CIO backed opposing legal and inspection strategies in the late 1940s and through the 1950s. Still, by the late 1960s, several unions were able to help craft the Occupational Safety and Health Act of 1970 and secure new federal protections for U.S. workers.
abstract_id: PUBMED:36186879
Dual process model of farmers' mindfulness and sustainable economic behavior: Mediating role of mental health and emotional labor. Mindful awareness of our interconnection with the natural environment could help to redeem our lost environmentally entrenched identity and help us to act more sustainably, closing the predictable gaps between mindfulness and sustainable behavior. More precisely, we propose that mindful attentiveness may be essential to establishing sustainable economic behavior through understanding emotional labor and enhanced mental health. Likewise, with ever-rising concern about mental health and emotional labor, the recent industrialization and commoditization of agricultural products have stressed the need for mindfulness, making attention to the sustainable economic behavior of farmers imminent. Hence, this study not only explores the connection between mindfulness and sustainable economic behavior but also examines the mediating role of emotional labor and the mental health of farmers in China. Farmers were selected because a farmer's mindful awareness, emotional labor, and mental health can significantly contribute to sustainable economic behavior and foster a connection with the natural environment. Data from 358 responses were analyzed using SPSS-AMOS. The results revealed that mindfulness, mental health, and emotional labor have a significant connection with the sustainable economic behavior of farmers in China. The results also indicated that mental health and emotional labor mediate between mindfulness and sustainable economic behavior. The results set the tone for policy-makers to create awareness among all stakeholders about the importance of mindfulness in helping farmers manage their emotional labor and mental health for better, sustainable performance outcomes.
abstract_id: PUBMED:28758218
Trade associations and labor organizations as intermediaries for disseminating workplace safety and health information. Background: There has not been a systematic study of the nature and extent to which business and professional trade associations and labor organizations obtain and communicate workplace safety and health information to their members. These organizations can serve as important intermediaries and play a central role in transferring this information to their members.
Methods: A sample of 2294 business and professional trade associations and labor organizations in eight industrial sectors identified by the National Occupational Research Agenda was surveyed via telephone.
Results: A small percent of these organizations (40.9% of labor organizations, 15.6% of business associations, and 9.6% of professional associations) were shown to distribute workplace safety and health information to their members. Large differences were also observed between industrial sectors with construction having the highest total percent of organizations disseminating workplace safety and health information.
Conclusion: There appears to be significant potential to utilize trade and labor organizations as intermediaries for transferring workplace safety and health information to their members. Government agencies have a unique opportunity to partner with these organizations and to utilize their existing communication channels to address high risk workplace safety and health concerns.
abstract_id: PUBMED:28700187
Safety Regulations in the Healthcare Workplaces Law n. 81/2008 consolidates the regulations on health protection and safety in the workplace. It establishes the duty of risk management, the designation of a person responsible for prevention and protection services, the designation of a specialist doctor for the surveillance of workers and workplaces, and safety-focused training for all workers. In healthcare workplaces, a distinction is usually made between risks to workers' safety and risks to workers' health. Article 43 of law n. 81/2008 also provides a legal guarantee both for other people attending the workplace and for patients. Indeed, article 1 of law n. 24/2017 confirms that safety of care is part of the right to health.
abstract_id: PUBMED:26709286
Status of Occupational Health and Safety and Related Challenges in Expanding Economy of Tanzania. Introduction: Occupational health and safety is related with economic activities undertaken in the country. As the economic activities grow and expand, occupational injuries and diseases are more likely to increase among workers in different sectors of economy such as agriculture, mining, transport, and manufacture. This may result in high occupational health and safety services demand, which might be difficult to meet by developing countries that are prioritizing economic expansion without regard to their impact on occupational health and safety.
Objective: To describe the status of occupational health and safety in Tanzania and outline the challenges in provision of occupational health services under the state of an expanding economy.
Findings: Tanzania's economy is growing steadily, with growth being driven by communications, transport, financial intermediation, construction, mining, agriculture, and manufacturing. Along with this growth, hazards emanating from work in all sectors of the economy have increased and diversified. The workers exposed to these hazards suffer from illness and injuries, and yet they are not provided with adequate occupational health services. Services are scanty and limited to a few enterprises that can afford them. Existing laws and regulations are not comprehensive enough to cover the entire population. Implementation of legislation is weak and does not protect the workers.
Conclusions: Most Tanzanians are not covered by the occupational health and safety law and do not access occupational health services. Thus an occupational health and safety services strategy, backed by legislation and provided with the necessary resources (competent experts, financial and technological resources), is a necessity in Tanzania. The existing legal provisions require major modifications to meet international requirements and standards. OHS regulations and legislation need refocusing, revision, and strengthening to cover the entire working population. Capacities should be improved through training and research to enable enforcement. Finally, the facilities and resources should be made available for OHS services to match the growing economy.
abstract_id: PUBMED:14748053
Review of the Federal Child Labor Regulations: updating hazardous and prohibited occupations. Background: Child labor regulations are intended to protect our young and most vulnerable workers. While regulations are not the only strategy for providing a safe and healthy work environment for young workers, they are the first step in setting the community standard to protect them. The world of work has changed since the first child labor laws were issued in 1939. The U.S. Department of Labor (DOL) enlisted assistance from the National Institute for Occupational Safety and Health (NIOSH) to evaluate the current list of hazardous and prohibited occupations and to make recommendations for changes to the regulations.
Methods: A group of national experts reviewed the recommendations made by NIOSH. The consensus review process identified areas of agreement, disagreement, and priorities for the DOL's Wage and Hour Division. Gaps in the hazardous orders not addressed in the recommendations were also identified.
Results: This commentary describes the background of the Hazardous Orders under the Fair Labor Standards Act, the evaluation by NIOSH, the process used to review it, and recommendations for further action.
Conclusions: The NIOSH report is a valuable resource for anyone working to prevent occupational injuries-among adult and young workers alike. It is crucial that the DOL use the guidance provided by NIOSH and the Young Workers Safety and Health Network to improve the protections for young workers.
Answer: The analysis of changes in labor regulations during economic crises suggests that deregulation does not favor health and safety. During economic crises, such as the national economic crisis in Korea 10 years ago and the global recession, a significant amount of business-friendly deregulation occurred, with occupational health and safety (OHS) being the main targets of deregulation in both periods (PUBMED:21483219). The deregulation of OHS policies, particularly the weakening of information regulations, can lead to reduced government oversight and may tempt businesses to hide industrial accidents (PUBMED:29169305). This indicates that the impact of deregulation on the health and safety of workers may not be immediately disclosed after the policy change, making OHS particularly vulnerable during times of economic crisis.
Furthermore, the implementation of OHS regulations can be hindered by several barriers, such as the lack of a holistic legal framework, insufficient number of labor inspectors, and weak compliance, as seen in the context of Lebanon (PUBMED:37897178). These barriers can prevent the effective enforcement of OHS regulations, which is crucial for protecting workers' health and safety.
Historically, labor, economic, and social policies have impacted occupational health disparities in the United States, with many populations not protected by current laws and policies (PUBMED:23606055). This suggests that deregulation, which often involves the relaxation or removal of such protections, could exacerbate these disparities.
In summary, the evidence from the provided abstracts indicates that deregulation during economic crises does not favor health and safety. Instead, it can lead to a weakening of OHS regulations and enforcement, potentially increasing the vulnerability of workers to health and safety risks. |
Instruction: Should obese patients lose weight before receiving a kidney transplant?
Abstracts:
abstract_id: PUBMED:9293872
Should obese patients lose weight before receiving a kidney transplant? Background: The results of renal transplantation in obese recipients have been controversial, with some reports finding increased morbidity prohibitive and others finding increased morbidity acceptable. We attempted to determine whether obese patients in extreme excess of their ideal body weight should undergo transplantation.
Methods: The study population included 127 obese (body mass index >30 kg/m2) patients who were compared with a matched nonobese control group (body mass index <27 kg/m2) of 127 recipients with similar demographics. There were no significant differences between the groups according to donor source, recipient race or sex, retransplants, transplant percent reactive antibodies, cause of renal failure, or hypertension. However, significantly more obese patients had a pretransplant history of angina (11.2% vs. 3.2%, P=0.02) or a previous myocardial infarction (5.6% vs. 0.8%, P=0.04).
Results: The mean follow-up was 58.9+/-40 (range 3-170) months. Nonobese patients enjoyed a significantly (P=0.0002) greater patient survival (89% vs. 67%) at 5 years and suffered only about half the number of deaths (25 vs. 46) during the period of observation. Cardiac disease was the leading cause of death (39.1%) in the obese group. Patient death had a major impact on graft survival because there were no differences between the groups when death with graft function was censored from the analysis. There were no significant differences between the groups in delayed graft function, acute rejection, chronic rejection, length of hospital stay, operative blood loss, or mean serum creatinine up to 5 years. However, obese patients experienced significantly (P=0.0001) more complications per patient (3.3 vs. 2.2) and a greater incidence (P=0.0003) of posttransplant diabetes (12% vs. 2%). Similar cyclosporine blood levels were observed in obese recipients even though they were receiving 0.75-2 mg/kg/day less cyclosporine than the nonobese recipients.
Conclusions: Outcome differences in obese renal transplant patients were primarily due to a higher mortality resulting from cardiac events. Obesity seems to have little effect on immunologic events, long-term graft function, or cyclosporine delivery. Aggressive pretransplant screening for ischemic heart disease is essential to identify an especially high-risk subgroup of obese patients. Although it would seem prudent to recommend weight reduction to a body mass index <30 kg/m2 for all patients before transplant, these data suggest that obese patients with a history of cardiac disease should not be transplanted until weight reduction has been accomplished.
abstract_id: PUBMED:27040156
Meal replacements as a strategy for weight loss in obese hemodialysis patients. Introduction: There is currently limited evidence on the use or safety of meal replacements as part of a low- or very-low-calorie diet in patients with renal insufficiency; however, these are occasionally used under dietetic supervision in clinical practice to achieve the desired weight loss for kidney transplant. This case series reports on the safety and efficacy of a weight loss practice utilizing meal replacements among hemodialysis patients who needed to lose weight for kidney transplant. Methods: Five hemodialysis patients were prescribed a modified low-calorie diet (950 kcal and 100 g protein per day) comprising three meal replacements (Optifast®), one main meal, and two low-potassium fruits per day. Dietary requirements and restrictions were met for all participants. Dialysis prescriptions, weight (predialysis and postdialysis), interdialytic weight gain, biochemistry, and medications were monitored during the study period for up to 12 months. Findings: Participants were aged between 46 and 61 years, and the median time on the low-calorie diet was 364 days. Phosphate binders were temporarily ceased for one participant for reasons unrelated to this program and no other safety concerns were recorded. The low-calorie diet resulted in energy deficits ranging from 1170 kcal to 2160 kcal, and all participants lost weight (median 7% [range 5.2%-11.4%]). The most dramatic weight change appeared to occur by week 12, and declining adherence led to erratic weight change thereafter. Discussion: This modified low-calorie diet was safe and effective to use in this population. Meal replacements are a useful weight loss strategy in hemodialysis patients, therefore offering an alternative to usual weight loss protocols.
abstract_id: PUBMED:26436923
Influence of weight gain during the first year after kidney transplantation on the survival of grafts and patients Background: After receiving a kidney allograft, patients tend to gain weight, acquiring the risks associated with overweight and obesity.
Aim: To compare the evolution during 10 years after transplantation of patients who gained more than 15% of their initial weight during the first year after receiving the graft with those who did not experience this increase.
Material And Methods: Cohort study of 182 patients transplanted in a single hospital between 1981 and 2003. Demographic data, weight gain during the first year, drugs used, complications and evolution of patients and grafts were recorded.
Results: Seventy-two patients gained more than 15% of their weight during the first year. These patients were discharged after receiving the graft with a lower serum creatinine than their counterparts (1.46 ± 0.71 and 1.97 ± 1.74 mg/dl, respectively, p = 0.02). Ten-year mortality with a functioning kidney was higher among weight gainers (25% vs. 12.7%, p = 0.03). No other differences were observed between groups.
Conclusions: Patients who gained more than 15% of their initial weight during the first year after receiving a kidney graft have a higher 10-year mortality with a functioning kidney.
abstract_id: PUBMED:27816867
Effectiveness of weight loss intervention in highly-motivated people. A variety of approaches have been implemented to address the rising obesity epidemic, with limited success. I consider the success of weight loss efforts among a group of highly motivated people: those required to lose weight in order to qualify for a life-saving kidney transplantation. Out of 246 transplantation centers, I identified 156 (63%) with explicit body mass index (BMI) requirements for transplantation, ranging from 30 to 50 kg/m2. Using the United States national registry of transplant candidates, I examine outcomes for 29,608 obese deceased-donor transplant recipients between 1990 and 2010. I use value-added models to deal with potential endogeneity of center choice, in addition to correcting for sample selection bias arising from focusing on transplant recipients. Outcome variables measure BMI level and weight change (in BMI) between initial listing and transplantation. I hypothesize that those requiring weight loss to qualify for kidney transplantation will be most likely to lose weight. I find that the probability of severe and morbid obesity (BMI ≥ 35 kg/m2) decreases by 4 percentage points and the probability of patients achieving any weight loss increases by 22 percentage points at centers with explicit BMI eligibility criteria. Patients are also 13 percentage points more likely to accomplish clinically relevant weight loss of at least 5% of baseline BMI by transplantation at these centers. Nonetheless, I estimate an average decrease in BMI of only 1.7 kg/m2 for those registered at centers with BMI requirements. Further analyses suggest stronger intervention effects for patients whose BMI at listing exceeds thresholds as the distance from their BMI to the thresholds increases. Even under circumstances with great potential returns for weight loss, transplant candidates exhibit modest weight loss. This suggests that, even in high-stakes environments, weight loss remains a challenge for the obese, and altering individual incentives may not be sufficient.
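As a small worked example of the outcome definition used in this abstract, the Python sketch below checks whether a candidate achieved "clinically relevant" weight loss of at least 5% of baseline BMI between listing and transplantation. The patient values are hypothetical, invented purely for illustration.

```python
# Minimal sketch of the abstract's weight-change outcome: BMI change between
# listing and transplantation, flagged as clinically relevant when the loss
# is at least 5% of baseline BMI. All patient values are hypothetical.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

baseline_bmi = bmi(112.0, 1.75)     # ~36.6 kg/m^2 at listing
transplant_bmi = bmi(104.0, 1.75)   # ~34.0 kg/m^2 at transplantation

loss = baseline_bmi - transplant_bmi
relevant = loss >= 0.05 * baseline_bmi
print(f"BMI change: {loss:.1f} kg/m^2; clinically relevant: {relevant}")
```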
abstract_id: PUBMED:35386600
Weight Loss Challenges in Achieving Transplant Eligibility in Patients With Kidney Failure: A Qualitative Study. Rationale & Objective: Patients with kidney failure need kidney replacement therapy to maximize survival. Kidney transplant is a superior mode of kidney replacement therapy for most individuals with kidney failure. Patients with obesity often are not approved for kidney transplant until they lose sufficient weight, as obesity may complicate the surgical procedure, and the risk of graft loss increases with a higher body mass index. To help potential kidney transplant recipient candidates lose weight, further knowledge of their thoughts, feelings, and attitudes is needed.
Study Design: Qualitative study with semistructured interviews and an exploratory research design, guided by qualitative content analysis.
Setting & Participants: Patients at a hospital in Denmark required to lose weight to achieve kidney transplant eligibility.
Analytical Approach: From patients' responses, we identified descriptive themes using a phenomenological approach. The factors affecting outcomes were derived reflexively from these themes.
Results: Ten interviews were analyzed. Experiences of obesity and weight-loss attempts were described across 4 themes: (1) restrictions and exhaustion, (2) hope and hopelessness, (3) support and self-discipline, and (4) motivation based on severity. A major motivating factor for achieving weight loss in the studied group of patients was their declining kidney function and the fact that kidney transplant cannot be considered until sufficient weight loss is achieved.
Limitations: Thematic saturation was reached after an unexpectedly low number of participants. The patients were only interviewed once and over the phone.
Conclusions: Patients with obesity who are seeking kidney transplant need additional help with the dietary restrictions brought on by kidney disease. They need assistance bridging between a kidney-friendly diet and a sustainable diet that will ensure weight loss. These patients also express not wanting to feel alone in their weight-loss battle. They are looking for help and support to achieve weight loss.
abstract_id: PUBMED:24777919
Intensive weight-loss in dialysis: a personalized approach Unlabelled: Obesity is increasingly encountered in dialysis patients, who have difficulty losing weight. Several transplant centres require a BMI <30-35 kg/m² for waiting-list inclusion. Thus, losing weight becomes a must for young obese patients; however, the best policy to achieve it (if any) is not defined. The aim of the present case report is to suggest that tailored dialysis and intensive diets could be a successful combination that should be tested on a larger scale. A 56-year-old obese male patient (BMI 37.7 kg/m²) on daily home hemodialysis for 10 months (ESRD due to focal segmental glomerulosclerosis) started a coach-assisted qualitative ad libitum diet. The diet, alternating 8-week phases of rapid weight loss and maintenance, was based on a combination of different foods, chosen on account of their glycaemic index and biochemical properties. It was salt free, and olive oil was permitted in liberal quantities. Dialysis duration was increased to allow weight loss, and dialysate Na was increased to permit a strict low-sodium diet. Over a period of 21 months, the patient attained an 18.5 kg weight loss (50% of excess weight; BMI -6.3 kg/m²), reaching the goal of inclusion in a kidney transplant waiting list. Main metabolic data remained stable (pre-diet vs end of the diet period: albumin 3.5-3.8 g/dL; HCO3 26.1-24.8 mmol/L, discontinuing citrate) or improved (haemoglobin 11.4-12.1 g/dL, halving EPO dose; calcium 2.3-2.5 mmol/L; phosphate 1.5-1.5 mmol/L; PTHi 1718-251 pg/mL, reducing chelation).
Conclusion: Daily dialysis may allow enrolling obese hemodialysis patients in intensive weight loss programs, under strict clinical control.
abstract_id: PUBMED:21616769
The importance of body composition and dry weight assessments in patients with chronic kidney disease. Chronic volume overload is the major cause of hypertension and other cardiovascular morbidity in dialysis patients. One of the most important goals of physicians who take care of patients with chronic renal failure is to obtain near euvolemia or "dry body weight" in order to maintain or normalize blood pressure and prevent further cardiovascular events. In clinical practice, exact estimation of dry weight in hemodialysis patients remains a major challenge. Alterations in body composition, particularly malnutrition, are common in patients receiving long-term hemodialysis and contribute to a high mortality rate. In contrast, obesity - a known risk factor for cardiovascular morbidity and mortality - is prevalent amongst kidney allograft recipients in the long term after renal transplantation. Several technological tools and biochemical markers for estimation of plasma volume and body composition are available for clinical use. Our aim was to highlight the importance of control of body fluid volume and body composition in patients with chronic kidney disease and to describe the different methods available for such measurements.
abstract_id: PUBMED:32249367
Weight Loss After Bariatric Surgery in Morbidly Obese End-Stage Kidney Disease Patients as Preparation for Kidney Transplantation. Matched Pair Analysis in a High-Volume Bariatric and Transplant Center. Background: The number of morbidly obese kidney transplant candidates is growing. They have limited access to kidney transplantation and are at a higher risk of postoperative complications. Bariatric surgery is considered as a safe weight loss method in those patients.
Objectives: Matched pair analysis was designed to analyze the preparatory and postoperative weight loss after bariatric procedures in end-stage kidney disease (ESKD) and non-ESKD morbidly obese patients.
Methods: Twenty patients with ESKD underwent bariatric surgery in our Centre of Excellence for Bariatric and Metabolic Surgery between 2015 and 2019 (nine one-anastomosis gastric bypasses, nine Roux-en-Y gastric bypasses, and two sleeve gastrectomies). They were compared with matched pairs from a dataset of 1199 morbidly obese patients without ESKD. Data on demographic factors and comorbidities was recorded. BMI was obtained at the start of the preparatory period preceding the bariatric procedure, at the time of procedure, and during the 1-year follow-up.
Results: The ESKD and non-ESKD patients did not differ significantly in preoperative weight loss (13.00 ± 11.69 kg and 15.22 ± 15.96 kg, respectively, p = 0.619). During the 1-year follow-up, weight loss in the ESKD group was similar to that in the non-ESKD group. In the first 3 months, faster weight loss was observed in the ESKD group. Initial and follow-up BMI values did not differ significantly between groups. We demonstrated that obese patients with ESKD can lose weight as effectively as non-ESKD patients.
Conclusion: Morbidly obese ESKD patients have an equal weight loss to patients without ESKD. Bariatric surgery could improve access to kidney transplantation and may potentially improve transplantation outcomes of obese patients with ESKD.
abstract_id: PUBMED:31279263
Patient Perspectives on Weight Management for Living Kidney Donation. Background: Living kidney donors (LKDs) with obesity have increased perioperative risks and risk of end-stage renal disease after donation. Consequently, obesity serves as a barrier to donation, as many transplant centers encourage or require weight loss before donation for obese LKD candidates. Therefore, this study sought to assess patients' perspectives on weight management strategies before donation among obese LKD candidates. We hypothesized that willingness to participate in a weight loss program may be associated with donor-recipient relationship.
Materials And Methods: Obese (BMI ≥30 kg/m2) LKD candidates evaluated at a single institution from September 2017 to August 2018 were recruited. A survey was administered to assess LKD candidates' baseline exercise and dietary habits and their interest in weight management strategies for the purpose of donation approval. Participants were grouped by relationship to the recipient (close relatives: first-degree relatives or spouses [n = 29], compared with all other relationships [n = 21]). Descriptive statistics were used to summarize the data.
Results: 50 of 51 obese LKD candidates who were approached completed the survey. 90% of participants expressed willingness to lose weight if necessary to become eligible for donor nephrectomy. Compared with all other LKD candidates, close relatives were more likely to be interested in combined diet and exercise programs at our institution (P = 0.01).
Conclusions: Among obese LKD candidates, there was an interest in weight loss for the purposes of living kidney donation approval, particularly among close relatives of potential recipients. Future programs designed to promote weight management efforts for obese LKD candidates should be considered.
abstract_id: PUBMED:32730692
Body weight-based initial dosing of tacrolimus in renal transplantation: Is this an ideal approach? Background: Tacrolimus dosing immediately posttransplant is based on body weight. Recent studies have highlighted that the dosing of tacrolimus purely based on weight may not be appropriate, particularly in individuals who are obese.
Objectives: This study aimed to estimate the effect of body mass index (BMI) and the weight-based dosing on tacrolimus trough levels in recipients of renal transplants.
Design And Participants: This study was conducted on 400 of the 863 patients registered in the Salford, UK, renal transplant database between 2012 and 2019 who had complete and analysable datasets. Data were collected at baseline (first tacrolimus trough level after transplantation), after 1 month and 6 months posttransplantation. The cohort was split into three groups based on BMI (kg/m2; Group 1 ≤ 25, Group 2 > 25-30 and Group 3 > 30), which were compared with respect to tacrolimus dose, plasma levels and concentration/dose (C/D) ratio at the three time points.
Results: Patients in the higher BMI group (Group 3) had significantly higher baseline tacrolimus trough levels despite receiving a lower initiation dose per kilogram of body weight. After 1 and 6-months posttransplant, the higher BMI group were receiving a significantly lower tacrolimus dose relative to their body weight, with a significant negative correlation between body weight and tacrolimus/kg body weight. There was no adverse relationship evident between tacrolimus dosing or concentration and graft function.
Conclusions: Our study showed that standard dosing of tacrolimus based on body weight in individuals who were obese did not adversely affect their tacrolimus concentrations or transplant function.
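The concentration/dose (C/D) ratio and weight-based dosing described in this abstract can be illustrated with a short sketch. The definition assumed below (trough concentration divided by total daily dose) is the conventional one, since the abstract does not spell it out, and all patient values are invented.

```python
# Minimal sketch of the tacrolimus concentration/dose (C/D) ratio and the
# weight-based dose comparison described in the abstract. The C/D definition
# (trough concentration / total daily dose) is assumed, and all patient
# values are invented for illustration.

def cd_ratio(trough_ng_ml: float, daily_dose_mg: float) -> float:
    return trough_ng_ml / daily_dose_mg

def dose_per_kg(daily_dose_mg: float, weight_kg: float) -> float:
    return daily_dose_mg / weight_kg

# Hypothetical obese (Group 3, BMI > 30) vs normal-BMI (Group 1) recipient.
print(f"Group 3: C/D = {cd_ratio(9.5, 6.0):.2f}, "
      f"dose/kg = {dose_per_kg(6.0, 110.0):.3f} mg/kg/day")
print(f"Group 1: C/D = {cd_ratio(8.0, 7.0):.2f}, "
      f"dose/kg = {dose_per_kg(7.0, 70.0):.3f} mg/kg/day")
```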
Answer: The question of whether obese patients should lose weight before receiving a kidney transplant is addressed by several studies, which highlight the complexities and considerations involved in this decision.
One study found that obese renal transplant patients (body mass index >30 kg/m^2) had a significantly lower patient survival rate at 5 years compared to nonobese patients (body mass index <27 kg/m^2), primarily due to a higher mortality resulting from cardiac events. The study concluded that aggressive pretransplant screening for ischemic heart disease is essential, and it would seem prudent to recommend weight reduction to a BMI <30 kg/m^2, especially for obese patients with a history of cardiac disease (PUBMED:9293872).
Another study reported on the safety and efficacy of a modified low-calorie diet with meal replacements for hemodialysis patients needing to lose weight for kidney transplant. The diet was found to be safe and effective, suggesting that meal replacements could be a useful weight loss strategy for these patients (PUBMED:27040156).
Further research indicated that patients who gained more than 15% of their initial weight during the first year after receiving a kidney graft had a higher 10-year mortality with a functioning kidney. This suggests that weight gain post-transplantation can also have negative long-term effects (PUBMED:26436923).
A study examining the effectiveness of weight loss intervention in highly motivated people, such as those needing to lose weight for kidney transplantation, found that even with explicit BMI requirements for transplantation, the average decrease in BMI was only 1.7 kg/m^2. This suggests that weight loss remains a challenge even in high-stakes situations (PUBMED:27816867).
Qualitative research on patients required to lose weight for kidney transplant eligibility revealed that they need additional help with dietary restrictions and support in their weight-loss efforts. The study emphasized the importance of not feeling alone in the weight-loss process (PUBMED:35386600).
In summary, while weight loss before kidney transplantation can be beneficial in reducing post-transplant morbidity and mortality, particularly due to cardiac events, it remains a challenging endeavor. Supportive strategies, such as meal replacements and personalized dietary plans, may aid in achieving the necessary weight loss. However, the decision to recommend weight loss before transplantation should be individualized, taking into account the patient's overall health, motivation, and the potential risks associated with obesity in the context of kidney transplantation. |
Instruction: Do apparent diffusion coefficient measurements predict outcome in children with neonatal hypoxic-ischemic encephalopathy?
Abstracts:
abstract_id: PUBMED:28343728
Can Apparent Diffusion Coefficient Predict the Clinical Outcome in Drowned Children? Introduction: Pediatric cerebral hypoxic-ischemic injury frequently results in severe neurological outcome. Imaging with diffusion-weighted magnetic resonance imaging (DWi) demonstrates that the acute cerebral injury and apparent diffusion coefficient (ADC) allow the assessment of the severity of brain damage. The main objective was to examine if spatial distribution of reductions in ADC values is associated with clinical outcome in drowned children.
Methods: This is a retrospective study of 7 children (7 examinations) who suffered a hypoxic-ischemic event and underwent DWI. Seven subjects with normal DWI served as controls. The mean patient age was 4.88 ± 2.93 years and the male-to-female ratio was 5:2. The neurological outcome was divided into 2 categories: 4 children with apallic syndrome and 3 deaths. We analysed the differences between the drowned children and the control group regarding clinical data, DWI abnormalities, and ADC values.
Results: ADC values were significantly lower in the drowned children than in the control group in both the occipital grey matter (765.14 ± 65.47 vs 920.95 ± 69.62; P = .003) and the parietal grey matter (670.82 ± 233.99 vs 900.66 ± 92.72; P = .005). ADC values were also low in the precentral area (P = .044).
Conclusion: The ADC reduction may be useful for predicting poor outcome in drowned children and can be a valuable tool for clinical assessment.
abstract_id: PUBMED:37874257
Apparent diffusion coefficient values can predict neuromotor outcome in term neonates with hypoxic-ischaemic encephalopathy. Aim: To determine the apparent diffusion coefficient (ADC) in brain structures during the first 2 weeks of life and its relation with neurological outcome for hypoxic-ischaemic encephalopathy (HIE) in term neonates.
Methods: We retrospectively evaluated 56 term-born neonates. The ADC values were measured in 11 brain regions. The clinical outcomes, assessed at 2 years of age or later, were defined as normal outcome, mild disability and severe disability. Areas under the curve (AUCs) from ROC analysis were computed to predict the neurodevelopmental outcomes. The clinical outcomes were compared between favourable outcome and adverse outcome and also between normal outcome and unfavourable outcome.
Results: Thirty-four patients were judged as having a normal outcome, 10 as having mild disability and 12 as having severe disability. When the clinical outcomes were compared between favourable outcome and adverse outcome, the AUC in the 1st week was highest for the thalamus. When the clinical outcomes were compared between normal outcome and unfavourable outcome, the AUC in the 1st week was again highest for the thalamus.
Conclusion: The ADC values in the thalamus in the 1st week can predict the neurological outcome. The ADC values in the centrum semiovale in the 2nd week can be used to predict neurodevelopmental outcomes.
abstract_id: PUBMED:18842756
Do apparent diffusion coefficient measurements predict outcome in children with neonatal hypoxic-ischemic encephalopathy? Background And Purpose: Diffusion-weighted imaging (DWI) permits early detection and quantification of hypoxic-ischemic (HI) brain lesions. Our aim was to assess the predictive value of DWI and apparent diffusion coefficient (ADC) measurements for outcome in children with perinatal asphyxia.
Materials And Methods: Term neonates underwent MR imaging within 10 days after birth because of asphyxia. MR imaging examinations were retrospectively evaluated for HI brain damage. ADC was measured in 30 standardized brain regions and in visibly abnormal areas on DWI. In survivors, developmental outcome until early school age was graded into the following categories: 1) normal, 2) mildly abnormal, and 3) definitely abnormal. For analysis, category 3 and death (category 4) were labeled "adverse," 1 and 2 were "favorable," and 2-3 and death were "abnormal" outcome. Differences in outcome between infants with and without DWI abnormalities were analyzed by using chi-square tests. The nonparametric Mann-Whitney U test analyzed whether ADC values in visible DWI abnormalities correlated with age at imaging. Logistic regression analysis tested the predictive value for outcome of the ADC in each standardized brain region. Receiver operating characteristic analysis was used to find optimal ADC cutoff values for each region for the various outcome scores.
Results: Twenty-four infants (13 male) were included. Mean age at MR imaging was 4.3 days (range, 1-9 days). Seven infants died. There was no difference in outcome between infants with and without visible DWI abnormalities. Only ADC of the posterior limb of the internal capsule correlated with age. ADC in visibly abnormal DWI regions did not have a predictive value for outcome. Of all measurements performed, only the ADC in the normal-appearing basal ganglia and brain stem correlated significantly with outcome; low ADC values were associated with abnormal/adverse outcome, and higher ADC values, with normal/favorable outcome (basal ganglia: P = .03 for abnormal, P = .01 for adverse outcome; brain stem: P = .006 for abnormal, P = .03 for adverse outcome).
Conclusions: ADC values in normal-appearing basal ganglia and brain stem correlated with outcome, independently of all MR imaging findings including those of DWI. ADC values in visibly abnormal brain tissue on DWI did not show a predictive value for outcome.
abstract_id: PUBMED:33677143
Objective and Clinically Feasible Analysis of Diffusion MRI Data can Help Predict Dystonia After Neonatal Brain Injury. Background: Dystonia in cerebral palsy is debilitating but underdiagnosed precluding targeted treatment that is most effective if instituted early. Deep gray matter injury is associated with dystonic cerebral palsy but is difficult to quantify. Objective and clinically feasible identification of injury preceding dystonia could help determine the children at the highest risk for developing dystonia and thus facilitate early dystonia detection.
Methods: We examined brain magnetic resonance images from four- to five-day-old neonates after therapeutic hypothermia for hypoxic-ischemic encephalopathy at a tertiary care center. Apparent diffusion coefficient values in the striatum and thalamus were determined using a web-based viewer integrated with the electronic medical record (IBM iConnect Access). The notes of specialists in neonatal neurology, pediatric movement disorders, and pediatric cerebral palsy (physicians most familiar with motor phenotyping after neonatal brain injury) were screened for all subjects through age of five years for motor phenotype documentation.
Results: Striatal and thalamic apparent diffusion coefficient values significantly predicted dystonia with receiver operating characteristic areas under the curve of 0.862 (P = 0.0004) and 0.838 (P = 0.001), respectively (n = 50 subjects). Striatal apparent diffusion coefficient values less than 1.014 × 10⁻³ mm²/s provided 100% specificity and 70% sensitivity for dystonia. Thalamic apparent diffusion coefficient values less than 0.973 × 10⁻³ mm²/s provided 100% specificity and 80% sensitivity for dystonia.
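As an illustration of how cutoffs of this kind are derived, the following is a minimal sketch that selects, from an ROC curve, the threshold with the highest sensitivity among those achieving 100% specificity. It uses scikit-learn on synthetic ADC values; all numbers and array names are illustrative assumptions, not the study's data.

```python
# Minimal sketch: deriving an ADC cutoff at 100% specificity (synthetic data).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
adc_dystonia = rng.normal(0.90, 0.08, 10)      # hypothetical cases (x10^-3 mm^2/s)
adc_no_dystonia = rng.normal(1.10, 0.08, 40)   # hypothetical non-dystonia subjects

adc = np.concatenate([adc_dystonia, adc_no_dystonia])
label = np.concatenate([np.ones(10), np.zeros(40)])  # 1 = developed dystonia

# Lower ADC implies higher risk, so score by the negated value.
fpr, tpr, thresholds = roc_curve(label, -adc)
print(f"AUC = {roc_auc_score(label, -adc):.3f}")

# Among thresholds with zero false positives (specificity = 1), keep the
# one with the highest sensitivity, then undo the sign flip.
mask = fpr == 0
best = np.argmax(tpr[mask])
cutoff = -thresholds[mask][best]
print(f"ADC < {cutoff:.3f}: sensitivity {tpr[mask][best]:.2f}, specificity 1.00")
```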
Conclusions: Lower striatal and thalamic apparent diffusion coefficient values predicted dystonia in four- to five-day-old neonates who underwent therapeutic hypothermia for hypoxic ischemic encephalopathy. Objective and clinically feasible neonatal brain imaging assessment could help increase vigilance for dystonia in cerebral palsy.
abstract_id: PUBMED:24715056
Prognostic value of diffusion-weighted imaging summation scores or apparent diffusion coefficient maps in newborns with hypoxic-ischemic encephalopathy. Background: The diagnostic and prognostic assessment of newborn infants with hypoxic-ischemic encephalopathy (HIE) comprises, among other tools, diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC) maps.
Objective: To compare the ability of DWI and ADC maps in newborns with HIE to predict the neurodevelopmental outcome at 2 years of age.
Materials And Methods: Thirty-four term newborns with HIE admitted to the Neonatal Intensive Care Unit of Modena University Hospital from 2004 to 2008 were consecutively enrolled in the study. All newborns received EEG, conventional MRI and DWI within the first week of life. DWI was analyzed by means of summation (S) score and regional ADC measurements. Neurodevelopmental outcome was assessed with a standard 1-4 scale and the Griffiths Mental Developmental Scales - Revised (GMDS-R).
Results: When the outcome was evaluated with a standard 1-4 scale, the DWI S scores showed very high area under the curve (AUC) (0.89) whereas regional ADC measurements in specific subregions had relatively modest predictive value. The lentiform nucleus was the region with the highest AUC (0.78). When GMDS-R were considered, DWI S scores were good to excellent predictors for some GMDS-R subscales. The predictive value of ADC measurements was both region- and subscale-specific. In particular, ADC measurements in some regions (basal ganglia, white matter or rolandic cortex) were excellent predictors for specific GMDS-R with AUCs up to 0.93.
Conclusions: DWI S scores showed the highest prognostic value for the neurological outcome at 2 years of age. Regional ADC measurements in specific subregions proved to be highly prognostic for specific neurodevelopmental outcomes.
abstract_id: PUBMED:29379241
Comparison of fractional anisotropy and apparent diffusion coefficient among hypoxic ischemic encephalopathy stages 1, 2, and 3 and with nonasphyxiated newborns in 18 areas of brain. Purpose: To determine the area and extent of injury across hypoxic encephalopathy stages by diffusion tensor imaging (DTI), using apparent diffusion coefficient (ADC) and fractional anisotropy (FA) values, and to compare them with controls without any evidence of asphyxia; also, to correlate clinical hypoxia severity with significant changes in DTI parameters.
Materials And Methods: DTI was done in 50 cases at a median age of 12 days and in 20 controls at a median age of 7 days. FA and ADC were measured in several regions of interest (ROI). Continuous variables were analyzed using Student's t-test. Categorical variables were compared by Fisher's exact test. Comparison among multiple groups was done using analysis of variance (ANOVA) and the post hoc Bonferroni test.
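The multi-group comparison described here can be sketched with SciPy as a one-way ANOVA across stages followed by pairwise t-tests with a Bonferroni correction. The FA values and group sizes below are synthetic assumptions; this illustrates the named tests, not the study's analysis.

```python
# Sketch: one-way ANOVA across HIE stages + Bonferroni post hoc (synthetic FA values).
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(1)
groups = {
    "control": rng.normal(0.60, 0.05, 20),
    "HIE-I":   rng.normal(0.55, 0.05, 18),
    "HIE-II":  rng.normal(0.50, 0.05, 17),
    "HIE-III": rng.normal(0.42, 0.05, 15),
}

f_stat, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p:.4g}")

# Bonferroni: multiply each pairwise p-value by the number of comparisons.
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p_raw = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: adjusted p = {min(p_raw * len(pairs), 1.0):.4g}")
```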
Results: Abnormalities were more easily and accurately determined in ROIs with the help of FA and ADC values. Compared with controls, FA values were significantly decreased and ADC values significantly increased in cases, in ROIs including the right and left thalamus, basal ganglia, posterior limb of the internal capsule, cerebral peduncle, corticospinal tracts, and frontal, parietal, temporal and occipital regions (P < 0.05). The extent of injury was maximum in stage III. There was no significant difference between males and females.
Conclusion: Compared to conventional magnetic resonance imaging (MRI), the evaluation of FA and ADC values using DTI can determine the extent and severity of injury in hypoxic encephalopathy. It can be used for early determination of brain injury in these patients.
abstract_id: PUBMED:17903669
Apparent diffusion coefficient pseudonormalization time in neonatal hypoxic-ischemic encephalopathy. The apparent diffusion coefficient changes with time after hypoxic-ischemic brain injury. In this study, we quantitatively examined the relationship between the apparent diffusion coefficient and postnatal age for neonates with hypoxic-ischemic encephalopathy and poor outcome, and determined the postnatal age at which these values cannot be distinguished from those of neonates without hypoxic-ischemic encephalopathy (pseudonormalization time). Diffusion-weighted brain images were obtained from clinical scans of term neonates with hypoxic-ischemic encephalopathy and poor outcome (12 neonates, 23 scans) and from control subjects (30 neonates, 31 scans). The correlation between apparent diffusion coefficient and postnatal age was investigated for several brain regions. Pseudonormalization times were determined (1) from the intersection of the regression lines for the hypoxic-ischemic encephalopathy and control groups, as well as (2) from intrasubject apparent diffusion coefficient changes between two scans within a small subgroup. Pseudonormalization times from the regression ranged from 8.3 ± 1.9 days to 10.1 ± 2.1 days. Slightly (approximately 1 day) longer values were obtained from the intrasubject analysis. The results suggest that, although abnormally decreased apparent diffusion coefficient values may be evident from approximately 2 days to almost 1 week of postnatal age, abnormally elevated values may not be apparent until late in the second week of life.
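The first of the two pseudonormalization-time estimates (the intersection of the two regression lines) reduces to simple algebra once each group's ADC-versus-age line has been fitted, for instance with np.polyfit. The slopes and intercepts below are invented for illustration and are merely chosen to land near the reported 8-10 day range.

```python
# Sketch: pseudonormalization time as the intersection of two fitted lines.
# Slopes/intercepts are invented; real ones would come from np.polyfit(age, adc, 1).
m_hie, b_hie = 0.06, 0.55   # HIE group: ADC (x10^-3 mm^2/s) rising with age (days)
m_ctl, b_ctl = 0.00, 1.10   # controls: roughly constant over the first weeks

# Solve m_hie * t + b_hie = m_ctl * t + b_ctl for t:
t_pseudo = (b_ctl - b_hie) / (m_hie - m_ctl)
print(f"pseudonormalization time ~ {t_pseudo:.1f} days")  # ~9.2 days
```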
abstract_id: PUBMED:24652007
Apparent diffusion coefficient histogram analysis of neonatal hypoxic-ischemic encephalopathy. Background: Diffusion-weighted imaging is a valuable tool in the assessment of the neonatal brain, and changes in diffusion are seen in normal development as well as in pathological states such as hypoxic-ischemic encephalopathy (HIE). Various methods of quantitative assessment of diffusion values have been reported. Global ischemic injury occurring during the time of rapid developmental changes in brain myelination can complicate the imaging diagnosis of neonatal HIE.
Objective: To compare a quantitative method of histographic analysis of brain apparent diffusion coefficient (ADC) maps to the qualitative interpretation of routine brain MR imaging studies. We correlate changes in diffusion values with gestational age in radiographically normal neonates, and we investigate the sensitivity of the method as a quantitative measure of hypoxic-ischemic encephalopathy.
Materials And Methods: We reviewed all brain MRI studies from the neonatal intensive care unit (NICU) at our university medical center over a 4-year period to identify cases that were radiographically normal (23 cases) and those with diffuse, global hypoxic-ischemic encephalopathy (12 cases). We histographically displayed ADC values of a single brain slice at the level of the basal ganglia and correlated peak (s-sDav) and lowest histogram values (s-sDlowest) with gestational age.
Results: Normative s-sDav values correlated significantly with gestational age and declined linearly through the neonatal period (r² = 0.477, P < 0.01). Six of 12 cases of known HIE demonstrated significantly lower s-sDav and s-sDlowest ADC values than were reflected in the normative distribution; several cases of HIE fell within a 95% confidence interval for normative studies, and one case demonstrated higher-than-normal s-sDav.
Conclusion: Single-slice histographic display of ADC values is a rapid and clinically feasible method of quantitative analysis of diffusion. In this study normative values derived from consecutive neonates without radiographic evidence of ischemic injury are correlated with gestational age, declining linearly throughout the perinatal period. This method does identify cases of HIE, though the overall sensitivity of the method is low.
abstract_id: PUBMED:28351039
Prognostic Value of the Apparent Diffusion Coefficient in Newborns with Hypoxic-Ischaemic Encephalopathy Treated with Therapeutic Hypothermia. Background: Apparent diffusion coefficient (ADC) quantification has been proven to be of prognostic value in term newborns with hypoxic-ischaemic encephalopathy (HIE) who were treated under normothermia.
Objectives: To evaluate the prognostic value of ADC in standardized brain regions in neonates with HIE who were treated with therapeutic hypothermia (TH).
Methods: This prospective cohort study included 54 term newborns who were admitted with HIE and treated with TH. All magnetic resonance imaging examinations were performed between days 4 and 6 of life, and ADC values were measured in 13 standardized regions of the brain. At 2 years of age we explored whether ADC values were related to composite outcomes (death or survival with abnormal neurodevelopment).
Results: The severity of HIE is inversely related to ADC values in different brain regions. We found that lower ADC values in the posterior limb of the internal capsule (PLIC), the thalami, the semioval centre, and frontal and parietal white matter were related to adverse outcomes. ADC values in the PLIC and thalami are good predictors of adverse outcomes (AUC 0.86 and 0.76).
Conclusions: Low ADC values in the PLIC, thalamus, semioval centre, and frontal and parietal white matter in full-term infants with HIE treated with TH were associated with a poor outcome.
abstract_id: PUBMED:37808174
Volumetric apparent diffusion coefficient (ADC) histogram analysis of the brain in paediatric patients with hypoxic ischaemic encephalopathy. Purpose: To evaluate the whole brain, hippocampus, thalamus, and lentiform nucleus by volumetric apparent diffusion coefficient (ADC) histogram analysis in paediatric patients with hypoxic-ischaemic encephalopathy (HIE).
Material And Methods: This retrospective study included 25 patients with HIE and 50 patients as the control group. Diffusion-weighted imaging was obtained at a b-value of 1000 s/mm². The histogram parameters of ADC values, including the mean, minimum, maximum, 5th, 10th, 25th, 50th, 75th, 90th, and 95th percentiles, as well as skewness, kurtosis, and variance, were determined. The intraclass correlation coefficient (ICC) was used to assess the inter-observer agreement.
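The histogram parameters listed here are straightforward to compute once the region-of-interest voxels are extracted; the sketch below does so with NumPy/SciPy. The voxel values are simulated assumptions, not patient data.

```python
# Sketch: volumetric ADC histogram parameters from a masked ROI (synthetic voxels).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
adc_voxels = rng.normal(1.1e-3, 0.15e-3, 5000)   # simulated ROI values, mm^2/s

percentiles = [5, 10, 25, 50, 75, 90, 95]
params = {
    "min": adc_voxels.min(),
    "mean": adc_voxels.mean(),
    "max": adc_voxels.max(),
    **{f"p{q}": np.percentile(adc_voxels, q) for q in percentiles},
    "skewness": stats.skew(adc_voxels),
    "kurtosis": stats.kurtosis(adc_voxels),
    "variance": adc_voxels.var(ddof=1),
}
for name, value in params.items():
    print(f"{name:>8}: {value:.3e}")
```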
Results: ADCmin, ADCmean, and ADCmax, as well as the 5th, 10th, 25th, 50th, 75th, 90th, and 95th percentiles of ADC values for the HIE group were all lower than those of the control group (p < 0.001) in the volumetric histogram analysis of the hippocampus, thalamus, and lentiform nucleus. In the whole-brain histogram analysis, ADCmin and the 50th and 75th percentiles of ADC values did not differ significantly, while the other parameters were lower in the HIE group. The ROC curve revealed that the ADC histogram parameters of the hippocampus provided the most accurate results for the diagnosis of HIE. The area under the curve (AUC) of the 95th percentile of ADC values was the highest (AUC = 0.915; cut-off 1.262 × 10⁻³ mm²/s; sensitivity 88% and specificity 84%).
Conclusions: Volumetric ADC histogram analysis of the whole brain, hippocampus, thalamus, and lentiform nucleus at a b-value of 1000 s/mm² can serve as an imaging marker for determining HIE.
Answer: Yes, apparent diffusion coefficient (ADC) measurements can predict outcomes in children with neonatal hypoxic-ischemic encephalopathy (HIE). Several studies have demonstrated the prognostic value of ADC in assessing the severity of brain damage and predicting clinical outcomes in neonates with HIE.
One study found that ADC values in the occipital and parietal grey matter, as well as the precentral area, were significantly different between drowned children (a form of hypoxic-ischemic event) and controls, suggesting that ADC reduction may be useful to predict poor outcomes in these cases (PUBMED:28343728).
Another study indicated that ADC values measured in the thalamus during the first week of life can predict neurological outcomes in term neonates with HIE. The study also found that ADC values in the centrum semiovale during the second week can be used to predict neurodevelopmental outcomes (PUBMED:37874257).
Further research showed that ADC values in normal-appearing basal ganglia and brain stem correlated with outcomes, with low ADC values associated with abnormal/adverse outcomes and higher ADC values with normal/favorable outcomes (PUBMED:18842756).
In addition, ADC values in the striatum and thalamus were found to significantly predict dystonia in neonates who underwent therapeutic hypothermia for HIE, with specific cutoff values providing high specificity and sensitivity for dystonia (PUBMED:33677143).
Another study compared the ability of diffusion-weighted imaging (DWI) summation scores and ADC maps to predict neurodevelopmental outcomes at 2 years of age in newborns with HIE. The study concluded that DWI summation scores showed the highest prognostic value, and regional ADC measurements in specific subregions were highly prognostic for specific neurodevelopmental outcomes (PUBMED:24715056).
Overall, these studies support the use of ADC measurements as a predictive tool for assessing the severity of brain injury and predicting long-term neurodevelopmental outcomes in neonates with HIE. |
Instruction: Asthma deaths in children in New South Wales 2004-2013: Could we have done more?
Abstracts:
abstract_id: PUBMED:26135337
Asthma deaths in children in New South Wales 2004-2013: Could we have done more? Aims: The aim of this study was to characterise the deaths of children from asthma in New South Wales (NSW) over the last 10 years and ascertain whether there were modifiable factors that could have prevented the deaths.
Methods: The hospital medical records, coronial reports, immunisation records and all relevant correspondence from general practitioners, medical specialists and hospitals were reviewed for children who died with asthma in the 10 years (2004-2013).
Results: In 10 years, there were 20 deaths (0-7 per year) with a male predominance (70%) occurring in children aged 4-17 years. Sixteen (80%) had persistent asthma and 4 (20%) had intermittent asthma. The majority (55%) had been hospitalised for asthma in the preceding 12 months, 25% in the preceding 6 weeks. The majority (55%) were aged 10-14 years. Ninety percent were atopic. Psychosocial issues were identified in the majority (55%) of families. Forty percent had a child protection history. Seventy-five percent had consulted a general practitioner in the year before their death, 45% had a current written asthma action plan and 50% had never seen a paediatrician in relation to their asthma. Of the 16 children at school, the schools were aware of the asthma in 14 (88%) cases, but only half had copies of written asthma plans.
Conclusions: Improved communication and oversight between health-care providers, education and community protection agencies could reduce mortality from asthma in children.
abstract_id: PUBMED:1858077
Prevalence of asthma among 12 year old children in New Zealand and South Wales: a comparative survey. A survey of 12 year old schoolchildren was carried out in New Zealand and South Wales, the same questionnaire and exercise provocation test being used. The prevalence of a history of asthma at any time was higher in New Zealand (147/873, 17%) than in South Wales (116/965, 12%). The New Zealand children were also more likely than the Welsh children to have a history of "wheeze ever" (27% versus 22%), and wheeze brought on by running (15% versus 10.5%). The sex ratio of asthmatic and wheezy children was very similar in the two countries. A history of hospital admission for chest trouble was twice as common in New Zealand as in South Wales. An exercise test produced a fall in peak expiratory flow rate of 15% or more in more New Zealand children (12.2%) than Welsh children (7.7%). These results suggest that the prevalence of childhood asthma is higher in New Zealand than in South Wales.
abstract_id: PUBMED:10373811
Hospital admission and mortality differentials of asthma between urban and rural populations in New South Wales. It remains unclear whether there are any differentials in hospital admission and mortality rates of asthma between urban and rural populations. An observational study was conducted, based on patient hospital records, to examine the distribution of asthma admissions and mortality in New South Wales. Data on all reported cases of asthma were obtained from New South Wales hospitals between 1989 and 1994. Information on deaths of asthma was collected between 1983 and 1992. The hospital admission rates of asthma varied from 4.8 per 1000 in 1990 to 5.4 per 1000 in 1992 for rural population, and from 3.0 per 1000 in 1991 to 3.4 per 1000 in 1992 for urban population. The hospital admission rates were 51.2-69.1% higher for rural residents than urban dwellers. The mortality rates of asthma ranged from 4.8 per 100,000 in 1983 to 8.0 per 100,000 in 1985 for rural population, and from 3.8 per 100,000 in 1983 to 6.0 per 100,000 in 1989 for urban population. The mortality rates of asthma were 3.62-42.85% higher for rural residents than urban dwellers. These results indicate that the non-age-adjusted hospital admission and mortality rates of asthma were considerably higher in rural populations than in urban populations in New South Wales.
abstract_id: PUBMED:1943931
The cost of asthma in New South Wales. Objective: To determine the economic cost of asthma to the New South Wales community.
Design: Direct costs (both health-care and non-health-care) plus indirect costs (loss of productivity) were estimated from various sources to assess retrospectively the dollar costs of asthma. Intangible costs (such as quality of life) were not included.
Setting: Estimates of costs were made at all levels of medical care of asthma patients, including inpatient and outpatient hospitalisations, emergency department visits, and visits to general practitioners and specialist physicians, plus costs of pharmaceuticals, nebulisers and home peak-flow monitoring devices. The cost of time lost by the patient attending for medical visits and loss of productivity due to absence from employment as a result of asthma were also included.
Results: The total cost of asthma in New South Wales was $209 million in 1989. This was made up of $142 million in direct health-care costs, $19 million in direct non-health-care costs and $48 million in indirect costs.
Conclusion: Although we believe that our estimate is an underestimate of the true dollar cost of this disease to the community, it represents $769 per asthmatic person per year, assuming a current prevalence rate for asthma in New South Wales of 6%. The cost effectiveness of any new treatment of asthma should be estimated to ensure that the economic cost to the community does not rise unnecessarily.
abstract_id: PUBMED:1603009
Air quality and respiratory disease in Newcastle, New South Wales. Objective: To investigate respiratory illnesses in the Newcastle region, their change over time, and their geographic relationship to industrialised areas.
Design: We analysed admissions to public hospitals by postcode area in the Newcastle region, for all causes and for all the various respiratory causes, for the years 1979-1988. Comparisons were made between the State of New South Wales and the Newcastle area, and between geographic areas within Newcastle. Changes over the 10-year period were noted.
Results: For both all causes and respiratory causes, admission rates to Newcastle hospitals, 1979-1988, were significantly lower than those for the rest of New South Wales in 1986. There was a correlation between living in the industrial part of the city and hospital admission for all causes and respiratory causes. There was also a correlation between mean disposable family income and hospital admissions, with those areas with the higher incomes having lower admission rates. Over the 10 years studied there was a statistically significant decline in admissions for respiratory causes, both in absolute terms and after controlling for changes in admissions for all causes. In children aged 0-14 years a significant increase in admissions for asthma occurred between 1979 and 1988, which could not be explained by diagnostic shift.
Conclusions: On the basis of hospital statistics, the members of the Newcastle population seem little different from those in the remainder of New South Wales. From 1979-1988, the efforts by industry, with the support of the community, to reduce industrial pollution have been accompanied by a reduction in hospital admission rates for respiratory diseases in general and for chronic obstructive lung disease in older people. Other contributing factors include reduced smoking rates and improved medical management. Correlations between geographic location and respiratory admission rates may be a manifestation of social class rather than poor air quality, although a contribution from the latter cannot be discounted. A concomitant rise in asthma admission rates in children aged 0-14 is likely to be unrelated to any change in air quality.
abstract_id: PUBMED:15704699
The self-reported health status of prisoners in New South Wales. Objective: To describe the physical health of the New South Wales prisoner population.
Design: Cross-sectional random sample of adult men and women prisoners.
Setting: 29 New South Wales correctional centres (27 male and two female).
Participants: 747 men and 167 women.
Main Results: Despite the comparatively young population, 81% of women and 65% of men had at least one chronic health condition; 41% of men and 59% of women reported multiple health problems. The most common conditions were back problems, poor eyesight, arthritis, high blood pressure and asthma. Chronic conditions were more prevalent among women prisoners. Thirty-seven per cent of women and 28% of men rated their health as either 'poor' or 'fair' compared with 16% of women and 15% of men in the general NSW community. Psychiatric medication was more commonly prescribed to women than men (25% vs. 13%; p < 0.001). Similarly, methadone maintenance was more common among women than men (39% vs. 13%; p < 0.001).
Conclusion: Men and women prisoners in NSW have multiple chronic health conditions. While not desirable, incarceration presents an opportunity to initiate treatment to improve the health of this disadvantaged group.
abstract_id: PUBMED:7609683
Prevalence and severity of childhood asthma and allergic sensitisation in seven climatic regions of New South Wales. Objective: To compare the prevalence and severity of asthma and of allergic sensitisation in children in different regions. We hypothesised that regions with different standardised hospital admission rates would have different prevalences of childhood asthma and that diverse climates would result in a range of sensitisations to different allergens.
Design And Setting: We studied large random population samples of children in seven regions in New South Wales (NSW) in 1991-1993. Hospitalisation rates were obtained from NSW Department of Health data.
Participants: 6394 children aged 8-11 years.
Outcome Measures: History of respiratory symptoms by self-administered questionnaire; airway hyperresponsiveness by histamine inhalation test; and sensitisation to allergens by skin-prick tests.
Results: Children in all regions had a high prevalence of recent wheeze (22%-27%), of diagnosed asthma (24%-38%) and of use of asthma medications (22%-30%), but no region was consistently higher or lower for all measurements. The prevalence of current asthma in children living in three coastal regions (where sensitisation to house-dust mites was high) and in the far west (where sensitisation to alternaria was high) was 12%-13%, which was significantly higher than the prevalence of 7%-10% in children living in three inland regions (where sensitisation to these allergens was lower) (P < 0.01).
Conclusions: We found significant variations in the prevalence and severity of childhood asthma in NSW. The prevalence of hospitalisations, diagnosed asthma, recent symptoms and medication use may relate to different regional diagnostic patterns, whereas current asthma prevalence may relate to different levels of allergic sensitisation.
abstract_id: PUBMED:10677124
The effect of parental smoking on the presence of wheeze or airway hyper-responsiveness in New South Wales school children. Background And Aims: To assess accurately the effect of parental smoking on the respiratory health of New South Wales (NSW) school children, we obtained a large data set by pooling data from seven cross-sectional studies conducted in NSW between 1991 and 1993.
Methods: A random sample of 6394 children age eight to 11 years was studied. Respiratory symptoms, family history of asthma and parental smoking history were measured by questionnaire, atopy by skin prick test and airway hyper-responsiveness (AHR) by histamine inhalation test.
Results: In total, 58.3% of children had at least one parent who smoked; 38.5% were exposed to maternal smoking. After adjusting for potential confounders, such as atopy, parental history of asthma and bronchitis in the first two years, children who were exposed to maternal smoking had a significantly increased risk of recent wheeze but not of AHR (odds ratios 1.33; 95% CI: 1.2-1.5 and 1.00; 95% CI: 0.9-1.2).
Conclusions: The positive association with wheeze and the lack of an association with AHR suggests that exposure to parental smoking leads to wheezing, but does not increase airway responsiveness.
abstract_id: PUBMED:35851698
COVID-19 in New South Wales children during 2021: severity and clinical spectrum. Objectives: To describe the severity and clinical spectrum of coronavirus disease 2019 (COVID-19) in children during the 2021 New South Wales outbreak of the Delta variant of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2).
Design, Setting: Prospective cohort study in three metropolitan Sydney local health districts, 1 June - 31 October 2021.
Participants: Children under 16 years of age with positive SARS-CoV-2 nucleic acid test results admitted to hospital or managed by the Sydney Children's Hospital Network (SCHN) virtual care team.
Main Outcome Measures: Age-specific SARS-CoV-2 infection frequency, overall and separately for SCHN virtual and hospital patients; rates of medical and social reason admissions, intensive care admissions, and paediatric inflammatory multisystem syndrome temporally associated with SARS-CoV-2 per 100 SARS-CoV-2 infections; demographic and clinical factors that influenced likelihood of hospital admission.
Results: A total of 17 474 SARS-CoV-2 infections in children under 16 were recorded in NSW, of whom 11 985 (68.6%) received SCHN-coordinated care, including 459 admitted to SCHN hospitals: 165 for medical reasons (1.38 [95% CI, 1.17-1.59] per 100 infections), including 15 admitted to intensive care, and 294 (under 18 years of age) for social reasons (2.45 [95% CI, 2.18-2.73] per 100 infections). In an analysis that included all children admitted to hospital and a random sample of those managed by the virtual team, having another medical condition (adjusted odds ratio [aOR], 7.42; 95% CI, 3.08-19.3) was associated with increased likelihood of medical admission; in univariate analyses, non-asthmatic chronic respiratory disease was associated with greater (OR, 9.21; 95% CI, 1.61-174) and asthma/viral induced wheeze with lower likelihood of admission (OR, 0.38; 95% CI, 0.18-0.78). The likelihood of admission for medical reasons declined from infancy to 5-11 years, but rose again for those aged 12-15 years. Sex and Indigenous status did not influence the likelihood of admission.
Conclusion: Most SARS-CoV-2 infections (Delta variant) in children were asymptomatic or associated with mild disease. Hospitalisation was relatively infrequent, and most common for infants, adolescents, and children with other medical conditions. More children were hospitalised for social than for medical reasons.
abstract_id: PUBMED:11022988
Asthma is more prevalent in rural New South Wales than metropolitan Victoria, Australia. Objective: The aim of this study was to compare the prevalence of adult respiratory symptoms and conditions in a rural setting with a metropolitan setting.
Methodology: We used cross-sectional population surveys of respiratory health using the European Respiratory Health Survey screening questionnaire. A random sample of 4455 eligible young adults aged 20-44 years, from electoral rolls in south-eastern metropolitan Melbourne, Victoria, and 4521 from rural south-western New South Wales were surveyed.
Results: Response rates of 72% (n=3194) and 69% (n=3121) were achieved in Melbourne and the Riverina, respectively. Respondents from the Riverina reported significantly higher prevalences of nocturnal dyspnoea (P<0.01), chronic bronchitis (P=0.03), an asthma attack in the previous 12 months (P<0.001), ever having had asthma (P<0.001) and doctor-diagnosed asthma (P<0.001) compared to those from Melbourne. However, among those with 'asthma attacks in the last 12 months', Melbourne respondents experienced a higher frequency of attacks (P<0.05). Riverina respondents reported a higher prevalence of smoking (P<0.05) and smoked more cigarettes on average (P<0.001) than Melbourne respondents. However, annoyance from air pollution was higher in Melbourne than in the Riverina.
Conclusions: These results suggest that asthma is more prevalent in rural southern New South Wales than in Melbourne but follows a different pattern of exacerbations.
Answer: The study on asthma deaths in children in New South Wales (NSW) from 2004 to 2013 aimed to characterize these deaths and determine if there were modifiable factors that could have prevented them. The study reviewed medical records, coronial reports, immunization records, and correspondence from healthcare providers for children who died with asthma during this period. The findings revealed that there were 20 deaths, with a male predominance (70%) and the majority occurring in children aged 4-17 years. Most of the children had persistent asthma (80%), and a significant portion had been hospitalized for asthma in the preceding 12 months (55%). Psychosocial issues were identified in the majority of families (55%), and 40% had a child protection history. Despite 75% having consulted a general practitioner in the year before their death, only 45% had a current written asthma action plan, and 50% had never seen a paediatrician in relation to their asthma. Schools were aware of the asthma in 88% of cases, but only half had copies of written asthma plans.
The study concluded that improved communication and oversight between healthcare providers, education, and community protection agencies could potentially reduce mortality from asthma in children (PUBMED:26135337). This suggests that there were indeed modifiable factors, such as better management plans, more consistent follow-up with healthcare professionals, and enhanced communication between different agencies, that could have been addressed to potentially prevent some of these deaths. |
Instruction: Does Age Influence the Risk of Incident Knee Osteoarthritis After a Traumatic Anterior Cruciate Ligament Injury?
Abstracts:
abstract_id: PUBMED:27268239
Does Age Influence the Risk of Incident Knee Osteoarthritis After a Traumatic Anterior Cruciate Ligament Injury? Background: The development of radiographic knee osteoarthritis (OA) after an anterior cruciate ligament (ACL) rupture has long been studied and is well documented in the adolescent population. However, similarly exhaustive investigations have not been conducted in mature-aged athletes or in older populations.
Purpose: To identify whether an older adult population had an increased risk of incident radiographic knee OA after a traumatic knee injury compared with a young adult population.
Study Design: Cohort study; Level of evidence, 3.
Methods: Patients with ACL ruptures who underwent primary reconstruction were enrolled in a prospective, longitudinal single-center study over 15 years. The adult cohort was defined as participants aged ≥35 years who had a knee injury resulting in an ACL tear, the adolescent-young cohort suffered similar knee injuries and were aged ≤25 years, and a third cohort of participants aged 26 to 34 years who suffered a knee injury was included to identify the existence of any age-related dose-response relationship for the onset of radiographic knee OA. A Kaplan-Meier survival analysis was employed to determine the occurrence of incident radiographic OA across the study populations at 2, 5, 10, and 15 years after reconstruction. Significance at each time point was analyzed using chi-square tests.
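A minimal sketch of this analysis pattern, assuming the lifelines package and synthetic event times: Kaplan-Meier curves per cohort give OA-free proportions at fixed time points, and a chi-square test compares incident OA between cohorts at one of those points. Cohort sizes mirror the study, but all event data are simulated.

```python
# Sketch: Kaplan-Meier OA-free survival per cohort + chi-square at 10 years.
import numpy as np
from lifelines import KaplanMeierFitter
from scipy.stats import chi2_contingency

rng = np.random.default_rng(3)

def cohort(n, scale):
    """Simulated years to radiographic OA, censored at 15 years."""
    t = rng.exponential(scale, n)
    event = (t <= 15).astype(int)   # 1 = reached the OA endpoint
    return np.minimum(t, 15), event

t_adult, e_adult = cohort(32, scale=12)    # hypothetical: earlier onset
t_young, e_young = cohort(112, scale=20)

kmf = KaplanMeierFitter()
for name, (t, e) in [("adult", (t_adult, e_adult)),
                     ("adolescent-young", (t_young, e_young))]:
    kmf.fit(t, event_observed=e, label=name)
    oa_free = float(kmf.survival_function_at_times(10).iloc[0])
    print(f"{name}: OA-free at 10 years = {oa_free:.2f}")

# Chi-square on OA status by 10 years (simplified: assumes follow-up to 10y).
adult_oa = int(((t_adult <= 10) & (e_adult == 1)).sum())
young_oa = int(((t_young <= 10) & (e_young == 1)).sum())
table = [[adult_oa, 32 - adult_oa], [young_oa, 112 - young_oa]]
chi2_stat, p, _, _ = chi2_contingency(table)
print(f"chi-square at 10 years: chi2 = {chi2_stat:.2f}, p = {p:.4f}")
```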
Results: A total of 215 patients, including 112 adolescents (mean age, 20.4 years; 50.9% female), 71 patients aged 26 to 34 years (mean age, 29.2 years; 42.3% female), and 32 adults (mean age, 40.2 years; 59.4% female), were assessed for International Knee Documentation Committee (IKDC) grading on knee radiographs. At 10 and 15 years after reconstruction, respectively, 53.0% and 77.8% of adults had an IKDC grade of B or greater, compared with 17.7% and 61.6% of the adolescent-young cohort. Chi-square testing found that adults developed OA earlier than adolescents at 5 and 10 years after reconstruction (P = .017 and P < .0001, respectively). However, survival analysis did not demonstrate that adults were more likely to develop radiographic knee OA at 15 years after reconstruction compared with the adolescent-young cohort (P = .4).
Conclusion: The age at which an ACL injury is sustained does not appear to influence the rate of incident radiographic knee OA, although mature-aged athletes are likely to arrive at the OA endpoint sooner.
abstract_id: PUBMED:36349350
A review of Risk Factors for Post-traumatic hip and knee osteoarthritis following musculoskeletal injuries other than anterior cruciate ligament rupture. Post-traumatic osteoarthritis (PTOA) is a common form of osteoarthritis that might occur after any joint trauma. Most PTOA publications focus on anterior cruciate ligament (ACL) injuries. However, many other traumatic injuries are associated with PTOA, not only of the knee but also of the hip joint. We aim to identify and summarize the existing literature on musculoskeletal injuries other than ACL rupture that are associated with knee and hip PTOA, and the risk factors that identify those with a worse prognosis. Despite the narrative nature of this review, a systematic search for published studies in the last twenty years regarding the most relevant injuries associated with a higher risk of PTOA and the associated risk factors for OA was conducted. This review identified the six most relevant injuries associated with knee or hip PTOA. We describe the incidence, risk factors for the injury and risk factors for PTOA of each. Meniscal injuries, proximal tibial fractures, patellar dislocations, acetabular fractures, femoral fractures and hip dislocations are all discussed in this review.
abstract_id: PUBMED:34348184
Gait risk factors for disease progression differ between non-traumatic and post-traumatic knee osteoarthritis. Objective: To examine if relationships between knee osteoarthritis (OA) progression with knee moments and muscle activation during gait vary between patients with non-traumatic and post-traumatic knee OA.
Design: This longitudinal study included participants with non-traumatic (n = 17) and post-traumatic (n = 18) knee OA; the latter group had a previous anterior cruciate ligament rupture. Motion capture cameras, force plates, and surface electromyography measured knee moments and lower extremity muscle activation during gait. Cartilage volume changes were determined over 2 years using magnetic resonance imaging in four regions: medial and lateral plateau and condyle. Linear regression analysis examined relationships between cartilage change and gait metrics (moments, muscle activation), group, and their interaction.
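The group-by-gait interaction model described here can be sketched with the statsmodels formula API; the interaction coefficient tests whether the slope of cartilage change on a gait metric differs between OA subtypes. Variable names and data below are invented for illustration.

```python
# Sketch: does the gait-metric slope differ between OA subtypes? (synthetic data)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 35
df = pd.DataFrame({
    "kam": rng.normal(3.0, 0.8, n),   # e.g. peak knee adduction moment
    "group": rng.choice(["non_traumatic", "post_traumatic"], n),
})
# Simulated outcome: the metric matters only in the non-traumatic group.
df["cartilage_change"] = (
    -1.3 * df["kam"] * (df["group"] == "non_traumatic") + rng.normal(0, 1.0, n)
)

# 'kam * C(group)' expands to main effects plus the interaction term,
# whose coefficient tests for group-specific slopes.
model = smf.ols("cartilage_change ~ kam * C(group)", data=df).fit()
print(model.summary().tables[1])
```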
Results: Measures from knee adduction and rotation moments were related to lateral condyle cartilage loss in both groups, and knee adduction moment to lateral plateau cartilage loss in the non-traumatic group only [β = -1.336, 95% confidence intervals (CI) = -2.653 to -0.019]. Generally, lower levels of stance phase muscle activation were related to greater cartilage loss. The relationship between cartilage loss in some regions with muscle activation characteristics varied between non-traumatic and post-traumatic groups including for: lateral hamstring (lateral condyle β = 0.128, 95%CI = 0.003 to 0.253; medial plateau β = 0.199, 95%CI = 0.059 to 0.339), rectus femoris (medial condyle β = -0.267, 95%CI = -0.460 to -0.073), and medial hamstrings (medial plateau; β = -0.146, 95%CI = -0.244 to -0.048).
Conclusion: Findings indicate that gait risk factors for OA progression may vary between patients with non-traumatic and post-traumatic knee OA. These OA subtypes should be considered in studies that investigate gait metrics as risk factors for OA progression.
abstract_id: PUBMED:36455966
Risk factors for knee osteoarthritis after traumatic knee injury: a systematic review and meta-analysis of randomised controlled trials and cohort studies for the OPTIKNEE Consensus. Objective: To identify and quantify potential risk factors for osteoarthritis (OA) following traumatic knee injury.
Design: Systematic review and meta-analyses that estimated the odds of OA for individual risk factors assessed in more than four studies using random-effects models. Remaining risk factors underwent semiquantitative synthesis. The modified GRADE (Grading of Recommendations Assessment, Development and Evaluation) approach for prognostic factors guided the assessment.
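As a sketch of the random-effects pooling named here, the following implements the DerSimonian-Laird estimator on log odds ratios. The per-study values are placeholders, not the review's data.

```python
# Sketch: DerSimonian-Laird random-effects pooling of odds ratios (placeholder data).
import numpy as np

# (log OR, variance of log OR) per study -- hypothetical values.
log_or = np.log(np.array([2.1, 1.6, 3.0, 1.9]))
var = np.array([0.12, 0.08, 0.25, 0.10])

w_fixed = 1 / var
q = np.sum(w_fixed * (log_or - np.average(log_or, weights=w_fixed)) ** 2)
df = len(log_or) - 1
c = w_fixed.sum() - (w_fixed ** 2).sum() / w_fixed.sum()
tau2 = max(0.0, (q - df) / c)          # between-study variance

w_re = 1 / (var + tau2)                # random-effects weights
pooled = np.average(log_or, weights=w_re)
se = np.sqrt(1 / w_re.sum())
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled OR = {np.exp(pooled):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```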
Data Sources: MEDLINE, EMBASE, CENTRAL, SPORTDiscus, CINAHL searched from inception to 2009-2021.
Eligibility: Randomised controlled trials and cohort studies assessing risk factors for symptomatic or structural OA in persons with a traumatic knee injury, mean injury age ≤30 years and minimum 2-year follow-up.
Results: Across 66 included studies, 81 unique potential risk factors were identified. High risk of bias due to attrition or confounding was present in 64% and 49% of studies, respectively. Ten risk factors for structural OA underwent meta-analysis (sex, rehabilitation for anterior cruciate ligament (ACL) tear, ACL reconstruction (ACLR), ACLR age, ACLR body mass index, ACLR graft source, ACLR graft augmentation, ACLR+cartilage injury, ACLR+partial meniscectomy, ACLR+total medial meniscectomy). Very-low certainty evidence suggests increased odds of structural OA related to ACLR+cartilage injury (OR=2.31; 95% CI 1.35 to 3.94), ACLR+partial meniscectomy (OR=1.87; 1.45 to 2.42) and ACLR+total medial meniscectomy (OR=3.14; 2.20 to 4.48). Semiquantitative syntheses identified moderate-certainty evidence that cruciate ligament, collateral ligament, meniscal, chondral, patellar/tibiofemoral dislocation, fracture and multistructure injuries increase the odds of symptomatic OA.
Conclusion: Moderate-certainty evidence suggests that various single and multistructure knee injuries (beyond ACL tears) increase the odds of symptomatic OA. Risk factor heterogeneity, high risk of bias, and inconsistency in risk factors and OA definition make identifying treatment targets for preventing post-traumatic knee OA challenging.
abstract_id: PUBMED:36379676
OPTIKNEE 2022: consensus recommendations to optimise knee health after traumatic knee injury to prevent osteoarthritis. The goal of the OPTIKNEE consensus is to improve knee and overall health, to prevent osteoarthritis (OA) after a traumatic knee injury. The consensus followed a seven-step hybrid process. Expert groups conducted 7 systematic reviews to synthesise the current evidence and inform recommendations on the burden of knee injuries; risk factors for post-traumatic knee OA; rehabilitation to prevent post-traumatic knee OA; and patient-reported outcomes, muscle function and functional performance tests to monitor people at risk of post-traumatic knee OA. Draft consensus definitions, and clinical and research recommendations, were generated, iteratively refined, and discussed at six tri-weekly, 2-hour videoconferencing meetings. After each meeting, items were finalised before the expert group (n=36) rated the level of appropriateness for each using a 9-point Likert scale, and recorded dissenting viewpoints through an anonymous online survey. Seven definitions, 8 clinical recommendations (who to target, what to target and when, rehabilitation approach and interventions, what outcomes to monitor and how) and 6 research recommendations (research priorities, study design considerations, what outcomes to monitor and how) were voted on. All definitions and recommendations were rated appropriate (median appropriateness scores of 7-9) except for two subcomponents of one clinical recommendation, which were rated uncertain (median appropriateness score of 4.5-5.5). Varying levels of evidence supported each recommendation. Clinicians, patients, researchers and other stakeholders may use the definitions and recommendations to advocate for, guide, develop, test and implement person-centred evidence-based rehabilitation programmes following traumatic knee injury, and facilitate data synthesis to reduce the burden of post-traumatic knee OA.
abstract_id: PUBMED:31727431
Association of chemokine expression in anterior cruciate ligament deficient knee with patient characteristics: Implications for post-traumatic osteoarthritis. Background: Stromal cell-derived factor-1a (SDF-1α) and high mobility group box chromosomal protein 1 (HMGB1) are chemokines that can drive post-traumatic osteoarthritis (PTOA) induced by anterior cruciate ligament (ACL) injury. However, the influence of patient characteristics on expression of those chemokines remains unclear. Our aim was to determine the relationship between chemokine expression in synovial fluid (SF) of the ACL-deficient (ACL-D) knee and patient characteristics including time from injury, sex, and age.
Methods: SF samples were collected immediately prior to the first-time ACL reconstruction (ACLR) from 82 patients. Expression of SDF-1α and HMGB1 was measured with human-specific solid phase sandwich enzyme-linked immunosorbent assays. The expression levels between groups divided by time from injury, or age, or sex was compared using Student's t-test. The association of SDF-1α or HMGB1 levels with those variables was determined using regression analysis and Pearson product-moment correlation coefficient.
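The regression and correlation step described here is a standard Pearson analysis; a minimal sketch with SciPy, on simulated SDF-1α levels and times from injury, is shown below. Units and effect sizes are invented.

```python
# Sketch: Pearson correlation between SF SDF-1a and time from injury (synthetic).
import numpy as np
from scipy.stats import pearsonr, linregress

rng = np.random.default_rng(5)
months_from_injury = rng.uniform(1, 24, 82)          # n = 82, as in the study
sdf1a = 300 - 4.0 * months_from_injury + rng.normal(0, 40, 82)  # made-up pg/mL

r, p = pearsonr(months_from_injury, sdf1a)
fit = linregress(months_from_injury, sdf1a)
print(f"r = {r:.3f}, p = {p:.4g}, slope = {fit.slope:.2f} per month")
```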
Results: Regression and correlation analysis indicated significant correlation between SDF-1α expression and time from injury in the cohort (r = -0.266, P = 0.016, n = 82) and in females (r = -0.386, P = 0.024, n = 34). Significant correlation was also observed between SDF-1α expression and age in the cohort (r = -0.224, P = 0.043, n = 82) and in males (r = -0.289, P = 0.046, n = 48). No significant correlation between HMGB1 expression and patient characteristics was detected.
Conclusions: SDF-1α rather than HMGB1 might serve as a protein marker for monitoring the development of PTOA in the ACL-D knee, especially in female patients.
abstract_id: PUBMED:36072956
Post-traumatic knee osteoarthritis; the role of inflammation and hemarthrosis on disease progression. Knee injuries such as anterior cruciate ligament ruptures and meniscal injury are common and are most frequently sustained by young and active individuals. Knee injuries will lead to post-traumatic osteoarthritis (PTOA) in 25-50% of patients. Mechanical processes were historically believed to cause cartilage breakdown in PTOA patients, but there is increasing evidence suggesting a key role for inflammation in PTOA development. Inflammation in PTOA might be aggravated by hemarthrosis, which frequently occurs in injured knees. Whereas mechanical symptoms (joint instability and locking of the knee) can be successfully treated by surgery, there is still an unmet need for anti-inflammatory therapies that prevent PTOA progression. In order to develop anti-inflammatory therapies for PTOA, more knowledge about the exact pathophysiological mechanisms and the exact course of post-traumatic inflammation is needed to determine possible targets and the timing of future therapies.
abstract_id: PUBMED:30887068
Post-traumatic osteoarthritis diagnosed within 5 years following ACL reconstruction. Purpose: The purpose was to calculate the incidence of osteoarthritis in individuals following Anterior Cruciate Ligament Reconstruction (ACLR) in a large, national database and to examine the risk factors associated with OA development.
Methods: A commercially available insurance database was queried to identify new diagnoses of knee OA in patients with ACLR. The cumulative incidence of knee OA diagnoses in patients after ACLR was calculated and stratified by time from reconstruction. Odds ratios were calculated using logistic regression to describe factors associated with a new OA diagnosis including age, sex, BMI, meniscus involvement, osteochondral graft use, and tobacco use.
Results: A total of 10,565 patients with ACLR were identified that did not have an existing diagnosis of OA, 517 of whom had a documented new diagnosis of knee OA within 5 years after ACL reconstruction. When stratified by follow-up time point, the cumulative incidence of a new OA diagnosis was 2.3% within 6 months, 4.1% within 1 year, 6.2% within 2 years, 8.4% within 3 years, 10.4% within 4 years, and 12.3% within 5 years. Risk factors for new OA diagnoses were age (OR 2.44, P < 0.001), sex (OR 1.2, P = 0.002), obesity (OR 1.4, P < 0.001), tobacco use (OR = 1.3, P = 0.001), and meniscal involvement (OR 1.2, P = 0.005).
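A sketch of the type of analysis reported here: a logistic regression on a binary new-OA indicator, with odds ratios obtained by exponentiating the coefficients. The cohort below is simulated with arbitrary effect sizes and is not the insurance-database data.

```python
# Sketch: odds ratios for a new-OA diagnosis from logistic regression (simulated).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 2000
df = pd.DataFrame({
    "age_over_35": rng.integers(0, 2, n),
    "obese": rng.integers(0, 2, n),
    "tobacco": rng.integers(0, 2, n),
    "meniscus": rng.integers(0, 2, n),
})
# Simulated outcome with arbitrary effect sizes.
lin = (-2.5 + 0.9 * df["age_over_35"] + 0.35 * df["obese"]
       + 0.25 * df["tobacco"] + 0.2 * df["meniscus"])
df["new_oa"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

X = sm.add_constant(df[["age_over_35", "obese", "tobacco", "meniscus"]])
fit = sm.Logit(df["new_oa"], X).fit(disp=False)
print(np.exp(fit.params).round(2))      # odds ratios
print(np.exp(fit.conf_int()).round(2))  # 95% confidence intervals
```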
Conclusion: Approximately 12% of patients presenting within 5 years following ACLR are diagnosed with OA. Demographic factors associated with an increased risk of a diagnosis of PTOA within 5 years after ACLR are age, sex, BMI, tobacco use, and concomitant meniscal surgery. Clinicians should be cognizant of these risk factors to develop risk profiles in patients with the common goal to achieve optimal long-term outcomes after ACLR.
Level Of Evidence: III.
abstract_id: PUBMED:32209130
Post-traumatic osteoarthritis following ACL injury. Post-traumatic osteoarthritis (PTOA) develops after joint injury. Specifically, patients with anterior cruciate ligament (ACL) injury have a high risk of developing PTOA. In this review, we outline the incidence of ACL injury that progresses to PTOA, analyze the role of ACL reconstruction in preventing PTOA, suggest possible mechanisms thought to be responsible for PTOA, evaluate current diagnostic methods for detecting early OA, and discuss potential interventions to combat PTOA. We also identify important directions for future research. Although much work has been done, the incidence of PTOA among patients with a history of ACL injury remains high due to the complexity of ACL injury progression to PTOA, the lack of sensitive and easily accessible diagnostic methods to detect OA development, and the limitations of current treatments. A number of factors are thought to be involved in the underlying mechanism, including structural, biological, mechanical, and neuromuscular factors. Since there is a clear "start point" for PTOA, early detection and intervention are of great importance. Currently, imaging modalities and specific biomarkers allow early detection of PTOA; however, none of them is both sensitive and easily accessible. After ACL injury, many patients undergo surgical reconstruction of the ACL to restore joint stability and prevent excessive loading. However, convincing evidence is still lacking for the superiority of ACL reconstruction over conservative management in terms of the incidence of PTOA. As for non-surgical treatments such as anti-cytokine and chemokine interventions, most have been investigated in animal studies and have not been applied to humans. A complete understanding of mechanisms, allowing patients to be stratified into subgroups on the basis of risk factors, is critical, and the improvement of standardized and quantitative assessment techniques is necessary to guide intervention. Moreover, treatments targeted toward different pathogenic pathways may be crucial to the management of PTOA in the future.
abstract_id: PUBMED:31039374
Decreased synovial fluid pituitary adenylate cyclase-activating polypeptide (PACAP) levels may reflect disease severity in post-traumatic knee osteoarthritis after anterior cruciate ligament injury. Background: It has been demonstrated that anterior cruciate ligament (ACL) injury-induced cartilage degeneration is the key risk factor for post-traumatic knee osteoarthritis (PTKOA). Pituitary adenylate cyclase-activating polypeptide (PACAP), a common neuropeptide exerting a wide spectrum of functions, has been shown to inhibit inflammation and prevent cartilage degeneration.
Objective: The current study was performed to investigate circulating and synovial fluid PACAP concentrations in ACL injury patients to determine their relationship with the disease progression of the severity of post-traumatic knee osteoarthritis (PTKOA).
Methods: 72 ACL injury patients receiving arthroscopical examination and surgery were enrolled in the study. Meanwhile, 60 gender- and age-matched non-traumatic patellar dislocation patients were enrolled as controls. The VAS score, Lysholm score and International Knee Documentation Committee (IKDC) score were all recorded to evaluate the clinical severity. Serum and synovial fluid (SF) PACAP levels were investigated by enzyme-linked immunosorbent assay (ELISA). The IL-1β and TNF-α levels were also investigated. The degree of meniscus injury was assessed by MR imaging. The modified Mankin score was recorded to examine the cartilage histopathological alterations. Receiver operating characteristic (ROC) curve analysis was performed to discuss the diagnostic value of PACAP levels for the prediction of the radiographic grading in comparison with IL-1β and TNF-α.
Results: Serum PACAP levels did not differ significantly between PTKOA patients and patellar dislocation controls. However, SF PACAP levels were significantly lower in PTKOA patients than controls. In addition, SF PACAP levels were negatively associated with MRI grade of meniscus injury and VAS score, and were positively associated with Lysholm and IKDC scores. SF PACAP levels were also negatively related to Mankin score as well as the expression of IL-1β and TNF-α. ROC curve analysis showed that attenuated PACAP levels may serve as a favorable marker for MRI-detected meniscus injury.
Conclusions: SF PACAP concentrations showed an independent and negative association with disease severity in PTKOA following ACL injury. Local treatment with PACAP may act as a possible adjuvant therapy for delaying the process of PTKOA.
Answer: According to the findings from the cohort study with a level of evidence 3, age does not appear to significantly influence the rate of incident radiographic knee osteoarthritis (OA) after a traumatic anterior cruciate ligament (ACL) injury. However, it was observed that mature-aged athletes are likely to arrive at the OA endpoint sooner compared to younger individuals. Specifically, adults developed OA earlier than adolescents at 5 and 10 years after reconstruction, but survival analysis did not demonstrate that adults were more likely to develop radiographic knee OA at 15 years after reconstruction compared with the adolescent-young cohort (PUBMED:27268239).
This suggests that while the overall long-term risk of developing OA may not differ significantly with age at the time of ACL injury, the progression to OA may occur at a faster rate in older adults. Therefore, while age at injury may not be a determinant of the eventual development of OA, it may influence the timeline of OA onset following ACL injury. |
Instruction: Is SAPS 3 better than APACHE II at predicting mortality in critically ill transplant patients?
Abstracts:
abstract_id: PUBMED:30039839
APACHE II and SAPS II as predictors of brain death development in neurocritical care patients Aim: To assess the prognostic value of APACHE II and SAPS II scales to predict brain death evolution of neurocritical care patients.
Patients And Methods: Retrospective observational study performed in a tertiary hospital. The study included 508 patients over 16 years of age, hospitalized in the ICU for at least 24 hours. The variables of interest were: demographic data, risk factors, APACHE II, SAPS II and outcome.
Results: Median age: 41 years (IQR: 25-57). Males: 76.2%. Most frequent reason for admission: trauma (55.3%). Medians: Glasgow Coma Scale (GCS), 10 points; APACHE II, 13 points; SAPS II, 31 points; and ICU stay, 5 days. Mortality in the ICU was 28.5% (n = 145), of whom 44 (8.7%) evolved to brain death. Univariate logistic regression analysis showed that GCS, APACHE II and SAPS II scores, as well as length of ICU stay, were predictors of brain death evolution. However, the multivariate analysis performed including APACHE II and SAPS II scores showed that only APACHE II maintained statistical significance, despite the good discrimination of both scores.
Conclusion: Transplant coordinators might use the APACHE II score as a tool to detect patients at risk of progression to brain death, minimizing the loss of potential donors.
abstract_id: PUBMED:24791049
A comparison of Simplified Acute Physiology Score II, Acute Physiology and Chronic Health Evaluation II and Acute Physiology and Chronic Health Evaluation III scoring systems in predicting mortality and length of stay at a surgical intensive care unit. Background: Several scoring systems for critically ill patients have been developed over the last three decades. The Acute Physiology and Chronic Health Evaluation (APACHE) and the Simplified Acute Physiology Score (SAPS) are the most widely used scoring systems in the intensive care unit (ICU). The aim of this study was to assess the prognostic accuracy of the SAPS II, APACHE II and APACHE III scoring systems in predicting short-term hospital mortality of surgical ICU patients.
Materials And Methods: Prospectively collected data from 202 patients admitted to Mashhad University Hospital postoperative ICU were analyzed. Calibration was estimated using the Hosmer-Lemeshow goodness-of-fit test. Discrimination was evaluated by using the receiver operating characteristic (ROC) curves and area under a ROC curve (AUC).
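The two evaluation steps named here, discrimination by ROC AUC and calibration by the Hosmer-Lemeshow goodness-of-fit test, can be sketched as follows; the Hosmer-Lemeshow statistic is computed over groups of increasing predicted risk with df = groups − 2. Predictions and outcomes are simulated.

```python
# Sketch: discrimination (AUC) and Hosmer-Lemeshow calibration (simulated data).
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
pred = rng.beta(2, 6, 202)                        # predicted mortality risks
died = (rng.random(202) < pred).astype(int)       # simulated outcomes

print(f"AUC = {roc_auc_score(died, pred):.3f}")

def hosmer_lemeshow(y, p, groups=10):
    """H-L chi-square over groups of increasing predicted risk (df = groups - 2)."""
    order = np.argsort(p)
    y, p = y[order], p[order]
    stat = 0.0
    for yg, pg in zip(np.array_split(y, groups), np.array_split(p, groups)):
        obs, exp, n = yg.sum(), pg.sum(), len(yg)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return stat, chi2.sf(stat, groups - 2)

h, p_val = hosmer_lemeshow(died, pred)
print(f"Hosmer-Lemeshow: H = {h:.2f}, p = {p_val:.3f} (p > 0.05 suggests adequate fit)")
```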
Result: Two hundred and two patients admitted to the post-surgical ICU were evaluated. The mean SAPS II, APACHE II, and APACHE III scores for survivors were found to be significantly lower than those of non-survivors. The calibration was best for the APACHE II score. Discrimination was excellent for the APACHE II (AUC: 0.828) score and acceptable for the APACHE III (AUC: 0.782) and SAPS II (AUC: 0.778) scores.
Conclusion: APACHE II provided better discrimination than APACHE III and SAPS II; calibration was good for APACHE II and poor for APACHE III and SAPS II. APACHE II performed excellently in this post-surgical ICU.
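The two properties evaluated above, discrimination (area under the ROC curve) and calibration (Hosmer-Lemeshow goodness-of-fit), can be illustrated with a short Python sketch. The score distribution, outcome model, and decile grouping below are synthetic and chosen only for demonstration; none of the numbers reproduce the study.

import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
score = rng.normal(15, 7, 500).clip(0, 50)                    # synthetic severity scores
died = rng.binomial(1, 1 / (1 + np.exp(-(score - 20) / 5)))   # synthetic outcomes

# Discrimination: AUC of predicted risk from a logistic model of the score.
model = LogisticRegression().fit(score.reshape(-1, 1), died)
risk = model.predict_proba(score.reshape(-1, 1))[:, 1]
print("AUC:", roc_auc_score(died, risk))

# Calibration: Hosmer-Lemeshow C statistic over deciles of predicted risk.
edges = np.percentile(risk, np.arange(10, 100, 10))
groups = np.digitize(risk, edges)
chi2 = 0.0
for g in range(10):
    in_g = groups == g
    obs, exp, n = died[in_g].sum(), risk[in_g].sum(), in_g.sum()
    chi2 += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
print("H-L chi2:", chi2, "p:", 1 - stats.chi2.cdf(chi2, df=8))

Discrimination asks whether higher scores rank non-survivors above survivors; calibration asks whether predicted and observed deaths agree within each risk decile.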
abstract_id: PUBMED:19775043
The performance of customised APACHE II and SAPS II in predicting mortality of mixed critically ill patients in a Thai medical intensive care unit. The aim of this study was to evaluate and compare the performance of customised Acute Physiology and Chronic Health Evaluation II (APACHE II) and Simplified Acute Physiology Score II (SAPS II) in predicting hospital mortality of mixed critically ill Thai patients in a medical intensive care unit. A prospective cohort study was conducted over a four-year period. The subjects were randomly divided into calibration and validation groups. Logistic regression analysis was used for customisation. The performance of the scores was evaluated by the discrimination, calibration and overall fit in the overall group and across subgroups in the validation group. Two thousand and forty consecutive intensive care unit admissions during the study period were split into two groups. Both customised models showed excellent discrimination. The area under the receiver operating characteristic curve of the customised APACHE II was greater than that of the customised SAPS II (0.925 and 0.892, P < 0.001). Hosmer-Lemeshow goodness-of-fit showed good calibration for the customised APACHE II in overall populations and various subgroups but insufficient calibration for the customised SAPS II. The customised SAPS II showed good calibration in only the younger, postoperative and sepsis patient subgroups. The overall performance of the customised APACHE II was better than the customised SAPS II (Brier score 0.089 and 0.109, respectively). Our results indicate that the customised APACHE II shows better performance than the customised SAPS II in predicting hospital mortality and could be used for mortality prediction and quality assessment in our unit or other intensive care units with a similar case mix.
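The customisation step described here, refitting the score-to-mortality mapping on a local calibration cohort and then checking discrimination and overall fit on a validation cohort, might be sketched as follows in Python. The cohort size, coefficients, and 50/50 split are assumptions for illustration only.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
score = rng.normal(18, 8, 2040).clip(0, 60)                   # synthetic severity scores
died = rng.binomial(1, 1 / (1 + np.exp(-(score - 25) / 6)))

# Random split into calibration and validation groups, as in the study design.
X_cal, X_val, y_cal, y_val = train_test_split(
    score.reshape(-1, 1), died, test_size=0.5, random_state=0)

# "Customisation": refit the score-to-mortality mapping on the local cohort.
custom = LogisticRegression().fit(X_cal, y_cal)
risk = custom.predict_proba(X_val)[:, 1]
print("validation AUC:", roc_auc_score(y_val, risk))
print("Brier score:", brier_score_loss(y_val, risk))          # overall fit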
abstract_id: PUBMED:35776696
Performance of NUTRIC score to predict 28-day mortality in critically ill patients after replacing APACHE II with SAPS 3. Objectives: The Nutrition Risk in the Critically Ill (NUTRIC) score has been advocated as a screening tool for nutrition risk assessment in critically ill patients. It was developed and validated to predict 28-day mortality using Acute Physiology and Chronic Health Evaluation II (APACHE II) score as one of its components. However, nowadays the Simplified Acute Physiology Score 3 (SAPS 3) demonstrates better performance. We aimed to test the performance of NUTRIC score in predicting 28-day mortality after replacement of APACHE II by SAPS 3, and the interaction between nutrition adequacy and mortality.
Methods: Adult patients who received nutrition therapy and remained >3 days in intensive care unit were retrospectively evaluated. In order to replace APACHE II component, we used ranges of SAPS 3 with similar predicted mortality. Discrimination between these tools in predicting 28-day mortality was assessed using the ROC curve, calibration was evaluated with calibration belt, and correlation with intraclass correlation. The relationship between nutritional adequacy and mortality was assessed in a subgroup with available data.
Results: 542 patients were analyzed (median age 78 years, 73.4% admitted for non-surgical reasons, 28-day mortality 18.1%). Discrimination of 28-day mortality prediction did not differ between the tools (p>0.05), and they showed good agreement (intraclass correlation 0.86) with good calibration. In the subgroup analysis of nutritional adequacy (n = 99), no association with mortality was observed.
Conclusion: Performance of NUTRIC score with SAPS 3 is similar to the original tool. Therefore, it might be used in settings where APACHE II is not available.
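The substitution idea, replacing the APACHE II component with SAPS 3 ranges of similar predicted mortality, might look like the sketch below. The APACHE II cut-points follow the commonly cited NUTRIC scheme (worth verifying against the original NUTRIC publication), and the SAPS 3 ranges are invented placeholders, not the ranges derived in this study.

def apache2_nutric_points(apache2: int) -> int:
    # Commonly cited NUTRIC contribution of APACHE II (0-3 points).
    if apache2 < 15: return 0
    if apache2 < 20: return 1
    if apache2 < 28: return 2
    return 3

# Placeholder SAPS 3 ranges intended to carry similar predicted mortality.
SAPS3_EQUIVALENT_RANGES = [(0, 46), (47, 56), (57, 69)]   # hypothetical cut-points

def saps3_nutric_points(saps3: int) -> int:
    for points, (lo, hi) in enumerate(SAPS3_EQUIVALENT_RANGES):
        if lo <= saps3 <= hi:
            return points
    return 3

print(saps3_nutric_points(62))   # -> 2 under the placeholder mapping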
abstract_id: PUBMED:17487117
A comparison of APACHE II and SAPS II scoring systems in predicting hospital mortality in Thai adult intensive care units. Objective: To assess the performance of Acute Physiology and Chronic Health Evaluation II (APACHE II) and Simplified Acute Physiology Score II (SAPS II) in Thai critically ill patients.
Material And Method: Prospective observational cohort study conducted between July 1, 2004 and October 31, 2005 in the Intensive Care Unit (ICU) of Songklanagarind Hospital, an 800-bed tertiary referral university teaching hospital.
Results: One thousand three hundred sixteen patients were enrolled. There were 310 deaths (23.6%) at hospital discharge. APACHE II and SAPS II predicted hospital mortality of 30.5 ± 28.2% and 30.5 ± 29.8%, respectively. Both models showed excellent discrimination. The discrimination of APACHE II was better than that of SAPS II (0.911 and 0.888, p < 0.001). However, both systems presented poor calibration. The Hosmer-Lemeshow goodness-of-fit H and C statistics were 66.59 and 66.65 for APACHE II (p < 0.001) and 54.01 and 71.44 for SAPS II (p < 0.001).
Conclusion: APACHE II provided better discrimination than SAPS II, but both models showed poor calibration in over predicting mortality in our ICU patients. Customized or new severity scoring systems should be developed for critically ill patients in Thailand.
abstract_id: PUBMED:35663208
Comparison of elevated cardiac troponin I with SAPS-II and APACHE-II score in predicting outcome of severe intoxications. Background And Aims: To date, different methods have been invented to risk-stratify critically ill patients, however, there is a paucity of information regarding assessing the severity of poisonings. This study was designed to determine the comparative efficacy of Simplified Acute Physiology Score-II (SAPS-II) and Acute Physiology and Chronic Health Evaluation-II (APACHE-II)score with cardiac troponin I (cTnI) in predicting severe intoxication outcomes.
Methods: This was a prospective study conducted on patients who fulfilled defined severe intoxication criteria necessitating intensive care unit (ICU) admission over a period of 6 months. SAPS-II and APACHE-II scores were calculated and cTnI concentrations were measured. These indicators were compared to determine which has the better ability to prognosticate mortality and complications.
Results: A total of 55 cases (median age, 35 [24-49] years) were enroled. Eight patients (14.5%) died. Mean SAPS-II, median APACHE-II score and median cTnI concentrations were 32.05 ± 11.24, 13 [10-17] and 0.008 [0.002-0.300] ng/ml, respectively, which were significantly different between the survivors and non-survivors. Receiver operating characteristic curve results of SAPS-II, APACHE-II score and cTnI concentrations in predicting mortality were 0.945, 0.932 and 0.763 and in predicting complications were 0.779, 0.739 and 0.727, respectively. High cTnI concentration (>0.37 ng/ml) correlated with soft clinical outcomes, including length of ventilatory support, length of ICU stay and length of hospital stay (LOS) (r: 0.928, 0.881 and 0.735 respectively; all P < 0.001).
Conclusion: SAPS-II scores were superior in predicting death and complications, while cTnI correlated more closely with soft clinical outcomes, such as the length of ventilator support, length of ICU stay or LOS.
abstract_id: PUBMED:23525309
Is SAPS 3 better than APACHE II at predicting mortality in critically ill transplant patients? Objectives: This study compared the accuracy of the Simplified Acute Physiology Score 3 with that of Acute Physiology and Chronic Health Evaluation II at predicting hospital mortality in patients from a transplant intensive care unit.
Method: A total of 501 patients were enrolled in the study (152 liver transplants, 271 kidney transplants, 54 lung transplants, 24 kidney-pancreas transplants) between May 2006 and January 2007. The Simplified Acute Physiology Score 3 was calculated using the global equation (customized for South America) and the Acute Physiology and Chronic Health Evaluation II score; the scores were calculated within 24 hours of admission. A receiver-operating characteristic curve was generated, and the area under the receiver-operating characteristic curve was calculated to identify the patients at the greatest risk of death according to Simplified Acute Physiology Score 3 and Acute Physiology and Chronic Health Evaluation II scores. The Hosmer-Lemeshow goodness-of-fit test was used; a statistically significant result indicated a difference in performance across deciles. The standardized mortality ratio was used to estimate overall model performance.
Results: The ability of both scores to predict hospital mortality was poor in the liver and renal transplant groups and average in the lung transplant group (area under the receiver-operating characteristic curve = 0.696 for Simplified Acute Physiology Score 3 and 0.670 for Acute Physiology and Chronic Health Evaluation II). The calibration of both scores was poor, even after customizing the Simplified Acute Physiology Score 3 score for South America.
Conclusions: The low predictive accuracy of the Simplified Acute Physiology Score 3 and Acute Physiology and Chronic Health Evaluation II scores does not warrant the use of these scores in critically ill transplant patients.
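The standardized mortality ratio used above for overall performance is simply observed deaths divided by the sum of model-predicted death probabilities. A minimal sketch on synthetic data (the risk distribution and the Poisson-approximation confidence interval are illustrative choices, not the study's):

import numpy as np

rng = np.random.default_rng(2)
predicted_risk = rng.uniform(0.05, 0.6, 501)    # per-patient predicted mortality
died = rng.binomial(1, predicted_risk)          # observed 0/1 outcomes

observed, expected = died.sum(), predicted_risk.sum()
smr = observed / expected
se = np.sqrt(observed) / expected               # Poisson approximation
print(f"SMR = {smr:.2f} (approx. 95% CI {smr - 1.96*se:.2f} to {smr + 1.96*se:.2f})")

An SMR near 1 suggests the score's overall mortality estimate matches the cohort; values well above or below 1 indicate systematic under- or over-prediction.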
abstract_id: PUBMED:35107550
SAPS 3 in the modified NUTrition RIsk in the Critically ill score has comparable predictive accuracy to APACHE II as a severity marker. Objective: To evaluate the substitution of Acute Physiology and Chronic Health Evaluation II (APACHE II) by Simplified Acute Physiology Score 3 (SAPS 3) as a severity marker in the modified version of the NUTrition RIsk in the Critically ill score (mNUTRIC, without interleukin 6), based on an analysis of its discriminative ability for in-hospital mortality prediction.
Methods: This retrospective cohort study evaluated 1,516 adult patients admitted to an intensive care unit of a private general hospital from April 2017 to January 2018. Performance evaluation included Fleiss' Kappa and Pearson correlation analysis. The discriminative ability for estimating in-hospital mortality was assessed with the Receiver Operating Characteristic curve.
Results: The sample was randomly divided into two-thirds for model development (n = 1,025; age 72 [57 - 83]; 52.4% male) and one-third for performance evaluation (n = 490; age 72 [57 - 83]; 50.8% male). The agreement with mNUTRIC was Kappa of 0.563 (p < 0.001), and the correlation between the instruments was Pearson correlation of 0.804 (p < 0.001). The tool showed good performance in predicting in-hospital mortality (area under the curve 0.825 [0.787 - 0.863] p < 0.001).
Conclusion: The substitution of APACHE II by SAPS 3 as a severity marker in the mNUTRIC score showed good performance in predicting in-hospital mortality. These data provide the first evidence regarding the validity of the substitution of APACHE II by SAPS 3 in the mNUTRIC as a marker of severity. Multicentric studies and additional analyses of nutritional adequacy parameters are required.
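The agreement analysis can be sketched briefly in Python. The paper reports Fleiss' kappa; with only two instruments, Cohen's kappa (used below) is the usual two-rater analogue, and the >= 5-point high-risk threshold is the commonly used mNUTRIC cut-off, assumed here rather than taken from the abstract. Data are synthetic.

import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)
mnutric_apache = rng.integers(0, 10, 490)                         # 0-9 point totals
mnutric_saps3 = np.clip(mnutric_apache + rng.integers(-1, 2, 490), 0, 9)

r, _ = pearsonr(mnutric_apache, mnutric_saps3)
print("Pearson r:", round(r, 3))
print("kappa (high risk >= 5):",
      round(cohen_kappa_score(mnutric_apache >= 5, mnutric_saps3 >= 5), 3))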
abstract_id: PUBMED:27882011
Better prognostic marker in ICU - APACHE II, SOFA or SAP II! Objectives: This study was designed to determine the comparative efficacy of different scoring system in assessing the prognosis of critically ill patients.
Methods: This was a retrospective study conducted in the medical intensive care unit (MICU) and high dependency unit (HDU), Medical Unit III, Civil Hospital, from April 2012 to August 2012. All patients over 16 years of age who fulfilled the criteria for MICU admission were included. The predictive mortality of APACHE II, SAP II and SOFA was calculated. Calibration and discrimination were used to assess the validity of each scoring model.
Results: A total of 96 patients with equal gender distribution were enrolled. The average APACHE II score in non-survivors (27.97 ± 8.53) was higher than in survivors (15.82 ± 8.79), with a statistically significant p value (<0.001). The average SOFA score in non-survivors (9.68 ± 4.88) was higher than in survivors (5.63 ± 3.63), with a statistically significant p value (<0.001). The average SAP II score in non-survivors (53.71 ± 19.05) was higher than in survivors (30.18 ± 16.24), with a statistically significant p value (<0.001).
Conclusion: All three tested scoring models (APACHE II, SAP II and SOFA) would be accurate enough for a general description of our ICU patients. APACHE II showed better calibration and discrimination power than SAP II and SOFA.
abstract_id: PUBMED:28286811
Comparison of APACHE II and SAPS II Scoring Systems in Prediction of Critically Ill Patients' Outcome. Introduction: Using physiologic scoring systems for identifying high-risk patients for mortality has been considered recently. This study was designed to evaluate the values of Acute Physiology and Chronic Health Evaluation II (APACHE II) and Simplified Acute Physiologic Score (SAPS II) models in prediction of 1-month mortality of critically ill patients.
Methods: The present prospective cross sectional study was performed on critically ill patients presented to emergency department during 6 months. Data required for calculation of the scores were gathered and performance of the models in prediction of 1-month mortality were assessed using STATA software 11.0.
Results: 82 critically ill patients with a mean age of 53.45 ± 20.37 years were included (65.9% male). Their mortality rate was 48%. Mean SAPS II (p < 0.0001) and APACHE II (p = 0.0007) scores were significantly higher in patients who died. The areas under the ROC curve of SAPS II and APACHE II for prediction of mortality were 0.75 (95% CI: 0.64 - 0.86) and 0.72 (95% CI: 0.60 - 0.83), respectively (p = 0.24). The slope and intercept of SAPS II were 1.02 and 0.04, respectively. These values were 0.92 and 0.09 for APACHE II, respectively.
Conclusion: The findings of the present study showed that APACHE II and SAPS II had similar value in predicting 1-month mortality of patients. The discriminatory powers of the two models were acceptable, but their calibration showed some lack of fit, indicating that APACHE II and SAPS II are only partially accurate.
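The slope and intercept reported above are standard logistic-calibration quantities: regressing the observed outcome on the logit of a model's predicted risk should give a slope near 1 and an intercept near 0 when the model is well calibrated. A minimal sketch on synthetic data, assuming risk holds one model's predicted 1-month mortality:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
risk = rng.uniform(0.05, 0.9, 82)      # synthetic predicted mortality
died = rng.binomial(1, risk)           # synthetic observed outcomes

logit_risk = np.log(risk / (1 - risk))
fit = sm.Logit(died, sm.add_constant(logit_risk)).fit(disp=0)
intercept, slope = fit.params
print(f"calibration intercept = {intercept:.2f}, slope = {slope:.2f}")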
Answer: Based on the available abstracts, it appears that SAPS 3 is not necessarily better than APACHE II at predicting mortality in critically ill transplant patients. One study specifically addressing this question found that both the Simplified Acute Physiology Score 3 (SAPS 3) and Acute Physiology and Chronic Health Evaluation II (APACHE II) had low predictive accuracy in critically ill transplant patients (PUBMED:23525309). The study concluded that the low predictive accuracy of both scores does not warrant their use in this specific patient population.
Furthermore, another study that compared the performance of SAPS 3 with APACHE II in the modified NUTrition RIsk in the Critically ill score (mNUTRIC) found that the substitution of APACHE II by SAPS 3 as a severity marker showed good performance in predicting in-hospital mortality, suggesting that SAPS 3 could be a valid alternative to APACHE II in this context (PUBMED:35107550). However, this does not directly imply that SAPS 3 is superior to APACHE II, but rather that it has comparable predictive accuracy.
Overall, the evidence does not support the conclusion that SAPS 3 is better than APACHE II at predicting mortality in critically ill transplant patients. Instead, it suggests that both scoring systems have limitations in this patient group and that there may be a need for customized or alternative severity scoring systems for critically ill patients in transplant settings (PUBMED:23525309).
Instruction: Quality of life, alcohol detoxification and relapse: is quality of life a predictor of relapse or only a secondary outcome measure?
Abstracts:
abstract_id: PUBMED:24929832
Quality of life, alcohol detoxification and relapse: is quality of life a predictor of relapse or only a secondary outcome measure? Purpose: To estimate variations in Overall Quality Of Life (OQOL) within 12 months following alcohol detoxification and to evaluate the predictive value of OQOL for relapse and alcohol use severity.
Methods: Alcohol use disorders and four OQOL domains (physical health, psychological health, social relationships and environment) were assessed in 199 patients entering in-patient alcohol detoxification. Follow-up assessments were performed at 6 and 12 months after discharge. Cross-sectional and longitudinal analyses explored the relationship between OQOL and alcohol use severity, examining differences between abstinent and relapsed patients. The predictive value of OQOL was analyzed by logistic and linear regression.
Results: Correlation between OQOL and Alcohol Use Disorders Identification Test scores was confirmed at all stages of observation. Abstinent patients showed a significant improvement in all OQOL domains at 6 months after discharge, whereas OQOL domains did not undergo any significant change in relapsed patients. Baseline OQOL did not prove to be predictive of either relapse or alcohol use severity.
Conclusions: Overall quality of life changed in parallel with alcohol use severity throughout the duration of the study, confirming it to be a useful and sensitive measure of secondary outcome for alcohol detoxification. Conversely, none of the OQOL baseline scores functioned as predictors of relapse within 12 months following discharge or alcohol use severity in relapsed patients.
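The predictive-value analysis described in the methods, logistic regression of 12-month relapse on the baseline OQOL domains, can be sketched as follows. The data frame, column names, and effect sizes are synthetic placeholders, not the study's variables.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 199
df = pd.DataFrame({
    "physical": rng.normal(13, 3, n),      # baseline WHOQOL-BREF domain scores
    "psych": rng.normal(12, 3, n),
    "social": rng.normal(14, 3, n),
    "environment": rng.normal(13, 3, n),
    "relapse": rng.binomial(1, 0.5, n),    # 12-month relapse indicator
})
m = smf.logit("relapse ~ physical + psych + social + environment", df).fit(disp=0)
print(np.exp(m.params).round(2))           # odds ratios per domain point
print(m.pvalues.round(3))

Odds ratios close to 1 with non-significant p-values across all four domains would mirror the reported finding that baseline OQOL did not predict relapse.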
abstract_id: PUBMED:24459373
Quality of life as an outcome measure in the treatment of alcohol dependence. Background: Quality of life has emerged as an important treatment outcome measure for alcohol dependence whose natural course comprises of remission and relapse.
Materials And Methods: The purpose of this study was to examine the prospective change in quality of life (QoL) in 56 patients with alcohol dependence aged 18-45 years over a three-month period and compare it with the QoL of 150 age- and gender-matched healthy controls using the WHOQoL-BREF. Severity of alcohol dependence and drinking parameters were assessed.
Results: There was significant improvement in the QoL of patients with alcohol dependence over three months of abstinence. The physical, psychological, social, and environment domains of QoL in subjects with alcohol dependence were significantly lower before treatment initiation than in the healthy controls. Alcoholic liver disease emerged as a predictor of improvement in the psychological and social domains of QoL.
Conclusion: The study confirms poor quality of life in patients with alcohol dependence before intervention. Regular follow-up with family members in the out-patient setting enables patients to achieve complete abstinence, thereby improving their quality of life.
abstract_id: PUBMED:31352304
The Auckland alcohol detoxification outcome study: Measuring changes in quality of life in individuals completing a medicated withdrawal from alcohol in a detoxification unit. Aim: To measure outcomes in quality of life in alcohol-dependent patients following a medicated withdrawal from alcohol.
Methods: 79 patients who were admitted to a detoxification unit in Auckland, New Zealand between March 2016 and September 2016 were assessed for severity of alcohol dependence, using the Alcohol Use Disorders Identification Test (AUDIT) and the Severity of Alcohol Dependency Questionnaire (SADQ), and for quality of life (QOL), using the World Health Organisation Quality of Life-abbreviated version of the WHOQOL 100, New Zealand version (WHOQOL-BREF NZ). Patients were followed up at three months and 12 months, when an estimate of drinking behavior and the WHOQOL-BREF NZ were completed via telephone interview. QOL domain scores were assessed from baseline to three months and from baseline to 12 months in both the relapse and abstinent groups. At three months, a single question was asked in order to collect qualitative data.
Results: At baseline, the study population had statistically significantly lower mean QOL domain scores than scores reported from the general population. QOL improved in patients following detoxification at three months and 12 months in both the relapse and abstinent groups; however, the change in scores from baseline was greater in the abstinent group compared to the relapse group. The majority of patients reported that the admission had been a positive experience.
Conclusion: QOL improves in individuals following a medicated withdrawal from alcohol regardless of whether individuals relapse; however, those who remain abstinent have greater improvements in quality of life.
abstract_id: PUBMED:11104116
Application of a quality of life measure, the life situation survey (LSS), to alcohol-dependent subjects in relapse and remission. Background: Recent studies have shown that quality of life (QOL) is improved significantly when subjects do not relapse to heavy drinking, and QOL deteriorates significantly on prolonged relapse. This article further investigates these relationships using a QOL index, the Life Situation Survey (LSS).
Methods: Eighty-two DSM-IV alcohol-dependent subjects admitted for alcohol detoxification were studied at baseline and 12 week follow-up. Sociodemographic data were collected, and severity of alcohol dependence (SADQ) and General Health Questionnaire (GHQ-12) were baseline indices only. The main outcome measure, the LSS, was administered at both time points.
Results: Two subjects were lost to follow-up and one died during the study period. Thus, the relapse/nonrelapse analysis related to 79 subjects. Fifty subjects (63%) had relapsed to heavy drinking at 3 months follow-up. There was a significant correlation between LSS and GHQ-12 scores. Significant changes occurred in total LSS scores as a result of relapse and nonrelapse. The improvement in LSS scores associated with nonrelapse was larger than the deterioration that accompanied relapse. In those subjects who did not relapse to heavy drinking, the mean follow-up score remained in the poor/borderline LSS range. Remission from heavy drinking was accompanied by significant improvements in appetite, sleep, and self-esteem. Relapse to heavy drinking coincided with a significant deterioration in mood/affect, public support, and work/life role scores.
Conclusion: QOL as assessed by the LSS in recently detoxified alcoholics is impaired significantly. In the nonrelapse group, there was a significant improvement in LSS scores after 3 months. Relapse was accompanied by a smaller deterioration in LSS scores. The LSS can play an important role in monitoring the clinical care and progress of alcohol-dependent subjects.
abstract_id: PUBMED:32208059
Development of the MobQoL patient reported outcome measure for mobility-related quality of life. Purpose: To examine how mobility and mobility impairment affect quality of life; to develop a descriptive system (i.e., questions and answers) for a novel mobility-related quality of life outcome measure.
Materials And Methods: Data were collected through semi-structured interviews. Participants were recruited predominantly from NHS posture and mobility services. Qualitative framework analysis was used to analyse data. In the first stage of analysis the key dimensions of mobility-related quality of life were defined, and in the second stage a novel descriptive system was developed from the identified dimensions.
Results: Forty-six interviews were conducted with 37 participants (aged 20-94 years). Participants had a wide range of conditions and disabilities which impaired their mobility, including cerebral palsy, multiple sclerosis, and arthritis. Eleven dimensions of mobility-related quality of life were identified: accessibility, safety, relationships, social inclusion, participation, personal care, pain and discomfort, independence, energy, self-esteem, and mental-wellbeing. A new outcome measure, known as MobQoL, was developed.
Conclusions: Mobility and mobility impairment can have significant impacts on quality of life. MobQoL is the first outcome measure designed specifically to measure the impact of mobility on quality of life, and therefore has utility in research and practice to measure patient outcomes related to rehabilitation. Implications for rehabilitation: Mobility impairment affects many different aspects of health and quality of life. The impact of mobility impairment on quality of life is related to processes of physical, emotional, and behavioural adaptation. MobQoL is the first patient-reported outcome measure designed specifically to measure the quality of life impacts of mobility impairment and assistive mobility technology use. MobQoL has potential to be used by rehabilitation professionals to measure and monitor mobility-related quality of life as part of routine clinical practice.
abstract_id: PUBMED:37735801
Examining Changes in Quality of Life as an Outcome Measure in Three Randomized Controlled Trials of Online Interventions That Included an Intervention for Hazardous Alcohol Use. Background: Quality of life (QOL) summarizes an individual's perceived satisfaction across multiple life domains. Many factors can impact this measure, but research has demonstrated that individuals with addictions, physical, and mental health concerns tend to score lower than general population samples. While QOL is often important to individuals, it is rarely used by researchers as an outcome measure when evaluating treatment efficacy.
Methods: This secondary analysis used data collected during three separate randomized controlled trials testing the efficacy of different online interventions to explore change in QOL over time between treatment conditions. The first project was concerned with only alcohol interventions. The other two combined either a gambling or mental health intervention with a brief alcohol intervention. Males and females were analyzed separately.
Results: This analysis found treatment effects among female participants in two projects. In the project only concerning alcohol, female quality of life improved more among those who received an extensive intervention for hazardous alcohol use compared to a brief intervention (p = .029). QOL among females who received only the mental health intervention improved more than those who also received a brief alcohol intervention (p = .049).
Conclusion: Poor QOL is often cited as a reason individuals decide to make behavior changes, yet treatment evaluations do not typically consider this patient-important outcome. This analysis found some support for different treatment effects on QOL scores in studies involving at least one intervention for hazardous alcohol use.
abstract_id: PUBMED:37839366
Fear of relapse and quality of life in multiple sclerosis: The mediating role of psychological resilience. The goal of this study was to examine the mediating role of psychological resilience in the relationship between fear of relapse and quality of life in a sample of patients with multiple sclerosis (PwMS). This cross-sectional study was conducted online. A total of 240 PwMS were surveyed using the Multiple Sclerosis Quality of Life inventory, the Fear of Relapse Scale and the Connor-Davidson Resilience Scale. The PROCESS macro was used to perform the mediation analysis. In our study, fear of relapse was a predictor of psychological resilience and quality of life, and psychological resilience was a predictor of quality of life. Finally, psychological resilience showed a mediating role in the relationship between fear of relapse and quality of life. Considering that resilience is a modifiable variable, the implementation of interventions aimed at enhancing resilience can have a favorable impact on the psychological well-being and quality of life of patients with multiple sclerosis.
abstract_id: PUBMED:24167441
Stressful Life Events and Relapse Among Formerly Alcohol Dependent Adults. We examined associations between stressful life events and relapse among adults in the United States with at least 1 year of remission from DSM-IV alcohol dependence. The sample consisted of individuals in remission from alcohol dependence at the Wave 1 interview (2001-2002) for the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) who also participated in a Wave 2 interview (2004-2005; N 1,707). Associations between stressful life events, demographic variables, = and the binary outcome of alcohol dependence relapse were examined with multiple logistic regression models. After adjustment for potential confounders, respondents who were divorced or separated in the year preceding the baseline assessment (Wave 1) were over two times more likely (OR = 2.32; CI = 1.01-5.34) to have relapsed 3 years later (Wave 2), compared to those not experiencing a divorce/separation in the 12 months prior to Wave 1. No other stressful life event was associated with relapse. Findings suggest that formerly alcohol dependent adults are at increased risk for relapse following divorce/separation. These results highlight the need for social work practitioners to consider the possibility of relapse following a divorce when one or both partners have a history of alcohol dependence.
abstract_id: PUBMED:11109027
Quality of life measures and outcome in alcohol-dependent men and women. A sample of 82 (41 men, 41 women) DSM IV alcohol-dependent inpatients admitted for detoxification was studied at baseline and followed up 12 weeks thereafter. The following questionnaires were administered 4-5 days after admission for detoxification: socio-demographic information, Severity of Alcohol Dependence Questionnaire (SADQ), Alcohol Problems Questionnaire (APQ), Rotterdam Symptoms Checklist (RSCL), Life Situation Survey (LSS), Beck Depression Inventory (BDI), General Health Questionnaire (GHQ 12), and Nottingham Health Profile (NHP). All indices other than socio-demographic data, the SADQ, and APQ were administered at 12-week follow-up. After controlling for confounding factors at baseline, women were more likely to be in a higher social class, to have been prescribed anti-depressants during the previous 12 months, to drink fewer units of alcohol in a typical week, and to have higher psychiatric caseness scores (GHQ-12). A total of 80 subjects (97%) were successfully followed up. Gender did not significantly impact any of the 12-week outcome measures. There was no significant difference in the study relapse rates or time taken to relapse between men and women. The only significant total sample change was a reduction in the amount of alcohol consumed in a typical week. This was significantly related to changes in the following Quality of Life (QoL) measures: NHP emotional reaction sub-scores, LSS, and BDI scores.
abstract_id: PUBMED:26810131
Gender Differences in the Life Concerns of Persons Seeking Alcohol Detoxification. Background: This study explored the life concerns of persons seeking alcohol detoxification, a group with multiple life and psychosocial challenges. Gender may be an important contributor to the particular life concerns of persons with alcohol use disorders.
Methods: Using a 32-item, previously-validated life concerns survey that captures ten conceptual domains, we interviewed persons entering inpatient alcohol detoxification asking them to rate their level of concern about health and welfare items.
Results: Participants (n=189) were 27% female, with a mean age of 43.5 years. Overall, concern about alcohol problems was perceived as the most serious, followed by mental health, cigarette smoking, financial, and relationship problems. Men were significantly more concerned than women about six of the ten domains including money, drug use, transmissible diseases, and physical illness.
Conclusions: Recognition of the daily worries of persons seeking inpatient alcohol detoxification persons could allow providers to better tailor their services to the context of their patients' lives. Focusing on pressing life concerns such as mental health, financial, relationship problems, and other drug use may influence detoxification services and aftercare treatment choices.
Answer: Quality of life (QoL) has been studied in relation to alcohol detoxification and relapse, but the evidence suggests that it is not a reliable predictor of relapse. One study found that while overall quality of life (OQOL) changed in parallel with alcohol use severity, baseline OQOL did not predict either relapse or alcohol use severity within 12 months following discharge (PUBMED:24929832). This indicates that while QoL can be a sensitive measure of secondary outcome for alcohol detoxification, it does not function as a predictor of relapse.
Other studies have shown that QoL improves significantly in patients of alcohol dependence over periods of abstinence (PUBMED:24459373), and that QoL improves following a medicated withdrawal from alcohol, with greater improvements in those who remain abstinent compared to those who relapse (PUBMED:31352304). However, these improvements in QoL post-detoxification do not necessarily imply that initial QoL can predict future relapses.
Moreover, research using the Life Situation Survey (LSS) to assess QoL in alcohol-dependent subjects found that QoL is significantly impaired in recently detoxified alcoholics, and while there is significant improvement in QoL in the non-relapse group after 3 months, relapse is accompanied by a smaller deterioration in QoL scores (PUBMED:11104116). This further supports the notion that QoL is more of an outcome measure rather than a predictor of relapse.
In summary, while QoL is an important measure of well-being and is affected by alcohol use and detoxification, it does not appear to be a reliable predictor of relapse in individuals with alcohol use disorders. Instead, QoL should be considered a secondary outcome measure that reflects the changes in an individual's condition following treatment or relapse (PUBMED:24929832; PUBMED:24459373; PUBMED:31352304; PUBMED:11104116).
Instruction: ASA failure: does the combination ASA/clopidogrel confer better long-term vascular protection?
Abstracts:
abstract_id: PUBMED:24384643
ASA failure: does the combination ASA/clopidogrel confer better long-term vascular protection? Objective: To assess whether adding clopidogrel to acetylsalicylic acid (ASA) has a long-term protective vascular effect in patients with lacunar stroke while taking ASA.
Methods: Post hoc analysis of 838 patients with ASA failure and recent lacunar stroke from the Secondary Prevention of Small Subcortical Strokes Trial (SPS3) cohort randomly allocated to aspirin (325 mg/day) and clopidogrel (75 mg/day) or placebo. Primary efficacy outcome was stroke recurrence (ischemic and intracranial hemorrhage) and main safety outcome was major extracranial hemorrhage. Patients were followed for a mean period of 3.5 years.
Results: The ASA failure group had a significantly higher risk of vascular events including ischemic stroke when compared with the non-ASA failure group (n = 2,151) in SPS3 (p = 0.03). Mean age was 65.6 years and 65% were men. The risk of recurrent stroke was not reduced in the dual antiplatelet group, 3.1% per year, compared to the aspirin-only group, 3.3% per year (hazard ratio [HR] 0.91; 95% confidence interval [CI] 0.61-1.37). There was also no difference between groups for ischemic stroke (HR 0.90; 95% CI 0.59-1.38). The risk of gastrointestinal bleeding was higher in the dual antiplatelet group (HR 2.7; 95% CI 1.1-6.9); however, the risk of intracranial hemorrhage was not different.
Conclusions: In patients with a recent lacunar stroke while taking ASA, the addition of clopidogrel did not result in reduction of vascular events vs continuing ASA only.
Classification Of Evidence: This study provides Class I evidence that for patients with recent lacunar stroke while taking ASA, adding clopidogrel as compared to continuing ASA alone does not reduce the risk of recurrent stroke.
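The hazard ratios above come from a time-to-event comparison between the randomized arms. In Python this might be sketched with the lifelines Cox model, as below; the per-arm event rates, exponential event times, and censoring at 3.5 years are synthetic assumptions chosen only to echo the reported ~3%-per-year recurrence.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 838
dual = rng.integers(0, 2, n)                     # 1 = ASA + clopidogrel, 0 = ASA alone
rate = np.where(dual == 1, 0.031, 0.033)         # recurrent strokes per patient-year
t_event = rng.exponential(1 / rate)
time = np.minimum(t_event, 3.5)                  # censoring at mean follow-up
event = (t_event <= 3.5).astype(int)

df = pd.DataFrame({"time": time, "event": event, "dual": dual})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)                        # HR for dual therapy
print(cph.confidence_intervals_)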
abstract_id: PUBMED:27138269
Prevalence of acetylsalicylic acid (ASA) - low response in vascular surgery Background: Research has revealed that a decreased antiplatelet effect (low response [LR]/high on-treatment platelet reactivity [HPR]) of acetylsalicylic acid (ASA) and clopidogrel is associated with an increased risk of thromboembolic events. There are extensive ASA low response (ALR) and clopidogrel low response (CLR) prevalence data in the literature, but there are only a few studies concerning vascular surgical patients. The aim of this study was to examine the prevalence and risk factors of ALR and CLR in vascular surgical patients.
Materials And Methods: We examined n = 154 patients on long-term antiplatelet therapy who were treated for peripheral artery occlusive disease (PAD) and/or internal carotid artery stenosis (CVD). To detect an ALR or CLR, we examined whole blood samples with impedance aggregometry (ChronoLog® Aggregometer model 590). Risk factors were assessed by recording concomitant diseases, severity of vascular disease, laboratory test results and medication.
Results: We found an ALR prevalence of 19.3% and a CLR prevalence of 21.1%. Risk factors for ALR were an increased platelet and leucocyte count and co-medication with pantoprazole. We found no significant risk factors for a decreased antiplatelet effect of clopidogrel treatment.
Conclusion: The observed prevalences of ALR and CLR are in the range reported by other studies, which were based mainly on cardiology patients. Further investigations are needed to better evaluate the risk factors for HPR and to develop an effective antiplatelet therapy regimen to prevent cardiovascular complications.
abstract_id: PUBMED:32945920
Evaluation of treatment adaptation for low response to ASA in vascular surgery. Background: A decreased antiplatelet effect (low response, LR/high on-treatment platelet reactivity, HPR) of acetylsalicylic acid (ASA) is associated with an increased risk of thromboembolic events. LR is frequent, with a prevalence of about 20%, and a therapeutic regimen for it is not yet established. The aim of this prospective study was to evaluate the effectiveness of a therapeutic regimen for treatment adaptation when LR/HPR is detected in vascular surgery patients.
Methods: Overall, 36 patients on long-term antiplatelet treatment with 100 mg/day ASA and a detected ASA low response (ALR) were included in the study. In this patient group, the prophylactic medication was modified according to the established treatment plan and a control aggregometry was performed. The therapeutic regimen followed the test-and-treat principle. To evaluate the effect of ASA, impedance aggregometry with multiple electrodes (Multiplate) was used.
Results: All 36 patients were successfully transferred to responder status with the treatment scheme. In 32 (88.89%) patients the ASA dose was increased to 300 mg/day, and in 2 (5.56%) patients the medication was changed from ASA to clopidogrel. A further 2 (5.56%) patients were switched to oral anticoagulation with phenprocoumon due to other indications. Bleeding complications or other side effects did not occur.
Conclusion: The chosen treatment regime for a low response proved to be effective and safe in vascular surgery patients. A guideline-compliant increase of the ASA dose from 100 mg to 300 mg/day predominantly led to an effective inhibition of platelet aggregation in the aggregometry.
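The test-and-treat escalation described above can be summarized as simple decision logic. This is a hypothetical sketch: the Multiplate ASPI threshold and the exact ordering of steps are placeholders, because the abstract gives no numeric cut-offs.

LOW_RESPONSE_CUTOFF = 40   # placeholder aggregation-unit threshold, not the study's

def next_step(aspi_units: float, asa_dose_mg: int, needs_anticoagulation: bool = False) -> str:
    """Return the next action of the test-and-treat regimen for one patient."""
    if needs_anticoagulation:                    # other indication, as in 2 patients above
        return "switch to oral anticoagulation (phenprocoumon)"
    if aspi_units <= LOW_RESPONSE_CUTOFF:        # aggregation adequately inhibited
        return "responder: keep current regimen"
    if asa_dose_mg < 300:
        return "increase ASA to 300 mg/day and re-test"
    return "switch from ASA to clopidogrel 75 mg/day and re-test"

print(next_step(aspi_units=55, asa_dose_mg=100))   # -> escalate to 300 mg/day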
abstract_id: PUBMED:16444324
Inhibition of platelet aggregation for the secondary prevention after ACS: when clopidogrel instead of ASA, when clopidogrel and ASA? Long-term inhibition of platelet aggregation is essential for the secondary prevention after acute coronary syndromes (ACS). Inhibition of platelet aggregation with acetylsalicylic acid (ASA) was established as a safe and effective therapy in this indication as early as the late 1980s. A decade later, with the introduction of the thienopyridines, combined platelet aggregation inhibition became possible. This opened the door for new treatment strategies in interventional cardiology. The first substance, ticlopidine, was largely replaced by the newer substance clopidogrel, which has improved pharmacological properties and fewer side effects. Low-dose ASA (75 mg/d) is still regarded as the standard therapy for secondary prevention after ACS. However, large clinical trials established clopidogrel as at least as effective and safe as ASA in this indication. Following PCI with bare metal stent implantation, a combined therapy of ASA and clopidogrel should be given for at least 4 weeks. After ACS with non-ST-elevation myocardial infarction, the combined therapy with ASA and clopidogrel gives a better outcome than ASA alone. Recently published clinical trials show superiority of this strategy in patients with ST-elevation myocardial infarction, too. Whether combined long-term platelet aggregation inhibition with ASA and clopidogrel will be safe and more effective for secondary prevention is discussed.
abstract_id: PUBMED:10900900
ASA or clopidogrel? N/A
abstract_id: PUBMED:12796751
What is the role for improved long-term antiplatelet therapy after percutaneous coronary intervention? Background: Coronary stent placement has replaced balloon angioplasty as the percutaneous coronary intervention (PCI) method of choice, primarily because of its lower restenosis rate. Compared with aspirin (ASA) monotherapy or ASA plus warfarin, the ticlopidine and ASA combination is superior in reducing thrombotic events after stenting. Clopidogrel plus ASA appears to be at least as effective as ticlopidine and ASA. Intravenous glycoprotein IIb/IIIa inhibitors effectively prevent periprocedural thrombotic complications, but their short duration of action and parenteral dosing do not allow for long-term protection. This review aimed to establish how long patients remain at risk for recurrent thrombotic events after PCI with a stent, and the optimal way to prevent such events.
Results: Classically, ASA has been prescribed indefinitely, whereas adenosine diphosphate receptor antagonists have been discontinued after 2 to 4 weeks. However, the Clopidogrel in Unstable Angina to Prevent Recurrent Events (CURE) trial found that long-term dual antiplatelet therapy with clopidogrel and ASA was more effective than ASA alone in preventing major cardiovascular events in patients with acute coronary syndrome, including those treated with PCI.
Conclusion: Results from additional ongoing studies are needed to clarify the role of long-term dual oral antiplatelet therapy in preventing ischemic events in patients who have undergone PCI.
abstract_id: PUBMED:15881481
Modelling the long term cost effectiveness of clopidogrel for the secondary prevention of occlusive vascular events in the UK. Objective: To assess the long term cost effectiveness of clopidogrel monotherapy compared with acetylsalicylic acid (aspirin; ASA) monotherapy in patients at risk of secondary occlusive vascular events (OVEs) in the UK.
Design: Cost utility analysis based on clinical data from CAPRIE (a multicentre randomised controlled trial, involving 19185 patients); long-term effects were extrapolated beyond the trial period using a Markov model populated with data from UK observational studies. Health economic evaluation carried out from the perspective of the UK National Health Service.
Participants: A representative cohort of 1000 UK patients aged 60 years (approximate mean age of the CAPRIE population), with the qualifying diagnoses of myocardial infarction, ischaemic stroke and peripheral arterial disease, who are at risk of secondary OVEs (non-fatal myocardial infarction, non-fatal stroke or vascular death).
Interventions: Patients were assumed to receive treatment with either clopidogrel (75 mg/day) for 2 years followed by ASA (325 mg/day, average) for their remaining lifetime, or ASA alone (325 mg/day, average) for life.
Main Outcome Measures: Incremental cost per life year gained and incremental cost per quality-adjusted life year (QALY) gained.
Results: In the base case, the incremental cost effectiveness of clopidogrel versus ASA in this population is estimated at 18,888 pounds per life year gained and 21,489 pounds per QALY gained. Multiple deterministic and probabilistic sensitivity analyses suggest the model is robust to variations in a wide range of input parameters.
Conclusion: Two years of treatment with clopidogrel can be considered a cost effective intervention in patients at risk of secondary OVEs in the UK.
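The extrapolation approach, a Markov cohort model with discounted costs and QALYs, can be illustrated with a toy version. The states, transition probabilities, costs, utilities, and 3.5% discount rate below are invented placeholders; only the two-years-of-clopidogrel-then-ASA structure follows the study design, and the published model is far more detailed.

import numpy as np

states = ["event_free", "post_event", "dead"]       # simplified state space
P = {                                               # annual transition matrices (rows sum to 1)
    "clopidogrel": np.array([[0.93, 0.04, 0.03],
                             [0.00, 0.90, 0.10],
                             [0.00, 0.00, 1.00]]),
    "asa":         np.array([[0.91, 0.05, 0.04],
                             [0.00, 0.89, 0.11],
                             [0.00, 0.00, 1.00]]),
}
annual_cost = {"clopidogrel": np.array([800.0, 2500.0, 0.0]),    # per state, per year
               "asa":         np.array([40.0, 2500.0, 0.0])}
utility = np.array([0.85, 0.70, 0.0])               # QALY weight per state
discount = 0.035

def run(arm, years=40):
    cohort = np.array([1.0, 0.0, 0.0])              # everyone starts event-free
    total_cost = total_qaly = 0.0
    for t in range(years):
        regimen = "clopidogrel" if arm == "clopidogrel" and t < 2 else "asa"
        d = (1 + discount) ** -t
        total_cost += d * (cohort @ annual_cost[regimen])
        total_qaly += d * (cohort @ utility)
        cohort = cohort @ P[regimen]
    return total_cost, total_qaly

c1, q1 = run("clopidogrel")
c0, q0 = run("asa")
print("incremental cost per QALY:", round((c1 - c0) / (q1 - q0)))

The incremental cost-effectiveness ratio is the discounted cost difference divided by the discounted QALY difference between the two strategies.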
abstract_id: PUBMED:20013333
Medicinal therapy for interventional surgery of the peripheral vascular system. The aim of medicinal treatment during and after femoral and crural interventions is to prevent early- or late-onset arterial thrombosis of the treated vascular segments. Therefore, unfractionated heparin is administered during the intervention by an intra-arterial or intravenous approach. To avoid late-onset thrombosis, administration of platelet function inhibitors is recommended. However, valid data are only available for acetylsalicylic acid (ASA). Therefore, ASA is recommended for long-term medication. In several cardiological studies on stent implantation in coronary vessels, the combination of ASA and clopidogrel for dual platelet inhibition has been proven to be effective. These results have been transferred to antithrombotic therapy of the lower extremities despite the lack of dedicated studies. There is no evidence for the use of vitamin K antagonists after peripheral interventions.
abstract_id: PUBMED:29707148
Dual antiplatelet therapy with clopidogrel and aspirin increases mortality in 4T1 metastatic breast cancer-bearing mice by inducing vascular mimicry in primary tumour. Platelet inhibition has been considered an effective strategy for combating cancer metastasis and compromising disease malignancy although recent clinical data provided evidence that long-term platelet inhibition might increase incidence of cancer deaths in initially cancer-free patients. In the present study we demonstrated that dual anti-platelet therapy based on aspirin and clopidogrel (ASA+Cl), a routine regiment in cardiovascular patients, when given to cancer-bearing mice injected orthotopically with 4T1 breast cancer cells, promoted progression of the disease and reduced mice survival in association with induction of vascular mimicry (VM) in primary tumour. In contrast, treatment with ASA+Cl or platelet depletion did reduce pulmonary metastasis in mice, if 4T1 cells were injected intravenously. In conclusion, distinct platelet-dependent mechanisms inhibited by ASA+Cl treatment promoted cancer malignancy and VM in the presence of primary tumour and afforded protection against pulmonary metastasis in the absence of primary tumour. In view of our data, long-term inhibition of platelet function by dual anti-platelet therapy (ASA+Cl) might pose a hazard when applied to a patient with undiagnosed and untreated malignant cancer prone to undergo VM.
abstract_id: PUBMED:16525572
Antithrombotic effects of ximelagatran plus acetylsalicylic acid (ASA) and clopidogrel plus ASA in a human ex vivo arterial thrombosis model. It was the objective of this study to compare the antithrombotic effects and bleeding profiles of the oral direct thrombin inhibitor ximelagatran, an anticoagulant, and the antiplatelet agent clopidogrel on top of steady-state acetylsalicylic acid (ASA) in a human arterial thrombosis model. Healthy male volunteers (n=62) received ASA (160 mg once daily), plus either clopidogrel for 6 days (loading dose 300 mg, then 75 mg once daily), or a single dose of ximelagatran (36 or 72 mg) on Day 6. Changes in total thrombus area (TTA) under low shear rate (LSR; 212 s⁻¹) and high shear rate (HSR; 1690 s⁻¹) conditions were measured, using the ex vivo Badimon perfusion chamber model pre-dose and 2 and 5 hours after dosing on Day 6, and capillary bleeding times (CBT) were determined. Ximelagatran plus ASA significantly reduced TTA under LSR and HSR, compared with ASA alone. Ximelagatran plus ASA reduced TTA more than clopidogrel plus ASA under LSR after 2 hours (36 mg, P=0.0011; 72 mg, P<0.0001) and 5 hours (72 mg, P=0.0057), and under HSR after 2 and 5 hours (72 mg, P<0.05). Compared with ASA alone, CBT was markedly prolonged by clopidogrel plus ASA (ratio 6.4; P<0.0001) but only slightly by ximelagatran plus ASA (72 mg ximelagatran, ratio 1.4; P=0.0010). Both drug combinations were well tolerated. Oral ximelagatran plus ASA has a greater antithrombotic effect in this human ex vivo thrombosis model and a less pronounced prolongation of bleeding time than clopidogrel plus ASA.
Answer: The combination of acetylsalicylic acid (ASA) and clopidogrel does not appear to confer better long-term vascular protection in patients with ASA failure. A post hoc analysis of 838 patients with ASA failure and recent lacunar stroke from the Secondary Prevention of Small Subcortical Strokes Trial (SPS3) cohort found that the addition of clopidogrel to ASA did not result in a reduction of vascular events compared to continuing ASA only. The risk of recurrent stroke was not reduced in the dual antiplatelet group compared to the aspirin-only group, and there was also no difference between groups for ischemic stroke. However, the risk of gastrointestinal bleeding was higher in the dual antiplatelet group (PUBMED:24384643).
In the context of vascular surgery, research has indicated that a decreased antiplatelet effect of ASA and clopidogrel is associated with an increased risk of thromboembolic events. The prevalence of ASA low response (ALR) was found to be 19.3% in a study of vascular surgical patients, and risk factors for ALR included an increased platelet and leukocyte count and co-medication with pantoprazole. No significant risk factors were found for a decreased antiplatelet effect of clopidogrel treatment (PUBMED:27138269).
Another study evaluated the effectiveness of a therapeutic regimen for treatment adaptation when low response to ASA was detected in vascular surgery patients. The chosen treatment regime for a low response proved to be effective and safe, with the majority of patients successfully transferred to response status by increasing the ASA dose from 100 mg to 300 mg/day (PUBMED:32945920).
Overall, while the combination of ASA and clopidogrel is used in certain clinical scenarios, such as after acute coronary syndromes (PUBMED:16444324) and percutaneous coronary intervention (PUBMED:12796751), the evidence does not support its use for better long-term vascular protection in patients with ASA failure (PUBMED:24384643).
Instruction: Suicidal ideation and substance use among adolescents and young adults: a bidirectional relation?
Abstracts:
abstract_id: PUBMED:24969957
Suicidal ideation and substance use among adolescents and young adults: a bidirectional relation? Objective: To examine reciprocal associations between substance use (cigarette smoking, use of alcohol, marijuana, and other illegal drugs) and suicidal ideation among adolescents and young adults (aged 11-21 at wave 1; aged 24-32 at wave 4).
Methods: Data from four waves of the public-use Add Health study were used in the analysis (N=3342). Respondents were surveyed in 1995, 1996, 2001-2002, and 2008-2009. Current regular smoking, past-year alcohol use, past-year marijuana use, and ever use of other illegal drugs, as well as past-year suicidal ideation, were measured at each of the four waves. Fixed effects models with lagged dependent variables were fitted to test unidirectional associations between substance use and suicidal ideation, and nonrecursive models with feedback loops combining correlated fixed factors were conducted to examine reciprocal relations between each substance use and suicidal ideation, respectively.
Results: After adjusting for the latent time-invariant effects and lagged effects of dependent variables, the unidirectional associations from substance use to suicidal ideation were consistently significant, and vice versa. Nonrecursive model results showed that use of cigarette or alcohol increased risk of suicidal ideation, while suicidal ideation was not associated with cigarette or alcohol use. Reversely, drug use (marijuana and other drugs) did not increase risk of suicidal ideation, but suicidal ideation increased risk of illicit drug use.
Conclusion: The results suggest that relations between substance use and suicidal ideation are unidirectional, with cigarette or alcohol use increasing risk of suicidal ideation and suicidal ideation increasing risk of illicit drug use.
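The core of the reciprocal-association test can be sketched as a pair of lagged regressions, each later-wave outcome regressed on both earlier-wave measures. The paper's actual models were fixed-effects and nonrecursive structural models across four waves, so this two-wave sketch on synthetic data shows only the simplest cross-lagged version of the idea.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 3342
w1_alcohol = rng.binomial(1, 0.4, n)
w1_ideation = rng.binomial(1, 0.1, n)
w2_ideation = rng.binomial(1, 0.05 + 0.05 * w1_alcohol + 0.20 * w1_ideation)
w2_alcohol = rng.binomial(1, 0.35 + 0.30 * w1_alcohol + 0.02 * w1_ideation)

df = pd.DataFrame(dict(w1_alcohol=w1_alcohol, w1_ideation=w1_ideation,
                       w2_alcohol=w2_alcohol, w2_ideation=w2_ideation))
for out, pred in [("w2_ideation", "w1_alcohol"), ("w2_alcohol", "w1_ideation")]:
    lag = out.replace("w2", "w1")                       # lagged dependent variable
    m = smf.logit(f"{out} ~ {pred} + {lag}", df).fit(disp=0)
    print(f"{out} <- {pred}: OR = {np.exp(m.params[pred]):.2f}")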
abstract_id: PUBMED:33536951
Associations of Substance Use Behaviors With Suicidal Ideation and Suicide Attempts Among US and Chinese Adolescents. Background: Adolescence has been described as a period of increased health risk-taking behaviors. Given the variety of cultural contexts, healthcare systems, and public health policies in different regions, the present study aimed to determine whether there are similar or different associations of substance use behaviors with suicidal ideation and suicide attempts among US and Chinese adolescents. Methods: This study included a total of 14,765 US adolescents from the 2017 National Youth Risk Behavior Surveillance System (YRBSS) and 24,345 Chinese adolescents from the 2017 School-based Chinese Adolescents Health Survey (SCAHS). Results: The proportions of suicidal ideation and suicide attempts were 17.4 and 5.7% among US adolescents, which were higher than those among Chinese adolescents (suicidal ideation: 13.7% and suicide attempts: 2.7%). Among Chinese adolescents, the most common substance use behavior was "alcohol use (55.4%)," followed by "cigarette use (11.6%)." Among US adolescents, the most popular substance was alcohol (ever used: 55.9%), followed by marijuana (ever used: 34.6%). Moreover, alcohol use was significantly related to suicidal ideation/suicide attempts only in Chinese adolescents [suicidal ideation: Adjusted odds ratio (AOR) = 1.88, 95% CI = 1.71~2.06; suicide attempts: AOR = 2.12, 95% CI = 1.71~2.63], and marijuana use was associated with suicidal ideation and suicide attempts only in the US adolescent group (suicidal ideation: AOR = 1.23, 95% CI = 1.06~1.44; suicide attempts: AOR = 1.51, 95% CI = 1.21~1.87). Moreover, although the associations of prescription pain medication use with suicide attempts were significant in both Chinese and US adolescent groups, the adjusted associations were stronger in Chinese adolescents than in US adolescents (Chinese adolescents: AOR = 3.97, 95% CI = 2.76~5.72; US adolescents: AOR = 1.76, 95% CI = 1.43~2.16; P < 0.05). Conclusions: The associations of alcohol use with suicidal ideation and suicide attempts were only significant in Chinese adolescents. Marijuana use was associated with suicidal ideation and suicide attempts only in the US adolescent group. Although the associations of prescription pain medication use with suicide attempts were significant in both Chinese and US adolescent groups, the adjusted associations were significantly stronger for Chinese adolescents. These findings might be related to the differences in cultural contexts, healthcare systems, and public health policies in the two different countries.
abstract_id: PUBMED:37434128
Associations between exposure to sexual abuse, substance use, adverse health outcomes, and use of youth health services among Norwegian adolescents. Background: A strong association between sexual abuse and adverse health outcomes has been reported among adolescents. The present study aimed to provide more information about adverse health outcomes associated with sexual abuse and substance use, and to examine the use of youth health services among Norwegian adolescents.
Methods: National representative cross-sectional study among 16-19-year-old Norwegian adolescents (n = 9784). Multivariable regression analyses, adjusted for socioeconomic status and age, were used to examine the association between exposure to sexual abuse, substance use and health risk factors, and the use of youth health services.
Results: Adolescents exposed to sexual abuse had higher odds of depressive symptoms (males: OR:3.8; 95% CI:2.5-5.8, females: 2.9;2.4-3.5), daily headache (males: 5.3;2.8-10.1, females:1.9; 1.5-2.4), high medication use (males: 3.2;1.7-6.0, females: 2.0;1.6-2.6), self-harm (males: 3.8;2.4-6.0, females:3.2; 2.6-3.9), suicidal thoughts (males: 3.3; 2.2-5.0, females:3.0; 2.5-3.6) and suicide attempts (males: 9.5;5.6-16.0, females:3.6;2.7-4.9). Furthermore, exposure to sexual abuse was associated with higher odds of using school health services (males: 3.9;2.6-5.9, females: 1.6;1.3-1.9) and health services for youth (males: 4.8;3.1-7.6, females: 2.1;1.7-2.5). In general, substance use was associated with increased odds of adverse health related outcomes and use of youth health services, but the strength of the relationships varied according to sex. Finally, results indicated a significant interaction between sexual abuse and smoking that was associated with increased odds of having suicidal thoughts for males (2.6;1.1-6.5) but a decreased odds of having suicidal thoughts and have conducted suicide attempts once or more for females (0.6;0.4-1.0 and 0.5;0.3-0.9, respectively).
Conclusions: The present study confirmed a strong relationship between exposure to sexual abuse and health risks, especially among males. Moreover, males exposed to sexual abuse were much more likely to use youth health services compared to sexually abused females. Substance use was also associated with adverse health outcomes and use of youth health services, and interactions between sexual abuse and smoking seemed to influence risk of suicidal thoughts and attempts differently according to sex. Results from this study increase knowledge about possible health related effects of sexual abuse which should be used to identify victims and provide targeted treatment by youth health services.
abstract_id: PUBMED:31749642
Early Substance Use Initiation And Psychological Distress Among Adolescents In Five ASEAN Countries: A Cross-Sectional Study. Aim: The study aimed to assess the associations of early substance use initiation (<12 years; cigarette smoking, alcohol and drug use) with psychological distress among adolescents in five ASEAN countries.
Methods: Cross-sectional data were analysed from 33,184 school adolescents, with a median age of 14 years, from Indonesia, Laos, Philippines, Thailand and Timor-Leste that took part in the "Global School-Based Student Health Survey (GSHS)" in 2015.
Results: The overall prevalence of pre-adolescent (<12 years) cigarette use initiation was 10.6%, pre-adolescent current alcohol use 8.1%, and pre-adolescent drug use initiation 4.2%. In adjusted multinomial logistic regression analysis, pre-adolescent initiation of cigarette smoking, alcohol use, and drug use, as well as multi-substance pre-adolescent initiation, were highly associated with medium (score = 1) and high (score = 2-5) psychological distress (out of five psychological distress items: no close friends, loneliness, anxiety, suicidal ideation and suicide attempt). Late initiation of cigarette use and late initiation of drug use were not associated with medium and/or high psychological distress.
Conclusion: Early prevention programmes should target concurrent early substance use initiation in order to prevent possible subsequent psychological distress.
abstract_id: PUBMED:38250281
Marijuana use and its correlates among school-going Jamaican adolescents: a finding from a national survey. Introduction: Recent data indicate that almost a fifth of Jamaican adolescents used marijuana in the past 30 days. To ensure the optimal allocation of resources, a country-specific understanding of factors associated with marijuana use among adolescents is essential. Therefore, this study aimed to address this gap among adolescents aged 13-17 years in Jamaica.
Methods: We analyzed data from the recent Jamaica Global School-Based Student Health Survey conducted in 2017. The sample consists of school-going Jamaican adolescents of 7th-12th grades. The prevalence of recent marijuana use was assessed and compared across different demographics, substance use, and risk behaviors using bivariate and multivariable logistic regression analyses.
Results: Older adolescents and males had a higher likelihood of recent marijuana use. Psychosocial risks, such as loneliness, frequent worry, suicidal ideation, physical attacks, and school absenteeism, were associated with higher marijuana use. Parental smoking increased the odds, whereas strong parental support and awareness decreased them. Use of other substances, especially amphetamine and tobacco products, had strong associations with marijuana use. Early initiation of substances was associated with a higher risk of marijuana use. Sexually active adolescents, especially those initiated before the age of 14 years, had higher rates of marijuana use.
Conclusion: The intricate link of harmful and supportive psychosocial factors and risk behaviors with recent marijuana use highlights the importance of holistic interventions and policies focusing on emotional health, parental guidance, substance education, and sexual activity implications.
abstract_id: PUBMED:38115827
The Role of Substance Use Disorders on Suicidal Ideation, Planning, and Attempts: A Nationally Representative Study of Adolescents and Adults in the United States, 2020. Few nationally representative studies examine suicidality and substance use during 2020; as such, we explored the role of substance use disorders (SUDs) on suicidality among adults and adolescents in 2020. Data were derived from N = 26,084 adult participants, representing 240 million U.S. adults weighted, and N = 5,723 adolescent participants, representing 25 million U.S. adolescents (12-17 years). Separate logistic regressions for adults and adolescents were used to assess the association of DSM-5 SUDs, related factors, and suicidal thoughts and behaviors (ideation, planning, and attempts). In 2020, adults with SUDs were nearly 4 times more likely to seriously consider suicide (aOR = 3.94, 95% CI: 3.19, 4.86), 3 times more likely to make a suicide plan (aOR = 3.09, 95% CI: 2.25, 4.25), and nearly 4 times more likely to attempt suicide (aOR = 3.77, 95% CI: 2.29, 6.19) than adults without SUDs. Adolescents with SUDs were 4 times more likely to consider suicide (aOR = 3.69, 95% CI: 2.47, 5.51), and 5 times more likely to make a suicide plan (aOR = 5.14, 95% CI: 3.25, 8.13) and to attempt suicide (aOR = 5.27, 95% CI: 2.91, 9.53), than adolescents without SUDs. Adult females and individuals experiencing poverty were twice as likely to attempt suicide as adult males and individuals not living in poverty. Adolescent females were 3-5 times more likely to seriously consider, plan, and attempt suicide than adolescent males. Interventions to curb suicidality among individuals with SUDs are crucial.
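The adjusted odds ratios (aORs) reported above come from covariate-adjusted logistic regression models. The following is a minimal sketch of how such an aOR and its 95% CI can be computed; the variable names and synthetic data are hypothetical stand-ins, and the survey weighting used in the actual study is omitted for brevity.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical analytic dataset: one row per adolescent respondent.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "sud": rng.integers(0, 2, n),      # DSM-5 substance use disorder (0/1)
    "age": rng.integers(12, 18, n),    # age in years
    "female": rng.integers(0, 2, n),   # sex indicator (0/1)
    "poverty": rng.integers(0, 2, n),  # living in poverty (0/1)
})
# Simulate the outcome so that SUD truly raises the odds of an attempt.
logit = -3 + 1.3 * df["sud"] + 0.4 * df["female"] + 0.6 * df["poverty"]
df["attempt"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["sud", "age", "female", "poverty"]])
fit = sm.Logit(df["attempt"], X).fit(disp=False)

aor = np.exp(fit.params["sud"])                     # adjusted odds ratio for SUD
ci_low, ci_high = np.exp(fit.conf_int().loc["sud"]) # 95% CI on the OR scale
print(f"aOR = {aor:.2f}, 95% CI: {ci_low:.2f}, {ci_high:.2f}")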
abstract_id: PUBMED:33839366
Childhood maltreatment, psychiatric symptoms, and suicidal thoughts among adolescents receiving substance use treatment services. Introduction: Childhood maltreatment experiences are associated with future suicidal thoughts and suicide attempts, yet the roles of specific psychiatric symptoms mediating this relation remain to be clarified. To clarify these relations, we tested a model incorporating multiple forms of childhood maltreatment (sexual abuse, physical punishment, emotional neglect), past year psychiatric disorder symptoms during adolescence (anxiety, mood, and conduct disorders) and recent suicidal thoughts.
Methods: We administered structured interviews to 394 adolescents receiving outpatient substance use treatment services in the Southeastern United States (280 males; mean age = 16.33 years, SD = 1.15). Structural equation models (SEMs) were used to evaluate the degree to which relations between childhood maltreatment and suicidal thoughts were mediated by specific past-year psychiatric symptoms.
Results: Mood disorder symptoms significantly mediated the relation between neglect/negative home environment and suicidal thoughts. This path of influence did not vary by gender.
Conclusions: Childhood maltreatment and subsequent psychopathology influence suicidal thoughts among adolescents receiving substance use treatment services. The findings of the present study have implications for the adaptation and delivery of substance use treatment services to adolescents to enhance treatment engagement and outcomes.
abstract_id: PUBMED:30723431
A Review of Personality-Targeted Interventions for Prevention of Substance Misuse and Related Harm in Community Samples of Adolescents. Several school-based prevention programmes have been developed and used to prevent, delay, or reduce substance misuse and related problems among community samples of adolescents. However, findings indicate that many of these interventions are associated with null, small, or mixed effects in reducing adolescent substance misuse, in particular for those most at risk of transitioning to substance use disorders. These findings highlight the need to shift the focus of substance use prevention efforts toward intervention strategies which directly target high-risk adolescents. The Preventure programme was designed to target four personality risk factors for substance misuse: hopelessness, anxiety sensitivity, impulsivity, and sensation seeking. This article reviews findings from previous trials of personality-targeted interventions (i.e., the Preventure programme) with adolescents and discusses the promise and benefits of these interventions for targeting community samples of high-risk adolescents at the school level to reduce substance misuse and related mental health problems. Findings indicated that this programme has been successful in reducing the rates of alcohol and illicit drug use and substance-related harms by ~50% in high-risk adolescents, with effects lasting for up to 3 years. These interventions were also associated with a 25% reduction in the likelihood of transitioning to mental health problems, such as anxiety, depression, suicidal ideation, and conduct problems. The programme is particularly beneficial for youth with more significant risk profiles, such as youth reporting clinically significant levels of externalizing problems, and victimized adolescents. A key strength of the Preventure programme is that it is embedded in the community and provides substance use intervention at the school level to general samples of high-risk adolescents who might not otherwise have access to such programmes.
abstract_id: PUBMED:29126919
School-based mental health services, suicide risk and substance use among at-risk adolescents in Oregon. This study examined whether an increase in the availability of mental health services at school-based health centers (SBHCs) in Oregon public schools was associated with the likelihood of suicidal ideation, suicide attempts and substance use behaviors among adolescents who experienced a depressive episode in the past year. The study sample included 168 Oregon public middle and high schools and 9073 students who participated in the Oregon Healthy Teens Survey (OHT) in 2013 and 2015. Twenty-five schools had an SBHC, and 14 of those schools increased availability of mental health services from 2013 to 2015. The OHT included questions about having a depressive episode, suicidal ideation, attempting suicide in the past year, and substance use behaviors in the past 30 days. Multi-level logistic regression analyses were conducted in 2017 to examine associations between increasing mental health services and the likelihood of these outcomes. Analysis results indicated that students at SBHC schools that increased mental health services were less likely to report any suicidal ideation [odds ratio (OR) = 0.66, 95% CI (0.55, 0.81)], suicide attempts [OR = 0.71, 95% CI (0.56, 0.89)] and cigarette smoking [OR = 0.77, 95% CI (0.63, 0.94)] from 2013 to 2015 compared to students in all other schools. Lower frequencies of cigarette, marijuana and unauthorized prescription drug use were also observed in SBHC schools that increased mental health services relative to other schools with SBHCs. This study suggests that mental health services provided by SBHCs may help reduce suicide risk and substance use behaviors among at-risk adolescents.
abstract_id: PUBMED:38510109
Problematic substance use in depressed adolescents: Prevalence and clinical correlates. Background: Substance use among adolescents is common and associated with significant consequences, including depression. Adolescents can experience myriad problems related to early onset substance use and depression, making further understanding of this comorbidity necessary.
Method: Participants were a subset from a large-scale performance improvement project and consisted of adolescents aged 12-18 who screened positive for depression during their routine medical or psychiatric appointment and who then completed the substance use assessment Car, Relax, Alone, Forget, Friends, Trouble Version 2.1 (CRAFFT). Participants with problematic substance use had a CRAFFT score ≥2.
Results: A total of 621 participants were included in this study, and 105 (16.9%) reported problematic substance use. Compared with participants without problematic substance use, those with problematic use were more likely to have moderate to severe depression and anxiety, as well as significantly higher irritability, impulsivity, suicidal propensity, and suicidal thoughts scores. Controlling for age at screening, sex, race, and ethnicity, problematic substance use remained a significant predictor of depression severity, impulsivity, suicidal propensity, and suicidal thoughts.
Limitations: Participants were from a large metropolitan area of the Southwest United States and had to have screened positive for depression, so results may not generalize. Because all participants were underage, they may have been wary of responding to the substance use assessment accurately.
Conclusions: By using a large, diverse sample in a real-world clinical setting, findings strengthen the association between problematic substance use and depression and depression-associated symptoms among adolescents, highlighting the need for early detection and universal depression screening.
Answer: The relationship between suicidal ideation and substance use among adolescents and young adults appears to be bidirectional, but the direction and strength of the association can vary depending on the type of substance used. According to the research, there are unidirectional associations in each direction: from substance use to suicidal ideation, and from suicidal ideation to substance use (PUBMED:24969957). Specifically, the use of cigarettes or alcohol has been found to increase the risk of suicidal ideation, while suicidal ideation has been shown to increase the risk of illicit drug use, but not necessarily the use of cigarettes or alcohol (PUBMED:24969957).
Furthermore, the associations between substance use and suicidal ideation or suicide attempts can differ across cultural contexts. For instance, alcohol use was significantly related to suicidal ideation and suicide attempts only among Chinese adolescents, while marijuana use was associated with these outcomes only in the US adolescent group (PUBMED:33536951). Additionally, prescription pain medication use was significantly associated with suicide attempts in both Chinese and US adolescents, with stronger associations observed in Chinese adolescents (PUBMED:33536951).
Other studies have found that substance use disorders (SUDs) significantly increase the likelihood of suicidal thoughts and behaviors in both adolescents and adults (PUBMED:38115827). Moreover, early initiation of substance use has been linked to psychological distress, which includes suicidal ideation and suicide attempts (PUBMED:31749642). Adolescents exposed to sexual abuse, which is associated with higher odds of substance use, also show increased odds of depressive symptoms, self-harm, suicidal thoughts, and suicide attempts (PUBMED:37434128).
In summary, the relationship between suicidal ideation and substance use among adolescents and young adults is complex and influenced by various factors, including the type of substance used, cultural context, and individual experiences such as exposure to sexual abuse. The evidence suggests that substance use can increase the risk of suicidal ideation and behaviors, and conversely, suicidal ideation can lead to increased substance use, particularly illicit drug use. |
Instruction: Is serum cotinine a better measure of cigarette smoking than self-report?
Abstracts:
abstract_id: PUBMED:7597020
Is serum cotinine a better measure of cigarette smoking than self-report? Objectives: To address the question of whether serum cotinine is a better measure of cigarette smoking than self-reported behavior by examining the relation of biochemical, physical examination, and depression assessments to self-reported cigarette consumption and serum cotinine in a population-based sample.
Methods: Serum from 743 Mexican American participants in the Hispanic Health and Nutrition Examination Survey (HHANES), categorized by sex and number of cigarettes smoked per day (0, 1-9, 10-19, ≥20), was analyzed for cotinine. HHANES results from hematocrit, hemoglobin, red blood cells (RBCs), white blood cells (WBCs), mean corpuscular volume (MCV), iron, transferrin, lead, erythrocyte protoporphyrin (EPP), vitamin E, vitamin A, cholesterol, body mass index (BMI), pulse rate, systolic and diastolic blood pressure (DBP), the Center for Epidemiologic Studies Depression Scale (CES-D), and Diagnostic Interview Schedule (DIS) depression diagnosis were compared by category of cigarettes smoked per day and serum cotinine.
Results: Among women, significant correlations were found between cigarettes per day and cotinine, respectively, and hematocrit (r = 0.148, r = 0.338), hemoglobin (r = 0.152, r = 0.342), WBCs (r = 0.160, r = 0.272), and BMI (r = -0.124, r = -0.164). Among men, significant correlations were found between cigarettes per day and cotinine, respectively, and WBCs (r = 0.176, r = 0.296), MCV (r = 0.310, r = 0.264), lead (r = 0.105, r = 0.177), and BMI (r = -0.110, r = -0.192). Cotinine, but not cigarettes per day, was significantly correlated with hemoglobin (r = 0.179) and DBP (r = -0.146) in men and EPP (r = -0.135) and cholesterol (r = 0.105) in women. Mean CES-D score was correlated with cigarettes per day for both men (r = 0.106) and women (r = 0.158) but not with cotinine. CES-D caseness (score ≥16) and a positive diagnosis of depression by DIS were not related to smoking behavior measures among men. Women smokers had higher levels of depression than nonsmokers. Multivariate regression models controlling for sex, age, and education indicated that serum cotinine was a significant predictor of hematocrit, hemoglobin, RBCs, WBCs, lead, and DBP; self-reported cigarettes per day was significant only for MCV.
Conclusions: Serum cotinine may be a better method of quantifying risks from cigarette use in epidemiological studies.
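The comparison made in this abstract, correlating each smoking measure with a health parameter and then entering both in a model adjusted for sex, age, and education, can be sketched as follows. All data, variable names, and effect sizes here are invented stand-ins rather than HHANES values.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 743
df = pd.DataFrame({
    "cotinine": rng.gamma(2.0, 60.0, n),   # serum cotinine, ng/ml
    "cigs_day": rng.integers(0, 30, n),    # self-reported cigarettes/day
    "age": rng.integers(20, 75, n),
    "male": rng.integers(0, 2, n),
    "educ_years": rng.integers(6, 17, n),
})
# Simulated outcome that tracks the biomarker rather than self-report.
df["hematocrit"] = 42 + 0.004 * df["cotinine"] + 2 * df["male"] + rng.normal(0, 2, n)

# Bivariate correlations of each smoking measure with the outcome.
r_cot, _ = pearsonr(df["cotinine"], df["hematocrit"])
r_cig, _ = pearsonr(df["cigs_day"], df["hematocrit"])
print(f"r(cotinine) = {r_cot:.3f}, r(cigs/day) = {r_cig:.3f}")

# Multivariate model controlling for sex, age, and education.
X = sm.add_constant(df[["cotinine", "cigs_day", "age", "male", "educ_years"]])
fit = sm.OLS(df["hematocrit"], X).fit()
print(fit.pvalues[["cotinine", "cigs_day"]])  # which measure remains predictive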
abstract_id: PUBMED:36292773
Epigenetic and Proteomic Biomarkers of Elevated Alcohol Use Predict Epigenetic Aging and Cell-Type Variation Better Than Self-Report. Excessive alcohol consumption (EAC) has a generally accepted effect on morbidity and mortality, outcomes thought to be reflected in measures of epigenetic aging (EA). Because the association of self-reported EAC with EA has not been consistent with these expectations, there is a need for readily employable non-self-report tools for accurately assessing and monitoring the contribution of EAC to accelerated EA; newly developed alcohol consumption DNA methylation indices, such as the Alcohol T Score (ATS) and Methyl DetectR (MDR), may be helpful. To test that hypothesis, we used these new indices along with the carbohydrate deficient transferrin (CDT), concurrent as well as past self-reports of EAC, and well-established measures of cigarette smoking to examine the relationship of EAC to both accelerated EA and immune cell counts in a cohort of 437 young Black American adults. We found that MDR, CDT, and ATS were intercorrelated, even after controlling for gender and cotinine effects. Correlations between EA and self-reported EAC were low or non-significant, replicating prior research, whereas correlations with non-self-report indices were significant and more substantial. Comparing non-self-report indices showed that the ATS predicted more than four times as much variance in EA, CD4T cells, and B-cells as the MDR and CDT, and better predicted indices of accelerated EA. We conclude that each of the non-self-report indices has a differing predictive capacity with respect to key alcohol-related health outcomes, and that the ATS may be particularly useful for clinicians seeking to understand and prevent accelerated EA. The results also underscore the likelihood of substantial underestimates of problematic use when self-report is used, and a reduction in correlations with EA and variance in cell-types.
abstract_id: PUBMED:33966590
Effect of smoking on periodontal health and validation of self-reported smoking status with serum cotinine levels. Objective: To investigate whether self-reported smoking and serum cotinine levels associate with periodontal pocket development and to determine the accuracy of self-reported smoking using serum cotinine.
Materials And Methods: This 4-year prospective cohort study included data from 294 dentate adults, aged ≥30 years, who participated in both the Health 2000 Survey and the Follow-up Study of Finnish Adults' Oral Health. Subjectively reported smoking status (daily smokers n = 62, occasional smokers n = 12, quitters n = 49, and never-smokers n = 171), serum cotinine levels, demographic factors, education level, dental behaviours and medical history were collected at baseline. The outcome measure was the number of teeth with periodontal pocketing ≥4 mm over 4 years.
Results: Self-reported daily smokers had a 1.82-fold (95% CI: 1.32-2.50) higher incidence of deepened periodontal pockets than never-smokers. A positive association was observed between serum cotinine (≥42.0 μg/L) and the development of periodontal pockets. The misclassification rate of self-reported smoking was 6%.
Conclusions: Both self-reported daily smoking and higher serum cotinine were associated with periodontal pocket development. Self-reported smoking was fairly accurate in this study. However, higher cotinine levels among a few self-reported never-smokers indicated misreporting or passive smoking. Thus, self-reports alone are not enough to assess the smoking-attributable disease burden.
abstract_id: PUBMED:31512159
Self-Reported Smoking Compared to Serum Cotinine in Bariatric Surgery Patients: Smoking Is Underreported Before the Operation. Background: Smoking has been associated with postoperative complications and mortality in bariatric surgery. The evidence for smoking is based on self-report and medical charts, which can lead to misclassification and miscalculation of the associations. Determination of cotinine can objectively define nicotine exposure. We determined the accuracy of self-reported smoking compared to cotinine measurement in three phases of the bariatric surgery trajectory.
Methods: Patients in the phase of screening (screening), on the day of surgery (surgery), and more than 18 months after surgery (follow-up) were consecutively selected. Self-reported smoking was registered and serum cotinine was measured. We evaluated the accuracy of self-reported smoking compared to cotinine, and the level of agreement between self-report and cotinine for each phase.
Results: In total, 715 patients were included. In the screening, surgery, and follow-up groups, 25.6%, 18.0%, and 15.5% of patients, respectively, were smoking based on cotinine. The sensitivity of self-reported smoking was 72.5%, 31.0%, and 93.5% in the screening, surgery, and follow-up groups, respectively (p < 0.001). The specificity of self-report was >95% in all groups (p < 0.02). The level of agreement between self-report and cotinine was 0.778, 0.414, and 0.855 for the screening, surgery, and follow-up groups, respectively.
Conclusions: Underreporting of smoking occurs before bariatric surgery, mainly on the day of surgery. Future studies on effects of smoking and smoking cessation in bariatric surgery should include methods taking into account the issue of underreporting.
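The accuracy figures above follow from cross-tabulating self-report against the cotinine reference standard. A minimal sketch of those calculations, assuming Cohen's kappa as the (unnamed) agreement statistic and using invented counts chosen to roughly mirror the day-of-surgery results:

def self_report_accuracy(tp, fn, fp, tn):
    """Accuracy of self-reported smoking against a cotinine reference.

    tp: cotinine-positive patients who reported smoking
    fn: cotinine-positive patients who denied smoking
    fp: cotinine-negative patients who reported smoking
    tn: cotinine-negative patients who denied smoking
    """
    n = tp + fn + fp + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    observed = (tp + tn) / n  # raw agreement
    # Agreement expected by chance from the marginal totals.
    expected = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
    kappa = (observed - expected) / (1 - expected)
    return sensitivity, specificity, kappa

# Illustrative counts only: 29 cotinine-positive patients, 9 admitting smoking.
sens, spec, kappa = self_report_accuracy(tp=9, fn=20, fp=2, tn=130)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}, kappa = {kappa:.2f}")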
abstract_id: PUBMED:19543411
Who is exposed to secondhand smoke? Self-reported and serum cotinine measured exposure in the U.S., 1999-2006. This study presents self-reported and serum cotinine measures of exposure to secondhand smoke (SHS) for nonsmoking children, adolescents, and adults. Estimates are disaggregated by time periods and sociodemographic characteristics based on analyses of the 1999-2006 National Health and Nutrition Examination Survey. Self-reported exposure rates are found to be highest for children, followed by adolescents and adults. Important differences in exposure are found by socioeconomic characteristics. Using serum cotinine to measure exposure yields much higher prevalence rates than self-reports. Rates of SHS exposure remain high, but cotinine levels are declining for most groups.
abstract_id: PUBMED:28578367
Accuracy of cotinine serum test to detect the smoking habit and its association with periodontal disease in a multicenter study. Background: The validity of the surveys on self-reported smoking status is often questioned because smokers underestimate cigarette use and deny the habit. It has been suggested that self-report should be accompanied by cotinine test. This report evaluates the usefulness of serum cotinine test to assess the association between smoking and periodontal status in a study with a large sample population to be used in studies with other serum markers in epidemiologic and periodontal medicine researches.
Material And Methods: 578 patients who were part of a multicenter study on blood biomarkers were evaluated regarding smoking and its relation to periodontal disease. Severity of periodontal disease was determined using clinical attachment loss (CAL). Smoking was assessed by a questionnaire, and a blood sample was drawn for serum cotinine determination.
Results: The optimal cut-off point for serum cotinine was 10 ng/ml. Serum cotinine showed a greater association with severity of CAL than self-report for mild-moderate CAL [OR 2.03 (95% CI 1.16-3.53) vs. OR 1.08 (95% CI 0.62-1.87)], advanced periodontitis [OR 2.36 (95% CI 1.30-4.31) vs. OR 2.06 (95% CI 0.97-4.38)], and extension of CAL >3 mm [OR 1.78 (95% CI 1.16-1.71) vs. 1.37 (95% CI 0.89-2.11)]. When the two tests were evaluated together, the combination was not shown to be better than the serum cotinine test alone.
Conclusions: Self-reported smoking and a serum cotinine test ≥10 ng/ml are accurate, complementary, and more reliable methods to assess the patient's smoking status and could be used in studies evaluating serum samples in large-population and multicenter studies.
Clinical Relevance: The serum cotinine level is more reliable to make associations with the patient's periodontal status than self-report questionnaire and could be used in multicenter and periodontal medicine studies.
abstract_id: PUBMED:25324541
Smoking in infertile women with polycystic ovary syndrome: baseline validation of self-report and effects on phenotype. Study Question: Do women with polycystic ovary syndrome (PCOS) seeking fertility treatment report smoking accurately and does participation in infertility treatment alter smoking?
Summary Answer: Self-report of smoking in infertile women with PCOS is accurate (based on serum cotinine levels) and smoking is unlikely to change over time with infertility treatment.
What Is Known Already: Women with PCOS have high rates of smoking and it is associated with worse insulin resistance and metabolic dysfunction.
Study Design, Size, Duration: Secondary study of smoking history from a large randomized controlled trial of infertility treatments in women with PCOS (N = 626) including a nested case-control study (N = 148) of serum cotinine levels within this cohort to validate self-report of smoking.
Participants/materials, Setting, Methods: Women with PCOS, age 18-40, seeking fertility who participated in a multi-center clinical trial testing first-line ovulation induction agents conducted at academic health centers in the USA.
Main Results And The Role Of Chance: Overall, self-report of smoking in the nested case-control study agreed well with smoking status as determined by measurement of serum cotinine levels, at 90% or better for each of the groups at baseline (98% of never smokers had cotinine levels <15 ng/ml compared with 90% of past smokers and 6% of current smokers). There were minor changes in smoking status as determined by serum cotinine levels over time, with the greatest change found in the smoking groups (past or current smokers). In the larger cohort, hirsutism scores at baseline were lower in never smokers compared with past smokers. Total testosterone levels at baseline were also lower in never smokers compared with current smokers. At the end of study follow-up, insulin levels and the homeostatic index of insulin resistance had increased in current smokers (P < 0.01 for both) compared with baseline and with non-smokers. The chance of ovulation was not associated with smoking status, but live birth rates were increased (non-significantly) in never or past smokers.
Limitations, Reasons For Caution: The limitations include the selection bias involved in our nested case-control study, the possibility of misclassifying exposure to secondhand smoke as smoking, and our failure to capture self-reported changes in smoking status after enrollment in the trial.
Wider Implications Of The Findings: Because self-report of smoking is accurate, further testing of smoking status is not necessary in women with PCOS. Because smoking status is unlikely to change during infertility treatment, extra attention should be focused on smoking cessation in current or recent smokers who seek or who are receiving infertility treatment.
Study Funding/competing Interests: Sponsored by the Eunice Kennedy Shriver National Institute of Child Health and Human Development of the U.S. National Institutes of Health.
Clinical Trial Registration Numbers: ClinicalTrials.gov numbers, NCT00068861 and NCT00719186.
abstract_id: PUBMED:24313236
Biochemical marker of use is a better predictor of outcomes than self-report metrics in a contingency management smoking cessation analog study. Background And Objectives: This investigation compared cotinine (primary metabolite of nicotine) at study intake to self-report metrics (e.g., Fagerstrom Test of Nicotine Dependence [FTND]) and assessed their relative ability to predict smoking outcomes.
Methods: We used data from an analog model of contingency management for cigarette smoking. Non-treatment seeking participants (N = 103) could earn money in exchange for provision of a negative carbon monoxide (CO) sample indicating smoking abstinence, but were otherwise not motivated to quit. We used intake cotinine, FTND, percent of friends who smoke, and years smoked to predict longitudinal CO and attendance, time-to-first positive CO submission, and additional cross-sectional outcomes.
Results: Intake cotinine was consistently predictive (p < .05) of all outcomes (e.g., longitudinal CO and attendance, 100% abstinence, time-to-first positive CO sample), while years smoked was the only self-report metric that demonstrated any predictive ability.
Conclusions And Scientific Significance: Cotinine could be more informative for tailoring behavioral treatments compared to self-report measures.
abstract_id: PUBMED:27973944
A population-based study of smoking, serum cotinine and exhaled nitric oxide among asthmatics and a healthy population in the USA. Background: Fractional concentration of exhaled nitric oxide (FeNO) is recommended by the American Thoracic Society (ATS) as a noninvasive biomarker of airway inflammation. In addition to inflammation, many factors may be associated with FeNO, particularly tobacco exposure; however, only age has been included as an influential factor for children below 12 years. Numerous studies have demonstrated negative associations between tobacco exposure and FeNO levels with self-reported data, but few with an objective assessment of smoking.
Methods: Data from the National Health and Nutrition Examination Survey (NHANES) 2007-2012 were analyzed to examine the association between FeNO and active/passive tobacco. Exposure was assessed by both self-report and serum cotinine levels among 11,160 subjects aged 6-79 years old with asthma, or without any respiratory disease.
Results: Study results indicated that 28.8% lower FeNO, 95% CI [25.2%, 32.3%], and 38.1% lower FeNO, 95% CI [28.1, 46.2], were observed among healthy and asthmatic participants, respectively, with serum cotinine in the highest quartile compared to those in the lowest quartile. Self-reported smoking status and recent tobacco use were also associated with decreased FeNO. Self-reported passive smoking was significantly associated with a 1.0% decrease in FeNO, 95% CI [0.0, 2.0], among asthmatic subjects but not among healthy subjects.
Conclusions: Active smoking, whether measured by self-report or serum cotinine, was associated with decreased FeNO levels. In addition to age, increased attention should be given to tobacco exposure when using FeNO as a biomarker in clinical practice. Additional research is needed to establish reference values for FeNO that consider the impact of tobacco exposure.
abstract_id: PUBMED:26856961
The Peer and Family Smoking Index: A Valid Measure of Secondhand Smoke Exposure in Adolescents. Purpose: Nearly half of all adolescents in the United States are exposed to secondhand smoke (SHS) daily, primarily at home, resulting in respiratory infections, asthma exacerbations, and reduced lung function. A concise self-report measure is needed to identity adolescents exposed to SHS. The purpose of this study was to test whether nonsmoking adolescents who reported exposure to peer and/or family smoking on the Peer and Family Smoking Index had higher salivary cotinine levels than those reporting no exposure to either peer or family smoking.
Methods: A convenience sample of 135 English-speaking adolescents ages 15-18 years was recruited from two high schools. Those who reported use of nicotine replacement therapy, cigarette smoking (past 30 days), smokeless tobacco use (past 30 days), or who self-reported pregnancy were excluded. Salivary cotinine and self-report data from the Peer and Family Smoking Index were collected after obtaining parental consent and assent from the adolescent.
Results: Mean salivary cotinine levels differed by exposure group (none, family, peer, or family and peer), F (3, 130) = 5.44, p = .001. The post hoc analysis identified a significantly higher mean cotinine level among those exposed to SHS from both family and peers than among those with no exposure (p = .001).
Conclusions: Known groups validity of the index was supported. Adolescents who reported family smoking or a combination of family and peer smoking had significantly higher salivary cotinine levels than unexposed adolescents. The Peer and Family Smoking Index is a concise and valid self-report measure for SHS exposure in adolescents.
Answer: Yes, serum cotinine is considered a better measure of cigarette smoking than self-report. Several studies have demonstrated that serum cotinine levels provide a more accurate and objective measure of tobacco exposure compared to self-reported smoking behavior.
For instance, a study on Mexican American participants found that serum cotinine was a significant predictor of various health parameters such as hematocrit, hemoglobin, red blood cells, white blood cells, lead, and diastolic blood pressure, while self-reported cigarettes per day was significant only for mean corpuscular volume. This suggests that serum cotinine may be a better method of quantifying risks from cigarette use in epidemiological studies (PUBMED:7597020).
Another study on bariatric surgery patients showed that underreporting of smoking occurs before surgery, mainly on the day of surgery. The sensitivity of self-reported smoking varied across different phases of the bariatric surgery trajectory, indicating that self-report could lead to misclassification and miscalculation of associations with smoking. In contrast, serum cotinine provided an objective definition of nicotine exposure (PUBMED:31512159).
Similarly, a study on the association between smoking and periodontal disease found that both self-reported daily smoking and higher serum cotinine were associated with periodontal pocket development. However, the misclassification rate of self-reported smoking was 6%, indicating that self-reports alone are not sufficient to assess the smoking-attributable disease burden (PUBMED:33966590).
Furthermore, a study comparing biochemical markers of use to self-report metrics in a contingency management smoking cessation analog study found that intake cotinine was consistently predictive of all outcomes, while self-report metrics demonstrated limited predictive ability (PUBMED:24313236).
In the context of secondhand smoke exposure, a study on adolescents found that the Peer and Family Smoking Index, which includes self-reported exposure to peer and family smoking, was validated against salivary cotinine levels. Adolescents who reported exposure to family and peer smoking had significantly higher salivary cotinine levels than those with no exposure, supporting the validity of the index as a measure for secondhand smoke exposure (PUBMED:26856961).
Overall, these studies support the conclusion that serum cotinine is a more reliable and accurate measure of cigarette smoking and exposure to tobacco smoke than self-reported smoking behavior. |
Instruction: Does four weeks of TENS and/or isometric exercise produce cumulative reduction of osteoarthritic knee pain?
Abstracts:
abstract_id: PUBMED:12428824
Does four weeks of TENS and/or isometric exercise produce cumulative reduction of osteoarthritic knee pain? Objective: To evaluate the cumulative effect of repeated transcutaneous electrical nerve stimulation (TENS) on chronic osteoarthritic (OA) knee pain over a four-week treatment period, comparing it to that of placebo stimulation and exercise training given alone or in combination with TENS.
Design: Sixty-two patients, aged 50-75, were stratified according to age, gender and body mass ratio before being randomly assigned to four groups.
Interventions: Patients received either (1) 60 minutes of TENS, (2) 60 minutes of placebo stimulation, (3) isometric exercise training, or (4) TENS and exercise (TENS & Ex) five days a week for four weeks.
Main Outcome Measures: Visual analogue scale (VAS) was used to measure knee pain intensity before and after each treatment session over a four-week period, and at the four-week follow-up session.
Results: Repeated measures ANOVA showed a significant cumulative reduction in the VAS scores across the four treatment sessions (sessions 1, 10, 20 and the follow-up) in the TENS group (45.9% by session 20, p < 0.001) and the placebo group (43.3% by session 20, p = 0.034). However, linear regression of the daily recordings of the VAS indicated that the slope in the TENS group (slope = -2.415, r = 0.943) was similar to that in the exercise group (slope = -2.625, r = 0.935), and both were steeper than those of the other two groups. Note that the reduction of OA knee pain was maintained in the TENS group and the TENS & Ex group at the four-week follow-up session, but not in the other two groups.
Conclusions: The four treatment protocols did not show significant between-group difference over the study period. It was interesting to note that isometric exercise training of the quadriceps alone also reduced knee pain towards the end of the treatment period.
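The slopes and correlation coefficients quoted in the Results above are ordinary least-squares fits of the daily VAS recordings against session number. A minimal sketch of that trend estimation, using invented VAS data rather than the trial's recordings:

import numpy as np
from scipy import stats

sessions = np.arange(1, 21)  # 20 daily treatment sessions
rng = np.random.default_rng(2)
# Hypothetical daily VAS (0-100 mm) declining by roughly 2.4 mm per session.
vas = 70 - 2.4 * sessions + rng.normal(0, 3, sessions.size)

fit = stats.linregress(sessions, vas)
print(f"slope = {fit.slope:.3f} mm/session, r = {fit.rvalue:.3f}")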
abstract_id: PUBMED:12691335
Optimal stimulation duration of TENS in the management of osteoarthritic knee pain. Objective: This study examined the optimal stimulation duration of transcutaneous electrical nerve stimulation (TENS) for relieving osteoarthritic knee pain and the duration (as measured by half-life) of post-stimulation analgesia.
Subjects: Thirty-eight patients received either: (i) 20 minutes (TENS20); (ii) 40 minutes (TENS40); (iii) 60 minutes (TENS60) of TENS; or (iv) 60 minutes of placebo TENS (TENS(PL)) 5 days a week for 2 weeks.
Methods: A visual analogue scale recorded the magnitude and pain relief period for up to 10 hours after stimulation.
Results: By Day 10, a significantly greater cumulative reduction in the visual analogue scale scores was found in the TENS40 (83.40%) and TENS60 (68.37%) groups than in the TENS20 (54.59%) and TENS(PL) (6.14%) groups (p < 0.001); this group difference was maintained at the 2-week follow-up session (p < 0.001). In terms of the duration of the post-stimulation analgesia period, the duration in the TENS40 (256 minutes) and TENS60 (258 minutes) groups was more prolonged than in the other 2 groups (TENS20 = 168 minutes, TENS(PL) = 35 minutes) by Day 10 (p < 0.001). However, the TENS40 group produced the longest pain relief period by the follow-up session.
Conclusion: 40 minutes is the optimal treatment duration of TENS, in terms of both the magnitude (VAS scores) of pain reduction and the duration of post-stimulation analgesia, for knee osteoarthritis.
abstract_id: PUBMED:23717775
Comparison of the effects of acupuncture and isometric exercises on symptom of knee osteoarthritis. Background: The investigation and comparison of the effects of acupuncture and isometric exercises on pain and quality of life in patients suffering from knee osteoarthritis (OA). OA is the most common form of joint disease and one leading cause of disability in the elderly. The symptoms of OA are pain, morning stiffness, and joint limited motion. Different treatments have been proposed for management of OA, but the results are not clear. We studied the effects of acupuncture and isometric exercises on symptoms of the knee OA.
Methods: Forty patients with knee OA according to American College of Rheumatology criteria were recruited using strict inclusion and exclusion criteria. All the patients were randomly divided into two groups (A and B). The acupuncture group (A) received only acupuncture at selected acupoints for knee pain. The exercise group (B) received isometric exercise of the knee. Each group received 12 treatment sessions over 4 weeks. Outcome measures were pain intensity and function, measured with the Knee injury and Osteoarthritis Outcome Score (KOOS) questionnaire.
Results: After treatment, both the acupuncture and isometric exercise groups showed a significant increase (improvement) in the KOOS Quality of Life score (P < 0.05). VAS in the acupuncture group changed from 7.25 ± 0.91 to 5.41 ± 1.23. In addition, VAS in the isometric exercise group changed from 7.85 ± 1.35 to 5.34 ± 1.26. Total KOOS scores of the acupuncture group showed no significant difference compared with the exercise group (P > 0.11).
Conclusions: Both acupuncture and isometric exercises decrease pain and increase quality of life in patients who suffer from OA.
abstract_id: PUBMED:32201665
Time and Repetitions Needed to Train Patients with Knee Pain on a Home Exercise Program: Are Learning Styles Important? Objective: This study was designed to identify the amount of time and number of repetitions needed to explain a home exercise program recommended for most of our patients, as well as to gauge how many items patients managed to remember at their 15-day follow-up. We also considered whether the learning method had any effect on these results. Methods: Sixty-two patients with mechanical knee pain who were admitted to our clinic were included in this study. Patients were categorized into the following three groups: group 1 with a dominant physical learning style, group 2 with a dominant auditory learning style, and group 3 with a dominant visual learning style. Heel slide, quadriceps isometric, quadriceps stretching, adductor isometric, abductor isometric, and quadriceps isotonic exercises were explained and demonstrated to all patients by the same physiotherapist, and the time required (in seconds) and the number of repetitions needed until the patients learned each exercise were recorded. Remembered/forgotten exercises were identified at the follow-up, which occurred 15 days later. Results: A statistically significant difference was observed between groups in terms of how many seconds were needed to learn the quadriceps isometric exercises (p: 0.042). In the inter-group comparison, the difference was significant when groups 2 and 3 were compared (p: 0.046). There was a significant difference between groups in terms of how many repetitions were needed to learn heel sliding (p: 0.000). Moreover, there was a significant difference between group 3 and groups 1 and 2 in the inter-group comparison (p: 0.000, p: 0.000). There was also a significant difference between groups in terms of recalling the adductor isometric exercises. Patients in group 2 were able to fully recall all these exercises. Conclusion: It was found that the quadriceps isometric, heel slide, and adductor isometric exercises were learned more quickly, while the quadriceps stretching exercise was forgotten. We concluded that learning style is not highly important in exercise learning or recall.
abstract_id: PUBMED:34826572
Effect of transcutaneous electrical nerve stimulation (TENS) on knee pain and physical function in patients with symptomatic knee osteoarthritis: the ETRELKA randomized clinical trial. Objective: To determine the effectiveness of TENS at relieving pain and improving physical function as compared to placebo TENS, and to determine its safety, in patients with knee osteoarthritis.
Methods: Multi-centre, parallel, 1:1 randomized, double-blind, placebo-controlled clinical trial conducted in six outpatient clinics in Switzerland. We included 220 participants with knee osteoarthritis recruited between October 15, 2012, and October 15, 2014. Patients were randomized to 3 weeks of treatment with TENS (n = 108) or placebo TENS (n = 112). Our pre-specified primary endpoint was knee pain at the end of 3-weeks treatment assessed with the WOMAC pain subscale. Secondary outcome measures included WOMAC physical function subscale and safety outcomes.
Results: There was no difference between TENS and placebo TENS in WOMAC pain at the end of treatment (mean difference -0.06; 95%CI -0.41 to 0.29; P = 0.74), nor throughout the trial duration (P = 0.98). Subgroup analyses did not indicate an interaction between patient/treatment characteristics and treatment effect on WOMAC pain at the end of treatment (P-interaction ≥0.22). The occurrence of adverse events was similar across groups, with 10.4% and 10.6% of patients reporting events in the TENS and placebo TENS groups, respectively (P = 0.95). No relevant differences were observed in secondary outcomes.
Conclusions: TENS does not improve knee osteoarthritis pain when compared to placebo TENS. Therapists should consider other potentially more effective treatment modalities to decrease knee osteoarthritis pain and facilitate strengthening and aerobic exercise. Our findings are conclusive and further trials comparing TENS and placebo TENS in this patient population are not necessary.
abstract_id: PUBMED:2919192
Evaluation of eccentric exercise in treatment of patellar tendinitis. The purpose of this study was to analyze the effects of a quadriceps femoris muscle eccentric training program on strength gain in patients with patellar tendinitis. The effect of an eight-week eccentric exercise program on quadriceps femoris muscle work was evaluated in four groups of subjects: two groups of "normal" (healthy) subjects and two groups of patients with patellar tendinitis. All four groups participated in a home muscle stretching exercise program, but only two groups, one group of normal subjects (N-A) and one group of subjects with tendinitis (T-A), received additional eccentric training on an eccentric isokinetic dynamometer. The eccentric quadriceps femoris muscle work ratio (involved limb/uninvolved limb x 100) was used to quantify strength in the N-A and T-A Groups. Pain ratings were recorded for subjects with tendinitis before and after the eight-week experiment and were correlated with the dependent variable using a Spearman rank-order correlation coefficient. The N-A Group performed significantly better than all subjects with tendinitis (p < .05). Subjects in the T-A Group, however, showed a trend toward increasing eccentric quadriceps femoris muscle work capacity over the eight-week training period. As pain ratings in the T-A Group increased, work ratios decreased. We concluded that eccentric exercise may be an effective treatment for patellar tendinitis, but that knee pain may limit optimal gains in strength.
abstract_id: PUBMED:14629842
The effects of electro-acupuncture and transcutaneous electrical nerve stimulation on patients with painful osteoarthritic knees: a randomized controlled trial with follow-up evaluation. Objectives: To examine the relative effectiveness of electro-acupuncture (EA) and transcutaneous electrical nerve stimulation (TENS) in alleviating osteoarthritic (OA)-induced knee pain.
Design: Single-blinded, randomized controlled study.
Subjects: Twenty-four (24) subjects (23 women and 1 man), mean age 85, were recruited from eight subsidized Care & Attention Homes for the elderly.
Interventions: Subjects were randomly assigned to the EA, TENS, or control groups. Subjects in the EA group (n = 8) received low-frequency EA (2 Hz) on two acupuncture points (ST-35, Dubi and EX-LE-4, Neixiyan) of the painful knee for 20 minutes. Subjects in the TENS group (n = 8) received low-frequency TENS of 2 Hz and pulse width of 200 micros on the same acupuncture points for 20 minutes. In both treatment groups, electrical treatment was carried out for a total of eight sessions in 2 weeks. Eight subjects received osteoarthritic knee care and education only in a control group. All subjects were evaluated before the first treatment, after the last treatment, and at 2-week follow-up periods.
Results: After eight sessions of treatment, there was significant reduction of knee pain in both EA group and TENS group, as measured by the Numeric Rating Scale (NRS) of pain (p < 0.01). Prolonged analgesic effect was maintained in the EA and the TENS groups at a 2-week follow-up evaluation. The Timed Up-and-Go Test (TUGT) score of the EA group was significantly lower than that of the control group (p < 0.05), but such change was not observed in the TENS group.
Conclusions: Both EA and TENS treatments were effective in reducing OA-induced knee pain. EA had the additional advantage of enhancing the TUGT results as opposed to TENS treatment or no treatment, which did not produce such corollary effect.
abstract_id: PUBMED:24983333
Tendinopathy alters cumulative transverse strain in the patellar tendon after exercise. Introduction: This research evaluated the effect of tendinopathy on the cumulative transverse strain response of the patellar tendon to a bout of resistive quadriceps exercise.
Methods: Nine adults with unilateral patellar tendinopathy (age, 18.2 ± 0.7 yr; height, 1.92 ± 0.06 m; weight, 76.8 ± 6.8 kg) and 10 healthy adults free of knee pain (age, 17.8 ± 0.8 yr; height, 1.83 ± 0.05 m; weight, 73.2 ± 7.6 kg) underwent standardized sagittal sonograms (7.2-14 MHz linear array transducer) of both patellar tendons immediately before and after 45 repetitions of a double-leg decline squat exercise performed against a resistance of 145% body weight. Tendon thickness was determined 5 and 25 mm distal to the patellar pole. Transverse Hencky strain was calculated as the natural log of the ratio of post- to preexercise tendon thickness and expressed as percentage. Measures of tendon echogenicity were calculated within the superficial and deep aspects of each tendon site from grayscale profiles. Intratendinous microvessels were evaluated using power Doppler ultrasound.
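In symbols, the transverse Hencky strain defined in the Methods is

\varepsilon_T = \ln\!\left( \frac{t_{\text{post}}}{t_{\text{pre}}} \right) \times 100\%

where t_pre and t_post are tendon thickness before and after exercise. For illustrative (non-study) values of t_pre = 4.0 mm and t_post = 4.2 mm, this gives ln(4.2/4.0) x 100 ≈ 4.9% cumulative transverse strain.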
Results: The cumulative transverse strain response to exercise in symptomatic tendinopathy was significantly lower than that in asymptomatic and healthy tendons (P < 0.05). There was also a significant reduction (57%) in the area of microvascularity immediately after exercise (P = 0.05), which was positively correlated (r = 0.93, P < 0.05) with the Victorian Institute of Sport Assessment for patellar tendinopathy score.
Conclusions: This study is the first to show that patellar tendinopathy is associated with altered morphological and mechanical response of the tendon to exercise, which is manifest by reduction in cumulative transverse strain and microvascularity, when present. Research directed toward identifying factors that influence the acute microvascular and transverse strain response of the patellar tendon to exercise in the various stages of tendinopathy is warranted.
abstract_id: PUBMED:31661578
Effects of genotype on TENS effectiveness in controlling knee pain in persons with mild to moderate osteoarthritis. Background: This study examined the extent to which genetic variability modifies Transcutaneous Electrical Nerve Stimulation (TENS) effectiveness in osteoarthritic knee pain.
Methods: Seventy-five participants with knee osteoarthritis were randomly assigned to either: (a) High-frequency TENS, (b) Low-frequency TENS or (c) Transient Placebo TENS. Pain measures were collected pre- and post-treatment. Participants were genotyped on genes implicated in central or peripheral pain pathways: NGFB, NTRK1, EDNRA, EDNRB, EDN1, OPRM1, TAC1, TACR1, BDNF, BDKRB1, 5HTT, COMT, ESR2, IL6 and IL1B. Genetic association using linear regression modelling was performed separately for the transient placebo TENS subjects, and within the High-frequency TENS + Low-frequency TENS participants, including TENS level as a covariate.
Results: In the placebo group, SNP rs165599 (COMT) was significantly associated with an increased heat pain threshold (β = -1.87; p = .003) and rs6827096 (EDNRA) with increased resting pain (β = 2.68; p = .001). Within the treatment groups, TENS effectiveness was reduced by the minor allele of SNP rs6537485 (EDNRA) in relation to mechanical sensation (β = 184.13; p = 5.5E-9). Individuals with the COMT rs4680 minor allele reported lower pain at rest after TENS (β = -42.30; p = .001), with a higher magnitude of pain reduction (28-unit difference) in the low-frequency TENS group compared to the high-frequency TENS group (β = 28.37; p = .0004).
Conclusions: EDNRA and COMT are implicated in osteoarthritic knee pain and provide a basis for tailoring TENS interventions according to individual characteristics.
Significance: Findings from this study demonstrate that genetic variation within the COMT and EDNRA genes influences the effectiveness of TENS, a non-pharmacologic pain-reduction intervention, in the context of osteoarthritic knee pain. Evidence such as this may contribute to risk models that provide a clinically useful tool for personalizing TENS interventions according to individual characteristics in order to best control pain and maximize functional status.
abstract_id: PUBMED:29651675
Tackling Osteoarthritic Knee Pain with Electroacupuncture. Electroacupuncture (EA) has been widely used in pain relief. Clinical evidence has revealed its unique advantages and effectiveness in alleviating pain. Studies on EA and pain relief have revealed that EA displays greater analgesic effects for different types of pain in comparison to manual acupuncture. Here, we reviewed the clinical application and mechanism of EA in treating osteoarthritic knee pain and the factors that influence its curative effect.
Answer: Yes, four weeks of TENS and/or isometric exercise can produce a cumulative reduction of osteoarthritic knee pain. A study evaluating the cumulative effect of repeated transcutaneous electrical nerve stimulation (TENS) on chronic osteoarthritic knee pain over a four-week treatment period showed a significant cumulative reduction in knee pain intensity as measured by the Visual Analogue Scale (VAS) in the TENS group (45.9% by session 20) and the placebo group (43.3% by session 20). The reduction of osteoarthritic knee pain was maintained in the TENS group and the TENS & Exercise group at the four-week follow-up session, but not in the other two groups (PUBMED:12428824).
Another study found that 40 minutes is the optimal treatment duration of TENS, in terms of both the magnitude of pain reduction and the duration of post-stimulation analgesia for knee osteoarthritis (PUBMED:12691335). Additionally, a comparison of the effects of acupuncture and isometric exercises on symptoms of knee osteoarthritis revealed that both treatments decrease pain and increase quality of life in patients suffering from osteoarthritis (PUBMED:23717775).
However, it is important to note that another randomized clinical trial found no difference between TENS and placebo TENS in improving knee osteoarthritis pain, suggesting that therapists should consider other potentially more effective treatment modalities (PUBMED:34826572). Despite this, the overall evidence from the studies suggests that TENS and isometric exercises can be beneficial in reducing knee pain associated with osteoarthritis over a period of four weeks. |
Instruction: Is colour duplex sonography-guided temporal artery biopsy useful in the diagnosis of giant cell arteritis?
Abstracts:
abstract_id: PUBMED:24939678
Is colour duplex sonography-guided temporal artery biopsy useful in the diagnosis of giant cell arteritis? A randomized study. Objective: The aim of this study was to assess the usefulness of colour duplex sonography (CDS)-guided temporal artery biopsy (TAB) for the diagnosis of GCA in patients with suspected GCA.
Methods: From September 2009 through December 2012, 112 consecutive patients with suspected GCA were randomized to undergo CDS-guided TAB or standard TAB. All patients underwent temporal artery physical examination and temporal artery CDS prior to TAB. CDS of the temporal artery was performed by the same ultrasonographer, who was unaware of the patient's clinical data, and all TABs were evaluated by the same pathologist. Seven patients in whom biopsy failed to sample temporal artery tissue were excluded from the analysis.
Results: Fifty patients were randomized to undergo CDS-guided TAB and 55 patients to standard TAB. Except for a younger age in patients who underwent standard TAB (P = 0.026), no significant differences were observed between the two groups. There were no significant differences in the frequencies of positive TAB for classic transmural inflammation (28% vs 18.2%) or for periadventitial small vessel vasculitis and/or vasa vasorum vasculitis (6% vs 14.5%) between the two groups. No significant differences in the frequency of positive TAB in the two groups were observed when we excluded the patients treated with glucocorticoids and when we stratified the patients of the two groups for the presence or absence of the halo sign.
Conclusion: Our study showed that CDS-guided TAB did not improve the sensitivity of TAB for diagnosing GCA.
abstract_id: PUBMED:35709855
The utility of the bilateral temporal artery biopsy for diagnosis of giant cell arteritis. Objective: A surgical temporal artery biopsy (TAB) is the gold standard for diagnosis of giant cell arteritis (GCA). The necessity of performing a bilateral biopsy remains under debate. The primary objective of this study was to assess the rate of discordance between pathology results in patients who underwent bilateral TAB for suspected GCA.
Methods: We performed a retrospective review of patients who underwent bilateral TAB for the diagnosis of GCA between 2011 and 2020. The primary end point was the rate of discordance between specimens for patients with pathology positive GCA. Secondary end points included assessments of the sensitivity of preoperative temporal artery duplex and the effects of specimen length and specialty of referring provider on the diagnostic yield of the biopsy.
Results: During the study period, 310 patients underwent bilateral TAB for the diagnosis of GCA. These patients were primarily female (73.9%), elderly (mean age, 70.8 years), and Caucasian (95.8%). Preoperative symptoms for patients were typically bilateral (59%) and included headache (81%), vision changes (45.2%), and temporal tenderness (32.6%). Most patients (85.2%) were on preoperative steroid therapy at the time of surgical biopsy with a mean preoperative duration of steroid therapy of 15.1 days. Overall, 91 patients (29.4%) had a positive pathologic diagnosis after bilateral TAB. Of these patients, 11 had a positive pathology result in only a single specimen, resulting in a discordance rate of 12.1%. Preoperative temporal artery duplex demonstrated a low sensitivity (27.3%) for identifying patients with pathologic positive disease. There were no significant differences between the pathology-positive and -negative patients in terms of mean surgical specimen length (1.67 cm vs 1.64 cm; P = .67) or the specialty of the referring provider (P = .73).
Conclusions: At our institution, we observed a 12.1% discordance rate between pathology results in patients who underwent bilateral TAB for diagnosis of GCA. A preoperative temporal artery duplex provided little value in identifying patients with biopsy-proven GCA.
abstract_id: PUBMED:33485088
Evaluation of Temporal Artery Duplex Ultrasound for Diagnosis of Temporal Arteritis. Background: Temporal arteritis or giant cell arteritis is a form of systemic inflammatory vasculitis closely associated with polymyalgia rheumatica. It may have serious systemic, neurologic, and ophthalmic consequences as it may lead to impaired vision and blindness. Definitive diagnosis is made after histopathologic analysis of a superficial temporal artery (TA) biopsy, which requires a small surgical procedure often under local anesthesia. We investigated whether a noninvasive technique such as duplex ultrasound of the TA could replace histopathological analysis.
Methods: Eighty-one patients referred to our department for TA biopsy were first screened with a duplex ultrasound for a surrounding halo and/or occlusion of the TA. Presence of visual disturbances and unilateral pain (headache and/or tongue/jaw claudication) was noted before TA biopsy. Pathological analysis was considered the gold standard. Correlation between duplex findings, symptoms, and pathology was determined by Spearman's Rho test. The predictive value of a halo and TA occlusion on duplex were determined by ROC curve analysis.
Results: A halo or TA occlusion was found in 16.0% and 3.7% of patients, respectively. Unilateral pain was reported in 96% of cases while 82% complained of visual disturbances. Correlation coefficients for halo and occlusion were 0.471 and 0.404, respectively (P < 0.0001), suggesting a moderate correlation between duplex and biopsy. There was no significant correlation between visual impairment or pain and histologic findings. The ROC curve analysis showed a sensitivity of 53.3% and 20.0%, and specificity of 91.9% and 100% for presence of a halo and occlusion of the TA on duplex, respectively.
Conclusions: Arterial duplex is a moderately sensitive but highly specific test for exclusion of temporal arteritis. We observed a moderate correlation between these findings on duplex and histopathological analysis as a gold standard. Arterial duplex may serve as a valuable diagnostic addition to prevent unnecessary surgical procedures and can even substitute biopsy in patients where surgery is not an option.
abstract_id: PUBMED:22499554
Comparison of histopathological findings with duplex sonography of the temporal arteries in suspected giant cell arteritis. Introduction: In clinical practice the temporal artery biopsy (TAB) in suspected giant cell arteritis (GCA) is still believed to be the "gold standard". The purpose of this study was to compare the histopathological findings of the TAB with duplex sonography of the temporal artery.
Patients And Methods: In our retrospective study we analysed 85 consecutive patients (52 female, mean age 71.5, range 55 - 91 years; 33 male, mean age 71.6, range 44 - 91 years) with suspected GCA who underwent TAB in our clinic between January 1999 - February 2011. All patients received a preoperative duplex sonography, 57 patients including description of the temporal arteries.
Results: 38 of 85 (44.7 %) of the artery biopsies were proven positive for GCA by histopathology. The interpretation of the duplex sonography was congruent with the histopathological interpretation of the biopsy in 39 patients (68.4 %) and incongruent in 18 patients (31.6 %). The sensitivity of duplex sonography was 44.4 %, specificity 90 %, and positive predictive value 80 %.
Discussion: Duplex sonography is a non-invasive and very helpful diagnostic tool to guide the clinician in cases of suspected GCA but requires considerable skill. It shows good specificity and a relatively high positive predictive value, as there are only a few false-positive results. A negative report, however, does not rule out GCA, so that in our opinion the TAB - at least in those cases - should still be performed.
abstract_id: PUBMED:24046471
Comparison between colour duplex sonography findings and different histological patterns of temporal artery. Objective: To assess the findings of temporal artery colour duplex sonography (CDS) in GCA characterized by a histological pattern of periadventitial small vessel vasculitis (SVV) and/or vasa vasorum vasculitis (VVV) and compare it with those observed in classic GCA with transmural vasculitis.
Methods: We studied 30 patients with SVV and/or VVV, 63 patients with classic GCA and 67 biopsy-negative patients identified over a 9-year period. CDS of the temporal arteries was performed in all patients by one ultrasonographer. Temporal artery biopsy was used as the reference standard. Sensitivities, specificities and likelihood ratios (LRs) were calculated.
Results: The frequency of the halo sign on CDS was significantly lower in the patients with SVV and/or VVV compared with those with classic GCA (20% vs 82.5%, P = 0.0001). The halo sign had a sensitivity of only 20% (95% CI 8.4, 39.1%) and a specificity of 80.6% (95% CI 68.7, 88.9%) for the diagnosis of SVV and/or VVV. The negative LR was 0.992 (CI 0.824, 1.195), and the positive LR was 1.030 (CI 0.433, 2.451). The halo sign for the diagnosis of biopsy-proven classic GCA had a higher sensitivity of 82.5% (CI 70.5, 90.5%), the same specificity of 80.6% (CI 68.7, 88.9%) and a higher positive LR (4.253; CI 2.577, 7.021).
Conclusion: The halo sign is infrequently found in GCA characterized by a histological pattern of SVV and/or VVV. This limits the sensitivity of CDS in correctly identifying patients with GCA.
abstract_id: PUBMED:15498914
The role of color duplex sonography in the diagnosis of giant cell arteritis. Objective: To determine the clinical usefulness of color duplex sonography in the diagnosis of giant cell arteritis as an alternative to temporal artery biopsy.
Methods: From May 1998 to November 2002, 68 consecutive patients seen in our hospital with a clinical suggestion of active temporal arteritis were included. Forty-eight patients were female and 20 were male, with a mean age of 77 years. Color duplex sonography with a linear array transducer (5-10 MHz) was used to assess temporal artery morphologic characteristics before a biopsy was performed. The main sonographic criterion for a positive diagnosis was visualization of a hypoechoic halo around the temporal artery. These data were compared with pathologic findings. The kappa statistic was used to determine the level of agreement. Sensitivity, specificity, positive and negative predictive values, and accuracy of duplex sonography as a diagnostic test were assessed.
Results: The color duplex sonographic findings were positive in 25 of 68 patients with a clinical suggestion of giant cell arteritis. The diagnosis was confirmed by biopsy in 22 patients; there were 4 false-positive results and 1 false-negative result by duplex sonography. The kappa value was 0.84. Sensitivity, specificity, positive and negative predictive values, and accuracy for duplex sonography were 95.4%, 91.3%, 84%, 97.6%, and 92.6%, respectively.
Conclusions: The use of high-resolution color duplex sonography may replace biopsy in the diagnosis of giant cell arteritis.
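As a consistency check on the diagnostic figures reported just above (PUBMED:15498914), all of the metrics follow from the stated counts. The sketch below is illustrative only, assuming the standard 2x2 confusion-matrix definitions; it is not code from the study.

```python
# Reproduce the PUBMED:15498914 metrics from the stated counts:
# 68 patients, 25 duplex-positive, 4 false positives, 1 false negative,
# 22 biopsy-confirmed diagnoses. Standard 2x2 definitions assumed.
total = 68
duplex_pos = 25
fp = 4                         # duplex positive, biopsy negative
fn = 1                         # duplex negative, biopsy positive

tp = duplex_pos - fp           # 21 true positives
tn = total - duplex_pos - fn   # 42 true negatives

sensitivity = tp / (tp + fn)   # 21/22 = 95.4%
specificity = tn / (tn + fp)   # 42/46 = 91.3%
ppv = tp / (tp + fp)           # 21/25 = 84.0%
npv = tn / (tn + fn)           # 42/43 = 97.6%
accuracy = (tp + tn) / total   # 63/68 = 92.6%

# Cohen's kappa: observed agreement corrected for chance agreement.
p_exp = (duplex_pos * (tp + fn) + (tn + fn) * (tn + fp)) / total**2
kappa = (accuracy - p_exp) / (1 - p_exp)  # ~0.84, matching the report

print(f"Se={sensitivity:.1%} Sp={specificity:.1%} PPV={ppv:.1%} "
      f"NPV={npv:.1%} Acc={accuracy:.1%} kappa={kappa:.2f}")
```

Each reported value (95.4%, 91.3%, 84%, 97.6%, 92.6%, and kappa 0.84) is recovered exactly, which supports the internal consistency of the abstract's numbers.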
abstract_id: PUBMED:18089544
Diagnosing temporal arteritis: duplex vs. biopsy. Background: Temporal artery biopsy is the traditionally-accepted method of diagnosing temporal arteritis, but is of limited sensitivity.
Aim: To compare the clinical decisions made after negative temporal artery biopsy vs. negative temporal artery duplex, and the effects on patient outcomes.
Design: Retrospective analysis.
Methods: Of 290 patients suspected of having temporal arteritis, 147 underwent bilateral temporal artery duplex with a negative result, and 143 underwent unilateral temporal artery biopsy with a negative result. These groups were compared. Dependent measures included the proportion of patients in each group whose steroids were discontinued by their primary care doctor after either negative test, and the difference in the number of alternative diagnoses considered after a negative test. The incidence of blindness in each group was also compared, as a measure of adverse outcomes. Patients were then stratified by pre-test probability of having the disease, and compared using the same measures.
Results: Equivalent proportions of patients in the two groups had steroids discontinued after a negative test result, even when further stratified into risk groups by the probability of having temporal arteritis. No differences in adverse outcomes or number of alternative diagnoses considered were noted between groups.
Discussion: In clinical practice, bilateral temporal artery duplex served the same function as biopsy, but without subjecting patients to the potential morbidity of a surgical procedure. Temporal artery biopsy could be reserved only for situations where the duplex result is inconsistent with the clinical picture, and the biopsy result, if different from the duplex result, might influence the treatment decision.
abstract_id: PUBMED:11901283
Color duplex ultrasound of the temporal artery: replacement for biopsy in temporal arteritis The diagnosis of temporal arteritis (TA) is generally confirmed by biopsy. To investigate the diagnostic accuracy of color duplex sonography (CDS), both temporal arteries of 20 patients with suspected TA were prospectively insonated prior to biopsy. Detection of ≥1 hypoechogenic perivascular halo was used as the CDS criterion, with temporal artery biopsy and the criteria of the American College of Rheumatology (ACR) as references. The frequency of halo disappearance after 3 months of steroid therapy was also studied. CDS showed TA in 6, biopsy in 12, and ACR criteria in 15 patients. CDS sensitivity was 50 and 40%, and specificity 100%, using the biopsy and the ACR criteria, respectively. After 3 months of steroid treatment, 1 patient still showed halos. In conclusion, detection of halos confirms, whereas the absence of halos does not exclude, the diagnosis of TA, suggesting that ultrasound may replace biopsy in single patients with typical clinical signs and symptoms and a halo.
abstract_id: PUBMED:14528514
Duplex sonography of the temporal and occipital artery in the diagnosis of temporal arteritis. A prospective study. Objective: Evaluation of the diagnostic contribution of color coded duplex sonography (CCDS) of the superficial temporal (STA) and the occipital artery (OCCA) in biopsy-controlled patients suspected of having temporal arteritis (TA).
Methods: Prospective study in 67 patients suspected of having TA who underwent CCDS of the STA in all cases and the occipital arteries if involvement of the OCCA was suspected clinically. The final diagnosis, based on biopsy results in 67 cases and standard criteria, were compared to the ultrasonographic findings to determine their diagnostic contribution.
Results: TA was diagnosed in 40 patients and other diseases in 27 patients. In the STA, periarterial hypoechogenic tissue (the so-called halo), halo with stenoses, and occlusions were found in 83% of TA patients and 11% of patients with other diseases. In the OCCA, these abnormalities were found in 65% of TA patients and in no patient with other diseases. Taking the STA and OCCA together, halo, stenosis, and widespread abnormalities were found in patients with TA, but not in patients with other diseases.
Conclusion: CCDS of the STA and OCCA clearly contributes to the diagnosis of TA, with a high rate of perivascular hypoechogenic abnormalities (so-called halos) and stenosis and a low rate of these abnormalities in the control patients. However, CCDS cannot differentiate between inflammatory and degenerative artery disease and has spatial resolution limitations.
abstract_id: PUBMED:16859533
Colour duplex sonography of temporal arteries before decision for biopsy: a prospective study in 55 patients with suspected giant cell arteritis. Although a temporal artery biopsy is the gold standard for the diagnosis of giant cell arteritis (GCA), there is considerable evidence that characteristic signs demonstrated by colour duplex sonography (CDS) of the temporal arteries may be of diagnostic importance. We aimed to test the hypothesis that CDS can replace biopsy in the algorithm for the approach to diagnose GCA. Bilateral CDS was performed in consecutive patients older than 50 years with clinically suspected GCA, as well as in 15 age- and gender-matched control subjects with diabetes mellitus and/or stroke and 15 healthy subjects, to assess flow parameters and the possible presence of a dark halo around the arterial lumen. Unilateral temporal artery biopsy was then performed in patients with suspected GCA, directed to a particular arterial segment in case a halo was detected on CDS. Final diagnoses, after completion of a 3-month follow-up in 55 patients, included GCA (n = 22), polymyalgia rheumatica (n = 12), polyarteritis nodosa, Wegener's, and Adamantiades-Behçet's diseases (n = 3), and neoplastic (n = 8) and infectious diseases (n = 10). A dark halo of variable size (0.7-2.0 mm) around the vessel lumen was evident at baseline CDS in 21 patients (in 12 unilaterally and 9 bilaterally) but in none of the controls. The presence of a unilateral halo alone yielded 82% sensitivity and 91% specificity for GCA, whereas the specificity reached 100% when halos were found bilaterally. Abnormal blood-flow parameters (temporal artery diameter, peak systolic blood-flow velocities, stenoses, occlusions) were common in GCA and non-GCA patients, as well as in healthy and atherosclerotic disease-control elderly subjects. At follow-up CDS examinations performed at 2 and 4 weeks after initiation of corticosteroid treatment for GCA, halos disappeared in all 18 patients (9 and 9, respectively). We conclude that CDS, an inexpensive, non-invasive, and easy-to-perform method, allows a directional biopsy that has an increased probability of confirming the clinical diagnosis. Biopsy is not necessary in a substantial proportion of patients in whom bilateral halo signs can be found by CDS.
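A quick arithmetic check of the figures in PUBMED:16859533 (an inference from the reported numbers, not stated in the abstract): with 22 GCA and 33 non-GCA patients, 82% sensitivity implies about 0.82 x 22 ≈ 18 true-positive halos, and 91% specificity implies about 0.09 x 33 ≈ 3 false-positive halos, giving 21 halo-positive patients in total, which matches the 21 patients with baseline halos reported above.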
Answer: The utility of colour duplex sonography (CDS)-guided temporal artery biopsy (TAB) in the diagnosis of giant cell arteritis (GCA) has been explored in several studies, with varying conclusions.
One randomized study found that CDS-guided TAB did not improve the sensitivity of TAB for diagnosing GCA compared to standard TAB. The frequencies of positive TAB for classic transmural inflammation or for periadventitial small vessel vasculitis and/or vasa vasorum vasculitis were not significantly different between the CDS-guided and standard TAB groups (PUBMED:24939678).
Another study observed a 12.1% discordance rate between pathology results in patients who underwent bilateral TAB for diagnosis of GCA, indicating that a single TAB may miss the diagnosis in a subset of patients. Preoperative temporal artery duplex demonstrated low sensitivity (27.3%) for identifying patients with biopsy-proven GCA (PUBMED:35709855).
A study evaluating temporal artery duplex ultrasound for diagnosis of temporal arteritis found that it is a moderately sensitive but highly specific test for exclusion of temporal arteritis, with a moderate correlation between duplex findings and histopathological analysis (PUBMED:33485088).
In contrast, another study suggested that high-resolution color duplex sonography may replace biopsy in the diagnosis of GCA, with a high level of agreement with pathologic findings (kappa value of 0.84) and high sensitivity (95.4%) and specificity (91.3%) (PUBMED:15498914).
A retrospective analysis compared clinical decisions made after negative temporal artery biopsy vs. negative temporal artery duplex, finding that equivalent proportions of patients had steroids discontinued after a negative test result in both groups, suggesting that duplex served the same function as biopsy without the morbidity of a surgical procedure (PUBMED:18089544).
In conclusion, while some studies suggest that CDS may serve as a valuable diagnostic addition and potentially replace biopsy in certain cases, others indicate that it does not significantly improve the sensitivity of TAB for diagnosing GCA. The usefulness of CDS-guided TAB may depend on the specific clinical context and the expertise of the ultrasonographer. |
Instruction: Are women more sensitive than men to 2-propanol and m-xylene vapours?
Abstracts:
abstract_id: PUBMED:12409535
Are women more sensitive than men to 2-propanol and m-xylene vapours? Aims: To evaluate possible differences between men and women in acute health effects after controlled short term chamber exposure to vapours of two common organic solvents.
Methods: Fifty six healthy volunteers (28 per sex) were exposed to 150 ppm 2-propanol, 50 ppm m-xylene, and clean air for two hours at rest. The subjects rated symptoms on a visual analogue scale before, during, and after the exposure. Blinking frequency was measured continuously during exposure. Pulmonary function, nasal swelling, inflammatory markers (lysozyme, eosinophilic cationic protein, myeloperoxidase, albumin) in nasal lavage and colour vision (Lanthony D-15 desaturated panel) were measured before and at 0 and 3 hours after the exposure.
Results: There were no significant sex differences in response to solvent exposure with respect to blinking frequency, lung diffusing capacity, nasal area and volume, inflammatory markers in nasal lavage, and colour vision. Increased symptoms were rated by both sexes for nearly all 10 questions during exposure to 2-propanol or m-xylene, most increases being significant at one or more time points. The rating of "discomfort in the throat or airways" increased more in women during exposure to 2-propanol or m-xylene. During exposure to 2-propanol, the rating of "fatigue" increased more in men after one hour, but more in women after two hours of exposure. With regard to pulmonary function, women had small but significant decreases in FVC, FEV(1)/FVC, and FEF(75) three hours after exposure to m-xylene, but only the decrease in FVC was significantly different from that in men.
Conclusion: Our results suggest that women are slightly more sensitive than men to the acute irritative effects of 2-propanol and m-xylene vapours.
abstract_id: PUBMED:34555724
Twisted bilayer arsenene sheets as a chemical sensor for toluene and M-xylene vapours - A DFT investigation. 2D (two-dimensional) materials are emerging in today's world. Among the 2D materials, arsenene sheets are prominently used as chemical sensors and biosensors. In the present work, twisted bilayer arsenene sheets (TB-AsNS) are used to adsorb toluene and M-xylene vapours. The band gap of pristine TB-AsNS is calculated to be 0.437 eV. The surface adsorption of toluene and M-xylene vapours modifies the electronic properties of TB-AsNS, as seen in the band structure, density of states, and electron density difference diagrams. The adsorption of toluene and M-xylene on TB-AsNS falls in the physisorption regime, facilitating both adsorption and desorption of the molecules. The charge-transfer analysis indicates that TB-AsNS acts as an acceptor and the target molecules act as donors. These findings support the use of TB-AsNS as a sensing medium for M-xylene and toluene.
abstract_id: PUBMED:20443651
Does sensitive skin differ between men and women? Background: The term "sensitive skin" is being used with increasing frequency in the scientific literature. The general perception is that sensitive skin is more of a complaint for women, with very little emphasis on what sensitive skin means to men. Hypothesis/Aims: An epidemiologic approach was used to compare gender differences with regard to perceptions about sensitive skin.
Methods: The population consisted of 163 men with a mean (standard deviation [SD]) age of 38.6 (9.7) years and 869 women with a mean (SD) age of 35.1 (9.6) years. Participants filled out a questionnaire that was designed to evaluate perceptions of sensitive skin in general and at specific body sites and asked about perceived underlying causes (environmental factors and household and personal products) of their skin sensitivity. Comparisons were made between all men and women who responded, and between men and women of specific age groups. Comparisons were also conducted for different ethnic groups.
Results: The perceived severity of sensitive skin was comparable for men and women when asked about sensitive skin in general and sensitive skin of the body. For sensitive skin of the face and genital area, the perception of skin sensitivity appeared to shift toward less severe perceived reactions for the men. A significantly lower proportion of men ≥50 years of age perceived general sensitivity (52.9%) vs. women (78.6%), with no significant differences in the ≤30-year, 31-39-year, and 40-49-year age groups. A significantly lower proportion of men in the ≤30- and the 31-39-year age groups perceived that they had sensitive genital skin. The reasons men and women thought they had sensitive skin differed, with a significantly lower proportion of men citing visual evidence of skin irritation due to the use of products (11% of all men and 18% of all women) and a significantly higher proportion citing rubbing or friction from contact (9% of all men and 4% of all women).
abstract_id: PUBMED:32755057
Men's nutrition knowledge is important for women's and children's nutrition in Ethiopia. In an effort to address undernutrition among women and children in rural areas of low-income countries, nutrition-sensitive agriculture (NSA) and behaviour change communication (BCC) projects heavily focus on women as an entry point to effect nutritional outcomes. There is limited evidence on the role of men's contribution in improving household diets. In this Agriculture to Nutrition trial (Clinicaltrials.gov identifier: NCT03152227), we explored associations between men's and women's nutritional knowledge on households', children's and women's dietary diversity. At the midline evaluation conducted in July 2017, FAO's nutrition knowledge questionnaire was administered to male and female partners in 1396 households. There was a high degree of agreement (88%) on knowledge about exclusive breastfeeding between parents; however, only 56-66% of the households had agreement when comparing knowledge of dietary sources of vitamin A or iron. Factor analysis of knowledge dimensions resulted in identifying two domains, namely, 'dietary' and 'vitamin' knowledge. Dietary knowledge had a larger effect on women's and children's dietary diversities than vitamin knowledge. Men's dietary knowledge had strong positive associations with households' dietary diversity scores (0.24, P value = 0.001), children's dietary diversity (0.19, P value = 0.008) and women's dietary diversity (0.18, P value < 0.001). Distance to markets and men's education levels modified the effects of nutrition knowledge on dietary diversity. While previous NSA and BCC interventions predominantly focused on uptake among women, there is a large gap and strong potential for men's engagement in improving household nutrition. Interventions that expand the role of men in NSA may synergistically improve household nutrition outcomes.
abstract_id: PUBMED:24701554
Health hazards of xylene: a literature review. Xylene, an aromatic hydrocarbon, is widely used in industry and in medical laboratories as a solvent. It is a flammable liquid that requires utmost care during use. On exposure, the vapours are rapidly absorbed through the lungs and slowly through the skin. Prolonged exposure to xylene leads to a significant amount of solvent accumulation in adipose and muscle tissue. This article reviews the various acute and chronic health effects of xylene through various routes of exposure.
abstract_id: PUBMED:22720398
Treatment of mixtures of toluene and n-propanol vapours in a compost-woodchip-based biofilter. The present work describes the biofiltration of a mixture of n-propanol (as a model hydrophilic volatile organic compound (VOC)) and toluene (as a model hydrophobic VOC) in a biofilter packed with a compost-woodchip mixture. Initially, the biofilter was fed with toluene vapours at loadings up to 175 g m(-3) h(-1), and removal efficiencies of 70%-99% were observed. When removing mixtures of toluene and n-propanol, the biofilter reached elimination capacities of up to 67 g(toluene) m(-3) h(-1) and 85 g(n-propanol) m(-3) h(-1), with removal efficiencies of 70%-100% for toluene and essentially 100% for n-propanol. The presence of a high n-propanol loading negatively affected toluene removal; however, n-propanol removal was not affected by the presence of toluene, and n-propanol was effectively removed in the biofilter despite high toluene loadings. A model for toluene and n-propanol biofiltration could predict the cross-inhibition effect of n-propanol on toluene removal.
abstract_id: PUBMED:25418576
Co-doped branched ZnO nanowires for ultraselective and sensitive detection of xylene. Co-doped branched ZnO nanowires were prepared by multistep vapor-phase reactions for the ultraselective and sensitive detection of p-xylene. Highly crystalline ZnO NWs were transformed into CoO NWs by thermal evaporation of CoCl2 powder at 700 °C. The Co-doped ZnO branches were grown subsequently by thermal evaporation of Zn metal powder at 500 °C using CoO NWs as catalyst. The response (resistance ratio) of the Co-doped branched ZnO NW network sensor to 5 ppm p-xylene at 400 °C was 19.55, which was significantly higher than those to 5 ppm toluene, C2H5OH, and other interference gases. The sensitive and selective detection of p-xylene, particularly distinguishing among benzene, toluene, and xylene with lower cross-responses to C2H5OH, can be attributed to the tuned catalytic activity of Co components, which induces preferential dissociation of p-xylene into more active species, as well as the increase of chemiresistive variation due to the abundant formation of Schottky barriers between the branches.
abstract_id: PUBMED:24843766
Fracture is additionally attributed to hyperhomocysteinemia in men and premenopausal women with type 2 diabetes. Aims/introduction: Data on hyperhomocysteinemia in relation to fractures in diabetes are limited. We aimed to explore the relationship between plasma total homocysteine concentrations and fractures in men and premenopausal women with type 2 diabetes.
Materials And Methods: Diabetic and control participants (n = 292) were enrolled in a cross-sectional hospital-based study. Bone mineral density and fractures were documented by dual energy X-ray absorptiometry and X-ray film, respectively. Plasma total homocysteine concentrations were measured using fluorescence polarization immunoassay. Risk factors for low bone mineral density or fractures and determinants of homocysteine were obtained from blood samples and the interviewer questionnaire.
Results: Plasma total homocysteine levels were higher in diabetic participants with fractures than in those without (8.6 [2.1] μmol/L vs 10.3 [3.0] μmol/L, P < 0.001). Diabetic participants with fractures had similar bone mineral densities to control participants. The association of homocysteine with fracture was independent of possible risk factors for fractures (e.g., age, duration of diabetes, glycated hemoglobin, body mass index, thiazolidinediones and retinopathy) and determinants of homocysteine concentration (e.g., age, sex, serum folate and vitamin B12, renal status and biguanide use; odds ratio 1.41, 95% confidence interval 1.05-2.03, P = 0.020). Furthermore, each 5.0 μmol/L increase in plasma homocysteine was related to fracture after controlling for per-unit increases in the other factors (odds ratio 1.42, 95% confidence interval 1.12-1.78, P = 0.013).
Conclusions: Plasma total homocysteine concentration is independently associated with occurrence of fractures in men and premenopausal women with type 2 diabetes. Future prospective studies are warranted to clarify the relationship.
abstract_id: PUBMED:629889
Absorption of m-xylene vapours through the respiratory tract and excretion of m-methylhippuric acid in urine. Absorption of m-xylene and excretion of m-methylhippuric acid were investigated under controlled conditions in ten volunteers aged 17-33 years. They were exposed to m-xylene vapours at concentrations of 100, 300, and 600 mg/m3. It was found that the retention of m-xylene vapour in the lungs tended to decrease at the end of the exposure. An exposure test was devised, based upon the results obtained during our investigations. The amount of xylene absorbed in the course of eight hours' work can be calculated with an accuracy of about ±8%.
abstract_id: PUBMED:30127326
Testosterone Levels and Type 2 Diabetes-No Correlation with Age, Differential Predictive Value in Men and Women. Most studies reporting on the association of circulating testosterone levels with type 2 diabetes in men are of cross-sectional design. Reports on the relevance of altered testosterone levels in women are scarce. Here, we evaluate the role of low serum testosterone levels for incident diabetes in men and women in a population setting of 7706 subjects (3896 females). During a mean follow-up time of 13.8 years, 7.8% developed type 2 diabetes. Significant correlations of testosterone with high density lipoprotein (HDL)-cholesterol (R = 0.21, p < 0.001), body-mass-index (R = -0.23, p < 0.001), and waist-to-hip-ratio (R = -0.21, p < 0.001) were found in men. No correlation was found with age in men; in women, the correlation was negligible (R = 0.04, p = 0.012). In men, low testosterone levels predicted high risk of type 2 diabetes, while in women this relationship was opposite. Men with low testosterone levels showed an increased risk of future diabetes (hazard ratio (HR) 2.66, 95% confidence interval (CI) 1.91-3.72, p < 0.001 in the basic model; HR 1.56, 95% CI 1.10-2.21, p = 0.003). In women, low testosterone levels indicated a lower risk (HR 0.53, 95% CI 0.37-0.77, p = 0.003), while the association lost significance in the fully adjusted model (HR 0.72, 95% CI 0.49-1.05, p = 0.09). Low levels of testosterone predicted future diabetes in men. A borderline opposite association was found in women.
Answer: Yes, women appear to be slightly more sensitive than men to the acute irritative effects of 2-propanol and m-xylene vapours. A study that exposed healthy volunteers to these vapours found that women reported a greater increase in "discomfort in the throat or airways" during exposure to both 2-propanol and m-xylene. Additionally, women experienced small but significant decreases in certain pulmonary function measurements (FVC, FEV(1)/FVC, and FEF(75)) three hours after exposure to m-xylene, with the decrease in FVC being significantly different from that in men (PUBMED:12409535). |
Instruction: Disinformation squared: was the HIV-from-Fort-Detrick myth a Stasi success?
Abstracts:
abstract_id: PUBMED:24697634
Disinformation squared: was the HIV-from-Fort-Detrick myth a Stasi success? Background: When in May 1983 the acquired immunodeficiency syndrome (AIDS) was first securely attributed to a virus, eventually called the human immunodeficiency virus (HIV), many controversies arose. Among these was one centering on HIV's origin. A startling hypothesis, called here the "HIV-from-Fort-Detrick myth," asserted that HIV had been a product, accidental or intentional, of bioweaponry research. While its earliest identifiable contributors were in the West, this myth's most dynamic propagators were in the East. The Soviet security service, the KGB, took "active measures" to create and disseminate AIDS disinformation beginning no later than July 1983 and ending no earlier than October 1987. The East German security service, a complex bureaucracy popularly known as "the Stasi," was involved, too, but how early, how deeply, how uniformly, how ably, and how successfully has not been clear. Following German reunification, claims arose attributing to the Stasi the masterful execution of ingenious elements in a disinformation campaign they helped shape and soon came to dominate. We have tested these claims.
Question: Was the HIV-from-Fort-Detrick myth a Stasi success?
Methods: Primary sources were documents and photographs assembled by the Ministry of State Security (MfS) of the German Democratic Republic (GDR or East Germany), the Ministry of Interior of the People's Republic of Bulgaria, and the United States Department of State; the estate of myth principals Jakob and Lilli Segal; the "AIDS box" in the estate of East German literary figure Stefan Heym; participant-observer recollections, interviews, and correspondence; and expert interviews. We examined secondary sources in light of primary sources.
Findings: The HIV-from-Fort-Detrick myth had debuted in print in India in 1983 and had been described in publications worldwide prior to 1986, the earliest year for which we found any Stasi document mentioning the myth in any context. Many of the myth's exponents were seemingly independent conspiracy theorists. Its single most creative exponent was Jakob Segal, an idiosyncratic Soviet biologist long resident in, and long retired in, the GDR. Segal applied to the myth a thin but tenacious layer of plausibility. We could not exclude a direct KGB influence on him but found no evidence demonstrating it. The Stasi did not direct his efforts and had difficulty tracking his activities. The Stasi were prone to interpretive error and self-aggrandizement. They credited themselves with successes they did not achieve, and, in one instance, failed to appreciate that a major presumptive success had actually been a fiasco. Senior Stasi officers came to see the myth's propagation as an embarrassment threatening broader interests, especially the GDR's interest in being accepted as a scientifically sophisticated state. In 1986, 1988, and 1989, officers of HV A/X, the Stasi's disinformation and "active measures" department, discussed the myth in meetings with the Bulgarian secret service. In the last of these meetings, HV A/X officers tried to interest their Bulgarian counterparts in taking up, or taking over, the myth's propagation. Further efforts, if any, were obscured by collapse of the East German and Bulgarian governments.
Conclusion: No, the HIV-from-Fort-Detrick myth was not a Stasi success. Impressions to the contrary can be attributed to reliance on presumptions, boasts, and inventions. Presumptions conceding to the Stasi an extraordinary operational efficiency and an irresistible competence - qualities we could not confirm in this case - made the boasts and inventions more convincing than their evidentiary basis, had it been known, would have allowed. The result was disinformation about disinformation, a product we call "disinformation squared."
abstract_id: PUBMED:31094673
Were our critics right about the Stasi? Background: Disinformation, now best known generically as "fake news," is an old and protean weapon. Prominent in the 1980s was AIDS disinformation, including the HIV-from-Fort-Detrick myth, for whose propagation some figures ultimately admitted blame while others shamelessly claimed credit. In 2013 we reported a comprehensive analysis of this myth, finding leading roles for the Soviet Union's state security service, the KGB, and for biologist and independent conspiracy theorist Jakob Segal but not for East Germany's state security service, the Stasi. We found Stasi involvement had been much less extensive and much less successful than two former Stasi officers had begun claiming following German reunification. In 2014 two historians crediting the two former Stasi officers coauthored a monograph challenging our analysis and portraying the Stasi as having directed Segal, or at least as having used him as a "conscious or unconscious multiplier," and as having successfully assisted a Soviet bloc AIDS-disinformation conspiracy that they soon inherited and thenceforth led. In 2017 a German appellate court found our 2013 analysis persuasive in a defamation suit brought by a filmmaker whose work the 2014 monograph had depicted as co-funded by the Stasi. Question and Methods: Were our critics right about the Stasi? We asked and answered ten subsidiary questions bearing upon our critics' arguments, reassessing our own prior work and probing additional sources including archives of East Germany's Partei- und Staatsführung [party-and-state leadership] and the recollections of living witnesses.
Findings: Jakob Segal transformed and transmitted the myth without direction from the KGB or the Stasi or any element of East Germany's party-and-state leadership. The Stasi had trouble even tracking Segal's activities, which some officers feared would disadvantage East Germany scientifically, economically, and politically. Three officers in one Stasi section did show interest in myth propagation, but their efforts were late, limited, inept, and inconsequential.
Conclusion: The HIV-from-Fort-Detrick myth, most effectively promoted by Jakob Segal acting independently of any state's security service, was not, contrary to claims, a Stasi success.
abstract_id: PUBMED:28134043
Disinformation squared: Was the HIV-from-Fort-Detrick myth a Stasi Success? — CORRIGENDUM. DOI: 10.2990/32_2_2, published by the Association for Politics and the Life Sciences at Texas Tech University and the University of Maryland School of Public Policy, October 2013.
abstract_id: PUBMED:692650
Recombinant DNA risk-assessment studies to begin at Fort Detrick. N/A
abstract_id: PUBMED:11648735
DNA lab: Fort Detrick room set for genetic engineering. N/A
abstract_id: PUBMED:37645289
Responses to digital disinformation as part of hybrid threats: a systematic review on the effects of disinformation and the effectiveness of fact-checking/debunking. The dissemination of purposely deceitful or misleading content to target audiences for political aims or economic purposes constitutes a threat to democratic societies and institutions, and is being increasingly recognized as a major security threat, particularly after evidence and allegations of hostile foreign interference in several countries surfaced in the last five years. Disinformation can also be part of hybrid threat activities. This research paper examines findings on the effects of disinformation and addresses the question of how effective counterstrategies against digital disinformation are, with the aim of assessing the impact of responses such as the exposure and disproval of disinformation content and conspiracy theories. The paper's objective is to synthesize the main scientific findings on disinformation effects and on the effectiveness of debunking, inoculation, and forewarning strategies against digital disinformation. A mixed methodology is used, combining qualitative interpretive analysis with a structured technique for evaluating scientific literature, the systematic literature review (SLR), following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework.
abstract_id: PUBMED:36203682
Human trafficking and the growing malady of disinformation. Disinformation has endangered the most vulnerable communities within our world. The anti-trafficking movement in particular has been adversely impacted by disinformation tactics advanced through the QAnon campaign. QAnon's extremist messaging exacerbates gendered, racist, and xenophobic manifestations of trafficking victimization as well as problematic responses to trafficking that underpin historic structural inequities built into the United States' response to trafficking. We provide an overview of the mechanisms used by the QAnon campaign to spread disinformation and illustrate how these mechanisms adversely affect the anti-trafficking movement. Given the critical role of healthcare providers in both the identification and connection to care for trafficked persons, as well as their susceptibility to disinformation, we provide several recommendations for the health sector to leverage their educational and advocacy power to combat trafficking disinformation while addressing the root causes of human trafficking.
abstract_id: PUBMED:36554727
Disinformation: A Bibliometric Review. Objectives: This paper aimed to provide a systematic review of relevant articles from the perspectives of literature distribution, research hotspots, and existing results, in order to identify frontier directions in the field of disinformation.
Methods: We analyzed disinformation publications published between 2002 and 2021 using bibliometric methods based on the Web of Science. There were 5666 papers analyzed using Derwent Data Analyzer (DDA).
Results: The results show that the USA was the most influential country in this area, while Ecker and Lewandowsky from the University of Western Australia published the largest volume of papers. Keywords such as "social media", "COVID-19", and "vaccination" have gained immense popularity recently.
Conclusions: We summarized four themes of greatest concern to scholars: group heterogeneity of misinformation in memory, disinformation mechanisms in social media, public health related to COVID-19, and the application of big data technology to the infodemic. The future agenda of disinformation research is summarized from three aspects: the mechanism of disinformation, social media users, and the application of algorithms. This work can be a meaningful resource for researchers studying disinformation.
abstract_id: PUBMED:38134524
Trust or distrust? Neither! The right mindset for confronting disinformation. A primary explanation for why individuals believe disinformation is the truth bias, a predisposition to accept information as true. However, this bias is context-dependent, as research shows that rejection becomes the predominant process in a distrust mindset. Consequently, trust and distrust emerge as pivotal factors in addressing disinformation. The current review offers a more nuanced perspective by illustrating that whereas distrust may act as an antidote to the truth bias, it can also paradoxically serve as a catalyst for belief in disinformation. The review concludes that mindsets other than those rooted solely in trust (or distrust), such as an evaluative mindset, may prove to be more effective in detecting and refuting disinformation.
abstract_id: PUBMED:11648698
Genetic research at Ft. Detrick raises few hackles in Frederick. N/A
Answer: No, the HIV-from-Fort-Detrick myth was not a Stasi success. The comprehensive analysis of the myth's origins and propagation found that while the Soviet Union's KGB and biologist Jakob Segal played leading roles in spreading the myth, East Germany's Stasi was not as extensively or successfully involved as some claims suggested (PUBMED:31094673). The myth had already appeared in print in India in 1983 and had been described in publications worldwide prior to 1986, which is earlier than any Stasi document mentioning the myth (PUBMED:24697634). The Stasi did not direct Segal's efforts and had difficulty tracking his activities. They were prone to interpretive errors and self-aggrandizement, often crediting themselves with successes they did not achieve (PUBMED:24697634). The Stasi's involvement in the myth's propagation was found to be late, limited, inept, and inconsequential (PUBMED:31094673). Ultimately, the impression that the Stasi had masterfully executed a disinformation campaign was based on presumptions, boasts, and inventions rather than evidence, leading to what the authors term "disinformation squared" (PUBMED:24697634). |
Instruction: The CBCL dysregulated profile: an indicator of pediatric bipolar disorder or of psychopathology severity?
Abstracts:
abstract_id: PUBMED:24230916
The CBCL dysregulated profile: an indicator of pediatric bipolar disorder or of psychopathology severity? Background: To evaluate whether the Child Behavior Checklist Dysregulated Profile (CBCL-DP) can be used as an effective predictor of psychopathological severity as indicated by suicidality and comorbidities, as well as a predictor of pediatric bipolar disorder (PBD).
Method: CBCL-DP scores for 397 youths seeking treatment for mood disorders were calculated by summing the t-scores of the Anxious/Depressed, Aggressive Behaviors, and Attention Problems subscales, with a clinical cut-off of 210 used to indicate the presence of a dysregulated profile (this scoring rule is restated in the sketch following this abstract). Suicidality and an increased number of diagnoses were used as markers of illness severity.
Results: Those with a dysregulated profile presented more severe suicidal ideation when compared to those without the profile. They also had a significantly larger number of Axis I diagnoses. Groups did not differ in the amount of individuals diagnosed with PBD.
Limitations: Suicidal ideation was assessed by a third-party informant and not from the youths themselves. No other forms of suicidal behavior such as self-harm or suicide attempt were measured. Also there may not be complete convergence between parental reports on behavior and youth reports, which might have affected the results.
Conclusions: These findings suggest that the CBCL-DP is an effective indicator of psychopathological severity through its association with more comorbidities and more severe suicidality. Earlier detection of psychopathological severity through an initial screening tool could aid clinicians in planning treatment and providing quicker and more structured care based on the client's needs.
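For readers who want the CBCL-DP scoring rule from PUBMED:24230916 in operational form, a minimal sketch follows. The function and variable names are hypothetical, not from the study, and treating a sum of exactly 210 as meeting the cut-off is an assumption; the abstract does not specify a strict versus non-strict inequality.

```python
# Hypothetical restatement of the CBCL-DP rule described in PUBMED:24230916:
# the profile score is the sum of three CBCL subscale t-scores, and 210 is
# the clinical cut-off for a dysregulated profile.
DP_CUTOFF = 210

def cbcl_dp_score(anxious_depressed_t: float,
                  aggressive_behavior_t: float,
                  attention_problems_t: float) -> float:
    """Sum the three subscale t-scores that make up the CBCL-DP."""
    return anxious_depressed_t + aggressive_behavior_t + attention_problems_t

def has_dysregulated_profile(*t_scores: float) -> bool:
    # Assumption: a score of exactly 210 counts as meeting the cut-off.
    return cbcl_dp_score(*t_scores) >= DP_CUTOFF

# Example: subscale t-scores of 70, 68, and 72 sum to 210, meeting the cut-off.
print(has_dysregulated_profile(70, 68, 72))  # True
```

Because CBCL subscale t-scores are standardized with a mean of 50 and an SD of 10, the 210 cut-off corresponds to an average of 70 across the three subscales, i.e., roughly two standard deviations above the normative mean on each.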
abstract_id: PUBMED:38389275
The Child Behavior Check List Usefulness in Screening for Severe Psychopathology in Youth: A Narrative Literature Review. Objective: This article reviews the use of the CBCL to diagnose youth with psychopathological disorders, focusing on ADHD, Mood Disorders, Autism Spectrum Disorders, and Disruptive Disorders.
Method: Using a narrative review approach, we investigate the usefulness of the CBCL as a screening tool to detect childhood onset psychopathology across different diagnostic syndromes.
Results: The available literature supports the use of the CBCL for ADHD screening and as a measure of ADHD severity. While some studies support a specific profile linked with childhood bipolar disorder, replication studies for this profile found mixed results. The CBCL was also found to be useful in screening for patients presenting with Autism Spectrum Disorders, Conduct Disorder, and Childhood Bipolar Disorder, all of which present with more severely impaired scores.
Conclusion: The CBCL holds promise as a screening tool for childhood psychopathology.
abstract_id: PUBMED:19486226
Child Behavior Checklist Juvenile Bipolar Disorder (CBCL-JBD) and CBCL Posttraumatic Stress Problems (CBCL-PTSP) scales are measures of a single dysregulatory syndrome. Background: The Child Behavior Checklist Juvenile Bipolar Disorder (CBCL-JBD) profile and Posttraumatic Stress Problems (CBCL-PTSP) scale have been used to assess juvenile bipolar disorder (JBD) and posttraumatic stress disorder (PTSD), respectively. However, their validity is questionable according to previous research. Both measures are associated with severe psychopathology often encompassing multiple DSM-IV diagnoses. Further, children who score highly on one of these scales often have elevated scores on the other, independent of PTSD or JBD diagnoses. We hypothesized that the two scales may be indicators of a single syndrome related to dysregulated mood, attention, and behavior. We aimed to describe and identify the overlap between the CBCL-JBD profile and CBCL-PTSP scales.
Method: Two thousand and twenty-nine (2029) children from a nationally representative sample (1073 boys, 956 girls; mean age = 11.98; age range = 6-18) were rated on emotional and behavior problems by their parents using the CBCL. Comparative model testing via structural equation modeling was conducted to determine whether the CBCL-JBD profile and CBCL-PTSP scale are best described as measuring separate versus unitary constructs. Associations with suicidality and competency scores were also examined.
Results: The CBCL-JBD and CBCL-PTSP demonstrated a high degree of overlap (r = .89) at the latent variable level. The best fitting, most parsimonious model was one in which the CBCL-JBD and CBCL-PTSP items identified a single latent construct, which was associated with higher parental endorsement of child suicidal behavior, and lower functioning.
Conclusions: The CBCL-JBD profile and CBCL-PTSP scale overlap to a remarkable degree, and may be best described as measures of a single syndrome. This syndrome appears to be related to severe psychopathology, but may not conform to traditional DSM-IV classification. These results contribute to the ongoing debate about the utility of the CBCL-JBD and CBCL-PTSP profiles, and offer promising methods of empirically based measurement of disordered self-regulation in youth.
abstract_id: PUBMED:37579549
Diagnostic efficiency and psychometric properties of CBCL DSM-oriented scales in a large sample of Chinese school-attending students aged 5-16. Background: Children and adolescents are vulnerable to various psychiatric disorders during the critical phase of individual development. In China, the child behavior checklist (CBCL) is a widely employed psychometric questionnaire for assessing children and adolescents. However, further validation of the psychometric properties and diagnostic effectiveness of the CBCL DSM-oriented scales is necessary. These scales were developed based on DSM diagnosis and require evaluation using a substantial sample of Chinese individuals.
Methods: This study involved the analysis of a substantial dataset consisting of 72,109 samples collected from five provinces in China. Data was gathered using the CBCL (Parent Rating Scale), and rigorous assessments of reliability and validity were conducted. The mini-international neuropsychiatric interview for children and adolescents (MINI-KID) and the diagnostic and statistical manual of mental disorders-IV (DSM-IV) interview were employed to diagnose the participants. To ensure the accuracy of the diagnoses, receiver operating characteristic (ROC) curves were utilized, and the Youden Index was calculated to determine the appropriate diagnostic cut-off point for each specific target diagnosis.
Results: The study included a total sample of 72,109 cases, of which 19,782 underwent MINI-KID assessment and structured or semi-structured interviews based on DSM-IV to clarify the diagnosis. Reliability and validity analyses showed that the reliability of the subscales and total scales was good, except for Anxiety Problems. The Cronbach's alpha for the CBCL DSM-oriented scales was 0.92. In addition, the validity of all scales was good (CFI = 0.80). For the sample with a clear diagnosis, all five subscales of the CBCL DSM-oriented scales showed fair diagnostic efficiency for their target diagnoses. The areas under the curve (AUC) for Mood disorder, Anxiety, Attention deficit and hyperactivity disorder (ADHD), Oppositional defiant disorder (ODD), and Conduct disorder (CD) were 0.80, 0.74, 0.75, 0.74, and 0.74, respectively. Among the three sample groups, the highest diagnostic efficiency was found for Affective Problems in relation to Mania. The diagnostic cut-off point for each subscale's target diagnosis was clearly defined.
Conclusion: Overall, the reliability, validity and diagnostic efficiency of CBCL DSM-oriented scales in Chinese children and adolescents were within acceptable limits. In addition, we used ROC curves and cut-off points to predict the cut-off values of common child and adolescent psychiatric disorders mentioned in the CBCL DSM-oriented scales. This provides an important reference for the clinical application of the CBCL DSM-oriented scales in Chinese samples.
abstract_id: PUBMED:24372351
Dimensional psychopathology in preschool offspring of parents with bipolar disorder. Background: The purpose of this study is to compare the dimensional psychopathology, as ascertained by parental report, in preschool offspring of parents with bipolar disorder (BP) and offspring of community control parents.
Methods: 122 preschool offspring (mean age 3.3 years) of 84 parents with BP, with 102 offspring of 65 control parents (36 healthy, 29 with non-BP psychopathology), were evaluated using the Child Behavior Checklist (CBCL), the CBCL-Dysregulation Profile (CBCL-DP), the Early Childhood Inventory (ECI-4), and the Emotionality Activity Sociability (EAS) survey. Teachers' Report Forms (TRF) were available for 51 preschoolers.
Results: After adjusting for confounders, offspring of parents with BP showed higher scores in the CBCL total, externalizing, somatic, sleep, aggressive, and CBCL-DP subscales; the ECI-4 sleep problem scale; and the EAS total and emotionality scale. The proportion of offspring with CBCL T-scores ≥ 2 SD above the norm was significantly higher on most CBCL subscales and the CBCL-DP in offspring of parents with BP compared to offspring of controls even after excluding offspring with attention deficit hyperactivity disorder and/or oppositional defiant disorder. Compared to offspring of parents with BP-I, offspring of parents with BP-II showed significantly higher scores in total and most CBCL subscales, the ECI-4 anxiety and sleep scales and the EAS emotionality scale. For both groups of parents, there were significant correlations between CBCL and TRF scores (r = .32-.38, p-values ≤.02).
Conclusions: Independent of categorical axis-I psychopathology and other demographic or clinical factors in both biological parents, preschool offspring of parents with BP have significantly greater aggression, mood dysregulation, sleep disturbances, and somatic complaints compared to offspring of control parents. Interventions to target these symptoms are warranted.
abstract_id: PUBMED:19232020
The Child Behavior Checklist (CBCL) and the CBCL-bipolar phenotype are not useful in diagnosing pediatric bipolar disorder. Objectives: Previous studies have suggested that the sum of Attention, Aggression, and Anxious/Depressed subscales of Child Behavior Checklist (CBCL-PBD; pediatric bipolar disorder phenotype) may be specific to pediatric bipolar disorder (BP). The purpose of this study was to evaluate the usefulness of the CBCL and CBCL-PBD to identify BP in children <12 years old.
Methods: A sample of children with BP I, II, and not otherwise specified (NOS) (n = 157) ascertained through the Course and Outcome for Bipolar Disorder in Youth (COBY) study was compared with groups of children with major depressive/anxiety disorders (MDD/ANX; n = 101), disruptive behavior disorder (DBD; n = 127), and healthy controls (HC; n = 128). The CBCL T-scores and area under the curve (AUC) scores were calculated and compared among the above-noted groups.
Results: Forty-one percent of BP children did not have significantly elevated CBCL-PBD scores (≥2 standard deviations [SD]). The sensitivity and specificity of CBCL-PBD ≥2 SD for diagnosis of BP were 57% and 70-77%, respectively, and the accuracy of the CBCL-PBD for identifying a BP diagnosis was moderate (AUC = 0.72-0.78).
Conclusion: The CBCL and the CBCL-PBD showed that BP children have more severe psychopathology than HC and children with other psychopathology, but they were not useful as a proxy for Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) diagnosis of BP.
abstract_id: PUBMED:2522116
Psychopathology in children aged 10-17 of bipolar parents: psychopathology rate and correlates of the severity of the psychopathology. Seventy-two proband children aged 10-17 of bipolar parents, matched with 72 control children of normal parents, were investigated using DSM-III diagnostic criteria and multiple sources of information. The psychopathology rate in children (61% in probands versus 25% in controls) was related to the impact of psychic disorders on the children's adaptive functioning. The effect of several variables describing the psychiatric status of both parents and familial environment on the severity of psychopathology in children was analysed. Disordered and non-disordered probands were compared with respect to illness characteristics of their parents, familial environment, personality traits, and IQ by means of canonical discriminant analysis.
abstract_id: PUBMED:22085480
Dimensional psychopathology in offspring of parents with bipolar disorder. Objectives: To compare the dimensional psychopathology in offspring of parents with bipolar disorder (BP) with offspring of community control parents as assessed by the Child Behavior Checklist (CBCL).
Methods: Offspring of parents with BP, who were healthy or had non-BP disorders (any psychiatric disorder other than BP; n = 319) or who had bipolar spectrum disorders (n = 35), and offspring of community controls (n = 235) ages 6-18 years were compared using the CBCL, the CBCL-Dysregulation Profile (CBCL-DP), and a sum of the CBCL items associated with mood lability. The results were adjusted for multiple comparisons and for any significant between-group demographic and clinical differences in both biological parents and offspring.
Results: With few exceptions, several CBCL (e.g., Total, Internalizing, and Aggression Problems), CBCL-DP, and mood lability scores in non-BP offspring of parents with BP were significantly higher than in offspring of control parents. In addition, both groups of offspring showed significantly lower scores in most scales when compared with offspring of parents with BP who had already developed BP. Similar results were obtained when analyzing the rates of subjects with CBCL T-scores that were two standard deviations or higher above the mean.
Conclusions: Even before developing BP, offspring of parents with BP had more severe and higher rates of dimensional psychopathology than offspring of control parents. Prospective follow-up studies in non-BP offspring of parents with BP are warranted to evaluate whether these dimensional profiles are prodromal manifestations of mood or other disorders, and can predict those who are at higher risk to develop BP.
abstract_id: PUBMED:27605916
Comparative Evaluation of Child Behavior Checklist-Derived Scales in Children Clinically Referred for Emotional and Behavioral Dysregulation. Background: We recently developed the Child Behavior Checklist-Mania Scale (CBCL-MS), a novel and short instrument for the assessment of mania-like symptoms in children and adolescents derived from the CBCL item pool and have demonstrated its construct validity and temporal stability in a longitudinal general population sample.
Objective: The aim of this study was to evaluate the construct validity of the 19-item CBCL-MS in a clinical sample and to compare its discriminatory ability to that of the 40-item CBCL-dysregulation profile (CBCL-DP) and the 34-item CBCL-Externalizing Scale.
Methods: The study sample comprised 202 children, aged 7-12 years, diagnosed with DSM-defined attention deficit hyperactivity disorder (ADHD), conduct disorder (CD), oppositional defiant disorder (ODD), and mood and anxiety disorders based on the Diagnostic Interview Schedule for Children. The construct validity of the CBCL-MS was tested by means of a confirmatory factor analysis. Receiver operating characteristics (ROC) curves and logistic regression analyses adjusted for sex and age were used to assess the discriminatory ability relative to that of the CBCL-DP and the CBCL-Externalizing Scale.
Results: The CBCL-MS had excellent construct validity (comparative fit index = 0.97; Tucker-Lewis index = 0.96; root mean square error of approximation = 0.04). Despite similar overall performance across scales, the clinical range scores of the CBCL-DP and the CBCL-Externalizing Scale were associated with higher odds for ODD and CD, while the clinical range scores of the CBCL-MS were associated with higher odds for mood disorders. The concordance rate among the children who scored within the clinical range of each scale was over 90%.
Conclusion: CBCL-MS has good construct validity in general population and clinical samples and is therefore suitable for both clinical practice and research.
abstract_id: PUBMED:12127602
Characterization of children of bipolar parents by parent report CBCL. In past research the Child Behavior Checklist (CBCL) has differentiated among various diagnostic categories for children and adolescents. However, research has not been conducted on whether the CBCL differentiates among diagnostic categories for children at high risk for development of psychopathology. This study compares four diagnostic groups [bipolar disorder (BD), attention-deficit/hyperactivity disorder (ADHD), Depressed/Anxious, and No Diagnosis] within a cohort of 58 children of bipolar parents to determine whether their CBCL scores replicate those of children not at high risk for bipolar disorder. The cohort of children of bipolar parents received elevated scores on the CBCL scales in comparison with non-clinical populations. In addition, the CBCL distinguished between children of bipolar parents with and without clinical disorders. Finally, the BD group differed from the ADHD group only on the Aggressive Behaviors, Withdrawn, and Anxious/Depressed subscales of the CBCL. Therefore, the CBCL did not discriminate between the BD and ADHD groups as it had in previous studies of children with BD and unspecified family history. It is possible that this discrepancy is due to a group of children of bipolar parents with ADHD who are currently prodromal for bipolar disorder and therefore received higher scores on the CBCL based on prodromal symptomatology. A longitudinal follow-up of this cohort is necessary to ascertain whether this is the case.
Answer: The Child Behavior Checklist-Dysregulation Profile (CBCL-DP) has been investigated as a potential indicator of pediatric bipolar disorder (PBD) and of psychopathology severity. The evidence suggests that while the CBCL-DP is associated with more severe psychopathology, it may not be a specific indicator of PBD.
Research has shown that youths with a dysregulated profile present with more severe suicidal ideation and a significantly larger number of Axis I diagnoses compared to those without the profile. However, there was no significant difference in the number of individuals diagnosed with PBD between the groups with and without a dysregulated profile (PUBMED:24230916). This suggests that the CBCL-DP may be more indicative of psychopathological severity rather than being specific to PBD.
Further studies have found that the CBCL is useful in screening for various childhood psychopathologies, including ADHD, Autism Spectrum Disorders, Conduct Disorder, and Childhood Bipolar Disorder, all of which present with more severely impaired scores (PUBMED:38389275). However, the CBCL-Juvenile Bipolar Disorder (CBCL-JBD) profile and the CBCL-Posttraumatic Stress Problems (CBCL-PTSP) scales, which have been used to assess JBD and PTSD respectively, may actually measure a single dysregulatory syndrome related to severe psychopathology rather than conforming to traditional DSM-IV classifications (PUBMED:19486226).
Moreover, the CBCL and the CBCL-PBD phenotype have been found not to be useful in diagnosing pediatric bipolar disorder, as they did not serve as a proxy for a DSM-IV diagnosis of BP, despite showing that BP children have more severe psychopathology than healthy controls and children with other psychopathology (PUBMED:19232020).
In summary, while the CBCL-DP is associated with greater psychopathological severity, including suicidality and comorbidities, it does not appear to be a specific indicator of pediatric bipolar disorder. Instead, it may reflect a broader dysregulatory syndrome that encompasses multiple psychopathological dimensions (PUBMED:24230916, PUBMED:38389275, PUBMED:19486226, PUBMED:19232020).
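A note on the diagnostic statistics cited above (an explanatory aside using standard definitions; the worked numbers simply reuse figures reported in PUBMED:19232020): the Youden Index used to select diagnostic cut-off points is

$$J = \text{sensitivity} + \text{specificity} - 1.$$

With the reported sensitivity of 0.57 and specificity of 0.70-0.77 for the CBCL-PBD at ≥2 SD, this gives J = 0.57 + 0.70 - 1 = 0.27 up to J = 0.57 + 0.77 - 1 = 0.34, which is consistent with the "moderate" accuracy (AUC = 0.72-0.78) described in that study.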
Instruction: Are carbon and nitrogen exchange between fungi and the orchid Goodyera repens affected by irradiance?
Abstracts:
abstract_id: PUBMED:25538109
Are carbon and nitrogen exchange between fungi and the orchid Goodyera repens affected by irradiance? Background And Aims: The green orchid Goodyera repens has been shown to transfer carbon to its mycorrhizal partner, and this flux may therefore be affected by light availability. This study aimed to test whether the C and N exchange between plant and fungus is dependent on light availability, and in addition addressed the question of whether flowering and/or fruiting individuals of G. repens compensate for changes in leaf chlorophyll concentration with changes in C and N flows from fungus to plant.
Methods: The natural abundances of stable isotopes of plant C and N were used to infer changes in fluxes between orchid and fungus across natural gradients of irradiance at five sites. Mycorrhizal fungi in the roots of G. repens were identified by molecular analyses. Chlorophyll concentrations in the leaves of the orchid and of reference plants were measured directly in the field.
Key Results: Leaf δ(13)C values of G. repens responded to changes in light availability in a similar manner to autotrophic reference plants, and different mycorrhizal fungal associations also did not affect the isotope abundance patterns of the orchid. Flowering/fruiting individuals had lower leaf total N and chlorophyll concentrations, which is most probably explained by N investments to form flowers, seeds and shoot.
Conclusions: The results indicate that mycorrhizal physiology is relatively fixed in G. repens, and changes in the amount and direction of C flow between plant and fungus were not observed to depend on light availability. The orchid may instead react to low-light sites through increased clonal growth. The orchid does not compensate for low leaf total N and chlorophyll concentrations by using a (13)C- and (15)N-enriched fungal source.
abstract_id: PUBMED:17339276
Mycorrhizal acquisition of inorganic phosphorus by the green-leaved terrestrial orchid Goodyera repens. Background And Aims: Mycorrhizal fungi play a vital role in providing a carbon subsidy to support the germination and establishment of orchids from tiny seeds, but their roles in adult orchids have not been adequately characterized. Recent evidence that carbon is supplied by Goodyera repens to its fungal partner in return for nitrogen has established the mutualistic nature of the symbiosis in this orchid. In this paper the role of the fungus in the capture and transfer of inorganic phosphorus (P) to the orchid is unequivocally demonstrated for the first time.
Methods: Mycorrhiza-mediated uptake of phosphorus in G. repens was investigated using spatially separated, two-dimensional agar-based microcosms.
Results: External mycelium growing from this green orchid is shown to be effective in assimilating and transporting the radiotracer (33)P orthophosphate into the plant. After 7 d of exposure, over 10% of the P supplied had been transported across a diffusion barrier by the fungus to the plants, more than half of this to the shoots.
Conclusions: Goodyera repens can obtain significant amounts of P from its mycorrhizal partner. These results provide further support for the view that mycorrhizal associations in some adult green orchids are mutualistic.
abstract_id: PUBMED:16866946
Mutualistic mycorrhiza in orchids: evidence from plant-fungus carbon and nitrogen transfers in the green-leaved terrestrial orchid Goodyera repens. The roles of mycorrhiza in facilitating the acquisition and transfer of carbon (C) and nitrogen (N) to adult orchids are poorly understood. Here, we employed isotopically labelled sources of C and N to investigate these processes in the green forest orchid, Goodyera repens. Fungus-to-orchid transfers of C and N were measured using mass spectrometry after supplying extraradical mycelial systems with double-labelled [13C-15N]glycine. Orchid-to-fungus C transfer was revealed and quantified by radioisotope imaging and liquid scintillation counting of extraradical mycelium following 14CO2 fixation by shoots. Both 13C and 15N were assimilated by the fungus and transferred to the roots and shoots of the orchid. Contrary to previous reports, considerable quantities (2.6% over 72 h) of fixed C were shown to be allocated to the extraradical mycelium of the fungus. This study demonstrates, for the first time, mutualism in orchid mycorrhiza, bidirectional transfer of C between a green orchid and its fungal symbiont, and a fungus-dependent pathway for organic N acquisition by an orchid.
abstract_id: PUBMED:18627489
Giving and receiving: measuring the carbon cost of mycorrhizas in the green orchid, Goodyera repens. Direct measurement of the carbon (C) 'cost' of mycorrhizas is problematic. Although estimates have been made for arbuscular and ectomycorrhizal symbioses, these are based on incomplete budgets or indirect measurements. Furthermore, the conventional model of unidirectional plant-to-fungus C flux is too simplistic. Net fungus-to-plant C transfer supports seedling establishment in c. 10% of plant species, including most orchids, and bidirectional C flows occur in ectomycorrhiza utilizing soil amino acids. Here, the C cost of mycorrhizas to the green orchid Goodyera repens was determined by measurement of simultaneous bidirectional fluxes of 14C-labelled sources using a monoxenic system with the fungus Ceratobasidium cornigerum. Transfer of C from fungus to plant ('up-flow') occurs in the photosynthesizing orchid G. repens (max. 0.06 µg), whereas over five times more current assimilate (min. 0.355 µg) is simultaneously allocated in the reverse direction to the mycorrhizal fungus ('down-flow') after 8 d. Carbon is transferred rapidly, being detected in plant-fungal respiration within 31 h of labelling. This study provides the most complete C budget for an orchid-mycorrhizal symbiosis, and clearly shows net plant-to-fungus C flux. The rapidity of bidirectional C flux is indicative of dynamic transfer at an interfacial apoplast as opposed to reliance on digestion of fungal pelotons.
abstract_id: PUBMED:29419910
The giant mycoheterotrophic orchid Erythrorchis altissima is associated mainly with a divergent set of wood-decaying fungi. The climbing orchid Erythrorchis altissima is the largest mycoheterotroph in the world. Although previous in vitro work suggests that E. altissima has a unique symbiosis with wood-decaying fungi, little is known about how this giant orchid meets its carbon and nutrient demands exclusively via mycorrhizal fungi. In this study, the mycorrhizal fungi of E. altissima were molecularly identified using root samples from 26 individuals. Furthermore, in vitro symbiotic germination with five fungi and stable isotope compositions in five E. altissima at one site were examined. In total, 37 fungal operational taxonomic units (OTUs) belonging to nine orders in Basidiomycota were identified from the orchid roots. Most of the fungal OTUs were wood-decaying fungi, but underground roots had ectomycorrhizal Russula. Two fungal isolates from mycorrhizal roots induced seed germination and subsequent seedling development in vitro. Measurement of carbon and nitrogen stable isotope abundances revealed that E. altissima is a full mycoheterotroph whose carbon originates mainly from wood-decaying fungi. All of the results show that E. altissima is associated with a wide range of wood- and soil-inhabiting fungi, the majority of which are wood-decaying taxa. This generalist association enables E. altissima to access a large carbon pool in woody debris and has been key to the evolution of such a large mycoheterotroph.
abstract_id: PUBMED:33366047
The complete chloroplast genome sequence of Goodyera foliosa (Orchidaceae). Goodyera foliosa is a terrestrial orchid in Asia and has been listed as an endangered species in the Red List. In this study, we assembled the complete chloroplast genome of G. foliosa using Illumina sequencing data. Its full length is 154,008 bp, comprising a pair of inverted repeat (IR) regions of 25,045 bp, a large single-copy (LSC) region of 83,248 bp, and a small single-copy (SSC) region of 20,670 bp. The chloroplast genome contains 127 genes, including 80 protein-coding genes, 39 tRNA genes, and 8 rRNA genes. In addition, a phylogenetic analysis based on 12 chloroplast genomes of Orchidaceae indicates that G. schlechtendaliana is closely related to G. foliosa. Our study should be helpful for the formulation of conservation strategies and for further research on G. foliosa.
abstract_id: PUBMED:29790578
Mycorrhizal fungi affect orchid distribution and population dynamics. Symbioses are ubiquitous in nature and influence individual plants and populations. Orchids have life history stages that depend fully or partially on fungi for carbon and other essential resources. As a result, orchid populations depend on the distribution of orchid mycorrhizal fungi (OMFs). We focused on evidence that local-scale distribution and population dynamics of orchids can be limited by the patchy distribution and abundance of OMFs, after an update of an earlier review confirmed that orchids are rarely limited by OMF distribution at geographic scales. Recent evidence points to a relationship between OMF abundance and orchid density and dormancy, which results in apparent density differences. Orchids were more abundant, less likely to enter dormancy, and more likely to re-emerge when OMF were abundant. We highlight the need for additional studies on OMF quantity, more emphasis on tropical species, and development and application of next-generation sequencing techniques to quantify OMF abundance in substrates and determine their function in association with orchids. Research is also needed to distinguish between OMFs and endophytic fungi and to determine the function of nonmycorrhizal endophytes in orchid roots. These studies will be especially important if we are to link orchids and OMFs in efforts to inform conservation.
abstract_id: PUBMED:34992588
Extracellular Enzyme Activities and Carbon/Nitrogen Utilization in Mycorrhizal Fungi Isolated From Epiphytic and Terrestrial Orchids. Fungi employ extracellular enzymes to initiate the degradation of organic macromolecules into smaller units and to acquire the nutrients for their growth. As such, these enzymes represent important functional components in terrestrial ecosystems. While it is well-known that the regulation and efficiency of extracellular enzymes to degrade organic macromolecules and nutrient-acquisition patterns strongly differ between major fungal groups, less is known about variation in enzymatic activity and carbon/nitrogen preference in mycorrhizal fungi. In this research, we investigated variation in extracellular enzyme activities and carbon/nitrogen preferences in orchid mycorrhizal fungi (OMF). Previous research has shown that the mycorrhizal fungi associating with terrestrial orchids often differ from those associating with epiphytic orchids, but whether extracellular enzyme activities and carbon/nitrogen preference differ between growth forms remains largely unknown. To fill this gap, we compared the activities of five extracellular enzymes [cellulase, xylanase, lignin peroxidase, laccase, and superoxide dismutase (SOD)] between fungi isolated from epiphytic and terrestrial orchids. In total, 24 fungal strains belonging to Tulasnellaceae were investigated. Cellulase and xylanase activities were significantly higher in fungi isolated from terrestrial orchids (0.050 ± 0.006 U/ml and 0.531 ± 0.071 U/ml, respectively) than those from epiphytic orchids (0.043 ± 0.003 U/ml and 0.295 ± 0.067 U/ml, respectively), while SOD activity was significantly higher in OMF from epiphytic orchids (5.663 ± 0.164 U/ml) than those from terrestrial orchids (3.780 ± 0.180 U/ml). Carboxymethyl cellulose was more efficiently used by fungi from terrestrial orchids, while starch and arginine were more suitable for fungi from epiphytic orchids. Overall, the results of this study show that extracellular enzyme activities and to a lesser extent carbon/nitrogen preferences differ between fungi isolated from terrestrial and epiphytic orchids and may indicate functional differentiation and ecological adaptation of OMF to local growth conditions.
abstract_id: PUBMED:34956293
Integrative Study Supports the Role of Trehalose in Carbon Transfer From Fungi to Mycotrophic Orchid. Orchids rely on mycorrhizal symbiosis, especially in the stage of mycoheterotrophic protocorms, which depend on carbon and energy supply from fungi. The transfer of carbon from fungi to orchids is well-documented, but the identity of compounds ensuring this transfer remains elusive. Some evidence has been obtained for the role of amino acids, but there is also vague and neglected evidence for the role of soluble carbohydrates, probably trehalose, which is an abundant fungal carbohydrate. We therefore focused on the possible role of trehalose in carbon and energy transfer. We investigated the common marsh orchid (Dactylorhiza majalis) and its symbiotic fungus Ceratobasidium sp. using a combination of cultivation approaches, high-performance liquid chromatography, application of a specific inhibitor of the enzyme trehalase, and histochemical localization of trehalase activity. We found that axenically grown orchid protocorms possess an efficient, trehalase-dependent, metabolic pathway for utilizing exogenous trehalose, which can be as good a source of carbon and energy as their major endogenous soluble carbohydrates. This is in contrast to non-orchid plants that cannot utilize trehalose to such an extent. In symbiotically grown protocorms and roots of adult orchids, trehalase activity was tightly colocalized with mycorrhizal structures indicating its pronounced role in the mycorrhizal interface. Inhibition of trehalase activity arrested the growth of both symbiotically grown protocorms and trehalose-supported axenic protocorms. Since trehalose constitutes only an inconsiderable part of the endogenous saccharide spectrum of orchids, degradation of fungal trehalose likely takes place in orchid mycorrhiza. Our results strongly support the neglected view of the fungal trehalose, or the glucose produced by its cleavage as compounds transported from fungi to orchids to ensure carbon and energy flow. Therefore, we suggest that not only amino acids, but also soluble carbohydrates are transported. We may propose that the soluble carbohydrates would be a better source of energy for plant metabolism than amino acids, which is partially supported by our finding of the essential role of trehalase.
abstract_id: PUBMED:25382295
Temporal variation in mycorrhizal diversity and carbon and nitrogen stable isotope abundance in the wintergreen meadow orchid Anacamptis morio. Many adult orchids, especially photoautotrophic species, associate with a diverse range of mycorrhizal fungi, but little is known about the temporal changes that might occur in the diversity and functioning of orchid mycorrhiza during vegetative and reproductive plant growth. Temporal variations in the spectrum of mycorrhizal fungi and in stable isotope natural abundance were investigated in adult plants of Anacamptis morio, a wintergreen meadow orchid. Anacamptis morio associated with mycorrhizal fungi belonging to Tulasnella, Ceratobasidium and a clade of Pezizaceae (Ascomycetes). When a complete growing season was investigated, multivariate analyses indicated significant differences in the mycorrhizal fungal community. Among fungi identified from manually isolated pelotons, Tulasnella was more common in autumn and winter, the pezizacean clade was very frequent in spring, and Ceratobasidium was more frequent in summer. By contrast, relatively small variations were found in carbon (C) and nitrogen (N) stable isotope natural abundance, A. morio samples showing similar (15)N enrichment and (13)C depletion at the different sampling times. These observations suggest that, irrespective of differences in the seasonal environmental conditions, the plant phenological stages and the associated fungi, the isotopic content in mycorrhizal A. morio remains fairly constant over time.
Answer: No, carbon and nitrogen exchange between fungi and the orchid Goodyera repens is not affected by irradiance. A study that tested whether the carbon (C) and nitrogen (N) exchange between plant and fungus depends on light availability found that changes in the amount and direction of C flow between Goodyera repens and its mycorrhizal fungus did not depend on light availability. Leaf δ(13)C values of G. repens responded to changes in light availability in a similar manner to autotrophic reference plants, and different mycorrhizal fungal associations did not affect the isotope abundance patterns of the orchid. The results indicate that mycorrhizal physiology is relatively fixed in G. repens, and the orchid does not compensate for low leaf total N and chlorophyll concentrations by using a (13)C- and (15)N-enriched fungal source (PUBMED:25538109).
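As background for the isotope evidence cited above (a standard definition from stable-isotope ecology, not taken from the abstracts themselves), leaf carbon isotope composition is expressed in per mil delta notation:

$$\delta^{13}\text{C} = \left(\frac{R_{\text{sample}}}{R_{\text{standard}}} - 1\right) \times 1000\ \text{‰}, \qquad R = {}^{13}\text{C}/{}^{12}\text{C}.$$

Enrichment in (13)C relative to autotrophic reference plants is the usual signature of a fungal carbon source in orchids, so the finding that G. repens tracked the reference plants across light gradients is what supports the conclusion that its C exchange is not modulated by irradiance.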
Instruction: Tobacco: a potential inductor of lipid peroxidation of the human spermatozoon membrane?
Abstracts:
abstract_id: PUBMED:15563426
Tobacco: a potential inductor of lipid peroxidation of the human spermatozoon membrane? Objective: To determine the membrane lipid peroxidation of human spermatozoa in a cohort of smokers in comparison with never-smokers.
Materials And Methods: Malondialdehyde (MDA), a stable product of membrane lipid peroxidation, was assessed in 25 smokers and in 17 never-smokers. In parallel, an evaluation of sperm characteristics was performed for all studied patients.
Results: For the first time, a significant increase in MDA concentrations was found between smokers and never-smokers by the Mann-Whitney U test (0.118 ± 0.176 vs 0.0392 ± 0.0117 nmol/10(6) spermatozoa), together with a decrease in forward motility (grade A) (18 ± 8 vs 25 ± 8%) and total sperm count (265.56 ± 186.96 x 10(6) vs 399.30 ± 322.23 x 10(6)), and an increase in tapering heads (6 ± 4 vs 2 ± 2%) and morphological stress pattern cells (39 ± 6 vs 24 ± 5%). In the smokers group, significant negative correlations were found by the non-parametric Spearman test between MDA concentrations and the sperm count per mL (r = -0.767, p < 0.001), the total sperm count (r = -0.656, p < 0.001), and the percentage of normal morphology (r = -0.644, p < 0.001).
Conclusions: Given the deleterious effects of tobacco on a large panel of human cells, and especially on male gametes, the increase in spermatozoon membrane MDA concentrations and the sperm abnormalities found in the group of smokers may be linked to cigarette smoking.
abstract_id: PUBMED:20201647
Effect of ultraviolet C irradiation on human sperm motility and lipid peroxidation. Purpose: Ultraviolet C (UVC) irradiation of aqueous solutions is known to be a good source of reactive oxygen species (ROS). The aim of this study is to examine the effect of increasing doses of UVC irradiation, in the presence and absence of the antioxidant butylated hydroxytoluene (BHT), on human sperm motility and lipid peroxidation of its membranes.
Materials And Methods: Human sperm samples were irradiated with UVC light (254 nm) for different periods of time. A computer-assisted semen analysis of sperm motility was carried out after UV irradiation. The percentage of motile sperm (%MOT), progressive motility, straight line velocity (VSL), curvilinear velocity (VCL) and the percentage of linearity (%LIN) were evaluated. The level of lipid peroxidation of sperm membranes was estimated by measurement of the thiobarbituric acid reactive substances (TBARS).
Results: UVC irradiation of human spermatozoa produced a diminution of sperm motility (%MOT, progressive motility, VSL, VCL, %LIN) and viability and, concomitantly, an increase in the level of lipid peroxidation of the sperm membranes. The observed effects of the UVC irradiation were prevented by addition of the antioxidant BHT, indicating that the effects of UVC on the tested sperm parameters are mediated by an important rise in lipid peroxidation of the sperm membrane.
Conclusion: Lipid peroxidation of the human sperm plasma membrane leads to a decrease in the sperm motility (%MOT, progressive motility, VSL, VCL, %LIN) and viability. The protective effect of BHT on the UVC-irradiated sperm cells indicates the effects of ROS on sperm function.
abstract_id: PUBMED:12647003
Chlamydia trachomatis and sperm lipid peroxidation in infertile men. Aim: To relate the presence of anti-Chlamydia trachomatis IgA in semen to sperm membrane lipid peroxidation and changes in seminal parameters.
Methods: Semen samples of the male partners of 52 couples assessed for undiagnosed infertility were examined for the presence of IgA antibody against C. trachomatis. The level of sperm membrane lipid peroxidation was estimated by determining the malondialdehyde (MDA) formation.
Results: The sperm membranes of infertile males positive for IgA antibodies against C. trachomatis showed a higher level of lipid peroxidation than those of infertile males negative for the IgA antibody (P<0.05). There was a positive correlation (P<0.01) between the level of C. trachomatis antibody and the magnitude of sperm membrane lipid peroxidation. All the other tested semen parameters were found to be similar in the two groups.
Conclusion: The activation of the immune system by C. trachomatis may promote lipid peroxidation of the sperm membrane. This could be the way by which C. trachomatis affects fertility.
abstract_id: PUBMED:20072917
Using fluorescence-activated flow cytometry to determine reactive oxygen species formation and membrane lipid peroxidation in viable boar spermatozoa. Fluorescence-activated flow cytometry analyses were developed for determination of reactive oxygen species (ROS) formation and membrane lipid peroxidation in live spermatozoa loaded with, respectively, hydroethidine (HE) or the lipophilic probe 4,4-difluoro-5-(4-phenyl-1,3-butadienyl)-4-bora-3a,4a-diaza-s-indacene-3-undecanoic acid, C(11)BODIPY(581/591) (BODIPY). ROS were detected by red fluorescence emission from oxidation of HE, and membrane lipid peroxidation was detected by green fluorescence emission from oxidation of BODIPY in individual live sperm. Of the reactive oxygen species generators tested, BODIPY oxidation was specific for FeSO4/ascorbate (FeAc), because menadione and H(2)O(2) had little or no effect. The oxidation of hydroethidine to ethidium was specific for menadione and H(2)O(2); FeAc had no effect. The incidence of basal or spontaneous ROS formation and membrane lipid peroxidation was low in boar sperm (<1% of live sperm) in fresh semen or after low-temperature storage; however, the sperm were quite susceptible to treatment-induced ROS formation and membrane lipid peroxidation.
abstract_id: PUBMED:2553141
Generation of reactive oxygen species, lipid peroxidation, and human sperm function. Recent studies have demonstrated that human spermatozoa are capable of generating reactive oxygen species and that this activity is significantly accelerated in cases of defective sperm function. In view of the pivotal role played by lipid peroxidation in mediating free radical damage to cells, we have examined the relationships between reactive oxygen species production, lipid peroxidation, and the functional competence of human spermatozoa. Using malondialdehyde production in the presence of ferrous ion promoter as an index of lipid peroxidation, we have shown that lipid peroxidation is significantly accelerated in populations of defective spermatozoa exhibiting high levels of reactive oxygen species production or in normal cells stimulated to produce oxygen radicals by the ionophore, A23187. The functional consequences of lipid peroxidation included a dose-dependent reduction in the ability of human spermatozoa to exhibit sperm oocyte-fusion, which could be reversed by the inclusion of a chain-breaking antioxidant, alpha-tocopherol. Low levels of lipid peroxidation also had a slight enhancing effect on the generation of reactive oxygen species in response to ionophore, without influencing the steady-state activity. At higher levels of lipid peroxidation, both the basal level of reactive oxygen species production and the response to A23187 were significantly diminished. In contrast, lipid peroxidation had a highly significant, enhancing effect on the ability of human spermatozoa to bind to both homologous and heterologous zonae pellucidae via mechanisms that could again be reversed by alpha-tocopherol. These results are consistent with a causative role for lipid peroxidation in the etiology of defective sperm function and also suggest a possible physiological role for the reactive oxygen species generated by human spermatozoa in mediating sperm-zona interaction.
abstract_id: PUBMED:2757459
Suppression of lipid peroxidation in human spermatozoa by prostatic inhibin. Loss of sperm motility owing to the production of hydrogen peroxide by lipid peroxidation is regulated by yet unidentified prostatic factor(s). Inhibinlike peptide (HSPI) of prostatic origin isolated from human seminal plasma and having a molecular weight of about 10,400 daltons was studied for its effect on ascorbate-induced lipid peroxidation in human spermatozoa. Dose-related suppression of lipid peroxidation occurred at a dose level of 0.25, 0.5, and 1.0 micrograms. HSPI may be one of the factors involved in the regulation of lipid peroxidation and therefore sperm motility.
abstract_id: PUBMED:22503480
Sea bass sperm freezability is influenced by motility variables and membrane lipid composition but not by membrane integrity and lipid peroxidation. Cryopreserved sperm quality depends on the characteristics of fresh sperm. Thus, it is necessary to establish a group of variables to predict the cryopreservation potential of the fresh samples with the aim of optimizing resources. Motility, viability, lipid peroxidation and lipid profile of European sea bass (Dicentrarchus labrax) sperm were determined before and after cryopreservation to establish which variables more accurately predict the sperm cryopreservation potential in this species. Cryopreservation compromised sperm quality, expressed as a reduction of motility (46.5 ± 2.0% to 35.3 ± 2.5%; P<0.01) and viability (91.3 ± 0.7% to 69.9 ± 1.6%; P<0.01), and produced an increase in lipid peroxidation (2.4 ± 0.4 to 4.0 ± 0.4 μmoles MDA/mill spz; P<0.01). Also, significant changes were observed in the lipid composition before and after freezing, resulting in a reduction in the cholesterol/phospholipids ratio (1.4 ± 0.1 to 1.1 ± 0.0; P<0.01), phosphatidylcholine (47.7 ± 0.8% to 44.2 ± 0.8%; P<0.01) and oleic acid (8.7 ± 0.2% to 8.3 ± 0.2%; P<0.05) in cryopreserved sperm, as well as an increase in lysophosphatidylcholine (4.4 ± 0.3% to 4.8 ± 0.3%; P<0.01) and C24:1n9 fatty acid (0.5 ± 0.1% to 0.6 ± 0.1%; P<0.05). Motility, velocity, cholesterol/phospholipids ratio, monounsaturated fatty acids and the n3/n6 ratio were positively correlated (P<0.05) before and after freezing, whereas, viability and lipid peroxidation were not correlated. Motility and the cholesterol/phospholipids (CHO/PL) ratio were negatively correlated (P<0.05) with each other and the CHO/PL ratio was positively correlated (P<0.05) with lipid peroxidation. Therefore, the results demonstrated that motility and plasma membrane lipid composition (CHO/PL) were the most desirable variables determined in fresh samples to predict cryo-resistance in European sea bass sperm, taking into account the effect of both on cryopreserved sperm quality.
abstract_id: PUBMED:2793053
Lipid peroxidation in human spermatozoa as related to midpiece abnormalities and motility. The formation of malondialdehyde (MDA), a product of lipid peroxidation (LPO), was measured in human spermatozoa from 27 subjects with normal sperm characteristics. Peroxidation of lipids in washed spermatozoa was induced by catalytic amounts of ferrous ions and ascorbate, and malondialdehyde was determined by the thiobarbituric acid method. MDA formation varied considerably from one sample to another. The studied population showed a strong correlation between lipid peroxidation potential (amount of MDA formed by 10(8) spermatozoa after 1 hour of incubation) and 1) initial motility (r = -0.623, P = 0.001) and 2) morphologic abnormalities of the midpiece (r = 0.405, P = 0.05). These results suggest that poor motility is linked with membrane fragility and that spermatozoa with midpiece abnormalities probably have membrane and/or cytoplasmic antiperoxidant system defects. Because LPO potential is related to the two most important characteristics of fertility, it seems possible that it could become a good biochemical index of semen quality.
abstract_id: PUBMED:24358256
Soluble products of Escherichia coli induce mitochondrial dysfunction-related sperm membrane lipid peroxidation which is prevented by lactobacilli. Unidentified soluble factors secreted by E. coli, a frequently isolated microorganism in genitourinary infections, have been reported to inhibit mitochondrial membrane potential (ΔΨm), motility and vitality of human spermatozoa. Here we explore the mechanisms involved in the adverse impact of E. coli on sperm motility, focusing mainly on sperm mitochondrial function and possible membrane damage induced by mitochondrial-generated reactive oxygen species (ROS). Furthermore, as lactobacilli, which dominate the vaginal ecosystem of healthy women, have been shown to exert anti-oxidant protective effects on spermatozoa, we also evaluated whether soluble products from these microorganisms could protect spermatozoa against the effects of E. coli. We assessed motility (by computer-aided semen analysis), ΔΨm (with JC-1 dye by flow cytometry), mitochondrial ROS generation (with MitoSOX red dye by flow cytometry) and membrane lipid-peroxidation (with the fluorophore BODIPY C11 by flow cytometry) of sperm suspensions exposed to E. coli in the presence and in the absence of a combination of 3 selected strains of lactobacilli (L. brevis, L. salivarius, L. plantarum). A Transwell system was used to avoid direct contact between spermatozoa and microorganisms. Soluble products of E. coli induced ΔΨm loss, mitochondrial generation of ROS and membrane lipid-peroxidation, resulting in motility loss. Soluble factors of lactobacilli prevented membrane lipid-peroxidation of E. coli-exposed spermatozoa, thus preserving their motility. In conclusion, sperm motility loss by soluble products of E. coli reflects a mitochondrial dysfunction-related membrane lipid-peroxidation. Lactobacilli could protect spermatozoa in the presence of vaginal disorders, by preventing ROS-induced membrane damage.
abstract_id: PUBMED:37727970
Lipid peroxidation in avian semen. The main cause of sperm chromatin damage is oxidative stress, which is related to embryo development failure and adult infertility in mammals and also in birds. Oxidative stress results in lipid peroxidation (LPO), causing cell damage. Lipid peroxidation is the oxidation of polyunsaturated fatty acids (PUFAs) in biological systems and causes changes in the physical structure and characteristics of the cell membrane. Due to the high amounts of PUFAs in the avian sperm membrane, avian sperm appear susceptible to peroxidative damage, which is a substantial factor in the fertilization capacity of sperm. The most commonly used methods for measuring LPO or its by-products, such as malondialdehyde (MDA) and 4-hydroxy-2-nonenal (4-HNE), in bird semen are based on the colorimetric TBARS (thiobarbituric acid reactive substances) method and on the use of a fluorescence probe (C11-BODIPY 581/591) as a marker to evaluate membrane lipid peroxidation. This review aims first to introduce LPO in avian semen and its effects on avian sperm, and second to summarize the commonly applied methods of evaluating LPO and its damage in fresh and stored avian semen.
Answer: Yes, tobacco is a potential inductor of lipid peroxidation of the human spermatozoon membrane. A study comparing smokers with never-smokers found a significant increase in malondialdehyde (MDA) concentrations in the spermatozoa of smokers (PUBMED:15563426). MDA is a stable product of membrane lipid peroxidation, and its elevated levels in smokers suggest that tobacco smoking may be linked to increased spermatozoon membrane lipid peroxidation. Additionally, the study observed a decrease in forward motility (grade A) and total sperm count, as well as an increase in tapering heads and morphological stress pattern cells in the group of smokers. Significant negative correlations were found between MDA concentrations and sperm count per mL, total sperm count, and the percentage of normal morphology, indicating that the sperm abnormalities found in smokers may be associated with the lipid peroxidation induced by cigarette smoking (PUBMED:15563426).
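For reference on the correlation statistics quoted above (standard statistics, not drawn from the abstract itself), the non-parametric Spearman coefficient used in PUBMED:15563426 is computed from ranks; in the absence of tied ranks it reduces to

$$r_s = 1 - \frac{6\sum_{i} d_i^2}{n(n^2 - 1)},$$

where d_i is the difference between the ranks of the i-th paired observations and n is the number of pairs. A value such as r = -0.767 therefore indicates a strong monotonic decrease in sperm count with increasing MDA concentration.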
Instruction: Does scripting operative plans in advance lead to better preparedness of trainees?
Abstracts:
abstract_id: PUBMED:27839687
Does scripting operative plans in advance lead to better preparedness of trainees? A pilot study. Background: We pondered if preoperative scripting might better prepare residents for the operating room (OR).
Methods: Interns rotating on a general surgeon's service were instructed to script randomized cases prior to entering the OR. Scripts contained up to 20 points highlighting patient information perceived important for surgical management. The attending was blinded to the scripting process and completed a feedback sheet (Likert scale) following each procedure. Feedback questions were categorized into "preparedness" (aware of patient specific details, etc.) and "performance" (provided better assistance, etc.).
Results: Eight surgical interns completed 55 scripted and 61 non-scripted cases. Total scores were higher in scripted cases (p = 0.02). Performance scores were higher for scripted cases (3.31 versus 3.13, p = 0.007), while preparedness did not differ (3.65 and 3.62, p = 0.51).
Conclusions: This pilot study suggests scripting cases may be a useful preoperative planning tool to increase interns' operative and patient care performance but may not affect perceived preparedness.
abstract_id: PUBMED:37922539
Death Preparedness: Development and Initial Validation of the Advance Planning Preparedness Scale. Delayed advance planning and the costs of life-sustaining treatments at the end of life significantly contribute to the economic burden of healthcare. Clinician barriers include perceptions of inappropriate timing, lack of skills in end-of-life communication, and viewing readiness as a behavior rather than a death attitude. This study developed and validated a measure of psychological preparedness for advance directive completion. Confirmatory factor analysis (N = 543) of a 35-item pool (Cronbach α = .96) supported five sub-scales: psychological comfort (α = .87), desire to know (α = .88), thinking (α = .84), willingness (α = .82), and existential reflection (α = .79), with a possible common factor (α = .84). Results suggested that significant predictors of completing directives within 30 days included discussion (OR .08, p < .001), preparedness (OR 4.08, p = .03), and uncertainty (OR 4.37, p = .02). The APP-35 is a reliable and valid measure with utility to assess readiness for completion of EOL documents.
abstract_id: PUBMED:28526918
Effect of problem and scripting-based learning on spine surgical trainees' learning outcomes. Purpose: To assess the impact of problem and scripting-based learning (PSBL) on spine surgical trainees' learning outcomes.
Methods: 30 spine surgery postgraduate-year-1 residents (PGY-1s) from the First Hospital of China Medical University were randomly divided into two groups. The first group studied spine surgical skills and developed individual judgment under a conventional didactic model, whereas the PSBL group used the PBL and scripting model. A feedback questionnaire and resident satisfaction were evaluated by the first assistant surgeon immediately following each procedure. At the end of the study, residents filled out questionnaires focused on identifying the strengths of each teaching method and took a multiple-choice theoretical examination. The results were analyzed by t tests.
Results: A significant difference was found between the two groups in the total mean score of the preparedness and performance feedback statements (P = 0.01) and in the questionnaire on the PGY-1s' opinions of the effectiveness of the two teaching methods (P = 0.004). Compared with the non-PSBL group, the PSBL group had a significantly higher mean score for pre-operative preparedness (P = 0.01), but there was no significant difference between the two groups in the theoretical examination, intra-operative performance, or overall satisfaction with the PGY-1s. The residents found that PSBL could develop their judgment (P = 0.03) and provide greater satisfaction (P = 0.02), and they would like to repeat the experience (P = 0.03).
Conclusions: The PSBL method can activate spine residents' prior knowledge and build on existing cognitive frameworks, making it an important tool for improving pre-operative preparedness. We believe that PSBL is an important first step in training spine residents to become confident and safe spine surgeons.
abstract_id: PUBMED:31328711
European Pandemic Influenza Preparedness Planning: A Review of National Plans, July 2016. The influenza A (H1N1) pandemic commenced in April 2009. Robust planning and preparedness are needed to minimize the impact of a pandemic. This study aims to review whether key elements of pandemic preparedness are included in the national plans of European countries. Key elements were identified before and during the evaluations of the 2009 pandemic and are defined in this study by 42 items. These items are used to score a total of 28 publicly available national pandemic influenza plans. We found that plans published before the 2009 influenza pandemic score lower than plans published after the pandemic. Plans from countries with a small population size score significantly lower compared to national plans from countries with a big population (P < .05). We stress that the review of written plans does not reflect the actual preparedness level, as the level of preparedness entails much more than the existence of a plan. However, we do identify areas of improvement for the written plans, such as including aspects on the recovery and transition phase and several opportunities to improve coordination and communication, including a description of the handover of leadership from health to wider sector management and communication activities during the pre-pandemic phase. (Disaster Med Public Health Preparedness. 2019;13:582-592).
abstract_id: PUBMED:29029975
Distressed setting and profound challenges: Pandemic influenza preparedness plans in the Eastern Mediterranean Region. Background: Influenza pandemics are unpredictable and can have severe health and economic implications. Preparedness for pandemic influenza as advised by the World Health Organization (WHO) is key in minimizing the potential impacts. Pandemic Influenza Preparedness (PIP) Framework is a global public-private initiative to strengthen the preparedness. A total of 43 countries receive funds through Partnership Contribution (PC) component of PIP Framework to enhance preparedness; seven of these fall in the WHO's Eastern Mediterranean Region. We report findings of a desk review of preparedness plans of six such countries from the Region.
Methods: The assessment was done using a standardized checklist containing five criteria and 68 indicators. The checklist was developed using the latest WHO guidelines, in consultation with influenza experts from the Region. The criteria included preparation, surveillance, prevention and containment, case investigation and treatment, and risk communication. Two evaluators independently examined and scored the plans.
Results: The pandemic preparedness plan of only one country scored above 70% in aggregate and above 50% on all individual criteria. Plans from the rest of the countries scored below satisfactory in aggregate, as well as on individual preparedness criteria. Among the individual criteria, prevention and containment scored highest while case investigation and treatment scored lowest for the majority of the countries. In general, surveillance also scored low, and it was absent altogether in one of the plans.
Conclusions: This was a desk review of the plans and not an actual assessment of influenza preparedness. Moreover, only plans of countries facilitated through funds provided under the PC implementation plan were included. The preparedness scores of the majority of reviewed plans were not satisfactory. This warrants a larger study of a representative sample from the Region and also calls for immediate policy action to improve the pandemic influenza preparedness plans and thereby enhance pandemic preparedness in the Region.
abstract_id: PUBMED:34888200
Redeployment of psychiatrist trainees during the COVID-19 pandemic: evaluation of attitude and preparedness. Background: The coronavirus disease-2019 (COVID-19) pandemic has imposed an unprecedented strain on healthcare systems worldwide. In response, psychiatrist trainees were redeployed from their training sites to help manage patients with COVID-19. This study aimed to examine the attitude of psychiatrist trainees toward redeployment to COVID-19 sites and their perceived preparedness for managing physical health conditions during redeployment. Methods: A cross-sectional researcher-developed online survey was administered among psychiatrist trainees in May 2020 at the Department of Psychiatry, Hamad Medical Corporation, Qatar. Results: Of the 45 psychiatrist trainees, 40 (88.9%) responded to the survey. Most trainees reported being comfortable dealing with chronic medical conditions, but less so with acute life-threatening medical conditions. Half reported feeling anxious about redeployment, and most felt the need for additional training. We found that trainees' perceived redeployment preparedness was significantly associated with their level of postgraduate training and the time since and duration of their last medical or surgical training. Conclusion: Adequate preparation and training of psychiatrist trainees is important before redeployment to COVID-19 sites to ensure that they can effectively and safely manage patients with COVID-19.
abstract_id: PUBMED:27212709
Adversaries at the Bedside: Advance Care Plans and Future Welfare. Advance care planning refers to the process of determining how one wants to be cared for in the event that one is no longer competent to make one's own medical decisions. Some have argued that advance care plans often fail to be normatively binding on caretakers because those plans do not reflect the interests of patients once they enter an incompetent state. In this article, we argue that when the core medical ethical principles of respect for patient autonomy, honest and adequate disclosure of information, institutional transparency, and concern for patient welfare are upheld, a policy that would allow for the disregard of advance care plans is self-defeating. This is because when the four principles are upheld, a patient's willingness to undergo treatment depends critically on the willingness of her caretakers to honor the wishes she has outlined in her advance care plan. A patient who fears that her caretakers will not honor her wishes may choose to avoid medical care so as to limit the influence of her caretakers in the future, which may lead to worse medical outcomes than if she had undergone care. In order to avoid worse medical outcomes and uphold the four core principles, caregivers who are concerned about the future welfare of their patients should focus on improving advance care planning and commit to honoring their patients' advance care plans.
abstract_id: PUBMED:32534981
Cardiopulmonary resuscitation and endotracheal intubation decisions for adults with advance care directive and resuscitation plans in the emergency department. Background: Emergency departments routinely offer cardiopulmonary resuscitation and endotracheal intubation to patients in resuscitative states. With increasing longevity and the prevalence of chronic conditions in Australia, there has been a growing need for the uptake and implementation of advance care directives and resuscitation plans. This study investigates the frequency of the presence of advance care directives and resuscitation plans and their utilisation in cardiopulmonary resuscitation and endotracheal intubation decision making.
Methods: Retrospective audit of the electronic medical records of patients aged ≥65 years presenting over a 3-month period. Data collected included demographics, triage categories, and advance care directive and/or resuscitation plans/orders status.
Results: A total of 6439 patients were included, representing 29% of the total patient population during the study period. Participants were randomly selected (N = 300); the mean age was 78.7 (±8.1) years. An advance care directive was present in only 8%, and one in three patients (37%) had a previous resuscitation plan/order. A senior consultant was present at the department for consultation by junior doctors for most of the patients (82%). Acknowledgment of either advance care directive or resuscitation plans/orders in clinical notes was only 9.5% (n = 116).
Conclusion: Advance care directive prevalence was low, with resuscitation plans/orders being more common. However, clinician acknowledgement was infrequent for both.
abstract_id: PUBMED:26948258
Framing post-pandemic preparedness: Comparing eight European plans. Framing has previously been studied in the field of pandemic preparedness and global health governance and influenza pandemics have usually been framed in terms of security and evidence-based medicine on a global scale. This paper is based on the pandemic preparedness plans, published after 2009, from eight European countries. We study how pandemic preparedness is framed and how pandemic influenza in general is narrated in the plans. All plans contain references to 'uncertainty', 'pandemic phases', 'risk management', 'vulnerability' and 'surveillance'. These themes were all framed differently in the studied plans. The preparedness plans in the member states diverge in ways that will challenge the ambition of the European Union to make the pandemic preparedness plans interoperable and to co-ordinate the member states during future pandemics.
abstract_id: PUBMED:29176589
Australia's influenza pandemic preparedness plans: an analysis. We analysed Australian plans issued by the public sector that were current at the time of the last human pandemic, in 2009. They came from various levels of governance and offered guidance in key domains. Using 13 established criteria, we rated 10 plans (national, state, and territorial) for their usefulness in guiding health and medical intervention, business continuity, and crisis communication, plus consideration of at-risk populations. The intended end-user of most plans was not clear, whether hospital manager, health worker, or policy maker. Scores ranged from 8 to 29 out of a maximum possible 39, with many inconsistencies between plans. Health system-related issues were better addressed than critical infrastructure and essential-systems resilience. The needs of Indigenous populations and the use of pneumococcal vaccination and antibiotics were rarely considered in the plans. Pandemic response would be more effective if plans were standardised and clear, and if they included overlooked dimensions of a pandemic's impact as well as guidance for specified end-users.
Answer: Yes, scripting operative plans in advance does lead to better preparedness of trainees. A pilot study involving surgical interns found that scripting cases prior to entering the operating room led to higher total scores in performance, suggesting that scripting may be a useful preoperative planning tool to increase interns' operative and patient care performance. However, it did not significantly affect the perceived preparedness of the interns (PUBMED:27839687). Additionally, another study on spine surgical trainees using problem and scripting-based learning (PSBL) showed that the PSBL group had significantly higher mean scores of pre-operative preparedness compared to the non-PSBL group, indicating that PSBL can improve pre-operative preparedness (PUBMED:28526918).
Instruction: Do 1-year outcomes predict 2-year outcomes for adult deformity surgery?
Abstracts:
abstract_id: PUBMED:28288920
Early Patient-Reported Outcomes Predict 3-Year Outcomes in Operatively Treated Patients with Adult Spinal Deformity. Background: For patients with adult spinal deformity (ASD), surgical treatment may improve their health-related quality of life. This study investigates when the greatest improvement in outcomes occurs and whether incremental improvements in patient-reported outcomes during the first postoperative year predict outcomes at 3 years.
Methods: Using a multicenter registry, we identified 84 adults with ASD treated surgically from 2008 to 2012 with complete 3-year follow-up. Pairwise t tests and multivariate regression were used for analysis. Significance was set at P < 0.01.
Results: Mean Oswestry Disability Index (ODI) and Scoliosis Research Society-22r total (SRS-22r) scores improved by 13 and 0.8 points, respectively, from preoperatively to 3 years (both P < 0.001). From preoperatively to 6 weeks postoperatively, ODI scores worsened by 5 points (P = 0.049) and SRS-22r scores improved by 0.3 points (P < 0.001). Between 6 weeks and 1 year, ODI and SRS-22r scores improved by 19 and 0.5 points, respectively (both P < 0.001). Incremental improvements during the first postoperative year predicted 3-year outcomes in ODI and SRS-22r scores (adjusted R2 = 0.52 and 0.42, respectively). There were no significant differences in the measured or predicted 3-year ODI (P = 0.991) or SRS-22r scores (P = 0.986).
Conclusions: In surgically treated patients with ASD, the greatest improvements in outcomes occurred between 6 weeks and 1 year postoperatively. A model with incremental improvements from baseline to 6 weeks and from 6 weeks to 1 year can be used to predict ODI and SRS-22r scores at 3 years.
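The prediction model described above is essentially a linear regression of the 3-year change on the two early incremental changes. The sketch below is a minimal illustration of that idea on synthetic data; the coefficients, noise levels, and simulated cohort are assumptions for illustration and are not taken from the study.

```python
# Sketch: predict 3-year ODI change from early incremental improvements.
# Synthetic data only; no values here come from the PUBMED:28288920 cohort.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 84  # cohort size matching the abstract; the data itself is simulated

delta_0_6w = rng.normal(-5, 8, n)     # ODI change, baseline -> 6 weeks
delta_6w_1y = rng.normal(-19, 10, n)  # ODI change, 6 weeks -> 1 year
# Assume (for illustration) the long-term change extends the early trajectory.
delta_3y = 0.4 * delta_0_6w + 0.9 * delta_6w_1y + rng.normal(0, 6, n)

X = np.column_stack([delta_0_6w, delta_6w_1y])
model = LinearRegression().fit(X, delta_3y)

r2 = model.score(X, delta_3y)
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - X.shape[1] - 1)  # adjusted R2
print(f"R2 = {r2:.2f}, adjusted R2 = {adj_r2:.2f}, coefs = {model.coef_}")
```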
abstract_id: PUBMED:35220775
Determining a Cutoff Value for Hand Grip Strength to Predict Favorable Outcomes of Adult Spinal Deformity Surgery. Study Design: Retrospective review.
Objectives: To establish a cutoff value for hand grip strength and predict the favorable outcomes of adult spinal deformity surgery.
Summary Of Background Data: Hand grip strength (HGS) has been suggested to predict surgical outcomes in various fields, including adult spinal deformity (ASD). However, to the best of our knowledge, no study has established a cutoff value for HGS in patients with ASD.
Methods: This study included 115 female patients who underwent reconstructive spinal surgery for ASD between September 2016 and September 2020. HGS was measured preoperatively. The Oswestry Disability Index (ODI), EuroQOL-5-dimension (EQ-5D), and visual analog scale (VAS) scores for back pain were all recorded both before and after surgery. Patients were dichotomized into either favorable or unfavorable outcome groups using an ODI cutoff score of 22 at 1 year after surgery. Multivariate logistic regression analysis was performed to identify significant factors leading to favorable outcomes. A receiver operating characteristic (ROC) curve was drawn to define the cutoff value of HGS for favorable outcomes.
Results: Multivariate logistic regression analysis showed that HGS is significantly associated with favorable surgical outcomes in ASD (P = .031). The ROC curve suggested a cutoff value of 14.20 kg for HGS (area under the curve (AUC) = .678, P = .013) to predict favorable surgical outcomes in ASD. The surgical complications were not significantly affected by HGS.
Conclusion: The HGS of patients with ASD can be interpreted with a cutoff value of 14.20 kg. Patients with HGS above this cutoff value showed superior surgical outcomes at 1 year after surgery compared to those below this cutoff value.
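A cutoff like the 14.20 kg reported above is typically read off an ROC curve at the threshold maximizing Youden's J (sensitivity + specificity − 1). The following is a minimal sketch of that procedure on synthetic grip-strength data; the distributions are assumptions, and this is not the study's actual analysis.

```python
# Sketch: derive an HGS cutoff for a binary outcome via ROC and Youden's J.
# Synthetic data; group means and sizes below are invented for illustration.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
favorable = rng.normal(18, 5, 70)    # HGS (kg), favorable-outcome group
unfavorable = rng.normal(13, 5, 45)  # HGS (kg), unfavorable-outcome group

hgs = np.concatenate([favorable, unfavorable])
y = np.concatenate([np.ones(70), np.zeros(45)])  # 1 = favorable outcome

fpr, tpr, thresholds = roc_curve(y, hgs)  # higher HGS scores favorably
youden_j = tpr - fpr
cutoff = thresholds[np.argmax(youden_j)]

print(f"AUC = {roc_auc_score(y, hgs):.3f}, cutoff = {cutoff:.2f} kg")
```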
abstract_id: PUBMED:35068819
What are the major drivers of outcomes in cervical deformity surgery? Background Context: Cervical deformity (CD) correction is becoming more challenging and complex. The factors that drive optimal outcomes remain understudied in CD correction surgery.
Purpose: The purpose of this study is to assess the factors associated with improved outcomes (IO) following CD surgery.
Study Design Setting: Retrospective review of a single-center database.
Patient Sample: Sixty-one patients with CD.
Outcome Measures: The primary outcomes measured were radiographic and clinical "IO" or "poor outcome" (PO). Radiographic IO or PO was assessed utilizing Schwab pelvic tilt (PT)/sagittal vertical axis (SVA), and Ames cervical SVA (cSVA)/TS-CL. Clinical IO or PO was assessed using MCID EQ5D, Neck Disability Index (NDI), and/or improvement in Modified Japanese Orthopedic Association Scale (mJOA) modifier. The secondary outcomes assessed were complication and reoperation rates.
Materials And Methods: CD patients with data available on baseline (BL) and 1-year (1Y) radiographic measures and health-related quality-of-life scores were included in our study. Patients with reoperations for infection were excluded. Patients were categorized as IO, PO, or neither. IO was defined as "nondeformed" radiographic measures as well as improved clinical outcomes. PO was defined as "moderate or severe deformed" radiographic measures as well as worsening clinical outcome measures. Random forest analysis assessed the relative weight of predictors for IO and PO. Categorical regression models were utilized to predict BL regional deformity (Ames cSVA, TS-CL, horizontal gaze), BL global deformity (Schwab PI-LL, SVA, PT), regional/global change (BL to 1Y), BL disability (mJOA score), and BL pain/function impact outcomes.
Results: Sixty-one patients met the inclusion criteria (mean age 55.8 years; 54.1% female). The surgical approaches were 18.3% anterior, 51.7% posterior, and 30% combined. The average number of levels fused was 7.7. The mean operative time was 823 min, and the mean estimated blood loss was 1037 ml. At 1 year, 24.6% of patients were found to have an IO and 9.8% a PO. Random forest analysis showed that the top 5 individual factors associated with an "IO" were BL maximum kyphosis, maximum lordosis, C0-C2 angle, L4-pelvic angle, and NSR back pain (80% radiographic, 20% clinical). The categorical IO regression model (R2 = 0.328, P = 0.007) found the following factors to be significant: low BL regional deformity (β = ‒0.082), low BL global deformity (β = ‒0.099), global improvement (β = 0.532), regional improvement (β = 0.230), low BL disability (β = 0.100), and low BL NDI (β = 0.024). Random forest found that the top 5 individual BL factors associated with "PO" (80% radiographic) were BL CL apex, DJK angle, cervical lordosis, T1 slope, and NSR neck pain. The categorical PO regression model (R2 = 0.306, P = 0.012) found the following factors to be significant: high BL regional deformity (β = ‒0.108), high BL global deformity (β = ‒0.255), global decline (β = 0.272), regional decline (β = 0.443), BL disability (β = ‒0.164), and BL severe NDI (>69) (β = 0.181).
Conclusions: The categorical weight demonstrated radiographic as the strongest predictor of both improved (global alignment) and PO (regional deformity/deterioration). Radiographic factors carry the most weight in determining an improved or PO and can be ultimately utilized in preoperative planning and surgical decision-making to optimize the outcomes.
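The "top 5 individual factors" above come from a random-forest ranking of predictors. As a rough sketch of how such a ranking is produced, the hypothetical example below fits a random forest on synthetic baseline features and reads off impurity-based importances; the feature names are placeholders, not the study's variables, and the outcome mechanism is assumed.

```python
# Sketch: rank predictors of a binary improved-outcome label with a random forest.
# Synthetic features and labels; this is not the PUBMED:35068819 dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 61  # matches the abstract's sample size; the data itself is simulated
features = ["max_kyphosis", "max_lordosis", "c0_c2_angle",
            "l4_pelvic_angle", "back_pain_nsr", "t1_slope"]
X = rng.normal(size=(n, len(features)))
# Assume the outcome depends mostly on the first two features (illustrative).
logit = 1.2 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 0.5, n)
y = (logit > 0).astype(int)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
for name, imp in sorted(zip(features, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:16s} {imp:.3f}")
```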
abstract_id: PUBMED:37450042
Adult spinal deformity patients revised for pseudarthrosis have comparable two-year outcomes to those not undergoing any revision surgery. Purpose: This study aimed to evaluate whether adult spinal deformity patients undergoing revision for symptomatic pseudarthrosis have two-year outcomes comparable to those of patients who do not experience pseudarthrosis.
Methods: Patients whose index procedure was revision for pseudarthrosis (pseudo) were compared with patients who underwent a primary procedure and did not have pseudarthrosis by 2Y post-op (non-pseudo). Patients were propensity-matched (PSM) based on baseline (BL) sagittal alignment, specifically C7SVA and CrSVA-Hip. Key outcomes were 2Y PROs (SRS and ODI) and reoperation. All patients had a minimum follow-up period of two years.
Results: A total of 224 patients with a minimum 2-year follow-up were included (pseudo = 42, non-pseudo = 182). Compared to non-pseudo, pseudo patients were more often female (P = 0.0018) and had worse BL sagittal alignment, including T1PA (P = 0.02), C2-C7 SVA (P = 0.0002), and CrSVA-Hip (P = 0.004). After 37 PSM pairs were generated, there was no significant difference in demographics, BL and 2Y alignment, or operative/procedural variables. PSM pairs did not report any significantly different PROs at BL. Consistently, at 2Y, there were no significant differences in PROs, including SRS function (3.9 (0.2) vs 3.7 (0.2), P = 0.44), pain (4.0 (0.2) vs 3.57 (0.2), P = 0.12), and ODI (25.7 (5.2) vs 27.7 (3.7), P = 0.76). There were no differences in 1Y (10.8% vs 10.8%, P > 0.99) and 2Y (13.2% vs 15.8%, P = 0.64) reoperation rates, PJK rate (2.6% vs 10.5%, P = 0.62), or implant failure (2.6% vs 10.5%, P = 0.37). Notably, only 2 patients (5.4%) had recurrent pseudarthrosis following revision. Kaplan-Meier curves indicated that patients undergoing intervention for pseudarthrosis had comparable overall reoperation-free survival (log-rank test, χ2 = 0.1975, P = 0.66).
Conclusions: Patients undergoing revision for pseudarthrosis have PROs and clinical outcomes comparable to those of patients who never experienced pseudarthrosis. Recurrence of symptomatic pseudarthrosis was infrequent.
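Propensity score matching (PSM) of the kind used above pairs each revision patient with a control who has a similar modeled probability of "treatment" given baseline covariates. The sketch below shows one common recipe (a logistic propensity model plus greedy 1:1 nearest-neighbor matching with a caliper); it is a generic illustration on synthetic data, not the study's matching protocol, and the caliper of 0.05 is an assumption.

```python
# Sketch: 1:1 greedy nearest-neighbor propensity score matching.
# Synthetic covariates; the two columns stand in for C7SVA and CrSVA-Hip.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_treat, n_ctrl = 42, 182  # group sizes as reported in the abstract
X_treat = rng.normal(0.3, 1.0, (n_treat, 2))
X_ctrl = rng.normal(0.0, 1.0, (n_ctrl, 2))

X = np.vstack([X_treat, X_ctrl])
t = np.concatenate([np.ones(n_treat), np.zeros(n_ctrl)])

ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
ps_treat, ps_ctrl = ps[:n_treat], ps[n_treat:]

pairs, used = [], set()
for i in np.argsort(ps_treat)[::-1]:  # match hardest-to-match cases first
    d = np.abs(ps_ctrl - ps_treat[i])
    d[list(used)] = np.inf             # each control may be used only once
    j = int(np.argmin(d))
    if d[j] < 0.05:                    # caliper of 0.05 (assumed)
        pairs.append((i, j))
        used.add(j)

print(f"{len(pairs)} matched pairs")
```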
abstract_id: PUBMED:27927395
Clinical Results and Functional Outcomes in Adult Patients After Revision Surgery for Spinal Deformity Correction: Patients Younger than 65 Years Versus 65 Years and Older. Study Design: Retrospective comparison.
Objective: To compare complications and radiographic and functional outcomes between patients aged 40-64 years and those aged 65 years or older undergoing revision spinal deformity surgery.
Summary Of Background Data: The effect of age on radiographic and functional outcomes has not been well established in the literature for patients undergoing revision adult deformity surgery. The hypothesis was that the complications and radiographic and functional outcomes of younger and older adult patients would be comparable.
Methods: The authors retrospectively reviewed prospectively collected data on 109 consecutive patients (84 women and 25 men) undergoing revision spinal deformity surgery who were 40 years of age or older. All surgeries were performed at 1 institution by the senior author. Patients were divided into groups based on age: younger than 65 years of age (70 patients) or 65 years of age or older (39 patients), and complications and radiographic and functional outcomes were compared. All patients had at least 2 years' clinical follow-up. Hotelling's t2 test and the χ2 test were used to compare differences; statistical significance was set at p < .05.
Results: There was no significant difference between the 2 groups in major complications (p = .62), minor complications (p = .34), or reoperation rate (p = .08). Major correction was achieved in the coronal and sagittal planes in both groups after surgery. By final follow-up, patients in both groups had significant improvements from baseline in Oswestry disability index (p < .05) and in all Scoliosis Research Society-22 domains (p < .001); there was no significant difference in any domain score between groups (p > .05).
Conclusions: Older adult patients undergoing revision deformity correction surgery achieved functional outcome benefits comparable to those in younger adults without significantly more complications. Surgeons should be aware of these factors when counseling patients regarding revision surgery for deformity correction.
abstract_id: PUBMED:29248781
Drivers of Cervical Deformity Have a Strong Influence on Achieving Optimal Radiographic and Clinical Outcomes at 1 Year After Cervical Deformity Surgery. Objective: The primary driver (PD) of cervical malalignment is important in characterizing cervical deformity (CD) and should be included in fusion to achieve alignment and quality-of-life goals. This study aims to define how PDs improve understanding of the mechanisms of CD and assesses the impact of driver region on realignment/outcomes.
Methods: Inclusion criteria were radiographic CD, age >18 years, and 1-year follow-up. The PD apex was classified by spinal region (cervical, cervicothoracic junction (CTJ), thoracic, or spinopelvic) by a panel of spine deformity surgeons. The primary analysis evaluated PD groups meeting alignment goals (by Ames modifiers cervical sagittal vertical axis/T1 slope minus cervical lordosis/chin-brow vertical angle/modified Japanese Orthopaedic Association questionnaire) and health-related quality of life (HRQL) goals (EuroQol-5 Dimensions questionnaire/Neck Disability Index/modified Japanese Orthopaedic Association questionnaire) using t tests. The secondary analysis grouped interventions by fusion constructs including the primary or secondary apex based on the lowest instrumented vertebra (LIV): cervical, LIV ≤C7; CTJ, LIV ≤T3; and thoracic, LIV ≤T12.
Results: A total of 73 patients (mean age, 61.8 years; 59% female) were evaluated, with the following PDs of their sagittal cervical deformity: cervical, 49.3%; CTJ, 31.5%; thoracic, 13.7%; and spinopelvic, 2.7%. Cervical drivers (n = 36) showed the greatest 1-year postoperative cervical and global alignment changes (improvement in T1S, CL, C0-C2, and C1 slope). Thoracic drivers were more likely to have a persistent severe T1 slope minus cervical lordosis modifier grade at 1 year (0, 20.0%; +, 0.0%; ++, 80.0%). Cervical deformity modifiers tended to improve in cervical patients whose construct included the PD apex (included, 26%; not included, 0%; P = 0.068). Thoracic and cervicothoracic PD apex patients did not improve in HRQL goals when the PD apex was not treated.
Conclusions: CD structural drivers have an important effect on treatment and 1-year postoperative outcomes. Cervical or thoracic drivers not included in the construct result in residual deformity and inferior HRQL goals. These factors should be considered when discussing treatment plans for patients with CD.
abstract_id: PUBMED:37718115
Measuring Outcomes in Spinal Deformity Surgery. Outcome assessment in adult spinal deformity has evolved from radiographic analysis of curve correction to patient-centered perception of health-related quality of life. The Oswestry Disability Index and the Scoliosis Research Society-22 Patient Questionnaire are the predominantly used patient-reported outcome (PRO) measurements for deformity surgery. Correction of sagittal alignment correlates with improved PROs. Functional outcomes and accelerometer measurements represent newer methods of measuring outcomes but have not yet been widely adopted or validated. Further adoption of a minimum set of core outcome domains will help facilitate international comparisons and benchmarking, and ultimately enhance value-based healthcare.
abstract_id: PUBMED:32730730
Classifying Complications: Assessing Adult Spinal Deformity 2-Year Surgical Outcomes. Study Design: Retrospective review of prospective database.
Objective: Complication rates for adult spinal deformity (ASD) surgery vary widely because there is no accepted system for categorization. Our objective was to identify the impact of complication occurrence, minor-major complication, and Clavien-Dindo complication classification (Cc) on clinical variables and patient-reported outcomes.
Methods: Complications in surgical ASD patients with complete baseline and 2-year data were considered intraoperatively, perioperatively (<6 weeks), and postoperatively (>6 weeks). Primary outcome measures were complication timing and severity according to 3 scales: complication presence (yes/no), minor-major, and Cc score. Secondary outcomes were surgical outcomes (estimated blood loss [EBL], length of stay [LOS], reoperation) and health-related quality of life (HRQL) scores. Univariate analyses determined complication presence, type, and Cc grade impact on operative variables and on HRQL scores.
Results: Of 167 patients, 30.5% (n = 51) had intraoperative, 48.5% (n = 81) had perioperative, and 58.7% (n = 98) had postoperative complications. Major intraoperative complications were associated with increased EBL (P < .001) and LOS (P = .0092). Postoperative complication presence and major postoperative complication were associated with reoperation (P < .001). At 2 years, major perioperative complications were associated with worse ODI, SF-36, and SRS activity and appearance scores (P < .02). Increasing perioperative Cc score and postoperative complication presence were the best predictors of worse HRQL outcomes (P < .05).
Conclusion: The Cc Scale was most useful in predicting changes in patient outcomes; at 2 years, patients with raised perioperative Cc scores and postoperative complications saw reduced HRQL improvement. Intraoperative and perioperative complications were associated with worse short-term surgical and inpatient outcomes.
abstract_id: PUBMED:31192099
Comparison of Best Versus Worst Clinical Outcomes for Adult Cervical Deformity Surgery. Study Design: Retrospective cohort study.
Objective: Factors that predict outcomes for adult cervical spine deformity (ACSD) have not been well defined. The objective was to compare ACSD patients with the best versus the worst outcomes.
Methods: This study was based on a prospective, multicenter observational ACSD cohort. Best versus worst outcomes were compared based on Neck Disability Index (NDI), Neck Pain Numeric Rating Scale (NP-NRS), and modified Japanese Orthopaedic Association (mJOA) scores.
Results: Of 111 patients, 80 (72%) had minimum 1-year follow-up. For NDI, compared with best outcome patients (n = 28), worst outcome patients (n = 32) were more likely to have had a major complication (P = .004) and to have undergone a posterior-only procedure (P = .039), had greater Charlson Comorbidity Index (P = .009), and had worse postoperative C7-S1 sagittal vertical axis (SVA; P = .027). For NP-NRS, compared with best outcome patients (n = 26), worst outcome patients (n = 18) were younger (P = .045), had worse baseline NP-NRS (P = .034), and were more likely to have had a minor complication (P = .030). For the mJOA, compared with best outcome patients (n = 16), worst outcome patients (n = 18) were more likely to have had a major complication (P = .007) and to have a better baseline mJOA (P = .030). Multivariate models for NDI included posterior-only surgery (P = .006), major complication (P = .002), and postoperative C7-S1 SVA (P = .012); models for NP-NRS included baseline NP-NRS (P = .009), age (P = .017), and posterior-only surgery (P = .038); and models for mJOA included major complication (P = .008).
Conclusions: Factors distinguishing best and worst ACSD surgery outcomes included patient, surgical, and radiographic factors. These findings suggest areas that may warrant greater awareness to optimize patient counseling and outcomes.
abstract_id: PUBMED:33034220
Narrative Review of Predictive Analytics of Patient-Reported Outcomes in Adult Spinal Deformity Surgery. Study Design: Narrative review.
Objective: Decision making in surgery for adult spinal deformity (ASD) is complex due to the multifactorial etiology, numerous surgical options, and influence of multiple medical and psychosocial factors on patient outcomes. Predictive analytics provide computational tools to analyze large data sets and generate hypotheses regarding new data. In this review, we examine the use of predictive analytics to predict patient-reported outcomes (PROs) in ASD surgery.
Methods: A search of PubMed, Web of Science, and Embase databases was performed to identify all potentially relevant studies up to February 1, 2020. Studies were included based on the use of predictive analytics to predict PROs in ASD.
Results: Of 57 studies identified and reviewed, 7 studies were included. Multiple algorithms, including supervised and unsupervised methods, were used. Significant heterogeneity was observed in the choice of PROs modeled (including ODI, SRS22, and SF36), in the assessment of model accuracy, and in the model accuracy and area under the receiver operating curve values (ranging from 30% to 86% and 0.57 to 0.96, respectively). Models were built with data sets ranging from 89 to 570 patients and from 22 to 267 variables.
Conclusions: Predictive analytics makes accurate predictions regarding PROs for pain, disability, and work and social function; PROs regarding satisfaction, self-image, and psychological aspects of ASD were predicted with the lowest accuracy. Our review demonstrates a relative paucity of studies on ASD, with limited databases. Future studies should include larger and more diverse databases and provide external validation of preexisting models.
Answer: The abstracts provided do not directly address the question of whether 1-year outcomes predict 2-year outcomes for adult deformity surgery. However, some of the abstracts do discuss the predictive value of early postoperative outcomes for longer-term results.
For instance, one study found that early patient-reported outcomes during the first postoperative year can predict outcomes at 3 years in patients with adult spinal deformity (ASD) who were treated surgically. Specifically, incremental improvements in the Oswestry Disability Index (ODI) and Scoliosis Research Society-22r (SRS-22r) scores from baseline to 6 weeks and from 6 weeks to 1 year postoperatively were predictive of 3-year outcomes (PUBMED:28288920).
Another study established a cutoff value for hand grip strength (HGS) to predict favorable outcomes of ASD surgery at 1 year postoperatively. This suggests that certain measurements taken preoperatively can be predictive of outcomes at least up to the 1-year mark (PUBMED:35220775).
While these studies provide some insight into the predictive value of early outcomes, they do not specifically correlate 1-year outcomes with 2-year outcomes. To answer the question definitively, research that directly compares outcomes at these two time points would be required. The abstracts provided do not contain such a comparison, and therefore, we cannot conclude from the given abstracts whether 1-year outcomes predict 2-year outcomes for adult deformity surgery.
Instruction: Does PGA external stenting reduce compliance mismatch in venous grafts?
Abstracts:
abstract_id: PUBMED:17437638
Does PGA external stenting reduce compliance mismatch in venous grafts? Background: Autogenous vein grafting is widely used in regular bypassing procedures. Because the graft mismatches the host artery in both mechanical properties and geometry, it often over-expands under high arterial blood pressure and forms a step-depth where eddy flow develops, causing restenosis, a fibrous graft wall, and related complications. External stents, such as sheaths used to cuff the graft, have been introduced to eliminate these mismatches and increase patency. Although histological and immunochemical studies have shown some positive effects of the external stent, the mechanical mismatch under the protection of an external stent remains poorly analyzed.
Methods: In this study, the jugular veins taken from hypercholesterolemic rabbits were transplanted into the carotid arteries, and non-woven polyglycolic acid (PGA) fabric was used to fabricate the external stents to study the effect of the biodegradable external stent. Eight weeks after the operation, the grafts were harvested to perform mechanical tests and histological examinations. An arc tangent function was suggested to describe the relationship between pressure and cross-sectional area to analyse the compliance of the graft.
Results: The results from the mechanical tests indicated that grafts with or without external stents displayed large compliance in the low-pressure range and were almost inextensible in the high-pressure range. This was very different from the behavior of arteries or veins in vivo. The data from the histological tests showed that, with external stents, collagen fibers were more compact, whilst those in the grafts without protection were looser and thicker. No elastic fibers were found in either kind of graft. Furthermore, grafts without protection were over-expanded, which resulted in much larger cross-sectional areas.
Conclusion: The PGA external stent contributes little to the reduction of the mechanical mismatch between the graft and its host artery while remodeling develops. As for the geometric mismatch, it reduces the cross-sectional area and therefore matches the host artery much better. Although there are some positive effects, PGA is, on balance, not an ideal material for an external stent.
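The "arc tangent function" mentioned in the Methods is a sigmoidal pressure-area relation whose derivative gives the compliance. As a hedged sketch (the abstract does not give the authors' exact parameterization), one plausible form is A(P) = A_min + (A_max − A_min)/π · (arctan((P − P0)/k) + π/2), fitted to pressure-area data, with compliance C(P) = dA/dP; the code below fits this assumed form to synthetic measurements.

```python
# Sketch: fit an arctangent pressure/cross-sectional-area curve and derive
# compliance C(P) = dA/dP. Model form, parameters, and data are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def area(P, A_min, A_max, P0, k):
    # Sigmoid rising from A_min (low pressure) to A_max (high pressure).
    return A_min + (A_max - A_min) / np.pi * (np.arctan((P - P0) / k) + np.pi / 2)

def compliance(P, A_min, A_max, P0, k):
    # Analytic derivative dA/dP of the arctangent model above.
    return (A_max - A_min) / (np.pi * k * (1 + ((P - P0) / k) ** 2))

# Synthetic pressure (mmHg) / area (mm^2) measurements with noise.
rng = np.random.default_rng(4)
P = np.linspace(0, 200, 40)
A_meas = area(P, 3.0, 12.0, 60.0, 20.0) + rng.normal(0, 0.15, P.size)

popt, _ = curve_fit(area, P, A_meas, p0=(2.0, 10.0, 50.0, 15.0))
print("fitted (A_min, A_max, P0, k):", np.round(popt, 2))
print("compliance at 100 mmHg (mm^2/mmHg):", compliance(100.0, *popt))
```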
abstract_id: PUBMED:7489209
Compliance mismatch and formation of distal anastomotic intimal hyperplasia in externally stiffened and lumen-adapted venous grafts. Objective: Compliance and formation of distal anastomotic intimal hyperplasia (DAIH) were investigated in externally stiffened venous grafts of varying calibers.
Methods: Thirty-six femoropopliteal reconstructions were performed in 18 sheep. The autologous venous grafts were inserted into tubes made of Dacron mesh to achieve compliance mismatch and lumen adaptation. Compliance was measured by echo-tracked ultrasonography, and profiles of DAIH were generated from histologic sections harvested after 8.3 months.
Main Results: The external mesh tube significantly lowered the local compliance of the graft and host artery. DAIH appeared extensively in those groups where mesh-tube-constricted venous grafts met untreated host arteries (p = 0.002). No differences in compliance and DAIH formation were observed when grafts with large and adapted diameters were compared.
Conclusions: For prevention of DAIH the distal venous graft diameter is not important, while the local compliance of an autologous vein is a predictive factor for DAIH formation and thus long-term patency.
abstract_id: PUBMED:24083105
Percutaneous transluminal renal angioplasty with stenting for stenotic venous bypass grafts: report of two cases. Cases of percutaneous transluminal renal angioplasty for renal artery stenosis are increasing. However, percutaneous transluminal renal angioplasty with stenting for stenotic venous bypass grafts has never been reported. Herein, the authors describe two cases of percutaneous transluminal renal angioplasty with stenting for a stenotic venous bypass graft. The patients in both cases had undergone bypass grafting using autologous saphenous veins, which were anastomosed directly to their abdominal aortas. We successfully conducted percutaneous transluminal renal angioplasty with stenting. One key to technical success is appropriate selection of a guiding catheter compatible with the postoperative non-anatomical vasculature; the other is relatively high-pressure dilation of the venous stenosis.
abstract_id: PUBMED:38374996
Therapeutic strategies based on non-ionizing radiation to prevent venous neointimal hyperplasia: the relevance for stenosed arteriovenous fistula, and the role of vascular compliance. We have reviewed the development and current status of therapies based on exposure to non-ionizing radiation (with a photon energy of less than 10 eV) aimed at suppressing venous neointimal hyperplasia and, consequently, at avoiding stenosis in arteriovenous grafts. Owing to the drawbacks associated with the medical use of ionizing radiation, most prominently radiation-induced cardiovascular disease, the availability of procedures using non-ionizing radiation is becoming a noteworthy objective for current research. Further, the focus of the review was the use of such procedures for improving vascular access function and assuring the clinical success of arteriovenous fistulae in hemodialysis patients. Following a brief discussion of the physical principles underlying radiotherapy, the current methods based on non-ionizing radiation, either in use or under development, are described in detail. There are currently five such techniques: photodynamic therapy (PDT), far-infrared therapy, photochemical tissue passivation (PTP), Alucent vascular scaffolding, and adventitial photocrosslinking. The last three are contingent on the mechanical stiffening achievable by exogenous photochemical crosslinking of tissue collagen, a process that decreases venous compliance. As there are conflicting opinions on the role of compliance mismatch between arterial and venous conduits in a graft, this aspect was also considered in our review.
abstract_id: PUBMED:36943136
In vivo evaluation of compliance mismatch on intimal hyperplasia formation in small diameter vascular grafts. Small-diameter synthetic vascular grafts have a high failure rate due to thrombosis and intimal hyperplasia formation. Compliance mismatch between the synthetic graft and the native artery has been speculated to be one of the main causes of intimal hyperplasia. However, changing the compliance of synthetic materials without altering material chemistry remains a challenge. Here, we used poly(vinyl alcohol) (PVA) hydrogel as a graft material, owing to its biocompatibility and tunable mechanical properties, to investigate the role of graft compliance in the development of intimal hyperplasia and in vivo patency. Two groups of PVA small-diameter grafts, with low compliance and high compliance, were fabricated by a dip-casting method and implanted in a rabbit carotid artery end-to-side anastomosis model for 4 weeks. We demonstrated that grafts whose compliance more closely matched the rabbit carotid artery had lower anastomotic intimal hyperplasia formation and higher graft patency compared to low-compliance grafts. Overall, this study suggests that reducing the compliance mismatch between the native artery and vascular grafts is beneficial for reducing intimal hyperplasia formation.
abstract_id: PUBMED:37456727
Design and computational optimization of compliance-matching aortic grafts. Introduction: Synthetic vascular grafts have been widely used in clinical practice for aortic replacement surgery. Despite their high rates of surgical success, they remain significantly less compliant than the native aorta, resulting in a phenomenon called compliance mismatch. This incompatibility of elastic properties may cause serious post-operative complications, including hypertension and myocardial hypertrophy. Methods: To mitigate the risk of these complications, we designed a multi-layer compliance-matching stent-graft, which we optimized computationally using finite element analysis and subsequently evaluated in vitro. Results: We found that our compliance-matching grafts attained the distensibility of healthy human aortas, including those of young adults, thereby significantly exceeding the distensibility of gold-standard grafts. The compliant grafts maintained their properties in a wide range of conditions that are expected after implantation. Furthermore, the computational model predicted the graft radius with enough accuracy to allow computational optimization to be performed effectively. Conclusion: Compliance-matching grafts may offer a valuable improvement over existing prostheses, and they could potentially mitigate the risk of post-operative complications attributed to excessive graft stiffness.
abstract_id: PUBMED:38438692
Patient-Specific Haemodynamic Analysis of Virtual Grafting Strategies in Type-B Aortic Dissection: Impact of Compliance Mismatch. Introduction: Compliance mismatch between the aortic wall and Dacron grafts is a clinical problem concerning aortic haemodynamics and morphological degeneration. The aortic stiffness introduced by grafts can lead to an increased left ventricular (LV) afterload. This study quantifies the impact of compliance mismatch by virtually testing different Type-B aortic dissection (TBAD) surgical grafting strategies in patient-specific, compliant computational fluid dynamics (CFD) simulations.
Materials And Methods: A post-operative case of TBAD was segmented from computed tomography angiography data. Three virtual surgeries were generated using different grafts; two additional cases with compliant grafts were assessed. Compliant CFD simulations were performed using a patient-specific inlet flow rate and three-element Windkessel outlet boundary conditions informed by 2D-Flow MRI data. The wall compliance was calibrated using Cine-MRI images. Pressure, wall shear stress (WSS) indices and energy loss (EL) were computed.
Results: Increased aortic stiffness and longer grafts increased aortic pressure and EL. Implementing a compliant graft matching the aortic compliance of the patient reduced the pulse pressure by 11% and EL by 4%. The endothelial cell activation potential (ECAP) differed the most within the aneurysm, where the maximum percentage difference between the reference case and the mid (MDA) and complete (CDA) descending aorta replacements increased by 16% and 20%, respectively.
Conclusion: This study suggests that by minimising graft length and matching its compliance to the native aorta whilst aligning with surgical requirements, the risk of LV hypertrophy may be reduced. This provides evidence that compliance-matching grafts may enhance patient outcomes.
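The "three-element Windkessel" outlet boundary condition mentioned above lumps the downstream vasculature into a characteristic resistance R_c in series with a parallel compliance C and distal resistance R_d. Below is a minimal, stand-alone sketch of that lumped model (explicit Euler integration, synthetic half-sine inflow); all parameter values are assumptions for illustration, not the study's calibrated values.

```python
# Sketch: 3-element Windkessel outlet model, P = P_c + Q*R_c, with
# C * dP_c/dt = Q - P_c/R_d. Explicit Euler on a synthetic inflow waveform.
import numpy as np

R_c, R_d, C = 0.05, 1.0, 1.5   # mmHg*s/ml and ml/mmHg (assumed values)
T, dt = 1.0, 1e-4              # cardiac period (s), time step (s)
t = np.arange(0, 10 * T, dt)   # simulate 10 beats to reach steady state

# Half-sine systolic inflow (ml/s), zero during diastole.
Q = np.where((t % T) < 0.3, 400.0 * np.sin(np.pi * (t % T) / 0.3), 0.0)

P_c = np.zeros_like(t)
for i in range(len(t) - 1):
    P_c[i + 1] = P_c[i] + dt * (Q[i] - P_c[i] / R_d) / C

P = P_c + Q * R_c  # pressure at the outlet face
last_beat = t >= 9 * T
print(f"systolic {P[last_beat].max():.0f} / diastolic {P[last_beat].min():.0f} mmHg")
```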
abstract_id: PUBMED:34024615
External stenting and disease progression in saphenous vein grafts two years after coronary artery bypass grafting: A multicenter randomized trial. Objectives: Little data exist regarding the potential of external stents to mitigate long-term disease progression in saphenous vein grafts. We investigated the effect of external stents on the progression of saphenous vein graft disease.
Methods: A total of 184 patients undergoing isolated coronary artery bypass grafting, using an internal thoracic artery graft and at least 2 additional saphenous vein grafts, were enrolled in 14 European centers. One saphenous vein graft was randomized to an external stent, and one nonstented saphenous vein graft served as the control. The primary end point was the saphenous vein graft Fitzgibbon patency scale assessed by angiography, and the secondary end point was saphenous vein graft intimal hyperplasia assessed by intravascular ultrasound in a prespecified subgroup at 2 years.
Results: Angiography was completed in 128 patients and intravascular ultrasound in the entire prespecified cohort (n = 51) at 2 years. Overall patency rates were similar between stented and nonstented saphenous vein grafts (78.3% vs 82.2%, P = .43). However, the Fitzgibbon patency scale was significantly improved in stented versus nonstented saphenous vein grafts, with Fitzgibbon patency scale I, II, and III rates of 66.7% versus 54.9%, 27.8% versus 34.3%, and 5.5% versus 10.8%, respectively (odds ratio, 2.02; P = .03). Fitzgibbon patency scale was inversely related to saphenous vein graft minimal lumen diameter, with Fitzgibbon patency scale I, II, and III saphenous vein grafts having an average minimal lumen diameter of 2.62 mm, 1.98 mm, and 1.32 mm, respectively (P < .05). Externally stented saphenous vein grafts also showed significant reductions in mean intimal hyperplasia area (22.5%; P < .001) and thickness (23.5%; P < .001).
Conclusions: Two years after coronary artery bypass grafting, external stenting improves Fitzgibbon patency scales of saphenous vein grafts and significantly reduces intimal hyperplasia area and thickness. Whether this will eventually lead to improved long-term patency is still unknown.
abstract_id: PUBMED:33845865
External stenting of vein grafts in coronary artery bypass grafting: interim results from a two-center prospective study. Background: Previous studies evaluating external stents for saphenous vein grafts (SVG) in CABG were limited to on-pump isolated CABG and a single grafting technique with one external stent per patient. The objective of this prospective study was to evaluate the safety and short-term performance of external stents in a heterogeneous group of patients who underwent on- and off-pump CABG with single and sequential grafting.
Methods: 102 patients undergoing CABG were enrolled in two centers. All patients received an internal mammary artery graft to the left anterior descending artery and additional arterial and/or venous grafts. In each patient, at least one SVG was supported with an external stent. Graft patency and SVG lumen uniformity were assessed using CT angiography in a pre-defined time window of 6-12 months post procedure. All patients were prospectively followed up via phone call and/or visit every 6 months for Major Adverse Cardiac and Cerebrovascular Events.
Results: 51 patients (50%) underwent off-pump CABG and 23 patients (23%) were grafted with bilateral internal mammary arteries. Each patient received one or more SVGs, grafted with a sequential technique (44%) or as a single graft (56%). In 84% of patients all SVGs were externally stented, and in 16% (n = 16) one SVG was stented and one remained unsupported. At 6-12 months, the patency rates of LIMA, RIMA, externally stented SVGs, and non-stented SVGs were 100%, 100%, 98%, and 87.5%, respectively. 90% of the externally stented SVGs had a uniform lumen, compared to 37% of the non-stented SVGs. Clinical follow-up was completed for all patients, with a mean duration of 20 months (range 6-54 months). During the follow-up period, one patient experienced myocardial infarction due to occlusion of the LIMA-LAD graft and one patient experienced a transient ischemic attack.
Conclusions: External stenting of SVG is feasible and safe in CABG setting which includes off pump CABG and sequential SVG grafting and associated with acceptable early patency rates.
Trial Registration: Study was registered at ClinicalTrials.gov. NCT01860274 (initial release 20.05.2013).
abstract_id: PUBMED:33592294
External stenting and disease progression in vein grafts 1 year after open surgical repair of popliteal artery aneurysm. Objective: Open surgical repair remains the gold standard treatment for popliteal artery aneurysms (PAA). The objective of this study was to evaluate the safety of external stenting and its medium-term effect on vein graft disease after open PAA repair.
Methods: Between December 2017 and September 2019, 12 consecutive patients with PAA underwent open surgical repair with externally stented saphenous vein grafts. Duplex ultrasound scanning of the grafts was performed at discharge and at 3, 6, and 12 months after the procedure to evaluate graft patency, average lumen diameter and lumen uniformity.
Results: Eleven patients underwent aneurysm ligation and bypass grafting and one patient was treated with aneurysm exclusion and interposition of a venous segment. External stenting of the vein graft was successful in all patients. The mean follow-up time was 12 months (range, 7-17 months), with a primary patency rate of 100% and no graft revisions or reinterventions. The mean lumen diameters at 3, 6, and 12 months were 5.9 ± 1.2 mm, 5.7 ± 0.8 mm, and 5.7 ± 0.7 mm, respectively, with no significant changes between 3 and 6 months (P = .34) or between 6 and 12 months (P = .34). The coefficient of variation at 3, 6, and 12 months was 8.2 ± 9.3, 9.4 ± 7.2, and 10.4 ± 8.9, respectively, with no significant change between 3 and 6 months (P = .78) or 6 and 12 months (P = .98). No mortality or amputations were recorded throughout the follow-up period.
Conclusions: External stenting of vein grafts in open surgical repair of PAA is feasible and safe. This technique may potentially improve the outcomes of surgical repair in patients with PAA.
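The lumen-uniformity metric quoted above is a coefficient of variation: the standard deviation of the lumen diameter expressed as a percentage of its mean. A tiny illustrative calculation, with invented diameter measurements rather than the study's duplex data, looks like this:

```python
# Sketch: coefficient of variation of graft lumen diameter along its length.
# The measurements below are invented for illustration.
import numpy as np

diameters_mm = np.array([5.8, 6.1, 5.6, 5.9, 6.0, 5.5])  # one graft, one visit
cv_percent = 100.0 * diameters_mm.std(ddof=1) / diameters_mm.mean()
print(f"CV = {cv_percent:.1f}%")
```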
Answer: The use of PGA (polyglycolic acid) external stenting in venous grafts does not significantly reduce the mechanical compliance mismatch between the graft and the host artery. A study involving hypercholesterolemic rabbits transplanted with jugular veins into carotid arteries and using PGA fabric for external stents found that the PGA external stent contributed little to reducing the mechanical mismatch. However, it did reduce the cross-sectional area of the grafts, thus better matching the geometry with the host artery. Despite some positive effects, PGA was not considered an ideal material for external stents (PUBMED:17437638).
Other studies have focused on the importance of compliance matching to prevent complications such as distal anastomotic intimal hyperplasia (DAIH) and have suggested that local compliance of the autologous vein is a predictive factor for DAIH formation and long-term patency, rather than the distal venous graft diameter (PUBMED:7489209). Additionally, research on small diameter synthetic vascular grafts has indicated that grafts with compliance more closely matching the native artery had lower anastomotic intimal hyperplasia formation and higher graft patency compared to low compliance grafts (PUBMED:36943136).
Furthermore, computational optimization of compliance-matching aortic grafts has shown that grafts designed to match the distensibility of healthy human aortas could potentially mitigate the risk for post-operative complications attributed to excessive graft stiffness (PUBMED:37456727). Similarly, patient-specific hemodynamic analysis of virtual grafting strategies in Type-B aortic dissection has suggested that minimizing graft length and matching graft compliance to the native aorta may reduce the risk of left ventricular hypertrophy and enhance patient outcomes (PUBMED:38438692).
In summary, while PGA external stenting may not significantly reduce mechanical compliance mismatch, the concept of compliance matching in vascular grafts is important for reducing complications and improving long-term patency. Other materials and design strategies that more closely match the compliance of native vessels are being explored to address this issue.
Instruction: Does the internal thoracic artery graft delay the recovery of myocardial metabolism?
Abstracts:
abstract_id: PUBMED:8823087
Does the internal thoracic artery graft delay the recovery of myocardial metabolism? Background: The left internal thoracic artery (LITA) bypass graft to the left anterior descending artery has greater long-term patency than a saphenous vein graft. However, surgeons may be reluctant to use the LITA graft in some patients because they are unable to deliver cardioplegia to the left anterior descending artery territory.
Methods: We compared the myocardial levels of high-energy phosphates and their metabolites in patients who received an LITA graft with those in patients who received a saphenous vein graft to the left anterior descending artery territory during elective coronary artery bypass grafting. Right and left ventricular biopsy specimens were obtained at three time points: before aortic cross-clamping, after cross-clamp removal, and after 10 minutes of reperfusion.
Results: No differences were found between the LITA graft group and the saphenous vein graft group in any right ventricular metabolites. There was an improvement in myocardial protection over time and a higher proportion of LITA graft patients in the late time period (early group, 63% versus late group, 80%; p < 0.01). Within each time period, there were no differences between the LITA and saphenous vein graft groups. Among patients receiving cold antegrade cardioplegia, the myocardial levels of high-energy phosphates were better preserved in those receiving an LITA graft.
Conclusions: Advances in myocardial protection have led to improved preservation of high-energy phosphate levels after cardioplegic arrest. In patients undergoing elective coronary artery bypass grafting, the use of an LITA graft does not adversely affect myocardial metabolism. Further investigations are required to determine the effects of the use of the LITA during urgent or emergent procedures.
abstract_id: PUBMED:31638700
Bilateral internal thoracic artery grafting: propensity analysis of the left internal thoracic artery versus the right internal thoracic artery as a bypass graft to the left anterior descending artery. Objectives: To compare different configurations of the bilateral internal thoracic arteries for the left coronary system and examine early and late outcomes, including mid-term graft patency.
Methods: We reviewed 877 patients who underwent primary isolated coronary artery bypass grafting using in situ bilateral internal thoracic arteries [in situ right internal thoracic artery (RITA)-to-left anterior descending artery (LAD) grafting, n = 683; in situ left internal thoracic artery (LITA)-to-LAD grafting, n = 194]. We compared mid-term patency between the grafts. Propensity score matching was performed to investigate early and long-term outcomes.
Results: The 2-year patency rates for RITA-to-LAD and LITA-to-LAD grafts were similar. Multivariate analysis revealed that RITA-to-non-LAD anastomosis (P = 0.029), postoperative length of stay (P = 0.003), and chronic obstructive pulmonary disease (P = 0.005) were associated with graft failure. After statistical adjustment, 176 propensity-matched pairs were available for comparison. RITA-to-LAD grafting enabled a more distal anastomosis. Kaplan-Meier analysis revealed that the incidences of death, repeat revascularization, and myocardial infarction were significantly higher in the LITA-to-LAD group in both the unmatched and matched samples (P = 0.045 and 0.029, respectively).
Conclusions: The mid-term patency and outcomes of RITA-to-LAD grafting are good, and this configuration reduces future cardiac events compared with LITA-to-LAD grafting.
abstract_id: PUBMED:26907619
Management of a Left Internal Thoracic Artery Graft Injury during Left Thoracotomy for Thoracic Surgery. There have been some recent reports on the surgical treatment of lung cancer in patients following previous coronary artery bypass graft surgery. Use of an internal thoracic artery graft is the gold standard in cardiac surgery, with superior long-term patency. The left internal thoracic artery graft is usually patent during left lung resection in patients who present to the surgeon with an operable lung cancer. We present our institutional experience with left-sided thoracic surgery in patients who have had previous coronary artery surgery with a patent internal thoracic artery graft.
abstract_id: PUBMED:17992305
Immediate results of right internal thoracic artery and radial artery as the second arterial graft in myocardial revascularization. Objective: We sought to compare early clinical outcomes in patients receiving a right internal thoracic artery or a radial artery as the second arterial graft in myocardial revascularization.
Methods: We retrospectively studied 58 consecutive patients who underwent coronary artery bypass surgery and received both a left internal thoracic artery graft and either a right internal thoracic artery (n=20) or a radial artery graft (n=38), between January 2004 and March 2006. Hospital mortality, pleural drainage, operative time and postoperative complications were analyzed.
Results: There were no significant preoperative differences between groups. There was only one (1.7%) in-hospital death, which occurred in the Radial Group. Operative time was significantly longer in the Right Internal Thoracic Group (p-value = 0.0018), but this was not associated with increased Intensive Care Unit stays, mechanical ventilation, or other postoperative complications. We were able to perform significantly more distal anastomoses using the radial artery than the right internal thoracic artery (1.57 versus 1.05; p-value = 0.003).
Conclusion: In our group of patients, the use of a right internal thoracic artery as a second arterial graft was associated with a prolonged operative time, but had no interference with the immediate clinical outcomes.
abstract_id: PUBMED:30505754
Saphenous vein as a composite graft from the internal thoracic artery. The saphenous vein (SV) has been used as an aortocoronary bypass graft for coronary artery bypass grafting (CABG) for the past 50 years. However, CABG using the aortocoronary SV has shown disadvantages of lower long-term graft patency rates and subsequently worse clinical outcomes, compared with CABG using the internal thoracic artery (ITA). The advantages of CABG using the ITA prompted interest in total arterial revascularization, using the bilateral ITAs and other arterial conduits as composite graft configurations in patients exhibiting multi-vessel disease. Total arterial revascularization using a Y- or T-composite graft based on the in situ ITA increases the length of the arterial graft and allows the extensive use of arterial conduits to revascularize both the left and right coronary territories. Further, it has demonstrated favorable outcomes in terms of angiographic patency rates, myocardial perfusion and thickening by single photon emission computed tomography, and long-term clinical outcomes. However, previous studies describing the use of the SV conduit as a composite graft have produced conflicting results. In this article, a recent surgical strategy of using the SV as part of a composite graft based on the in situ left ITA will be discussed.
abstract_id: PUBMED:16358150
Left internal thoracic artery to left pulmonary artery fistula after coronary artery bypass graft surgery. A rare cause of myocardial ischemia. We report a patient who developed dyspnea on mild exertion six years after coronary artery bypass graft surgery (CABG). Myocardial ischemia was documented by radionuclide imaging, and coronary angiography showed patency of all grafts and a large fistula between the left internal thoracic artery (LITA) and the left pulmonary artery (LPA). The patient underwent surgical closure of the fistula and made an excellent recovery.
abstract_id: PUBMED:37066715
Internal thoracic artery on the aorta: A simple radial connection. When performing total arterial coronary artery bypass revascularisation, using internal thoracic arteries as in situ grafts is not always feasible. Implantation of an internal thoracic artery on the aorta may then be necessary, a situation rarely planned preoperatively. Herein, we describe a simple and original way to perform this anastomosis. A 2-cm length of spare radial artery graft, ended by a clip, is anastomosed to the aorta in a standard fashion. The internal thoracic artery is then sewn onto the radial dome. We obtain a wide arterial anastomotic chamber using a standard technique that is safe and easily reproducible.
abstract_id: PUBMED:10735687
Complete myocardial revascularization with bilateral internal thoracic artery T graft. Background: The internal thoracic artery is widely recognized as the ideal graft for coronary artery bypass procedures. However, because of the limited length of the conduit, bilateral internal thoracic artery grafting was not suitable for complete revascularization. To overcome this limitation, the T graft was introduced in the 1990s. We decided to prospectively assess the safety of this technique.
Methods: One hundred six patients with a mean age of 51.5 years underwent complete revascularization with an internal thoracic artery T graft. Mean left ventricular ejection fraction was 0.60 (range, 0.22 to 0.85).
Results: No patient required reexploration for bleeding, and no patient died within 30 days after operation. On the basis of electrocardiographic changes, 3 patients sustained a perioperative myocardial infarction. One patient had a sternal wound infection. Mean follow-up was 35 months (range, 15 to 61 months). The actuarial survival rate was 99% +/- 1% at 5 years. No myocardial infarctions were reported during the follow-up. Seven patients had recurrent angina. Eighty patients (76%) underwent postoperative stress tests, and 90% had negative results.
Conclusions: Complete myocardial revascularization with the T graft is a safe and reliable technique with excellent midterm results.
abstract_id: PUBMED:35295725
Late spontaneous internal thoracic artery graft dissection after coronary bypass grafting: a case report. Background: Internal thoracic artery (ITA) grafts are commonly used for coronary artery bypass grafting, with dissection to the graft being a rare occurrence. Herein, we describe a case of spontaneous ITA graft dissection occurring 11 years after grafting, with no clear precipitating incidence.
Case Summary: The patient was a 61-year-old man who presented with a 3-month history of chest pain and dyspnoea. Dissection of the left internal thoracic artery (LITA) graft was observed on angiography, with Thrombolysis In Myocardial Infarction (TIMI) grade 2 blood flow. Intravascular ultrasound confirmed an intimal tear in the proximal graft, with an intramural haematoma. In the absence of atherosclerotic changes, the dissection was treated directly using multiple drug-eluting stents to prevent further extension of the intramural haematoma proximally into the subclavian artery and distally to the anastomosis site. Post-procedural angiography revealed an enlarged true lumen of the LITA, shrinking of the intramural haematoma, and improvement in blood flow to TIMI grade 3. Chest symptoms resolved immediately after the procedure, and the patient remained asymptomatic over the 6-month period following the procedure.
Discussion: Dissection of the ITA graft can occur spontaneously long after the initial grafting. Intravascular ultrasound is useful for diagnosis. Ensuring adequate coverage of the edges of the dissection with stenting could prevent further extension of the intramural haematoma.
abstract_id: PUBMED:19436806
"Pulmonary slit" procedure for preventing tension on the left internal thoracic artery graft. The gold-standard bypass graft to the left anterior descending coronary artery is the left internal thoracic artery harvested with its pedicle. At times, however, the length of the internal thoracic artery is insufficient for distal anastomosis. Different methods of lengthening the internal thoracic artery or of reducing the distance to the anastomosis site have been described, but at times even these may be inadequate. In order to extend the benefits of the left internal thoracic artery graft to more patients, we perform the "pulmonary slit" procedure as described here.
Answer: No, the use of an internal thoracic artery (ITA) graft does not delay the recovery of myocardial metabolism. A study comparing myocardial levels of high-energy phosphates and their metabolites in patients who received a left internal thoracic artery (LITA) graft with those in patients who received a saphenous vein graft to the left anterior descending artery territory during elective coronary artery bypass grafting found no differences in myocardial metabolism between the two groups. In fact, advances in myocardial protection have led to improved preservation of high-energy phosphate levels after cardioplegic arrest, and the use of an LITA graft does not adversely affect myocardial metabolism (PUBMED:8823087).
Instruction: Is a randomised controlled trial of a maternity care intervention for pregnant adolescents possible?
Abstracts:
abstract_id: PUBMED:24225138
Is a randomised controlled trial of a maternity care intervention for pregnant adolescents possible? An Australian feasibility study. Background: The way in which maternity care is provided affects perinatal outcomes for pregnant adolescents; including the likelihood of preterm birth. The study purpose was to assess the feasibility of recruiting pregnant adolescents into a randomised controlled trial, in order to inform the design of an adequately powered trial which could test the effect of caseload midwifery on preterm birth for pregnant adolescents.
Methods: We recruited pregnant adolescents into a feasibility study of a prospective, un-blinded, two-arm, randomised controlled trial of caseload midwifery compared to standard care. We recorded and analysed recruitment data in order to provide estimates to be used in the design of a larger study.
Results: The proportion of women aged 15-17 years who were eligible for the study was 34% (n = 10); however, the proportion who agreed to be randomised was only 11% (n = 1). Barriers to recruitment were restrictive eligibility criteria, unwillingness of hospital staff to assist with recruitment, and unwillingness of pregnant adolescents to have their choice of maternity carer removed through randomisation.
Conclusions: A randomised controlled trial of caseload midwifery care for pregnant adolescents would not be feasible in this setting without modifications to the research protocol. The recruitment plan should maximise opportunities for participation by increasing the upper age limit and enabling women to be recruited at a later gestation. Strategies to engage the support of hospital-employed staff are essential and would require substantial, and ongoing, work. A Zelen method of post-randomisation consent, monetary incentives and 'peer recruiters' could also be considered.
abstract_id: PUBMED:29506563
Evaluation of community-level interventions to increase early initiation of antenatal care in pregnancy: protocol for the Community REACH study, a cluster randomised controlled trial with integrated process and economic evaluations. Background: The provision of high-quality maternity services is a priority for reducing inequalities in health outcomes for mothers and infants. Best practice includes women having their initial antenatal appointment within the first trimester of pregnancy in order to provide screening and support for healthy lifestyles, well-being and self-care in pregnancy. Previous research has identified inequalities in access to antenatal care, yet there is little evidence on interventions to improve early initiation of antenatal care. The Community REACH trial will assess the effectiveness and cost-effectiveness of engaging communities in the co-production and delivery of an intervention that addresses this issue.
Methods/design: The study design is a matched cluster randomised controlled trial with integrated process and economic evaluations. The unit of randomisation is electoral ward. The intervention will be delivered in 10 wards; 10 comparator wards will have normal practice. The primary outcome is the proportion of pregnant women attending their antenatal booking appointment by the 12th completed week of pregnancy. This and a number of secondary outcomes will be assessed for cohorts of women (n = approximately 1450 per arm) who give birth 2-7 and 8-13 months after intervention delivery completion in the included wards, using routinely collected maternity data. Eight hospitals commissioned to provide maternity services in six NHS trusts in north and east London and Essex have been recruited to the study. These trusts will provide anonymised routine data for randomisation and outcomes analysis. The process evaluation will examine intervention implementation, acceptability, reach and possible causal pathways. The economic evaluation will use a cost-consequences analysis and decision model to evaluate the intervention. Targeted community engagement in the research process was a priority.
Discussion: Community REACH aims to increase early initiation of antenatal care using an intervention that is co-produced and delivered by local communities. This pragmatic cluster randomised controlled trial, with integrated process and economic evaluation, aims to rigorously assess the effectiveness of this public health intervention, which is particularly complex due to the required combination of standardisation with local flexibility. It will also answer questions about scalability and generalisability.
Trial Registration: ISRCTN registry: registration number 63066975. Registered on 18 August 2015.
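The Community REACH design above randomises matched electoral wards rather than individuals. As a minimal, hypothetical sketch of matched-pair cluster randomisation (the ward names, pairing criteria, and seed are invented for illustration, not taken from the trial), the allocation step could look like this in Python:

```python
# Hypothetical matched-pair cluster randomisation: wards are paired on
# baseline characteristics, then one ward per pair is randomly assigned
# to the intervention arm. Ward names and pairs are invented.
import random

random.seed(42)  # fixed seed so the allocation is reproducible and auditable
matched_pairs = [("ward_A1", "ward_A2"),
                 ("ward_B1", "ward_B2"),
                 ("ward_C1", "ward_C2")]  # pairs matched on e.g. deprivation

allocation = {}
for ward_x, ward_y in matched_pairs:
    intervention = random.choice([ward_x, ward_y])
    comparator = ward_y if intervention == ward_x else ward_x
    allocation[intervention] = "intervention"
    allocation[comparator] = "control"
print(allocation)
```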
abstract_id: PUBMED:27084751
Improving maternity care using a personal health record: study protocol for a stepped-wedge, randomised, controlled trial. Background: A personal health record (PHR) is an online application through which individuals can access, manage, and share their health information in a private, secure, and confidential environment. Personal health records empower patients, facilitate collaboration among healthcare professionals, and improve health outcomes. Given these anticipated positive effects, we want to implement a PHR, named MyPregn@ncy, in a Dutch maternity care setting and to evaluate its effects in routine care. This paper presents the study protocol.
Methods/design: The effects of implementing a PHR in maternity care on patients and professionals will be identified in a stepped-wedge, cluster-randomised, controlled trial. The study will be performed in the region of Nijmegen, a Dutch area with an average of 4,500 births a year and more than 230 healthcare professionals involved in maternity care. Data analyses will describe the effects of MyPregn@ncy on health outcomes in maternity care, quality of care from the patients' perspectives, and collaboration among healthcare professionals. Additionally, a process evaluation of the implementation of MyPregn@ncy will be performed. Data will be collected using data from the Dutch perinatal registry, questionnaires, interviews, and log data.
Discussion: The study is expected to yield new information about the effects, strengths, possibilities, and challenges to the implementation and usage of a PHR in routine maternal care settings. Results may lead to new insights and improvements in the quality of maternal and perinatal care.
Trial Registration: Netherlands Trial Register: NTR4063.
abstract_id: PUBMED:34360168
Effects of a Midwife-Coordinated Maternity Care Intervention (ChroPreg) vs. Standard Care in Pregnant Women with Chronic Medical Conditions: Results from a Randomized Controlled Trial. The proportion of childbearing women with pre-existing chronic medical conditions (CMC) is rising. In a randomized controlled trial, we aimed to evaluate the effects of a midwife-coordinated maternity care intervention (ChroPreg) in pregnant women with CMC. The intervention consisted of three main components: (1) Midwife-coordinated and individualized care, (2) Additional ante-and postpartum consultations, and (3) Specialized known midwives. The primary outcome was the total length of hospital stay (LOS). Secondary outcomes were patient-reported outcomes measuring psychological well-being and satisfaction with maternity care, health utilization, and maternal and infant outcomes. A total of 362 women were randomized to the ChroPreg intervention (n = 131) or Standard Care (n = 131). No differences in LOS were found between groups (median 3.0 days, ChroPreg group 0.1% lower LOS, 95% CI -7.8 to 7%, p = 0.97). Women in the ChroPreg group reported being more satisfied with maternity care measured by the Pregnancy and Childbirth Questionnaire (PCQ) compared with the Standard Care group (mean PCQ 104.5 vs. 98.2, mean difference 6.3, 95% CI 3.0-10.0, p < 0.0001). In conclusion, the ChroPreg intervention did not reduce LOS. However, women in the ChroPreg group were more satisfied with maternity care.
abstract_id: PUBMED:30459959
Testing the effectiveness of REACH Pregnancy Circles group antenatal care: protocol for a randomised controlled pilot trial. Background: Antenatal care is an important public health priority. Women from socially disadvantaged, and culturally and linguistically diverse groups often have difficulties with accessing antenatal care and report more negative experiences with care. Although group antenatal care has been shown in some settings to be effective for improving women's experiences of care and for improving other maternal as well as newborn health outcomes, these outcomes have not been rigorously assessed in the UK. A pilot trial will be conducted to determine the feasibility of, and optimum methods for, testing the effectiveness of group antenatal care in an NHS setting serving populations with high levels of social deprivation and cultural, linguistic and ethnic diversity. Outcomes will inform the protocol for a future full trial.
Methods: This protocol outlines an individual-level randomised controlled external pilot trial with integrated process and economic evaluations. The two trial arms will be group care and standard antenatal care. The trial will involve the recruitment of 72 pregnant women across three maternity services within one large NHS Acute Trust. Baseline, outcomes and economic data will be collected via questionnaires completed by the participants at three time points, with the final scheduled for 4 months postnatal. Routine maternity service data will also be collected for outcomes assessment and economic evaluation purposes. Stakeholder interviews will provide insights into the acceptability of research and intervention processes, including the use of interpreters to support women who do not speak English. Pre-agreed criteria have been selected to guide the decision about whether or not to progress to a full trial.
Discussion: This pilot trial will determine if it is appropriate to proceed to a full trial of group antenatal care in this setting. If progression is supported, the pilot will provide authoritative high-quality evidence to inform the design and conduct of a trial in this important area that holds significant potential to influence maternity care, outcomes and experience.
Trial Registration: ISRCTN registry: ISRCTN66925258. Registered 03 April 2017. Retrospectively registered.
abstract_id: PUBMED:24861802
Professional breastfeeding support for first-time mothers: a multicentre cluster randomised controlled trial. Objective: To evaluate the effect of two postnatal professional support interventions on the duration of any and exclusive breastfeeding.
Design: Multicentre, three-arm, cluster randomised controlled trial.
Population: A cohort of 722 primiparous breastfeeding mothers with uncomplicated, full-term pregnancies.
Methods: The three study interventions were: (1) standard postnatal maternity care; (2) standard care plus three in-hospital professional breastfeeding support sessions, of 30-45 minutes in duration; or (3) standard care plus weekly post-discharge breastfeeding telephone support, of 20-30 minutes in duration, for 4 weeks. The interventions were delivered by four trained research nurses, who were either highly experienced registered midwives or certified lactation consultants.
Main Outcome Measures: Prevalence of any and exclusive breastfeeding at 1, 2, and 3 months postpartum.
Results: Rates of any and exclusive breastfeeding were higher among participants in the two intervention groups at all follow-up points, when compared with those who received standard care. Participants receiving telephone support were significantly more likely to continue any breastfeeding at 1 month (76.2 versus 67.3%; odds ratio, OR 1.63, 95% confidence interval, 95% CI 1.10-2.41) and at 2 months (58.6 versus 48.9%; OR 1.48, 95% CI 1.04-2.10), and to be exclusively breastfeeding at 1 month (28.4 versus 16.9%; OR 1.89, 95% CI 1.24-2.90). Participants in the in-hospital support group were also more likely to be breastfeeding at all time points, but the effect was not statistically significant.
Conclusions: Professional breastfeeding telephone support provided early in the postnatal period, and continued for the first month postpartum, improves breastfeeding duration among first-time mothers. It is also possible that it was the continuing nature of the support that increased the effectiveness of the intervention, rather than the delivery of the support by telephone specifically.
abstract_id: PUBMED:31138296
Efficacy of a midwife-coordinated, individualized, and specialized maternity care intervention (ChroPreg) in addition to standard care in pregnant women with chronic disease: protocol for a parallel randomized controlled trial. Background And Objectives: The number of women of childbearing age with chronic diseases is rising. Evidence has shown that obstetric complications and poor psychological well-being are more prevalent among this group, in addition to these women reporting experiences of less than satisfactory care. More research is needed to investigate how to best meet the special needs of this group during pregnancy and postpartum. Previous research has shown that care coordination, continuity of care, woman-centered care, and specialized maternity care interventions delivered to women with high-risk pregnancies can improve patient-reported outcomes and pregnancy outcomes and be cost-effective. However, no previous trials have examined the efficacy and cost-effectiveness of such interventions among pregnant women with chronic diseases. This paper describes the protocol of a randomized controlled trial (RCT) of a midwife-coordinated, individualized and specialized maternity care intervention (ChroPreg) as an add-on to standard care for pregnant women with chronic diseases.
Methods/design: This two-arm parallel group RCT will be conducted from October 2018 through June 2020 at the Department of Obstetrics, Copenhagen University Hospital, Rigshospitalet, Denmark. Pregnant women with chronic diseases are invited to participate; women will be randomized and allocated 1:1 to the ChroPreg intervention plus standard care or standard care alone. The ChroPreg intervention consists of three main components: (1) coordinated and individualized care, (2) additional ante- and postpartum consultations, and (3) specialized midwives. The primary outcome is length of hospital stay during pregnancy and in the postpartum period, and secondary outcomes are psychological well-being (five-item World Health Organization Well-Being Index, Edinburgh Postnatal Depression Scale, Cambridge Worry Scale), health-related quality of life (12-Item Short Form Health Survey), patient satisfaction (Pregnancy and Childbirth Questionnaire), number of antenatal contacts, and pregnancy and delivery outcomes. Data are collected via patient-administered questionnaires and medical records.
Discussion: This trial is anticipated to contribute to the field of knowledge on which planning of improved antenatal, intra-, and postpartum care for women with chronic disease is founded.
Trial Registration: ClinicalTrials.gov, NCT03511508. Registered April 27, 2018.
abstract_id: PUBMED:33213446
WeChat-based intervention to support breastfeeding for Chinese mothers: protocol of a randomised controlled trial. Background: Exclusive breastfeeding for the first 6 months of life is the optimal way to feed infants. However, recent studies suggest that exclusive breastfeeding rates in China remain low and are well below the recommended target. There has been evidence that a lack of awareness of, or exposure to, breastfeeding information is associated with poor breastfeeding practices. WeChat, the most widely used social networking platform in China, has shown some potential to promote health behaviours. We thus hypothesised that a breastfeeding intervention program delivered via WeChat would achieve at least a 10% increase in exclusive breastfeeding prevalence at 6 months compared to the control group.
Methods: A two-arm, parallel, multicentre randomised controlled trial of 1000 pregnant women will be conducted at four maternity hospitals of Chengdu, China. Eligible women who consent to participate in the trial will be recruited at 28-30 weeks of gestation, and randomly allocated to either the intervention group (participants receive breastfeeding-related information from WeChat) or the control group (participants receive non-breastfeeding information from WeChat) using a central randomisation system on a 1:1 ratio at each participating site. The primary outcomes are exclusive breastfeeding rate and full breastfeeding rate at 6 months postpartum. All randomised participants will be included in the outcome analyses with missing data being imputed based on the best-case and worst-case scenarios. Multilevel mixed regression models will be used in the primary analyses to assess the effectiveness of intervention program on the breastfeeding rates.
Discussion: This trial uses the most widely used social media program as a means of delivering messages to mothers to increase exclusive breastfeeding in China. Increasing exclusive breastfeeding will contribute to meeting the health and environmental goals of the Sustainable Development Guidelines. Trial Registration: ClinicalTrials.gov, NCT04499404. Registered 5 August 2020. Retrospectively registered: https://clinicaltrials.gov/show/NCT04499404.
abstract_id: PUBMED:29148346
Increasing the Use of Comparative Quality Information in Maternity Care: Results From a Randomized Controlled Trial. This randomized controlled trial tested an intervention to increase uptake of hospital-level maternity care quality reports among 245 pregnant women in North Carolina (123 treatment; 122 control). The intervention included three enhancements to the quality report offered to the control: (a) biweekly text messages or e-mails directing women to the website, (b) videos and materials describing the relevance of quality measures to pregnant women's interests, and (c) tools to support discussions with clinicians. Compared with controls, intervention participants were significantly more likely to visit the website and report adopting behaviors to inform care, such as thinking through preferences, talking with their doctor, or creating a birth plan. Reports designed to put quality information into the larger context of what consumers want and need to know, along with targeted and timely communications, can increase consumer use of quality information and prompt them to talk with providers about care preferences and evidence-based practices.
abstract_id: PUBMED:26498455
The effect of primary midwife-led care on women's experience of childbirth: results from the COSMOS randomised controlled trial. Objective: To determine the effect of primary midwife-led care ('caseload midwifery') on women's experiences of childbirth.
Design: Randomised controlled trial.
Setting: Tertiary care women's hospital in Melbourne, Australia.
Population: A total of 2314 low-risk pregnant women.
Methods: Women randomised to caseload care received antenatal, intrapartum and postpartum care from a primary midwife, with some care provided by a 'back-up' midwife. Women in standard care received midwifery-led care with varying levels of continuity, junior obstetric care or community-based medical care.
Main Outcome Measures: The primary outcome of the study was caesarean section. This paper presents a secondary outcome, women's experience of childbirth. Women's views and experiences were sought using seven-point rating scales via postal questionnaires 2 months after the birth.
Results: A total of 2314 women were randomised between September 2007 and June 2010; 1156 to caseload and 1158 to standard care. Response rates to the follow-up questionnaire were 88 and 74%, respectively. Women in the caseload group were more positive about their overall birth experience than women in the standard care group (adjusted odds ratio 1.50, 95% CI 1.22-1.84). They also felt more in control during labour, were more proud of themselves, less anxious, and more likely to have a positive experience of pain.
Conclusions: Compared with standard maternity care, caseload midwifery may improve women's experiences of childbirth.
Tweetable Abstract: Primary midwife-led care ('caseload midwifery') improves women's experiences of childbirth.
Answer: Based on the abstract from PUBMED:24225138, a randomised controlled trial (RCT) of a maternity care intervention for pregnant adolescents is possible, but it faces several challenges that need to be addressed to ensure feasibility. The feasibility study conducted in the Australian setting encountered barriers such as restrictive eligibility criteria, the unwillingness of hospital staff to assist with recruitment, and the reluctance of pregnant adolescents to have their choice of maternity carer removed through randomisation. Only 11% of eligible pregnant adolescents agreed to be randomised in the study.
To make an RCT for pregnant adolescents feasible, the authors suggest modifications to the research protocol. These include increasing the upper age limit for participants, allowing women to be recruited at a later gestation, and implementing strategies to engage the support of hospital-employed staff. Additionally, the authors propose considering the use of a Zelen method of post-randomisation consent, offering monetary incentives, and employing 'peer recruiters' to enhance recruitment efforts.
Therefore, while conducting an RCT for a maternity care intervention targeting pregnant adolescents is challenging, it is possible with careful consideration and modification of the study design and recruitment strategies.
Instruction: English Longitudinal Study of Aging: can Internet/E-mail use reduce cognitive decline?
Abstracts:
abstract_id: PUBMED:25116923
English Longitudinal Study of Aging: can Internet/E-mail use reduce cognitive decline? Background: Cognitive decline is a major risk factor for disability, dementia, and death. The use of Internet/E-mail, also known as digital literacy, might decrease dementia incidence among the older population. The aim was to investigate whether digital literacy might be associated with decreased cognitive decline in older adulthood.
Methods: Data came from the English Longitudinal Study of Aging cohort: 6,442 participants aged 50-89 years, followed for 8 years, with baseline cognitive testing and four additional time points. The main outcome variable was the relative percentage change in delayed recall from a 10-word-list learning task across five separate measurement points. In addition to digital literacy, socioeconomic variables (including wealth and education), comorbidities, and baseline cognitive function were included in the predictive models. The analysis used Generalized Estimating Equations.
Results: Higher education, no functional impairment, fewer depressive symptoms, no diabetes, and Internet/E-mail use predicted better performance in delayed recall.
Conclusions: Digital literacy may help reduce cognitive decline among persons aged between 50 and 89 years.
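The ELSA analysis above fits Generalized Estimating Equations to repeated delayed-recall scores. As a rough, hypothetical sketch of that kind of model (not the authors' code; the file name, column names, and the exchangeable working correlation are all assumptions), a Gaussian GEE in Python's statsmodels could look like this:

```python
# Hypothetical GEE for repeated recall scores; column names are assumed,
# not actual ELSA variable names.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("elsa_waves.csv")  # long format: one row per participant-wave

model = smf.gee(
    "delayed_recall ~ internet_use + education + wealth + diabetes + depression + age",
    groups="participant_id",                  # repeated measures cluster on participant
    data=df,
    family=sm.families.Gaussian(),            # continuous recall outcome
    cov_struct=sm.cov_struct.Exchangeable(),  # working correlation across waves
)
result = model.fit()
print(result.summary())  # coefficient on internet_use is the quantity of interest
```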
abstract_id: PUBMED:28795579
Is use of the internet in midlife associated with lower dementia incidence? Results from the English Longitudinal Study of Ageing. Objectives: Dementia is expected to affect one million individuals in the United Kingdom by 2025; its prodromal phase may start decades before its clinical onset. The aim of this study is to investigate whether use of internet from 50 years of age is associated with a lower incidence of dementia over a ten-year follow-up.
Methods: We analysed data on 8,238 core participants from the English Longitudinal Study of Ageing who were dementia-free at baseline (2002-2004). Information on baseline internet use was obtained through questionnaires; dementia caseness was based on participant- (or informant-) reported physician-diagnosed dementia or the overall score on the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE). Cox proportional hazards regression analysis was used to examine the relationship between internet use and incident dementia.
Results: There were 301 (5.01%) incident dementia cases during the follow-up. After full multivariable adjustment for potential confounding factors, baseline internet use was associated with a 40% reduction in dementia risk assessed between 2006-2012 (HR = 0.60 CI: 0.42-0.85; p < 0.05).
Conclusion: This study suggests that use of internet by individuals aged 50 years or older is associated with a reduced risk of dementia. Additional studies are needed to better understand the potential causal mechanisms underlying this association.
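The study above estimates dementia risk with Cox proportional hazards regression. A minimal sketch of such a model, assuming hypothetical column names and using the lifelines library (the source does not specify its software), might be:

```python
# Hypothetical Cox proportional hazards model for incident dementia;
# column names and the covariate set are illustrative assumptions.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("elsa_dementia.csv")
cols = ["followup_years", "dementia", "internet_use", "age", "sex", "education"]

cph = CoxPHFitter()
cph.fit(df[cols], duration_col="followup_years", event_col="dementia")
cph.print_summary()  # reports hazard ratios with confidence intervals
```

A hazard ratio below 1 on the internet-use term would correspond to the kind of risk reduction the abstract reports.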
abstract_id: PUBMED:29039993
Physical activity pre- and post-dementia: English Longitudinal Study of Ageing. Background: To inform public health interventions, further investigation is needed to identify: (1) frequency/intensity of everyday physical activity (PA) needed to reduce dementia risk; (2) whether post-diagnosis reduction in PA is associated with cognitive outcomes in people with dementia.
Methods: Data from 11,391 men and women (aged ≥50) were obtained from the English Longitudinal Study of Ageing cohort. Assessments were carried out at baseline (2002-2003) and at biannual follow-ups (2004-2013).
Results: Older adults who carried out moderate to vigorous activity at least once per week had a 34%-50% lower risk for cognitive decline and dementia over an 8-10 year follow-up period. From pre- to post-dementia diagnosis, those who decreased PA levels had a larger decrease in immediate recall scores, compared to those who maintained or increased PA levels (analyses were adjusted for changes in physical function).
Conclusion: PA was associated with cognitive outcomes in a dose-dependent manner. Reduction in PA after diagnosis was associated with accelerated cognitive decline and maintaining PA may reduce symptom progression in dementia.
abstract_id: PUBMED:33969762
Determinants of verbal fluency trajectories among older adults from the English Longitudinal Study of Aging. Background: Prevalence of dementia and cognitive impairment increase creating the need for identifying modifiable risk factors to reduce their burden. The aim of this study was to identify latent groups following similar trajectories in cognitive performance assessed with the verbal fluency test, as well as their determinants.
Methods: Data from English Longitudinal Study of Aging (ELSA) were studied. Latent groups of similar course through a 6-year period in the outcome variable (verbal fluency) were investigated, along with their determinants, using Group Based Trajectory Modeling (GBTM).
Results: Four latent groups of verbal fluency trajectories were revealed. Education was the strongest predictor of a favorable trajectory, while cardiovascular disease and depressive symptoms were associated with lower verbal fluency within each trajectory.
Conclusion: Cardiovascular diseases and depressive symptoms are associated with a worse course of verbal fluency through aging, implying that they might serve as targets for interventions to prevent cognitive decline in the aging population. Contrarily, higher level of education is associated with a more favorable course through aging.
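Group Based Trajectory Modeling is usually fitted with specialized tools (for example, R's lcmm package or SAS PROC TRAJ) rather than standard Python libraries. As a deliberately crude stand-in that only illustrates the idea of latent trajectory groups, one could cluster each participant's sequence of verbal-fluency scores:

```python
# Crude illustration of latent trajectory groups via k-means clustering.
# This is NOT group-based trajectory modeling proper, which fits a
# latent-class mixture model; it only conveys the grouping intuition.
import numpy as np
from sklearn.cluster import KMeans

# Assumed input: rows are participants, columns are waves of fluency scores.
trajectories = np.loadtxt("fluency_waves.csv", delimiter=",")

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)  # 4 groups, as in the study
labels = kmeans.fit_predict(trajectories)
for g in range(4):
    print(f"group {g}: n={np.sum(labels == g)}, "
          f"mean course={kmeans.cluster_centers_[g].round(1)}")
```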
abstract_id: PUBMED:34863810
Can internet use reduce the incidence of cognitive impairment? Analysis of the EpiFloripa Aging Cohort Study (2009-2019). This study aims to estimate the effect of internet use on the incidence of cognitive impairment in older adults. Data are from the EpiFloripa Aging Cohort Study which has been following a population-based sample of older adults (60+) residing in Florianópolis, southern Brazil, for ten years. The outcome was the incidence of cognitive decline in follow-up waves measured by the Mini-Mental State Examination using cutoff points according to education. The exposure was internet use according to wave (yes/no). We excluded individuals with cognitive impairment from Wave 1 (n = 453). We used a longitudinal analysis model (Generalized Estimating Equations) to estimate incidence rate ratios (IRR) with 95% confidence intervals. We estimated the risk of cognitive impairment in Wave 2 or Wave 3 according to internet use in the previous wave. The incidence of cognitive impairment was 13.4% in Wave 2 and 13.3% in Wave 3. Despite the aging of this cohort, the prevalence of internet users increased from 26.4% in Wave 1 to 32.8% in Wave 2 and 46.8% in Wave 3. The risk of cognitive impairment in Wave 2 or Wave 3 was 70% lower for older adults who used the internet in the previous wave, adjusted for sex, age, years of education, household income, and self-reported comorbidities (IRR = 0.30; 95% CI: 0.15-0.61; p = 0.001). Internet use was associated with a decline in the incidence of cognitive impairment among older adults living in the urban areas of southern Brazil after a period of ten years.
abstract_id: PUBMED:29368156
HbA1c, diabetes and cognitive decline: the English Longitudinal Study of Ageing. Aims/hypothesis: The aim of the study was to evaluate longitudinal associations between HbA1c levels, diabetes status and subsequent cognitive decline over a 10 year follow-up period.
Methods: Data from wave 2 (2004-2005) to wave 7 (2014-2015) of the English Longitudinal Study of Ageing (ELSA) were analysed. Cognitive function was assessed at baseline (wave 2) and reassessed every 2 years at waves 3-7. Linear mixed models were used to evaluate longitudinal associations.
Results: The study comprised 5189 participants (55.1% women, mean age 65.6 ± 9.4 years) with baseline HbA1c levels ranging from 15.9 to 126.3 mmol/mol (3.6-13.7%). The mean follow-up duration was 8.1 ± 2.8 years and the mean number of cognitive assessments was 4.9 ± 1.5. A 1 mmol/mol increment in HbA1c was significantly associated with an increased rate of decline in global cognitive z scores (-0.0009 SD/year, 95% CI -0.0014, -0.0003), memory z scores (-0.0005 SD/year, 95% CI -0.0009, -0.0001) and executive function z scores (-0.0008 SD/year, 95% CI -0.0013, -0.0004) after adjustment for baseline age, sex, total cholesterol, HDL-cholesterol, triacylglycerol, high-sensitivity C-reactive protein, BMI, education, marital status, depressive symptoms, current smoking, alcohol consumption, hypertension, CHD, stroke, chronic lung disease and cancer. Compared with participants with normoglycaemia, the multivariable-adjusted rate of global cognitive decline associated with prediabetes and diabetes was increased by -0.012 SD/year (95% CI -0.022, -0.002) and -0.031 SD/year (95% CI -0.046, -0.015), respectively (p for trend <0.001). Similarly, memory, executive function and orientation z scores showed an increased rate of cognitive decline with diabetes.
Conclusions/interpretation: Significant longitudinal associations between HbA1c levels, diabetes status and long-term cognitive decline were observed in this study. Future studies are required to determine the effects of maintaining optimal glucose control on the rate of cognitive decline in people with diabetes.
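The decline rates above come from linear mixed models, in which an HbA1c-by-time interaction captures the change in the slope of cognitive decline per unit HbA1c. A minimal sketch of such a model, assuming hypothetical variable names, using statsmodels:

```python
# Hypothetical linear mixed model for cognitive decline; the interaction
# years:hba1c estimates how the decline rate shifts per mmol/mol HbA1c.
# Column names are assumptions, not actual ELSA field names.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("elsa_hba1c.csv")  # long format: one row per participant-wave

model = smf.mixedlm(
    "global_z ~ years * hba1c + age + sex + education",
    data=df,
    groups=df["participant_id"],
    re_formula="~years",  # random intercept and random slope over time
)
result = model.fit()
print(result.summary())
```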
abstract_id: PUBMED:30820029
Television viewing and cognitive decline in older age: findings from the English Longitudinal Study of Ageing. There has been significant interest in the effects of television on cognition in children, but much less research has been carried out into the effects in older adults. This study aimed to explore whether television viewing behaviours in adults aged 50 or over are associated with a decline in cognition. Using data from the English Longitudinal Study of Aging involving 3,662 adults aged 50+, we used multivariate linear regression models to explore longitudinal associations between baseline television watching (2008/2009) and cognition 6 years later (2014/2015) while controlling for demographic factors, socio-economic status, depression, physical health, health behaviours and a range of other sedentary behaviours. Watching television for more than 3.5 hours per day is associated with a dose-response decline in verbal memory over the following six years, independent of confounding variables. These results are found in particular amongst those with better cognition at baseline and are robust to a range of sensitivity analyses exploring reverse causality, differential non-response and stability of television viewing. Watching television is not longitudinally associated with changes in semantic fluency. Overall our results provide preliminary data to suggest that television viewing for more than 3.5 hours per day is related to cognitive decline.
abstract_id: PUBMED:35072642
Impact of Internet Use on Cognitive Decline in Middle-Aged and Older Adults in China: Longitudinal Observational Study. Background: Given that cognitive decline lacks effective treatment options and has severe implications for healthy aging, internet use may achieve nonpharmacological relief of cognitive decline through cognitive stimulation and social engagement.
Objective: This longitudinal study aimed to investigate the relationship between the diversity, frequency, and type of internet use and cognitive decline, and to provide theoretical support and suggestions for mitigating cognitive decline in middle-aged and older adults.
Methods: Data were obtained from a total of 10,532 survey respondents from the China Family Panel Studies database from wave 3 (2014) and wave 5 (2018) of the survey. Cognitive function was measured using vocabulary tests, and internet use was categorized into five aspects: study, work, socializing, entertainment, and commercial-related activities. Associations between the diversity, frequency, and type of internet use and cognitive decline were estimated by controlling for demographic variables and health status risk factors through fixed-effects models.
Results: After controlling for demographic and health status risk factors, the type and frequency of internet use were found to be associated with cognitive functioning during the subsequent 4-year period, and different types of internet use had different effects on cognitive decline. Frequency of internet use of at least once a week for study (β=0.620, 95% CI 0.061 to 1.180; P=.04), work (β=0.896, 95% CI 0.271 to 1.520; P=.01), and entertainment (β=0.385, 95% CI -0.008 to 0.778; P=.06), as well as less than once a week for social purposes (β=0.860, 95% CI 0.074 to 1.650; P=.06), were associated with better cognitive function. Frequency of internet use of less than once a week for commercial-related activities (β=-0.906, 95% CI -1.480 to -0.337; P=.005) was associated with poorer cognitive function. Using the internet for more than one type of activity (β=0.458, 95% CI 0.065 to 0.850; P=.03) and at least once a week (β=0.436, 95% CI 0.066 to 0.806; P=.02) was associated with better cognitive function.
Conclusions: This study shows that breadth and depth of internet use are positively associated with cognitive function and that different types of internet use have different roles in cognitive decline. The importance of the internet as a nonpharmacological intervention pathway for cognitive decline is emphasized. Future research could explore specific mechanisms of influence.
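The study above uses fixed-effects models on two survey waves, which remove time-invariant confounders by comparing each person with themselves. A minimal sketch of the within-person transformation, with assumed column names, is shown below:

```python
# Hypothetical two-wave fixed-effects analysis: demean outcome and
# exposures within person (the "within" transformation), then fit OLS
# with person-clustered standard errors. Column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cfps_waves.csv")  # long format: waves 2014 and 2018

within = df.copy()
cols = ["cognition", "use_study", "use_work", "use_social", "use_entertainment"]
for col in cols:
    within[col] = df[col] - df.groupby("person_id")[col].transform("mean")

result = smf.ols(
    "cognition ~ use_study + use_work + use_social + use_entertainment - 1",
    data=within,
).fit(cov_type="cluster", cov_kwds={"groups": within["person_id"]})
print(result.summary())
```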
abstract_id: PUBMED:10910416
A longitudinal study of apolipoprotein-E genotype and depressive symptoms in community-dwelling older adults. The Apolipoprotein-E (APOE) epsilon 4 allele is a risk factor for Alzheimer's disease (AD) and cognitive decline in older adults. Depression may also be a risk factor for dementia, and depression is important in the differential diagnosis of dementia. The authors performed a 5-year longitudinal study of APOE genotype and change in Geriatric Depression Scale scores in 113 community-dwelling older adults. No association was observed between APOE genotype and change in depressive symptoms. These results do not support the hypothesis that the APOE epsilon 4 allele is associated with depression. Important objections have been raised to APOE genotyping in the diagnosis of AD. However, the specificity of APOE genotyping in AD diagnosis would not appear to be compromised by an association with depression.
abstract_id: PUBMED:25428933
Internet use, social engagement and health literacy decline during ageing in a longitudinal cohort of older English adults. Background: Health literacy skills tend to decline during ageing, which is often attributed to age-related cognitive decline. Whether health literacy skills may be influenced by technological and social factors during ageing is unknown.
Methods: We investigated whether internet use and social engagement protect against health literacy decline during ageing, independent of cognitive decline. We used prospective data from 4368 men and women aged ≥52 years in the English Longitudinal Study of Ageing from 2004 to 2011. Health literacy was measured at baseline (2004-2005) and at follow-up (2010-2011) using a reading comprehension test of a fictitious medicine label. The influences of consistent internet use and engagement in each of the civic, leisure and cultural activities on health literacy decline over the follow-up were estimated.
Results: After adjusting for cognitive decline and other covariates, consistent internet use (1379/4368; 32%) was protectively associated with health literacy decline (OR=0.77; 95% CI 0.60 to 0.99), as was consistent engagement in cultural activities (1715/4368; 39%; OR=0.73; 95% CI 0.56 to 0.93). As the number of activities engaged in increased, the likelihood of health literacy decline steadily decreased (p for trend < 0.0001), with OR=0.51 (95% CI 0.33 to 0.79) for engaging in all four of the internet use and civic, leisure and cultural activities versus none.
Conclusions: Internet use and social engagement, particularly in cultural activities (eg, attending the cinema, art galleries, museums and the theatre), may help older adults to maintain health literacy during ageing. Support for older adults to maintain socially engaged lives and to access the internet should help promote the maintenance of functional literacy skills during ageing.
Answer: The English Longitudinal Study of Aging has provided evidence suggesting that Internet/E-mail use, or digital literacy, may be associated with decreased cognitive decline in older adults. Specifically, one study found that higher education, no functional impairment, fewer depressive symptoms, no diabetes, and Internet/E-mail use predicted better performance in delayed recall among persons aged between 50 and 89 years (PUBMED:25116923). Another study from the same cohort indicated that baseline Internet use was associated with a 40% reduction in dementia risk over a ten-year follow-up period, after adjusting for potential confounding factors (PUBMED:28795579). These findings suggest that engaging in digital literacy activities may have a protective effect on cognitive health in older age.
Additionally, the research has shown that social engagement and health literacy may also play a role in maintaining cognitive function during aging. Consistent Internet use and engagement in cultural activities were protectively associated with health literacy decline, and as the number of activities engaged in increased, the likelihood of health literacy decline decreased (PUBMED:25428933). This further supports the idea that Internet use, as a form of social and cognitive engagement, could contribute to reducing cognitive decline.
Overall, the studies from the English Longitudinal Study of Aging provide evidence that Internet use, which can include activities such as using E-mail, may be beneficial in reducing the risk of cognitive decline and dementia in older adults. However, it is important to note that these studies are observational and cannot definitively establish causality. Additional research is needed to better understand the potential causal mechanisms underlying this association.
Instruction: Surgical outcomes following pancreatic resection at a low-volume community hospital: do all patients need to be sent to a regional cancer center?
Abstracts:
abstract_id: PUBMED:19306974
Surgical outcomes following pancreatic resection at a low-volume community hospital: do all patients need to be sent to a regional cancer center? Background: The only curative option for patients with pancreatic cancer is surgical resection. The potential for significant morbidity and mortality following these procedures along with short-term survival benefit has called into question the role of surgery in this disease. Several recent reports have shown that morbidity, mortality, and survival can be improved if these pancreatic resections are performed at centers where large volumes of cases are done annually.
Methods: A retrospective review of the tumor registry from 1994 to 2003 identified 242 cases of pancreatic cancer diagnosed and/or treated at our institution. During this period, 31/242 (13%) patients underwent surgical resection. Patients' charts were reviewed for diagnosis, stage of tumor, presenting symptoms, surgery, length of stay, and survival. Morbidity and mortality rates were calculated for all patients.
Results: Thirty-one resections were performed in 16 males and 15 females. The median age at presentation was 69 years. The most common presenting symptom was painless jaundice. A pancreaticoduodenectomy was the most common procedure (n = 24), while 7 distal pancreatectomies were also performed. Eight surgeons performed the 31 resections with one surgeon performing 12 of the cases. The median length of stay was 16 days. Complications arose in 15/31 (48%) patients. There was no 30-day surgical or in-hospital mortality.
Conclusions: Major pancreatic surgery can be performed safely at community hospitals. It is imperative that each hospital is responsible for providing morbidity and mortality figures related to pancreatic procedures performed at their institution. In this changing climate of reimbursement and pay for performance, institutions that do not do this may be required to send these cases to regional centers.
abstract_id: PUBMED:30217298
Complex distal pancreatectomy outcomes performed at a single institution. Objective: Discuss the outcomes of distal pancreatectomy in a high volume academic community cancer center.
Introduction: Distal pancreatectomy can be done with minimal morbidity and mortality in high volume centers. However, there are limited reports of distal pancreatectomy being performed in the community. This study sought to define the experience with distal pancreatectomy at a high volume community cancer center with a dedicated surgical oncology team.
Methods: A retrospective chart review was performed for patients undergoing distal pancreatectomy performed over a twelve year period (2005-2017) at an academic community cancer center.
Results: 157 patients underwent distal pancreatectomy. The distribution of open, laparoscopic, and robotic resections was 96 (61%), 42 (27%), and 19 (12%), respectively. Concomitant organ resection other than splenectomy was performed in 54 (34%) patients. Spleen-sparing resections were performed in 6 (4%) patients. Of the 157 resections, 84 (54%) had a malignant lesion on final pathology. Median length of stay was 6 days, with 25 (16%) patients readmitted within 30 days. The Grade 3 or 4 morbidity rate was 18% (28/157). The incidence of clinically significant pancreatic fistula (Grade B/C) was 8% (13/157). The reoperative rate was 3% (5/157). Overall 30-day mortality in all patients was 0.6% (1/157).
Conclusion: This is the largest series of distal pancreatic resections reported in a community cancer hospital. In a high volume academic community cancer center with a dedicated surgical oncology team, distal pancreatic resections can be performed with short hospital stays, minimal morbidity, and a mortality rate of less than 1%.
abstract_id: PUBMED:34993230
Postoperative Outcomes Analysis After Pancreatic Duct Occlusion: A Safe Option to Treat the Pancreatic Stump After Pancreaticoduodenectomy in Low-Volume Centers. Background: Surgical resection is the only possible choice of treatment in several pancreatic disorders, including periampullary neoplasms. The development of a postoperative pancreatic fistula (POPF) is the main complication. Three different surgical strategies have been proposed - pancreatojejunostomy (PJ), pancreatogastrostomy (PG), and pancreatic duct occlusion (DO) - but none has been clearly validated as superior. The aim of this study was to analyse the postoperative outcomes after DO. Methods: We retrospectively reviewed 56 consecutive patients who underwent Whipple's procedure from January 2007 to December 2014 in a tertiary Hepatobiliary Surgery and Liver Transplant Unit. After pancreatic resection in open surgery, we performed DO of the Wirsung duct with cyanoacrylate glue, independently of stump characteristics. The mean follow-up was 24.5 months. Results: In total, 29 (60.4%) patients were men and 19 (39.6%) were women, with a mean age of 62.79 (SD ± 10.02) years. Malignant disease was the surgical indication in 95% of cases. POPF occurred after DO in 31 patients (64.5%): 10 (20.8%) had a Grade A fistula, 18 (37.5%) a Grade B fistula, and 3 (6.2%) a Grade C fistula. No statistically significant differences were demonstrated in the development of POPF according to pancreatic duct diameter groups (p = 0.2145). Nevertheless, the POPF rate was significantly higher in the soft-pancreas group (p = 0.0164). The mean operative time was 358.12 min (SD ± 77.03, range: 221-480 min). Hospital stay was significantly longer in patients who developed POPF (p < 0.001). According to the Clavien-Dindo (CD) classification, seven of 48 (14.58%) patients were classified as CD III-IV. At the last follow-up, 27 of the 31 (87%) patients were alive. Conclusions: Duct occlusion could be proposed as a safe alternative to pancreatic anastomosis, especially in low-/medium-volume centers, in selected cases at higher risk of clinically relevant POPF.
abstract_id: PUBMED:20609731
Surgical outcomes following pancreatic resection at a low-volume community hospital. Do all patients need to be sent to a regional cancer center? N/A
abstract_id: PUBMED:37310685
Nationwide Outcomes of Pancreaticoduodenectomy for Pancreatic Malignancies: Center Volume Matters. Background: Complex surgeries such as pancreaticoduodenectomies (PD) have been shown to have better outcomes when performed at high-volume centers (HVCs) compared to low-volume centers (LVCs). Few studies have compared these factors on a national level. The purpose of this study was to analyze nationwide outcomes for patients undergoing PD across hospital centers with different surgical volumes.
Methods: The Nationwide Readmissions Database (2010-2014) was queried for all patients who underwent open PD for pancreatic carcinoma. High-volume centers were defined as hospitals where 20 or more PDs were performed per year. Sociodemographic factors, readmission rates, and perioperative outcomes were compared before and after propensity score-matched analysis (PSMA) for 76 covariates including demographics, hospital factors, comorbidities, and additional diagnoses. Results were weighted for national estimates.
Results: A total of 19,810 patients were identified with age 66 ± 11 years. There were 6,840 (35%) cases performed at LVCs, and 12,970 (65%) at HVCs. Patient comorbidities were greater in the LVC cohort, and more PDs were performed at teaching hospitals in the HVC cohort. These discrepancies were controlled for with PSMA. Length of stay (LOS), mortality, invasive procedures, and perioperative complications were greater in LVCs when compared to HVCs before and after PSMA. Additionally, readmission rates at one year (38% vs 34%, P < .001) and readmission complications were greater in the LVC cohort.
Conclusions: Pancreaticoduodenectomy is more commonly performed at HVCs, which is associated with less complications and improved outcomes compared to LVCs.
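The volume comparison above rests on propensity-score-matched analysis across 76 covariates. As a simplified, hypothetical sketch of 1:1 nearest-neighbor matching on a propensity score (the file and covariate names are invented for illustration):

```python
# Hypothetical propensity-score matching of high- vs low-volume-center
# patients; the real analysis matched on 76 covariates, this sketch uses
# a handful of invented ones.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("nrd_pd.csv")
covariates = ["age", "female", "comorbidity_score", "teaching_hospital", "medicaid"]

# Propensity: modeled probability of being treated at a high-volume center.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["hvc"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

treated = df[df["hvc"] == 1]
control = df[df["hvc"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]])
print(matched.groupby("hvc")["mortality"].mean())  # compare outcomes after matching
```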
abstract_id: PUBMED:37746627
Arterial Resection for Pancreatic Cancer: Feasibility and Current Standing in a High-Volume Center. Background: Arterial resection (AR) during pancreatectomy for curative R0 resection of pancreatic ductal adenocarcinoma (PDAC) remains a controversial procedure with high morbidity.
Objective: To investigate the feasibility and oncological outcomes of pancreatectomy combined with AR at a high-volume center for pancreatic surgery.
Methods: We retrospectively analyzed our experience in PDAC patients, who underwent pancreatic resection with AR and/or venous resection (VR) between 2007 and 2021.
Results: In total, 259 patients with borderline resectable (n = 138) or locally advanced (n = 121) PDAC underwent vascular resection during tumor resection. Of these, 23 patients had AR (n = 4 due to intraoperative injury, n = 19 due to suspected arterial infiltration). Of the 23, 12 patients (52.2%) underwent simultaneous VR, including 1 case with intraoperative arterial injury, while 11 patients (47.8%) underwent AR alone, including 3 with intraoperative arterial injury. Although operation time was longer and the bleeding rate higher with AR than with VR, no significant difference was detected in postoperative complications between VR and AR (P = 0.11). Final histopathological findings were similar, including M stage, regional lymph node metastases, and R0 margin resection. Mortality in the entire cohort was 6.2% (16/259), with a tendency toward higher mortality in the AR cohort, yet without statistical significance (VR: 5% vs AR: 21.1%; P = 0.05). Although 19 (82.6%) patients had PDAC in the final histopathology, only 6 were confirmed to have infiltrated arteries. The microscopic distribution of PDAC in these infiltrated arterial walls on hematoxylin-eosin staining was classified into 3 patterns. Strikingly, the perivascular nerves frequently exhibited perineural invasion.
Conclusions: AR can be performed in high-volume centers for pancreatic surgery with an acceptable morbidity, which is comparable to that of VR. However, the likelihood of arterial infiltration seems to be rather overestimated, and as such, AR might be avoidable or replaced by less invasive techniques such as divestment during PDAC surgery.
abstract_id: PUBMED:27184672
Determinants of Outcomes Following Resection for Pancreatic Cancer-a Population-Based Study. Background: Patient and health system determinants of outcomes following pancreatic cancer resection, particularly the relative importance of hospital and surgeon volume, are unclear. Our objective was to identify patient, tumour and health service factors related to mortality and survival amongst a cohort of patients who underwent completed resection for pancreatic cancer.
Methods: Eligible patients were diagnosed with pancreatic adenocarcinoma between July 2009 and June 2011 and had a completed resection performed in Queensland or New South Wales, Australia, with either tumour-free (R0) or microscopically involved margins (R1) (n = 270). Associations were examined using logistic regression (for binary outcomes) and Cox proportional hazards or stratified Cox models (for time-to-event outcomes).
Results: Patients treated by surgeons who performed <4 resections/year were more likely to die from a surgical complication (versus ≥4 resections/year, P = 0.04), had higher 1-year mortality (P = 0.03), and worse overall survival up to 1.5 years after surgery (adjusted hazard ratio 1.58, 95 % confidence interval 1.07-2.34). Amongst patients who had ≥1 complication within 30 days of surgery, those aged ≥70 years had higher 1-year mortality compared to patients aged <60 years. Adjuvant chemotherapy treatment improved recurrence-free survival (P = 0.01). There were no significant associations between hospital volume and mortality or survival.
Conclusions: Systems should be implemented to ensure that surgeons are completing a sufficient number of resections to optimize patient outcomes. These findings may be particularly relevant for countries with a relatively small and geographically dispersed population.
abstract_id: PUBMED:36102533
Association of Textbook Outcome and Surgical Case Volume with Long-Term Survival in Patients Undergoing Surgical Resection for Pancreatic Cancer. Background: Current literature has identified textbook outcome (TO) as a quality metric after cancer surgery. We studied whether TO after pancreatic resection has a stronger association with long-term survival than individual hospital case volume.
Study Design: Patients undergoing surgery for pancreatic adenocarcinoma from 2010 to 2015 were identified from the National Cancer Database. Hospitals were stratified by volume (low less than 6, medium 6 to 19, and high 20 cases or more per year), and overall survival data were abstracted. We defined TO as adequate lymph node count, negative margins, length of stay less than the 75th percentile, appropriate systemic therapy, timely systemic therapy, and without a mortality event or readmission within 30 days. The association of TO and case volume was assessed using a multivariable Cox regression model for survival.
Results: Overall, 7270 patients underwent surgery, with 30.7%, 48.7%, and 20.6% performed at low-, medium-, and high-volume hospitals, respectively. Patients treated at low-volume hospitals were more likely to be Black, be uninsured or on Medicaid, have higher Charlson comorbidity scores, and be less likely to achieve TO (23.4% TO achievement vs 37.5% achievement at high-volume hospitals). However, high hospital volume was no longer associated with overall survival once TO was added to the multivariable model stratified by volume status. Achievement of TO corresponded to a 31% decrease in mortality (hazard ratio 0.69; p < 0.001), independent of hospital volume.
Conclusions: Improved long-term survival after pancreatic resection was associated with TO rather than high hospital volume. Quality improvement efforts focused on TO criteria have the potential to improve outcomes irrespective of case volume.
abstract_id: PUBMED:28832964
Nationwide outcomes in patients undergoing surgical exploration without resection for pancreatic cancer. Background: Despite improvements in diagnostic imaging and staging, unresectable pancreatic cancer is still encountered during surgical exploration with curative intent. This nationwide study investigated outcomes in patients with unresectable pancreatic cancer found during surgical exploration.
Methods: All patients diagnosed with primary pancreatic (adeno)carcinoma (2009-2013) in the Netherlands Cancer Registry were included. Predictors of unresectability, 30-day mortality and poor survival were evaluated using logistic and Cox proportional hazards regression analysis.
Results: There were 10,595 patients with pancreatic cancer during the study interval. The proportion of patients undergoing surgical exploration increased from 19.9 to 27.0 per cent (P < 0.001). Among 2356 patients who underwent surgical exploration, the proportion of patients with tumour resection increased from 61.6 per cent in 2009 to 71.3 per cent in 2013 (P < 0.001), whereas the contribution of M1 disease (18.5 per cent overall) remained stable. Patients who had exploration only had an increased 30-day mortality rate compared with those who underwent tumour resection (7.8 versus 3.8 per cent; P < 0.001). In the non-resected group, among those with M0 (383 patients) and M1 (435) disease at surgical exploration, the 30-day mortality rate was 4.7 and 10.6 per cent (P = 0.002), median survival was 7.2 and 4.4 months (P < 0.001), and 1-year survival rates were 28.0 and 12.9 per cent, respectively. Among other factors, low hospital volume (0-20 resections per year) was an independent predictor for not undergoing tumour resection, but also for 30-day mortality and poor survival among patients without tumour resection.
Conclusion: Exploration and resection rates increased, but one-third of patients who had surgical exploration for pancreatic cancer did not undergo resection. Non-resectional surgery doubled the 30-day mortality rate compared with that in patients undergoing tumour resection.
abstract_id: PUBMED:28010955
Three-year and five-year outcomes of surgical resection for pancreatic ductal adenocarcinoma: Long-term experiences in one medical center. Objective: Pancreatic ductal adenocarcinoma is one of the most malignant types of cancer. This study evaluated the 3-year and 5-year surgical outcomes associated with the cancer and determined whether statistically identified factors can be used to predict survival.
Methods: This retrospective review was conducted from 1995 to 2010. Patients who had resectable pancreatic ductal adenocarcinoma and received surgical treatment were included. Cases of hospital mortality were excluded. The relationships between several clinicopathological factors and the survival rate were analyzed.
Results: A total of 223 patients were included in this study. The 3-year and 5-year survival rates were 21.4% and 10.1%, respectively, and the median survival was 16.1 months. Tumor size, N status, and resection margins were independent predictive factors for 3-year survival. Tumor size independently predicted 5-year survival.
Conclusion: Tumor size is the most important independent prognostic factor for 3-year and 5-year survival. Lymph node status and the resection margins also independently affected the 3-year survival. These patient outcomes might be improved by early diagnosis and radical resection. Future studies should focus on the tumor biology of this aggressive cancer.
Answer: The question of whether all patients requiring pancreatic resection should be referred to a regional cancer center is addressed by examining the surgical outcomes at both low-volume community hospitals and high-volume centers.
A study conducted at a low-volume community hospital showed that major pancreatic surgery can be performed safely with no 30-day surgical or in-hospital mortality, although complications arose in 48% of patients (PUBMED:19306974). This suggests that with careful monitoring of morbidity and mortality figures, low-volume community hospitals can manage pancreatic resections.
However, nationwide outcomes indicate that pancreaticoduodenectomies (PDs) have better outcomes when performed at high-volume centers (HVCs), with a shorter length of stay and lower rates of mortality, invasive procedures, perioperative complications, and readmission than at low-volume centers (LVCs) (PUBMED:37310685). This supports the notion that complex surgeries like PDs might be better managed at HVCs.
Another study at a high-volume academic community cancer center reported minimal morbidity and a mortality rate of less than 1% for distal pancreatic resections, suggesting that with a dedicated surgical oncology team, high-quality outcomes are achievable in a community setting (PUBMED:30217298).
The feasibility of pancreatic duct occlusion as an alternative to pancreatic anastomosis in low-/medium-volume centers was also demonstrated, particularly in selected cases at higher risk of clinically relevant postoperative pancreatic fistula (POPF) (PUBMED:34993230).
Furthermore, a population-based study highlighted the importance of surgeon volume, with surgeons performing fewer than four resections per year associated with higher surgical complication mortality and worse survival up to 1.5 years post-surgery (PUBMED:27184672). This suggests that surgeon experience is a critical factor in patient outcomes.
Lastly, the concept of a "textbook outcome" (TO) as a quality metric after cancer surgery was introduced, with TO achievement being associated with improved long-term survival after pancreatic resection, independent of hospital volume (PUBMED:36102533).
In conclusion, while some low-volume community hospitals can safely perform pancreatic resections, the evidence suggests that outcomes are generally better at high-volume centers, particularly for complex surgeries like PDs. Surgeon experience and adherence to quality metrics like TO also play significant roles in patient outcomes. |
Instruction: Does Daylight Savings Time encourage physical activity?
Abstracts:
abstract_id: PUBMED:23676324
Does Daylight Savings Time encourage physical activity? Background: Extending Daylight Savings Time (DST) has been identified as a policy intervention that may encourage physical activity. However, there has been little research on whether DST encourages adults to be more physically active.
Methods: Data from residents of Arizona, Colorado, New Mexico, and Utah ages 18-64 who participated in the 2003-2009 American Time Use Survey are used to assess whether DST is associated with increased time spent in moderate-to-vigorous physical activity (MVPA). The analysis capitalizes on the natural experiment created because Arizona does not observe DST.
Results: Both bivariate and multivariate analyses indicate that shifting 1 hour of daylight from morning to evening does not impact MVPA of Americans living in the southwest.
Conclusions: While DST may affect the choices people make about the timing and location of their sports/recreational activities, the potential for DST to serve as a broad-based intervention that encourages greater sports/recreation participation is not supported by this analysis. Whether this null effect would persist in other climate situations is an open question.
abstract_id: PUBMED:34530660
Marathon run performance on daylight savings time transition days: results from a natural experiment. Advancing clock times by 1 h in the spring to daylight savings time and setting clock times back 1 h in the autumn to standard time disrupts circadian timing, sleep and skilled motor behavior such as driving an automobile. It is unknown whether endurance performance is impacted by daylight savings transition (DST). The natural experiment described here examined whether exposure to a DST in the 10 h prior to the start of a marathon race was associated with a different mean completion time compared to participants who ran the same course but were unexposed to a recent DST. The primary outcome was the average running time of finishers of United States marathons that were completed on either spring-DST or autumn-DST days in the years 2000-2018. Comparisons were made to results from the same marathon held in a different year that was not run on a DST day. Data were obtained from the public database marathonguide.com/results. Analysis of the primary outcome used paired samples t-tests weighted by sample size. Spring and autumn data were analyzed separately. Eighteen spring and 29 autumn marathons met the inclusion criteria. Compared to control marathons, the weighted spring-DST performance was worse by 12.3 min (4.1%; P < .001), equal to a moderate standardized effect size of 0.57, while autumn-DST performance was trivially worse by 1.4 min (0.5%), equivalent to an effect size of 0.13. Ambient temperatures for the DST and control races did not differ for either the spring (10.6 vs. 8.9℃; P = .212) or autumn marathons (7.6 vs. 9.3℃; P = .131). Within the limitations of a natural experiment research design, it is concluded that the findings support worse running performance in marathon races held in the spring on the day of transition to daylight savings time, when there is a forced circadian change and sleep loss.
abstract_id: PUBMED:28156172
Impact of daylight savings time on spontaneous pregnancy loss in in vitro fertilization patients. Transition into daylight savings time (DST) has documented negative impacts on health, but little is known regarding its impact on fertility. This retrospective cohort study evaluates the impact of DST on pregnancy and pregnancy loss rates in 1,654 autologous in vitro fertilization cycles (2009 to 2012). Study groups were identified based on the relationship of DST to embryo transfer. Pregnancy rates were similar in Spring and Fall (41.4%, 42.2%). Pregnancy loss rates were also comparable between Spring and Fall (15.5%, 17.1%), but rates of loss were significantly higher in Spring when DST occurred after embryo transfer (24.3%). Loss was marked in patients with a history of prior spontaneous pregnancy loss (60.5%).
abstract_id: PUBMED:30413364
Effects of seasonality and daylight savings time on emergency department visits for mental health disorders. Objectives: Emergency Department (ED) utilization accounts for a large portion of healthcare services in the US. Disturbance of circadian rhythms may affect mental and behavioral health (MBH) conditions, which could result in increased ED visits and subsequent hospitalizations, thus potentially inducing staffing shortages and increasing ED wait time. Predicting the burden of ED admissions helps to better plan care at the EDs and provides significant benefits. This study investigates if increased ED visits for MBH conditions are associated with seasonality and changes in daylight savings time.
Methods: Using ED encounter data from a large academic medical center, we have examined univariate and multivariate associations between ED visits for MBH conditions and the annual time periods during which MBH conditions are more elevated due to changes in the seasons. We hypothesize that ED visits for MBH conditions increase within the 2-week period following the daylight savings time changes.
Results: Increased MBH ED visits were observed in certain seasons. This was especially true for non-bipolar depressive illness. We saw no significant changes in MBH visits associated with the daylight savings time changes.
Conclusions: Data do not provide conclusive evidence of a uniform seasonal increase in ED visits for MBH conditions. Variation in ED MBH visits may be due to secular trends, such as socioeconomic factors. Future research should explore contemporaneous associations between time-driven events and MBH ED visits. It will allow for greater understanding of challenges regarding psychiatric patients and opportunities for improvement.
abstract_id: PUBMED:34571135
Daylight savings time transitions and risk of out-of-hospital cardiac arrest: An interrupted time series analysis. Background: Many studies have reported increases in the risk of acute cardiovascular events following daylight savings time (DST) transitions. We sought to investigate the effect of DST transition on the incidence of out-of-hospital cardiac arrest (OHCA).
Methods: Between January 2000 and December 2020, we performed an interrupted time series analysis of the daily number of OHCA cases of medical aetiology from the Victorian Ambulance Cardiac Arrest Registry. The effect of DST transition on OHCA incidence was estimated using negative binomial models, adjusted for temporal trends, population growth, and public holidays.
Results: A total of 89,409 adult OHCA of medical aetiology were included. Following the spring DST transition (i.e. shorter day), there was an immediate 13% (IRR 1.13, 95% CI: 1.02, 1.25; p = 0.02) increased risk of OHCA on the day of transition (Sunday) and the cumulative risk of OHCA remained higher over the first 2 days (IRR 1.17, 95% CI: 1.02, 1.34; p = 0.03) compared to non-transitional days. Following the autumn DST transition (i.e. longer day), there was a significant lagged effect on the Tuesday with a 12% (IRR 0.88, 95% CI: 0.77, 0.99; p = 0.04) reduced risk of OHCA. The cumulative effect following the autumn DST transition was also significant, with a 30% (IRR 0.70, 95% CI: 0.51, 0.96; p = 0.03) reduction in the incidence of OHCA by the end of the transitional week.
Conclusion: We observed both harmful and protective effects from DST transitions on the risk of OHCA. Strategies to reduce this risk in vulnerable populations should be considered.
abstract_id: PUBMED:36003308
Daylight savings time transition and the incidence of femur fractures in the older population: a nationwide registry-based study. Background: Daylight Savings Time (DST) transition is known to cause sleep disruption, and thus may increase the incidence of injuries and accidents during the week following the transition. The aim of this study was to assess the incidence of femur fractures after DST transition.
Methods: We conducted a retrospective population-based register study. All Finnish patients 70 years or older who were admitted to hospital due to femur fracture between 1997 and 2020 were gathered from the Finnish National Hospital Discharge Register. Negative binomial regression with 95% confidence intervals (CI) was used to evaluate the incidence of femur fractures after DST transition.
Results: The data included a total of 112,658 femur fractures during the study period between 1997 and 2020, with an annual mean (SD) of 4,694 (206) fractures. The incidence of femur fractures decreased at the beginning of the study period from 968 to 688 per 100,000 person-years between 1997 and 2007. The weekly mean of femur fractures remained lower during the summer (from 130 to 150 per 100,000 person-weeks) than in winter (from 160 to 180 per 100,000 person-weeks). Incidence rate ratio for the Monday following DST transition was 1.10 (CI [0.98-1.24]) in spring and 1.10 (CI [0.97-1.24]) in fall, and for the whole week 1.07 (CI [1.01-1.14]) in spring and 0.97 (CI [0.83-1.13]) in fall.
Conclusion: We found weak evidence that the incidence of femur fractures increases after DST transition in the spring.
abstract_id: PUBMED:36494722
Association of physical activity, sedentary behaviour, and daylight exposure with sleep in an ageing population: findings from the Whitehall accelerometer sub-study. Background: Ageing is accompanied by changes in sleep, while poor sleep is suggested as a risk factor for several health outcomes. Non-pharmacological approaches have been proposed to improve sleep in the elderly; their impact remains to be investigated. The aim of this study was to examine the independent day-to-day associations of physical behaviours and daylight exposure with sleep characteristics among older adults.
Methods: Data were drawn from 3942 participants (age range: 60-83 years; 27% women) from the Whitehall II accelerometer sub-study. Day-to-day associations of objectively-assessed daytime physical behaviours (sedentary behaviour, light-intensity physical activity (LIPA), moderate-to-vigorous physical activity (MVPA), mean acceleration, physical activity chronotype) and daylight exposure (proportion of waking window with light exposure > 1000 lx and light chronotype) with sleep characteristics were examined using mixed models.
Results: A 10%-increase in proportion of the waking period spent sedentary was associated with 5.12-minute (4.31, 5.92) later sleep onset and 1.76-minute shorter sleep duration (95% confidence interval: 0.86, 2.66). Similar increases in LIPA and MVPA were associated with 6.69 (5.67, 7.71) and 4.15 (2.49, 5.81) earlier sleep onset respectively and around 2-minute longer sleep duration (2.02 (0.87, 3.17) and 2.23 (0.36, 4.11), respectively), although the association was attenuated for MVPA after adjustment for daylight exposure (1.11 (-0.84, 3.06)). A 3-hour later physical activity chronotype was associated with a 4.79-minute later sleep onset (4.15, 5.43) and 2.73-minute shorter sleep duration (1.99, 3.47). A 10%-increase in proportion of waking period exposed to light > 1000 lx was associated with 1.36-minute longer sleep (0.69, 2.03), independently from mean acceleration. Associations found for sleep duration were also evident for duration of the sleep windows with slightly larger effect size (for example, 3.60 (2.37, 4.82) minutes for 10%-increase in LIPA), resulting in associations with sleep efficiency in the opposite direction (for example, -0.29% (-0.42, -0.16) for 10%-increase in LIPA). Overall, associations were stronger for women than for men.
Conclusions: In this study, higher levels of physical activity and daylight exposure were associated with slightly longer sleep in older adults. Given the small effect sizes of the associations, increased physical activity and daylight exposure might not be enough to improve sleep.
abstract_id: PUBMED:27878694
Physical activity and time preference. This paper investigates the link between time preference (whether a person is more present or future oriented) and time spent participating in physical activity. Using data on time spent engaged in physical activity from the National Longitudinal Surveys of Youth 1979 cohort, 2006 wave, where time preference is proxied by the expected share of money saved from a hypothetical $1000 cash prize, I find that time preference is a significant predictor of the amount of time spent participating in both vigorous and light-to-moderate physical activity for women and vigorous physical activity for men. The results are robust to various sample restrictions and alternative measures of time preference. The findings in this paper fill in a gap in the relationship between time preference and body composition by examining one of the pathways through which the former might affect the latter using a large, nationally representative dataset.
abstract_id: PUBMED:32315901
Human activity, daylight saving time and wildfire occurrence. Wildfires shape landscapes and ecosystems, affecting health and infrastructure. Understanding the complex interactions between social organization, human activity and the natural environment that drive wildfire occurrence is becoming increasingly important as changing global environmental conditions, combined with the expanding human-wildland interface, are expected to increase wildfire frequency and severity. This paper examines the anthropogenic drivers of wildfire, and the relationship between the organization of human activity in time and wildfire occurrence, focusing on the effects of transitions into and out of Daylight Saving Time (DST). DST transitions shift activity in relation to natural wildfire risk within a solar day, induce changes in the time allocated to wildfire-causing activities and disrupt sleep patterns. The paper estimates short- and medium-run effects of DST-induced changes in the temporal organization of human activity through a Regression Discontinuity Design with time as the running variable and Fixed Effects models, using data from over 1.88 million non-prescribed ignitions recorded in the contiguous US over 23 years. Estimates suggest that DST has a quantitatively and statistically significant immediate and medium-run effect on wildfire occurrence. Wildfire occurrence jumps by around 30% in the immediate aftermath of transitions into DST, adding about 98 human-caused wildfires across the contiguous US per year, while the transition's effect is detectable for 3 weeks. Transitions induce within-day temporal displacement of wildfires in a pattern compatible with the shifting of human activity mechanism, while the result cannot be attributed exclusively to disruptions in sleep patterns. Naturally arising lightning-strike wildfires do not respond to changes in civil time, and the results are robust to changes in assumptions. Results suggest that wildfire policy should account for the temporal organization of human activity.
abstract_id: PUBMED:27775953
Daylight Savings Time Transitions and the Incidence Rate of Unipolar Depressive Episodes. Background: Daylight savings time transitions affect approximately 1.6 billion people worldwide. Prior studies have documented associations between daylight savings time transitions and adverse health outcomes, but it remains unknown whether they also cause an increase in the incidence rate of depressive episodes. This seems likely because daylight savings time transitions affect circadian rhythms, which are implicated in the etiology of depressive disorder. Therefore, we investigated the effects of daylight savings time transitions on the incidence rate of unipolar depressive episodes.
Methods: Using time series intervention analysis of nationwide data from the Danish Psychiatric Central Research Register from 1995 to 2012, we compared the observed trend in the incidence rate of hospital contacts for unipolar depressive episodes after the transitions to and from summer time to the predicted trend in the incidence rate.
Results: The analyses were based on 185,419 hospital contacts for unipolar depression and showed that the transition from summer time to standard time was associated with an 11% increase (95% CI = 7%, 15%) in the incidence rate of unipolar depressive episodes that dissipated over approximately 10 weeks. The transition from standard time to summer time was not associated with a parallel change in the incidence rate of unipolar depressive episodes.
Conclusion: This study shows that the transition from summer time to standard time was associated with an increase in the incidence rate of unipolar depressive episodes. Distress associated with the sudden advancement of sunset, marking the coming of a long period of short days, may explain this finding. See video abstract at http://links.lww.com/EDE/B179.
Answer: The research on whether Daylight Savings Time (DST) encourages physical activity presents mixed findings. One study that analyzed data from residents of Arizona, Colorado, New Mexico, and Utah who participated in the 2003-2009 American Time Use Survey found that shifting one hour of daylight from morning to evening does not impact moderate-to-vigorous physical activity (MVPA) of Americans living in the southwest. This suggests that DST may not serve as a broad-based intervention that encourages greater sports/recreation participation (PUBMED:23676324).
However, it is important to note that this study's conclusion is specific to the American southwest and may not generalize to other regions or climates. Additionally, while the study allows that DST may affect the timing and location of sports/recreational activities, it found no evidence that DST increases overall participation, which could be an area for further research.
Other studies have focused on the impact of DST on specific activities or health outcomes, such as marathon performance or cardiac arrest incidence, but these do not directly address the question of whether DST encourages general physical activity in the population (PUBMED:34530660, PUBMED:34571135).
In summary, based on the available abstract (PUBMED:23676324), there is no strong evidence to support the idea that DST encourages physical activity among adults in the American southwest. Further research would be needed to determine if this finding holds true in other regions or under different climate conditions. |
Instruction: Evaluating invasive cutaneous melanoma: is the initial biopsy representative of the final depth?
Abstracts:
abstract_id: PUBMED:12637923
Evaluating invasive cutaneous melanoma: is the initial biopsy representative of the final depth? Background: An accurate initial biopsy of the deepest portion of the melanoma is vital to the management of patients with melanomas.
Objective: Our goal was to evaluate the accuracy of preliminary biopsies performed by a group of predominantly experienced dermatologists (n = 46/72).
Methods: A total of 145 cases of cutaneous melanoma were examined retrospectively. We compared Breslow depth on preliminary biopsy with Breslow depth on subsequent excision. Was the initial diagnostic biopsy performed on the deepest part of the melanoma?
Results: Of nonexcisional initial shave and punch biopsies, 88% were accurate, with Breslow depth greater than or equal to subsequent excision Breslow depth. Both superficial and deep shave biopsies were more accurate than punch biopsy for melanomas less than 1 mm. Excisional biopsy was found to be the most accurate method of biopsy.
Conclusions: Deep shave biopsy is preferable to superficial shave or punch biopsy for thin and intermediate depth (<2 mm) melanomas when an initial sample is taken for diagnosis instead of complete excision. We found that a group of predominantly experienced dermatologists accurately assessed the depth of invasive melanoma by use of a variety of initial biopsy types.
abstract_id: PUBMED:19445285
Does shave biopsy accurately predict the final Breslow depth of primary cutaneous melanoma? Shave biopsy (SB) is used for the diagnosis of suspicious skin lesions, including melanoma. Its accuracy for melanoma has not been confirmed. We examined our experience with SB to determine its ability to predict true Breslow depth (BD). We performed a retrospective review of the tumor registry for all patients diagnosed with melanoma by SB from 1995 to 2004. Site and depth of lesion, tumor stage, correlation of BD between SB and wide local excision (WLE), and changes in surgical management due to discordance were examined. Melanoma-in-situ was defined as a depth of 0 for this analysis. One hundred thirty-nine patients were diagnosed with melanoma by SB. Pathology findings after WLE were as follows: 54 (39%) patients had no residual disease, 67 (48%) had a BD equal to or less than the SB, and 18 (13%) had a thicker BD compared with the SB. For these 18 patients, the median BD by SB and WLE was 1.1 mm (range 0-6.5) and 3.5 mm (range 0.5-20.5), respectively (P = 0.0017). Upstaging of final BD from SB to WLE was significantly associated with increasing tumor depth and higher stage of melanoma (P < 0.0001). Only seven of the 139 patients (5%) required further surgery because of the increased depth of the WLE. SB underestimated the final BD of melanoma in 13% of patients, but changed the management of few patients. SB is a valuable tool for practitioners in the diagnosis of melanoma. Nevertheless, patients diagnosed with melanoma by SB should be counseled about the rare need for additional surgery.
abstract_id: PUBMED:34615974
Effect of changes in Breslow thickness between the initial punch biopsy results and final pathology reports in acral lentiginous melanoma patients. Acral lentiginous melanoma (ALM) is the most common subtype of cutaneous melanoma among Asians; punch biopsy is widely performed for its diagnosis. However, the pathologic parameters evaluated via punch biopsy may not be sufficient for predicting disease prognosis compared to the parameters evaluated via excisional biopsy. We investigated whether changes in Breslow thickness (BT) between initial punch biopsy results and final pathology reports can affect the prognosis of ALM. Pathologic parameters were recorded from specimens acquired through the initial punch biopsy and wide excision. Patients were classified into two groups based on whether BT increased or decreased between the initial punch biopsy and the final wide excision. We compared clinical characteristics, and a Cox regression model was used to identify independent prognostic factors influencing melanoma-specific death (MSD). Changes in BT did not affect MSD (hazard ratio [HR]: 0.55, P = 0.447). In multivariate analysis, a higher BT (> 2 mm) (HR: 9.93, P = 0.046) and nodal metastasis (HR: 5.66, P = 0.041) were significantly associated with an increased MSD risk. The use of punch biopsy did not affect MSD despite the inaccuracy of BT measurement as long as ALM was accurately diagnosed.
abstract_id: PUBMED:32529271
The Devil's in the Details: Discrepancy Between Biopsy Thickness and Final Pathology in Acral Melanoma. Purpose: We hypothesized that initial biopsy may understage acral lentiginous melanoma (ALM) and lead to undertreatment or incomplete staging. Understanding this possibility can potentially aid surgical planning and improve primary tumor staging.
Methods: A retrospective review of primary ALMs treated from 2000 to 2017 in the US Melanoma Consortium database was performed. We reviewed pathology characteristics of initial biopsy, final excision specimens, surgical margins, and sentinel lymph node biopsy (SLNB).
Results: We identified 418 primary ALMs (321 plantar, 34 palmar, 63 subungual) with initial biopsy and final pathology results. Median final thickness was 1.8 mm (range 0.0-19.0). There was a discrepancy between initial biopsy and final pathology thickness in 180 (43%) patients with a median difference of 1.6 mm (range 0.1-16.4). Final T category was increased in 132 patients (32%), including 47% of initially in situ, 32% of T1, 39% of T2, and 28% of T3 lesions. T category was more likely to be increased in subungual (46%) and palmar (38%) melanomas than plantar (28%, p = 0.01). Among patients upstaged to T2 or higher, 71% had ≤ 1-cm margins taken. Among the 27 patients upstaged to T1b or higher, 8 (30%) did not have a SLNB performed, resulting in incomplete initial staging.
Conclusions: In this large series of ALMs, final T category was frequently increased on final pathology. A high index of suspicion is necessary for lesions initially in situ or T1 and consideration should be given to performing additional punch biopsies, wider margin excisions, and/or SLNB.
abstract_id: PUBMED:9220549
Biopsy technique for pigmented lesions. The biopsy technique that should be used when sampling a pigmented lesion may not always be readily apparent. The final arbiter is whether the specimen generated will be representative of the entire process so that an accurate and complete diagnosis can be rendered. In some cases, melanoma may not be clinically suspected, so it is essential that any biopsy performed be capable of detecting these lesions.
abstract_id: PUBMED:24947251
Sentinel lymph node biopsy in melanoma: final results of MSLT-I. In 1994 an international randomized controlled clinical trial, MSLT-I, opened to study the utility of sentinel lymph node biopsy (SLNB) for patients with clinically localized melanoma. This trial compared outcomes of patients treated with wide local excision (WLE) and SLNB (followed by immediate completion lymph node dissection [CLND] for those with a positive sentinel node [SN]) with outcomes of patients treated with WLE alone and CLND upon the development of clinically apparent disease. In February 2014 the final analysis of long-term outcomes data was published. Importantly, these data showed that the rates of nodal positivity were the same between the two arms of the trial. Although no difference in 10-year melanoma-specific survival was noted between the two arms, this was not entirely surprising as the overall rate of nodal disease within the trial was 20.8%, meaning that 79.2% of patients could not derive a benefit from SLNB. Subset analysis was performed to determine the impact of early intervention for those patients most likely to have a benefit from early detection. This analysis showed that for patients with nodal disease and intermediate-thickness melanoma (defined as 1.2-3.5-mm Breslow depth), early treatment following positive SLNB was associated with improved 10-year distant disease-free survival and improved 10-year melanoma-specific survival.
abstract_id: PUBMED:31406562
Re-biopsy of partially sampled thin melanoma impacts sentinel lymph node sampling as well as surgical margins. Aim: To assess the impact of re-biopsy on partially sampled melanoma in situ (MIS), atypical melanocytic proliferation (AMP) and thin invasive melanoma.
Materials & Methods: We retrospectively identified cases of re-biopsied partially sampled neoplasms initially diagnosed as melanoma in situ, AMP or thin melanoma (Breslow depth ≤0.75 mm).
Results & Conclusion: Re-biopsy led to sentinel lymph node biopsy (SLNB) in 18.3% of cases. No patients upstaged from AMP or MIS had a positive SLNB. One out of nine (11.1%) initially diagnosed as a thin melanoma ≤0.75 mm, upstaged with a re-biopsy, had a positive SLNB. After re-biopsy 8.5% underwent an increased surgical margin. Selective re-biopsy of partially sampled melanoma with gross residual disease can increase the accuracy of microstaging and optimize treatment regarding surgical margins and SLNB.
abstract_id: PUBMED:15858469
Microstaging accuracy after subtotal incisional biopsy of cutaneous melanoma. Background: A significant portion of cutaneous melanoma may remain after subtotal incisional biopsy. The accuracy of microstaging and impact on clinical practice in this scenario are unknown.
Objective: Our purpose was to examine microstaging accuracy of an initial incisional biopsy with a significant portion of the clinical lesion remaining (≥50%).
Methods: Patients with cutaneous melanoma, diagnosed by incisional biopsy with ≥50% of the lesion remaining, were prospectively evaluated for microstaging accuracy, comparing initial Breslow depth (BD1) to final depth (BD2) after excision of the residual lesion. Impact on prognosis and treatment was also evaluated.
Results: Two hundred fifty of 1783 patients (14%) presented with ≥50% residual clinical lesion after incisional biopsy. The mean BD1 was 0.66 mm; the mean BD2, 1.07 mm (P = .001). After complete excision of the residual lesion, upstaging occurred in 21% and 10% became candidates for sentinel node biopsy.
Conclusion: An incisional biopsy with ≥50% clinical lesion remaining afterward may be inadequate for accurate microstaging of melanoma. This scenario is relatively uncommon but clinically significant.
abstract_id: PUBMED:24809875
A retrospective comparison between preoperative and postoperative Breslow depth in primary cutaneous melanoma: how preoperative shave biopsies affect surgical management. Background: Accurate histopathologic staging of preoperative biopsy specimens is critical for determining optimal surgical management for patients with primary cutaneous melanoma. The American Academy of Dermatology (AAD) and National Comprehensive Cancer Network (NCCN) currently list narrow excisional biopsy (fusiform excision) as the preferred technique for biopsying lesions suspicious for melanoma. However, preoperative shave biopsies are routinely performed on lesions concerning for melanoma in many medical centers out of convenience.
Objective: The current retrospective chart review was performed to determine whether preoperative shave biopsies are acceptable for evaluating lesions suspicious for melanoma and whether shave biopsies lead to underestimation of Breslow depth great enough to require additional surgeries.
Methods: A consecutive sample of 242 primary cutaneous melanoma cases surgically excised between January 1, 2004 and December 31, 2010 in a private practice setting was analyzed for this study.
Results: Breslow depth underestimation occurred in 8 of 226 shave biopsy cases (3.5%). Differences in preoperative and postoperative Breslow depths in shave biopsy cases were not statistically significant (P=0.48). Underestimation of Breslow depth, melanoma transection, positive deep biopsy margins, and tumor upstaging did not lead to statistically significant changes in surgical management.
Conclusions: Based on the results from the current study and available literature, the authors posit that preoperative deep excisional shave biopsies performed by dermatologists are accurate for determining Breslow depth and for planning surgical management of melanomas.
abstract_id: PUBMED:24862925
Clinical impact of biopsy method on the quality of surgical management in melanoma. Background And Objectives: Though guidelines recommend excisional biopsy for diagnosing melanoma, partial biopsy techniques are commonly performed, risking underestimation of Breslow depth and altering surgical management. Biopsy choice patterns by specialty and subsequent impact on surgical management was examined.
Methods: Retrospective review of the University of Vermont Cancer Registry. All patients with a single, primary cutaneous melanoma from 1/02 to 6/12 were analyzed.
Results: Among 853 lesions analyzed, 606 had Breslow depth <1 mm. Dermatologists perform 62.6% of biopsies, favoring shave biopsies; surgeons favor excisional biopsies (48%), and primary care doctors favor punch biopsies (44.8%), (P < 0.001). Final Breslow depth was upstaged in 107 (12.5%); however, only 23 of 488 partial biopsies (4.7%) displayed a discrepancy great enough to change surgical recommendations (P < 0.001). There was no statistically significant relationship with presence of ulceration, regression, high Clark level, or high mitotic index.
Conclusions: Partial biopsy techniques are commonly performed in diagnosing melanoma, especially among dermatologists, who perform the majority of biopsies. Though partial biopsies were less accurate in determining Breslow thickness, they rarely altered recommendations for surgical management. Predictive features could not be determined to identify the few cases where a Breslow discrepancy was clinically relevant.
Answer: The initial biopsy of invasive cutaneous melanoma is generally representative of the final depth, but there are instances where it may not be entirely accurate. A retrospective study of 145 cases of cutaneous melanoma found that 88% of nonexcisional initial shave and punch biopsies were accurate, with Breslow depth greater than or equal to subsequent excision Breslow depth. Excisional biopsy was deemed the most accurate method, and deep shave biopsy was preferable to superficial shave or punch biopsy for thin and intermediate depth melanomas (<2 mm) (PUBMED:12637923).
However, another study showed that shave biopsy (SB) underestimated the final Breslow depth (BD) of melanoma in 13% of patients, although it changed the management of only a few patients (PUBMED:19445285). In the case of acral lentiginous melanoma (ALM), changes in Breslow thickness between the initial punch biopsy results and final pathology reports did not affect melanoma-specific death, suggesting that the use of punch biopsy did not impact overall survival despite potential inaccuracies in Breslow thickness measurement (PUBMED:34615974).
A study on ALM also indicated that there was a discrepancy between initial biopsy and final pathology thickness in 43% of patients, with a median difference of 1.6 mm. This discrepancy could lead to an increase in the final T category and potentially affect surgical planning and staging (PUBMED:32529271).
In cases where a significant portion of the melanoma remains after an incisional biopsy, microstaging accuracy may be compromised, with upstaging occurring in 21% of such cases (PUBMED:15858469). A retrospective comparison between preoperative and postoperative Breslow depth in primary cutaneous melanoma found that preoperative shave biopsies underestimated Breslow depth in 3.5% of cases, but this did not lead to statistically significant changes in surgical management (PUBMED:24809875).
Lastly, a study examining biopsy choice patterns by specialty and subsequent impact on surgical management found that partial biopsy techniques were less accurate in determining Breslow thickness but rarely altered recommendations for surgical management (PUBMED:24862925).
In summary, while initial biopsies are often representative of the final depth of invasive cutaneous melanoma, there is a possibility of underestimation, particularly with nonexcisional biopsies such as shave or punch biopsies. This underestimation can lead to discrepancies between the initial biopsy and the final pathology, potentially affecting the surgical management and staging of the melanoma. Despite this, the impact on surgical management is often minimal, and the initial biopsy remains a critical tool for the diagnosis and initial staging of melanoma. Excisional biopsies are generally the most accurate, but deep shave biopsies can be an alternative for thin melanomas when complete excision is not performed initially. It is important for clinicians to be aware of the potential for underestimation and to consider additional biopsies, wider margin excisions, or sentinel lymph node biopsy when initial findings suggest in situ or T1 melanoma (PUBMED:32529271). Overall, while there are limitations to the accuracy of initial biopsies, they are largely representative of the final depth in the majority of cases.
Instruction: Does maternal prenatal care use predict infant immunization delay?
Abstracts:
abstract_id: PUBMED:8026663
Does maternal prenatal care use predict infant immunization delay? Background: The proportion of infants in the United States adequately immunized with DTP (Diphtheria, Tetanus and Pertussis) vaccine is below that of other industrialized nations and is lowest among infants living in urban areas. At present, predictors of poor immunization are not well developed. In particular, the relationship between prenatal care utilization and childhood vaccination is not clearly defined.
Methods: Using medical record data, we measured associations between prenatal variables and adequacy of immunization (three DTP vaccines by age 10 months) in 163 mother-infant pairs who received prenatal and child care at a neighborhood community health center.
Results: At the end of 10 months' follow-up, 29.4% of infants had not received three DTP immunizations. Logistic regression identified the following independent risk factors for immunization delay: multiparity, being an English-speaking Hispanic mother, and failing to attend scheduled prenatal care appointments. In the regression model, interactions also existed between having a high proportion of missed visits to total prenatal visits scheduled (≥25%) and receiving social services, with the highest risk existing for women with a high proportion of missed appointments who also received social services, and the lowest for those with <25% missed appointments who received social services.
Conclusions: Maternal demographics and health care utilization predict infant immunization rates. Use of these variables may permit early identification and case management of mothers of infants at high risk for immunization delay.
abstract_id: PUBMED:10888450
Prenatal care and infant emergency department use. Objective: To determine the relationship between mothers' use of prenatal care and pediatric emergency department (ED) use by their infants in the first 3 months of life.
Methods: This is a retrospective, cohort-control study of well, full-term infants who used a children's hospital ED. Using logistic regression, the likelihood of an emergency visit in the first 3 months of life was compared between infants of women with fewer than two prenatal visits and infants of women with two or more prenatal visits. Covariates were maternal age, race, substance abuse history, parity, infant birth weight, insurance status, and distance from the ED.
Results: The odds of an ED visit before age 3 months by infants of mothers with fewer than two prenatal visits were 29% lower than those of the comparison group. ED use was increased by proximity, Medicaid or no health insurance, and younger maternal age. Seventy percent (70%) of visits by both cohorts were classified as unjustified. The odds of making an unjustified ED visit were increased by younger maternal age and proximity to the emergency department.
Conclusions: Women with poor prenatal care are less likely to seek ED care for their young infants. Although suboptimal prenatal care is associated with negative health outcomes, it is not known whether fewer infant ED visits are similarly deleterious.
abstract_id: PUBMED:33904025
Improved Maternal and Infant Outcomes with Serial, Self-Reported Early Prenatal Substance Use Screening. Introduction: Most screening tools identifying women with substance use are not validated, are administered only once in pregnancy, and do not reflect continued substance use. We hypothesized that serial early prenatal substance screening leads to decreased substance use by the end of pregnancy and improved outcomes.
Methods: This is a retrospective cohort study of mothers and their infants between 1/2015 and 12/2017. A self-reported substance screening tool was administered at the first prenatal visit and at subsequent visits until delivery. For analysis, mothers were divided into three groups based on the trimester of their first screen and adjusted for demographics and risk factors.
Results: Early first trimester screening resulted in 52% of mothers having ≥ 3 screens throughout pregnancy vs. 6% of mothers with late third trimester screens (p < 0.001). Compared to third trimester screening, there was a five-fold decrease in any substance use at the second trimester, a seven-fold decrease at the first trimester, and a nine-fold decrease for marijuana at the first trimester. Compared to third trimester screening, there was a significant five-fold increase in negative maternal urine drug screens, a 3.5-fold increase in well newborn diagnoses, and a five-fold increase in no infant morphine treatment at the first trimester.
Discussion: We identified improved maternal and infant outcomes with serial early prenatal substance use screening. Early maternal substance use identification is crucial for immediate referral for prevention and treatment, and for social and community services. Further research is needed on universal serial early prenatal screenings.
abstract_id: PUBMED:12348453
Maternal and child immunization on infant survival in Kerala, India. The authors examine the impact of maternal and child health programs on the reduction of infant mortality in the state of Kerala, India. "To assess the maternal and child health program, a survey was carried out in Ernakulam, Palakkad and Malappuram districts of Kerala under the auspices of the Centre for Development Studies, Trivandrum. The analysis indicates that the decline in the 1980s is [due to] the Universal Immunization program. The program gave ante-natal care and immunization and above all, brought the pregnant women closer to the health system."
abstract_id: PUBMED:8704889
Pattern of prenatal care and infant immunization status in a comprehensive adolescent-oriented maternity program. Objective: To examine the relationship between patterns of prenatal care utilization and the subsequent pattern of preventive infant health care utilization among patients in a comprehensive, multidisciplinary, adolescent-oriented maternity program.
Methods: We hypothesized that the mothers of incompletely immunized 8-month-olds were less compliant with their own prenatal care appointments than were mothers of fully immunized 8-month-olds. We retrospectively reviewed the medical records of 150 consecutively delivered infants and their adolescent mothers. Data concerning the pattern of prenatal and postnatal use of preventive health care services and potentially confounding maternal characteristics were collected.
Results: Of the 150 infants aged 8 months, 22 (14.7%) were incompletely immunized. Mothers of completely and incompletely immunized infants did not differ in age, school enrollment status, or compliance with prenatal appointments. However, the latter group initiated prenatal care later, obtained fewer prenatal visits, returned later for postpartum care, and were more likely to be black and to report inadequate family support after delivery. Three of the 5 characteristics entered a logistic regression function that predicted the risk of incomplete immunizations at 8 months of age: third-trimester initiation of prenatal care (odds ratio, 4.05; 95% confidence interval, 1.19-13.7), inadequate family support (odds ratio, 3.42; 95% confidence interval, 1.17-10.0), and black race (odds ratio, 3.14; 95% confidence interval, 1.19-8.69). The total model χ² was 15.8 (P < .001).
Conclusions: Among patients in a comprehensive adolescent-oriented maternity program, the timing of the first prenatal visit helps to identify infants who are at increased risk for incomplete primary immunization status. Our findings favor preferential allocation of scarce, costly outreach services to infants born to adolescent mothers who enter prenatal care during the third trimester.
abstract_id: PUBMED:17826581
Prenatal immunization education: the pediatric prenatal visit and routine obstetric care. Background: Vaccine safety concerns and lack of knowledge regarding vaccines contribute to delays in infant immunization. Prenatal vaccine education could improve risk communication and timely vaccination. This study sought to determine the proportion of obstetric practices and hospital-based prenatal education classes that provide pregnant women with infant immunization information, the willingness of obstetric practices to provide infant immunization information, and the proportion of first-time mothers who receive a pediatric prenatal visit.
Methods: A telephone survey was conducted of 100 pediatric practices and 100 obstetric practices randomly selected from the American Medical Association Physician Masterfile between January and March 2005, with analysis performed in April 2005.
Results: Seventy-one of 100 (71%) selected obstetric practices and 85 of 100 (85%) selected pediatric practices participated. Sixteen obstetric practices (23%) reported providing pregnant women with information on routine childhood immunizations. Thirty-four of the 52 practices (65%) that did not provide such information reported willingness to do so. Ten of 51 hospitals (20%) did not provide information about routine childhood immunizations to prenatal class participants. Sixty-six of the 85 pediatric practices (78%) provided a pediatric prenatal visit. Among these, the median percentage of first-time mothers who received a visit was 30%.
Conclusions: Prenatal visits are a missed opportunity for providing education about infant immunizations. Incorporating immunization education into routine obstetric prenatal care may increase maternal knowledge of infant vaccines and reduce delayed immunization.
abstract_id: PUBMED:23973343
Assessments of vaccines for prenatal immunization. The strategy of prenatal maternal immunization to protect the pregnant woman and her infant was first used with tetanus toxoid, when it was recognized that young infants had very high rates of tetanus disease, well before the age when infant immunizations are provided. Antenatal immunization has now been recommended and utilized for additional vaccines to prevent infections in pregnancy and the young infant. There are several issues to consider that are unique to the strategy of antenatal immunization. The first is that immunization of the pregnant woman will affect the woman who receives the vaccine, her developing fetus, and the young infant for several months after delivery. For this discussion, we consider the availability of data for the maternal-fetal-infant triad in 4 aspects. This discussion will review available data from vaccines for prevention of tetanus, pneumococcal, influenza and pertussis infections used in antenatal maternal immunization programs.
abstract_id: PUBMED:33171766
Maternal Prenatal Cortisol and Breastfeeding Predict Infant Growth. Fetal/infant growth affects adult obesity and morbidities/mortality and has been associated with prenatal exposure to cortisol. Bidirectional relations between maternal stress and breastfeeding suggest that they interact to influence offspring growth. No models have tested this hypothesis, particularly regarding longer-term offspring outcomes. We used a subset of the IDAHO Mom Study (n = 19-95) to examine associations among maternal prenatal cortisol (cortisol awakening response (CAR) and area under the curve), and standardized weight-for-length (WLZ) and length-for-age (LAZ) z-scores from birth-18 months, and main and interactive effects of prenatal cortisol and breastfeeding on infant growth from birth-6 months. CAR was negatively associated with LAZ at birth (r = -0.247, p = 0.039) but positively associated at 13-14 months (r = 0.378, p = 0.033), suggesting infant catch-up growth with lower birth weights, likely related to elevated cortisol exposure, continues beyond early infancy. A negative correlation between breastfeeding and 10-month WLZ (r = -0.344, p = 0.037) and LAZ (r = -0.468, p = 0.005) suggests that breastfeeding assists in managing infant growth. WLZ and LAZ increased from birth to 6 months (ps < 0.01), though this was unrelated to interactions between prenatal cortisol and breastfeeding (i.e., no significant moderation), suggesting that other factors played a role, which should be further investigated. Findings add to our understanding of the predictors of infant growth.
abstract_id: PUBMED:26189913
Does prenatal care benefit maternal health? A study of post-partum maternal care use. Most studies on prenatal care focus on its effects on infant health, paying less attention to its effects on maternal health. Using the Longitudinal Health Insurance claims data in Taiwan in a recursive bivariate probit model, this study examines the impact of adequate prenatal care on the probability of post-partum maternal hospitalization during the first 6 months after birth. The results show that adequate prenatal care significantly reduces the probability of post-partum maternal hospitalization among women who have had vaginal delivery by 43.8%. This finding suggests that the benefits of prenatal care may have been underestimated among women with vaginal delivery. Timely and adequate prenatal care not only creates a positive impact on infant health, but also yields significant benefits for post-partum maternal health. However, we do not find similar benefits of prenatal care for women undergoing a cesarean section.
abstract_id: PUBMED:30639165
The Association of Inadequate and Intensive Prenatal Care With Maternal, Fetal, and Infant Outcomes: A Population-Based Study in Manitoba, Canada. Objective: Little is known about how prenatal care influences health outcomes in Canada. The objective of this study was to examine the association of prenatal care utilization with maternal, fetal, and infant outcomes in Manitoba.
Methods: This retrospective cohort study conducted at the Manitoba Centre for Health Policy investigated all deliveries of singleton births from 2004-2005 to 2008-2009 (N = 67 076). The proportion of women receiving inadequate, intermediate/adequate, and intensive prenatal care was calculated. Multivariable logistic regression was used to examine the association of inadequate and intensive prenatal care with maternal and fetal-infant health outcomes, health care use, and maternal health-related behaviours.
Results: The distribution of prenatal care utilization was 11.6% inadequate, 84.4% intermediate/adequate, and 4.0% intensive. After adjusting for sociodemographic factors and maternal health conditions, inadequate prenatal care was associated with increased odds of stillbirth, preterm birth, low birth weight, small for gestational age (SGA), admission to the NICU, postpartum depressive/anxiety disorders, and short interpregnancy interval to next birth. Women with inadequate prenatal care had reduced odds of initiating breastfeeding or having their infant immunized. Intensive prenatal care was associated with reduced odds of stillbirth, preterm birth, and low birth weight and increased odds of postpartum depressive/anxiety disorders, initiation of breastfeeding, and infant immunization.
Conclusion: Inadequate prenatal care was associated with increased odds of several adverse pregnancy outcomes and lower likelihood of health-related behaviours, whereas intensive prenatal care was associated with reduced odds of some adverse pregnancy outcomes and higher likelihood of health-related behaviours. Ensuring women receive adequate prenatal care may improve pregnancy outcomes.
Answer: Yes, maternal prenatal care use does predict infant immunization delay. According to the study with PUBMED:8026663, maternal demographics and health care utilization, including prenatal care, are predictors of infant immunization rates. The study found that factors such as multiparity, being an English-speaking Hispanic mother, and failing to attend scheduled prenatal care appointments were independent risk factors for immunization delay in infants. Additionally, there was a higher risk of immunization delay for women with a high proportion of missed prenatal care appointments who also received social services, compared to those with fewer missed appointments who received social services. This suggests that the use of prenatal care is associated with the timeliness of infant immunizations, and that inadequate prenatal care can be a predictor of delayed immunizations in infants. |
Instruction: Do high-risk features support the use of adjuvant chemotherapy in stage II colon cancer?
Abstracts:
abstract_id: PUBMED:34760701
High-Risk Features Are Prognostic in dMMR/MSI-H Stage II Colon Cancer. Background: High-risk features, such as T4 disease, bowel obstruction, poorly/undifferentiated histology, lymphovascular or perineural invasion, and <12 lymph nodes sampled, indicate poor prognosis and define high-risk stage II disease in proficient mismatch repair stage II colon cancer (CC). The prognostic role of high-risk features in dMMR/MSI-H stage II CC is unknown. Similarly, the role of adjuvant therapy in high-risk stage II CC with dMMR/MSI-H (≥1 high-risk feature) has not been studied in prospective trials. The aim of this analysis of the National Cancer Database is to evaluate the prognostic value of high-risk features in stage II dMMR/MSI-H CC.
Methods: Univariate (UVA) and multivariate (MVA) Cox proportional hazards (Cox-PH) models were built to assess the association between clinical and demographic characteristics and overall survival. Kaplan-Meier survival curves were generated with log-rank tests to evaluate the association between adjuvant chemotherapy in high-risk and low-risk cohorts separately.
Results: A total of 2,293 stage II CC patients had dMMR/MSI-H; of those, 29.5% (n = 676) had high-risk features. The high-risk dMMR/MSI-H patients had worse overall survival [5-year survival and 95% CI, 73.2% (67.3-78.1%) vs. 80.3% (76.7-83.5%), p = 0.0001]. In patients with stage II dMMR/MSI-H CC, the high-risk features were associated with shorter overall survival (OS), along with male sex, positive carcinoembryonic antigen, Charlson-Deyo score >1, and older age. Adjuvant chemotherapy administration was associated with better OS both in patients with high-risk features (log-rank test, p = 0.001) and in those without (p = 0.0006). When stratified by age, the benefit of chemotherapy was evident only in patients aged ≥65 with high-risk features.
Conclusion: High-risk features are prognostic in the setting of dMMR/MSI-H stage II CC. Adjuvant chemotherapy may improve survival specifically in patients ≥65 years and with high-risk features.
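The survival comparison this abstract describes (Kaplan-Meier estimates with log-rank tests, stratified by receipt of adjuvant chemotherapy) follows a standard analysis pattern. The sketch below is a minimal illustration of that pattern in Python using the lifelines library, with an invented toy cohort and hypothetical column names; it is not the authors' analysis code.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical cohort: one row per patient, with follow-up time in months,
# an event flag (1 = death), and an adjuvant chemotherapy indicator.
df = pd.DataFrame({
    "months":    [60, 24, 48, 36, 60, 12, 55, 40],
    "death":     [0,  1,  1,  1,  0,  1,  0,  1],
    "adj_chemo": [1,  0,  1,  0,  1,  0,  1,  0],
})

treated = df[df.adj_chemo == 1]
untreated = df[df.adj_chemo == 0]

# Kaplan-Meier estimate for the treated arm (repeat for the untreated arm).
kmf = KaplanMeierFitter()
kmf.fit(treated.months, treated.death, label="adjuvant chemotherapy")
print(kmf.survival_function_)

# Log-rank test comparing the two arms, as reported in the abstract.
result = logrank_test(treated.months, untreated.months,
                      event_observed_A=treated.death,
                      event_observed_B=untreated.death)
print(f"log-rank p = {result.p_value:.4f}")
```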
abstract_id: PUBMED:32927771
Adjuvant Chemotherapy for Stage II Colon Cancer. In stage II colon cancer management, surgery alone has shown a high cure rate (about 80%), and the role of adjuvant chemotherapy is still a matter of debate. Patients with high-risk features (T4, insufficient nodal sampling, grading, etc.) have a poorer prognosis, and adjuvant chemotherapy is usually recommended. The purpose of the present study is to highlight and discuss what remains unclear and incompletely defined from previous trials regarding risk stratification and the therapeutic benefit of adjuvant chemotherapy. Acknowledging the limitations of generalizing, we attempt to quantify the relative contribution of each prognostic factor and the benefit of adjuvant chemotherapy for stage II colon cancer. Finally, we propose a decision algorithm with the aim of summarizing the current evidence and translating it to clinical practice.
abstract_id: PUBMED:26914273
Adjuvant chemotherapy is not associated with improved survival for all high-risk factors in stage II colon cancer. Adjuvant chemotherapy can be considered in high-risk stage II colon cancer comprising pT4, poor/undifferentiated grade, vascular invasion, emergency surgery and/or <10 evaluated lymph nodes (LNs). Adjuvant chemotherapy administration and its effect on survival were evaluated for each known risk factor. All patients with high-risk stage II colon cancer who underwent resection and were diagnosed in the Netherlands between 2008 and 2012 were included. After stratification by risk factor(s) (vascular invasion could not be included), Cox regression was used to discriminate the independent association of adjuvant chemotherapy with the probability of death. Relative survival was used to estimate disease-specific survival. A total of 4,940 of 10,935 patients with stage II colon cancer were identified as high risk, of whom 790 (16%) received adjuvant chemotherapy. Patients with pT4 tumors received adjuvant chemotherapy more often (37%). The probability of death in pT4 patients receiving chemotherapy was lower compared to non-recipients (3-year overall survival 91% vs. 73%, HR 0.43, 95% CI 0.28-0.66). The relative excess risk (RER) of dying was also lower for pT4 patients receiving chemotherapy compared to non-recipients (3-year relative survival 94% vs. 85%, RER 0.36, 95% CI 0.17-0.74). For patients with only poor/undifferentiated grade, emergency surgery or <10 LNs evaluated, no association between receipt of adjuvant chemotherapy and survival was observed. In high-risk stage II colon cancer, adjuvant chemotherapy was associated with higher survival in pT4 only. To prevent unnecessary chemotherapy-induced toxicity, further refinement of patient subgroups within stage II colon cancer who could benefit from adjuvant chemotherapy seems indicated.
abstract_id: PUBMED:34933441
Clinical implication of adjuvant chemotherapy according to mismatch repair status in patients with intermediate-risk stage II colon cancer: a retrospective study. Background: The present study evaluated the clinical implications of adjuvant chemotherapy according to the mismatch repair (MMR) status and clinicopathologic features of patients with intermediate- and high-risk stage II colon cancer (CC).
Methods: This study retrospectively reviewed 5,774 patients who were diagnosed with CC and underwent curative surgical resection at Kyungpook National University Chilgok Hospital. The patients were enrolled according to the following criteria: (1) pathologically diagnosed with primary CC; (2) stage II CC classified based on the 7th edition of the American Joint Committee on Cancer staging system; (3) intermediate- and high-risk features; and (4) available test results for MMR status. A total of 286 patients met these criteria and were included in the study.
Results: Among the 286 patients, 54 (18.9%) were identified as microsatellite instability-high (MSI-H) or deficient MMR (dMMR). Although all the patients identified as MSI-H/dMMR showed better survival outcomes, T4 tumors and adjuvant chemotherapy were identified as independent prognostic factors for survival. For the intermediate-risk patients identified as MSI-low (MSI-L)/microsatellite stable (MSS) or proficient MMR (pMMR), adjuvant chemotherapy exhibited a significantly better disease-free survival (DFS) but had no impact on overall survival (OS). Oxaliplatin-containing regimens showed no association with DFS or OS. Adjuvant chemotherapy was not associated with DFS in intermediate-risk patients identified as MSI-H/dMMR.
Conclusion: The current study found that the use of adjuvant chemotherapy was correlated with better DFS in MSI-L/MSS or pMMR intermediate-risk stage II CC patients.
abstract_id: PUBMED:37725517
Impact of adjuvant chemotherapy on long-term overall survival in patients with high-risk stage II colon cancer: a nationwide cohort study. Background: This study aimed to investigate the impact of adjuvant chemotherapy on long-term survival in unselected patients with high-risk stage II colon cancer including an analysis of each high-risk feature.
Materials And Methods: Data from the Danish Colorectal Cancer Group, the National Patient Registry and the Danish Pathology Registry from 2014 to 2018 were merged. Patients surviving > 90 days were included. High-risk features were defined as emergency presentation, including self-expanding metal stents (SEMS)/loop-ostomy as a bridge to resection, grade B or C anastomotic leakage, pT4 tumors, lymph node yield < 12 or signet cell carcinoma. Eligibility criteria for chemotherapy were age < 75 years, proficient MMR gene expression, and performance status ≤ 2. The primary outcome was 5-year overall survival. Secondary outcomes included the proportion of eligible patients allocated for adjuvant chemotherapy and the time to first administration.
Results: In total 939 of 3937 patients with stage II colon cancer had high-risk features, of whom 408 were eligible for chemotherapy. 201 (49.3%) patients received adjuvant chemotherapy, with a median time to first administration of 35 days after surgery. The crude 5-year overall survival was 84.9% in patients receiving adjuvant chemotherapy compared with 66.3% in patients not receiving chemotherapy, p < 0.001. This association corresponded to an absolute risk difference of 14%.
Conclusion: 5-year overall survival was significantly higher in patients with high-risk stage II colon cancer treated with adjuvant chemotherapy compared with no chemotherapy. Adjuvant treatment was given to less than half of the patients who were eligible for it.
abstract_id: PUBMED:25332117
Adjuvant chemotherapy use and outcomes of patients with high-risk versus low-risk stage II colon cancer. Background: Adjuvant chemotherapy (AC) is frequently considered in patients with stage II colon cancer who are considered to be at high risk. However, to the authors' knowledge, the survival benefits associated with AC in these patients remain largely unproven. In the current study, the authors sought to examine the use of AC in patients with AJCC stage II colon cancer and to compare the impact of AC on outcomes in patients with high-risk versus low-risk disease in a population-based setting.
Methods: Patients with stage II colon cancer who were evaluated at 1 of 5 regional cancer centers in British Columbia from 1999 to 2008 were analyzed. Kaplan-Meier and Cox regression methods were used to correlate high-risk versus low-risk status and receipt of AC with recurrence-free survival (RFS), disease-specific survival (DSS), and overall survival (OS).
Results: A total of 1697 patients were identified: 1286 (76%) with high-risk and 411 (24%) with low-risk disease, among whom 373 (29%) and 51 (12%), respectively, received AC. Individuals with high-risk disease treated with AC were younger (median age, 62 years vs 72 years; P<.001) and had better Eastern Cooperative Oncology Group performance status (0/1: 47% vs 33%; P = .001). For high-risk patients, AC was associated with improved OS (hazard ratio [HR], 0.65; 95% confidence interval [95% CI], 0.50-0.83 [P = .001]). However, no significant benefits with regard to RFS or DSS were observed. Subgroup analyses revealed that AC in patients with T4 disease was associated with significantly improved RFS (HR, 0.63; 95% CI, 0.42-0.95 [P = .03]), DSS (HR, 0.59; 95% CI, 0.37-0.93 [P = .02]), and OS (HR, 0.50; 95% CI, 0.33-0.77 [P = .002]). For patients with low-risk disease, AC was associated with inferior RFS (HR, 2.18; 95% CI, 1.00-4.79 [P = .05]) and DSS (HR, 3.01; 95% CI, 1.10-8.23 [P = .03]).
Conclusions: In this population-based analysis, AC was associated with an OS advantage in high-risk patients, most likely due to patient selection. RFS, DSS, and OS benefits were mainly observed in patients with T4 disease, suggesting a limited role for AC in patients deemed to be high risk by non-T4 features.
abstract_id: PUBMED:24224835
Conventional adverse features do not predict response to adjuvant chemotherapy in stage II colon cancer. Background: The role of adjuvant chemotherapy in patients with stage II colon cancer is unclear. Current guidelines recommend adjuvant chemotherapy for high-risk patients, although the benefit demonstrated to date is small. Our study examined if adjuvant chemotherapy is associated with improved cancer-specific survival in high-risk patients with stage II colon cancer.
Methods: A retrospective review was performed on patients with stage II (T3-4N0M0) colon cancer in a multi-institutional database from 1999 to 2007. Additionally, histology slides were reviewed and cancer-specific survival data were obtained from the state cancer registry. Adverse features examined were perforation, obstruction, T4 disease, poor differentiation, nodal yield less than 12, lymphovascular invasion and perineural invasion. Survival analysis was performed using the Kaplan-Meier method and Cox regression.
Results: There were 458 patients in the study, with a median follow-up of 5.2 years. Four patients (0.8%) were lost to follow-up. There were 290 (63%) high-risk patients, defined as having at least one adverse feature. Patients who had adjuvant chemotherapy were significantly younger (median 61 years versus 72 years, P < 0.001) but had comparable ASA scores (median 2 versus 2, P = 0.3). No significant survival benefit was observed in association with any single adverse feature or with the features grouped. In high-risk patients, the 5-year cancer-specific survival with adjuvant chemotherapy was 84.8% (95% CI 78.7-91.9) compared with 92.7% (95% CI 88.5-96.1) for surgery alone (P = 0.85).
Conclusion: Adjuvant chemotherapy did not significantly improve cancer-specific survival in patients with stage II colon cancer with adverse features. Other markers for selecting appropriate patients for adjuvant treatment are required.
abstract_id: PUBMED:37718392
Interaction analysis of high-risk pathological features on adjuvant chemotherapy survival benefit in stage II colon cancer patients: a multi-center, retrospective study. Background: We aimed to analyze the benefit of adjuvant chemotherapy in high-risk stage II colon cancer patients and the impact of high-risk factors on the prognostic effect of adjuvant chemotherapy.
Methods: This study is a multi-center, retrospective study. A total of 931 patients with stage II colon cancer who underwent curative surgery in 8 tertiary hospitals in China between 2016 and 2017 were enrolled. A Cox proportional hazards model was used to assess the risk factors of disease-free survival (DFS) and overall survival (OS) and to test the multiplicative interaction of pathological factors and adjuvant chemotherapy (ACT). The additive interaction was presented using the relative excess risk due to interaction (RERI). The Subpopulation Treatment Effect Pattern Plot (STEPP) was utilized to assess the interaction of continuous variables on the ACT effect.
Results: A total of 931 stage II colon cancer patients were enrolled in this study; the median age was 63 years (interquartile range: 54-72 years) and 565 (60.7%) patients were male. Younger patients (median age, 58 years vs 65 years; P < 0.001) and patients with high-risk features such as T4 tumors (30.8% vs 7.8%; P < 0.001), grade 3 lesions (36.0% vs 22.7%; P < 0.001), lymphovascular invasion (22.1% vs 6.8%; P < 0.001) and perineural invasion (19.4% vs 13.6%; P = 0.031) were more likely to receive ACT. Patients with perineural invasion (PNI) showed a worse OS and marginally worse DFS (hazard ratio [HR] 2.166, 95% confidence interval [CI] 1.282-3.660, P = 0.004; HR 1.583, 95% CI 0.985-2.545, P = 0.058, respectively). Computing the interaction on a multiplicative and additive scale revealed a significant interaction between PNI and ACT in terms of DFS (HR for multiplicative interaction 0.196, p = 0.038; RERI, -1.996; 95% CI, -3.600 to -0.392) and OS (HR for multiplicative interaction 0.112, p = 0.042; RERI, -2.842; 95% CI, -4.959 to -0.725).
Conclusions: Perineural invasion had prognostic value, and it could also influence the effect of ACT after curative surgery. However, other high-risk features showed no implication of efficacy for ACT in our study.
Trial Registration: This study is registered on ClinicalTrials.gov, NCT03794193 (04/01/2019).
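The additive-interaction statistic quoted in the results, the relative excess risk due to interaction (RERI), has a standard epidemiological definition; with hazard ratios substituted for relative risks, as this abstract does, it reads:

```latex
\mathrm{RERI} = \mathrm{HR}_{11} - \mathrm{HR}_{10} - \mathrm{HR}_{01} + 1
```

Here HR_11 is the hazard ratio for patients with both PNI and ACT, HR_10 and HR_01 are the hazard ratios for each factor alone, and the doubly unexposed group is the reference (HR_00 = 1). A RERI of 0 indicates no additive interaction; the negative values reported above (e.g., -1.996 for DFS) mean the joint effect of PNI and ACT is smaller than the sum of their separate effects, consistent with ACT offsetting the excess risk carried by PNI.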
abstract_id: PUBMED:27417445
Adjuvant chemotherapy is associated with improved survival in patients with stage II colon cancer. Background: The role of adjuvant chemotherapy in patients with stage II colon cancer remains to be elucidated and its use varies between patients and institutions. Currently, clinical guidelines suggest discussing adjuvant chemotherapy for patients with high-risk stage II disease in the absence of conclusive randomized controlled trial data. To further investigate this relationship, the objective of the current study was to determine whether an association exists between overall survival (OS) and adjuvant chemotherapy in patients stratified by age and pathological risk features.
Methods: Data from the National Cancer Data Base were analyzed for demographics, tumor characteristics, management, and survival of patients with stage II colon cancer who were diagnosed from 1998 to 2006 with survival information through 2011. Pearson Chi-square tests and binary logistic regression were used to analyze disease and demographic data. Survival analysis was performed with the log-rank test and Cox proportional hazards regression modeling. Propensity score weighting was used to match cohorts.
Results: Among 153,110 patients with stage II colon cancer, predictors of receiving chemotherapy included age <65 years, male sex, nonwhite race, use of a community treatment facility, non-Medicare insurance, and diagnosis before 2004. Improved and clinically relevant OS was associated with the receipt of adjuvant chemotherapy in all patient subgroups regardless of high-risk tumor pathologic features (poor or undifferentiated histology, <12 lymph nodes evaluated, positive resection margins, or T4 histology), age, or chemotherapy regimen, even after adjustment for covariates and propensity score weighting (hazard ratio, 0.76; P<.001). There was no difference in survival noted between single and multiagent adjuvant chemotherapy regimens.
Conclusions: In what is, to the authors' knowledge, the largest group of patients with stage II colon cancer evaluated to date, improved OS was found to be associated with adjuvant chemotherapy regardless of treatment regimen, patient age, or high-risk pathologic features.
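The "propensity score weighting" mentioned in the results refers to a standard confounding adjustment. The sketch below illustrates the general approach (inverse probability of treatment weighting) in Python with synthetic data and invented column names; it is not the authors' code and does not reflect the actual National Cancer Data Base variables.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic patient-level data: covariates that plausibly predict
# who receives adjuvant chemotherapy.
df = pd.DataFrame({
    "age": rng.normal(68, 10, n),
    "male": rng.integers(0, 2, n),
    "t4": rng.integers(0, 2, n),
})
# Treatment assignment loosely depends on age and T4 status (younger and
# T4 patients more likely treated, mirroring the selection in the abstract).
p_treat = 1 / (1 + np.exp(0.08 * (df.age - 65) - 0.8 * df.t4))
df["chemo"] = rng.random(n) < p_treat

# Fit a propensity model and form inverse-probability weights.
X = df[["age", "male", "t4"]]
ps = LogisticRegression().fit(X, df["chemo"]).predict_proba(X)[:, 1]
df["iptw"] = np.where(df["chemo"], 1 / ps, 1 / (1 - ps))

# The weights would then feed a weighted survival model, e.g.
# lifelines' CoxPHFitter().fit(..., weights_col="iptw").
print(df["iptw"].describe())
```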
abstract_id: PUBMED:26648796
Chemotherapy for Stage II Colon Cancer. The adjuvant treatment of patients with stage II colon cancer is an area of controversy in medical oncology. Adjuvant chemotherapy aims to eradicate micrometastatic disease present at the time of surgery, preventing the development of distant metastatic disease and thereby curing those patients of their cancer. National and international guidelines for the adjuvant treatment of stage II colon cancer recommend a range of treatment options from observation to chemotherapy with single-agent or combination regimens, depending on the presence or absence of high-risk features (poorly differentiated histology, presence of lymphovascular invasion, presence of perineural invasion, report of < 12 lymph nodes, bowel obstruction, localized perforation, or positive margins). In the one prospective study designed to address the role of adjuvant chemotherapy in stage II colon cancer, a small but statistically significant benefit in overall survival was seen for those patients who received adjuvant chemotherapy; however, multiple meta-analyses and retrospective subgroup analyses have called these findings into question. Though there may be a role for adjuvant chemotherapy in the treatment of patients with stage II colon cancer, its incremental benefit is small, at best, and comes with the risks of real and rarely fatal complications of chemotherapy.
Answer: High-risk features in stage II colon cancer, such as T4 disease, bowel obstruction, poorly/undifferentiated histology, lymphovascular or perineural invasion, and fewer than 12 lymph nodes sampled, are associated with a poorer prognosis and are often used to identify patients who may benefit from adjuvant chemotherapy (PUBMED:32927771; PUBMED:26914273; PUBMED:37725517; PUBMED:25332117). Studies have shown that adjuvant chemotherapy can improve overall survival in patients with high-risk stage II colon cancer, particularly in those with T4 disease (PUBMED:26914273; PUBMED:37725517; PUBMED:25332117).
For instance, in patients with stage II colon cancer with deficient mismatch repair (dMMR) or high microsatellite instability (MSI-H), high-risk features were associated with shorter overall survival, and adjuvant chemotherapy administration was associated with better overall survival, especially in patients aged ≥65 years with high-risk features (PUBMED:34760701). Similarly, another study found that adjuvant chemotherapy was associated with higher survival in pT4 patients (PUBMED:26914273).
However, the benefit of adjuvant chemotherapy may not be uniform across all high-risk features. For example, one study found no association between receipt of adjuvant chemotherapy and survival for patients with only poor/undifferentiated grade, emergency surgery, or fewer than 10 lymph nodes evaluated (PUBMED:26914273). Another study suggested that adjuvant chemotherapy did not significantly improve cancer-specific survival in patients with stage II colon cancer with adverse features, indicating that other markers might be needed to select appropriate patients for adjuvant treatment (PUBMED:24224835).
Furthermore, a nationwide cohort study reported that 5-year overall survival was significantly higher in patients with high-risk stage II colon cancer treated with adjuvant chemotherapy compared with no chemotherapy (PUBMED:37725517). A population-based analysis also indicated an overall survival advantage in high-risk patients, particularly those with T4 disease (PUBMED:25332117).
In summary, high-risk features in stage II colon cancer do support the use of adjuvant chemotherapy, as they are associated with a poorer prognosis and studies have shown that adjuvant chemotherapy can improve survival outcomes, especially in certain high-risk subgroups such as those with T4 disease. However, the benefit may not extend to all high-risk features, and further refinement of patient subgroups who could benefit from adjuvant chemotherapy is indicated (PUBMED:26914273; PUBMED:24224835).
Instruction: Does low-level laser therapy enhance the efficacy of intravenous regional anesthesia?
Abstracts:
abstract_id: PUBMED:24945286
Does low-level laser therapy enhance the efficacy of intravenous regional anesthesia? Background: The use of intravenous regional anesthesia (IVRA) is limited by pain resulting from the application of tourniquets and postoperative pain.
Objective: To assess the efficacy of low-level laser therapy added to IVRA for improving pain related to surgical fixation of distal radius fractures.
Methods: The present double-blinded, placebo-controlled, randomized clinical trial involved 48 patients who were undergoing surgical fixation of distal radius fractures. Participants were randomly assigned to either an intervention group (n=24), who received 808 nm laser irradiation at 4 J/point for 20 s over three ipsilateral nerve roots in the cervical region corresponding to the C5-C8 vertebrae, plus 808 nm laser irradiation at 0.1 J/cm2 for 5 min in a tangential scanning mode over the affected extremity; or a control group (n=24), who underwent the same protocol and timing of laser probe application with the laser switched off. Both groups received the same IVRA protocol using 2% lidocaine.
Results: The mean visual analogue scale scores were significantly lower in the laser-assisted group than in the lidocaine-only group on all measurements during and after operation (P<0.05). The mean time to the first need for fentanyl administration during the operation was longer in the laser group (P=0.04). The total amount of fentanyl administered to patients was significantly lower in the laser-assisted group (P=0.003). The laser group needed significantly less pethidine for pain relief (P=0.001) and at a later time (P=0.002) compared with the lidocaine-only group. There was no difference between the groups in terms of mean arterial pressure and heart rate.
Conclusion: The addition of gallium-aluminum-arsenide laser irradiation to intravenous regional anesthesia is safe, and reduces pain during and after the operation.
abstract_id: PUBMED:34738278
Effects of low level laser therapy on injection pain and anesthesia efficacy during local anesthesia in children: A randomized clinical trial. Background: The use of low level laser therapy (LLLT) to reduce injection pain associated with dental local anesthesia is reported in a limited number of studies in adults, but research on the effects of LLLT in children is needed.
Aim: This study aimed to evaluate the effects of topical anesthesia + LLLT on injection pain, anesthesia efficacy, and duration in local anesthesia of children who are undergoing pulpotomy treatment.
Design: The study was conducted as a randomized, controlled-crossover, double-blind clinical trial with 60 children aged 6-9 years. Before local infiltration anesthesia was administered, only topical anesthesia was applied on one side (control group/CG), and topical anesthesia plus LLLT (a diode laser: 810 nm; continuous mode; 0.3 W; 20 s; 69 J/cm2) was applied on the contralateral side (LG) as pre-anesthesia. The injection pain and anesthesia efficacy were evaluated subjectively and objectively using the Wong-Baker Faces Pain Rating Scale (PRS) and the Face, Legs, Activity, Cry, Consolability (FLACC) scale, respectively. Data were analyzed for statistical significance (p < .05).
Results: The "no pain" and "severe pain" rates in the PRS were 41.7% and 3.3% for the LG and 21.7% and 11.7% for the CG, respectively, during injection. Similarly, in the FLACC data, the number of "no pain" responses was higher for the LG than the CG (40%, 33.3%) and no "severe pain" rate was observed in both groups. The only statistically significant difference found for the PRS was p < .05. The median pain score was "0" for the LG and the CG in the FLACC data for the evaluation of anesthesia efficacy, and there was no statistically significant difference between the groups in terms of pain and anesthesia duration (p > .05). Also, most of the children preferred injection with topical anesthesia + LLLT (66.7%).
Conclusions: It has been determined that the application of topical anesthesia + LLLT with an 810-nm diode laser before local infiltration anesthesia reduced injection pain and did not have an effect on anesthesia efficacy and duration in children.
abstract_id: PUBMED:34824496
Effect of Low-level Laser on LI4 Acupoint in Pain Reduction during Local Anesthesia in Children. Background: Pain is a multidimensional construct that involves sensory, emotional, and cognitive processes, and its management is an essential component of child behavior guidance. The injection of a local anesthetic agent during pediatric dental treatment is one of the most painful and distressing procedures performed. Stimulation of acupoint LI4 provides an analgesic effect in the orofacial region, thus decreasing pain during injection.
Aims And Objectives: To compare and evaluate the effect of low-level laser on LI4 acupoint and surface-acting 20% benzocaine gel during local anesthesia.
Materials And Methods: Children aged between 5 and 9 years who required bilateral local anesthesia were scheduled for dental treatment. A split-mouth crossover study was planned with two groups: one receiving low-level laser acupuncture on the LI4 acupoint with a moist cotton swab as placebo at the first visit, and the other receiving 20% benzocaine gel with the low-level laser in off mode as placebo at the second visit, and vice versa. Pain intensity was evaluated using the Sound Eye Motor scale and the Wong-Baker pain rating scale. Pulse rate was measured before, during, and after the procedure using a pulse oximeter.
Results: The average heart rate, Wong-Baker pain rating scale scores, and Sound Eye Motor scale scores were significantly lower in the group receiving low-level laser therapy than in the group receiving placebo low-level laser therapy.
Conclusion: The low-level laser can be used to control pain during local anesthesia in children.
abstract_id: PUBMED:24156887
Intravenous regional anesthesia with long-acting local anesthetics. An update. Intravenous regional anesthesia is a widely used technique for brief surgical interventions, primarily on the upper limbs and, less frequently, on the lower limbs. It began being used at the beginning of the 20th century, when Bier injected procaine as a local anesthetic. The technique used to accomplish anesthesia has not changed much since then, although different drugs, particularly long-acting local anesthetics such as ropivacaine and levobupivacaine in low concentrations, were introduced. Additionally, drugs like opioids, muscle relaxants, paracetamol, neostigmine, magnesium, ketamine, clonidine, and ketorolac have all been investigated as adjuncts to intravenous regional anesthesia, and were found to be fairly useful in terms of an improved onset of operative anesthesia and longer-lasting perioperative analgesia. The present article provides an overview of current knowledge with emphasis on long-acting local anesthetic drugs.
abstract_id: PUBMED:33054004
The combined use of kinesio- and laser therapy in the correction of regional hemodynamic disorders in dilated cardiomyopathy. The search for new methods of symptomatic therapy for dilated cardiomyopathy (DCM) remains a relevant objective of modern cardiology. This is due to the low and short-lived effectiveness of existing conservative and surgical treatments, including drug therapy.
Purpose Of The Study: To evaluate the efficacy of the combined use of kinesio- and laser therapy for the correction of regional hemodynamics in patients with DCM receiving maintenance drug therapy.
Material And Methods: 100 patients with DCM were examined. All patients received differentiated maintenance drug therapy (beta-blockers, ACE inhibitors or, in case of intolerance to the latter, angiotensin II receptor blockers, aldosterone receptor antagonists, diuretics, cardiac glycosides, and antiarrhythmic drugs). Patients were divided into 2 groups at least 3 months after the selection of drug therapy. Patients in the main group additionally received intravenous laser blood irradiation (ILBI) and individually selected unloading therapeutic exercises. Patients in the control group received only drug therapy. The main research method was venous occlusion plethysmography, used to assess regional hemodynamics by determining recirculating blood flow (Qr) and regional vascular resistance (Rr) at rest, as well as venous tone (Vt), reserve blood flow (QH) and regional vascular resistance (RH) during a functional stress test.
Results: Data obtained during dynamic observation (after 1, 3, 6, 9 and 12 months) in the main group indicated a significant increase in Qr and QH and a decrease in Rr, RH, and Vt. No significant positive dynamics were observed in the control group, in which the regional hemodynamic indices had worsened significantly after 9 and 12 months of observation.
Conclusion: Thus, according to venous occlusion plethysmography, the use of unloading therapeutic exercises in combination with ILBI, against the background of rationally selected differentiated drug therapy, significantly improves regional hemodynamics in patients with DCM. The developed methods of symptomatic therapy can be applied in the practice of cardiologists, general practitioners, internists, and rehabilitation physicians to optimize the treatment of patients with DCM.
abstract_id: PUBMED:38268643
Low-level Laser Therapy to Alleviate Pain of Local Anesthesia Injection in Children: A Randomized Control Trial. Aim: The aim of our study was to evaluate and compare pain perception following photobiomodulation (PBM), topical anesthesia, precooling of the injection site, and vibration during administration of local anesthesia injection in pediatric patients aged 6-13 years.
Materials And Methods: In this split-mouth study, a total of 120 patients aged 6-13 years were selected and randomly divided into three equal groups of 40 subjects each. Pain was assessed using the visual analog scale (VAS) and the Wong-Baker Faces Pain Rating Scale after the administration of local anesthesia. Behavior during the procedure was assessed using the Face, Legs, Activity, Cry, Consolability (FLACC) scale completed by the operator. Pulse rate was recorded before and during the administration of local anesthesia using a pulse oximeter. After the procedure, patient compliance was also recorded using a validated questionnaire. The level of significance was set at p < 0.05.
Results: PBM yielded the lowest mean anxiety/pain scores on the VAS, the Wong-Baker Faces Pain Rating Scale, and the FLACC scale, as well as the lowest pulse rate, compared with precooling, vibration, and topical anesthesia. The differences in the recorded pain scores were statistically significant. Children were not anxious about the PBM method and exhibited good compliance (p < 0.001).
Conclusion: Photobiomodulation (PBM) was found to be effective means of reducing injection pain, demonstrating much better efficacy than other tested methods.
Clinical Significance: Photobiomodulation (PBM) can be used effectively to manage procedures that patients frequently find painful, without the need for prescription drugs, which often have side effects.
abstract_id: PUBMED:26647089
Comparison of tramadol and lornoxicam in intravenous regional anesthesia: a randomized controlled trial Background And Objectives: Tourniquet pain is one of the major obstacles for intravenous regional anesthesia. We aimed to compare tramadol and lornoxicam used in intravenous regional anesthesia as regards their effects on the quality of anesthesia, tourniquet pain and postoperative pain as well.
Methods: After ethics committee approval, 51 patients of ASA physical status I-II aged 18-65 years were enrolled. The patients were divided into three groups. Group P (n=17) received 3 mg/kg 0.5% prilocaine; group PT (n=17) 3 mg/kg 0.5% prilocaine + 2 mL (100 mg) tramadol; and group PL (n=17) 3 mg/kg 0.5% prilocaine + 2 mL (8 mg) lornoxicam for intravenous regional anesthesia. Sensory and motor block onset and recovery times were noted, as well as tourniquet pain and postoperative analgesic consumption.
Results: Sensory block onset times in groups PT and PL were shorter, whereas the corresponding recovery times were longer, than those in group P. Motor block onset times in groups PT and PL were shorter than that in group P, whereas the recovery time in group PL was longer than those in groups P and PT. Tourniquet pain onset time was shortest in group P and longest in group PL. There was no difference regarding tourniquet pain among the groups. Group PL displayed the lowest analgesic consumption postoperatively.
Conclusion: Adding tramadol and lornoxicam to prilocaine for intravenous regional anesthesia produces favorable effects on sensory and motor blockade. Postoperative analgesic consumption can be decreased by adding tramadol and lornoxicam to prilocaine in intravenous regional anesthesia.
abstract_id: PUBMED:28466181
Low level laser therapy : A narrative literature review on the efficacy in the treatment of rheumatic orthopaedic conditions Background: In low level laser therapy (LLLT) low wattage lasers are used to irradiate the affected skin areas, joints, nerves, muscles and tendons without any sensation or thermal damage. Although the exact mechanism of its effect is still unknown, it seems beyond dispute that LLLT induces a variety of stimulating processes at the cellular level affecting cell repair mechanisms, the vascular system and lymphatic system. LLLT has been popular among orthopaedic practitioners for many years, whereas university medicine has remained rather sceptical about it.
Objectives: Overview of studies on the efficacy of LLLT in the treatment of rheumatic orthopaedic conditions, i. e. muscle, tendon lesions and arthropathies.
Materials And Methods: Narrative literature review (PubMed, Web of Science).
Results: While earlier studies often failed to demonstrate the efficacy of LLLT, several recent studies of increasing quality have proved the efficacy of LLLT in the treatment of multiple musculoskeletal pain syndromes such as neck or lower back pain, tendinopathies (especially of the Achilles tendon) and epicondylopathies, and chronic inflammatory joint disorders such as rheumatoid arthritis or chronic degenerative osteoarthritis of the large and small joints. In addition, there is recent evidence that LLLT can have a preventive capacity and can enhance muscle strength and accelerate muscle regeneration.
Conclusion: LLLT shows potential as an effective, noninvasive, safe and cost-efficient means to treat and prevent a variety of acute and chronic musculoskeletal conditions. Further randomized controlled studies, however, are required to confirm this positive assessment.
abstract_id: PUBMED:27919663
Efficacy of low level laser therapy in the treatment of burning mouth syndrome: A systematic review. Background: Burning mouth syndrome (BMS) is a chronic pain condition with indefinite cure, predominantly affecting post-menopausal women. The aim of this study was to systematically review the efficacy of low level laser therapy in the treatment of burning mouth syndrome (BMS).
Methods: PubMed, Embase and Scopus were searched from date of inception up to and including October 2016 using various combinations of the following keywords: burning mouth syndrome, BMS, stomatodynia, laser therapy, laser treatment and phototherapy. The inclusion criteria were prospective, retrospective and case series studies. Letters to editors, reviews, experimental studies, studies that were not published in English, theses, monographs, and abstracts presented at scientific events were excluded. Due to heterogeneity of data, no statistical analyses were performed.
Results: Ten clinical studies fulfilled the eligibility criteria, five of which were randomized clinical trials. In these studies, the laser wavelengths, power output and duration of irradiation ranged between 630-980 nm, 20-300 mW, and 10 s-15 min, respectively. Most of the studies reported laser therapy to be an effective strategy for the management of BMS.
Conclusion: Majority of the studies showed that laser therapy seemed to be effective in reducing pain in BMS patients. However, due to the varied methodologies and substantial variations in laser parameters among these studies, more clinical trials are required to ascertain the efficacy of laser for treating BMS.
abstract_id: PUBMED:28536905
Efficacy of low-level laser therapy in management of symptomatic oral lichen planus: a systematic review. Oral lichen planus (OLP) is a chronic inflammatory disease of unknown etiology and indefinite cure. This systematic review assessed the efficacy of low-level laser therapy in the treatment of symptomatic OLP. Electronic databases (PubMed, Scopus, and Web of Science) were searched from date of inception till and including December 2016, using various combinations of the following keywords: oral lichen planus, laser therapy, low-level laser therapy, and phototherapy. Owing to heterogeneity of data, no statistical analyses were conducted. Initially, 227 publications were identified. After selection, only six studies were included in this systematic review. In these studies, the laser wavelengths, power output, and duration of irradiation ranged between 630-980 nm, 20-300 mW, and 10 s-15 min, respectively. All of the included studies found laser to be effective in management of OLP, without any reported adverse effects. The results of the included studies confirm that low-level laser therapy is effective in management of symptomatic OLP and can be used as an alternative to corticosteroids. However, due to variety of methods and substantial variations in laser parameters among these studies, more randomized clinical trials with large sample sizes are highly warranted.
Answer: Yes, low-level laser therapy (LLLT) has been shown to enhance the efficacy of intravenous regional anesthesia (IVRA). A double-blinded, placebo-controlled, randomized clinical trial involving 48 patients undergoing surgical fixation of distal radius fractures found that the addition of gallium-aluminum-arsenide laser irradiation to IVRA reduced pain during and after the operation. The study reported significantly lower visual analogue scale scores in the laser-assisted group compared to the lidocaine-only group, a longer mean time to the first need for fentanyl administration, and significantly lower total fentanyl and pethidine requirements for pain relief in the laser group (PUBMED:24945286).
This finding is consistent with the broader application of LLLT in pain management, as other studies have also reported the efficacy of LLLT in reducing pain in various clinical scenarios. For instance, LLLT has been found to reduce injection pain in children undergoing dental procedures without affecting the efficacy and duration of anesthesia (PUBMED:34738278), and to control pain during local anesthesia in children when applied to the LI4 acupoint (PUBMED:34824496). Moreover, LLLT has been shown to be an effective means of reducing injection pain in children, with better efficacy than other tested methods such as precooling, vibration, and topical anesthesia (PUBMED:38268643).
In summary, the evidence suggests that low-level laser therapy can enhance the efficacy of intravenous regional anesthesia by reducing pain during and after surgical procedures.
Instruction: Planning irreversible electroporation in the porcine kidney: are numerical simulations reliable for predicting empiric ablation outcomes?
Abstracts:
abstract_id: PUBMED:24831827
Planning irreversible electroporation in the porcine kidney: are numerical simulations reliable for predicting empiric ablation outcomes? Purpose: Numerical simulations are used for treatment planning in clinical applications of irreversible electroporation (IRE) to determine ablation size and shape. To assess the reliability of simulations for treatment planning, we compared simulation results with empiric outcomes of renal IRE using computed tomography (CT) and histology in an animal model.
Methods: The ablation size and shape for six different IRE parameter sets (70-90 pulses, 2,000-2,700 V, 70-100 µs) for monopolar and bipolar electrodes were simulated using a numerical model. Employing these treatment parameters, 35 CT-guided IRE ablations were created in both kidneys of six pigs and followed up with CT immediately and after 24 h. Histopathology was analyzed from postablation day 1.
Results: Ablation zones on CT measured 81 ± 18 % (day 0, p ≤ 0.05) and 115 ± 18 % (day 1, p ≤ 0.09) of the simulated size for monopolar electrodes, and 190 ± 33 % (day 0, p ≤ 0.001) and 234 ± 12 % (day 1, p ≤ 0.0001) for bipolar electrodes. Histopathology indicated smaller ablation zones than simulated (71 ± 41 %, p ≤ 0.047) and measured on CT (47 ± 16 %, p ≤ 0.005) with complete ablation of kidney parenchyma within the central zone and incomplete ablation in the periphery.
Conclusion: Both numerical simulations for planning renal IRE and CT measurements may overestimate the size of ablation compared to histology, and ablation effects may be incomplete in the periphery.
abstract_id: PUBMED:29089031
Anisotropically varying conductivity in irreversible electroporation simulations. Background: One recent area of cancer research is irreversible electroporation (IRE). Irreversible electroporation is a minimally invasive procedure where needle electrodes are inserted into the body to ablate tumor cells with electricity. The aim of this paper is to propose a mathematical model in which a tissue's conductivity increases more in the direction of the electrical field, as this has been shown to occur in experiments.
Method: It was necessary to mathematically derive a valid form of the conductivity tensor such that it is dependent on the electrical field direction and can be easily implemented into numerical software. The derivation of a conductivity tensor that can take arbitrary functions for the conductivity in the directions tangent and normal to the electrical field is the main contribution of this paper. Numerical simulations were performed for isotropic-varying and anisotropic-varying conductivities to evaluate the importance of including the electrical field's direction in the formulation for conductivity.
Results: By starting from previously published experimental results, this paper derived a general formulation for an anisotropic-varying tensor for implementation into irreversible electroporation modeling software. The anisotropic-varying tensor formulation allows the conductivity to take into consideration both electrical field direction and magnitude, as opposed to previously published works that only took into account electrical field magnitude. The anisotropic formulation predicts roughly a five percent decrease in ablation size for the monopolar simulation and approximately a ten percent decrease in ablation size for the bipolar simulations. This is a positive result, as previously reported results found the isotropic formulation to overpredict ablation size for both monopolar and bipolar simulations. Furthermore, it was also reported that the isotropic formulation overpredicts the ablation size more for the bipolar case than the monopolar case. Thus, our results follow the experimental trend by showing a larger percentage change in volume for the bipolar case than for the monopolar case.
Conclusions: The predicted volume of ablated cells decreased, which could be a possible explanation for the slight over-prediction produced by isotropic-varying formulations.
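The abstract summarizes, but does not reproduce, the tensor it derives. A conductivity tensor with independent values tangent and normal to the local electric field, which is the general structure the paper describes, can be written in the following standard form; this is a reconstruction under that assumption, not a quotation from the paper:

```latex
\hat{\mathbf{e}} = \frac{\mathbf{E}}{\lVert \mathbf{E} \rVert}, \qquad
\boldsymbol{\sigma}(\mathbf{E}) =
  \sigma_{t}\!\left(\lVert\mathbf{E}\rVert\right) \hat{\mathbf{e}}\hat{\mathbf{e}}^{\mathsf{T}}
  + \sigma_{n}\!\left(\lVert\mathbf{E}\rVert\right) \left(\mathbf{I} - \hat{\mathbf{e}}\hat{\mathbf{e}}^{\mathsf{T}}\right)
```

where sigma_t and sigma_n are arbitrary functions of the field magnitude giving the conductivity tangent and normal to the field, and I is the identity. Setting sigma_t = sigma_n recovers the isotropic field-dependent model that the paper reports as over-predicting ablation size.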
abstract_id: PUBMED:28952520
Uncertainty Quantification in Irreversible Electroporation Simulations. One recent area of cancer research is irreversible electroporation (IRE). Irreversible electroporation is a minimally invasive procedure where needle electrodes are inserted into the body to ablate tumor cells with electricity. The aim of this paper is to investigate how uncertainty in tissue and tumor conductivity propagates into the final ablation predictions used for treatment planning. Two-dimensional simulations were performed for a circular tumor surrounded by healthy tissue, electroporated from two monopolar electrodes. The conductivity values were treated as random variables whose distributions were taken from published literature on the average and standard deviation of liver tissue and liver tumors. Three different Monte Carlo setups were each simulated at three different voltages. Average and standard deviation data were reported for a multitude of electrical field properties experienced by the tumor. Plots showing the variability in the electrical field distribution throughout the tumor are also presented.
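A minimal sketch of the Monte Carlo procedure this abstract describes (sampling tissue and tumor conductivities from literature-based distributions and propagating them through a field solver) might look like the following Python fragment. The solver is replaced by a toy surrogate and the distribution parameters are placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1000  # number of Monte Carlo samples

def solve_field(sigma_tissue, sigma_tumor, voltage):
    """Placeholder for the 2-D electrostatic solver used in such studies.
    Returns a scalar summary of the field experienced by the tumor."""
    # Toy surrogate: higher tumor conductivity lowers the field summary.
    return voltage / 3000 * sigma_tissue / (sigma_tissue + sigma_tumor)

# Conductivities treated as random variables (hypothetical mean/sd, S/m).
sigma_tissue = rng.normal(0.20, 0.03, N)
sigma_tumor = rng.normal(0.40, 0.08, N)

summary = np.array([
    solve_field(st, sm, voltage=2500)
    for st, sm in zip(sigma_tissue, sigma_tumor)
])

# Report average and spread, mirroring the abstract's reporting.
print(f"mean = {summary.mean():.3f}, sd = {summary.std(ddof=1):.3f}")
```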
abstract_id: PUBMED:26443800
Feasibility of a Modified Biopsy Needle for Irreversible Electroporation Ablation and Periprocedural Tissue Sampling. Objectives: To test the feasibility of modified biopsy needles as probes for irreversible electroporation ablation and periprocedural biopsy.
Methods: Core biopsy needles of 16-G/9-cm were customized to serve as experimental ablation probes. Computed tomography-guided percutaneous irreversible electroporation was performed in in vivo porcine kidneys with pairs of experimental (n = 10) or standard probes (n = 10) using a single parameter set (1667 V/cm, ninety 100 µs pulses). Two biopsy samples were taken immediately following ablation using the experimental probes (n = 20). Ablation outcomes were compared using computed tomography, simulation, and histology. Biopsy and necropsy histology were compared.
Results: Simulation-suggested ablations with experimental probes were smaller than those with standard electrodes (455.23 vs 543.16 mm2), although both exhibited similar shapes. Computed tomography (standard: 556 ± 61 mm2, experimental: 515 ± 67 mm2; P = .25) and histology (standard: 313 ± 77 mm2, experimental: 275 ± 75 mm2; P = .29) indicated that ablations with experimental probes were not significantly different from those with the standard probes. Histopathology indicated similar morphological changes in both groups. Biopsies from the ablation zone yielded at least 1 core with sufficient tissue for analysis in 11 of the 20 samples.
Conclusions: A combined probe for irreversible electroporation ablation and periprocedural tissue sampling from the ablation zone is feasible. Ablation outcomes are comparable to those of standard electrodes.
abstract_id: PUBMED:36635955
Global sensitivity study for irreversible electroporation: Towards treatment planning under uncertainty. Background: Electroporation-based cancer treatments are minimally invasive, nonthermal interventional techniques that leverage cell permeabilization to ablate the target tumor. However, the amount of permeabilization is susceptible to the numerous uncertainties during treatment, such as patient-specific variations in the tissue, type of the tumor, and the resolution of imaging equipment. These uncertainties can reduce the extent of ablation in the tissue, thereby affecting the effectiveness of the treatment.
Purpose: The aim of this work is to understand the effect of these treatment uncertainties on the treatment outcome for irreversible electroporation (IRE) in the case of colorectal liver metastasis (CRLM). Understanding the nature and extent of these effects can help us identify the influential treatment parameters and build better models for predicting the treatment outcome.
Methods: This is an in silico study using a static computational model with a custom applicator design and spherical tissue and tumor geometry. A nonlinear electrical conductivity, dependent on the local electric field, is considered. Morris analysis is used to identify the treatment parameters that most influence the treatment outcome. Seven treatment parameters pertaining to the relative tumor location with respect to the applicator, the tumor growth pattern, and the electrical conductivity of tissue are analyzed. The treatment outcome is measured in terms of the relative tumor ablation with respect to the target ablation volume and the total ablation volume.
Results: The Morris analysis was performed with 800 model evaluations, sampled from the seven-dimensional input parameter space. Electrical properties of the tissue, especially the electrical conductivity of the tumor before ablation, were found to be the most influential parameter for relative tumor ablation and total ablation volume. This parameter was found to be about 4-15 times more influential than the least influential parameter, depending on the tumor size. The tumor border configuration was identified as the least important parameter for treatment effectiveness. The most desired treatment outcome is obtained by a combination of high healthy-liver conductivity and low tumor conductivity. This information can be used to tackle worst-case scenarios in treatment planning. Finally, when the safety margins used in clinical applications are accounted for, the effects of uncertainties in the treatment parameters decrease drastically.
Conclusions: The results of this work can be used to create an efficient surrogate estimator for uncertainty quantification in the treatment outcome, that can be utilized in optimal real-time treatment planning solutions.
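For readers unfamiliar with the Morris method used in this study: it screens parameters by sampling trajectories through the input space, running the model once per sample, and ranking parameters by their elementary effects. The sketch below shows that workflow with the SALib Python package; the parameter names, bounds, and model are illustrative stand-ins, not the study's actual seven-parameter setup.

```python
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

# Illustrative 3-parameter problem (the study analyzed seven parameters).
problem = {
    "num_vars": 3,
    "names": ["sigma_tumor", "sigma_liver", "tumor_offset_mm"],
    "bounds": [[0.2, 0.6], [0.1, 0.3], [0.0, 5.0]],
}

def model(x):
    """Placeholder treatment-outcome model (relative tumor ablation)."""
    sigma_tumor, sigma_liver, offset = x
    return sigma_liver / sigma_tumor - 0.05 * offset

X = morris_sample.sample(problem, N=100, num_levels=4)
Y = np.apply_along_axis(model, 1, X)

# mu_star ranks parameters by mean absolute elementary effect.
Si = morris_analyze.analyze(problem, X, Y, num_levels=4)
for name, mu_star in zip(problem["names"], Si["mu_star"]):
    print(f"{name}: mu* = {mu_star:.3f}")
```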
abstract_id: PUBMED:28093955
Anatomically Realistic Simulations of Liver Ablation by Irreversible Electroporation: Impact of Blood Vessels on Ablation Volumes and Undertreatment. Irreversible electroporation is a novel tissue ablation technique which entails delivering intense electrical pulses to target tissue, hence producing fatal defects in the cell membrane. The present study numerically analyzes the potential impact of liver blood vessels on ablation by irreversible electroporation because of their influence on the electric field distribution. An anatomically realistic computer model of the liver and its vasculature within an abdominal section was employed, and blood vessels down to 0.4 mm in diameter were considered. In this model, the electric field distribution was simulated in a large series of scenarios (N = 576) corresponding to plausible percutaneous irreversible electroporation treatments with needle electrode pairs. These modeled treatments were relatively superficial (maximum penetration depth of the electrode within the liver = 26 mm) and it was ensured that the electrodes neither penetrated the vessels nor were in contact with them. In terms of total ablation volume, the maximum deviation caused by the presence of the vessels was 6%, which could be considered negligible compared to the impact of other sources of uncertainty. Sublethal field magnitudes were noticed around vessels, covering volumes of up to 228 mm3. If in this model the blood was substituted by a liquid with a low electrical conductivity (0.1 S/m), the maximum volume covered by sublethal field magnitudes was 3.7 mm3 and almost no sublethal regions were observable. We conclude that undertreatment around blood vessels may occur in current liver ablation procedures by irreversible electroporation. Infusion of isotonic low-conductivity liquids into the liver vasculature could prevent this risk.
abstract_id: PUBMED:27026913
Comparison of ablation defect on MR imaging with computer simulation estimated treatment zone following irreversible electroporation of patient prostate. To determine whether patient specific numerical simulations of irreversible electroporation (IRE) of the prostate correlates with the treatment effect seen on follow-up MR imaging. Computer models were created using intra-operative US images, post-treatment follow-up MR images and clinical data from six patients receiving IRE. Isoelectric contours drawn using simulation results were compared with MR imaging to estimate the energy threshold separating treated and untreated tissue. Simulation estimates of injury to the neurovascular bundle and rectum were compared with clinical follow-up and patient reported outcomes. At the electric field strength of 700 V/cm, simulation estimated electric field distribution was not different from the ablation defect seen on follow-up MR imaging (p = 0.43). Simulation predicted cross sectional area of treatment (mean 532.33 ± 142.32 mm(2)) corresponded well with the treatment zone seen on MR imaging (mean 540.16 ± 237.13 mm(2)). Simulation results did not suggest injury to the rectum or neurovascular bundle, matching clinical follow-up at 3 months. Computer simulation estimated zone of irreversible electroporation in the prostate at 700 V/cm was comparable to measurements made on follow-up MR imaging. Numerical simulation may aid treatment planning for irreversible electroporation of the prostate in patients.
abstract_id: PUBMED:28318494
The role of irreversible electroporation in hepato-pancreatico-biliary surgery. Irreversible electroporation is a novel technique that has grown in popularity over recent years among the ablative modalities. Its unique mechanism of action produces irreversible nanopores in the cell membrane, leading to apoptosis; therefore, irreversible electroporation can be used to ablate substantial volumes of tissue without undesirable thermal effects such as the "heat sink effect". Moreover, the extracellular matrix is left unperturbed, thus sparing the structural architecture of surrounding structures such as bile ducts and blood vessels. In recent years its use has become widespread in both liver and pancreatic ablation. Irreversible electroporation has shown safety (albeit with some caution warranted), feasibility and favorable outcomes in clinical settings such as unresectable locally advanced disease, in which surgical and therapeutic options are very limited.
abstract_id: PUBMED:27546157
Ultrasound and Contrast-Enhanced Ultrasound for Evaluation of Irreversible Electroporation Ablation: In Vivo Proof of Concept in Normal Porcine Liver. The objective of this study was to describe the performance of ultrasound (US) and contrast-enhanced ultrasound (CEUS) within 2 h after irreversible electroporation (IRE) ablation of porcine liver. Six IRE ablations were performed on porcine liver in vivo; ultrasound assessments were performed within 2 h after IRE ablation. On US images, the ablation zone appeared as a hypo-echoic area within 10 min after the ablation, and then the echo of the ablation zone gradually increased. On CEUS images, the ablation zone appeared as a non-enhanced area within 10 min after ablation and then was gradually centripetally filled by microbubbles. A hyper-echoic rim on US images and a hyper-enhanced rim on CEUS images appeared in the periphery of the ablation zone 60 min after the ablation. Characteristic and dynamic ultrasound images of the IRE ablation zone were obtained within 2 h after IRE ablation of in vivo porcine liver.
abstract_id: PUBMED:31938222
In vivo evaluation of bronchial injury of irreversible electroporation in a porcine lung ablation model by using laboratory, pathological, and CT findings. Irreversible electroporation (IRE) creates permanent pores in the cell membrane, leading to irreversible cell death. In this study, the impact of IRE on bronchial injury was comprehensively examined in a timed-series study. Altogether, 8 Bama miniature pigs were included in this study and were randomly assigned to experimental and control groups. The experimental group underwent IRE that was guided and monitored by spiral computed tomography (CT). The monopolar probe of the IRE device was positioned at the right pulmonary hilum. Specimens were collected at 0 h, 2 h, 2 d, 7 d, and 14 d after the IRE procedure for pathological examination. A small amount of needle-tract bleeding occurred in two animals, and mild pneumothorax occurred in another. IRE can elicit acute bronchial inflammation, bleeding, and mucosal injury, but severe complications were not found. Pathological examination and transmission electron microscopy (TEM) showed dead vascular epithelial cells in the region of the ablation, while the bronchioli and the vascular extracellular matrix were preserved. At 2 hours post-IRE, there were marked increases in bronchoalveolar macrophages (P<0.001), but the inflammation recovered after 14 days, showing no statistically significant difference from the control group at the same time point. In conclusion, CT-guided IRE ablation can elicit acute but recoverable bronchial inflammation, bleeding, and mucosal injury in porcine lung tissues. However, longer follow-up is still required to establish an evaluation of the long-term safety.
Answer: Numerical simulations are commonly used for treatment planning in clinical applications of irreversible electroporation (IRE) to determine ablation size and shape. However, the reliability of these simulations for predicting empirical ablation outcomes has been questioned. A study comparing simulation results with empirical outcomes of renal IRE using computed tomography (CT) and histology in a porcine model found that both numerical simulations and CT measurements may overestimate the size of ablation compared to histology. Additionally, ablation effects may be incomplete in the periphery (PUBMED:24831827).
Another study that proposed a mathematical model incorporating anisotropically varying conductivity, which increases more in the direction of the electrical field, found that this anisotropic formulation predicts a decrease in ablation size compared to isotropic formulations, which have been reported to overpredict ablation size (PUBMED:29089031).
Furthermore, uncertainty quantification in IRE simulations has been investigated, revealing that uncertainty in tissue and tumor conductivity can significantly affect final ablation predictions used for treatment planning (PUBMED:28952520).
In the context of liver ablation, anatomically realistic simulations have shown that the presence of blood vessels can impact the electric field distribution and potentially lead to undertreatment around vessels (PUBMED:28093955).
Overall, while numerical simulations are a valuable tool for planning IRE treatments, their reliability for predicting empirical outcomes can be affected by various factors, including the electrical properties of tissues and the presence of blood vessels. Adjustments to the simulation models, such as incorporating anisotropic conductivity and accounting for uncertainties, may improve their predictive accuracy. |
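To make the treatment-planning idea above concrete, here is a minimal illustrative sketch of the kind of electric-field computation that underlies IRE ablation-size prediction. It is not any of the cited models: it solves Laplace's equation on a 2D grid for two needle electrodes in homogeneous tissue, and the grid size, applied voltage, and the 650 V/cm lethal-field threshold are all assumed values chosen for illustration.

```python
import numpy as np

# Minimal 2D Laplace solver: electric potential between two IRE needle
# electrodes in homogeneous tissue. All parameters are illustrative
# assumptions, not values from the cited studies.
N = 101                              # grid points per side
h = 0.05                             # grid spacing in cm (5 cm x 5 cm domain)
V = np.zeros((N, N))                 # electric potential in volts

anode = (N // 2, N // 2 - 10)        # electrode positions (row, col)
cathode = (N // 2, N // 2 + 10)

# Jacobi iteration with fixed electrode and boundary potentials
for _ in range(5000):
    V = 0.25 * (np.roll(V, 1, 0) + np.roll(V, -1, 0) +
                np.roll(V, 1, 1) + np.roll(V, -1, 1))
    V[anode], V[cathode] = 1500.0, -1500.0   # 3000 V pulse, split symmetrically
    V[0, :] = V[-1, :] = V[:, 0] = V[:, -1] = 0.0

# Field magnitude |E| = |grad V|; predict ablation where |E| >= threshold
Ey, Ex = np.gradient(V, h)           # V/cm on a cm-spaced grid
E = np.hypot(Ex, Ey)
ablated = E >= 650.0                 # assumed lethal IRE threshold in V/cm
print(f"Predicted ablation area: {ablated.sum() * h * h:.2f} cm^2")
```

Because this sketch assumes homogeneous, isotropic conductivity and ignores blood vessels, it illustrates exactly the simplifications that, per the abstracts above, can make such simulations overestimate the real ablation zone.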
Instruction: Is intermittent dobutamine treatment beneficial in patients with dilated cardiomyopathy?
Abstracts:
abstract_id: PUBMED:12223320
Is intermittent dobutamine treatment beneficial in patients with dilated cardiomyopathy? Objective: Dobutamine is a sympathomimetic drug, which can be used in patients with dilated cardiomyopathy (DCM). We investigated the effects of intermittent dobutamine use on cardiac parameters and quality of life in patients with DCM.
Methods: Twelve patients with ischemic and idiopathic DCM, refractory to conventional therapy, were included in the study. In addition to traditional treatment, dobutamine (1-2 microg/kg/min infusion, increasing up to 10 microg/kg/min, for 3 days) was administered and repeated at the 1st, 2nd and 3rd months. The patients were evaluated 3 times: before and immediately after the first treatment, and after the third-month treatment, using echocardiography, exercise stress testing, ambulatory ECG, right ventricular catheterization, cardiac enzymes (creatine kinase MB isoenzyme - CK-MB, troponin-T) and the Minnesota Living with Heart Failure Questionnaire for quality of life.
Results: After the first treatment, left ventricular ejection fraction (LVEF), cardiac output, cardiac index (CI), pulmonary capillary wedge pressure (PCWP) and quality of life improved significantly (p<0.05); after the third-month treatment, however, these parameters, except PCWP, returned to nearly baseline values. Additionally, a significant increase in the number of patients with ventricular premature beats and with troponin-T positivity was detected after the third month of treatment.
Conclusion: The use of dobutamine in addition to conventional therapy in patients with DCM provided improvements in some systolic parameters and quality of life particularly after the first treatment. In the late period of the treatment, however, it was determined that these beneficial effects tended to disappear and harmful effects became more evident.
abstract_id: PUBMED:16023232
The long-term survival benefit conferred by intermittent dobutamine infusions and oral amiodarone is greater in patients with idiopathic dilated cardiomyopathy than with ischemic heart disease. Background: Intermittent dobutamine infusions (IDI) combined with oral amiodarone improve the survival of patients with end-stage congestive heart failure (CHF). The purpose of the present study was to evaluate whether the response to long-term treatment with IDI+amiodarone is different in patients with ischemic heart disease (IHD) versus idiopathic dilated cardiomyopathy (IDC).
Methods: The prospective study population consisted of 21 patients with IHD (the IHD Group) and 16 patients with IDC (the IDC Group) who presented with decompensated CHF despite optimal medical therapy, and were successfully weaned from an initial 72-h infusion of dobutamine. They were placed on a regimen of oral amiodarone, 400 mg/day and weekly IDI, 10 microg/kg/min, for 8 h.
Results: There were no differences in baseline clinical and hemodynamic characteristics between the 2 groups. The probability of 2-year survival was 44% in the IDC Group versus 5% in the IHD Group (log-rank, P=0.004). Patients with IDC had a 77% relative risk reduction in death from all causes compared to patients with IHD (odds ratio 0.27, 95% confidence interval 0.13 to 0.70, P=0.007). In contrast, no underlying disease-related difference in outcomes was observed in a retrospectively analyzed historical Comparison Group of 29 patients with end stage CHF treated by standard methods.
Conclusions: Patients with end stage CHF due to IDC derived a greater survival benefit from IDI and oral amiodarone than patients with IHD.
abstract_id: PUBMED:8122866
Intermittent infusions of dobutamine in the treatment of chronic cardiac failure in the terminal stage. This study was designed to evaluate the efficacy of intermittent dobutamine infusions in patients with end-stage congestive heart failure. Twenty-four patients with NYHA Stage IV congestive heart failure were included. Mean age was 65.8 years (range 40-85). All patients were under optimal medical therapy with digitalis, diuretics and vasodilating agents. Diagnosis was coronary heart disease in 11 patients, dilated cardiomyopathy in 9, and valvular incompetence in 4. Dobutamine was given at an initial dosage of 2.5 micrograms/kg/min, increased by 2.5 micrograms/kg/min every 15 minutes until the desired effect was achieved. Mean dosage was 10 micrograms/kg/min. Patients were given an identical infusion two weeks later, then at monthly intervals after improvement of the clinical status. Heart failure stage improved in all patients: 14 patients were stage II and 10 were stage III. Ejection fraction failed to improve (19.5 +/- 10 to 18.5 +/- 5). Mortality rate was 5% at one year, 24% at two years, and 45% at four years.
abstract_id: PUBMED:10087562
Torsade de pointes ventricular tachycardia during low dose intermittent dobutamine treatment in a patient with dilated cardiomyopathy and congestive heart failure. The authors describe the case of a 56-year-old woman with chronic, severe heart failure secondary to dilated cardiomyopathy and absence of significant ventricular arrhythmias who developed QT prolongation and torsade de pointes ventricular tachycardia during one cycle of intermittent low dose (2.5 mcg/kg per min) dobutamine. This report of torsade de pointes ventricular tachycardia during intermittent dobutamine supports the hypothesis that unpredictable fatal arrhythmias may occur even with low doses and in patients with no history of significant rhythm disturbances. The mechanisms of the proarrhythmic effects of dobutamine are discussed.
abstract_id: PUBMED:8948268
Medium-term effectiveness of L-thyroxine treatment in idiopathic dilated cardiomyopathy. Background: In dilated cardiomyopathy, short-term administration of L-thyroxine (100 micrograms/day) improves cardiac and exercise performance without changing the heart's adrenergic sensitivity. The aim of this study was to test the medium-term (3 months) efficacy of L-thyroxine (10 patients) compared with placebo (10 patients) and to find out whether longer-term effects are obtainable.
Methods: Echocardiographic parameters in the control state and during acute changes of left ventricular afterload, cardiopulmonary exercise test, and hemodynamic parameters, including cardiac beta 1 responses to dobutamine, were obtained before and at the end of treatment.
Results: Significant (P < 0.05) changes were observed only with the active drug. After L-thyroxine, patients did not show evidence of chemical hyperthyroidism, despite the increase in thyroxine and the reduction in thyroid-stimulating hormone plasma levels. Cardiac performance improved, as shown by the increase in the left ventricular ejection fraction and rightward shift of the slope of the relation left ventricular ejection fraction/end-systolic stress. Resting cardiac output increased, and the left ventricular diastolic dimensions and systemic vascular resistances decreased. The responses of cardiac output and heart rate to dobutamine infusion were also enhanced. Functional capacity markedly improved, together with an increase in peak exercise cardiac output.
Conclusion: L-thyroxine does not lose its beneficial effects on cardiac and exercise performance on medium-term administration and does not induce adverse effects. In addition to the findings of the short-term study, the left ventricular diastolic dimensions were decreased. An upregulation of beta 1 receptors might explain the cardiac response to dobutamine.
abstract_id: PUBMED:16183152
Reverse left ventricular remodeling by intermittent dobutamine infusions and amiodarone in end-stage heart failure due to idiopathic dilated cardiomyopathy. Background: The aim of this study was to evaluate the long-term effect of combined intermittent dobutamine infusions (IDI) and oral amiodarone on reverse left ventricular (LV) remodeling and hemodynamics of patients with idiopathic dilated cardiomyopathy (IDC) and end-stage congestive heart failure (CHF).
Methods: This non-randomized, prospective, clinical trial included sixteen consecutive patients suffering from dyspnea for a mean of 76+/-43 months, who presented with acute cardiac decompensation and were weaned from dobutamine therapy after an initial 72-h infusion. They were then placed on a regimen of oral amiodarone, 400 mg/day and weekly IDI, 10 microg/kg/min, for 8 h. The long-term clinical outcomes and the effects of treatment on reverse LV remodeling (echocardiographic parameters) and hemodynamics were evaluated at 3, 6, and 12 months of follow up.
Results: A significant degree of reverse LV remodeling, hemodynamic improvement, and survival >1.5 years were observed in 9 of the 16 patients (56%). In addition, 5 patients (31% of the entire cohort) were weaned from IDI after a mean of 61+/-41 weeks, and 4 remained clinically stable for 116+/-66 weeks thereafter. At 12 months of follow-up, LV end-diastolic and end-systolic volume indices had decreased from 231+/-91 to 206+/-80 ml/m2 (P=0.002) and from 137+/-65 to 110+/-50 ml/m2 (P=0.003), respectively, right atrial pressure from 16+/-6 to 5.6+/-4 mm Hg (P=0.031), and pulmonary capillary wedge pressure from 29+/-4 to 16+/-5.4 mm Hg (P<0.001), while LV ejection fraction had increased from 22+/-6% to 27.3+/-8% (P=0.006).
Conclusions: In end-stage CHF due to IDC, long-term treatment with IDI and oral amiodarone caused reverse LV remodeling, and allowed permanent and successful weaning from IDI in 1/4 of patients.
abstract_id: PUBMED:17395050
Carvedilol improves left atrial and left ventricular function and reserve in dilated cardiomyopathy after 1 year of treatment. Background: The aim of this study was to evaluate the effects of carvedilol therapy on left atrial (LA) function in patients with heart failure from nonischemic dilated cardiomyopathy.
Methods And Results: Thirty-five patients (42.4 +/- 13.5 years) in New York Heart Association functional Class II-III were studied. A low-dose (20 microg/kg/min) echo-dobutamine study was performed before and 12 months after carvedilol therapy. Twelve months after carvedilol therapy, a significant improvement in LA and left ventricular (LV) function was observed. To investigate the beneficial effects of carvedilol, patients were separated into 2 groups according to the presence of pretreatment LV contractile reserve (CR) (ejection fraction [EF] increases >20% after dobutamine infusion): Group A consisted of 18 patients with CR and Group B of 17 patients without CR. After carvedilol treatment, both LV and LA function were improved in group A (P < .01 for all). However, in group B, only LA function was significantly improved (left atrial ejection volume increased from 10.4 +/- 3 mL to 15.4 +/- 6.7, P < .01, and LA ejection fraction from 19.6 +/- 45.3% to 29.4 +/- 12.5%, P < .01), whereas the LV contractile reserve had partially reappeared (EF from 29.9 +/- 4.5% at baseline increased after dobutamine infusion to 35.8 +/- 6.8%, P < .0001).
Conclusions: In conclusion, carvedilol therapy is associated with improvement in both LV and LA functions in nonischemic dilated cardiomyopathy. In a subgroup of these patients, carvedilol may act differently on LV and LA function.
abstract_id: PUBMED:15534058
Dobutamine stress 99mTc-tetrofosmin quantitative gated SPECT predicts improvement of cardiac function after carvedilol treatment in patients with dilated cardiomyopathy. Unlabelled: We evaluated whether dobutamine stress (99m)Tc-tetrofosmin quantitative gated SPECT (D-QGS) could predict improvement of cardiac function by carvedilol therapy in patients with dilated cardiomyopathy (DCM).
Methods: The study included 30 patients with idiopathic DCM and a left ventricular ejection fraction (LVEF) of <45%. D-QGS was performed in all patients to measure LVEF at rest and during dobutamine infusion (10 microg/kg/min). LVEF and left ventricular end-diastolic volume (LVEDV) were determined by echocardiography, plasma brain natriuretic peptide (BNP) was measured, and the New York Heart Association (NYHA) functional class was estimated at baseline and after 1 y of combined treatment with an angiotensin-converting enzyme (ACE) inhibitor, diuretic, and the beta-blocker carvedilol. After treatment, the echocardiographic LVEF improved by >5% in 15 patients (group A) but did not improve in the remaining 15 patients (group B).
Results: The baseline LVEF, LVEDV, plasma BNP, and NYHA functional class were similar in both groups. However, there was a greater increase of LVEF (Delta LVEF) with dobutamine infusion during D-QGS in group A than in group B (12.0% +/- 5.8% vs. 2.7% +/- 4.2%, P < 0.0001). When a cutoff value of 6.6% for Delta LVEF was used to predict the improvement of LVEF by carvedilol therapy, the sensitivity was 86.7%, the specificity was 86.7%, and the accuracy was 86.7%. LVEDV, plasma BNP, and NYHA functional class all showed superior improvement in group A compared with group B.
Conclusion: Delta LVEF measured by D-QGS was significantly larger in patients who responded to carvedilol than that in nonresponders. These findings indicate that D-QGS can be used to predict improvement of cardiac function and heart failure symptoms by carvedilol therapy in patients with idiopathic DCM.
abstract_id: PUBMED:3299303
Review of intermittent dobutamine infusions for congestive cardiomyopathy. Dobutamine is a potent inotropic agent traditionally used for treatment of acute cardiac decompensation of congestive heart failure (CHF). It acts primarily by increasing myocardial contractility and cardiac output. It has a rapid onset of action, a half-life of 2 minutes, and a duration of action of 10 minutes. Recently, the therapeutic effect of dobutamine was noted to be prolonged beyond the discontinuation of an infusion, persisting for 4-10 weeks after infusion of 48-72 hours. Because of this prolonged effect, dobutamine infusions were evaluated in outpatients with intractable CHF and were effective in improving their functional status. No effect on survival rates may be expected, but this form of therapy may improve the patient's lifestyle. Although several factors may limit the application of dobutamine infusion to outpatients, it offers an effective alternative to traditional therapy for select patients.
abstract_id: PUBMED:8707430
Beneficial cumulative role of both nitroglycerin and dobutamine on right ventricular systolic function in congestive heart failure patients awaiting heart transplantation. End-stage idiopathic dilated cardiomyopathy or ischemic heart disease usually present with very low cardiac output and severe ventricular dysfunction which may require pharmacological support before heart transplantation. Right ventricular ejection fraction might be an important factor of functional capacity and survival in congestive heart failure. In order to test the immediate response of right ventricular hemodynamic parameters to nitroglycerin and dobutamine usually used to treat severe left ventricular dysfunction, we studied 17 congestive heart failure patients (15 men, two women; mean age 55 +/- 13 years) with end-stage idiopathic dilated cardiomyopathy (n = 10) or end-stage ischemic heart disease (n = 7), left ventricular ejection fraction < 35% (mean 22 +/- 8%), and sinus rhythm. A well validated thermodilution technique using a dedicated catheter with a fast catheter-computer response, permitting instantaneous measurements of right ventricular ejection fraction, was used. Right ventricular hemodynamic parameters were recorded at baseline, after an intravenous bolus injection of 3 mg nitroglycerin and after an intravenous infusion of dobutamine administered after nitroglycerin until normalization of cardiac index or a maximal dose of 15 micrograms/kg/min. Pulmonary artery mean pressure significantly decreased after nitroglycerin (43 +/- 9 mmHg at baseline vs. 31 +/- 10 mmHg after nitroglycerin, P < 0.0001) and did not subsequently change after dobutamine (32 +/- 10 mmHg after dobutamine, ns). Cardiac index was not affected by nitroglycerin (1.7 +/- 0. l/min/m2 at baseline vs. 2.0 +/- 0.3 l/min/m2 after nitroglycerin, ns), but dramatically increased after dobutamine (3.0 +/- 1.0 l/min/m2 after dobutamine, P < 0.0001). Concomitantly to the changes of these two parameters, right ventricular ejection fraction progressively increased (14 +/- 8% at baseline vs. 20 +/- 10% after nitroglycerin (P < 0.0006) vs. 28 +/- 13% after dobutamine (P < 0.0001)). Progressive increase of right ventricular ejection fraction after administration of nitroglycerin followed by administration of dobutamine suggests the beneficial cumulative role of both medications on right ventricular systolic function in severe congestive heart failure.
Answer: Intermittent dobutamine treatment has been investigated for its effects on patients with dilated cardiomyopathy (DCM). The studies show mixed results regarding the benefits and potential risks associated with this treatment.
One study found that intermittent dobutamine use, in addition to conventional therapy, provided improvements in left ventricular ejection fraction (LVEF), cardiac output, cardiac index (CI), pulmonary wedge pressure, and quality of life, particularly after the first treatment. However, these beneficial effects tended to disappear after the third month of treatment, and harmful effects, such as an increase in ventricular premature beats and troponin-T positivity, became more evident (PUBMED:12223320).
Another study reported that patients with end-stage congestive heart failure (CHF) due to idiopathic dilated cardiomyopathy (IDC) derived a greater survival benefit from intermittent dobutamine infusions (IDI) and oral amiodarone than patients with ischemic heart disease (IHD). The probability of 2-year survival was significantly higher in the IDC group compared to the IHD group (PUBMED:16023232).
A further study showed that intermittent dobutamine infusions improved the heart failure stage in all patients included in the study, although the ejection fraction did not improve. The mortality rate was 5% at one year, 24% at two years, and 45% at four years (PUBMED:8122866).
However, there are also reports of potential risks associated with intermittent dobutamine treatment. One case described a patient with dilated cardiomyopathy and congestive heart failure who developed QT prolongation and torsade de pointes ventricular tachycardia during a cycle of intermittent low dose dobutamine, suggesting that unpredictable fatal arrhythmias may occur even with low doses and in patients with no history of significant rhythm disturbances (PUBMED:10087562).
In summary, intermittent dobutamine treatment can provide short-term benefits in cardiac parameters and quality of life for patients with DCM, and it may offer a survival benefit, particularly in patients with IDC. However, the long-term benefits are less clear, and there is a potential risk of harmful effects, including serious arrhythmias. Therefore, the use of intermittent dobutamine treatment in DCM patients should be carefully considered and monitored. |
Instruction: Do inflammatory factors play a significant role in etiopathogenesis of endometrial cysts?
Abstracts:
abstract_id: PUBMED:24364468
Do inflammatory factors play a significant role in etiopathogenesis of endometrial cysts? Part 1. Unlabelled: Endometriosis is an estrogen-related chronic condition which consists of the implantation and growth of endometrial cells outside the uterine cavity. It has an immune and inflammatory background, and to date the precise etiopathogenesis of endometrial cysts has not been unequivocally defined. The objective of the study was to evaluate indicators of the inflammatory state, including RANTES and the levels of C-reactive protein, leukocytes, fibrinogen and iron, in the blood serum of patients with endometrial cysts (n=48) and benign ovarian tumours of mature teratoma type (n=38). Statistical analysis was performed using the Mann-Whitney Rank Sum Test. Values of p<0.05 were considered statistically significant.
Results: Comparing the two groups (endometrial cysts vs. mature teratomas), the mean levels in blood serum were as follows: RANTES 31,429.79 pg/ml (range 26,576.6-99,605.00) vs. 26,988.72 pg/ml (range 26,013.58-113,435.00), p=0.428; CRP 2.13 vs. 1.54 mg/l, p=0.076; WBC 5.35 vs. 6.7, p=0.029; fibrinogen 3.12 vs. 2.57 mg%, p<0.001; iron 87.20 vs. 78.01 ug/dl, p=0.430; and CA-125 36.50 vs. 15.08 U/ml, p<0.001.
Conclusions: Statistically significant differences were observed in the levels of WBC, fibrinogen and CA-125 in blood serum. The role of the inflammatory factor in the etiopathogenesis of endometrial cysts therefore remains unexplained, and the presented study may serve as a pioneering investigation into the etiology of endometriosis.
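The group comparisons in the abstract above rely on the Mann-Whitney rank sum test. As a brief methodological aside, the sketch below shows how such a comparison is run in Python on synthetic data; the group sizes and fibrinogen means echo the abstract, but the simulated values and their spread are assumptions for illustration, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Synthetic serum fibrinogen values (mg%); n=48 cysts and n=38 teratomas
# mirror the abstract, but the distributions are made up for illustration.
cysts = rng.normal(3.12, 0.6, 48)
teratomas = rng.normal(2.57, 0.6, 38)

# Two-sided Mann-Whitney rank sum test, as used in the abstract
stat, p = mannwhitneyu(cysts, teratomas, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f} -> significant at p < 0.05: {p < 0.05}")
```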
abstract_id: PUBMED:35450000
Sonoelastography evaluation in the diagnosis of endometrial pathology combined with chronic endometritis in infertile women. Endometrial pathology, including hyperplastic processes, occupies one of the leading places in the structure of reproductive disorders, along with inflammatory diseases of the pelvic organs, contributing to infertility in 80% of cases and to irregular menstrual cycles in 40-43%. This study aims to optimize the diagnostic algorithm in patients with endometrial hyperplasia combined with chronic endometritis and to determine qualitative indicators of compression sonoelastography in patients with endometrial pathology and infertility. A comprehensive clinical and laboratory examination of 90 infertile patients aged 25 to 45 years with endometrial hyperplasia combined with chronic inflammation, retention cysts, and benign ovarian tumors was carried out. The results of clinical-laboratory and complex ultrasound examination with compression sonoelastography were compared with the data of pathomorphological and immunohistochemical studies. High rates of pelvic inflammatory disease (55.0%), benign lesions of the cervix (67.5%) and hyperplastic processes of the myometrium (37.5%) were established, with a 2.9-fold increase in polyps and a 2.3-fold increase in leiomyomas and adenomyosis (p<0.05). When endometrial hyperplasia was combined with ovarian cysts, a high rate of comorbid gynecological pathology was verified (37.8%), and the use of compression sonoelastography allowed class II and class III elastograms, which characterize benign endometrial lesions, to be established in 91.1% of cases, reduced the number of false-positive results in 95.6% of cases, enabled correct interpretation of the nature of pathological changes, and increased the sensitivity of ultrasound techniques.
abstract_id: PUBMED:38256043
Proteomic Profiling Identifies Candidate Diagnostic Biomarkers of Hydrosalpinx in Endometrial Fluid: A Pilot Study. Hydrosalpinx is a fluid occlusion and distension of the fallopian tubes, often resulting from pelvic inflammatory disease, which reduces the success of artificial reproductive technologies (ARTs) by 50%. Tubal factors account for approximately 25% of infertility cases, but their underlying molecular mechanisms and functional impact on other reproductive tissues remain poorly understood. This proteomic profiling study applied sequential window acquisition of all theoretical fragment ion spectra mass spectrometry (SWATH-MS) to study hydrosalpinx cyst fluid and pre- and post-salpingectomy endometrial fluid. Among the 967 proteins identified, we found 19 and 17 candidate biomarkers for hydrosalpinx in pre- and post-salpingectomy endometrial fluid, respectively. Salpingectomy significantly affected 76 endometrial proteins, providing insights into the enhanced immune response and inflammation present prior to intervention, and enhanced coagulation cascades and wound healing processes occurring one month after intervention. These findings confirmed that salpingectomy reverses the hydrosalpinx-related functional impairments in the endometrium and set a foundation for further biomarker validation and the development of less-invasive diagnostic strategies for hydrosalpinx.
abstract_id: PUBMED:11354710
Cystic endometrial hyperplasia-pyometra complex in the bitch: should the two entities be disconnected? The uteri of 26 clinically healthy bitches and 42 bitches with a clinical suspicion of pyometra were examined histologically using a computerized image analysis system. Histologic lesions were characterised mainly by thickening or atrophy of the endometrium and by varying degrees of cystic changes of the glands. These lesions were observed in most of the clinically healthy bitches as well as in all of the clinically ill animals. In most of the ill bitches a variable degree of inflammation was also found. Some bitches with clinical signs indicative of pyometra had no inflammatory reaction in the uterus. These bitches were misdiagnosed as suffering from pyometra, confirming the difficulty of diagnosing pyometra by simple clinical examination. Determination of sex hormone serum levels revealed that all dogs in both groups were either in metestrus or in anestrus. Based on the results of this study the cystic endometrial hyperplasia-pyometra complex can be divided into two entities: a cystic endometrial hyperplasia-mucometra complex and an endometritis-pyometra complex. Both entities bear many similarities to each other, except for the inflammatory reaction in the endometritis-pyometra complex. It is concluded from this study that the latter complex probably does not necessarily follow the former, but that both can arise de novo.
abstract_id: PUBMED:33218351
Differential expression of Oct-4, CD44, and E-cadherin in eutopic and ectopic endometrium in ovarian endometriomas and their correlations with clinicopathological variables. Background: Endometriosis is an estrogen-dependent inflammatory disease that often causes infertility and chronic pelvic pain. Although endometriosis is known as a benign disease, it has demonstrated characteristics of malignant neoplasms, including neoangiogenesis, tissue invasion, and cell implantation to distant organs. Octamer-binding protein 4 (Oct-4) is a molecular marker for stem cells that plays an essential role in maintaining pluripotency and self-renewal processes in various types of benign and malignant tissues. CD44 is a multifunctional cell surface adhesion molecule that acts as an integral cell membrane protein and plays a role in cell-cell and cell-matrix interactions. E-cadherin is an epithelial cell-cell adhesion molecule that plays an important role in the modulation of cell polarization, cell migration, and cancer metastasis. The aim of this study was to investigate the expression patterns of Oct-4, CD44, and E-cadherin in eutopic and ectopic endometrial tissues from women with endometrioma compared to control endometrial tissues from women without endometrioma.
Methods: In the present study, Oct-4, CD44, and E-cadherin expressions were evaluated in eutopic and ectopic endometrial tissue samples from women with endometrioma (n = 32) and compared with those of control endometrial tissue samples from women without endometrioma (n = 30).
Results: Immunohistochemical expression of Oct-4 was significantly higher in the ectopic endometrial tissue samples of women with endometrioma than in the control endometrial tissue samples (p = 0.0002). Conversely, CD44 and E-cadherin expressions were significantly lower in the ectopic endometrial tissue samples of women with endometrioma than in the control endometrial tissue samples (p = 0.0137 and p = 0.0060, respectively). Correlation analysis demonstrated significant correlations between Oct-4 expression and endometrioma cyst diameter (p = 0.0162), rASRM stage (p = 0.0343), and total rASRM score (p = 0.0223). Moreover, CD44 expression was negatively correlated with the presence of peritoneal endometriotic lesions (p = 0.0304) while E-cadherin expression was negatively correlated with the presence of deep infiltrating endometriosis (p = 0.0445).
Conclusions: Increased expression of Oct-4 and decreased expression of adhesion molecules in endometriotic tissues may contribute to the development and progression of endometriosis.
abstract_id: PUBMED:28075410
Is Toxoplasma gondii a Trigger of Bipolar Disorder? Toxoplasma gondii, a ubiquitous intracellular parasite, has a strong tropism for the brain tissue, where it forms intracellular cysts within the neurons and glial cells, establishing a chronic infection. Although latent toxoplasmosis is generally assumed to be asymptomatic in immunocompetent individuals, it is now clear that it can induce behavioral manipulations in mice and infected humans. Moreover, a strong relation has emerged in recent years between toxoplasmosis and psychiatric disorders. The link between T. gondii and schizophrenia has been the most widely documented; however, a significant association with bipolar disorder (BD) and suicidal/aggressive behaviors has also been detected. T. gondii may play a role in the etiopathogenesis of psychiatric disorders by affecting neurotransmitters, especially dopamine, which are implicated in the emergence of psychosis and of Toxoplasma-induced behavioral abnormalities, and by inducing brain inflammation through direct stimulation of inflammatory cytokines in the central nervous system. Besides this, there is increasing evidence for a prominent role of immune dysregulation in psychosis and BD. The aim of this review is to describe recent evidence suggesting a link between Toxoplasma gondii and BD, focusing on the interaction between immune responses and this infectious agent in the etiopathogenesis of psychiatric symptoms.
abstract_id: PUBMED:35143137
Naringenin and morin reduces insulin resistance and endometrial hyperplasia in the rat model of polycystic ovarian syndrome through enhancement of inflammation and autophagic apoptosis. Polycystic Ovary Syndrome (PCOS) is a gynecologic disorder with unsatisfactory treatment options. Hyperandrogenism and insulin resistance (IR) are two symptoms of PCOS. The majority of PCOS patients (approximately 50% to 70%) have IR and moderate diffuse inflammation of varying degrees. We investigated in-vitro and in-vivo effects of naringenin, morin and their combination on PCOS-induced endometrial hyperplasia by interfering with the mTORC1 and mTORC2 signaling pathways. The vaginal smear test ensured regular oestrous cycles in female rats. Serum cytokines (TNF-α and IL-6) were assessed using the ELISA test, followed by in-vivo and in-vitro determination of prominent gene expressions (mTORC1 and mTORC2, p62, LC3-II, and Caspase-3) involved in the inflammatory signaling mechanisms through RT-PCR, western blotting, or immunohistochemical analysis. In addition, the viability of naringenin- or morin-treated cells was determined using flow cytometry analysis. The abnormal oestrous cycle and vaginal keratosis indicated that PCOS was induced successfully. The recovery rate of the oestrous cycle with treatments was increased significantly (P<0.01) when compared to the PCOS model. Naringenin, morin, or a combination of the two drugs substantially decreased serum insulin, TNF-α and IL-6 levels with improved total anti-oxidant capacity and SOD levels (P<0.01). Treatments showed suppression of HEC-1-A cell proliferation with increased apoptosis (P<0.01) by the upregulation of Caspase-3 expression, followed by downregulation of mTORC1, mTORC2, and p62 (P<0.01) expressions with improved LC3-II expressions (P<0.05), respectively. The histological findings showed a substantial increase in the thickness of granulosa layers with improved corpora lutea and a decline in the number of cysts. Our findings indicated an improved inflammatory and oxidative microenvironment of ovarian tissues in PCOS-treated rats involving the autophagic and apoptotic mechanisms, demonstrating synergistic in-vitro and in-vivo therapeutic effects of the treatments on PCOS-induced endometrial hyperplasia.
abstract_id: PUBMED:20132369
Diagnosis and management of endometriosis: the role of the advanced practice nurse in primary care. Purpose: To discuss the etiology, clinical presentation, diagnosis, and management of endometriosis for the advanced practice nurse (APN) in primary care.
Data Sources: Selected research, clinical studies, clinical practice guidelines, and review articles.
Conclusions: Commonly encountered by the APN in primary care, endometriosis is a chronic, progressive inflammatory disease characterized by endometrial lesions, cysts, fibrosis, or adhesions in the pelvic cavity, causing chronic pelvic pain and infertility in women of reproductive age. Because of its frequently normal physical examination findings, variable clinical presentations, and nonspecific, overlapping symptoms with other conditions, endometriosis can be difficult to diagnose. As there currently are no accurate noninvasive diagnostic tests specific for endometriosis, it is imperative for the APN to become knowledgeable about the etiology, clinical presentation, diagnosis, and current treatment options of this disease.
Implications For Practice: The APN in primary care plays an essential role in health promotion through disease management and infertility prevention by providing support and much needed information to the patient with endometriosis. APNs can also facilitate quality of care and manage treatments effectively to improve quality of life, reduce pain, and prevent further progression of disease. Practice recommendations include timely diagnosis, pain management, infertility counseling, patient education, and support for quality of life issues.
abstract_id: PUBMED:23482677
Oral pulse granuloma associated with keratocystic odontogenic tumor: Report of a case and review on etiopathogenesis. Pulse granuloma is a distinct oral entity characterized as a foreign body reaction occurring either centrally or peripherally. It is usually seen in the periapical region or in the sulcus area. Occasionally the lesions occur in the wall of a cyst, the commonest being the inflammatory odontogenic cyst. Histologically, they present as an eosinophilic hyaline mass with giant cell inclusions and inflammatory cells. They may show different histological characteristics, possibly related to the length of time in the tissue. Adequate recognition is important to avoid misdiagnosis. Many authors suggest that pulse granuloma results from implantation of food particles of plant or vegetable origin into the tissue following tooth extraction. This paper aims to report a case of pulse granuloma associated with keratocystic odontogenic tumor, with its histochemical and polarizing microscopic features, and to discuss the etiopathogenesis of pulse granuloma.
abstract_id: PUBMED:18684450
Interleukin-10 attenuates TNF-alpha-induced interleukin-6 production in endometriotic stromal cells. Objective: To determine whether high levels of interleukin (IL)-10 can attenuate the production of tumor necrosis factor (TNF)-alpha-induced proinflammatory cytokines in endometriotic stromal cells.
Design: Prospective study.
Setting: Department of Ob/Gyn, Tottori University, Japan.
Patient(s): Thirty-five patients with ovarian endometrioma and ten patients with uterine myoma.
Intervention(s): Endometriotic stromal cells were obtained from chocolate cyst linings of ovaries. Endometrial stromal cells were obtained from patients with uterine myoma.
Main Outcome Measure(s): Expression of the IL-10 gene in endometriotic or endometrial stromal cells was determined by real-time reverse-transcriptase polymerase chain reaction (RT-PCR). We performed immunohistochemical staining to detect the presence of IL-10 and IL-10 receptors 1 and 2. We examined the effects of TNF-alpha and IL-10 on the expression of IL-6 or IL-8 by real-time RT-PCR and ELISA. We examined the activation of intracellular signal transduction molecules in endometriotic stromal cells by Western blotting.
Result(s): Addition of IL-10 suppressed the expression of IL-6 induced by TNF-alpha, and IL-10 induced the phosphorylation of STAT3 in endometriotic stromal cells. TNF-alpha induced the expression of phosphorylated ERK1/2, JNK1/2, and I kappaB. Adding IL-10 suppressed the phosphorylation of these signal molecules.
Conclusion(s): Interleukin-10 attenuates TNF-alpha-induced IL-6 synthesis via NF-kappaB and MAPK pathways in endometriotic cells. Interleukin-10 may play a significant role in the pathogenesis of endometriosis.
Answer: Yes, inflammatory factors do play a significant role in the etiopathogenesis of endometrial cysts. The study by PUBMED:24364468 found statistically significant differences in the levels of white blood cells (WBC), fibrinogen, and CA-125 in the blood serum of patients with endometrial cysts, suggesting an inflammatory component in their etiopathogenesis. Although the role of the inflammatory factor in the etiopathogenesis of endometrial cysts is not fully explained, the study indicates that inflammation may be involved.
Additionally, PUBMED:18684450 reports that interleukin-10 (IL-10) can attenuate the production of tumor necrosis factor (TNF)-alpha-induced proinflammatory cytokines in endometriotic stromal cells. This suggests that the inflammatory response, mediated by cytokines such as TNF-alpha and IL-6, is significant in the pathogenesis of endometriosis, which is closely related to the formation of endometrial cysts.
Furthermore, PUBMED:33218351 discusses the differential expression of molecular markers such as Oct-4, CD44, and E-cadherin in eutopic and ectopic endometrium in ovarian endometriomas, which are a type of endometrial cyst. The study found increased expression of Oct-4 and decreased expression of adhesion molecules in endometriotic tissues, which may contribute to the development and progression of endometriosis, an inflammatory condition.
These findings collectively support the notion that inflammatory factors are indeed significant in the etiopathogenesis of endometrial cysts, contributing to the inflammatory environment and potentially influencing the development and progression of endometriotic lesions that can form cysts. |
Instruction: Does a constriction ring alter ejaculation latency?
Abstracts:
abstract_id: PUBMED:17535277
Does a constriction ring alter ejaculation latency? Objective: To assess the efficacy of a 'constriction ring' as an option for treating premature ejaculation (PE).
Patients And Methods: Between September 2003 and October 2006, 42 men with an intravaginal ejaculation latency time (IELT) of <1 min were evaluated. Over a 4-week period, a constriction ring was used during intercourse.
Results: The median (range) IELT was 42 (33-54) s before treatment and 46 (31-55) s after 4 weeks of using the ring; there was no statistically significant difference in the IELT before and after treatment (P = 0.1), and no major complications.
Conclusion: The 'constriction ring' is not an effective treatment for PE.
abstract_id: PUBMED:37132032
Identifying an optimal ejaculation latency for the diagnosis of men reporting orgasmic/ejaculation difficulty. Background: Criteria for the definition and diagnosis of delayed ejaculation (DE) are yet under consideration.
Aim: This study sought to determine an optimal ejaculation latency (EL) threshold for the diagnosis of men with DE by exploring the relationship between various ELs and independent characterizations of delayed ejaculation.
Methods: In a multinational survey, 1660 men, with and without concomitant erectile dysfunction (ED) and meeting inclusion criteria, provided information on their estimated EL, measures of DE symptomology, and other covariates known to be associated with DE.
Outcomes: We determined an optimal diagnostic EL threshold for men with DE.
Results: The strongest relationship between EL and orgasmic difficulty occurred when the latter was defined by a combination of items related to difficulty reaching orgasm and percent of successful episodes in reaching orgasm during partnered sex. An EL of ≥16 minutes provided the greatest balance between measures of sensitivity and specificity; a latency ≥11 minutes was the best threshold for tagging the highest number/percentage of men with the severest level of orgasmic difficulty, but this threshold also demonstrated lower specificity. These patterns persisted even when explanatory covariates known to affect orgasmic function/dysfunction were included in a multivariate model. Differences between samples of men with and without concomitant ED were negligible.
Clinical Implications: In addition to assessing a man's difficulty reaching orgasm/ejaculation during partnered sex and the percent of episodes reaching orgasm, an algorithm for the diagnosis of DE should consider an EL threshold in order to control diagnostic errors.
Strengths And Limitations: This study is the first to specify an empirically supported procedure for diagnosing DE. Cautions include the use of social media for participant recruitment, relying on estimated rather than clocked EL, not testing for differences between DE men with lifelong vs acquired etiologies, and the lower specificity associated with using the 11-minute criterion that could increase the probability of including false positives.
Conclusion: In diagnosing men with DE, after establishing a man's difficulty reaching orgasm/ejaculation during partnered sex, using an EL of 10 to 11 minutes will help control type 2 (false negative) diagnostic errors when used in conjunction with other diagnostic criteria. Whether or not the man has concomitant ED does not appear to affect the utility of this procedure.
abstract_id: PUBMED:37615496
Perceived intravaginal ejaculation latency time: The diagnosis of premature ejaculation among Vietnamese men. Introduction: Premature ejaculation (PE) is a prevalent sexual dysfunction in men that greatly affects their quality of life. In PE, the duration of sexual performance is considered an important aspect. However, a self-estimated value of intravaginal ejaculation latency time (perceived IELT, PIELT) as a criterion for diagnosis has not been specified.
Aim: This study aimed to determine the validity and a threshold value for PIELT in PE diagnosis.
Method: In our cross-sectional study, we recruited 550 men from March 2019 to January 2020 and interviewed them regarding their general demographic characteristics, sexual habits and PIELT, and had them complete a premature ejaculation diagnostic tool (PEDT) questionnaire. Eventually, a combination of clinical diagnosis and PEDT score was used, in which those with PEDT ≥ 11 and diagnosed with possible PE were assigned to the final PE(+) group; those with a PEDT score ≤ 8 and diagnosed with no PE were included in the final PE(-) group.
Results: Men PE(-) had more frequent sexual intercourse (9.74 ± 5.38 vs. 6.69 ± 5.38 episodes per month, p < 0.001) and a higher marriage rate (72.7% vs. 60.4%, p = 0.002) than PE(+) patients. No significant difference was noted regarding age, smoking habit, age of first sexual experience, and number of sexual partners between the two groups. The mean PIELT of control subjects and PE(+) patients was 11.69 ± 6.83 min and 2.01 ± 1.21 min, respectively. On receiver operating characteristic curve analysis, a PIELT cut-off value of 3.75 min can be used to distinguish men with PE (area under the curve = 0.982, sensitivity/specificity = 0.961/0.909), which means that a PIELT ≤ 3.5 min is suggestive of PE.
Conclusion: The impact of PE is dramatic from both a social and a personal perspective. PE(+) patients married significantly less often and had significantly lower sexual activity than a PE(-) population. Furthermore, a PIELT of ≤ 3.5 min predicts PE, demonstrating the need to revise its taxonomy and definition.
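The cut-off above comes from receiver operating characteristic (ROC) analysis. The sketch below shows one standard way such a threshold can be derived (Youden's J statistic) on synthetic latencies whose means and SDs loosely mirror the abstract; the sample sizes and generated values are assumptions for illustration, and the abstract does not state which cut-off criterion the authors actually used.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)

# Synthetic PIELT values in minutes; means/SDs loosely follow the abstract
# (controls 11.69 +/- 6.83, PE 2.01 +/- 1.21). Labels: 1 = PE.
pielt = np.concatenate([rng.normal(11.69, 6.83, 300).clip(0.5),
                        rng.normal(2.01, 1.21, 250).clip(0.5)])
is_pe = np.concatenate([np.zeros(300), np.ones(250)])

# Shorter latency indicates PE, so score by the negated latency
fpr, tpr, thresholds = roc_curve(is_pe, -pielt)
auc = roc_auc_score(is_pe, -pielt)

# Youden's J = sensitivity + specificity - 1 picks the balanced cut-off
best = np.argmax(tpr - fpr)
print(f"AUC = {auc:.3f}, cut-off = {-thresholds[best]:.2f} min, "
      f"sensitivity = {tpr[best]:.3f}, specificity = {1 - fpr[best]:.3f}")
```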
abstract_id: PUBMED:30456601
Mechanisms of contractile ring tension production and constriction. The contractile ring is a remarkable tension-generating cellular machine that constricts and divides cells into two during cytokinesis, the final stage of the cell cycle. Since the ring's discovery, the parallels with muscle have been emphasized. Both are contractile actomyosin machineries, and long ago, a muscle-like sliding filament mechanism was proposed for the ring. This review focuses on the mechanisms that generate ring tension and constrict contractile rings. The emphasis is on fission yeast, whose contractile ring is sufficiently well characterized that realistic mathematical models are feasible, and possible lessons from fission yeast that may apply to animal cells are discussed. Recent discoveries relevant to the organization in fission yeast rings suggest a stochastic steady-state version of the classic sliding filament mechanism for tension. The importance of different modes of anchoring for tension production and for organizational stability of constricting rings is discussed. Possible mechanisms are discussed that set the constriction rate and enable the contractile ring to meet the technical challenge of maintaining structural integrity and tension-generating capacity while continuously disassembling throughout constriction.
abstract_id: PUBMED:35340631
Constriction ring of the penis in a newborn infant: A rare form of amniotic band syndrome. The amniotic band comprises disrupted amnion strands causing entrapment or entanglement of various fetal parts resulting in a spectrum of anomalies from digital band constriction or amputation to severe craniofacial/visceral defects and even fetal demise. We present a newborn infant with a rare, isolated ring constriction of the penis.
abstract_id: PUBMED:24912663
Resiniferatoxin for treatment of lifelong premature ejaculation: a preliminary study. Objectives: To evaluate the efficacy of resiniferatoxin in the treatment of patients with lifelong premature ejaculation.
Methods: A total of 41 outpatients (mean age 26.14 ± 4 years) with premature ejaculation completed the present study. They were randomly separated into the resiniferatoxin group and the placebo group. The resiniferatoxin group included 11 patients with redundant prepuce and 10 patients without redundant prepuce, whereas the placebo group contained 10 patients with redundant prepuce and 10 patients without. For the treatment, the glans was soaked for 30 min before sexual intercourse in 30 mL of either resiniferatoxin at a concentration of 100 nmol/L or a 10% alcohol solution. Clinical efficacy was assessed by using the Chinese Index of Sexual Function for Premature Ejaculation-5 and the intravaginal ejaculation latency time before and 4 weeks after the treatment. The side-effects were also evaluated.
Results: In the resiniferatoxin group, the effective rate of patients with redundant prepuce was 63.6%, and both the intravaginal ejaculation latency time and Chinese Index of Sexual Function for Premature Ejaculation-5 significantly increased (P < 0.05). However, the effective rate of patients without redundant prepuce was 20%, and there were no significant changes of their intravaginal ejaculation latency time and Chinese Index of Sexual Function for Premature Ejaculation-5 before and after the resiniferatoxin treatment (P > 0.05). The total effective rate of patients treated with resiniferatoxin was 42.9%. In the placebo group, the effective rate of patients with or without redundant prepuce was 20% and 10%, respectively. The total effective rate of patients treated with placebo was 15%, and there were no significant changes of their intravaginal ejaculation latency time and Chinese Index of Sexual Function for Premature Ejaculation-5 before and after the placebo treatment (P > 0.05). The side-effects included a slight burning sensation for the glans penis and dysuria.
Conclusions: These preliminary results show that resiniferatoxin might be suitable for treating patients with lifelong premature ejaculation and particularly those with redundant prepuce.
abstract_id: PUBMED:33145131
Comparison of the Efficacy of Tramadol and Paroxetine in the Management of Premature Ejaculation. Objective: The goal of this study was to compare the efficacy of tramadol and paroxetine in the treatment of primary premature ejaculation (PE). Study design: This study was a randomized controlled trial performed in the outpatient department of Nishtar Hospital, Multan, from January 2017 to January 2018. Methodology: One hundred six patients were diagnosed with PE and included in the study. The patients were categorized into two groups receiving either tramadol or paroxetine through a lottery randomization method. The main variables were baseline PE, baseline satisfaction after intercourse, baseline intravaginal ejaculatory latency time (IELT), ejaculation control, difficulty in ejaculation, and after-treatment satisfaction with sexual intercourse and IELT. We used IBM SPSS Statistics for Windows, Version 23.0 (Armonk, NY: IBM Corp.) for data analysis, and p≤0.05 was considered statistically significant. Results: Ejaculation control, difficulty in ejaculation, and distress due to ejaculation in patients in the tramadol group were noted as 24.5%, 7.5%, and 7.5%, respectively. Ejaculation control, difficulty in ejaculation, and distress due to ejaculation in the paroxetine group were noted as 49.1%, 17%, and 24.5%, respectively. The differences were statistically significant within the groups at baseline and after treatment of PE (p<0.001). Conclusion: Tramadol is an effective and useful drug as compared to paroxetine for the treatment of PE. Tramadol can be used as an alternative to other medications for the treatment of lifelong PE.
abstract_id: PUBMED:32792761
A Comparative Study of the Efficacy of Levosulpiride versus Paroxetine in Premature Ejaculation. Background: Premature ejaculation (PME) can be defined as a lack of the normal voluntary control over ejaculation. It is the most common sexual dysfunction encountered in the male populace. In general, these patients present with distress. Hence, a novel treatment to eliminate their problem is required. Although the role of SSRIs has already been established, the high discontinuation rate and the other types of sexual dysfunction associated with SSRIs reduce their efficacy in controlling this condition. Levosulpiride is a new drug indicated in the treatment of PE.
Aims And Objectives: The objective was to study the efficacy of levosulpiride and paroxetine, and to compare the two, in patients with PE.
Methodology: The Index of Premature Ejaculation (IPE) and intravaginal ejaculation latency time (IELT) were used. A total of 36 patients (18 in each group) were included. The patients were assessed at baseline, at 4 weeks, and at 8 weeks.
Results: On comparison the score of IPE in domains of ejaculation control, sexual satisfaction, and the total score of IPE were statistically significant on all the three visits. However, the distress score of IPE and the IELT score were statistically not significant between the two groups.
Conclusion: Both agents are efficacious in patients with PME, but paroxetine is more efficacious than levosulpiride. At the same time, levosulpiride is a less studied and less used drug; hence, more research on it should be done.
abstract_id: PUBMED:30788869
Effect of premature ejaculation desensitisation therapy combined with dapoxetine hydrochloride on the treatment of primary premature ejaculation. To evaluate the overall treatment benefits of premature ejaculation desensitisation therapy combined with 30 mg dapoxetine hydrochloride treatment on patients with primary premature ejaculation (PPE). Ninety-nine PPE patients were randomly divided into two groups at the ratio of 2:1. Sixty-six PPE patients received premature ejaculation desensitisation therapy accomplished by Weili Automatic Semen Collection-Penis Erection Detection and Analysis workstation (WLJY-2008) combined with 30 mg dapoxetine hydrochloride treatment (DTCD group), and another 33 patients received 30 mg dapoxetine hydrochloride-only treatment (DO group). Intravaginal ejaculation latency time (IELT) and premature ejaculation profile (PEP) were recorded before and during the treatment, and clinical global impression of change (CGIC) in PPE was recorded at the fourth week and the end of the treatment and the items. In both groups were significantly improved (p < 0.0001) in IELT, PEP and CGIC for premature ejaculation compared with baseline, and DTCD treatment showed a more significant improvement on PPE patients in the items compared with DO treatment (p < 0.05). Thus, premature ejaculation desensitisation combined with dapoxetine therapy may be a better choice for improving premature ejaculation with PPE.
abstract_id: PUBMED:27784887
Ejaculatory training lengthens the ejaculation latency and facilitates the functioning of the spinal generator for ejaculation of rats with rapid ejaculation. A spinal pattern generator controls the ejaculatory response. Central pattern generators (CPGs) may be entrained to improve the motor patterns under their control. In the present study we tested the hypothesis that training of the spinal generator for ejaculation (SGE) by daily copulation until ejaculation could promote substantive changes in its functioning, permitting better SGE control of the genital motor pattern of ejaculation (GMPE) and, as a consequence, a normalization of the ejaculation latency of rats with rapid ejaculation. To that aim, we evaluated in sexually experienced male rats with rapid ejaculation (1) the effects of daily copulation to ejaculation, following different entrainment schedules, on their ejaculation latencies, (2) the impact of these different ejaculatory entrainment schedules upon the parameters of the GMPE and (3) the possible emergence of persistent changes in the functioning of the SGE associated with the daily ejaculation entrainment schedules. The data obtained show that intense ejaculatory training of rats with rapid ejaculation lengthens the ejaculation latency during copulation and augments the ejaculatory capacity of the SGE in this population when spinalized. Thus, the present data reveal that, like other CPGs, the SGE can be trained, and suggest that training of the SGE by daily copulation to ejaculation might be a promising alternative that should be taken into consideration for the treatment of premature ejaculation.
Answer: No, a constriction ring does not appear to significantly alter ejaculation latency. A study conducted between September 2003 and October 2006 involving 42 men with an intravaginal ejaculation latency time (IELT) of less than 1 minute found that after using a constriction ring during intercourse for a 4-week period, there was no statistically significant difference in the IELT before and after treatment. The median IELT was 42 seconds before treatment and 46 seconds after using the ring, indicating that the constriction ring is not an effective treatment for premature ejaculation (PE) (PUBMED:17535277). |
Instruction: Variant Creutzfeldt-Jakob disease in the United Kingdom: a countrywide or local risk?
Abstracts:
abstract_id: PUBMED:19692715
Variant Creutzfeldt-Jakob disease in the United Kingdom: a countrywide or local risk? Background: The aim of this study was to identify factors that may have augmented local risks for variant Creutzfeldt-Jakob disease (vCJD).
Methods: A descriptive study was conducted of local investigations of UK cases of vCJD, who had lived close together at some point since 1980. The main outcome measures were domestic, educational, occupational, healthcare associated, social and recreational links between cases; common dietary, iatrogenic and other possible routes of exposure to vCJD infection; and locally elevated vCJD risk.
Results: A cluster of five cases of vCJD in a rural area in North Leicestershire was investigated in 2000 (p=0.004). A further 12 investigations of geographically associated cases of vCJD have been undertaken in the UK. In nine of the 12 locations, some or all of the local cases had consumed beef purchased from the same local retail outlets or provided by a common supplier of school meals, or had some aspect of their medical-dental care in common. In only three of these locations were circumstances identified where the local risk of transmission might have been elevated. In none of the locations was there strong evidence to exclude chance as a likely explanation for the local occurrence of these vCJD cases.
Conclusion: Although it is possible that in some parts of the UK local factors may have increased the risk of acquiring vCJD, most cases that were geographically close to each other are most likely due to the same factors that gave rise to the large majority of other vCJD cases in the UK.
abstract_id: PUBMED:24689837
Risk assessment for transmission of variant Creutzfeldt-Jakob disease by transfusion of red blood cells in the United States. Background: Variant Creutzfeldt-Jakob disease (vCJD) is transmitted by blood transfusion. To mitigate the risk of transfusion-transmitted vCJD (TTvCJD), the US Food and Drug Administration has recommended deferral of potential at-risk blood donors, but some risk remains. We describe a quantitative risk assessment to estimate residual, postdeferral TTvCJD risk in the United States.
Study Design And Methods: We assumed that certain US donors may have acquired vCJD infection through dietary exposure to the agent of bovine spongiform encephalopathy during time spent in the United Kingdom, France, and other countries in Europe. Because of uncertainties regarding the prevalence of vCJD in the United Kingdom, we used both low and high UK prevalence estimates as model inputs. The model estimated the risk of infection from a transfusion in year 2011 and the cumulative risk from 1980 through 2011. The model was validated by comparing the model predictions with reported cases of vCJD.
Results: Using the low UK prevalence estimate, the model predicted a mean risk of 1 in 134 million transfusions, zero TTvCJD infections acquired in the year 2011, and zero cumulative clinical TTvCJD cases for the period spanning 1980 to 2011. With the high UK prevalence estimate, the model predicted a mean risk of 1 in 480,000 transfusions, six infections for 2011, and nine cumulative clinical cases from 1980 to 2011.
Conclusions: Model validation exercises indicated that predictions based on the low prevalence estimate are more consistent with clinical cases actually observed to date, implying that the risk, while highly uncertain, is likely very small.
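The abstract reports only the model's outputs, not its internals. As a rough illustration of how a per-transfusion risk of this kind can be simulated, the following Python sketch runs a toy Monte Carlo under two assumed donor-prevalence scenarios; the prevalence and per-unit infectivity values are hypothetical placeholders, not the FDA model's inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000  # simulated transfusions

# Hypothetical inputs (illustrative only; not the published model's values).
donor_prevalence = {"low_UK_estimate": 1 / 20_000_000,
                    "high_UK_estimate": 1 / 4_000_000}
p_infectious_unit = 0.5  # assumed chance an infected donor's unit transmits

for label, prev in donor_prevalence.items():
    p_risk = prev * p_infectious_unit      # per-transfusion infection risk
    infections = rng.binomial(N, p_risk)   # realized count in N transfusions
    print(f"{label}: risk of 1 in {1 / p_risk:,.0f}; "
          f"{infections} infections in {N:,} simulated transfusions")
```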
abstract_id: PUBMED:10386559
Risk of transmission of bovine spongiform encephalopathy to humans in the United States: report of the Council on Scientific Affairs. American Medical Association. Context: The risk of possible transmission of bovine spongiform encephalopathy (BSE) in the United States is a substantial public health concern.
Objective: To systematically review the current scientific literature and discuss legislation and regulations that have been implemented to prevent the disease.
Methods: Literature review using the MEDLINE, EMBASE, and Lexis/Nexis databases for 1975 through 1997 on the terms bovine spongiform encephalopathy, prion diseases, prions, and Creutzfeldt-Jakob syndrome. The Internet was used to identify regulatory actions and health surveillance.
Data Extraction: MEDLINE, EMBASE, and Lexis/Nexis databases were searched from 1975 through 1997 for English-language articles that provided information on assessment of transmission risk.
Results: Unique circumstances in the United Kingdom caused the emergence and propagation of BSE in cattle, including widespread use of meat and bonemeal cattle feed derived from scrapie-infected sheep and the adoption of a new type of processing that did not reduce the amount of infectious prions prior to feeding. Many of these circumstances do not exist in the United States. In the United Kingdom, new variant Creutzfeldt-Jakob disease probably resulted from the ingestion of BSE-contaminated processed beef. The United Kingdom and the European Union now have strong regulations in place to stop the spread of BSE. While BSE has not been observed in the United States, the US government has surveillance and response plans in effect.
Conclusions: Current risk of transmission of BSE in the United States is minimal because (1) BSE has not been shown to exist in this country; (2) adequate regulations exist to prevent entry of foreign sources of BSE into the United States; (3) adequate regulations exist to prevent undetected cases of BSE from uncontrolled amplification within the US cattle population; and (4) adequate preventive guidelines exist to prevent high-risk bovine materials from contaminating products intended for human consumption.
abstract_id: PUBMED:8313487
Bovine spongiform encephalopathy in the United Kingdom: memorandum from a WHO meeting. This Memorandum reviews the current state of research being carried out on transmissible spongiform encephalopathies (TSE) and examines the results of epidemiological studies conducted on bovine spongiform encephalopathy (BSE) and Creutzfeldt-Jakob disease (CJD) in the United Kingdom. It is concluded that the BSE epidemic is on the decline and the policies adopted in the United Kingdom are sufficient to minimize the risk of exposure to BSE of all species, including humans.
abstract_id: PUBMED:19170997
From mad cows to sensible blood transfusion: the risk of prion transmission by labile blood components in the United Kingdom and in France. Transfusion transmission of the prion, the agent of variant Creutzfeldt-Jakob disease (vCJD), is now established. Subjects infected through food may transmit the disease through blood donations. The two nations most affected to date by this threat are the United Kingdom (UK) and France. The first transfusion cases have been observed in the UK over the past 5 years. In France, a few individuals who developed vCJD had a history of blood donation, leading to a risk of transmission to recipients, some of whom could be incubating the disease. In the absence of a large-scale screening test, it is impossible to establish the prevalence of infection in the blood donor population and transfused patients. This lack of a test also prevents specific screening of blood donations. Thus, prevention of transfusion transmission essentially relies at present on deferral of "at-risk" individuals. Because prions are present in both white blood cells and plasma, leukoreduction is probably insufficient to totally eliminate the transfusion risk. In the absence of a screening test for blood donations, recently developed prion-specific filters could be a solution. Furthermore, while the dietary spread of vCJD seems efficiently controlled, uncertainty remains as to the extent of the spread of prions through blood transfusion and other secondary routes.
abstract_id: PUBMED:19334063
Variant Creutzfeldt-Jakob disease in France and the United Kingdom: Evidence for the same agent strain. Objective: Variant Creutzfeldt-Jakob disease (vCJD) was first reported in the United Kingdom in 1996. Since then, the majority of cases have been observed in the United Kingdom where there was a major epidemic of bovine spongiform encephalopathy. France was the second country affected. To address the hypothesis of the involvement of a common strain of agent, we have compared clinical, neuropathological, and biochemical data on vCJD patients from both countries.
Methods: In France and the United Kingdom, epidemiological and clinical data were obtained from analysis of medical records and direct interview of the family of the patients using the same standardized questionnaire in both countries. When brain material was available, we performed with similar methods a comparative study of brain lesions and PrP(res) glycoform ratios in both vCJD populations.
Results: Clinical data, genetic background, neuropathological finding, and biochemical findings in the 185 patients observed in France (n = 23) and the United Kingdom (n = 162) were similar except for age at death. Currently, blood transfusion is a risk factor identified only in the United Kingdom.
Interpretation: The close similarity between the cases of vCJD in France and the United Kingdom supports the hypothesis that a common strain of infectious agent is involved in both countries. The 5-year delay in the peak that we observed in France compared with the United Kingdom fits well with the increase in the importation of beef products to France from the United Kingdom between 1985 and 1995.
abstract_id: PUBMED:32134162
Assessment of risk of variant Creutzfeldt-Jakob disease (vCJD) from use of bovine heparin. Purpose: In the late 1990s, reacting to the outbreak of bovine spongiform encephalopathy (BSE) in the United Kingdom that caused a new variant of Creutzfeldt-Jakob disease (vCJD) in humans, manufacturers withdrew bovine heparin from the market in the United States. There have been growing concerns about the adequate supply and safety of porcine heparin. Since the BSE epidemic has been declining markedly, the US Food and Drug Administration is re-evaluating the vCJD risk from use of bovine heparin.
Methods: We developed a computational model to estimate the vCJD risk to patients receiving bovine heparin injections. The model incorporated information including BSE prevalence, infectivity levels in the intestines, manufacturing batch size, yield of heparin, reduction in infectivity by manufacturing process, and the dose-response relationship.
Results: The model estimates a median risk of vCJD infection from a single intravenous dose (10 000 USP units) of heparin made from US-sourced bovine intestines to be 6.9 × 10⁻⁹ (2.5th–97.5th percentile: 1.5 × 10⁻⁹ to 4.3 × 10⁻⁸), a risk of 1 in 145 million, and 4.6 × 10⁻⁸ (2.5th–97.5th percentile: 1.1 × 10⁻⁸ to 2.6 × 10⁻⁷), a risk of 1 in 22 million, for Canada-sourced products. The model estimates a median risk of 1.4 × 10⁻⁷ (2.5th–97.5th percentile: 2.9 × 10⁻⁸ to 9.3 × 10⁻⁷) and 9.6 × 10⁻⁷ (2.5th–97.5th percentile: 2.1 × 10⁻⁷ to 5.6 × 10⁻⁶) for a typical treatment for venous thromboembolism (infusion of 2–4 doses daily per week) using US-sourced and Canada-sourced bovine heparin, respectively.
Conclusions: The model estimates the vCJD risk from use of heparin when appropriately manufactured from US or Canadian cattle is likely small. The model and conclusions should not be applied to other medicinal products manufactured using bovine-derived materials.
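The published model combines prevalence, infectivity, manufacturing clearance and a dose-response step multiplicatively. The sketch below mimics that overall structure with lognormal uncertainty on each factor and reports the same percentiles quoted in the abstract; every distribution parameter here is an illustrative assumption, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # Monte Carlo draws

# Hypothetical log10-scale distributions for each factor (illustrative only).
log_prev      = rng.normal(-6.5, 0.3, n)   # BSE prevalence in source cattle
log_infect    = rng.normal(2.0, 0.4, n)    # ID50 per infected intestine
log_reduction = rng.normal(-4.0, 0.5, n)   # clearance by manufacturing process
dilution      = -3.0                       # log10 fraction of a batch per dose

log_dose_id50 = log_prev + log_infect + log_reduction + dilution
# Simple dose-response: 50% infection probability at 1 ID50.
risk = 1 - 0.5 ** (10 ** log_dose_id50)

for q in (50, 2.5, 97.5):
    print(f"{q}th percentile risk per dose: {np.percentile(risk, q):.2e}")
```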
abstract_id: PUBMED:35609012
Risk of variant Creutzfeldt-Jakob disease transmission by blood transfusion in Australia. Background And Objectives: Most of the 233 worldwide cases of variant Creutzfeldt-Jakob disease (vCJD) have been reported in the United Kingdom and 3 have been associated with transfusion-transmission. To mitigate the potential vCJD risk to blood safety, Australian Red Cross Lifeblood imposes restrictions on blood donation from people with prior residency in, or extended travel to, the United Kingdom during the risk period 1980-1996. We have modified a previously published methodology to estimate the transfusion-transmission risk of vCJD associated with fresh component transfusion in Australia if the UK residence deferral was removed.
Materials And Methods: The prevalence of current pre-symptomatic vCJD infection in the United Kingdom by age at infection and genotype was estimated based on risk of exposure to the bovine spongiform encephalopathy agent for the period 1980-1996. These results were used to estimate the age-specific prevalence of undiagnosed, pre-symptomatic vCJD in the Australian population in the current year due to prior UK residency or travel. The primary model outputs were the 2020 vCJD risks/unit of vCJD contamination, transfusion-transmission (infections) and clinical cases.
Results: The overall (prior UK residency in and travel to United Kingdom, 1980-1996) mean risk of contamination per unit was 1 in 29,900,000. The risks of resulting vCJD transmission (infection) and clinical case were 1 in 389,000,000 and 1 in 1,450,000,000, respectively.
Conclusion: Our modelling suggests that removing the Lifeblood donation deferral for travel to, or UK residence, would result in virtually no increased risk of vCJD transfusion-transmission and would be a safe and effective strategy for increasing the donor base.
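Taking the three per-unit rates quoted in the abstract at face value, one can back out the conditional probabilities implied along the contamination, infection and clinical-case chain; this chain reading is our inference from the reported numbers, not a stated feature of the model.

```python
# The three per-unit rates reported in the abstract (PUBMED:35609012).
p_contamination = 1 / 29_900_000
p_infection     = 1 / 389_000_000
p_clinical      = 1 / 1_450_000_000

# Implied conditional probabilities along the chain.
print(f"P(infection | contaminated unit) = {p_infection / p_contamination:.3f}")  # ~0.077
print(f"P(clinical case | infection)     = {p_clinical / p_infection:.3f}")       # ~0.268
```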
abstract_id: PUBMED:17188543
vCJD and blood transfusion: risk assessment in the United Kingdom. The risk of vCJD transmission via blood transfusion depends on potential levels of infectivity, recipients' exposure to infected donors and individual susceptibility. On infectivity, SEAC (the UK's main scientific advisory committee on TSEs) has published an updated position statement. Based on animal models, this suggests that infectivity is split roughly equally between leucocytes and plasma, with negligible levels directly associated with red cells or platelets. Risk assessments are now therefore based on the amounts of plasma and leucocytes within each component as transfused. Recipients' exposure to infection depends critically on the prevalence of infection in the population. This remains unknown, so a range of assumptions must still be considered. A further consideration is the likelihood of any infected donors' blood being infective. Those infected in the primary outbreak will now have been incubating vCJD for 10-25 years. Current thinking is that blood may be more infective later in the incubation period. This reinforces the case for a precautionary approach to transmission risks, despite the small number of incidents seen so far. Exposure will also depend on how many donors contributed components to a given individual. Recent work has shown that more patients receive large numbers of units than previously thought. These highly-transfused patients are a particular cause for concern. The current precautionary assumption is of all recipients being susceptible to infection by transfusion, though incubation periods may differ markedly.
abstract_id: PUBMED:14504321
Creutzfeldt-Jakob disease in United Kingdom patients treated with human pituitary growth hormone. Objective: To investigate risk factors for Creutzfeldt-Jakob disease (CJD) in patients in the United Kingdom treated with human pituitary growth hormone (hGH).
Methods: Incidence rates of CJD, based on person-year denominators, were assessed in a cohort of 1,848 patients treated with hGH in the United Kingdom from 1959 through 1985 and followed to the end of 2000.
Results: CJD developed in 38 patients. Risk of CJD was significantly increased by treatment with hGH prepared by the Wilhelmi method of extraction from human pituitaries. Risk was further raised if this treatment was administered at ages 8 to 10 years. The peak risk of CJD was estimated to occur 20 years after first exposure, and the estimated lifetime cumulative risk of CJD in Wilhelmi-treated patients was 4.5%.
Conclusions: Size-exclusion chromatography, used in non-Wilhelmi preparation methods, may prevent CJD infection. Susceptibility to CJD may vary with age, and susceptibility may be present in only a few percent of the population.
Answer: The study of variant Creutzfeldt-Jakob disease (vCJD) in the United Kingdom suggests that while local factors may have increased the risk of acquiring vCJD in some parts of the UK, most cases that were geographically close to each other are likely due to the same factors that gave rise to the large majority of other vCJD cases in the UK (PUBMED:19692715). This indicates that the risk of vCJD in the UK is not confined to specific localities but is a countrywide concern. The cluster of five cases in North Leicestershire and the 12 investigations of geographically associated cases of vCJD across the UK showed that in most locations, cases had common dietary or medical-dental care links, but strong evidence to exclude chance as a likely explanation for the local occurrence of these vCJD cases was not found (PUBMED:19692715).
Furthermore, the risk of transmission of vCJD through blood transfusion has been assessed, and while the risk is highly uncertain, it is likely very small (PUBMED:24689837). This assessment, along with the fact that the UK has implemented strong regulations to stop the spread of BSE and vCJD, suggests that the risk of vCJD is managed on a national level rather than being a localized issue (PUBMED:10386559). The similarity in clinical, neuropathological, and biochemical data on vCJD patients from France and the UK also supports the hypothesis that a common strain of infectious agent is involved in both countries, further indicating that the risk is not confined to local regions within the UK (PUBMED:19334063).
In conclusion, the evidence suggests that the risk of vCJD in the United Kingdom is a countrywide risk rather than a local one.
Instruction: Is the pain visual analogue scale linear and responsive to change?
Abstracts:
abstract_id: PUBMED:31403125
Correlations among algometry, the visual analogue scale, and the numeric rating scale to assess chronic pelvic pain in women. Objective: To investigate the correlation between the numerical rating scale, visual analogue scale, and pressure threshold by algometry in women with chronic pelvic pain.
Study Design: This was a cross-sectional study. We included 47 patients with chronic pelvic pain. All subjects underwent a pain assessment that used three different methods and were divided according to the cause of pain (endometriosis versus non-endometriosis). Moreover, we assessed the agreement between the instruments (visual analogue scale, numeric rating scale and algometry) using the intraclass correlation coefficient (ICC).
Results: The ICCs for the numeric rating scale and the visual analogue scale regarding pain (0.992), dysmenorrhea (1.00) and dyspareunia (0.996) were strong. The agreement between the scales was excellent (p ≤0.01). The correlation between algometry and the scales showed a moderate and inverse association, and this correlation was statistically significant: as the scores on the numeric rating scale and the visual analogue scale regarding dyspareunia increased, the algometry thresholds decreased.
Conclusions: The assessment of women with chronic pelvic pain should combine pressure algometry and the numeric rating scale or the visual analogue scale, because of their inverse correlations and satisfactory reliability and sensitivity, to make pain assessment less subjective and more accurate.
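For readers unfamiliar with the agreement statistic used here, the snippet below computes an intraclass correlation coefficient on toy paired ratings using the third-party pingouin package (assumed available); the data are invented and only illustrate the mechanics.

```python
import pandas as pd
import pingouin as pg  # third-party library; assumed installed

# Toy data in long format: each patient scored on both scales.
df = pd.DataFrame({
    "patient": [1, 1, 2, 2, 3, 3, 4, 4],
    "scale":   ["NRS", "VAS"] * 4,
    "score":   [7, 7.2, 4, 3.8, 9, 9.1, 2, 2.3],
})

# ICC treats patients as targets and the two scales as "raters".
icc = pg.intraclass_corr(data=df, targets="patient", raters="scale", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```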
abstract_id: PUBMED:31660159
Validation of a visual analogue scale for the evaluation of the postoperative anxiety: A prospective observational study. Aim: Anxiety affects the perception of pain during the postoperative period. A simple evaluation scale could improve the management of this component. The objective of this study was to evaluate the reproducibility and the consistency of a visual analogue scale for anxiety compared with the reference method, the State-Trait Anxiety Inventory (STAI).
Design: Observational, prospective, monocentric study of 500 patients in the post-anaesthetist care unit. Anxiety was evaluated using both the visual analogue scale for anxiety and the STAI in perioperative patients. Consistency between the visual analogue scale for anxiety and the STAI, detection thresholds and factors predicting anxiety were researched.
Results: A correlation was found between the visual analogue scale for anxiety and the STAI. There was also a correlation between pain and anxiety. Analysis of receiver operating characteristic (ROC) curves showed a visual analogue scale for anxiety threshold of 34/100 that allowed the identification of patients with or without anxiety. Predictive factors for anxiety were female gender, use of benzodiazepine in premedication, emergency surgery and significant pain in the post-anaesthetist care unit. In summary, the visual analogue scale for anxiety is a useful tool for detecting the anxiety component of postoperative pain. It could be used in association with covariates of interest to improve anxiety management during the postoperative period.
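The 34/100 threshold was derived from ROC analysis. A minimal sketch of that step, using scikit-learn on simulated scores and Youden's J statistic to pick the cut-off, is shown below; the simulated data and the resulting threshold are illustrative only.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)

# Hypothetical data: STAI-defined anxiety status (1 = anxious) and VAS-A scores (0-100).
anxious = rng.integers(0, 2, 500)
vas_a = np.clip(rng.normal(50, 15, 500) * anxious
                + rng.normal(20, 12, 500) * (1 - anxious), 0, 100)

fpr, tpr, thresholds = roc_curve(anxious, vas_a)
best = np.argmax(tpr - fpr)  # Youden's J selects the optimal cut-off
print(f"optimal VAS-A threshold: {thresholds[best]:.0f}/100")
```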
abstract_id: PUBMED:28850536
How to analyze the Visual Analogue Scale: Myths, truths and clinical relevance. Background And Aims: The Visual Analogue Scale (VAS) is a popular tool for the measurement of pain. A variety of statistical methods are employed for its analysis as an outcome measure, not all of them optimal or appropriate. An issue which has attracted much discussion in the literature is whether VAS is at a ratio or ordinal level of measurement. This decision has an influence on the appropriate method of analysis. The aim of this article is to provide an overview of current practice in the analysis of VAS scores, to propose a method of analysis which avoids the shortcomings of more traditional approaches, and to provide best practice recommendations for the analysis of VAS scores.
Methods: We report on the current usage of statistical methods, which fall broadly into two categories: those that assume a probability distribution for VAS, and those that do not. We give an overview of these methods, and propose continuous ordinal regression, an extension of current ordinal regression methodology, which is appropriate for VAS at an ordinal level of measurement. We demonstrate the analysis of a published data set using a variety of methods, and use simulation to compare the power of the various methods to detect treatment differences, in differing pain situations.
Results: We demonstrate that continuous ordinal regression provides the most powerful statistical analysis under a variety of conditions.
Conclusions And Implications: We recommend that in the situation in which no covariates besides treatment group are included in the analysis, distribution-free methods (Wilcoxon, Mann-Whitney) be used, as their power is indistinguishable from that of the proposed method. In the situation in which there are covariates which affect VAS, the proposed method is optimal. However, in this case, if the VAS scores are not concentrated around either extreme of the scale, normal-distribution methods (t-test, linear regression) are almost as powerful, and are recommended as a pragmatic choice. In the case of small sample size and VAS skewed to either extreme of the scale, the proposed method has vastly superior power to other methods.
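A small simulation makes the power comparison concrete. The sketch below, using SciPy, estimates the power of the Mann-Whitney test and the t-test on beta-distributed scores skewed toward the low end of the VAS, under an assumed effect size; it is a toy version of the article's simulation, not a reproduction of it.

```python
import numpy as np
from scipy.stats import mannwhitneyu, ttest_ind

rng = np.random.default_rng(3)
n_sim, n, alpha = 2000, 30, 0.05
hits_mw = hits_t = 0

for _ in range(n_sim):
    # VAS scores skewed toward the low end of the 0-100 scale (beta-distributed).
    control   = 100 * rng.beta(1.5, 6.0, n)
    treatment = 100 * rng.beta(1.5, 4.0, n)  # assumed shifted distribution
    hits_mw += mannwhitneyu(control, treatment).pvalue < alpha
    hits_t  += ttest_ind(control, treatment).pvalue < alpha

print(f"power: Mann-Whitney {hits_mw / n_sim:.2f}, t-test {hits_t / n_sim:.2f}")
```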
abstract_id: PUBMED:24921952
Is the pain visual analogue scale linear and responsive to change? An exploration using Rasch analysis. Objectives: Pain visual analogue scales (VAS) are commonly used in clinical trials and are often treated as an interval level scale without evidence that this is appropriate. This paper examines the internal construct validity and responsiveness of the pain VAS using Rasch analysis.
Methods: Patients (n = 221, mean age 67, 58% female) with chronic stable joint pain (hip 40% or knee 60%) of mechanical origin waiting for joint replacement were included. Pain was scored on seven daily VASs. Rasch analysis was used to examine fit to the Rasch model. Responsiveness (Standardized Response Means, SRM) was examined on the raw ordinal data and the interval data generated from the Rasch analysis.
Results: Baseline pain VAS scores fitted the Rasch model, although 15 aberrant cases impacted on unidimensionality. There was some local dependency between items but this did not significantly affect the person estimates of pain. Daily pain (item difficulty) was stable, suggesting that single measures can be used. Overall, the SRMs derived from ordinal data overestimated the true responsiveness by 59%. Changes over time at the lower and higher end of the scale were represented by large jumps in interval equivalent data points; in the middle of the scale the reverse was seen.
Conclusions: The pain VAS is a valid tool for measuring pain at one point in time. However, the pain VAS does not behave linearly and SRMs vary along the trait of pain. Consequently, Minimum Clinically Important Differences using raw data, or change scores in general, are invalid as these will either under- or overestimate true change; raw pain VAS data should not be used as a primary outcome measure or to inform parametric-based Randomised Controlled Trial power calculations in research studies; and Rasch analysis should be used to convert ordinal data to interval data prior to data interpretation.
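The responsiveness statistic at issue here, the Standardized Response Mean, is simply the mean change divided by the standard deviation of change. A minimal sketch on invented before/after VAS scores:

```python
import numpy as np

def srm(baseline, follow_up):
    """Standardized Response Mean: mean change / SD of change."""
    change = np.asarray(follow_up) - np.asarray(baseline)
    return change.mean() / change.std(ddof=1)

# Toy VAS pain scores (0-100) before and after joint replacement.
before = np.array([72, 80, 65, 90, 77, 85])
after  = np.array([40, 55, 30, 60, 52, 48])
print(f"SRM = {srm(before, after):.2f}")  # negative value: pain decreased
```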
abstract_id: PUBMED:26728352
Ratings of pain and activity limitation on the visual analogue scale and global impression of change in multimodal rehabilitation of back pain - analyses at group and individual level. Purpose: To evaluate changes in pain intensity and activity limitation, at group and individual levels, and their associations with the global impression of change after multimodal rehabilitation in patients with back pain.
Method: Patients with long-term back pain (n = 282) participated in a 4-week programme with a follow-up after 6 months. Visual analogue scales (VAS) were used to rate pain intensity and activity limitation. Global impression of change (GIC) was rated on a 7-category scale. The sign test, the Svensson method and the Spearman rank correlation were used for analyses.
Results: Significantly lower ratings in pain and activity limitation at follow-up were found at group level. However, a large individual variability was found by the Svensson method. The correlations between GIC and changes in pain and activity limitation were rs = 0.49 and rs = 0.50, respectively. A rated GIC of at least "much better" on group level showed changes of ≥20 mm on the VAS.
Conclusions: At group level, lower VAS ratings were found in patients with back pain. However, a large individual variability in pain and activity limitation was also found, resulting in low to moderate associations between GIC and the change in VAS ratings. The large individual variability might be due to the impreciseness of the ratings on the VAS. We have presented a critical discussion of statistical methods in connection with the VAS. Implications for Rehabilitation: The use of the VAS as a rating instrument may be questioned, especially for perceived pain intensity, which is too complex an experience to be rated on a line without any visible categories. Single ratings of pain intensity should preferably be complemented with ratings of activity limitation in patients with long-term back pain. Global impression of change is a suggested inclusive rating after rehabilitation. The improvement desired by the patient should preferably be determined before rehabilitation.
abstract_id: PUBMED:33994565
Design of Paper-Based Visual Analogue Scale Items. Paper-based visual analogue scale (VAS) items were developed 100 years ago. Although they gained great popularity in clinical and medical research for assessing pain, they have been scarcely applied in other areas of psychological research for several decades. However, since the beginning of digitization, VAS have attracted growing interest among researchers for carrying out computerized and paper-based data assessments. In the present study, we investigated the research question "Which different design characteristics of paper-based VAS items are preferred by women and men?" Based on a sample of 115 participants (68 female), our results revealed that the respondents preferred a paper-based VAS item with a horizontal, 8-cm long, 3 DTP ("desktop publishing point") wide, black line, with flat line endpoints, and the ascending numerical anchors "0" and "10", both for women and men. Although we did not identify any gender difference in these characteristics, our findings uncovered clear preferences on how to design paper-based VAS items.
abstract_id: PUBMED:32444340
Minimal important change for the visual analogue scale foot and ankle (VAS-FA). Background: Visual analogue scale foot and ankle (VAS-FA) is a patient-reported outcome measure for foot and ankle disorders. The VAS-FA is validated into several languages and well adopted into use. Nonetheless, minimal important change (MIC) for the VAS-FA has not been estimated thus far.
Methods: The VAS-FA score was obtained from 106 patients undergoing surgery for various foot and ankle complaints. MIC was estimated using an anchor-based predictive method.
Results: The adjusted MIC was 6.8 for total VAS-FA score, and 9.3 for the Pain, 5.8 for the Function, and 5.7 for the Other complaints subscales. The VAS-FA score was found to separate improvement and deterioration in patients' foot and ankle condition.
Conclusions: MIC was successfully defined for the VAS-FA in the current study. The VAS-FA can be used to evaluate foot and ankle patients' clinical foot and ankle status and its change. Further research on estimating disease-specific MICs is recommended.
abstract_id: PUBMED:24735056
Measuring the Intensity of Chronic Pain: Are the Visual Analogue Scale and the Verbal Rating Scale Interchangeable? Objectives: The 0 to 100 mm visual analogue scale (VAS) and the five-category verbal rating scale (VRS) are commonly used for measuring pain intensity. An open question remains as to whether these scales can be used interchangeably to allow comparisons between intensities of pain in the clinical setting or increased statistical power in pain research.
Methods: Seven hundred and ninety-six patients were requested to rate the present intensity of their chronic pain on the two scales. Spearman's rank correlation coefficients between VAS and VRS were calculated. For testing interchangeability, VAS was transformed into a discrete ordinal scale by dividing the entire VAS into five categories, either equidistantly (biased) or using frequency distributions of VAS (unbiased). We used Goodman-Kruskal's gamma and Wilson's e measures of ordinal association to quantify the relationships between the transformed VAS and VRS scores, and the Svensson method to evaluate agreement between the biased and unbiased discrete VAS and the VRS scales.
Results: Average VAS and VRS scores were 76 ± 18 mm and "severe," respectively. Spearman's rank correlation coefficient values between continuous VAS and VRS were 0.77 to 0.85. Goodman-Kruskal's gamma ordinal associations between discrete VAS and VRS were 0.82 to 0.92 and 0.90 to 0.98 for the biased and unbiased VAS, respectively. Wilson's e measures were 0.51 to 0.61 and 0.54 to 0.65, accordingly. Svensson analysis showed low probability of agreement between both biased (0.66 to 0.76) and unbiased (0.75 to 0.82) VAS and VRS.
Discussion: Regardless of the relatively high Spearman correlations between original VAS and VRS, the low ordinal association and low probability of agreement between discrete VAS and VRS suggest that they are not interchangeable. Therefore, VAS and VRS should not be used interchangeably in the clinical setting or for increased statistical power in pain research.
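Goodman-Kruskal's gamma is less widely implemented than Spearman's rho; the sketch below computes both on toy five-category ratings, with gamma coded directly from concordant and discordant pairs (tied pairs excluded). The data are invented.

```python
from itertools import combinations
from scipy.stats import spearmanr

def goodman_kruskal_gamma(x, y):
    """gamma = (concordant - discordant) / (concordant + discordant); ties dropped."""
    conc = disc = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        s = (x1 - x2) * (y1 - y2)
        conc += s > 0
        disc += s < 0
    return (conc - disc) / (conc + disc)

vas_cat = [1, 2, 2, 3, 4, 5, 3, 4, 5, 1]  # VAS collapsed into 5 categories
vrs     = [1, 2, 3, 3, 4, 5, 3, 5, 4, 1]  # verbal rating scale
print(f"Spearman rho = {spearmanr(vas_cat, vrs).correlation:.2f}")
print(f"gamma        = {goodman_kruskal_gamma(vas_cat, vrs):.2f}")
```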
abstract_id: PUBMED:33561663
The Psychometric Properties of the Visual Analogue Scale Applied by an Observer to Assess Procedural Pain in Infants and Young Children: An Observational Study. Purpose: The Visual Analogue Scale applied by an observer (VASobs) is widely used to quantify pain but the evidence to support validity is poor. The aim of this study was to evaluate the psychometric and practical properties of the VASobs used to assess procedural pain in infants and young children.
Design And Methods: In an observational study, 26 clinicians applied the VASobs independently to video segments of 100 children aged six to 42 months undergoing a procedure to generate pain and distress scores. Each video segment was scored by four randomly selected reviewers.
Results: Reliability for pain scores was poor to fair (ICC 0.35 to 0.55) but higher for distress scores (ICC 0.6 to 0.89). At a cut-off score of 3, sensitivity and specificity were 84.7% and 95.0%, respectively for pain and 91.5% and 77.5% respectively for distress. Linear mixed modelling confirmed responsiveness. An increase in pain scores (regression slope 4.95) and distress scores (regression slope 5.52) across phases (baseline to procedure) was seen for painful procedures. The correlation between VASobs pain and FLACC scores was good (r = 0.74) and correlations between VASobs distress and FLACC scores were excellent (r = 0.89).
Conclusion: VASobs was easily applied and preferred by clinicians. Despite evidence of sensitivity and responsiveness to pain, the reliability results were poor, and this scale cannot be recommended for use.
Practice Implications: The results of this study prevent recommending the VASobs for assessing procedural pain in infants and young children for clinical or research purposes.
abstract_id: PUBMED:38143014
Measuring pain intensity in older adults. Can the visual analogue scale and the numeric rating scale be used interchangeably? Objectives: Visual analogue scale (VAS) and numeric rating scale (NRS) are two commonly used instruments for measuring pain intensity. Both instruments are validated for use in both clinical and research settings, and share a range of similar aspects. Some studies have shown that the two instruments may be used interchangeably, but the results are conflicting. In this study we assessed whether the VAS and the NRS instruments may be used interchangeably when measuring pain intensity in older adults.
Methods: Data were collected in a cross-sectional study, as part of the follow-up in a larger longitudinal study conducted at Akershus University Hospital, Norway, from 2021 to 2022, and included 39 older adults aged ≥65 years. Participants were regarded as a normal older adult population, as they were not recruited on the basis of a specific condition or reports of pain. The participants were asked to rate their pain intensity on an average day using VAS and NRS. Bland-Altman analysis was performed to assess agreement between the two instruments.
Results: Thirty-seven participants with a mean (SD) age of 77 (5.9) years were included in the analysis. Mean (SD) pain assessed by VAS and NRS was 2.8 (1.8) and 4.7 (2.2), respectively. A mean difference (SD) of 2.0 (1.9) between the scores of the two instruments was statistically significantly different from zero (p < 0.001), confirming bias. The 95% limits of agreement were estimated to be -1.7 to 5.7. A post-hoc analysis, removing an outlier, resulted in similar conclusions.
Conclusion: There was poor agreement between the VAS and NRS scale for measuring pain intensity in older adults. This suggests that the two instruments should not be used interchangeably when assessing pain intensity in this population.
Ethical Approval: Regional Committees for Medical and Health Research Ethics [2016/2289].
Trial Registration: NCT03162081, 22 May 2017.
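Bland-Altman analysis, as used in the study above, reduces to the mean difference (bias) and its 95% limits of agreement. A minimal sketch on invented paired ratings:

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two paired measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Toy paired pain ratings (same patients rated on both scales).
nrs = [5, 6, 3, 7, 4, 6, 5, 8]
vas = [3, 4, 2, 5, 2, 3, 4, 6]
bias, loa = bland_altman(nrs, vas)
print(f"bias = {bias:.1f}, 95% limits of agreement = ({loa[0]:.1f}, {loa[1]:.1f})")
```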
Answer: The pain visual analogue scale (VAS) is commonly used in clinical trials to measure pain, but its linearity and responsiveness to change have been questioned. According to a study that used Rasch analysis to examine the internal construct validity and responsiveness of the pain VAS, the scale was found to be valid for measuring pain at one point in time. However, the study also found that the pain VAS does not behave linearly, and the standardized response means (SRMs) derived from ordinal data overestimated the true responsiveness by 59%. The study concluded that raw pain VAS data should not be used as a primary outcome measure or to inform parametric-based Randomized Controlled Trial power calculations in research studies, and that Rasch analysis should be used to convert ordinal data to interval data prior to data interpretation (PUBMED:24921952).
In another study, the psychometric and practical properties of the VAS applied by an observer (VASobs) were evaluated for assessing procedural pain in infants and young children. The study found that despite evidence of sensitivity and responsiveness to pain, the reliability results were poor, and the VASobs could not be recommended for use (PUBMED:33561663).
Furthermore, a study assessing whether the VAS and the numeric rating scale (NRS) could be used interchangeably when measuring pain intensity in older adults found poor agreement between the two instruments. This suggests that the VAS and NRS should not be used interchangeably in this population (PUBMED:38143014).
In summary, while the pain VAS is a valid tool for measuring pain at a single point in time, it does not behave linearly, and its responsiveness to change may be overestimated when using raw data. Additionally, its reliability and interchangeability with other scales, such as the NRS, can be problematic, particularly in certain populations. Therefore, careful consideration and appropriate statistical methods are recommended when using the VAS to measure changes in pain intensity over time.
Instruction: Does hyperbaric oxygen therapy reduce the effects of ischemia on colonic anastomosis in laparoscopic colon resection?
Abstracts:
abstract_id: PUBMED:27026260
Does hyperbaric oxygen therapy reduce the effects of ischemia on colonic anastomosis in laparoscopic colon resection? Background: An increase in intra-abdominal pressure causes a decrease in the splanchnic blood flow and the intramucosal pH of the bowel, as well as increasing the risk of ischemia in the colon. The aim of the present study is to evaluate the effect of hyperbaric oxygen therapy (HBOT) on the ischemia caused by laparoscopy in colonic anastomosis in an experimental model of laparoscopic colonic surgery.
Materials And Methods: We divided 30 male Wistar albino rats into three groups: Group A was the control (open colon anastomosis); Group B received LCA (laparoscopic colon anastomosis); while Group C received both LCA and HBOT. Each group contained ten animals. We placed Group C (LCA and HBOT) in an experimental hyperbaric chamber in which we administered 100% oxygen at 2.1 atmospheres absolute for 60 min daily for ten consecutive days.
Results: The anastomotic bursting pressure value was found to be higher in the open surgery group (Group A: 226 ± 8.8). The result for Group C (213 ± 27), which received HBOT, was better than that for Group B (197 ± 27). However, there was no statistically significant difference between Group B and Group C. Group A showed better healing than the other groups, while significant differences in the fibroblast proliferation scores were found between Groups A and B. In terms of tissue hydroxyproline levels, a significant difference was found between Groups A and B and between Groups A and C, but not between Groups B and C.
Conclusions: HBOT increases the oxygen level in the injured tissue. Although HBOT might offer several advantages, it had only a limited effect on the healing of colonic anastomosis in rats with increased intra-abdominal pressure in our study.
Key Words: Anastomosis, Colon, Hyperbaric Oxygen Treatment, Oxidative Stress.
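The abstract's non-significant Group B vs Group C comparison can be checked from the reported summary statistics alone. Assuming equal variances and n = 10 rats per group (as stated in the abstract), a two-sample t-test from summary data gives p of roughly 0.2, consistent with the reported lack of significance:

```python
from scipy.stats import ttest_ind_from_stats

# Bursting pressures reported in the abstract (mean ± SD, n = 10 per group).
# Group B: LCA only; Group C: LCA + HBOT. Equal variances assumed.
res = ttest_ind_from_stats(mean1=197, std1=27, nobs1=10,
                           mean2=213, std2=27, nobs2=10)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}")  # p > 0.05, as reported
```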
abstract_id: PUBMED:9874433
The effects of hyperbaric oxygen on normal and ischemic colon anastomoses. Background: Leakage from colonic anastomoses is a major complication causing increased mortality and morbidity, and ischemia is a well-known cause of this event. Inadequate tissue oxygenation could be reversed by using hyperbaric oxygen. This study was designed to investigate the effects of hyperbaric oxygen on the healing of ischemic and normal colon anastomoses in the rat model.
Methods: Standardized left colon resection 3 cm above the peritoneal reflection and colonic anastomosis were performed in 40 Wistar rats divided into four groups. The control group (I) received no further treatment. To mimic ischemia, 2 cm of mesocolon was ligated on either side of the anastomosis in group II and IV rats. Hyperbaric oxygen therapy was started immediately after surgery in group III and IV rats (therapeutic groups). All animals were sacrificed on the fourth postoperative day. After careful relaparotomy, in situ bursting pressure was measured. The hydroxyproline contents of the anastomotic segments of equal length were determined.
Results: The hydroxyproline assay revealed that rats in group II with ischemic colonic anastomosis have significantly lower levels (P <0.05). The highest levels are in the group III rats with normal colonic anastomosis treated by hyperbaric oxygen (P <0.05). There was no significant difference in hydroxyproline levels between group II and group IV animals (P >0.05). Group III animals had significantly higher bursting pressures than any other group (P <0.05). Group II rats had lowest bursting pressures (P <0.05). Group IV animals had significantly higher levels than group II (P <0.05). Mean bursting pressure values both in groups III and IV and hydroxyproline levels in group III were significantly increased by hyperbaric oxygen therapy (P <0.05).
Conclusions: Ischemia impairs anastomotic healing. Hyperbaric oxygen increases anastomotic healing of both normal and ischemic colonic anastomosis and reverses ischemic damage. This study demonstrated that hyperbaric oxygen improves anastomotic healing.
abstract_id: PUBMED:16552813
Effects of hyperbaric oxygen and Pgg-glucan on ischemic colon anastomosis. Aim: In colorectal surgery, anastomotic failure is still a problem in ischemia. Here,we analyzed the effects of hyperbaric oxygen and beta-glucan on colon anastomoses in ischemic condition.
Methods: Colonic resection and anastomosis in the rectosigmoid region were performed in forty Wistar-Albino rats divided into four equal groups. The colon mesentery was ligated to induce ischemia. The first group was the control group. The subjects of the second group were treated with hyperbaric oxygen; the third group with glucan; and the fourth group with both. On the fourth day, the rats were sacrificed, the anastomotic segment was resected, and bursting pressures and hydroxyproline levels of the anastomotic line were measured.
Results: The bursting pressure differences of the second and third groups from the control group were significant (P<0.01); the fourth group differed significantly from the control (P<0.001). There was no difference between the treated groups in bursting pressure (P>0.05). The hydroxyproline levels in all treated groups differed significantly from those of the control group (P<0.001). Hydroxyproline levels in the fourth group were higher than those of the second and third groups (P<0.001). There were no significant differences between the second and fourth groups in bursting pressure and hydroxyproline levels (P>0.05).
Conclusion: Hyperbaric oxygen and glucan improve healing in ischemic colon anastomoses through antimicrobial and immune-stimulating properties, and they appear to act synergistically when combined.
abstract_id: PUBMED:24270957
Is combined therapy more effective than growth hormone or hyperbaric oxygen alone in the healing of left ischemic and non-ischemic colonic anastomoses? Objective: Our aim was to investigate the effects of growth hormone (GH), hyperbaric oxygen and combined therapy on normal and ischemic colonic anastomoses in rats.
Methods: Eighty male Wistar rats were divided into eight groups (n = 10). In the first four groups, non-ischemic colonic anastomosis was performed, whereas in the remaining four groups, ischemic colonic anastomosis was performed. In groups 5, 6, 7, and 8, colonic ischemia was established by ligating 2 cm of the mesocolon on either side of the anastomosis. The control groups (1 and 5) received no treatment. Hyperbaric oxygen therapy was initiated immediately after surgery and continued for 4 days in groups 3 and 4. Groups 2 and 6 received recombinant human growth hormone, whereas groups 4 and 8 received GH and hyperbaric oxygen treatment. Relaparotomy was performed on postoperative day 4, and a perianastomotic colon segment 2 cm in length was excised for the detection of biochemical and mechanical parameters of anastomotic healing and histopathological evaluation.
Results: Combined treatment with hyperbaric oxygen and GH increased the mean bursting pressure values in all of the groups, and a statistically significant increase was noted in the ischemic groups compared to the controls (p<0.05). This improvement was more evident in the ischemic and normal groups treated with combined therapy. In addition, a histopathological evaluation of anastomotic neovascularization and collagen deposition showed significant differences among the groups.
Conclusions: Combined treatment with recombinant human growth hormone and hyperbaric oxygen resulted in a favorable therapeutic effect on the healing of ischemic colonic anastomoses.
abstract_id: PUBMED:32578671
Effect of preconditioning and postoperative hyperbaric oxygen therapy on colonic anastomosis healing with and without ischemia in rats. Purpose: To investigate the effect of hyperbaric oxygen therapy on colonic anastomosis healing with and without ischemia in rats.
Methods: Forty female rats underwent segmental resection of 1 cm of the left colon followed by end-to-end anastomosis. They were randomly assigned to four groups (n=10 each): a sham group; two groups submitted to hyperbaric oxygen therapy (HBOT), with and without induced ischemia; and an induced-ischemia group without HBOT. The HBOT protocol evaluated was 100% O2 at 2.4 atmospheres absolute pressure (ATA) for 60 minutes, with two sessions before the operation as a preconditioning protocol and three sessions after it. Clinical course and mortality were monitored throughout the experiment and on the day of euthanasia, the fourth day after laparotomy. The macroscopic appearance of the abdominal cavity was assessed, and samples for breaking strength of the anastomosis and histopathological parameters were collected.
Results: There was no statistically significant difference in mortality or anastomosis leak between the four experimental groups. Anastomosis breaking strength was similar across groups.
Conclusion: The HBOT protocol tested herein at 2.4 ATA did not affect histopathological or biomechanical parameters of colonic anastomotic healing, nor the clinical outcomes of death and anastomotic leak on the fourth day after laparotomy.
abstract_id: PUBMED:21226391
Hyperbaric oxygen on the healing of ischemic colonic anastomosis--an experimental study in rats. The aim of the present study was to evaluate the effect of hyperbaric oxygen therapy (HBO2) on the healing process of ischemic colonic anastomoses in rats. Forty Wistar rats were divided into four groups: control (Group I), control and HBO2 (Group II), ischemia (Group III), ischemia and HBO2 (Group IV). Ischemia was achieved by clamping four centimeters of the colonic arcade. On the eighth therapy day, the anastomotic region was removed for quantification of hydroxyproline and immunohistochemical determination of metalloproteinases 1 and 9 (MMP1, MMP9). The immunohistochemical studies showed significantly larger metalloproteinase-labeled areas in Group IV compared with Group III for both MMP1 and MMP9 (p < 0.01). This finding points to a higher remodeling activity of the anastomoses in this experimental group. Additionally, animals subjected to hyperbaric oxygen therapy showed both a reduction in interstitial edema and an increase in hydroxyproline concentrations at the anastomotic site. Therefore, we conclude that HBO2 is indeed beneficial in anastomotic ischemia.
abstract_id: PUBMED:33616775
Ischemic proctitis 6 months after laparoscopic sigmoidectomy: a case report. Background: Ischemic colitis is a common disease; however, its pathophysiology remains unclear, especially in ischemic proctitis after sigmoidectomy. We present a rare case of ischemic proctitis 6 months after laparoscopic sigmoidectomy.
Case Presentation: The patient was a 60-year-old man with hypertension, type 2 diabetes, and hyperlipidemia. He was a smoker. He underwent laparoscopic sigmoidectomy for pathological stage I sigmoid colon cancer and was followed up without any adjuvant therapy. Six months after his surgery, he complained of lower abdominal discomfort, bloody stools, and tenesmus. Colonoscopy showed extensive rectal ulcers between the anastomotic site and the anal canal, which was particularly severe on the anal side several centimeters beyond the anastomosis. We provided non-surgical management, including hyperbaric oxygen therapy. The rectal ulcers had healed 48 days after the therapeutic intervention. He has not experienced any recurrence for 3.5 years.
Conclusions: While performing sigmoidectomy, it is important to consider the blood backflow from the anal side of the bowel carefully, especially for patients with risk factors of ischemic proctitis.
abstract_id: PUBMED:890244
The role of oxygen therapy in the healing of experimental skin wounds and colonic anastomosis. A number of experimental studies have indicated that wound healing is adversely affected by hypoxia, and it has been suggested that healing can be improved by increasing inspired oxygen tensions. However, this hypothesis is based on observations on simulated wounds or tissue implants in experimental animals, and the clinical relevance of these observations is uncertain. The effects of oxygen therapy on the healing of skin wounds and colonic anastomoses were examined in rats. Sutured skin incisions and normal and ischaemic colonic anastomoses were studied in control animals breathing air and in test animals breathing 50 per cent oxygen. Wound healing was assessed by measurements of wound breaking strength, colonic bursting wall tension and wound collagen after 7 days' treatment with oxygen. There was no significant difference in the measurements in skin wounds or colonic anastomoses in test and control animals, and there was a similar incidence of anastomotic dehiscence in the ischaemic colon of test animals and controls. Oxygen therapy had no apparent effect on wound healing in this study, and it was concluded that further studies are required to determine whether or not there is a rational basis for the clinical use of oxygen therapy to help wound healing.
abstract_id: PUBMED:24065219
Evaluation of the effects of hyperbaric oxygen treatment and enoxaparin on left colon anastomosis. An experimental study. Background: Surgical interventions on left colon lead to high morbidity. The problems in wound healing are the main cause of this morbidity. Hypoxia retards wound healing and hyperbaric oxygen treatment (HBOT) has an anti-hypoxic effect.
Materials And Methods: In this experimental study we divided eighty Wistar albino rats into eight groups, numbered 1 to 8. Normal (non-ischemic) and ischemic left colon anastomoses were performed in the first and second sets of four groups, respectively. HBOT and subcutaneous enoxaparin were applied to the groups separately and in combination for four days, except in the control groups (Group-1 and Group-5). We measured anastomotic bursting pressures and performed pathological examinations, as well as an electron microscopic study of one sample from each group, after sacrificing the rats on the fourth day.
Results: There were no statistically significant differences in bursting pressures when we compared Group-1 with the other non-ischemic groups, or Group-5 with Group-6, but there were statistically significant differences when we compared Group-5 with Group-7 and Group-8. On pathological examination, there were no statistically significant differences between the groups concerning necrosis, epithelization, granulation tissue formation and collagen deposition. Statistically significant differences were found in the scores of neovascularization when we compared Group-1 with Group-3 and Group-4, and Group-5 with Group-8. Electron microscopic evaluation revealed a prominent increase in both neovascularization and collagen fibers in the samples taken from the groups that received enoxaparin and hyperbaric oxygen treatment in combination.
Conclusions: These findings suggest that, unlike enoxaparin, HBOT increases neovascularization and bursting pressures in ischemic colon anastomosis.
abstract_id: PUBMED:27041554
Hyperbaric oxygen therapy improves colorectal anastomotic healing. Purpose: Hyperbaric oxygen treatment (HBOT) has been found to improve the healing of poorly oxygenated tissues. This study aimed to investigate the influence of HBOT on the healing in ischemic colorectal anastomosis.
Methods: Forty Wistar rats were randomly divided into a treatment group that received HBOT for 10 consecutive days (7 days before and 3 days after surgery) or a control group, which did not receive the therapy. Colectomy with an ischemic anastomosis was performed in all rats. In each group, the rats were followed for 3 or 7 days after surgery to determine the influence of HBOT on anastomotic healing.
Results: Five rats from each group died during follow-up. No anastomotic dehiscence was seen in the HBOT group, compared to 37.5 % and 28.6 % dehiscence in the control group on postoperative day (POD) 3 and 7, respectively. The HBOT group had a significantly higher bursting pressure (130.9 ± 17.0 mmHg) than the control group (88.4 ± 46.7 mmHg; p = 0.03) on POD 3. On POD 3 and POD 7, the adhesion severity was significantly higher in the control groups than in the HBOT groups (p < 0.005). Kidney function (creatinine level) of the HBOT group was significantly better than of the control group on POD 7 (p = 0.001). Interestingly, a significantly higher number of CD206+ cells (marker for type 2 macrophages) was observed in the HBOT group at the anastomotic area on POD 3.
Conclusion: Hyperbaric oxygen enhanced the healing of ischemic anastomoses in rats and improved the postoperative kidney function.
Answer: Hyperbaric oxygen therapy (HBOT) has been studied for its potential benefits in the healing of colonic anastomosis, particularly under ischemic conditions. The studies show mixed results, with some indicating that HBOT can improve the healing of both normal and ischemic colonic anastomoses by increasing tissue oxygenation and reversing ischemic damage (PUBMED:9874433), while others suggest that the effect might be limited.
For instance, one study found that HBOT increased the anastomotic bursting pressure value in a laparoscopic colonic anastomosis model, suggesting better healing than without HBOT, but the difference was not statistically significant (PUBMED:27026260). Another study showed that HBOT improved healing in ischemic colon anastomoses and seemed to act synergistically when combined with beta-glucan (PUBMED:16552813). Additionally, combined treatment with recombinant human growth hormone and HBOT resulted in a favorable therapeutic effect on the healing of ischemic colonic anastomoses (PUBMED:24270957).
Conversely, a study that tested a specific HBOT protocol at 2.4 Atmosphere absolute pressure (ATA) found no significant effect on histopathological and biomechanical parameters of colonic anastomotic healing, nor on clinical outcomes such as death and anastomosis leak (PUBMED:32578671). Another study concluded that HBOT is beneficial in anastomotic ischemia, as evidenced by a reduction in interstitial edema and an increase in hydroxyproline concentrations at the anastomotic site (PUBMED:21226391).
In the context of ischemic proctitis after laparoscopic sigmoidectomy, non-surgical management including HBOT was used successfully to heal rectal ulcers (PUBMED:33616775). However, an older study found no apparent effect of oxygen therapy on wound healing in rats, suggesting that further studies are needed to determine the clinical relevance of oxygen therapy for wound healing (PUBMED:890244).
In summary, while there is evidence to suggest that HBOT can be beneficial for the healing of colonic anastomosis, especially under ischemic conditions, the results are not uniformly positive, and the effectiveness may depend on specific protocols and individual circumstances. Further research is needed to fully understand the role of HBOT in this context.
Instruction: Is frequency of family meals associated with parental encouragement of healthy eating among ethnically diverse eighth graders?
Abstracts:
abstract_id: PUBMED:23651952
Is frequency of family meals associated with parental encouragement of healthy eating among ethnically diverse eighth graders? Objective: The purpose of the present study was to explore the relationship between family meals and parental encouragement of healthy eating overall and by ethnicity.
Design: Family meal frequency was measured with one item asking how many times in the past 7 d all or most of the family ate a meal together, which was then categorized to represent three levels of family meals (≤2 times, 3-6 times and ≥7 times). Parental encouragement of healthy eating assessed how often parents encouraged the student to eat fruits and vegetables, drink water, eat wholegrain bread, eat breakfast and drink low-fat milk (never to always). An overall scale of parental encouragement of healthy eating was created. Mixed-effect regression analyses were run controlling for gender, ethnicity, age and socio-economic status. Moderation by ethnicity was explored.
Setting: Middle schools.
Subjects: Participants included 2895 US eighth grade students participating in the Central Texas CATCH (Coordinated Approach To Child Health) Middle School Project (mean age 13.9 years; 24.5% White, 52.7% Hispanic, 13.0% African-American, 9.8% Other; 51.6% female).
Results: Eating more family meals was significantly associated with having parents who encouraged healthy eating behaviours (P for trend <0.001). The number of family meals was positively associated with encouragement of each of the healthy eating behaviours (P for trend <0.0001). There were no differences in the relationships by ethnicity of the students.
Conclusions: Families who eat together are more likely to encourage healthy eating in general. Interventions which promote family meals may include tips for parents to increase discussions about healthy eating.
abstract_id: PUBMED:27989447
No Time for Family Meals? Parenting Practices Associated with Adolescent Fruit and Vegetable Intake When Family Meals Are Not an Option. Background: Despite research linking family meals to healthier diets, some families are unable to have regular meals together. These families need guidance about other ways to promote healthy eating among adolescents.
Objective: Our aim was to examine the association between various parenting practices and adolescent fruit and vegetable (F/V) intake at different levels of family meal frequency.
Design: We conducted a cross-sectional, population-based survey of influences on adolescent weight-related behaviors using Project EAT (Eating and Activity in Teens) 2010.
Participants/setting: Participants were 2,491 adolescents recruited from middle/high schools in Minneapolis/St Paul, MN.
Measures: Adolescent F/V intake was ascertained with a food frequency questionnaire. Survey items assessed frequency of family meals and F/V parenting practices (availability, accessibility, parent modeling, parent encouragement, and family communication).
Statistical Analyses: Linear regression models were used to examine associations with and interactions among family meals and parenting practices. Models were adjusted for age, sex, socioeconomic status, race/ethnicity, and energy intake (kilocalories per day).
Results: Family meals, F/V availability, F/V accessibility, F/V modeling, and encouragement to eat healthy foods were independently associated with higher F/V intake. Of the 949 (34%) adolescents who reported infrequent family meals (≤2 days/wk), mean F/V intake was 3.6 servings/day for those with high home F/V availability vs 3.0 servings/day for those with low home F/V availability. Similar differences in mean F/V intake (0.3 to 0.6 servings/day) were found for high vs low F/V accessibility, parental modeling, and parent encouragement for healthy eating. Frequent family meals in addition to more favorable parenting practices were associated with the highest F/V intakes.
Conclusions: Food parenting practices and family meals are associated with greater adolescent F/V intake. Longitudinal and intervention studies are needed to determine which combination of parenting practices will lead to improvements in adolescent diets.
abstract_id: PUBMED:27995039
Sociodemographic characteristics associated with frequency and duration of eating family meals: a cross-sectional analysis. Introduction: Children who frequently eat family meals are less likely to develop risk- and behavior-related outcomes, such as substance misuse, sexual risk, and obesity. Few studies have examined sociodemographic characteristics associated with both meal frequency (i.e., number of meals) and duration (i.e., number of minutes spent at mealtimes).
Methods: We examined the association between sociodemographics and family meal frequency and duration among a sample of 85 parents, recruited through the public-school system, in a large New England city. Additionally, we examined differences in family meals by race/ethnicity and parental nativity. Unadjusted ANOVA and adjusted ANCOVA models were used to assess the associations between sociodemographic characteristics and the frequency and duration of meals.
Results: Sociodemographic characteristics were not significantly associated with the frequency of family meals; however, in the adjusted models, differences were associated with duration of meals. Parents who were born outside the U.S. spent an average of 135.0 min eating meals per day with their children compared to 76.2 min for parents who were born in the U.S. (p < 0.01). Additionally, parents who reported being single, divorced, or separated spent, on average, significantly more time per day eating family meals (126.7 min) compared to parents who reported being married or partnered (84.4 min; p = 0.02). Differences existed in meal duration by parental nativity and race/ethnicity, ranging from 63.7 min among multi-racial/other parents born in the U.S. to 182.8 min among black parents born outside the U.S.
Discussion: This study builds a foundation for focused research into the mechanisms of family meals. Future longitudinal epidemiologic research on family meals may help to delineate targets for prevention of maladaptive behaviors, which could affect family-based practices, interventions, and policies.
abstract_id: PUBMED:25489408
Eating habits and eating behaviors by family dinner frequency in the lower-grade elementary school students. Background/objectives: Recently, there has been an increased interest in the importance of family meals on children's health and nutrition. This study aims to examine if the eating habits and eating behaviors of children are different according to the frequency of family dinners.
Subjects/methods: The subjects were third-grade students from 70 elementary schools in 17 cities nationwide. A two-stage stratified cluster sampling was employed. The survey questionnaire was composed of items that examined the general characteristics, family meals, eating habits, eating behaviors, and environmental influence on children's eating. The subjects responded to a self-reported questionnaire. Excluding the incomplete responses, the data (n = 3,435) were analyzed using the χ²-test or t-test.
Results: The group that had more frequent family dinners (≥ 5 days/week, 63.4%), compared to those that had less (≤ 4 days/week, 36.6%), showed better eating habits, such as eating meals regularly, performing desirable behaviors during meals, having breakfast frequently, having breakfast with family members (P < 0.001), and not eating only what he or she likes (P < 0.05). Those who had more frequent family dinners also consumed healthy foods with more frequency, including protein foods, dairy products, grains, vegetables, seaweeds (P < 0.001), and fruits (P < 0.01). However, unhealthy eating behaviors (e.g., eating fatty foods, salty foods, sweets, etc.) were not significantly different by the frequency of family dinners.
Conclusions: Having dinner frequently with family members was associated with more desirable eating habits and with healthy eating behaviors in young children. Thus, nutrition education might be planned to promote family dinners by emphasizing the benefits of family meals for children's health and nutrition and by creating more opportunities for family meals.
abstract_id: PUBMED:30407068
Understanding Parental Ethnotheories and Practices About Healthy Eating: Exploring the Developmental Niche of Preschoolers. Purpose: To understand parental ethnotheories (ie, belief systems) and practices about preschoolers' healthy eating guided by the developmental niche framework.
Design: Qualitative hermeneutic phenomenology.
Setting: Home.
Participants: Participants were 20 parents of preschool-age children ages 3 to 5 years, recruited from a quantitative investigation. A majority of the participants were white, female, married, well educated, and working full time.
Methods: Participants who completed the quantitative survey were asked to provide their contact information if they were willing to be interviewed. From the pool of participants who expressed their willingness to participate in the interviews, 20 participants were selected using a random number generator. In-person semistructured interviews were conducted until data saturation (n = 20). Thematic analysis was performed.
Results: Three themes and 6 subthemes emerged: theme 1-parental ethnotheories about healthy eating included subthemes of knowledge about healthy eating, motivations to promote healthy child development through healthy eating, and sources of knowledge about healthy eating (eg, doctors, social media, government guidelines, positive family-of-origin experiences); theme 2-parental ethnotheories that supported organization of children's physical and social settings included structured mealtime routines and food socialization influences (eg, grandparents, siblings, and childcare programs); and theme 3-parental ethnotheories that supported children's learning about healthy eating included parent-child engagement, communication, and encouragement in food-related activities (eg, meal preparation, visiting farmer's market, grocery shopping, gardening, cooking, baking).
Conclusion: Findings advance the literature on parental practices about healthy eating. Parental ethnotheories (eg, beliefs, motivations, knowledge, and skills) matter. Developmental niche of preschoolers (ie, physical and social settings, childrearing practices, and parental ethnotheories) constitutes an interactive system in which ethnotheories serve as guides to parental practices. Fostering nutrition education and parent-child engagement, communication, and encouragement in food-related activities are recommended to promote children's healthy eating in daily routines.
abstract_id: PUBMED:33879111
Parental phone use during mealtimes with toddlers and the associations with feeding practices and shared family meals: a cross-sectional study. Background: Positive parental feeding practices and a higher frequency of family meals are related to healthier child dietary habits. Parents play an essential role when it comes to the development of their child's eating habits. However, parents are increasingly distracted by their mobile phone during mealtimes. The aim of this study was to describe the feeding practices and daily shared family meals among parents who use and do not use a mobile phone during mealtimes, and further to explore the associations between the use of a mobile phone during mealtimes and feeding practices and daily shared family meals, respectively.
Methods: Cross-sectional data from the Food4toddler study were used to explore the association between mobile phone use during meals and parental feeding practices, including family meals. In 2017/2018, parents of toddlers were recruited through social media to participate in the study. In total, 298 of the 404 who volunteered to participate filled in a baseline questionnaire, including questions from the Comprehensive Feeding Practices Questionnaire (CFPQ), questions on the frequency of family meals, and questions on the use of a mobile phone during meals.
Results: Herein, 4 out of 10 parents reported various levels of phone use (meal distraction) during mealtimes. Parental phone use was associated with lower use of positive parental feeding practices such as modelling (B = -1.05 (95% CI -1.69; -0.41)) and family food environment (B = -0.77 (95% CI -1.51; -0.03)), and with more use of negative parental feeding practices such as emotional regulation (B = 0.73 (95% CI 0.32; 1.14)) and pressure to eat (B = 1.22 (95% CI 0.41; 2.03)). Furthermore, parental phone use was associated with a lower frequency of daily family breakfast (OR = 0.50 (95% CI 0.31; 0.82)) and dinner (OR = 0.57 (95% CI 0.35; 0.93)).
Conclusions: Mobile phone use is common among parents during mealtimes, and the findings indicate that parental phone use is associated with less healthy feeding practices and fewer shared family meals. These findings highlight the importance of making parents aware of the potential impacts of meal distractions.
Trial Registration: ISRCTN92980420. Registered 13 September 2017. Retrospectively registered.
abstract_id: PUBMED:24529833
Family meal frequency among children and adolescents with eating disorders. Purpose: Previous studies on family meals and disordered eating have mainly drawn their samples from the general population. The goal of the current study is to determine family meal frequency among children and adolescents with anorexia nervosa (AN), bulimia nervosa (BN), and feeding or eating disorder not elsewhere classified (FED-NEC) and to examine whether family meal frequency is associated with eating disorder psychopathology.
Methods: Participants included 154 children and adolescents (mean age = 14.92 ± 2.62 years) who met criteria for AN (n = 60), BN (n = 32), or FED-NEC (n = 62). All participants completed the Eating Disorder Examination and the Family Meal Questionnaire prior to treatment at the University of Chicago Eating Disorders Program.
Results: AN and BN participants significantly differed in terms of family meal frequency. A majority of participants with AN (71.7%), compared with less than half (43.7%) of participants with BN, reported eating dinner with their family frequently (five or more times per week). Family meal frequency during dinner was significantly and negatively correlated with dietary restraint and eating concerns among participants with BN (r = -.381, r = -.366, p < .05) and FED-NEC (r = -.340, r = -.276, p < .05).
Conclusions: AN patients' higher family meal frequency may be explained by their parents' relatively greater vigilance over eating, whereas families of BN patients may be less aware of eating disorder behaviors and hence less insistent upon family meals. Additionally, children and adolescents with AN may be more inhibited and withdrawn and therefore are perhaps more likely to stay at home and eat together with their families.
abstract_id: PUBMED:25130186
Family meals and disordered eating in adolescents: are the benefits the same for everyone? Objective: To examine the association between family meals and disordered eating behaviors within a diverse sample of adolescents and further investigate whether family-level variables moderate this association.
Method: Data from adolescents (EAT 2010: Eating and Activity in Teens) and their parents (Project F-EAT: Families and Eating and Activity among Teens) were collected in 2009-2010. Surveys were completed by 2,382 middle and high school students (53.2% girls, mean age = 14.4 years) from Minneapolis/St. Paul, MN, public schools. Parents/guardians (n = 2,792) completed surveys by mail or phone.
Results: Greater frequency of family meals was associated with decreased odds of engaging in unhealthy weight control behaviors in boys, and dieting, unhealthy and extreme weight control behaviors in girls. Results indicate that the protective effects of family meals are, in general, robust to family-level variables; 64 interactions were examined and only seven were statistically significant. For example, among girls, the protective nature of family meals against dieting and unhealthy weight control behaviors was diminished if they also reported family weight-related teasing (both p < .01).
Discussion: The results confirmed previous research indicating that participation in family meals is protective against disordered eating for youth, particularly girls. However, results suggest that in some cases, the protection offered by family meals may be modified by family-level variables.
abstract_id: PUBMED:29363195
Is frequency of family meals associated with fruit and vegetable intake among preschoolers? A logistic regression analysis. Background: The present study aimed to examine the associations between frequency of family meals and low fruit and vegetable intake in preschool children. Promoting healthy nutrition early in life is recommended for combating childhood obesity. Frequency of family meals is associated with fruit and vegetable intake in school-age children and adolescents; the relationship in young children is less clear.
Methods: We completed a secondary analysis using data from the Early Childhood Longitudinal Study-Birth Cohort. Participants included children born in 2001 to mothers who were >15 years old (n = 8,950). Data were extracted from structured parent interviews conducted during the year prior to kindergarten. We used hierarchical logistic regression to describe the relationships between frequency of family meals and low fruit and vegetable intake.
Results: Frequency of family meals was associated with low fruit and vegetable intake. The odds of low fruit and vegetable intake were greater for preschoolers who shared less than three evening family meals per week (odds ratio = 1.5, β = 0.376, P < 0.001) than preschoolers who shared the evening meal with family every night.
Conclusions: Fruit and vegetable intake is related to frequency of family meals in preschool-age children. Educating parents about the potential benefits of frequent shared meals may lead to a higher fruit and vegetable consumption among preschoolers. Future studies should address other factors that likely contribute to eating patterns during the preschool years.
abstract_id: PUBMED:28572070
Class and eating: Family meals in Britain. This paper examines social differentiation in eating patterns in Britain. It focuses on family meals among individuals with under-age children. Eating with family members has been associated with improvement in wellbeing, nutritional status, and school performance of the children. Modern lifestyles may pose a challenge to commensal eating for all groups, but the scale of the impact varies between social classes, with some groups at higher risk of shortening or skipping family meal time. Eating patterns are differentiated by individual's social class; they have also been associated with educational attainment, work schedules, and household composition. The objective of this study is to disaggregate the effect of these variables. Using data from the 2014/2015 UK Time Use Survey I analyse the net effect of social class, education, income, work and family characteristics on the frequency and duration of family meals. Individuals in the highest occupational class dedicate more time overall to family meals. However, class effect becomes insignificant when other variables, such as education or income, are controlled for. This study finds that higher educated individuals have more frequent family meals, and more affluent individuals spend more time at the table with their household members. Work characteristics are associated with frequency of meals, but not with their duration. Finally, household composition matters for how people eat. Parents of younger children eat with their family members more frequently than parents of teenagers. Single parents, a notoriously time-poor category, spend the least amount of time eating with their families and have fewer commensal meals.
Answer: Yes, the frequency of family meals is associated with parental encouragement of healthy eating among ethnically diverse eighth graders. A study exploring the relationship between family meals and parental encouragement of healthy eating behaviors found that eating more family meals was significantly associated with having parents who encouraged healthy eating behaviors such as eating fruits and vegetables, drinking water, eating wholegrain bread, eating breakfast, and drinking low-fat milk. The number of family meals was positively associated with encouragement of each of the healthy eating behaviors, and there were no differences in the relationships by ethnicity of the students (PUBMED:23651952). |
Instruction: Does measurement dependence explain the effects of the Life Skills Training program on smoking outcomes?
Abstracts:
abstract_id: PUBMED:15530601
Does measurement dependence explain the effects of the Life Skills Training program on smoking outcomes? Background: The Life Skills Training (LST) program is the most prominent school-based smoking prevention program in terms of its consistency in being named on lists of best practices. This study assessed whether the results pertaining to cigarette smoking reported in evaluations of the LST program are measurement dependent.
Methods: Seventeen reports published between 1980 and 2003 that included at least one outcome measure pertaining to cigarette smoking were identified. Data pertaining to the cigarette smoking measures used in the analysis and whether the results showed a statistically significant difference between experimental and control groups at follow-up were extracted from the reports.
Results: Fourteen different outcome measures were used across 17 reports. Only three pairs of reports presented the same set of outcomes. Recent reports showed the most consistent set of findings in support of the LST program, but there was little consistency in the outcome measures used in these analyses.
Conclusions: The use of so many smoking outcomes in the LST program evaluations raises concern as to whether the positive program effects reported are measurement dependent.
abstract_id: PUBMED:26202801
Long-Term Effects of the Life Skills Program IPSY on Substance Use: Results of a 4.5-Year Longitudinal Study. This study investigated the long-term effectiveness of a Life Skills program with regard to the use of, and proneness to, legal and illicit drugs across a 4.5-year study interval. The universal school-based Life Skills program IPSY (Information + Psychosocial Competence = Protection) against adolescent substance use was implemented over 3 years (basic program in grade 5 and booster sessions in grades 6 and 7). Over the same time period, it was evaluated based on a longitudinal quasi-experimental design with intervention and control group, including two follow-up assessments after program completion [six measurement points; N (T1) = 1657 German students; M age (T1) = 10.5 years]. Hierarchical linear modelling (HLM) showed that participation in IPSY had a significant effect on the frequency of smoking, and on proneness to illicit drug use, across the entire study period. In addition, shorter-term effects were found for the frequency of alcohol use in that intervention effects were evident until the end of program implementation but diminished 2 years later. Thus, IPSY can be deemed an effective intervention against tobacco use and proneness to and use of illicit drugs during adolescence; however, further booster sessions may be necessary in later adolescence to enhance youths' resistance skills when alcohol use becomes highly normative among peers.
abstract_id: PUBMED:26286364
Evaluation of the 10-year history of a 2-day standardized laparoscopic surgical skills training program at Kyushu University. Purpose: Laparoscopic and open surgical skills differ distinctly from one another. Our institute provides laparoscopic surgical skills training for currently active surgeons throughout Japan. This study was performed to evaluate the effectiveness of our 2-day standardized laparoscopic surgical skills training program over its 10-year history.
Methods: We analyzed the data on trainee characteristics, outcomes of skills assessments at the beginning and end of the program, and self-assessment after 6 months using a questionnaire survey.
Results: From January 2004 to December 2013, 914 surgeons completed the 2-day training program. Peaks in postgraduate years of experience occurred at years 2, 8, and 17. Suturing and knot-tying times were significantly shorter at the end than at the beginning of the program (p < 0.001). However, the numbers of misplaced and loose sutures, maximum misplacement distance, and number of injuries to the rubber sheet were significantly higher at the end of the program (p < 0.001). A questionnaire at 6 months post-training revealed significant improvements in the overall skills and forceps manipulation (p < 0.0001) and a significantly shorter mean operation time for laparoscopic cholecystectomy (p < 0.001).
Conclusion: Our 2-day training program for active Japanese surgeons is thus considered to be effective; however, continued voluntary training is important and further outcomes assessments are needed.
abstract_id: PUBMED:23784075
Evaluation of the effectiveness of a smoking prevention program based on the 'Life Skills Training' approach. Our objective was to verify the effectiveness of a program based on the Life Skills Training approach that was more extensive than usual, was not delivered by teachers, and was implemented with a very high degree of fidelity to the intended content. Twenty-eight secondary schools in Granada (Spain) were randomly assigned to the intervention or control group. The students in the intervention group received 21 one-hour sessions in the first year and 12 one-hour sessions in the second year, whereas those in the control group received no health education or preventive sessions. Students completed questionnaires before and after the first year of sessions, before and after the second year, and at 1 year after the program. All five questionnaires were completed by 77% of the 1048 students initially enrolled in the study. The results suggest that the program had no preventive effects either immediately or at 1 year after its application. Application of the Life Skills Training approach does not appear to prevent the onset of smoking but may prove effective for avoiding escalation of the consumption levels of tobacco or other problematic drugs.
abstract_id: PUBMED:31046732
Quality of a life skills training program in Karnataka, India - a quasi-experimental study. Background: The Youth-focused Life Skills Education and Counseling Services (YLSECS) program trained teachers/National Service Scheme (NSS) officers to deliver Life Skills Education (LSE) and counseling services to college-going youth in the state of Karnataka in India. Available evaluations of life-skills training programs have neglected the recording and reporting of outcomes among those trained to implement such programs. The present paper highlights the quality of the YLSECS training program and the change in perception among the teachers/NSS officers trained, in terms of improvement in their cognitive/affective domains.
Methods: The YLSECS program focused on the ten essential life-skills domains identified by the World Health Organization. Participants of the YLSECS program were trained through a facilitatory approach based on the principles of Kolb's learning theory. A quasi-experimental study design was used to evaluate the outcome of training among participants. The quality of the training was assessed using a scoring system, and change in perception was assessed using a Likert scale. The statistical significance of the change in perception before and after training was assessed by a paired t-test for proportions.
Results: Overall, 792 participants rated the quality of training as either "good" or "excellent". Post-training, a significantly (p < 0.001) greater proportion of the participants reported improved awareness about life skills (ranging from 49.9 to 74.4% across domains before training vs. 91.6 to 95.1% post-training). There was a statistically significant (p < 0.001) increase in participants reporting being "very confident" in teaching the various life-skill domains (from 22.7-34.2% across domains before training to 65.2-74.7% post-training). There was a modest increase in participants reporting a perceived ability to conduct life-skills workshops "without assistance" post-training (from 16.8-22.9% across domains before training to 29.8-36.8% post-training). Interestingly, a considerable proportion of participants who, prior to training, reported being confident in providing life-skills training without any assistance later (i.e., post-training) reported that they needed some or more assistance for the same.
Conclusion: The YLSECS training program significantly improved participants' knowledge of and confidence in imparting life skills, and the findings highlight the need for continued handholding of participants for effective implementation of the LSE and counseling services program.
abstract_id: PUBMED:37916201
Effects of a Life Skills Enhancement Program on the Life Skills and Risk Behaviors of Social Media Addiction in Early Adolescence. Objective: This research aimed to develop and investigate the effects of a life skills enhancement program on the life skills and risk behaviors of social media addiction in early adolescence.
Methods: This research used a quasi-experimental design for a controlled study with a pre-test and post-test, collecting data through a general information questionnaire, the Social Media Addiction Screening Scale (S-MASS), and a life skills test. Forty-eight participants were recruited by purposive sampling from 5 schools in Chiang Mai, Thailand. The life skills enhancement program was developed under the theory of cognitive and behavioral therapy in combination with group therapy or occupational therapy. The program had a total of 10 sessions, one 60-minute session per week over 10 weeks.
Results: A statistically significant difference in post-test S-MASS scores was found between the control and experimental groups (p < 0.01). Moreover, within the experimental group, S-MASS scores decreased significantly from pre-test to post-test after participation in the program, whereas no significant decrease was found in the control group. Similarly, life skills scores differed significantly between pre-test and post-test only in the experimental group. When comparing the control and experimental groups, however, there were no statistically significant differences in pre-test or post-test life skills scores between the two groups.
Conclusion: From these results, it can be concluded that the life skills enhancement program increased life skills and decreased risky social media use among the adolescents.
abstract_id: PUBMED:35735401
Effectiveness and Factors Associated with Improved Life Skill Levels of Participants of a Large-Scale Youth-Focused Life Skills Training and Counselling Services Program (LSTCP): Evidence from India. (1) Background: To empower and facilitate mental health promotion for nearly 18 million youth, a pioneering state-wide Life Skills Training and Counselling Services Program (LSTCP) was implemented in Karnataka, India. This study assesses the changes in life skills scores, level of life skills and factors associated with increased life skills among participants of the LSTCP. (2) Method: This pre-post study design was conducted on 2669 participants who underwent a six-day structured LSTCP. Changes in mean life skills scores and level of life skill categories pre- and post-LSTCP were assessed. Multivariate logistic regression was performed to assess the factors associated with increases in life skills. (3) Results: The LSTCP resulted in significant changes in life skill scores and level of life skills, indicating the effectiveness of the training. All life skill domains, except empathy and self-awareness, increased post-training. There was a positive shift in the level of life skills. Age (AOR = 1.34, CI = 1.11-1.62), gender (AOR = 1.39, CI = 1.15-1.68), education (AOR = 1.44, CI = 1.05-1.97) and physical (AOR = 1.02, CI = 1.01-1.03) and psychological (AOR = 1.02, CI = 1.01-1.03) quality of life was associated with an increase in life skills among participants. (4) Conclusions: The LSTCP is effective in improving the life skills of participants. The LSTCP modules and processes can be used to further train youth and contribute to mental health promotion in the state.
abstract_id: PUBMED:35222180
Pilot Study on the Effects of the Teaching Interpersonal Skills Program for Teens (PEHIA). Background/objective: Social skills are essential in adolescence, both for their relational dimension and for their influence on other areas of adolescent life, so it is important to include social skills training in the formal education of students.
Method: This paper presents the results of an experimental mixed factorial design pilot study in which an Interpersonal Skills Training Program for Adolescents (PEHIA) was applied. The convenience sample consisted of 51 adolescents. An evaluation was carried out before and after the intervention, using the CEDIA (Adolescent Interpersonal Difficulties Assessment Questionnaire) and SAS-A (Social Anxiety Scale for Adolescents) questionnaires.
Results: The mixed factorial ANOVA showed significant differences in the overall measures and in most of the subscales of both questionnaires, indicating that PEHIA is effective, at least in the short term.
Conclusions: The results obtained in assertiveness, interpersonal relationships and public speaking suggest that the program is feasible and shows promising results in reducing anxiety. However, a larger scale study should be conducted.
abstract_id: PUBMED:17116518
Avoiding "truth": tobacco industry promotion of life skills training. Purpose: To understand why and how two tobacco companies have been promoting the Life Skills Training program (LST), a school-based drug prevention program recommended by the Centers for Disease Control and Prevention to reduce youth smoking.
Methods: We analyzed internal tobacco industry documents available online as of October 2005. Initial searches were conducted using the keywords "life skills training," "LST," and "positive youth development."
Results: Tobacco industry documents reveal that since 1999, Philip Morris (PM) and Brown and Williamson (B&W) have worked to promote LST and to disseminate the LST program into schools across the country. As part of their effort, the companies hired a public relations firm to promote LST and a separate firm to evaluate the program. The evaluation conducted for the two companies did not show that LST was effective at reducing smoking after the first or second year of implementing the program. Even so, the tobacco companies continued to award grants to schools for the program. PM and B&W's role in promoting LST is part of a public relations strategy to shift the "youth smoking paradigm" away from programs that highlight the tobacco industry's behavior and toward programs in which the industry can be a partner.
Conclusions: Individuals and organizations responsible for developing and implementing tobacco control and youth smoking prevention programs should be aware of PM and B&W's role in, and motivations for, encouraging the widespread adoption of LST in schools.
abstract_id: PUBMED:37095906
Life skills training: Can it increase self-esteem and reduce student anxiety? Students are always expected to have good self-esteem. Still, psychological problems such as excessive anxiety cause discomfort and distress, lead students to avoid social situations, interfere with daily life, and make them feel worthless. The purpose of the study was to determine the effect of self-esteem on anxiety through life skills training. The research subjects were 14 students divided into two groups, namely the experimental group and the control group. The measurement used a self-esteem scale and an anxiety scale. Data analysis used non-parametric tests, namely the Mann-Whitney, Wilcoxon, and Spearman rank correlation tests. The results of this study indicate a significant decrease in anxiety with increased self-esteem following life skills training for students.
Answer: Measurement dependence refers to the possibility that the effects observed in a study are influenced by the specific measures used to assess outcomes. In the context of the Life Skills Training (LST) program and its impact on smoking outcomes, the study by PUBMED:15530601 raises concerns about measurement dependence. The study found that there was a wide variety of smoking outcome measures used across 17 reports, with little consistency in the measures employed. This lack of consistency in outcome measures could potentially explain the positive program effects reported, as different measures may capture different aspects of smoking behavior or may be more or less sensitive to changes resulting from the intervention. The study suggests that the positive effects of the LST program on smoking outcomes could be measurement dependent, meaning that the reported effects might vary depending on which specific smoking measures were used in the evaluations.
However, it is important to note that measurement dependence is just one potential explanation for the observed effects. Other studies, such as the one reported in PUBMED:26202801, have found significant long-term effects of the LST program on smoking frequency and proneness to illicit drug use, suggesting that the program can be effective regardless of the measurement issue. This study used a longitudinal design and a consistent set of measures over time, which may provide a more reliable assessment of the program's effectiveness.
In conclusion, while measurement dependence could potentially explain some of the effects of the LST program on smoking outcomes, other evidence suggests that the program can have a genuine impact on reducing smoking behavior among adolescents. It is important for future evaluations of the LST program to use consistent and validated outcome measures to more accurately assess its effectiveness. |
Instruction: Do stroke patients realise that a consequence of hypertension is stroke?
Abstracts:
abstract_id: PUBMED:9594990
Do stroke patients realise that a consequence of hypertension is stroke? Study Objective: The specific objectives of the study were to survey residual disability and handicap following stroke. Information on four risk factors, namely hypertension, age, smoking, and alcohol abuse, was obtained. Enquiry was made into the subjects' insight into the causes of their problems.
Design: Descriptive survey.
Setting: Baragwanath Hospital and Soweto.
Participants: Stroke patients 12-14 weeks post-discharge.
Outcome Measures: Structured questionnaire.
Results: A total of 361 patients were initially screened. Only 54 fulfilled all inclusion criteria, 38 (70%) over 50 years of age and 16 (30%) under 50 years. Ninety-three of the 361 died within the first 3 months; 71% of all patients knew that they had suffered a stroke. Only 20% of the total group understood that hypertension had probably caused their stroke, although 76% of the older group and 56% of the younger group had been told at some stage that they were hypertensive. Of the older group, 32% knew the name of their medication, 21% could not name their medication and 23% claimed they were on no medication. Similarly, in the younger group, 19% could name their medication, 25% could not name their medication, and 12% were on no medication. In addition, 16% of the older group and 56% of the younger group admitted to smoking. The abuse of alcohol in both groups was low, but this figure was taken from subjective assessment and may not reflect the true extent of drinking as a risk factor.
Conclusion: Most patients in this study appear well aware of their hypertension and take medication. However, they seem unaware that their hypertension and stroke are causally linked and their hypertension knowledge is suboptimal. It is also apparent that smoking is increasing as a major risk factor for stroke in the black population of South Africa. Patients need more education regarding hypertension and its consequences.
abstract_id: PUBMED:35146170
Sufferings of its consequences; patients with Type 2 diabetes mellitus in North-East Ethiopia, A qualitative investigation. Background: The burden of diabetes in Ethiopia is exponentially increasing with more than 68% of people with it being undiagnosed and a death rate of 32%. It is a disease impacting patients with negative somatic, psychological, social, and economic consequences. Patients in Ethiopia have very low awareness about chronic complications, which is very worrying. The study aimed to explore the consequences of their disease experienced by type 2 diabetes patients in North-East Ethiopia.
Methods: The study employed a phenomenological approach informed by the consequences dimension of the Common-Sense Model. It was conducted from July 2019 to January 2020 using purposive sampling with face-to-face in-depth interviews, for about three weeks, until reaching theoretical saturation. The data were collected from twenty-four type 2 diabetes patients, who were selected to include various socio-demographic characteristics. The data were organized by QDA Miner Lite v2.0.8 and analyzed thematically using narrative strategies.
Results: Using the Common-Sense Model as a framework, the diabetes consequences experienced by the participants were categorized as complications and impacts. While the most common complications were cardiovascular disorders (hypertension, erectile dysfunction, heart and kidney problems, hyperlipidemia, edema, stroke, and fatigue) and ocular problems, the most common impacts were psychosocial (dread in life, suffering, family disruption, hopelessness, dependency, and craving) and economic (incapability and loss of productivity) problems.
Conclusion: The patients here were bothered by diabetes complications as well as its psycho-social, economic and somatic consequences, with the psycho-social impacts being the most common. As a result, the patients have been suffering in dread of "what can come next?" This dictates that holistic care, based on the Common-Sense Model, is needed, with special emphasis on psycho-social issues.
abstract_id: PUBMED:12505478
Stroke and sleep apnoea: cause or consequence? The relationships between obstructive sleep apnoea syndrome (OSAS) and stroke are still under discussion, but increasing evidence demonstrates that OSAS is an independent risk factor for stroke. However, in rare cases, OSAS could be a consequence of stroke, especially strokes located in the brainstem. Many recent studies have found a 70 to 95% frequency of OSAS (defined by an apnoea/hypopnoea index >10) in patients with acute stroke. Age, body mass index, diabetes, and severity of stroke have been identified as independent predictors of OSAS in these patients. Furthermore, the presence of OSAS in stroke patients could lead to a poor outcome. The potential mechanisms linking OSAS and stroke are probably multiple (arterial hypertension, cardiac arrhythmia, increased atherogenesis, coagulation disorders, and cerebral haemodynamic changes). Despite numerous uncertainties, OSAS should be systematically screened for whenever it is clinically suspected in patients with acute stroke. However, the optimal timing (early or deferred) for treatment with nasal continuous positive airway pressure remains to be determined.
abstract_id: PUBMED:30178470
Acute stroke patients' knowledge of stroke at discharge in China: a cross-sectional study. Objectives: A good mastery of stroke-related knowledge can be of great benefit in developing healthy behaviours. This study surveyed the knowledge about stroke and influencing factors among patients with acute ischaemic stroke (AIS) at discharge in a Chinese province.
Methods: A cross-sectional study was conducted from November 1, 2014 to January 31, 2015. A total of 1531 AIS patients in Hubei Province completed a questionnaire at discharge. Multivariate linear regression was used to identify the influencing factors of their knowledge of stroke.
Results: About 31.2% of the respondents did not know that stroke is caused by blockage or rupture of cerebral blood vessels, and 20.3% did not realise that they needed immediate medical attention after onset. Approximately 50% did not know that sudden blurred vision, dizziness, headache and unconsciousness are warning signs of stroke. Over 40% were not aware of the risk factors for the condition, such as hypertension, hyperlipidaemia, diabetes mellitus, smoking and obesity. Over 20% had no idea that they needed long-term medication and strict control of blood pressure, blood lipids and blood sugar. Their knowledge levels were correlated with region of residence (P < 0.0001), socioeconomic status (P < 0.05), physical condition (P < 0.01), previous stroke (P < 0.0001) and family members and friends having had a stroke (P < 0.01).
Conclusions: Most AIS patients in Hubei Province, China, had little knowledge of stroke at discharge. Further efforts should be devoted to strengthening the in-hospital education of stroke patients, especially those with a low income and those from rural areas.
abstract_id: PUBMED:16834831
Physicians, patients, and public knowledge and perception regarding hypertension and stroke: a review of survey studies. Background: Hypertension is the most common treatable risk factor for stroke. Efforts have been made to raise the awareness of both hypertension and stroke. There is a lack of clear understanding of the current state of knowledge, attitudes, and perceptions about hypertension and stroke among patients, the public, and physicians.
Objectives: To understand the level of knowledge, attitudes, and perceptions regarding hypertension and stroke among patients, the public, and physicians and to highlight the practices of physicians in managing hypertension given current hypertension guideline recommendations.
Methods: Current Contents, Embase, and Medline databases were searched to identify manuscripts published between January 1994 and December 2004 reporting surveys concerning the knowledge and perceptions of patients, the public, and physicians regarding hypertension and stroke. Studies were summarized and collated into a spreadsheet.
Results: Of a total of 85 manuscripts identified, only 43 contained information meeting the study objectives. Based on the reported results, it was observed that patients and the public alike are generally aware that hypertension is one of the risk factors for stroke, and that stroke can be a consequence of hypertension, but do not consider hypertension to be a serious health concern. Physicians appreciate the importance of managing hypertension to avoid future complications such as stroke. However, they do not conform to the recommendations made in various hypertension guidelines. They have higher thresholds than guideline recommendations for defining and categorizing hypertension, for starting antihypertensive therapy, and for target blood pressure goals. They also do not aggressively manage hypertension in older people, even though the elderly are at greater risk of developing stroke.
Conclusions: Patients and the public are aware of the link between hypertension and stroke but do not appreciate the consequences of uncontrolled hypertension. Physicians worldwide need to engage in patient communication regarding hypertension, stroke, and the dangers of uncontrolled hypertension, and need to implement guideline recommendations for hypertension diagnosis and management.
abstract_id: PUBMED:24524022
Determinants of the length of stay in stroke patients. Objectives: The study objective was to identify the factors that influence the length of stay (LOS) in hospital for stroke patients and to provide data for managing hospital costs by managing the LOS.
Methods: This study used data from the Discharge Injury Survey of the Korea Centers for Disease Control and Prevention, which included 17,364 cases from 2005 to 2008.
Results: The LOS for stroke overall, cerebral infarction, intracerebral hemorrhage, and subarachnoid hemorrhage was 18.6, 15.0, 28.9, and 25.3 days, respectively. Patients who underwent surgery had a longer LOS. When patients were divided based on whether they had surgery, there was a 2.4-fold difference in LOS for patients with subarachnoid hemorrhage, a 2.0-fold difference for patients with cerebral infarction, and a 1.4-fold difference for patients with intracerebral hemorrhage. An emergency route of admission and additional diagnoses increased LOS, whereas hypertension and diabetes mellitus reduced LOS.
Conclusion: In the present rapidly changing hospital environment, hospitals need an efficient policy for LOS to maintain their revenues and quality assessments. If LOS is used as an indicator of treatment expenses, the factors that influence the LOS of stroke patients need to be addressed for each disease group, divided according to whether surgery is performed, for the proper management of LOS.
abstract_id: PUBMED:2843018
Mild hypertension in patients with suspected dilated cardiomyopathy: cause or consequence? This study was undertaken to clarify the relationship between mild transient hypertension and dilated cardiomyopathy. Fifty-five patients were studied: group 1, controls (12 patients); group 2, hypertensives without clinical evidence of heart failure (14 patients); group 3, patients with hypertensive heart failure and diastolic blood pressure above 100 mmHg (10 patients); group 4, patients with possible dilated cardiomyopathy and mild hypertension, i.e. diastolic blood pressure of 90-100 mmHg (8 patients); group 5, patients with dilated cardiomyopathy and normal blood pressure (11 patients). The haemodynamic status and cardiac contractility indices were measured in each patient on admission, using M-mode echocardiography. Serum sodium and potassium as well as urinary sodium, potassium and vanillyl mandelic acid excretion were also measured. The stroke volume, cardiac output and cardiac index fell with heart failure, but much more remarkably in group 4. The peripheral vascular resistance was higher in groups 2, 3 and 4 than in groups 1 and 5; so also were the aortic diameter, left posterior wall thickness and left ventricular mass. The plasma volume, aldosterone and cortisol levels were higher, and the urinary sodium and potassium excretion lower, in patients with heart failure (groups 3, 4 and 5). It is concluded that the raised blood pressure found in some patients suspected to have dilated cardiomyopathy is not due to the haemodynamic and biochemical changes that occur in heart failure. Such patients are 'chronic' hypertensives with hypertensive heart failure. Their presenting blood pressure is low because of their markedly reduced cardiac output.
abstract_id: PUBMED:32161893
Latest Concepts in the Endodontic Management of Patients with Cardiovascular Disorders. There are several cardiovascular conditions that need special consideration in the provision of treatments within the scope of endodontics. If these conditions are not carefully identified, diagnosed, and considered in the overall treatment plan for the patient, they may result in fatal outcomes. These include hypertension, which can cause fatal cardiac disorders such as angina pectoris, ischemic heart disease, and myocardial infarction, as well as cerebrovascular diseases; congestive heart failure; infective endocarditis, valvular diseases, and implanted pacemakers; and the use of antiplatelet and anticoagulant drugs that are commonly prescribed for patients who have experienced a stroke. The aim of this article is to review the newest recommendations for patients with these disorders who require endodontic treatment.
abstract_id: PUBMED:28852243
Genealogy Study of Three Generations of Patients with Bipolar Mood Disorder Type I. Introduction: The purpose of this research is the genealogical examination of three generations of patients with bipolar mood disorder Type I.
Methods: Patients were selected using the Poisson sampling method from 100 patients with bipolar mood disorder Type I referred to a psychiatric center of Amir Kabir Hospital of Arak, Iran. For each patient, physical ailments and the psychological status of living and deceased family members were examined, the family pedigree was drawn using a pedigree chart, and the fit of the disease to different inheritance patterns (autosomal dominant and recessive, sex-linked dominant and recessive, and Y-linked) was checked. The instruments used in this study were the pedigree chart and the Young Mania Rating Scale, and the collected data were analyzed with SPSS using Pearson's correlation test.
Results: Among the studied inheritance patterns, the most common was autosomal recessive. There was a significant relationship between age, number of generations, and inheritance patterns and physical ailments in the families of patients with bipolar mood disorder (P < 0.05), but there was no significant association with mental illness (P > 0.05). Furthermore, there was a significant relationship between generation and skin, gastrointestinal, ovarian, and lung diseases, coronary heart disease, diabetes mellitus, hypertension, cerebrovascular accident (CVA), hyperlipidemia, cardiomyopathy, hypothyroidism, and kidney disease in patients with bipolar affective disorder Type I (P < 0.05).
Conclusion: The results showed that autosomal recessive was the most common pattern of inheritance and that there is a significant relationship between generation and some physical disorders in patients with bipolar mood disorder Type I.
abstract_id: PUBMED:37916134
Risk Factors Related to the Death of Admitted COVID-19 Patients: A Buffalo Study. Background: Coronavirus disease 2019 (COVID-19) may result in a severe acute respiratory syndrome and has led to a worldwide pandemic. Despite the increasing understanding of COVID-19, the mortality rate of hospitalized COVID-19 patients remains high.
Objective: To investigate the risk factors related to the mortality of admitted COVID-19 patients during the peak of the epidemic from August 2021 to October 2021 in Vietnam.
Methods: This is a prospective cohort study performed at the Hospital for Rehabilitation-Professional Diseases. Baseline demographic data, medical history, clinical examination findings, and laboratory results were recorded for patients admitted to the hospital with confirmed COVID-19. A radiologist and a pulmonologist read the chest radiographs on admission and calculated the Brixia scores to classify the severity of lung abnormalities. Patients were followed up until recovery or death. Clinical and subclinical characteristics were compared between the recovery and death groups to identify risk factors related to the death of COVID-19 patients.
Results: Among the 104 admitted COVID-19 patients, men accounted for 42.3%, and the average age was 61.7 ± 13.7 years. The most common symptoms were fever (76.9%), breathlessness (74%), and fatigue (53.8%). The majority (84.6%) of the study population had at least one co-morbidity, including hypertension (53.8%), diabetes (25.9%), gastritis (19.2%), ischemic heart disease (15.4%), stroke (9.6%) and osteoarthritis (9.6%). The rate of mild and moderate COVID-19 was 13.4%, of severe 32.7%, and of critical 40.4%. Eighty-eight inpatients (84.6%) needed respiratory support. The median hospital stay was 13 days (IQR 10-17.75 days). The rate of intubated patients on mechanical ventilation was 31.7%. The overall mortality rate was 29.8%. Risk factors related to death included Brixia score > 9, Urea > 7 mmol/L, Ferritin > 578 ng/ml, Failure to get vaccinated, Age > 60 years, and Low Oxygen SpO2 < 87% (BUFFALO).
Conclusion: The main result of the study is the identification of independent risk factors related to the death of admitted COVID-19 patients, including Brixia score > 9, Urea > 7 mmol/L, Ferritin > 578 ng/ml, Failure to get vaccinated, Age > 60 years, and Low Oxygen SpO2 < 87% (BUFFALO), which suggests that these COVID-19 patients should be closely followed up.
Answer: The awareness of stroke patients regarding hypertension as a risk factor for stroke appears to be limited. In a study conducted at Baragwanath Hospital and Soweto, only 20% of stroke patients understood that hypertension had probably caused their stroke, despite 76% of the older group and 56% of the younger group having been informed at some point that they were hypertensive (PUBMED:9594990). Similarly, a cross-sectional study in China found that over 40% of patients with acute ischemic stroke were not aware of hypertension as a risk factor for stroke (PUBMED:30178470). Another review of survey studies indicated that while patients and the public are generally aware that hypertension is one of the risk factors for stroke, they do not consider hypertension to be a serious health concern (PUBMED:16834831). These findings suggest that stroke patients may not fully realize or appreciate the causal link between hypertension and stroke, highlighting the need for more education regarding hypertension and its consequences. |
Instruction: Can we understand partnerships in general practice?
Abstracts:
abstract_id: PUBMED:8732326
Can we understand partnerships in general practice? A pilot study. Background: Partnerships have been investigated in different professions but, other than identifying problems, little work has been carried out on partnerships in general practice.
Objective: The aim of the present study was to develop methods for studying partnerships in general practice.
Method: A tripartite methodological approach was used, with questionnaires adapted from other instruments in use in other professions, followed by an individual interview with each partner, and non-participant observation at a partnership meeting. Results for one case-study partnership are given.
Results: There were no major differences between the partners on all dimensions measured; the minor differences indicated by the results of the questionnaires were corroborated by the partner interviews and observations.
Conclusions: We conclude that the use of such techniques could provide support to partnerships going through significant periods of change.
abstract_id: PUBMED:34016308
DNP scholarly projects: Unintended consequences for academic-practice partnerships. With the rapid proliferation of Doctor of Nursing Practice (DNP) programs, academic-practice partnerships are critical in the implementation of rigorous and valuable scholarly projects. However, the failure to develop meaningful partnerships can have unintended consequences, particularly when students and practice sites do not have the preparation and support to navigate these partnerships. Four case studies are presented that explore the issues of preserving autonomy, practicing stewardship, imposing unfair burden and maintaining project fidelity. Best practices are presented to promote equitable collaboration and a mutually beneficial experience. Universities must have the resources required to generate expert clinicians able to translate research into practice and support effective academic-practice partnerships.
abstract_id: PUBMED:10162760
General practice partnerships: an exploratory review. Presents the results of a literature review on general practice partnerships. The objective was to find out what has been written and by whom. The results of the review indicate that very little empirical work has been carried out and most of the publications are by doctors addressing the practical problems of working in partnerships. Given this paucity of material, goes on to discuss relevant literature from social science disciplines and presents five perspectives on partnerships. Each perspective yields questions worthy of further investigation particularly at a time when primary care is experiencing rapid change and development.
abstract_id: PUBMED:25601245
Using partnerships to advance nursing practice and education: the precious prints project. With the release of the Institute of Medicine's (2011) Future of Nursing report, nursing leaders recognized that strong academic-practice partnerships are critical to advancing the report's recommendations. Using established principles for academic-practice partnerships, a manufacturer, children's hospital, student nurses organization, and college of nursing created the Precious Prints Project (P(3)) to give families who have experienced the death of a child a sterling silver pendant of the child's fingerprint. This article outlines the background, implementation, and benefits of the P(3) partnership with the aim of encouraging readers to consider how similar programs might be implemented in their organizations. To date, the program has given pendants to more than 90 families. In addition, nurses and nursing students have been introduced to the provision of a tangible keepsake for families experiencing the loss of a child and participation in philanthropy and an academic practice partnership.
abstract_id: PUBMED:1392922
General practice partnerships: till death us do part? Objectives: To investigate applications for general practice partnership vacancies by established general practitioner principals, the reasons for changing partnerships, and the disincentives to these moves.
Design: Confidential postal questionnaire.
Subjects: Applicants to 367 general practices in the United Kingdom advertising for a new full time partner.
Main Outcome Measures: The proportion of job applications containing at least one application from established principals, proportion of principals appointed as new partners, incentives and disincentives to changing partnership.
Results: Of 325 replies (89% response rate) received, 292 were suitable for further analysis. 210/241 (87%) of the practices received at least one application from an established principal. 12% of all applications were made by principals. 41/296 (14%) of the newly appointed partners had previously been an established principal. The main reasons for leaving the previous partnership were a desire to move locality or not getting on with previous partners. The disincentives to changing partnerships were largely financial, including the cost of the move and loss of income.
Conclusions: It is possible for established principals in general practice to overcome the disincentives and to change partnerships. There did not seem to be any overall prejudice against appointing principals, in contrast to previously published views.
abstract_id: PUBMED:26053328
Unpacking University-Community Partnerships to Advance Scholarship of Practice. Today, more than ever, occupational therapists are engaged in close partnerships with community organizations and community settings such as service agencies, refugee and immigrant enclaves, and faith-based organizations, to name a few, for the purpose of engaging in scholarship of practice. However, we know little about the views of community partners regarding the development and sustainability of university-community partnerships. The purpose of this article is twofold: First, we will describe a pilot study in which we gathered qualitative data from community partners engaged in scholarship of practice with faculty and students, regarding their views about benefits of partnerships, challenges, and characteristics of sustainable partnerships. Second, based on this pilot study and extensive experience of the authors, we propose a revised version of a partnerships model available in the literature. We illustrate the model through examples of the authors' collective experiences developing and sustaining successful university-community partnerships.
abstract_id: PUBMED:23236092
Partnerships for community mental health in the Asia-Pacific: principles and best-practice models across different sectors. Objectives: Stage Two of the Asia-Pacific Community Mental Health Development Project was established to document successful partnership models in community mental health care in the region. This paper summarizes the best-practice examples and principles of partnerships in community mental health across 17 Asia-Pacific countries.
Conclusions: A series of consensus workshops between countries identified best-practice exemplars that promote or advance community mental health care in collaboration with a range of community stakeholders. These prototypes highlighted a broad range of partnerships across government, non-government and community agencies, as well as service users and family carers. From practice-based evidence, a set of 10 key principles was developed that can be applied in building partnerships for community mental health care consistent with the local cultures, communities and systems in the region. Such practical guidance can be useful to minimize fragmentation of community resources and promote effective partnerships to extend community mental health services in the region.
abstract_id: PUBMED:31136672
Academic-Practice Partnerships: A Win-Win. Partnerships between academia and practice can lead to improved patient care and health system innovations. Nurse educators in both academia and practice are positioned to facilitate opportunities for students and practicing nurses to be involved in evidence-based practice (EBP) care initiatives involving academic-health care partners in clinical and/or community-based systems. Best practices in collaborative partnerships have demonstrated the significance of their far-reaching impact on patients, students, direct care nurses, and health systems. Translation of EBP knowledge to practice transforms patient outcomes and empowers nurses to address the complexity of health care systems. This article describes the process and outcomes of an academic-practice partnership facilitated by nurse educators in both academic and practice settings. The impact of the adoption of EBP projects on clinical practice, students, and practicing nurses is described. National and international implications for academic- practice partnerships are discussed. [J Contin Educ Nurs. 2019;50(6):282-288.].
abstract_id: PUBMED:32984440
Decomposition of Practice as an Activity for Research-Practice Partnerships. This analysis examines the process of one research-practice partnership (RPP) engaged in the activity of decomposing elementary principal practice in the context of an instructional improvement initiative in mathematics. Decomposing, or breaking apart, complex practice has been used primarily by researchers to inform the design of pre-service teacher education. We argue that decomposition is a rich activity for researchers and practitioners to collaboratively engage in to support improvement efforts where practitioners are expected to transform their day-to-day practice. We examine what can be learned from the process by which one RPP engaged in decomposing practice that might be useful for other RPPs. Our retrospective, qualitative analysis supports understanding of how RPPs might engage in decomposition and the role decomposition might play in supporting RPPs to foster educational transformation in local contexts.
abstract_id: PUBMED:12927480
Mutual influences of general practitioners in partnerships. The aim of this study was to find out whether or not general practitioners (GPs) within the same partnership show more similarities in attitudes and behaviour than GPs in different partnerships, and what the causes of these similarities might be. Knowledge of the causes of patterns of similarities within medical teams contributes to understanding medical practice variation, which is crucial in developing effective health care policies. Data were used from the Dutch National Survey of General Practice ('87/'88), consisting of a stratified sample of 161 Dutch GPs, who served 335,000 patients in total. To find out whether GPs in the same partnership are indeed more similar than GPs randomly chosen from different partnerships, we constructed two kinds of pairs: all possible pairs of GPs working in the same partnership (actual pairs), and randomly constructed pairs of GPs who are not working in the same partnership (random pairs). For each pair difference scores were computed for a variety of attitudes and behaviour. Difference scores for actual and random pairs were analysed using multi-level analysis. Most differences in attitudes and behaviour were smaller for actual pairs than for random pairs. Furthermore, in the majority of the cases differences were no longer statistically significant after explanatory variables indicating selection, gradual adaptation and rapid adaptation through shared circumstances were taken into account. It was found that Dutch GPs working in the same partnership showed more resemblance in attitudes and behaviour than GPs not working in the same partnership. Most indications point towards circumstances, and to a lesser extent towards adaptation, as an explanation of similarities within partnerships. The implication of this study is that medical practice variations are not merely individual differences in preferred practice style, but are patterned by social processes in partnerships and local circumstances.
Answer: Understanding partnerships in general practice is a multifaceted endeavor that has been explored through various research methods and perspectives. A pilot study aimed to develop methods for studying partnerships in general practice by employing a tripartite methodological approach, which included questionnaires, individual interviews, and non-participant observations at partnership meetings. The study found no major differences between partners on all dimensions measured, suggesting that such techniques could support partnerships during significant periods of change (PUBMED:8732326).
The literature on general practice partnerships is limited, with most publications addressing practical problems from the perspective of doctors. An exploratory review highlighted the need for further empirical work and suggested that insights from social science disciplines could be valuable, especially given the rapid changes in primary care (PUBMED:10162760).
Research has also shown that general practitioners within the same partnership tend to have more similarities in attitudes and behavior than those from different partnerships. This resemblance is likely due to shared circumstances and, to a lesser extent, adaptation within the partnership. These findings suggest that variations in medical practice are influenced by social processes and local conditions within partnerships (PUBMED:12927480).
In the broader context of academic-practice partnerships, studies have emphasized the importance of meaningful collaborations to avoid unintended consequences, such as imposing unfair burdens or compromising project fidelity (PUBMED:34016308). Successful partnerships can lead to improved patient care and health system innovations, as evidenced by the Precious Prints Project, which demonstrated the benefits of academic-practice partnerships in providing tangible keepsakes for grieving families (PUBMED:25601245).
Overall, understanding partnerships in general practice requires a comprehensive approach that considers the dynamics within partnerships, the broader context of academic-practice collaborations, and the influence of local circumstances. The use of diverse research methods and the incorporation of principles from social sciences can provide valuable insights into the functioning and development of partnerships in general practice.
Instruction: Is lumbar disk disease an occupational disease?
Abstracts:
abstract_id: PUBMED:12226775
Is lumbar disk disease an occupational disease? Scientific background, radiological findings, and medicolegal interpretations Aim: To clarify whether the interpretation of lumbar disk disease as an occupational disease is justifiable. Which disc changes follow whole-body vibration, and can they be distinguished from those which occur constitutionally with aging?
Method: Orthopedic meta-analysis of epidemiological and occupational studies concerning the influence of whole-body vibration.
Results: Reliable studies are rare, and severe methodological problems limit the interpretation of these difficult relationships. The role of age at the onset of occupational exposure, as well as the stress and behaviour of exposed persons away from the workplace before and during work involving whole-body vibration, is not known. There is no study which could be called exact according to orthopedic criteria. It is therefore not evident that whole-body vibration causes lumbar disc disease.
Conclusions: After whole-body vibration, as after long-term heavy lifting, an earlier onset of disk degeneration can be observed in X-ray studies. This leads to prevalence differences, which diminish with increasing age; the leftward shift of the prevalence curve lasts for five to ten years. Whole-body vibration leads to a topographic modification of disk degeneration of the lumbar spine. After long-term exposure, an increased amount of spondylotic changes can be observed at the thoracolumbar junction and the middle half of the lumbar spine (up to the upper plate of the fourth vertebral body). This can be explained in biomechanical terms: whole-body vibration caused by tractor driving and similar long-term exposures leads to traction of the disks of the lower thoracic spine and the upper and middle parts of the lumbar spine.
abstract_id: PUBMED:11276958
Intervertebral disk displacement and trauma It is difficult to find medical evidence of a correlation between a lumbar disk disease and trauma. One should consider whether the individual degeneration of lumbar disks or the trauma led to the typical complaints. Disk diseases are very common in the population; therefore, the relevance of the individual's pre-existing condition before the trauma has to be considered. Spinal trauma, with its sudden, incidental onset, needs to be differentiated from purposeful and conscious movements. An intervertebral disk disease can be classified as accident-related only in cases involving adequate trauma, with no previous complaints, and a sudden onset of pain.
abstract_id: PUBMED:12376870
Disk-related diseases of the lumbar spine as an example for the critical interaction between clinical diagnosis and occupational disease First, an overview of the significance of musculoskeletal diseases in terms of national economy and social policy is given, and the historical development of the occupational disease "disk-related spinal disorders" is outlined. The most important court decisions and the current state of jurisprudence on this matter are summarized, emphasizing the questions which still have to be answered in the course of medical evaluation of a spinal occupational disease. Based on a joint research project on the spinal effects of whole-body vibration, an analysis of lumbar X-rays is presented which aimed at detecting specific patterns of response corresponding to the respective extent of strain. In spite of a statistically significant relationship between the clinical diagnosis of a lumbar syndrome and the severity of the degenerative radiological changes on the one hand and vibration exposure on the other, the evaluation of the lumbar X-rays did not show any clear radiological pattern related to the exposure. Furthermore, starting points for prevention are discussed. With regard to whole-body vibration, the technical possibilities of reducing the vibration load are still not completely exhausted. However, during preventive occupational health measures, usually carried out as medical screening examinations, the occupational health physician will again face some of the same problems already encountered in medical evaluation. Thus, a suggestion is made to modify the traditional concepts of the Professional Industrial Associations on occupational diseases in order to take into account the peculiarities of disk-related spinal disorders.
abstract_id: PUBMED:32157746
Interactions between the MMP-3 gene rs591058 polymorphism and occupational risk factors contribute to the increased risk for lumbar disk herniation: A case-control study. Objective: Lumbar disk herniation (LDH) is a complex condition based on lumbar disk degeneration (LDD). Previous studies have shown that genetic factors are highly associated with the severity of and risk for LDH. This case-control study aimed to evaluate the association between the matrix metalloproteinase (MMP)-3 gene rs591058 C/T polymorphism and LDH risk in a southern Chinese population.
Methods: A total of 231 LDH patients and 312 healthy controls were recruited in this study. Genotyping was analyzed using a standard polymerase chain reaction and restriction fragment length polymorphism (PCR-RFLP).
Results: It was observed that carriers of the TT genotype or T allele of the MMP-3 gene rs591058 C/T polymorphism were associated with an increased risk for LDH. Subgroup analyses showed that the following characteristics increased the risk for LDH: female sex, cigarette smoking, and alcohol consumption. Furthermore, individuals with high exposure to whole-body vibration, bending/twisting, and lifting were associated with an increased risk for LDH.
Conclusion: Taken together, these data indicated that the MMP-3 gene rs591058 C/T polymorphism was associated with an increased risk for LDH. The MMP-3 gene rs591058 C/T polymorphism might serve as a clinical indicator and marker for LDH risk in the Chinese population.
abstract_id: PUBMED:24970094
The dose-response relationship between cumulative lifting load and lumbar disk degeneration based on magnetic resonance imaging findings. Background: Lumbar disk degeneration (LDD) has been related to heavy physical loading. However, the quantification of the exposure has been controversial, and the dose-response relationship with LDD has not been established.
Objective: The purpose of this study was to investigate the dose-response relationship between lifetime cumulative lifting load and LDD.
Design: This was a cross-sectional study.
Methods: Every participant received assessments with a questionnaire, magnetic resonance imaging (MRI) of the lumbar spine, and estimation of lumbar disk compression load. The MRI assessments included assessment of disk dehydration, annulus tear, disk height narrowing, bulging, protrusion, extrusion, sequestration, degenerative and spondylolytic spondylolisthesis, foramina narrowing, and nerve root compression on each lumbar disk level. The compression load was predicted using a biomechanical software system.
Results: A total of 553 participants were recruited in this study and categorized into tertiles by cumulative lifting load (i.e., <4.0 × 10⁵, 4.0 × 10⁵ to 8.9 × 10⁶, and ≥8.9 × 10⁶ Nh). The risk of LDD increased with cumulative lifting load. The best dose-response relationships were found at the L5-S1 disk level, in which high cumulative lifting load was associated with elevated odds ratios of 2.5 (95% confidence interval [95% CI]=1.5, 4.1) for dehydration and 4.1 (95% CI=1.9, 10.1) for disk height narrowing compared with low lifting load. Participants exposed to intermediate lifting load had an increased odds ratio of 2.1 (95% CI=1.3, 3.3) for bulging compared with low lifting load. The tests for trend were significant.
Limitations: There is no "gold standard" assessment tool for measuring the lumbar compression load.
Conclusions: The results suggest a dose-response relationship between cumulative lifting load and LDD.
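The odds ratios and confidence intervals quoted above come from the study's own (adjusted) regression models; purely for intuition, the sketch below shows the standard unadjusted calculation of an odds ratio with a 95% Wald confidence interval from a 2×2 table. The counts and the function name are invented for illustration.

```python
# Minimal sketch: unadjusted odds ratio with a 95% Wald confidence interval
# from a 2x2 table. The abstracts' estimates come from regression models;
# this toy shows only the textbook unadjusted analogue. Counts are invented.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = exposed cases/controls; c, d = unexposed cases/controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Invented example: 60/40 exposed cases/controls vs 30/70 unexposed
print(odds_ratio_ci(60, 40, 30, 70))  # OR = 3.5, 95% CI of about (1.9, 6.3)
```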
abstract_id: PUBMED:11600730
The role of cumulative physical work load in lumbar spine disease: risk factors for lumbar osteochondrosis and spondylosis associated with chronic complaints. Objectives: To investigate, with a case-control study, the relation between symptomatic osteochondrosis or spondylosis of the lumbar spine and cumulative occupational exposure to lifting or carrying and to working postures with extreme forward bending.
Methods: From two practices and four clinics, 229 male patients with radiographically confirmed osteochondrosis or spondylosis of the lumbar spine associated with chronic complaints were recruited. Of these, 135 had additionally had acute lumbar disc herniation. A total of 197 control subjects was recruited: 107 subjects with anamnestic exclusion of lumbar spine disease were drawn as a random population control group, and 90 patients admitted to hospital for urolithiasis who had no radiographic osteochondrosis or spondylosis of the lumbar spine were recruited as a hospital-based control group. Data were gathered in a structured personal interview and analysed using logistic regression to control for age, region, nationality, and other diseases affecting the lumbar spine. To calculate cumulative forces on the lumbar spine over the entire working life, the Mainz-Dortmund dose model (MDD), which is based on an overproportional weighting of the lumbar disc compression force relative to the respective duration of the lifting process, was applied with modifications: any objects weighing ≥5 kg were included in the calculation and no minimum daily exposure limits were established. Calculation of forces on the lumbar spine was based on self-reported estimates of occupational lifting, trunk flexion, and duration.
Results: For a lumbar spine dose >9 × 10⁶ Nh (newton-hours), the risk of having radiographically confirmed osteochondrosis or spondylosis of the lumbar spine, as measured by the odds ratio (OR), was 8.5 (95% confidence interval (95% CI) 4.1 to 17.5) compared with subjects with a load of 0 Nh. To avoid differential bias, forces on the lumbar spine were also calculated on the basis of an internal job exposure matrix based on the control subjects' exposure assessments for their respective job groups. Although ORs were lower with this approach, they remained significant.
Conclusions: The calculation of the sum of forces to the lumbar spine is a useful tool for risk assessment for symptomatic osteochondrosis or spondylosis of the lumbar spine. The results suggest that cumulative occupational exposure to lifting or carrying and extreme forward bending increases the risk for developing symptomatic osteochondrosis or spondylosis of the lumbar spine.
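To illustrate what a cumulative lumbar dose in newton-hours (Nh) means in practice, here is a minimal sketch that sums compression force times duration over a working history. The quadratic option loosely mirrors the "overproportional weighting" the abstract attributes to the MDD, but the exact published formula, the reference force, and the function name are assumptions made only for illustration.

```python
# Illustrative sketch of a cumulative lumbar load dose in newton-hours (Nh).
# The linear form is simply sum(force * hours). The "overproportional" option
# squares the force relative to a reference force, loosely mirroring the MDD
# weighting described in PUBMED:11600730; the published model differs in
# detail, so treat this as an assumption-labelled toy, not the real MDD.

def cumulative_dose_nh(exposures, overproportional=False, f_ref=3400.0):
    """exposures: iterable of (compression_force_newtons, duration_hours)."""
    dose = 0.0
    for force, hours in exposures:
        if overproportional:
            dose += (force ** 2 / f_ref) * hours  # quadratic weighting, still Nh
        else:
            dose += force * hours                 # plain force x time
    return dose

# Invented example: two phases of a working life
history = [(3000.0, 1500.0), (4500.0, 800.0)]   # (N, h)
print(cumulative_dose_nh(history))               # 8.1e6 Nh, near the >9e6 Nh cut-off
print(cumulative_dose_nh(history, overproportional=True))
```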
abstract_id: PUBMED:17086770
Cervical and lumbar MRI findings in aviators as a function of aircraft type. Background & Aims: Neck pain and lower back pain (LBP) are frequently reported by military helicopter pilots (HP) and fighter pilots. A small number of studies have used imaging methods to evaluate spinal cervical degenerative findings in pilots exposed to high +Gz, with results indicating an increase in cervical disk protrusions in this population. We evaluated the cervical and lumbar spine with magnetic resonance imaging (MRI) to assess the prevalence of degenerative changes in three subpopulations of pilots.
Methods: Fighter pilots (FP), transport pilots (TP), and HP (10 pilots in each group) underwent cervical and lumbar MRI. Degenerative pathologic changes (disk herniation, cord compression, foraminal stenosis, and the presence of osteophytes) were evaluated in each group by two independent experienced radiologists.
Results: Cervical spine degenerative changes seemed to be associated with older age rather than aircraft type, affecting the older group of TP (8/10 pilots) more than the younger FP group who were exposed to high +Gz (3/10 pilots). In contrast, for lumbar spine degenerative changes, we found an uncommon pattern of lumbar spine degeneration in HP, affecting the upper part of the lumbar spine (10/13 disks found at L1-L4).
Conclusions: The results of this study suggest that HP may have detectable degenerative lumbar findings. More research is needed to validate these findings as well as to explore the possible pathophysiological link between occupational exposures and the specific involvement of the upper lumbar spine.
abstract_id: PUBMED:22281487
Risk factors for hospitalization due to lumbar disc disease. Study Design: Prospective cohort study.
Objective: To study biomechanical factors in relation to symptomatic lumbar disc disease.
Summary Of Background Data: The importance of biomechanical factors in lumbar disc disease has been questioned in the past decade and knowledge from large prospective studies is lacking.
Methods: The study basis is a cohort of 263,529 Swedish construction workers who participated in a national occupational health surveillance program from 1971 until 1992. The workers' job title, smoking habits, body weight, height, and age were registered at the examinations. The occurrence of hospitalization due to lumbar disc disease from January 1, 1987, until December 31, 2003, was collected from a linkage with the Swedish Hospital Discharge Register.
Results: There was an increased risk for hospitalization due to lumbar disc disease for several occupational groups compared with white-collar workers and foremen. Occupational groups with high biomechanical loads had the highest risks; for example, the relative risk for concrete workers was 1.55 (95% confidence interval [CI], 1.29-1.87). A taller stature was consistently associated with an increased risk. The relative risk for a man of 190- to 199-cm height was 1.55 (95% CI, 1.30-1.86) compared with a man of 170- to 179-cm height. Body weight and smoking were also risk factors, but weaker than height. Workers in the age span of 30 to 39 years had the highest relative risk (RR = 1.87; 95% CI, 1.58-2.23) compared with those aged 20 to 29 years, whereas men aged 60 to 65 years had a lower risk (RR = 0.86; 95% CI, 0.68-1.09).
Conclusion: This study indicates that factors increasing the load on the lumbar spine are associated with hospitalization for lumbar disc disease. Occupational biomechanical factors seem to be important, and a taller stature was consistently associated with an increased risk.
abstract_id: PUBMED:14133699
CLINICO-STATISTICAL CONSIDERATIONS ON OVER 1000 CASES OPERATED ON FOR HERNIA OF THE LUMBAR DISK. I N/A
abstract_id: PUBMED:19422710
Cumulative occupational lumbar load and lumbar disc disease--results of a German multi-center case-control study (EPILIFT). Background: The evidence to date for a dose-response relationship between physical workload and the development of lumbar disc diseases is limited. We therefore investigated the possible etiologic relevance of cumulative occupational lumbar load to lumbar disc diseases in a multi-center case-control study.
Methods: In four study regions in Germany (Frankfurt/Main, Freiburg, Halle/Saale, Regensburg), patients seeking medical care for pain associated with clinically and radiologically verified lumbar disc herniation (286 males, 278 females) or symptomatic lumbar disc narrowing (145 males, 206 females) were prospectively recruited. Population control subjects (453 males and 448 females) were drawn from the regional population registers. Cases and control subjects were between 25 and 70 years of age. In a structured personal interview, a complete occupational history was elicited to identify subjects with certain minimum workloads. On the basis of job task-specific supplementary surveys performed by technical experts, the situational lumbar load, represented by the compressive force at the lumbosacral disc, was determined via biomechanical model calculations for any working situation with object handling and load-intensive postures during the total working life. For this analysis, all manual handling of objects of about 5 kilograms or more and all postures with trunk inclination of 20 degrees or more were included in the calculation of cumulative lumbar load. Confounder selection was based on biologic plausibility and on the change-in-estimate criterion. Odds ratios (OR) and 95% confidence intervals (CI) were calculated separately for men and women using unconditional logistic regression analysis, adjusted for age, region, and unemployment as a major life event (in males) or psychosocial strain at work (in females), respectively. To further elucidate the contribution of past physical workload to the development of lumbar disc diseases, we performed lag-time analyses.
Results: We found a positive dose-response relationship between cumulative occupational lumbar load and lumbar disc herniation as well as lumbar disc narrowing among men and women. Even past lumbar load seems to contribute to the risk of lumbar disc disease.
Conclusion: According to our study, cumulative physical workload is related to lumbar disc diseases among men and women.
Answer: The question of whether lumbar disk disease is an occupational disease is complex and has been the subject of various studies. The evidence suggests that certain occupational risk factors may contribute to the development of lumbar disk disease, although the relationship is not straightforward and is influenced by a combination of factors.
Several studies have indicated that whole-body vibration, as experienced by individuals operating heavy machinery or driving vehicles like tractors, can lead to earlier onset of disk degeneration and specific topographic modifications of disk degeneration in the lumbar spine (PUBMED:12226775). Additionally, a case-control study found that interactions between the MMP-3 gene rs591058 polymorphism and occupational risk factors, such as whole-body vibration, bending/twisting, and lifting, contribute to an increased risk for lumbar disk herniation (PUBMED:32157746).
Moreover, a dose-response relationship has been established between cumulative lifting load and lumbar disk degeneration based on magnetic resonance imaging findings, suggesting that the risk of lumbar disk degeneration increases with the cumulative lifting load (PUBMED:24970094). A case-control study also found that cumulative occupational exposure to lifting or carrying and extreme forward bending increases the risk for developing symptomatic osteochondrosis or spondylosis of the lumbar spine (PUBMED:11600730).
However, it is important to note that the interpretation of lumbar disk disease as an occupational disease is not without controversy. Some studies have faced methodological challenges, and the role of individual factors such as age, genetics, and non-occupational behaviors also plays a significant role in the development of lumbar disk disease (PUBMED:12226775; PUBMED:11276958). Furthermore, while a prospective cohort study indicated that biomechanical factors such as occupational load and body stature are associated with hospitalization for lumbar disc disease, the importance of these factors has been debated (PUBMED:22281487).
In conclusion, while there is evidence to suggest that certain occupational exposures can increase the risk of lumbar disk disease, it is not solely an occupational disease. It results from a multifactorial process that includes genetic predisposition, lifestyle factors, and occupational risks. The medical and legal interpretation of lumbar disk disease as an occupational disease must consider the complex interplay of these various factors (PUBMED:12376870).
Instruction: Mirror exposure to increase body satisfaction: Should we guide the focus of attention towards positively or negatively evaluated body parts?
Abstracts:
abstract_id: PUBMED:26117584
Mirror exposure to increase body satisfaction: Should we guide the focus of attention towards positively or negatively evaluated body parts? Background And Objectives: Though there is some evidence that body exposure increases body satisfaction, it is still unclear why exposure works and how attention should be guided during exposure. This pilot study manipulates the focus of attention during body exposure.
Methods: Female participants high in body dissatisfaction were randomly assigned to an exposure intervention that exclusively focused on self-defined attractive (n = 11) or self-defined unattractive (n = 11) body parts. Both interventions consisted of five exposure sessions and homework. Outcome and process of change were studied.
Results: Both types of exposure were equally effective and led to significant improvements in body satisfaction, body checking, body concerns, body avoidance and mood at post-test. Improvements for body satisfaction and mood were maintained at follow-up, while body shape concerns and body checking still improved between post-test and follow-up. Body avoidance improvements were maintained for the positive exposure, while the negative exposure tended to further decrease long-term body avoidance at follow-up. The 'positive' exposure induced positive feelings during all exposure sessions, while the 'negative' exposure initially induced a worsening of feelings, but feelings started to improve after some sessions. The most unattractive body part was rated increasingly attractive in both conditions, though this increase was significantly larger in the negative compared with the positive exposure condition.
Limitations: The sample size was small and non-clinical.
Conclusions: Both types of exposure might be effective and clinically useful. Negative exposure is emotionally hard but might be significantly more effective in increasing the perceived attractiveness of loathed body parts and in decreasing avoidance behavior.
abstract_id: PUBMED:27236075
Take a look at the bright side: Effects of positive body exposure on selective visual attention in women with high body dissatisfaction. Women with high body dissatisfaction look less at their 'beautiful' body parts than their 'ugly' body parts. This study tested the robustness of this selective viewing pattern and examined the influence of positive body exposure on body-dissatisfied women's attention for 'ugly' and 'beautiful' body parts. In women with high body dissatisfaction (N = 28) and women with low body dissatisfaction (N = 14) eye-tracking was used to assess visual attention towards pictures of their own and other women's bodies. Participants with high body dissatisfaction were randomly assigned to 5 weeks positive body exposure (n = 15) or a no-treatment condition (n = 13). Attention bias was assessed again after 5 weeks. Body-dissatisfied women looked longer at 'ugly' than 'beautiful' body parts of themselves and others, while participants with low body dissatisfaction attended equally long to own/others' 'beautiful' and 'ugly' body parts. Although positive body exposure was very effective in improving participants' body satisfaction, it did not systematically change participants' viewing pattern. The tendency to preferentially allocate attention towards one's 'ugly' body parts seems a robust phenomenon in women with body dissatisfaction. Yet, modifying this selective viewing pattern seems not a prerequisite for successfully improving body satisfaction via positive body exposure.
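Selective-attention results like those above are often condensed into a single gaze-bias index. As a purely hypothetical illustration (this is not the metric or code used in the study), one could express the share of body-directed dwell time spent on self-defined 'ugly' parts:

```python
# Hypothetical gaze-bias index for body-exposure eye-tracking data: the share
# of dwell time on self-defined 'ugly' parts out of all dwell time on 'ugly'
# plus 'beautiful' parts. 0.5 means balanced viewing; values above 0.5 mean
# attention is skewed toward 'ugly' parts. Illustrative convention only.

def ugly_bias(dwell_ugly_ms, dwell_beautiful_ms):
    total = dwell_ugly_ms + dwell_beautiful_ms
    return dwell_ugly_ms / total if total else 0.5

print(ugly_bias(6200, 3800))  # 0.62 -> viewing skewed toward 'ugly' body parts
```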
abstract_id: PUBMED:31336262
Experimental induction of self-focused attention via mirror gazing: Effects on body image, appraisals, body-focused shame, and self-esteem. Cognitive and behavioural models of body dysmorphic disorder posit that selective self-focused attention via mirror gazing plays a key role in the aetiology and maintenance of the disorder. However, there is little empirical support for these theoretical claims. This study aimed to induce self-focused attention via mirror gazing to examine the proposed theoretical effects on body image, distress, body-focused shame, and self-esteem. Fifty-one non-clinical participants (78.43% female) were randomly allocated to one of two conditions: low self-focused attention (i.e., looking into a mirror placed 100 cm/39 in away) vs. high self-focused attention (i.e., focusing on a disliked part in a mirror placed 10 cm/4 in away). Following 5 min of mirror gazing, the high self-focused attention condition experienced decreased satisfaction with appearance, perceived attractiveness, and self-esteem, and increased distress about appearance, distress about disliked parts, urges to change appearance, and body-focused shame. Approaching the mirror from a distance appeared to have no effect. Findings are consistent with theories suggesting that self-focused attention and mirror behaviours might contribute to the development of body dysmorphic disorder and maintain its psychological effects.
abstract_id: PUBMED:38489951
The effect of self-focused attention during mirror gazing on body image evaluations, appearance-related imagery, and urges to mirror gaze. Background And Objectives: Mirror gazing has been linked to poor body image. Cognitive-behavioral models propose that mirror gazing induces self-focused attention. This activates appearance-related imagery, increases body dissatisfaction, and promotes further mirror gazing. However, evidence for these relationships remains scarce. Our study experimentally investigated how self-focused attention impacts overall and facial appearance satisfaction, perceived attractiveness, distress about appearance and disliked features, vividness and emotional quality of appearance-related imagery, and urges to mirror gaze. Baseline body dysmorphic concerns were studied as a moderator.
Methods: Singaporean undergraduates (mean age = 21.22 years, SD = 1.62; 35 females, 28 males) were randomly assigned to high or low self-focused attention during a mirror gazing task. Dependent variables were measured with visual analogue scales, and body dysmorphic concerns with the Body Image Disturbance Questionnaire (BIDQ). Analysis of variance and moderation analyses were conducted.
Results: Self-focused attention lowered overall and facial appearance satisfaction. Perceived attractiveness decreased only in individuals with high baseline body dysmorphic concerns. Contrary to predictions, distress, appearance-related imagery, and urges to mirror gaze were unaffected.
Limitations: This study used a non-clinical sample. The BIDQ has not been psychometrically validated in Singaporean samples.
Conclusions: Self-focused attention during mirror gazing lowers positive body image evaluations. Individuals with higher body dysmorphic concerns are particularly vulnerable to low perceived attractiveness.
abstract_id: PUBMED:30223161
Mirror exposure therapy for body image disturbances and eating disorders: A review. Mirror exposure therapy is a clinical trial validated treatment component that improves body image and body satisfaction. Mirror exposure therapy has been shown to benefit individuals with high body dissatisfaction and patients with eating disorders (ED) in clinical trials. Mirror exposure is an optional component of cognitive behavioral therapy (CBT), an effective treatment for body dysmorphic disorder (BDD). However, most clinical trials of mirror exposure therapy have been small or uncontrolled and have included few male subjects. Adverse events have been reported during mirror exposure clinical trials. We discuss how individuals respond when looking in a mirror and how mirrors can be used therapeutically, and we critically evaluate the evidence in favor of mirror exposure therapy. We discuss clinical indications and technical considerations for the use of mirror exposure therapy.
abstract_id: PUBMED:31241208
Eye-tracking study on the effects of happiness and sadness on body dissatisfaction and selective visual attention during mirror exposure in bulimia nervosa. Objective: Abundant research points to the central role of body image disturbances in the occurrence of eating disorders (ED). While emotional arousal has been identified as a trigger for binge eating in bulimia nervosa (BN), empirical knowledge on the influence of emotions on body image in individuals with BN is scarce. The present study sought to experimentally examine effects of a positive and negative emotion induction on body dissatisfaction and selective attention towards negatively valenced body parts among people with BN.
Method: In a randomized controlled cross-over design, happiness and sadness were induced by film clips one week apart in women with BN (n = 23) and non-ED controls (n = 26). After the emotion induction, participants looked at their body in a full-length mirror, while their attentional allocation was recorded with the help of a mobile eye tracker. Participants repeatedly rated their momentary body dissatisfaction.
Results: Induction of happiness led to a significant decrease in self-reported body dissatisfaction. Furthermore, attentional bias (higher gaze duration and frequency) towards the most disliked body part relative to the most liked body part was significantly greater in the sadness than happiness condition in BN. No significant effects of emotion induction on gaze duration and gaze frequency during mirror exposure were found for controls.
Discussion: In line with assumptions of current models on ED, findings support the notion that emotional state influences the body image of patients with BN.
abstract_id: PUBMED:36418176
A distal external focus of attention facilitates compensatory coordination of body parts. Many studies have shown that focusing on an intended movement effect that is farther away from the body (distal external focus) results in performance benefits relative to focusing on an effect that is closer to the body (proximal external focus) or focusing on the body itself (internal focus) (see Chua, Jimenez-Diaz, Lewthwaite, Kim & Wulf, 2021). Furthermore, the advantages of a distal external focus seem to be particularly pronounced in skilled performers (Singh & Wulf, 2020). The present study examined whether such benefits of a more distal attentional focus may be associated with enhanced functional variability. Volleyball players (n = 20) performed 60 overhand volleyball serves to a target. Using a within-participants design, the effects of a distal external focus (bullseye), a proximal external focus (ball) and an internal focus (hand) were compared. The distal focus condition resulted in significantly higher accuracy scores than did the proximal and internal focus conditions. In addition, uncontrolled manifold analysis showed that functional variability (as measured by the index of synergy) was greatest in the distal focus condition. These findings suggest that a distal external focus on the task goal may enhance movement outcomes by optimising compensatory coordination of body parts.
abstract_id: PUBMED:15629749
Selective visual attention for ugly and beautiful body parts in eating disorders. Body image disturbance is characteristic of eating disorders, and current treatments use body exposure to reduce bad body feelings. Little is known, however, about the cognitive effects of body exposure. In the present study, eye movement registration (electrooculography) was used as a direct index of selective visual attention while eating-symptomatic and normal control participants were exposed to digitalized pictures of their own body and control bodies. The data showed a decreased focus on their own 'beautiful' body parts in the highly symptomatic participants, whereas inspection of their own 'ugly' body parts was given priority. In the normal control group a self-serving cognitive bias was found: they focused more on their own 'beautiful' body parts and less on their own 'ugly' body parts. When viewing other bodies the pattern was reversed: highly symptomatic participants allocated their attention to the beautiful parts of other bodies, whereas normal controls concentrated on the ugly parts of the other bodies. From the present findings follows the hypothesis that a change in the processing of information might be needed for body exposure to be successful.
abstract_id: PUBMED:35731138
What happens in the course of positive mirror exposure? Effects on eating pathology, body satisfaction, affect, and subjective physiological arousal in patients with anorexia and bulimia nervosa. Objective: Mirror exposure (ME) is a therapeutic technique to improve body image disturbance. However, evidence on the effectiveness of different forms of ME in clinical populations is lacking. The present study therefore analysed effects of ME on trait-like and state measures of body image in patients with anorexia nervosa (AN) and bulimia nervosa (BN).
Method: In total, 47 inpatients underwent 3 ME sessions guided by a therapist, with instructions to exclusively verbalise positively about their whole body. Participants completed questionnaires on trait-like eating pathology and body image at the start and end of the study, and instruments on state affect and body satisfaction were administered directly before and after each ME session. Subjective physiological arousal and emotional valence relating to each body part were assessed within each session.
Results: The results indicate significant improvements in eating pathology and body image regarding trait-like measures in patients with AN and BN. Concerning state measures, negative affect significantly decreased and body satisfaction increased during ME. Physiological arousal decreased and positively valenced emotions relating to the various body parts increased.
Conclusions: These findings suggest that positively verbalising about one's body during ME improves eating pathology, body image, affect, and subjective physiological arousal, and thus seems to be an effective form of ME.
abstract_id: PUBMED:27123587
Negative Mood Increases Selective Attention to Negatively Valenced Body Parts in Female Adolescents with Anorexia Nervosa. Objective: Previous research has yielded evidence of increased attentional processing of negatively valenced body parts in women with anorexia nervosa (AN), especially for those with high depressive symptomatology. The present study extended previous research by implementing an experimental mood manipulation.
Method: In a within-subjects design, female adolescents with AN (n = 12) and an age-matched female control group (CG; n = 12) were given a negative and a positive mood induction at a one-week interval. After each mood induction, participants underwent a 3-min mirror exposure, while their eye movements were recorded.
Results: After the positive mood induction, both AN and CG participants displayed longer and more frequent gazes towards their self-defined most ugly relative to their self-defined most beautiful body part. However, after the negative mood induction, only females with AN were characterized by increased attention to their most ugly compared to their most beautiful body part, while CG participants' attention distribution was balanced. Furthermore, in the negative (but not in the positive) mood induction condition gaze frequency and duration towards the most ugly body part was significantly stronger in the AN group relative to the CG.
Discussion: The results emphasize the role of negative mood in the maintenance of pathological information processing of the self-body. This increased body-related negativity-bias during negative mood may lead to the persistence and aggravation of AN patients' body image disturbance.
Answer: The question of whether to guide the focus of attention towards positively or negatively evaluated body parts during mirror exposure to increase body satisfaction has been explored in various studies.
One pilot study found that both types of exposure, focusing on self-defined attractive or unattractive body parts, were equally effective and led to significant improvements in body satisfaction, body checking, body concerns, body avoidance, and mood at post-test. However, the 'negative' exposure was emotionally harder but might be significantly more effective in increasing the perceived attractiveness of loathed body parts and in decreasing avoidance behavior (PUBMED:26117584).
Another study showed that while positive body exposure was very effective in improving participants' body satisfaction, it did not systematically change participants' viewing pattern. The tendency to preferentially allocate attention towards one's 'ugly' body parts seems a robust phenomenon in women with body dissatisfaction. Modifying this selective viewing pattern seems not a prerequisite for successfully improving body satisfaction via positive body exposure (PUBMED:27236075).
Experimental induction of self-focused attention via mirror gazing, focusing on a disliked part, resulted in decreased satisfaction with appearance, perceived attractiveness, and self-esteem, and increased distress about appearance, distress about disliked parts, urges to change appearance, and body-focused shame (PUBMED:31336262).
In contrast, a study that induced self-focused attention during mirror gazing found that it lowered overall and facial appearance satisfaction, and perceived attractiveness decreased only in individuals with high baseline body dysmorphic concerns. Distress, appearance-related imagery, and urges to mirror gaze were unaffected (PUBMED:38489951).
In the context of eating disorders, mirror exposure therapy has been shown to benefit individuals with high body dissatisfaction and patients with eating disorders, but most clinical trials have been small or uncontrolled and have included few male subjects (PUBMED:30223161).
An eye-tracking study on the effects of happiness and sadness on body dissatisfaction and selective visual attention during mirror exposure in bulimia nervosa found that happiness led to a significant decrease in self-reported body dissatisfaction and that attentional bias towards the most disliked body part relative to the most liked body part was significantly greater in the sadness condition in BN (PUBMED:31241208).
In conclusion, both positive and negative exposures during mirror therapy can be effective in improving body satisfaction, but the focus on negatively evaluated body parts might offer additional benefits in terms of reducing avoidance behavior and increasing the perceived attractiveness of those parts. However, the emotional difficulty associated with negative exposure and the robustness of attentional biases towards 'ugly' body parts should be considered when designing interventions (PUBMED:26117584; PUBMED:27236075).
Instruction: Can the written information to research subjects be improved?
Abstracts:
abstract_id: PUBMED:10390684
Can the written information to research subjects be improved?--an empirical study. Objectives: To study whether linguistic analysis and changes in information leaflets can improve readability and understanding.
Design: Randomised, controlled study. Two information leaflets concerned with trials of drugs for conditions/diseases which are commonly known were modified, and the original was tested against the revised version.
Setting: Denmark.
Participants: 235 persons in the relevant age groups.
Main Measures: Readability and understanding of contents.
Results: Both readability and understanding of contents were improved: readability with regard to both information leaflets and understanding with regard to one of the leaflets.
Conclusion: The results show that both readability and understanding can be improved by increased attention to the linguistic features of the information.
abstract_id: PUBMED:12528731
The extent of written trial information: preferences among potential and actual trial subjects. Aim: To investigate the preferred extent of written information in clinical trials among potential and actual trial participants.
Materials And Methods: Questionnaire survey among citizens of Copenhagen County (PUB, N=508), patients attending an out-patient clinic (OPC, N=200), and finally among participants in two clinical trials (ROC, N=32; MRCRUC, N=47--see Abbreviations). Questions concerned attitudes to and preferences towards a relatively short ("short form") and a more detailed information form ("long form") about a hypothetical, but realistic trial.
Results: Approximately 1/8 of the respondents in PUB were satisfied with the "short form", whereas this was the case for approximately 1/6 of outpatients and 1/5 of actual trial participants. Regarding the "long form", approximately three quarters of respondents in all groups were satisfied. Outpatients as a whole were satisfied with the "short form" to a larger extent than respondents in the PUB group (p=0.04). The "long form" was preferred by a little less than 4/5 of respondents in all groups.
Conclusion: Written information to trial subjects should be detailed, as a majority of both potential and actual research participants prefers this, given the choice between two information forms of different extent on the same trial.
abstract_id: PUBMED:24941360
How should we inform patients about antidepressants? A study comparing verbal and written information. Objective. To compare the efficacy of verbal, written, and combined verbal and written information about selective serotonin reuptake inhibitors in patients with depression. Method. Patients with a diagnosis of major depression who were prescribed selective serotonin reuptake inhibitors (n=104) were randomly allocated to verbal (n=34, 18F 16M), written (n=38, 19F 19M) and combined verbal and written information (n=32, 18F 14M) groups, the content of the verbal and written information being exactly the same. The Beck Depression Inventory was used to evaluate depressive symptoms. Patients were called back after 10-14 days and their retention of the information was measured. Results. The total retention scores of the verbal group, the written group and the combined verbal and written group were 12.85±2.19, 7.39±2.85, and 13.19±2.12, respectively. The total scores of the verbal and the combined verbal and written information groups were significantly higher than those of the written group. The information scores had a significant positive correlation with education level. Conclusion. The retention of verbal information concerning the effects and side effects of serotonin reuptake inhibitors, given to patients with low levels of depression, is higher than that of written information. Further studies are needed with more severely depressed patients, comparing the baseline information level with the information level after the intervention and examining the effect of information on compliance.
abstract_id: PUBMED:22070445
Developing written information on osteoarthritis for patients: facilitating user involvement by exposure to qualitative research. Introduction: In developing a guidebook on osteoarthritis (OA), we collaborated with people who have chronic joint pain (users). But to advise, users need to be aware of and sensitive to their own state of knowledge, and educationalists argue that adults sometimes lack such awareness. This paper will report on our experience of providing users with findings from qualitative research to increase awareness of their level of knowledge.
Method: A summary of the results from qualitative research into people's experiences of living with chronic pain was sent to individual members of two groups of users. It was then used to structure group meetings held to help identify information needed for the guidebook.
Findings: Some users found the summary difficult to read and suggested how to simplify it. Nevertheless, it helped most users to become aware of the experiences and views of others who have OA and thus become more sensitive to their own level of knowledge. It also helped them recall experiences that stimulated practical suggestions for managing joint pain in everyday life and provided a way of gently challenging the views of users when they appeared to assume that their views were widely held. The discussions brought to light gaps in the research literature.
Conclusion: We believe this way of involving users, by exposing them to qualitative research findings about lay experiences of living with OA, effectively facilitated the users' contributions to identifying the needs of those who have to live with OA, and we believe it has wider applications.
abstract_id: PUBMED:11874309
Paying research subjects: an analysis of current policies. Background: Few data are available on guidelines used by research organizations to make decisions about paying subjects.
Objective: To analyze existing guidance regarding payment of research subjects and to identify common characteristics and areas for further research.
Design: Descriptive content analysis of policies.
Measurements: Written policies and rules of thumb about paying subjects from 32 U.S. research organizations.
Results: Of 32 organizations, 37.5% had written guidelines about paying subjects; all but 1 reported having rules of thumb. Few (18.8%) were able to provide a confident estimate of the proportion of studies that pay subjects. Organizations reported that investigators and institutional review boards make payment decisions and that both healthy and ill subjects in some studies are paid for their time (87%), for inconvenience (84%), for travel (68%), as incentive (58%), or for incurring risk (32%). Most organizations require that payment be prorated (84%) and described in the consent document (94%).
Conclusions: Most organizations pay some research subjects, but few have written policies on payment. Because investigators and institutional review boards make payment decisions with little specific guidance, standards vary.
abstract_id: PUBMED:17904716
Beyond "misunderstanding": written information and decisions about taking part in a genetic epidemiology study. Although the need to obtain "informed" consent is institutionalised as a principle of ethical practice in research, there is persistent evidence that the meanings people attribute to research tend to be substantially at variance with what might be deemed "correct". One dominant account in the ethics literature has been to treat apparent "misunderstandings" as a technical problem, to be fixed through improving the written information given to research candidates. We aimed to explore theoretically and empirically the role of written information in "informing" participants in research. We conducted a qualitative study involving semi-structured interviews with 29 unpaid healthy volunteers who took part in a genetic epidemiology study in Leicestershire, UK. Data analysis was based on the constant comparative method. We found that people may make sense of information about research, including the content of written information, in complex and unexpected ways. Many participants were unable to identify precisely the aim of the study in which they had participated, saw their participation as deriving from a moral imperative, and had understandings of issues such as feedback of DNA results that were inconsistent with what had been explained in the written information about the study. They had high levels of confidence in the organisations conducting the research, and consequently had few concerns about their participation. These findings, which suggest that some "misunderstanding" may be a persistent and incorrigible feature of people's participation in research, raise questions about the principle of informed consent and about the role of written information. These questions need to be addressed through engagement and dialogue between the research, research participants, social science, and ethics communities.
abstract_id: PUBMED:27860255
What impact does written information about fatigue have on patients with autoimmune rheumatic diseases? Findings from a qualitative study. Objectives: Although fatigue is a common symptom for people with rheumatic diseases, limited support is available. This study explored the impact of written information about fatigue, focusing on a booklet, Fatigue and arthritis.
Methods: Thirteen patients with rheumatic disease and fatigue were recruited purposively from a rheumatology outpatient service. They were interviewed before and after receiving the fatigue booklet. Two patients, plus six professionals with relevant interests, participated in a focus group. Transcripts were analysed thematically and a descriptive summary was produced.
Results: Interviewees consistently reported that fatigue made life more challenging, and none had previously received any support to manage it. Reflecting on the booklet, most said that it had made a difference to how they thought about fatigue, and that this had been valuable. Around half also said that it had affected, or would affect, how they managed fatigue. No one reported any impact on fatigue itself. Comments from interviewees and focus group members alike suggested that the research process may have contributed to the changes in thought and behaviour reported. Its key contributions appear to have been: clarifying the booklet's relevance; prompting reflection on current management; and introducing accountability.
Conclusions: This study indicated that written information can make a difference to how people think about fatigue and may also prompt behaviour change. However, context appeared to be important: it seems likely that the research process played a part and that the impact of the booklet may have been less if read in isolation. Aspects of the research appearing to facilitate impact could be integrated into routine care, providing a pragmatic (relatively low-cost) response to an unmet need.
abstract_id: PUBMED:15126602
Inside information: Financial conflicts of interest for research subjects in early phase clinical trials. In recent years, several research subjects have told us that they had bought or intended to buy stock in the companies sponsoring the clinical trials in which they were enrolled. This situation has led us to ask what, if any, are physician-investigators' scientific, ethical, and legal responsibilities concerning research subjects who choose to buy stock in the companies sponsoring the clinical trials in which they are participating. Although the scope of this problem is unknown and is likely to be small, this commentary examines the scientific, ethical, and legal concerns raised by such activities on the part of research subjects enrolled in early phase clinical trials. In addition, this commentary also outlines the basis for our opinion that research subjects involved in an early phase clinical trial should avoid the financial conflicts of interest created by trading stock in the company sponsoring the clinical trial.
abstract_id: PUBMED:31239103
The effectiveness of written communication for decision support in clinical practice. Background: The application of various tools suggests limitations in the usage of drug information provided to medical professionals. Concurrent views of utility and suggested improvements for written information by health care practitioners are lacking.
Objective: This study's objectives were to: (1) assess practice-based perspectives on the relative efficacy and utility of different medication-related written materials for health care practitioners, (2) discern aspects of written communications that are valued for actionable information and merit health care practitioners' attention, (3) determine common or unique themes of clinicians practicing as physicians, pharmacists, physician assistants, and nurses regarding medication-related written information; and (4) organize constructs and themes as a cogent array of current deficiencies in written communications to guide improvements.
Methods: Two focus group panels (physicians, physician assistants, pharmacists, nurses) were convened to address clinical decisional balance and the utility of written information about medicines in assisting them with those decisions. A facilitated dialogue followed a semi-structured interview guide including overarching questions and tap-root probes derived from the literature. Comparative analyses were used to interpret data. An a priori coding framework informed the interview guide and served as a basis for initial identification of themes.
Results: Panelists from diverse practices and settings voiced convergent agreement on the limited utility of written materials, attributed primarily to current structure, formatting, content, and design. Recommendations thematically supported the need for greater accuracy, recency, adaptability, sequencing, and accessibility of information in formats more frequently digitalized.
Conclusions: Focus group panels of practitioners provided rich information on how current written information such as Dear Doctor letters and package inserts could be improved to facilitate real-time decision-making. Overall, improvements could contribute to an improved capacity for efficient, effective, and sustained evidence-based practice behavior.
abstract_id: PUBMED:23567071
Pharmacist and general practitioner ambivalence about providing written medicine information to patients-a qualitative study. Background: Written medicine information is an important aspect of patient education, increasing patient knowledge and satisfaction. It can play an important role in promoting education, communication, and improving health literacy. In Australia, standardized, comprehensive written medicine information is available for all medicines. Patients want such written information; however, they report that it is generally not supplied, or that there is limited interaction between healthcare professionals and patients when it is provided.
Objective: A qualitative study was conducted to explore the opinions and attitudes of healthcare professionals toward the use of written medicine information in practice.
Method: The study involved focus groups with 32 general practitioners (4 groups), 29 community pharmacists (4 groups) and 7 hospital pharmacists (1 group) in Sydney and rural New South Wales. All discussions were transcribed verbatim and analyzed using thematic content analysis.
Findings: Across the groups, professionals were aware of patients' needs for medicine information, but provision to patients in practice appeared the exception rather than the rule. Common reasons for non-provision were lack of time, possible creation of patient anxiety, low literacy, and perceived length and complexity of the information. Many desired more balanced information for patients on both benefits and risks. Most felt current materials were not useful information-sharing tools; some perceived that the materials undermined their relationship with patients. Improvement strategies proposed included increased consultation times and improved quality of the information.
Conclusion: Participants were ambivalent about supplying written medicine information to their patients and concerned about its impact on the patient-provider relationship. This contributed to limited provision, despite the information being available for all medicines. A tailored approach to meet individual patient information preferences, together with efforts to support professionals as facilitators of information may increase written medicine information utilization as an information-sharing tool to improve health literacy and patient engagement.
Answer: Yes, the written information to research subjects can be improved. An empirical study demonstrated that both readability and understanding of information leaflets for drug trials could be enhanced by focusing on linguistic features of the information. The study involved modifying two information leaflets and comparing the original versions against the revised ones in a randomized, controlled setting with 235 participants. The results showed improvements in readability for both leaflets and in understanding for one of the leaflets (PUBMED:10390684). |
Instruction: Do premenopausal women with major depression have low bone mineral density?
Abstracts:
abstract_id: PUBMED:35054263
Premenopausal Singaporean Women Suffering from Major Depressive Disorder Treated with Selective Serotonin Reuptake Inhibitors Had Similar Bone Mineral Density as Compared with Healthy Controls. The association between selective serotonin reuptake inhibitor (SSRI) treatment and lower bone mineral density (BMD) remains controversial, and further research is required. This study aimed to compare the BMD and levels of bone formation and bone metabolism markers in medicated premenopausal Singaporean women with major depressive disorder (MDD) and matched healthy controls. We examined 45 women with MDD who received SSRI treatment (mean age: 37.64 ± 7) and 45 healthy controls (mean age: 38.1 ± 9.2). BMD at the lumbar spine, total hip and femoral neck was measured using dual-energy X-ray absorptiometry. We also measured the bone formation marker procollagen type 1 N-terminal propeptide (P1NP) and the bone metabolism markers osteoprotegerin (OPG) and receptor activator of nuclear factor-kappa-B ligand (RANKL). There were no significant differences in the mean BMD in the lumbar spine (healthy controls: 1.04 ± 0.173 vs. MDD patients: 1.024 ± 0.145, p = 0.617), left hip (healthy controls: 0.823 ± 0.117 vs. MDD patients: 0.861 ± 0.146, p = 0.181) and right hip (healthy controls: 0.843 ± 0.117 vs. MDD patients: 0.85 ± 0.135, p = 0.784) between healthy controls and medicated patients with MDD. There were no significant differences in median P1NP (healthy controls: 35.9 vs. MDD patients: 37.3, p = 0.635), OPG (healthy controls: 2.6 vs. MDD patients: 2.7, p = 0.545), RANKL (healthy controls: 23.4 vs. MDD patients: 2178.93, p = 0.279) and RANKL/OPG ratio (healthy controls: 4.1 vs. MDD patients: 741.4, p = 0.279) between healthy controls and medicated patients with MDD. Chronic SSRI treatment might not be associated with low BMD in premenopausal Singaporean women who suffered from MDD. This finding may help female patients with MDD make an informed decision when considering the risks and benefits of SSRI treatment.
abstract_id: PUBMED:16046174
Bone mineral density in premenopausal women with major depression. Aim: To investigate the relationship between major depression and bone mineral density (BMD) in premenopausal women.
Material And Methods: We compared the BMD, plasma cortisol level, and osteocalcin and C-telopeptide levels of 35 premenopausal women with major depression with those of 30 healthy women who were matched for age and body mass index. Major depression was diagnosed according to the Diagnostic and Statistical Manual of Mental Disorders (fourth edition) criteria. Nineteen patients had mild and 16 had moderate major depression as measured by the Hamilton Rating Scale for Depression.
Results: Women with any risk factor for osteoporosis had been excluded from the study. All women underwent BMD measurement by DEXA at the lumbar (L2-4) and femoral neck regions. After an overnight fast, plasma cortisol levels were measured at 08:00 h using a competitive immunoassay method. Osteocalcin and C-telopeptide were used for the evaluation of bone turnover. There were no significant differences in BMD, plasma cortisol level, or osteocalcin and C-telopeptide levels between the patient and control groups. There was also no correlation between BMD and the plasma cortisol level, the duration and severity of disease, or antidepressant drug use.
Conclusion: Major depression had no significant effect on BMD or bone turnover markers in our group of patients with mild to moderate severity of the disorder.
abstract_id: PUBMED:27727264
Insecure attachment style predicts low bone mineral density in postmenopausal women. A pilot study. Introduction: Major depressive disorder (MDD) and osteoporosis are two common disorders with high morbidity and mortality rates. Conflicting data have found associations between MDD and low bone mineral density (BMD) or osteoporosis, although causative factors are still unclear. A pilot study was designed with the aim to assess the relationship between MDD and BMD in postmenopausal women with MDD compared to healthy volunteers. We hypothesized that attachment style (AS) mediated this relationship.
Methods: The sample was made of 101 postmenopausal women, 49 with MDD and 52 age-matched healthy volunteers. Structured clinical interview and Beck Depression Inventory (BDI) were performed to assesse MDD. AS was evaluated using the Relationship Questionnaire (RQ). BMD was measured by dual energy X-ray absorptiometry.
Results: The univariate analysis showed that women with MDD had lower BMD values as compared to healthy volunteers. In the regression models MDD diagnosis and BDI score were not significant predictors of low BMD. The “preoccupied” pattern of insecure AS was a significant, independent predictor of decreased BMD in all skeletal sites: lumbar spine (p=0.008), femoral neck (p=0.011), total hip (p=0.002).
Conclusions: This is the first study exploring the relationship between AS, MDD and BMD. Our results support the link between MDD and low BMD. We found that insecure AS was a risk factor for decreased BMD, regardless of depression. Insecure AS may play a role in the relationship between MDD and BMD or may constitute a risk factor itself. Therapeutic interventions focused on AS could improve psychiatric disorders and physical diseases related to low BMD.
abstract_id: PUBMED:17313608
Relation of cortisol levels and bone mineral density among premenopausal women with major depression. We aimed to investigate the relationship between cortisol levels and bone mineral density (BMD) among premenopausal women with major depression. We compared BMD, plasma cortisol, osteocalcin and C-telopeptide (CTx) levels of 36 premenopausal women with major depression with 41 healthy women who were matched for age and body mass index. Osteocalcin and CTx were used for the evaluation of bone turnover. The clinical diagnosis of major depression was made by using the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria. The 21-item Hamilton Rating Scale for Depression was used for the assessment of depressive symptoms. In comparison with the controls, the mean BMD of the depressed women was significantly lower at the lumbar spine and at all sites of the proximal femur (p = 0.02, 0.01). Plasma cortisol levels were significantly higher in depressive patients than in controls (p = 0.001). Osteocalcin was lower and CTx was higher in the patient group than in controls (p = 0.04, p = 0.008). Lumbar and femur BMD scores were negatively correlated with cortisol levels in the patient group. Major depression had important effects on BMD and bone turnover markers. Depression should be considered among risk factors for osteoporosis in premenopausal women.
abstract_id: PUBMED:18039992
Low bone mass in premenopausal women with depression. Background: An increased prevalence of low bone mineral density (BMD) has been reported in patients with major depressive disorder (MDD), mostly women.
Methods: Study recruitment was conducted from July 1, 2001, to February 29, 2003. We report baseline BMD measurements in 89 premenopausal women with MDD and 44 healthy control women enrolled in a prospective study of bone turnover. The BMD was measured by dual-energy x-ray absorptiometry at the spine, hip, and forearm. Mean hourly levels of plasma 24-hour cytokines, 24-hour urinary free cortisol, and catecholamine excretion were measured in a subset of women. We defined MDD according to the Diagnostic and Statistical Manual of Mental Disorders (Fourth Edition).
Results: The prevalence of low BMD, defined as a T score of less than -1, was greater in women with MDD vs controls at the femoral neck (17% vs 2%; P = .02) and total hip (15% vs 2%; P = .03) and tended to be greater at the lumbar spine (20% vs 9%; P = .14). The mean +/- SD BMD, expressed as grams per square centimeter, was lower in women with MDD at the femoral neck (0.849 +/- 0.121 vs 0.866 +/- 0.094; P = .05) and at the lumbar spine (1.024 +/- 0.117 vs 1.043 +/- 0.092; P = .05) and tended to be lower at the radius (0.696 +/- 0.049 vs 0.710 +/- 0.055; P = .07). Women with MDD had increased mean levels of 24-hour proinflammatory cytokines and decreased levels of anti-inflammatory cytokines.
Conclusions: Low BMD is more prevalent in premenopausal women with MDD. The BMD deficits are of clinical significance and comparable in magnitude to those resulting from established risk factors for osteoporosis, such as smoking and reduced calcium intake. The possible contribution of immune or inflammatory imbalance to low BMD in premenopausal women with MDD remains to be clarified.
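For reference, the T score used above to define low BMD is the standard densitometric quantity: the patient's BMD expressed in standard deviations from a young-adult reference mean,

\[ T = \frac{\mathrm{BMD}_{\mathrm{patient}} - \overline{\mathrm{BMD}}_{\mathrm{young\ adult}}}{\mathrm{SD}_{\mathrm{young\ adult}}} \]

so the threshold T < -1 marks a measurement more than one reference standard deviation below that mean.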
abstract_id: PUBMED:22848407
Do premenopausal women with major depression have low bone mineral density? A 36-month prospective study. Background: An inverse relationship between major depressive disorder (MDD) and bone mineral density (BMD) has been suggested, but prospective evaluation in premenopausal women is lacking.
Methods: Participants in this prospective study were 21- to 45-year-old premenopausal women with MDD (n = 92) and healthy controls (n = 44). We measured BMD at the anteroposterior lumbar spine, femoral neck, total hip, mid-distal radius, trochanter, and Ward's triangle, as well as serum intact parathyroid hormone (iPTH), ionized calcium, plasma adrenocorticotropic hormone (ACTH), serum cortisol, and 24-hour urinary free cortisol levels at 0, 6, 12, 24, and 36 months. 25-hydroxyvitamin D was measured at baseline.
Results: At baseline, BMD tended to be lower in women with MDD compared to controls, and BMD remained stable over time in both groups. At baseline and at 6, 12, and 24 months, intact PTH levels were significantly higher in women with MDD vs. controls. At baseline, ionized calcium and 25-hydroxyvitamin D levels were significantly lower in women with MDD compared to controls. At baseline and 12 months, bone-specific alkaline phosphatase, a marker of bone formation, was significantly higher in women with MDD vs. controls. Plasma ACTH was also higher in women with MDD at baseline and 6 months. Serum osteocalcin, urinary N-telopeptide, serum cortisol, and urinary free cortisol levels were not different between the two groups throughout the study.
Conclusion: Women with MDD tended to have lower BMD than controls over time. Larger and longer studies are necessary to extend these observations with the possibility of prophylactic therapy for osteoporosis.
Trial Registration: ClinicalTrials.gov NCT 00006180.
abstract_id: PUBMED:8815939
Bone mineral density in women with depression. Background: Depression is associated with alterations in behavior and neuroendocrine systems that are risk factors for decreased bone mineral density. This study was undertaken to determine whether women with past or current major depression have demonstrable decreases in bone density.
Methods: We measured bone mineral density at the hip, spine, and radius in 24 women with past or current major depression and 24 normal women matched for age, body-mass index, menopausal status, and race, using dual-energy x-ray absorptiometry. We also evaluated cortisol and growth hormone secretion, bone metabolism, and vitamin D-receptor alleles.
Results: As compared with the normal women, the mean (+/-SD) bone density in the women with past or current depression was 6.5 percent lower at the spine (1.00+/-0.15 vs. 1.07+/-0.09 g per square centimeter, P=0.02), 13.6 percent lower at the femoral neck (0.76+/-0.11 vs. 0.88+/-0.11 g per square centimeter, P<0.001), 13.6 percent lower at Ward's triangle (0.70+/-0.14 vs. 0.81+/-0.13 g per square centimeter, P<0.001), and 10.8 percent lower at the trochanter (0.66+/-0.11 vs. 0.74+/-0.08 g per square centimeter, P<0.001). In addition, women with past or current depression had higher urinary cortisol excretion (71+/-29 vs. 51+/-19 micrograms per day [196+/-80 vs. 141+/-52 nmol per day], P=0.006), lower serum osteocalcin concentration (P=0.04), and lower urinary excretion of deoxypyridinoline (P=0.02).
Conclusions: Past or current depression in women is associated with decreased bone mineral density.
abstract_id: PUBMED:27453860
Depressive symptoms and bone mineral density in menopause and postmenopausal women: A still increasing and neglected problem. Background: The association between depression and loss of bone mineral density (BMD) has been reported as controversial.
Objective: The objective of the current study was to investigate whether an association exists between depression and low BMD during the menopausal and postmenopausal period.
Materials And Methods: A cross-sectional descriptive study was used to document menopause symptoms experienced by Arabian women at the Primary Health Care Centers in Qatar. A multi-stage sampling design was used, and a representative sample of 1650 women aged 45-65 years was included between July 2012 and November 2013. This prospective study explored the association between bone density and major depressive disorder in women. Bone mineral density (BMD) measurements (g/m²) were assessed at the BMD unit using a Lunar Prodigy DXA system (Lunar Corp., Madison, WI). Data on body mass index (BMI) and clinical biochemistry variables, including serum 25-hydroxyvitamin D, were collected. The Beck Depression Inventory was administered to assess depression.
Results: Out of 1650 women, 1182 (71.6%) agreed to participate in the study. The mean ± SD menopausal age was 48.71 ± 2.96 years in women with depression and 50.20 ± 3.22 years in those without (P < 0.001). Furthermore, the mean ± SD postmenopausal age was 58.55 ± 3.27 years in women with depression and 57.78 ± 3.20 years in those without (P < 0.001). There were statistically significant differences between menopausal stages with regard to number of parities and place of living, as well as BMI, systolic and diastolic blood pressure, vitamin D deficiency, calcium deficiency and shisha smoking habits. Overall, osteopenia, osteoporosis and bone loss were significantly lower in postmenopausal women than in menopausal women (P < 0.001). Similarly, T-scores and Z-scores were lower in depressed menopausal and postmenopausal women (P < 0.001). The multivariate logistic regression analyses revealed that depression, serum vitamin D deficiency, calcium deficiency, low physical activity, comorbidity, parity, systolic and diastolic blood pressure, and shisha smoking were the main risk factors associated with bone mineral loss after adjusting for age, BMI and other variables.
Conclusion: Depression is associated with low BMD, with a substantially greater BMD decrease in depressed women and in cases of clinical depression. Depression should be considered an important risk factor for osteoporosis.
abstract_id: PUBMED:19446797
Major depression is a risk factor for low bone mineral density: a meta-analysis. Background: The role of depression as a risk factor for low bone mineral density (BMD) and osteoporosis is not fully acknowledged, mainly because the relevant literature is inconsistent and because information on the mechanisms mediating brain-to-bone signals is rather scanty.
Methods: Searching databases and reviewing citations in relevant articles, we identified 23 studies that quantitatively address the relationship between depression and skeletal status, comparing 2327 depressed with 21,141 nondepressed individuals. We subjected these studies to meta-analysis, assessing the association between depression and BMD as well as between depression and bone turnover markers.
Results: Overall, depressed individuals displayed lower BMD than nondepressed subjects, with a composite weighted mean effect size (d) of -.23 (95% confidence interval: -.33 to -.13; p < .001). The association between depression and BMD was similar in the spine, hip, and forearm. It was stronger in women (d = -.24) than in men (d = -.12), and in premenopausal (d = -.31) than in postmenopausal (d = -.12) women. Only women individually diagnosed with major depression by a psychiatrist using DSM criteria displayed significantly low BMD (d = -.36); women diagnosed by self-rating questionnaires did not (d = -.06). Depressed subjects had increased urinary levels of bone resorption markers (d = .52).
Conclusions: The present findings portray depression as a significant risk factor for low BMD. Premenopausal women who are psychiatrically diagnosed with major depression are particularly at high-risk for depression-associated low BMD. Hence, periodic BMD measurements and antiosteoporotic prophylactic and curative measures are strongly advocated for these patients.
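As a hedged reference for readers unfamiliar with meta-analytic notation: the composite effect size quoted above is an inverse-variance weighted mean of the per-study standardized differences d_i. The abstract does not state whether fixed- or random-effects weights were used, so the generic fixed-effect form is shown:

\[ \bar{d} = \frac{\sum_i w_i d_i}{\sum_i w_i}, \qquad w_i = \frac{1}{\widehat{\mathrm{Var}}(d_i)}, \qquad 95\%\,\mathrm{CI} = \bar{d} \pm 1.96 \sqrt{\frac{1}{\sum_i w_i}} \]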
abstract_id: PUBMED:23155514
Bone density and depression in premenopausal South African women: a pilot study. Objective: It is posited that the effect of depression on BMD is dependent on the severity of depression. Conflicting evidence exists regarding this possible association. This study investigated the association between depression and low bone mineral density (BMD).
Methods: The hypothesis was investigated in a random sample of volunteers (n=40) and in premenopausal female psychiatric patients (n=5) diagnosed with recurrent severe major depression. The outcome measures were BMD (DEXA); depression (Beck Depression Inventory and Psychological General Well-being Scale) and 24-hour saliva cortisol levels (ELISA). In a comparison of women with negligible symptoms of depression (4 of the 40, i.e. "control" subjects) and the five patients with severe recurrent major depression, BMD, depression, saliva cortisol and bone turnover markers were measured and compared. Pro-inflammatory status (IL-1 and TNF-alpha) was investigated in the psychiatric patients only.
Results: In the random (non-clinical) sample of women (n=40), 26 exhibited normal BMD and 14 exhibited low BMD. Depressive symptoms and cortisol levels were not significantly different between these two groups. Women with severe recurrent major depression (n=5) exhibited lower median BMD T-scores, higher overall bone turnover and higher 24-hour cortisol levels compared to "control" subjects (n=4). The psychiatric patients also exhibited elevated IL-1 levels.
Conclusion: The effect of depression on BMD may be dependent on depression severity; IL-1 and cortisol are possible mediators in depression-induced BMD loss.
Answer: The relationship between major depression and bone mineral density (BMD) in premenopausal women is complex and has been the subject of various studies with differing results. Some studies have found no significant differences in BMD between premenopausal women with major depression and healthy controls. For instance, a study on premenopausal Singaporean women treated with selective serotonin reuptake inhibitors (SSRIs) for major depressive disorder (MDD) found no significant differences in BMD when compared with healthy controls (PUBMED:35054263). Similarly, another study reported no significant effect of major depression on BMD and bone turnover markers in a group of women with mild to moderate severity of the disorder (PUBMED:16046174).
However, other studies have suggested that major depression may be associated with lower BMD in premenopausal women. A study found that women with past or current major depression had demonstrably decreased bone density compared to normal women (PUBMED:8815939). Another study reported that low BMD is more prevalent in premenopausal women with MDD, with BMD deficits comparable in magnitude to those resulting from established risk factors for osteoporosis (PUBMED:18039992). A prospective study also indicated that women with MDD tended to have lower BMD than controls over time (PUBMED:22848407).
Furthermore, a meta-analysis portrayed depression as a significant risk factor for low BMD, particularly in premenopausal women who are psychiatrically diagnosed with major depression (PUBMED:19446797). A pilot study suggested that the effect of depression on BMD may be dependent on the severity of depression, with women with severe recurrent major depression exhibiting lower BMD, higher overall bone turnover, and higher cortisol levels compared to control subjects (PUBMED:23155514).
In conclusion, while some studies have found no significant association between major depression and BMD in premenopausal women, others have identified major depression as a risk factor for low BMD, particularly in cases of severe depression. The evidence suggests that the relationship may be influenced by factors such as the severity of depression, the presence of other risk factors, and possibly the use of antidepressant medication. |
Instruction: Is it useful to repeat trabeculoplasty?
Abstracts:
abstract_id: PUBMED:31573564
Diode and argon trabeculoplasty in primary open-angle glaucoma treatment. Laser trabeculoplasty is one of the methods used to decrease intraocular pressure (IOP) as part of comprehensive glaucoma treatment. Argon laser trabeculoplasty (ALT) and diode laser trabeculoplasty (DLT) are two of the most widely used laser trabeculoplasty techniques. The mechanism of their hypotensive action is based on the photocoagulating effect of laser emission on the trabecular meshwork and the subsequent facilitation of aqueous outflow, but differences in wavelength and emission energy determine the difference in direct laser action and the features of the postoperative clinical picture. The efficacy of both methods has been verified in many studies; however, relatively few of them have made a comparative analysis of the two techniques. The article reviews publications dedicated to comparing the efficacy of ALT and DLT, including their hypotensive effect and safety, as well as data on laser surgery tolerance and complications.
abstract_id: PUBMED:24330092
Mechanisms of selective laser trabeculoplasty: a review. Selective laser trabeculoplasty is a safe and effective treatment for glaucoma, with greater cost effectiveness than its pharmacological and surgical alternatives. Nevertheless, although the basic science literature on selective laser trabeculoplasty continues to grow, there remains uncertainty over the mechanism by which laser trabeculoplasty reduces intraocular pressure. To address this uncertainty, the evidence behind several potential biological and mechanical mechanisms of selective laser trabeculoplasty were reviewed. In particular, cytokine secretion, matrix metalloproteinase induction, increased cell division, repopulation of burn sites and macrophage recruitment were discussed. Refining our understanding of these mechanisms is essential both to understanding the pathophysiology of ocular hypertension and developing improved therapies to treat the condition.
abstract_id: PUBMED:33519186
Clinical Outcomes of Micropulse Laser Trabeculoplasty Compared to Selective Laser Trabeculoplasty at One Year in Open-Angle Glaucoma. Background: There is limited long-term data comparing selective laser trabeculoplasty (SLT) to the newer micropulse laser trabeculoplasty (MLT) using a laser emitting at 532 nm. In this study, we determine the effectiveness and safety of MLT compared to SLT.
Design: Retrospective comparative cohort study.
Participants: A total of 85 consecutive eyes received SLT and 43 consecutive eyes received MLT.
Methods: Patients with open-angle glaucoma receiving their first treatment of laser trabeculoplasty were included. Exclusion criteria were prior laser trabeculoplasty, laser cyclophotocoagulation or glaucoma surgery, and follow-up of less than 1 year.
Main Outcome Measures: The primary outcome was success at 1 year, defined as a reduction in intraocular pressure (IOP) of ≥20% from baseline, or meeting the prespecified target IOP, with no additional glaucoma medication or subsequent glaucoma intervention.
Results: Baseline IOP was 18.0 mmHg (95% CI=16.4-19.5) in the MLT group on an average of 1.8 (95% CI=1.4-2.2) glaucoma medications, compared to 18.2 mmHg (95% CI=17.2-19.3) in the SLT group on an average of 2.0 (95% CI=1.6-2.3) medications. At 1 hour post-laser, the SLT group had more transient IOP spikes (MLT 5% vs SLT 16%, P=0.10). There was a trend toward increased success in the SLT group compared to MLT at 1 year (relative risk=1.4, 95% CI=0.8-2.5, P=0.30).
Conclusion And Relevance: Eyes had similar success after MLT compared to SLT at 1 year. Laser trabeculoplasty with either method could be offered as treatment with consideration of MLT in those eyes where IOP spikes should be avoided.
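As a hedged sketch of how the reported relative risk and its confidence interval arise from success counts, the function below computes a Wald-type interval on the log scale. The cell counts are hypothetical, chosen only to be roughly consistent with the reported group sizes and RR of 1.4; they are not the study's data:

# Hedged sketch: relative risk of 1-year success (SLT vs MLT) with a
# Wald-type 95% CI on the log scale. The cell counts are hypothetical.
import math

def relative_risk(a, n1, b, n2):
    # a successes out of n1 in group 1; b successes out of n2 in group 2
    p1, p2 = a / n1, b / n2
    rr = p1 / p2
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    half = 1.96 * se_log
    return rr, math.exp(math.log(rr) - half), math.exp(math.log(rr) + half)

# Hypothetical counts: 47/85 SLT successes vs 17/43 MLT successes
print(relative_risk(47, 85, 17, 43))  # roughly RR = 1.4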
abstract_id: PUBMED:11151276
Is it useful to repeat trabeculoplasty? Purpose: To study the effect of a second trabeculoplasty with argon laser (ALT) on intraocular pressure.
Methods: 24 eyes of 18 patients with an average age of 64 years were reviewed. These patients had undergone a previous ALT and, after an average interval of 26 months, a new ALT was performed. The IOP before the repeat procedure was compared with that obtained 3 months after repeating the ALT. Analyzing only the cases in which both ALTs had been carried out with the same parameters (spot size 50 µm, duration 0.15 s, power 700 mW), we isolated a second group of 14 eyes of 14 patients, with an average age of 66 years, in whom the ALT was repeated after an average interval of 18 months. Means and standard deviations were calculated, and the comparison was performed with Student's t-test.
Results: With the second ALT we obtained a 26% decrease in IOP. In the second group, the IOP fell by an average of 16% after the first ALT and by 28% after the second.
Conclusions: ALT can be repeated with good results, obtaining even greater decreases in IOP with the second ALT.
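The methods compare pre- and post-laser pressures with Student's t-test; a minimal sketch of that paired analysis, with invented IOP values rather than the study's data, might look like this:

# Hedged sketch: paired Student's t-test on IOP before vs. 3 months after a
# repeat ALT. The values below are invented for illustration only.
from scipy import stats

iop_before = [24.0, 26.5, 22.0, 28.0, 25.5, 23.0, 27.0]  # mmHg, pre-laser
iop_after = [18.0, 19.5, 17.0, 20.0, 18.5, 17.5, 19.0]   # mmHg, post-laser

t_stat, p_value = stats.ttest_rel(iop_before, iop_after)
mean_before = sum(iop_before) / len(iop_before)
mean_drop = sum(b - a for b, a in zip(iop_before, iop_after)) / len(iop_before)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, "
      f"mean IOP reduction = {100 * mean_drop / mean_before:.0f}%")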
abstract_id: PUBMED:32602256
Outcomes of pattern scanning laser trabeculoplasty and selective laser trabeculoplasty: Results from the lausanne laser trabeculoplasty registry. Purpose: To compare the long-term safety and efficacy of pattern scanning laser trabeculoplasty (PSLT) and selective laser trabeculoplasty (SLT).
Methods: This was a retrospective database analysis (Lausanne Laser Trabeculoplasty Registry) of patients having had laser trabeculoplasty (LT) prior to 2017 with a minimum follow-up of 1 year. Inclusion criteria were age ≥40 years and a diagnosis of ocular hypertension (OHT) or open-angle glaucoma (OAG). Selective laser trabeculoplasty (SLT) eyes were matched to PSLT eyes according to baseline intraocular pressure (IOP), baseline number of ocular hypotensive medications (OHM) and glaucoma diagnosis. Success was defined as an IOP reduction of ≥20% from baseline, or an IOP equal to or lower than baseline accompanied by a reduction in OHM. Multivariate regression models were used to study associations between success and baseline clinical parameters.
Results: From 280 eyes in the database, 81 eyes had PSLT and were matched with 81 SLT eyes (162 patients). Mean age was 69.4 ± 12.1 years, and 56.2% were female. Mean IOP was 18.6 ± 5.3 and 18.2 ± 4.1 mmHg at baseline and 15.9 ± 3.0 and 16.0 ± 3.4 mmHg at 12 months and 15.2 ± 2.7 and 16.2 ± 3.4 mmHg at 24 months, for PSLT and SLT, respectively. 60.5% of PSLT and 65.4% of SLT eyes achieved treatment success (p = 0.20). Number of OHM was 1.0 ± 1.0 and 1.4 ± 1.2, respectively (p = 0.052). Baseline IOP (OR = 1.23, p < 0.01) and number of OHM (OR = 1.67, p < 0.01) were associated with success in both PSLT and SLT, while LT modality was not [OR = 0.81 (0.43-1.53), p = 0.52], and a diagnosis of primary OAG was negatively associated (OR = 0.42, p = 0.04).
Conclusion: Our study did not find any significant differences between PSLT and SLT in terms of safety and efficacy in patients with OHT and glaucoma. Baseline IOP was associated with higher success rates in both procedures. Additional studies are needed to evaluate the outcomes of PSLT in non-Caucasian populations and the ability of repeat PSLT to achieve additional IOP reduction.
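A note on reading the regression results above: the odds ratios come from a multivariate logistic model, in which each OR is the exponentiated coefficient. An OR of 1.23 for baseline IOP therefore means the odds of treatment success multiply by 1.23 per additional mmHg of baseline pressure, holding the other covariates fixed:

\[ \log\frac{P(\mathrm{success})}{1-P(\mathrm{success})} = \beta_0 + \beta_1\,\mathrm{IOP}_{\mathrm{base}} + \beta_2\,\mathrm{OHM} + \cdots, \qquad \mathrm{OR}_j = e^{\beta_j} \]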
abstract_id: PUBMED:20142974
Update on laser trabeculoplasty. Newer techniques of laser trabeculoplasty have revived the procedure and gained widespread acceptance by the ophthalmic community. This review was undertaken to address the evolution of different laser trabeculoplasty techniques and proposed mechanisms of action, as well as to review current studies of the therapeutic effects of these interventions.
abstract_id: PUBMED:28028351
Selective Laser Trabeculoplasty: An Overview. Given the obvious quality of life concerns with medical and surgical lowering of intraocular pressure (IOP), lasers have received considerable attention as a therapeutic modality for glaucoma. Selective laser trabeculoplasty (SLT) is increasingly being used in clinical practice as both the primary procedure and as an adjunct to medical and surgical therapy. Preliminary published evidence suggests that SLT is an effective, compliance-free, repeatable and safe therapeutic modality having only minor, transient, self-limiting or easily controlled side effects with no sequelae. This review attempts a broad overview of the current knowledge of its mechanism, efficacy, indications and limitations, point out the knowledge lacunae that still exist with respect to this highly promising technology which has captured the attention of glaucoma surgeons all over the world.
How To Cite This Article: Jha B, Bhartiya S, Sharma R, Arora T, Dada T. Selective Laser Trabeculoplasty: An Overview. J Current Glau Prac 2012;6(2):79-90.
abstract_id: PUBMED:35225967
Spotlight on MicroPulse Laser Trabeculoplasty in Open-Angle Glaucoma: What's on? A Review of the Literature. Glaucoma, a progressive optic neuropathy, is the most common cause of permanent blindness in the world. Patients with glaucoma are often treated with topical medical therapy in order to reduce intraocular pressure (IOP). Laser therapies, from the introduction of Argon Laser Trabeculoplasty (ALT) and subsequently Selective Laser Trabeculoplasty (SLT), have also been reported to be effective in IOP control, with low adverse effect rates. In recent years, the micropulse laser, a subthreshold laser technology, was introduced with the goal of reducing side effects while maintaining the effectiveness of the laser treatments. Several studies have focused on Micropulse Diode Laser Trabeculoplasty (MDLT) in open-angle glaucoma, to evaluate its effectiveness and possible side effects. Promising results were reported, but irradiation parameters have not yet been standardized, and its role as a substitute for previous laser techniques has yet to be defined. As a result, the goal of this review was to analyze the physical principles at the basis of MDLT and to frame it in the open-angle glaucoma management setting, highlighting the advantages and shortfalls of this technique.
abstract_id: PUBMED:26997784
Selective Laser Trabeculoplasty: A Clinical Review. Unlabelled: Selective laser trabeculoplasty (SLT) is a safe and effective treatment modality for lowering the intraocular pressure in patients with glaucoma. It achieves its results by selective absorption of energy in the trabecular pigmented cells, sparing adjacent cells and tissues from thermal damage, with minimal morphological tissue alteration following treatment. On the basis of the peer-reviewed medical literature, SLT is efficacious in lowering IOP, as initial treatment or when medical therapy is insufficient in all types of open-angle glaucoma in all races. SLT achieves intraocular pressure reduction similar to that of argon laser trabeculoplasty but without the tissue destruction and side effects. Observed side effects following SLT were almost uniformly transient and minor. We review highlights of recently published studies on the mechanisms and clinical outcome of SLT in order to address frequently raised issues pertinent to SLT in the clinical practice.
Key Messages: Selective laser trabeculoplasty is a safe and effective treatment modality for lowering the intraocular pressure in patients with glaucoma. How to cite this article: Alon S. Selective Laser Trabeculoplasty: A Clinical Review. J Current Glau Prac 2013; 7(2):58-65.
abstract_id: PUBMED:33867755
Laser Trabeculoplasty Perceptions and Practice Patterns of Canadian Ophthalmologists. Aim: To describe the current practice patterns and perceptions of Canadian ophthalmologists using laser trabeculoplasty (LTP).
Materials And Methods: A cross-sectional survey of 124 members of the Canadian Ophthalmological Society (COS) who perform LTP was conducted. Descriptive statistics and Chi-square comparative analyses were performed on anonymous self-reported survey data.
Results: Of the 124 respondents, 34 (27.4%) had completed a glaucoma fellowship. Use of selective laser trabeculoplasty (SLT) (94.4%) was preferred over argon laser trabeculoplasty (ALT) (5.6%). The most frequently cited reasons for SLT preference were less damage to the trabecular meshwork (30.7%), availability (16.2%), and repeatability (16.2%). In all, 47.6% of the respondents performed LTP concurrently with medical treatment, 33.9% used it after medical treatment, and 17.7% used it as first-line treatment. The majority (87.1%) of respondents believed that SLT is effective when repeated. In suitable patients, 41.9% of the respondents stated that on average they repeat SLT once, 26.6% twice, and 19.4% more than twice. Of those who repeat SLT on patients, 80.7% found repeat SLT treatments have good outcomes for patients. In all, 105 (84.7%) ophthalmologists responded that they would benefit from an LTP practice guideline. Significantly more ophthalmologists without glaucoma fellowships perceived they would benefit from a practice guideline (p value <0.001).
Conclusion: This survey provides valuable practical information on how LTP is used in the treatment of glaucoma in Canada.
Clinical Significance: The findings may serve as a baseline survey to trend future practices.
How To Cite This Article: Lee EY, Farrokhyar F, Sogbesan E. Laser Trabeculoplasty Perceptions and Practice Patterns of Canadian Ophthalmologists. J Curr Glaucoma Pract 2020;14(3):81-86.
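The chi-square comparisons in this survey (for example, perceived benefit of a guideline by fellowship training) reduce to a test on a 2x2 table. The sketch below uses hypothetical cell counts that merely respect the reported margins (34 fellowship-trained of 124 respondents, 105 perceiving benefit); the exact split is not given in the abstract:

# Hedged sketch: chi-square test on a 2x2 contingency table, as in the
# fellowship vs. perceived-guideline-benefit comparison. The cell counts are
# hypothetical; only the row and column totals match the abstract.
from scipy.stats import chi2_contingency

#        would benefit, would not benefit
table = [[20, 14],   # glaucoma fellowship (n=34)
         [85, 5]]    # no glaucoma fellowship (n=90)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")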
Answer: Yes, repeating trabeculoplasty can be useful. A study on the effect of a second trabeculoplasty with argon laser (ALT) showed that a second ALT can result in a decrease of intraocular pressure (IOP) by 26%. In a subgroup where both ALTs were carried out with the same parameters, the IOP fell by an average of 16% after the first ALT and 28% after the second, suggesting that repeating ALT can yield good results and may even lead to greater decreases in IOP (PUBMED:11151276). Additionally, the majority of Canadian ophthalmologists surveyed believed that selective laser trabeculoplasty (SLT) is effective when repeated, and a significant proportion of them reported good outcomes for patients upon repeating SLT (PUBMED:33867755). |
Instruction: Management of uncomplicated skull fractures in children: is hospital admission necessary?
Abstracts:
abstract_id: PUBMED:9792964
Management of uncomplicated skull fractures in children: is hospital admission necessary? Objective: This study was undertaken to determine the necessity for routine hospital admission of children with skull fractures, a normal neurological exam, a normal head CT, and no other injuries ('uncomplicated skull fracture').
Methods: A prospective study of closed-head injuries in children was done over a 2-year period at St. Louis Children's Hospital. All patients with closed head injuries underwent skull radiographs and a head CT scan. From this cohort, children with uncomplicated skull fractures were identified and studied. For comparison, a retrospective analysis was also performed of the hospital admission records of children admitted over a 5-year period (1990-1994) with the diagnosis of epidural hematoma (EDH) to identify the typical time intervals between injury and documentation of the lesion in these cases.
Results: Forty-four patients with uncomplicated skull fractures were identified; all had been admitted for observation. Mean age was 1.8 years. Average time between injury and hospital admission was 6.35 h, with half of this time being spent in the emergency room. Average LOS was 35 h, but 50% of patients were hospitalized less than 24 h. No patient in this study group suffered a complication related to their injury. Twenty-three patients with EDH had been admitted during the 5-year review period. Slightly more than one-half of patients had their EDH detected within 6 h of injury. The others were diagnosed more than 6 h after injury due to a delay in medical evaluation or a delay in obtaining a computed tomographic (CT) scan after an initial medical evaluation.
Conclusions: Patients with uncomplicated skull fractures, in the absence of recurrent emesis and/or evidence of child abuse, can be considered for discharge home. The definition of an uncomplicated skull fracture requires that a head CT be performed on these patients.
abstract_id: PUBMED:1796791
Minimal head injury: is admission necessary? The records of 138 patients admitted with a Glasgow Coma Score (GCS) of 14 or 15 following head injury were reviewed to assess the need for hospital observation and to determine whether obtaining a normal computerized tomography (CT) scan in the emergency department could have avoided admission. GCS was 15 in 103 patients (74%) and 14 in 35 (26%). Eighty-three patients were admitted for their head injury alone, and 55 had other injuries but would have required admission for their head injury. Loss of consciousness was documented in 51 per cent and suspected in another 29 per cent and was distributed equally regardless of GCS. Seven per cent (5/71) of skull x-rays were positive and were associated with CNS pathology in three patients. Skull x-rays in an additional four patients with positive CT findings were negative, including a patient with an epidural hematoma (EDH). Seventeen per cent (13/75) of CT scans were positive (contusions 5, subdural hematoma 3, subarachnoid hemorrhage 2, edema 2, EDH 1). Only the patient with the EDH required operative treatment. No patient with a normal CT scan went on to develop any neurosurgical problems, and 78 per cent of the patients admitted with isolated head injuries were discharged within 48 hours. Significant CNS pathology does occur following "minimal" head injuries. Skull x-rays are not helpful. The use of CT scanning appears to triage those patients requiring admission and in-hospital observation.
abstract_id: PUBMED:28922710
Paediatric mild head injury: is routine admission to a tertiary trauma hospital necessary? Background: Previous studies have shown that children with isolated linear skull fractures have excellent clinical outcomes and low risk of surgery. We wish to identify other injury patterns within the spectrum of paediatric mild head injury, which need only conservative management. Children with low risk of evolving neurosurgical lesions could be safely managed in primary hospitals.
Methods: We retrospectively analysed all children with mild head injury (i.e. admission Glasgow coma score 13-15) and skull fracture or haematoma on a head computed tomography scan admitted to Westmead Children's Hospital, Sydney over the years 2009-2014. Data were collected regarding demographics, clinical findings, mechanism of injury, head computed tomography scan findings, neurosurgical intervention, outcome and length of admission. Wilcoxon paired test was used with P value <0.05 considered significant.
Results: Four hundred and ten children were analysed. Three hundred and eighty-one (93%) children were managed conservatively, 18 (4%) underwent evacuation of an extradural haematoma (TBI surgery) and 11 (3%) needed fracture repair surgery. Two children evolved a surgical lesion 24 h post-admission. Only 17 of 214 children transferred from peripheral hospitals needed neurosurgery. Overall outcomes: zero deaths, one child needed brain injury rehabilitation and 63 needed child protection unit intervention. Seventy-five per cent of children with non-surgical lesions were discharged within 2 days. Eighty-three per cent of road transfers were discharged within 3 days.
Conclusions: Children with small intracranial haematomas and/or skull fractures who need no surgery only require brief inpatient symptomatic treatment and could be safely managed in primary hospitals. Improved tertiary hospital transfer guidelines with protocols to manage clinical deterioration could have cost benefit without risking patient safety.
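The methods above name a Wilcoxon paired test; as a hedged sketch of such a comparison (for example, paired length-of-stay figures), the snippet below uses invented values, since the abstract does not report the raw data:

# Hedged sketch: Wilcoxon signed-rank test on paired observations, as named
# in the methods. The day counts below are invented for illustration only.
from scipy.stats import wilcoxon

stay_group_a = [2, 1, 3, 2, 4, 1, 2, 3]  # e.g. length of stay, days
stay_group_b = [1, 1, 2, 1, 2, 1, 1, 2]

stat, p_value = wilcoxon(stay_group_a, stay_group_b)
print(f"W = {stat}, p = {p_value:.4f}")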
abstract_id: PUBMED:9522909
Management of minor head injuries: admission criteria, radiological evaluation and treatment of complications. The clinical course of patients admitted following minor head injuries (Glasgow Coma Score [GCS] 13-15) has been studied less extensively than that of severely head-injured patients. Admission criteria and the methods and indications for radiological evaluation are controversial. To study this further, a retrospective review of 633 patients admitted following such injuries to King Khalid University Hospital between 1986 and 1993 was undertaken. Their ages ranged from one month to 80 years (average 17 years). The mechanisms of injury were mainly falls in 339 (53.5%) cases and road traffic accidents in 234 (37%). None of the cases resulted from a non-accidental injury. Radiological evaluation was by skull radiography in 616 (97.3%) cases, followed by CT scan in 131 (20.7%). These studies revealed a skull fracture in 78 (12.7%) cases. Six of these 78 patients with skull fracture required a neurosurgical procedure during the first week post injury, representing 0.97% of the cases who had skull radiographs. A basal skull fracture was an ominous sign: 3 of the 5 cases with such fractures required ventilation, one of these accounting for the only mortality of this series, and a fourth developed meningitis. Of the cases studied, 3 (0.5%) developed growing skull fractures; all had sustained the initial injury during their first year of life. Other complications were as follows: 25 (3.9%) early post-traumatic seizures, 10 (1.6%) chronic subdural haematomas, 9 (1.4%) extradural haematomas, 2 (0.3%) post-traumatic hydrocephalus and one (0.2%) cerebral abscess. We conclude that patients who have an abnormal GCS, a neurological deficit, a post-traumatic seizure, or signs or suspicion of a basal or depressed skull fracture should be admitted for observation because of the risk of deterioration. Patients with a history of loss of consciousness or amnesia but none of the above may be discharged to be observed at home by a competent observer; otherwise, they will need admission for observation. Radiological evaluation, once indicated, must be by CT scan. There is no benefit from immediate skull radiography in the initial evaluation of minor head injuries. The indications for CT are an abnormal GCS, presence of a neurological deficit, signs of a basilar or depressed fracture, and persistent or progressive headache or vomiting. Infants with minor injuries should be followed up at least once after two to three months for possible growing fractures.
abstract_id: PUBMED:85015
Were you knocked out? In the period 1970-75 inclusive, 5152 patients were admitted to an accident hospital after an uncomplicated head injury. This group was compared with the 116 patients who needed craniotomy in the same period. It is suggested that precautionary admission of patients with minor head injuries is excessive.
abstract_id: PUBMED:17502197
Clinical algorithm and resource use in the management of children with minor head trauma. Purpose: There are no clear guidelines for the management of minor head injury, including the use of skull x-rays and computed tomography (CT) scans of the head. This is reflected in clinical practice by a wide variability in imaging study use and by the fact that some patients are discharged home from the emergency room (ER), whereas others are admitted to the hospital with or without a period of observation before admission. To address this issue, we proposed and applied a new protocol for minor head injury at our institution.
Methods: Between January 2004 and December 2005, 417 patients presented to the emergency department at our institution with minor head injury. All of them had fallen from less than 1 m. Every chart was retrospectively evaluated, and pertinent data were extracted.
Results: The mean age of the patients was 9.8 months (2 weeks to 32 months). One hundred fifty-three had a skull x-ray, and 13 had a CT scan of the head. Of the 153 patients who had a skull x-ray, only 15 had a skull fracture. Of these 15 patients, 3 also had a CT scan of the head that confirmed the diagnosis of skull fracture. Of the 13 CT scans that were done, only these 3 were positive. Eleven patients were kept in the ER for 6 hours for close observation, and 5 of these were eventually admitted. Overall, 8 patients were admitted to the hospital for observation. Of these 8 patients, 7 had a skull x-ray, of which 5 were positive. Only 2 of the admitted patients had a CT scan, and both were positive for a skull fracture. One of the CT scans also demonstrated a subdural hematoma along with subarachnoid hemorrhage. These 2 patients also had a positive skull x-ray. None of the patients who were admitted had headaches or neurologic impairments. The mean age of the patients admitted was 3.8 months (2 weeks to 12 months). The mean hospital stay was 1.2 days (1-3 days).
Conclusion: Only 10% of the skull x-rays and CT scans were positive for a skull fracture, which led to admission in half of these patients. The other half were mainly discharged from the ER after being observed. Several patients underwent a skull x-ray that we feel was not necessary in the management of their minor head injury. Of those who had a head CT scan, only one scan revealed additional information, and none had an impact on the final management. Observation in the ER could have been reasonable for most cases.
abstract_id: PUBMED:6814631
Admission after mild head injury: benefits and costs. Large numbers of patients are admitted to hospital in Britain after mild head injury in the hope of anticipating complications. Investigation of 1442 consecutive admissions with head injury to the Edinburgh Royal Infirmary yielded 56 intracranial haematomas. Of 865 patients who were alert and orientated in the accident and emergency department after having been briefly knocked out but who had no skull fracture, no focal neurological signs, and no history of headache or vomiting, only one developed an intracranial haematoma. In deciding which patients should be admitted, a skull fracture is a much more important risk factor than a history of brief unconsciousness. If criteria for admission took account of this, fewer patients would be admitted and the saving would be considerable.
abstract_id: PUBMED:8451883
Management of acute craniocerebral trauma. The first step in the management of acute head injury is the maintenance of circulation and respiration. In case of surgical intervention, correct timing is necessary. Postoperative monitoring and intensive care are important to maintain sufficient brain circulation.
abstract_id: PUBMED:6593623
Changes in the pattern of disease managed by general surgeons in New Zealand, 1940-80. Changes in the pattern of disease managed by general surgeons in New Zealand were assessed for the period 1940-80, using the public hospital admission and cancer incidence data of a representative selection of diseases. During the forty-year period, there was an increase in the incidence of four of the five cancers studied, and an increase in admission rate for non-specific abdominal pain, head injury (indicated by skull fracture), peripheral arterial disease (indicated by aortic aneurysm), gall-stone disease and pancreatitis, large bowel disorders and breast diseases. There was a decline in admission rate for appendicitis, stomach disorders and goitre.
abstract_id: PUBMED:10859793
Management of simple midfacial fractures, particularly in professional soccer players. Uncomplicated midfacial fractures represent a frequent and typical injury of soccer players in oral and maxillofacial surgery. Treating these fractures in professional players requires special modalities, especially regarding rapid rehabilitation. The examples of a nasal bone fracture, a malar bone fracture and a zygomatic arch fracture in 3 professional soccer players demonstrate the therapeutic principles for these most common uncomplicated midfacial fractures. In this context, the fabrication of individual facial masks is presented, which allows improved postoperative protection and therefore earlier participation of the player in training and matches.
Answer: The management of uncomplicated skull fractures in children and the necessity of hospital admission have been evaluated in several studies. A prospective study at St. Louis Children's Hospital found that children with uncomplicated skull fractures, a normal neurological exam, a normal head CT, and no other injuries could be considered for discharge home, suggesting that routine hospital admission may not be necessary for this group (PUBMED:9792964). Similarly, a study reviewing records of patients with minimal head injuries indicated that a normal CT scan could triage patients who do not require admission, as no patient with a normal CT scan developed neurosurgical problems (PUBMED:1796791).
A retrospective analysis at Westmead Children's Hospital, Sydney, showed that children with mild head injury and low-risk injury patterns could be safely managed in primary hospitals, as the majority of children with non-surgical lesions were discharged within 2 days (PUBMED:28922710). Another retrospective review at King Khalid University Hospital suggested that patients with minor head injuries and certain risk factors such as abnormal GCS, neurological deficit, or signs of basal or depressed skull fracture should be admitted for observation, while others could be discharged with home observation (PUBMED:9522909).
An earlier study suggested that precautionary admission for patients with minor head injuries might be excessive (PUBMED:85015), and a study from Edinburgh Royal Infirmary found that among patients who were alert and orientated after mild head injury without skull fracture or other risk factors, only one developed an intracranial hematoma, indicating that admission criteria could be more selective (PUBMED:6814631).
In summary, the literature suggests that children with uncomplicated skull fractures and a normal CT scan may not require routine hospital admission and could be safely managed with home observation or in primary hospitals, provided there are protocols to manage clinical deterioration and appropriate follow-up is ensured. However, certain risk factors may necessitate hospital admission for observation. |
Instruction: Is high grade prostatic intraepithelial neoplasia still a risk factor for adenocarcinoma in the era of extended biopsy sampling?
Abstracts:
abstract_id: PUBMED:20438403
Is high grade prostatic intraepithelial neoplasia still a risk factor for adenocarcinoma in the era of extended biopsy sampling? Aims: There is controversy regarding the role of high grade prostatic intraepithelial neoplasia (HGPIN) on prostatic needle biopsy (PNB) as a risk factor for prostatic adenocarcinoma. We utilise a large Canadian database to determine whether HGPIN detected on extended PNB is a significant risk factor for prostatic adenocarcinoma.
Methods: Pathological findings from PNBs from 12,304 men who underwent initial PNB during an 8-year period were analysed. Patients were included in the study if their initial diagnosis was HGPIN alone or a benign diagnosis, if at least one follow-up PNB was performed, and if both the initial and follow-up PNB contained at least 10 prostate cores.
Results: In the benign group of 105 patients and the HGPIN group of 120 patients, 14.1% and 20.8% were diagnosed with prostatic adenocarcinoma, respectively. When the HGPIN group was further subdivided into unifocal (1 core) and multifocal (≥2 cores) groups, 9.4% and 29.9% developed prostatic adenocarcinoma, respectively (p < 0.0001). Cox regression analysis adjusting for age and prostate specific antigen (PSA) confirmed the significance of HGPIN as a risk factor for prostatic adenocarcinoma (p = 0.0045).
Conclusions: Patients with an initial diagnosis of multifocal HGPIN on extended PNB are at a greater risk for subsequent prostatic adenocarcinoma than those with unifocal HGPIN or benign diagnoses.
abstract_id: PUBMED:19524976
Multifocal high grade prostatic intraepithelial neoplasia is a significant risk factor for prostatic adenocarcinoma. Purpose: There is debate in the literature on the role of high grade prostatic intraepithelial neoplasia as a risk factor for subsequent prostatic adenocarcinoma detection on prostatic needle biopsy. We determined whether high grade prostatic intraepithelial neoplasia on initial prostatic needle biopsy is an independent risk factor for prostatic adenocarcinoma and whether differences exist between prostatic adenocarcinoma in patients with previous high grade prostatic intraepithelial neoplasia and those with a benign diagnosis.
Materials And Methods: Pathological findings in prostatic needle biopsies in 12,304 men who underwent initial prostatic needle biopsy in an 8-year period were analyzed. Patients were included in the analysis when the initial diagnosis was high grade prostatic intraepithelial neoplasia alone or a benign diagnosis and at least 1 followup prostatic needle biopsy was performed. The primary study outcome was prostatic adenocarcinoma and secondary outcome measurements were cancer characteristics, such as Gleason score and extent of tissue involvement with prostatic adenocarcinoma.
Results: In the high grade prostatic intraepithelial neoplasia group of 564 patients and the benign group of 845, 27.48% and 22.01%, respectively, were diagnosed with prostatic adenocarcinoma on followup prostatic needle biopsy (p = 0.02). When age, prostate specific antigen and sampling extent were adjusted for, the adenocarcinoma risk after an initial diagnosis of high grade prostatic intraepithelial neoplasia remained significant (OR 1.38, p = 0.03). The risk was related to the extent of high grade prostatic intraepithelial neoplasia in the initial sample with a greater likelihood of adenocarcinoma when multiple prostatic sites were involved by high grade prostatic intraepithelial neoplasia. Patients in whom prostatic adenocarcinoma developed after a benign diagnosis on initial prostatic needle biopsy had greater tumor volume. However, mean followup was longer in the benign group than in the high grade prostatic intraepithelial neoplasia group (2.35 vs 1.36 years).
Conclusions: Patients with an initial diagnosis of high grade prostatic intraepithelial neoplasia, especially when multifocal, are at greater risk for subsequent prostatic adenocarcinoma than those with a benign diagnosis. Results suggest that followup should be more rigorous in patients with multifocal high grade prostatic intraepithelial neoplasia.
abstract_id: PUBMED:22670187
High-grade prostatic intraepithelial neoplasia. High-grade prostatic intraepithelial neoplasia (HGPIN) has been established as a precursor to prostatic adenocarcinoma. HGPIN shares many morphological, genetic, and molecular signatures with prostate cancer. Its predictive value for the development of future adenocarcinoma during the prostate-specific antigen screening era has decreased, mostly owing to the increase in prostate biopsy cores. Nevertheless, a literature review supports that large-volume HGPIN and multiple cores of involvement at the initial biopsy should prompt a repeat biopsy of the prostate within 1 year. No treatment is recommended for HGPIN to slow its progression to cancer.
abstract_id: PUBMED:15780372
High-grade prostatic intraepithelial neoplasia in needle biopsy as risk factor for detection of adenocarcinoma: current level of risk in screening population. Objectives: To assess the current incidence of prostate carcinoma detection in serial biopsies in a prostate-specific antigen-based screening population after a diagnosis of isolated high-grade prostatic intraepithelial neoplasia (HG-PIN) in needle biopsy tissue.
Methods: We retrospectively identified 190 men with a diagnosis of isolated HG-PIN in needle biopsy tissue. Most men (86%) were diagnosed from 1996 to 2000. Logistic regression analysis was used to predict the presence of carcinoma in these 190 men and in a control group of 1677 men with only benign prostatic tissue in needle biopsy tissue.
Results: The cumulative risk of detection of carcinoma on serial sextant follow-up biopsies was 30.5% for those with isolated HG-PIN compared with 26.2% for the control group (P = 0.2). Patient age (P = 0.03) and serum prostate-specific antigen level (P = 0.02) were significantly linked to the risk of cancer detection, but suspicious digital rectal examination findings (P = 0.1), the presence of HG-PIN (P = 0.2), and the histologic attributes of PIN were not (all with nonsignificant P values). HG-PIN found on the first repeat biopsy was associated with a 41% risk of subsequent detection of carcinoma compared with an 18% risk if benign prostatic tissue was found on the first repeat biopsy (P = 0.01).
Conclusions: The results of our study have shown that the current level of risk for the detection of prostate carcinoma in a screened population is 30.5% after a diagnosis of isolated HG-PIN in a needle biopsy. This risk level is lower than the previously reported risk of 33% to 50%. HG-PIN is a risk factor for carcinoma detection only when found on consecutive sextant biopsies. The data presented here should prompt reconsideration of repeat biopsy strategies for HG-PIN, and re-evaluation of the absolute necessity of repeat biopsy for all patients with HG-PIN.
abstract_id: PUBMED:7684999
Significance of high-grade prostatic intraepithelial neoplasia on needle biopsy. We studied 33 cases with an initial needle biopsy of the prostate that showed only high-grade prostatic intraepithelial neoplasia (PIN 2-3), for which follow-up biopsies were available. Twenty-four men (73%) were shown to have adenocarcinoma either on a simultaneous (14 patients) or subsequent (10 patients) biopsy. The grade of PIN (grade 2 v 3), rectal examination findings, and transrectal ultrasound results proved not to be significantly different in patients with proven adenocarcinoma compared with those without proven carcinoma. In contrast, serum prostate-specific antigen (PSA) concentrations were elevated in 90% of patients with carcinoma compared with only 50% of those with a benign follow-up biopsy. Persistent elevation of serum PSA concentration was seen in only one of three patients with serial PSA measurements and a benign follow-up biopsy. Notably, all patients with carcinoma for whom we had serial measurements of serum PSA levels had persistent elevation. The finding of high-grade PIN on needle biopsy often represents a sampling problem with carcinoma nearby. Consequently, the finding of high-grade PIN on needle biopsy merits vigorous follow-up, including rebiopsy. In particular, patients with increased serum PSA appear to be at greater risk of harboring prostatic adenocarcinoma. However, a significant number of patients with high-grade PIN on initial biopsy may not have evidence of carcinoma on repeat biopsy. Thus, radical prostatectomy or radiotherapy for PIN is not warranted.
abstract_id: PUBMED:12856644
Significance of high-grade prostatic intraepithelial neoplasia on prostate biopsy. The early diagnosis of prostate cancer has been facilitated by the development of serum prostate-specific antigen (PSA) testing and evolution in transrectal ultrasound-guided biopsy of the prostate. Over a decade has passed since the initial recommendations for systematic sextant sampling of the prostate to increase the accuracy of cancer detection. Subsequently, variations in the number and location of biopsies have been proposed to maximize prostate cancer detection and obtain more complete information regarding tumor grade, tumor volume, and local stage. Although current biopsy strategies provide a wide sampling of the prostate gland, biopsy histology may not be conclusive for either the presence or absence of adenocarcinoma. High-grade prostatic intraepithelial neoplasia (HGPIN) is found in a significant fraction of patients undergoing transrectal prostate biopsies. In this article, we discuss the significance of high-grade prostatic intraepithelial neoplasia and other abnormal histology findings and current evidence addressing the presence of cancer and need for additional prostate biopsies.
abstract_id: PUBMED:11490235
Repeat biopsy strategy in patients with atypical small acinar proliferation or high grade prostatic intraepithelial neoplasia on initial prostate needle biopsy. Purpose: Isolated high grade prostatic intraepithelial neoplasia and/or atypical small acinar proliferation on prostate biopsy increases the risk of identifying cancer on repeat biopsy. We report the results of repeat prostate biopsy for high grade prostatic intraepithelial neoplasia and/or atypical small acinar proliferation, and propose an optimal repeat biopsy strategy.
Materials And Methods: Of 1,391 men who underwent standard systematic sextant biopsy of the prostate 137 (9.8%) had isolated high grade prostatic intraepithelial neoplasia or atypical small acinar proliferation, including 100 who underwent repeat prostate biopsy within 12 months of the initial biopsy.
Results: Adenocarcinoma was detected in 47 of the 100 patients who underwent repeat biopsy. The initial biopsy site of high grade prostatic intraepithelial neoplasia and/or atypical small acinar proliferation matched the sextant location of cancer on repeat biopsy in 22 cases (47%). Repeat biopsy directed only to the high grade prostatic intraepithelial neoplasia and/or atypical small acinar proliferation site on initial biopsy would have missed 53% of cancer cases. In 12 of the 47 men (26%), cancer was limited to the side of the prostate contralateral to the side of high grade prostatic intraepithelial neoplasia and/or atypical small acinar proliferation. Of the 31 patients with cancer in whom the transition zone was sampled, cancer was limited to the transition zone in 4 (13%) and evident at other biopsy sites in 13 (42%). The only significant predictor of positive repeat biopsy was mean prostate specific antigen velocity ± standard error (1.37 ± 1.4 versus 0.52 ± 0.8 ng/ml per year, p < 0.001).
Conclusions: Patients with isolated high grade prostatic intraepithelial neoplasia and/or atypical small acinar proliferation on prostate biopsy are at 47% risk for cancer on repeat biopsy. The optimal repeat biopsy strategy in this setting should include bilateral biopsies of the standard sextant locations. We also strongly recommend that transition zone sampling should be considered.
abstract_id: PUBMED:15028446
Can the number of cores with high-grade prostate intraepithelial neoplasia predict cancer in men who undergo repeat biopsy? Objectives: To evaluate whether the presence of, or the number of cores containing, high-grade prostatic intraepithelial neoplasia (PIN) found in men who underwent initial extended multisite biopsy could predict which men would have prostate cancer on subsequent repeat biopsies.
Methods: Between June 1997 and January 2003, 1086 men underwent initial prostate biopsy for early detection of prostate cancer using an extended multisite biopsy scheme. Of these, 175 men without cancer underwent at least one repeat biopsy (range one to three; median interval between biopsies, 3 months). Among these 175 patients, 47 had high-grade PIN on initial biopsy.
Results: The initial extended biopsy identified cancer in 33.8% (367 of 1086) and high-grade PIN in 20.8% (226 of 1086). The incidence of high-grade PIN only in patients found to have cancer on initial biopsy was 29.7% (109 of 367). The presence of high-grade PIN was associated with concurrent prostate cancer at the initial biopsy (P <0.0001). Overall, repeat biopsy identified cancer in 18.3% of the 175 men. Of the 47 men with high-grade PIN, 5 (10.6%) were found to have cancer on repeat biopsy. The number of biopsy specimens positive for high-grade PIN on initial biopsy was not associated with the likelihood of prostate cancer on repeat biopsy. Multivariate logistic regression analysis showed that neither the presence of high-grade PIN nor the number of cores containing high-grade PIN on initial biopsy were predictors for prostate cancer on repeat biopsy.
Conclusions: The number of cores positive for high-grade PIN was not predictive for cancer on repeat biopsy.
abstract_id: PUBMED:15105651
High-grade prostatic intraepithelial neoplasia on needle biopsy: risk of cancer on repeat biopsy related to number of involved cores and morphologic pattern. The importance of isolated high-grade prostatic intraepithelial neoplasia (HGPIN) on needle biopsy is its association with synchronous invasive carcinoma. The relevance of this relationship has been called into question in recent years. In our study, we examined whether the histologic subtype of HGPIN (ie, tufting, micropapillary, cribriform, flat) and/or the number of core biopsies involved by HGPIN was predictive of a subset of men who were at higher risk of having invasive carcinoma on follow-up biopsies. We examined 200 sets of needle biopsies with a diagnosis of isolated HGPIN. Patient age ranged from 46 to 90 years (mean 66.4 years). The breakdown of the histologic subtypes of HGPIN is as follows: tufting 59%, micropapillary 34.3%, cribriform 6.2%, and flat 0.5%. A total of 132 patients (66%) had follow-up biopsies. Prostatic adenocarcinoma was identified in 28.8% of patients, with 89.5% of cancers identified on the first two follow-up biopsies. For men who had two or more cores with HGPIN on the initial biopsy, 35.9% eventually had cancer on follow-up, whereas men with only single-core involvement had cancer in 22% of cases. Men with tufting/flat HGPIN on the initial biopsy had cancer on follow-up in 31.9% of cases, whereas the micropapillary/cribriform subtype was associated with cancer in 22% of follow-up biopsies. The histologic findings on the first repeat biopsy can be quite informative as to the risk of synchronous invasive carcinoma. Of the men with HGPIN on the first repeat biopsy, 32% eventually had cancer on follow-up. Additionally, if multiple cores were involved by HGPIN on the first repeat biopsy, the risk of finding cancer was 50%, regardless of single or multiple core involvement on the initial biopsy. Men with a benign diagnosis on the first repeat biopsy had a 14% risk of having cancer on follow-up. These data indicate that multiple-core involvement by HGPIN, on both the initial and the first repeat biopsy, defines a subset of men who are at increased risk of harboring synchronous invasive carcinoma. The histologic subtype of PIN does not appear to be as informative.
abstract_id: PUBMED:9535395
Diagnostic effect of complete histologic sampling of prostate needle biopsy specimens. In 1997, approximately 1 million 18-gauge prostate needle core biopsies were performed in the United States. Yet limited data exist on the effect of histologic sampling on detection of carcinoma in needle biopsy tissue, and no data have been published on the diagnostic yield of complete histopathologic examination of prostate needle biopsy specimens. We sought to evaluate the diagnostic effect of obtaining additional sections after a nonmalignant diagnosis was given on three initial slides. This was a prospective study of 200 consecutively identified cases. Complete histologic examination of all needle biopsy tissue from 100 cases diagnosed as atypia (encompassing high-grade prostatic intraepithelial neoplasia [PIN] and focal glandular atypia) on the initial three slides was compared with complete examination for a control population of 100 cases diagnosed as benign prostatic tissue on the initial three slides. New histologic abnormalities in levels were characterized as to diagnostic category, distribution in additional slides and morphometrically determined size. Complete histologic sampling of prostate needle biopsy specimens with serial sections entirely through the paraffin block required a mean of 30 slides per block, with a mean of 4 sections per slide. In 17 (17%) cases with atypia diagnosed on the initial three slides, a new histologic abnormality was detected in levels. In 4 (10%) of 40 cases of focal glandular atypia, definitive carcinoma was present on additional sections, including the first additional slide. In no case with a diagnosis of benign prostatic tissue (n = 100) or high-grade PIN (n = 60) on the three initial slides was carcinoma diagnosed on additional slides. Additional histologic sampling after a diagnosis of isolated high-grade PIN does not seem necessary; these patients with high-grade PIN should undergo rebiopsy. Because of the profound consequences of a definitive diagnosis of prostatic carcinoma, we recommend obtaining a single additional slide with several 3-μm sections after a diagnosis of focal glandular atypia has been given for three initial slides of needle biopsy specimens from the prostate.
Answer: Yes, high grade prostatic intraepithelial neoplasia (HGPIN) is still considered a risk factor for adenocarcinoma even in the era of extended biopsy sampling. Studies have shown that patients with an initial diagnosis of HGPIN, particularly when multifocal, are at a greater risk for subsequent prostatic adenocarcinoma compared to those with a benign diagnosis (PUBMED:20438403; PUBMED:19524976). Multifocal HGPIN, where multiple cores are involved, has been associated with a higher likelihood of developing prostatic adenocarcinoma (PUBMED:20438403; PUBMED:19524976). Cox regression analysis adjusting for age and prostate-specific antigen (PSA) confirmed the significance of HGPIN as a risk factor for prostatic adenocarcinoma (PUBMED:20438403).
Furthermore, the predictive value of HGPIN for the development of future adenocarcinoma has decreased due to the increase in prostate biopsy cores, but literature still supports that large-volume HGPIN and multiple cores of involvement at the initial biopsy should prompt a repeat biopsy of the prostate within 1 year (PUBMED:22670187). No treatment is recommended for HGPIN to slow its progression to cancer, indicating the importance of monitoring rather than immediate intervention (PUBMED:22670187).
In summary, despite the advancements in biopsy techniques and the increase in the number of cores sampled, HGPIN remains a significant risk factor for the development of prostatic adenocarcinoma, with multifocal involvement being particularly concerning. |
Instruction: Is very high C-reactive protein in young adults associated with indicators of chronic disease risk?
Abstracts:
abstract_id: PUBMED:24485478
Is very high C-reactive protein in young adults associated with indicators of chronic disease risk? Background: Cases with very high C-reactive protein (CRP > 10 mg/L) are often dropped from analytic samples in research on risk for chronic physical and mental illness, but this convention could inadvertently result in excluding those most at risk. We tested whether young adults with very high CRP scored high on indicators of chronic disease risk. We also tested intergenerational pathways to and sex-differentiated correlates of very high CRP.
Methods: Data came from Waves I (ages 11-19) and IV (ages 24-34) of the National Longitudinal Study of Adolescent Health (N=13,257). At Wave I, participants' parents reported their own education and health behaviors/health. At Wave IV, young adults reported their socioeconomic status, psychological characteristics, reproductive/health behaviors and health; trained fieldworkers assessed BMI, waist circumference, blood pressure, and medication use, and collected bloodspots from which high-sensitivity CRP (hs-CRP) was assayed.
Results: Logistic regression analyses revealed that many common indicators of chronic disease risk, including parental health/health behaviors reported 14 years earlier, were associated with very high CRP in young adults. Several of these associations attenuated with the inclusion of BMI. More than 75% of young adults with very high CRP were female. Sex differences in the associations of some covariates with very high CRP were observed.
Conclusion: Especially among females, the exclusion of cases with very high CRP could result in an underestimation of "true" associations of CRP with both chronic disease risk indicators and morbidity/mortality. In many instances, very high CRP could represent an extension of the lower CRP range when it comes to chronic disease risk.
abstract_id: PUBMED:19582378
Nail antioxidant trace elements are inversely associated with inflammatory markers in healthy young adults. Antioxidant intake may be linked to a reduction of the chronic low-grade inflammatory state related to obesity and several accompanying disorders such as insulin resistance, cardiovascular diseases, and metabolic syndrome. Thus, the aim of this study was to evaluate the potential associations between nail trace elements and several indicators in healthy young adults, with emphasis on the putative effect of antioxidant trace element intake on inflammation-related marker concentrations. This study enrolled 149 healthy young adults, whose anthropometrical and blood pressure values as well as lifestyle features were analyzed. Fasting blood samples were collected for the biochemical and inflammation-related measurements (C-reactive protein, tumor necrosis factor-alpha (TNF-alpha), interleukin (IL)-6, IL-18, and homocysteine). Nail samples were collected for the analysis of selenium, zinc, and copper concentrations. Our results showed that nail selenium was negatively associated with IL-18; nail zinc concentrations were inversely related to circulating IL-6, IL-18, and TNF-alpha, whereas nail copper (Cu) and Cu/selenium values were negatively correlated with homocysteine levels and the Cu/zinc ratio was unaffected. In conclusion, nail content of some trace elements related to antioxidant defense mechanisms seems to be associated with several inflammation-related markers linked to chronic diseases in apparently healthy young adults, which is of interest for understanding the role of antioxidant intake.
abstract_id: PUBMED:30767573
Body fat percentage is more strongly associated with biomarkers of low-grade inflammation than traditional cardiometabolic risk factors in healthy young adults - the Lifestyle, Biomarkers, and Atherosclerosis study. The primary aim was to appraise the relationship between body fat percentage and the inflammatory markers C-reactive protein (CRP) and orosomucoid in a population of young, non-smoking, healthy Swedish adults without any chronic diseases. A secondary aim was to assess whether these associations differed between women using estrogen contraceptives and those who did not. We assessed the association in linear regression models between body fat percentage based on a bio-impedance measurement and plasma concentrations of CRP and orosomucoid in men and women aged 18-26 years, n = 834. Statistically significant associations were found between body fat percentage and both biomarkers of inflammation, with β coefficients of 0.30 (95% CI 0.24-0.37) and 0.28 (0.22-0.35) for CRP and orosomucoid, respectively (p < .001). Adjustment for established risk factors marginally lowered the effect sizes (partial betas, 0.28 and 0.20, respectively), while the strong statistically significant associations remained. In the female cohort, estrogen- and non-estrogen-using subpopulations did not significantly differ in the correlations between body fat percentage and the inflammatory biomarkers, even when adjusted for established cardiometabolic risk factors. In conclusion, in healthy young adults, higher levels of body fat percentage are associated with elevations in plasma biomarkers of inflammation, suggesting that a systemic inflammatory process, promoting atherosclerosis, may commence already at this early stage in life. CRP and orosomucoid plasma concentrations differed between users and non-users of estrogen contraceptives, but both subgroups showed similar correlations between increasing body fat percentage and increasing plasma concentrations of the biomarkers of inflammation.
abstract_id: PUBMED:24338596
Plasma levels of 14:0, 16:0, 16:1n-7, and 20:3n-6 are positively associated, but 18:0 and 18:2n-6 are inversely associated with markers of inflammation in young healthy adults. Inflammation is a recognized risk factor for the development of chronic diseases, such as type 2 diabetes and atherosclerosis. Evidence suggests that individual fatty acids (FA) may have distinct influences on inflammatory processes. The goal of this study was to conduct a cross-sectional analysis to examine the associations between circulating FA and markers of inflammation in a population of young healthy Canadian adults. FA, high-sensitivity C-reactive protein (hsCRP), and cytokines were measured in fasted plasma samples from 965 young adults (22.6 ± 0.1 years). Gas chromatography was used to measure FA. The following cytokines were analyzed with a multiplex assay: regulated upon activation normal T cell expressed and secreted (RANTES/CCL5), interleukin 1-receptor antagonist (IL-1Ra), interferon-γ (IFN-γ), interferon-γ inducible protein 10 (IP-10), and platelet-derived growth factor β (PDGF-ββ). Numerous statistically significant associations (p < 0.05, corrected for multiple testing) were identified between individual FA and markers of inflammation using linear regression. Myristic (14:0), palmitic (16:0), palmitoleic (16:1n-7), and dihomo-γ-linolenic (20:3n-6) acids were positively associated with all markers of inflammation. In contrast, stearic acid (18:0) was inversely associated with hsCRP and RANTES, and linoleic acid (18:2n-6) was inversely associated with hsCRP, RANTES and PDGF-ββ. In conclusion, our results indicate that specific FA are distinctly correlated with various markers of inflammation. Moreover, the findings of this study suggest that FA profiles in young adults may serve as an early indicator for the development of future complications comprising an inflammatory component.
abstract_id: PUBMED:31377557
Elevated hs-CRP level is associated with depression in younger adults: Results from the Korean National Health and Nutrition Examination Survey (KNHANES 2016). Introduction: Reports on the association between the level of circulating high-sensitivity C-reactive protein (hs-CRP) and depression have been inconsistent. The aim of this study was to examine the association between hs-CRP and depression in a large sample.
Methods: This study used data obtained from a representative Korean sample of 5447 people who participated in the first (2016) year of the seventh Korean National Health and Nutrition Examination Survey (KNHANES VII-1). Depression was identified using a cutoff of 5 on the Patient Health Questionnaire-9 (PHQ-9), and a high hs-CRP level was defined as ≥ 3.0 mg/L.
Findings: Participants with high hs-CRP levels had a significantly higher rate of depression than those with low hs-CRP levels (25.1% vs. 19.8%, p = 0.007). Serum hs-CRP was independently associated with the PHQ-9 total score after adjusting for potentially confounding factors (B = 0.014; 95% CI = 0.008-0.020). Furthermore, after controlling for body mass index (BMI), smoking, alcohol use problems, hypertension, diabetes, dyslipidemia, chronic illness related to hs-CRP, and metabolic syndrome, an elevated hs-CRP level was significantly associated with an increased risk of depression (adjusted OR = 1.44; 95% CI = 1.01-2.07) in younger adults, but no significant association was observed among older adults.
Conclusion: These findings suggest a significant correlation between high hs-CRP levels and depression in younger adults. Further studies are necessary to investigate the age-specific association and the biological mechanism involved.
abstract_id: PUBMED:26512756
Increased RhoA/Rho-Kinase Activity and Markers of Endothelial Dysfunction in Young Adult Subjects with Metabolic Syndrome. Background: Metabolic syndrome, a chronic condition associated with higher risk of cardiovascular diseases, is increasingly prevalent in young adults. Dyslipidemia, proinflammatory cytokines, endothelial dysfunction signs, and RhoA/Rho-kinase (ROCK) activation are considered risk factors for cardiovascular diseases. The occurrence of these factors in young patients with metabolic syndrome but without type 2 diabetes or hypertension has not been fully studied. The objective of this study was to evaluate young subjects with enlarged waist circumference and dyslipidemia but without type 2 diabetes or hypertension for markers associated with a higher risk of cardiovascular diseases.
Methods: Thirty-two male patients aged 31 ± 1.3 years, diagnosed with metabolic syndrome according to the National Cholesterol Education Program Adult Treatment Panel III criteria for enlarged waist circumference, elevated triglycerides, and low HDL levels, but with blood pressure and fasting glucose within normal ranges, were evaluated for RhoA/ROCK activity in leukocytes, serum fatty acid methyl ester profile, proinflammatory cytokines, and oxidative stress markers in addition to thrombin generation and biochemical analysis. Age- and gender-matched healthy subjects were equivalently evaluated.
Results: Patients showed higher RhoA/ROCK activity, elevated levels of interleukin-6, soluble CD40L, monocyte chemoattractant protein, and high-sensitivity C-reactive protein (P < 0.001) as well as parameters of endogenous thrombin generation potential (P < 0.05) compared with healthy subjects. Increased thiobarbituric acid reactive substances, advanced oxidation protein products, and insulin levels and low nitric oxide bioavailability (P < 0.001) were also found in patients as compared with controls. Palmitic acid was one of the saturated fatty acids found to be significantly elevated in patients compared with control subjects (P = 0.0087).
Conclusions: Increased markers of cardiovascular risk are already present in young adults with metabolic syndrome but without type 2 diabetes or hypertension.
abstract_id: PUBMED:24513874
Self-rated health and C-reactive protein in young adults. Background: Poor self-rated health (SRH) is robustly associated with elevated inflammation and with morbidity and mortality in middle-aged and older adults. Less is known about associations between SRH and elevated inflammation during young adulthood and whether these linkages differ by sex.
Methods: Data came from the National Longitudinal Study of Adolescent Health. At Wave IV, young adults aged 24–34 reported their SRH, acute and chronic illnesses, and sociodemographic and psychological characteristics relevant to health. Trained fieldworkers assessed medication use, BMI, and waist circumference, and collected bloodspots from which high-sensitivity CRP (hs-CRP) was assayed. The sample size for the present analyses was N = 13,236.
Results: Descriptive and bivariate analyses revealed a graded association between SRH and hs-CRP: Lower ratings of SRH were associated with a higher proportion of participants with hs-CRP >3 mg/L and higher mean levels of hs-CRP. Associations between SRH and hs-CRP remained significant when acute and chronic illnesses, medication use, and health behaviors were taken into account. When BMI was taken into account, the association between SRH and hs-CRP fully attenuated in females; a small but significant association between SRH and hs-CRP remained in males.
Conclusion: Poor SRH and elevated hs-CRP are associated in young adults, adjusting for other health status measures, medication use, and health behavior. In males, SRH provided information about elevated hs-CRP that was independent of BMI. In females, BMI may be a better surrogate indicator of global health and pro-inflammatory influences compared to SRH.
abstract_id: PUBMED:32578801
Relationship between periodontitis and subclinical risk indicators for chronic non-communicable diseases. In view of the epidemiological relevance of periodontal disease and chronic noncommunicable diseases, the study aimed to evaluate the relationship between them through subclinical indicators of systemic risk in a population group with healthy habits, including alcohol and tobacco abstinence. A complete periodontal examination of six sites per tooth was performed in a sample of 420 participants from the Advento study (Sao Paulo) who underwent anthropometric and laboratory evaluation. Periodontitis was defined and classified based on the Community Periodontal Index score 3 (periodontal pocket = 4-5 mm) and score 4 (periodontal pocket ≥ 6 mm). The prevalence of mild/moderate and severe periodontitis was 20% and 8.2%, respectively. Both categories of periodontal disease were associated with significantly higher levels of triglycerides, C-reactive protein, calcium score, and calcium percentile, whereas blood glucose after a tolerance test was significantly higher among people with severe periodontitis and HDL-c levels were lower (p < 0.05). Young adults with severe periodontitis had a significantly higher prevalence of obesity, pre-diabetes, hypertension, and metabolic syndrome. Besides these conditions, older adults with severe periodontitis had a significantly higher prevalence of dyslipidemia and subclinical atherosclerosis. The group with periodontitis also had a higher coronary heart disease risk based on the PROCAM score (p < 0.05). The results indicated associations of periodontitis with several systemic indicators for chronic noncommunicable diseases, and highlighted the need for multiprofessional measures in the whole care of patients.
abstract_id: PUBMED:19596710
Vitamin C deficiency in a population of young Canadian adults. A cross-sectional study of the 979 nonsmoking women and men aged 20-29 years who participated in the Toronto Nutrigenomics and Health Study from 2004 to 2008 was conducted to determine the prevalence of serum ascorbic acid (vitamin C) deficiency and its association with markers of chronic disease in a population of young Canadian adults. High performance liquid chromatography was used to determine serum ascorbic acid concentrations from overnight fasting blood samples. A 1-month, 196-item food frequency questionnaire was used to assess dietary intakes. Results showed that 53% of subjects had adequate, 33% had suboptimal, and 14% had deficient levels of serum ascorbic acid. Subjects with deficiency had significantly higher measurements of mean C-reactive protein, waist circumference, body mass index, and blood pressure than did subjects with adequate levels of serum ascorbic acid. The odds ratio for serum ascorbic acid deficiency was 3.43 (95% confidence interval: 2.14, 5.50) for subjects who reported not meeting the recommended daily intake of vitamin C compared with those who did. Results suggest that 1 of 7 young adults has serum ascorbic acid deficiency, in part, because of unmet recommended dietary intakes. Furthermore, serum ascorbic acid deficiency is associated with elevated markers of chronic disease in this population of young adults, which may have long-term adverse health consequences.
abstract_id: PUBMED:31469876
Risk factors for nutrition-related chronic disease among adults in Indonesia. Objective: To conduct a secondary data analysis detailing the associations between sociodemographic and behavioral factors and nutrition-related chronic disease.
Methods: These analyses utilized 2014 data from the Indonesian Family Life Survey, a home-based survey that collected socioeconomic, dietary intake, physical activity, and biological data among adults. We explored four outcomes in relation to sociodemographic and behavioral determinants: 1) hypertension, 2) elevated high-sensitivity C-reactive protein (hs-CRP), 3) central obesity (these three being critical metabolic determinants in the progression to cardiovascular disease), and 4) type 2 diabetes. Hypertension was defined as systolic blood pressure ≥ 140 mmHg or diastolic blood pressure ≥ 90 mmHg or current use of antihypertensive medication. Elevated hs-CRP was defined as hs-CRP >3 mg/dL. Central obesity was defined as waist circumference ≥ 90 cm if male and waist circumference ≥ 80 cm if female, cutoffs which are specific to South Asia. Type 2 diabetes was defined as glycated hemoglobin ≥ 6.5%. We employed separate gender-stratified multivariate logistic regression models to test the associations between sociodemographic and behavioral determinants and each nutrition-related chronic disease outcome. All analyses employed sampling weights, which account for the survey design.
Results: In 2014, about 30% of adults were hypertensive and one-fifth had elevated hs-CRP. Approximately 70% of women had central obesity, and 11.6% of women and 8.9% of men had diabetes. Older age was consistently associated with nutrition-related chronic disease, and being overweight was associated with hypertension, elevated hs-CRP, and type 2 diabetes. Regular consumption of instant noodles (among women) and of soda (among men) was associated with elevated hs-CRP, and soda consumption was associated with central obesity among men.
Conclusions: Large segments of the adult population in Indonesia now have or are at risk for non-communicable disease. Our analyses provide preliminary empirical evidence that interventions that target healthful food intake (e.g. reduce the intake of ultra-processed foods) should be considered and that the reduction of overweight is critical for preventing chronic diseases in Indonesia.
Answer: Yes, very high C-reactive protein (CRP) levels in young adults are associated with indicators of chronic disease risk. Research has shown that young adults with very high CRP levels score high on various indicators of chronic disease risk, including parental health behaviors and health reported 14 years earlier. Many of these associations, however, attenuate with the inclusion of body mass index (BMI) in the analysis. It is also noted that more than 75% of young adults with very high CRP are female, suggesting a potential sex difference in the association between CRP levels and chronic disease risk (PUBMED:24485478).
Additional studies support the link between high CRP levels and chronic disease risk factors in young adults. For instance, body fat percentage has been found to be more strongly associated with biomarkers of low-grade inflammation, such as CRP, than traditional cardiometabolic risk factors in healthy young adults (PUBMED:30767573). Furthermore, specific fatty acids in the plasma have been positively or inversely associated with markers of inflammation, indicating that fatty acid profiles may serve as early indicators for the development of complications involving an inflammatory component (PUBMED:24338596).
Moreover, elevated hs-CRP levels have been associated with depression in younger adults, suggesting a significant correlation between high hs-CRP levels and mental health, which could be considered an aspect of chronic disease risk (PUBMED:31377557). Young adults with metabolic syndrome, a condition associated with higher cardiovascular disease risk, have also been found to exhibit increased markers of cardiovascular risk, including elevated levels of proinflammatory cytokines and high-sensitivity CRP (PUBMED:26512756).
In summary, very high CRP levels in young adults are indeed associated with indicators of chronic disease risk, and this association is influenced by various factors, including BMI, sex, and other health behaviors and conditions. |
Instruction: Social and economic consequences of obstetric fistula: life changed forever?
Abstracts:
abstract_id: PUBMED:17727854
Social and economic consequences of obstetric fistula: life changed forever? Objectives: To summarize the social, economic, emotional, and psychological consequences incurred by women with obstetric fistula; present the results of a meta-analysis for 2 major consequences, divorce/separation and perinatal loss; and report on improvements in health and self-esteem and on the possibility of social reintegration following successful fistula repair.
Methods: We conducted a review of the literature published between 1985 and 2005 on fistula in developing countries. We then performed a meta-analysis for 2 of the major consequences of having a fistula, divorce/separation and perinatal child loss.
Results: Studies suggest that surgical treatment usually closes the fistula and improves the physical and mental health of affected women.
Conclusion: With additional social support and counseling, women may be able to successfully reintegrate socially following fistula repair.
abstract_id: PUBMED:38243609
The economic consequences of obstetric fistula: A systematic search and narrative review. Background: Obstetric fistula develops from obstructed labor and is a devastating condition with significant consequences across several domains of a woman's life. This study presents a narrative review of the evidence on the economic consequences of obstetric fistula.
Methods: Three databases were searched, and search results were limited to English language papers published after 2003. Search results were reviewed for relevance based on title and abstract followed by full text review using specific inclusion and exclusion criteria. Bibliographies of papers were also scanned to identify relevant papers for inclusion. Data were extracted under three categories (defined a priori): the economic consequences of having the condition, the economic consequences of seeking care, and the macroeconomic impacts.
Results: The search returned 517 unique papers, 49 of which were included after screening. Main findings identified from the studies include women losing their jobs, becoming dependent on others, and losing financial support when relationships are lost. Seeking care was economically costly for families or unaffordable entirely. There were no studies describing the impact of fistula on national economies.
Conclusion: Economic consequences of obstetric fistula are multifaceted, pervasive, and are intertwined with the physical and psychosocial consequences of the condition. Understanding these consequences can help tailor existing fistula programs to better address the impacts of the condition. Further research to address the dearth of literature describing the macroeconomic impact of obstetric fistula will be critical to enhance the visibility of this condition on the health agendas of countries.
abstract_id: PUBMED:32807689
Psycho-social and economic reintegration of women operated on for urogenital fistula. Objective: To study the psycho-social and economic reintegration of women operated on for genital fistula in Congo.
Material And Methods: This was a descriptive observational study conducted in Brazzaville and Ewo, Republic of Congo, from April 1 to October 31, 2018. It included patients operated on for genital fistula between 2008 and 2017. Variables of interest were socio-demographic, reproductive and clinical characteristics. The analysis was performed using SPSS 20 software.
Results: Overall, 34 patients were studied, with ages ranging from 29 to 65 years and a median of 43 (39, 50) years. The context of fistula occurrence was obstetrical in 24 women (70%). The practice of an income-generating activity before, during and after fistula was 76%, 32% and 64%, respectively (P=0.0007). Concerning psychological status, self-esteem in these women went from 26% to 73% (P=0.0003) and the prevalence of suicidal thoughts went from 29% before fistula treatment to 0% after (P=0.0009). The tendency to self-isolate went from 44% before fistula cure to 3% after (P=0.00008). With regard to reproductive life, 54% of women had no desire for maternity and 17% had no desire for sexual intercourse. Only 26% of women benefited from psychological support.
Conclusion: In this series, we observed a resumption of income-generating activities in women who underwent surgical treatment of urogenital fistula, as well as a psychological recovery with an increase in self-esteem and a decrease in suicidal thoughts.
Level Of Evidence: 4.
abstract_id: PUBMED:37671506
The social, economic, emotional, and physical experiences of caregivers for women with female genital fistula in Uganda: A qualitative study. This study aimed to explore the firsthand experiences of informal primary caregivers of women with female genital fistula in Uganda. Caregivers who accompanied women for surgery at Mulago National Teaching and Referral Hospital were recruited between January and September 2015. Caregivers participated in in-depth interviews and focus groups. Data were analysed thematically and informed adaptation of a conceptual framework. Of 43 caregivers, 84% were female, 95% were family members, and most were married and formally employed. Caregivers engaged in myriad personal care and household responsibilities, and described being on call for an average of 22.5 h per day. Four overlapping themes emerged highlighting social, economic, emotional, and physical experiences and consequences. The caregiving experience was informed by specific caregiver circumstances (e.g. personal characteristics, care needs of their patient) and dynamic stressors/supports within the caregiver's social context. These results demonstrate that caregivers' lived social, economic, emotional, and physical experiences and consequences are influenced by social factors and by individual characteristics of both the caregiver and their patient. This study may inform programmes and policies that increase caregiving supports while mitigating caregiving stressors to enhance the caregiving experience, and ultimately ensure its feasibility, particularly in settings with constrained resources.
abstract_id: PUBMED:24103285
Clinical and economic consequences of pancreatic fistula after elective pancreatic resection. Background: Postoperative pancreatic fistula is the main cause of morbidity after pancreatic resection. This study aimed to quantify the clinical and economic consequences of pancreatic fistula in a medium-volume pancreatic surgery center.
Methods: Hospital records from patients who had undergone elective pancreatic resection in our department were identified. Pancreatic fistula was defined according to the International Study Group on Pancreatic Fistula (ISGPF). The consequences of pancreatic fistula were determined by treatment cost, hospital stay, and out-patient follow-up until the pancreatic fistula was completely healed. All costs of treatment were calculated in Euros. The cost increase index was calculated for pancreatic fistula of grades A, B, and C as multiples of the total cost for the no fistula group.
Results: Over 54 months, 102 patients underwent elective pancreatic resections. Forty patients (39.2%) developed pancreatic fistula, and 54 patients (52.9%) had one or more complications. The median length of hospital stay for the no fistula, grades A, B, and C fistula groups was 12.5, 14, 20, and 59 days, respectively. The hospital stay of patients with fistula of grades B and C was significantly longer than that of patients with no fistula (P<0.001). The median total cost of treatment was 4952, 4679, 8239, and 30,820 Euros in the no fistula, grades A, B, and C fistula groups, respectively.
Conclusions: The grading recommended by the ISGPF is useful for comparing the clinical severity of fistula and for analyzing the clinical and economic consequences of pancreatic fistula. Pancreatic fistula prolongs the hospital stay and increases the cost of treatment in proportion to the severity of the fistula.
abstract_id: PUBMED:31262289
The loss of dignity: social experience and coping of women with obstetric fistula, in Northwest Ethiopia. Background: Obstetric fistula is a debilitating condition resulted from poorly (un) managed prolonged obstructed labor. It has significant psychosocial and economic consequences on those affected and their families. Data regarding experiences and coping mechanisms of Ethiopian women with fistula is scarce.
Methods: A qualitative design was employed, using in-depth interviews with an open-ended interview guide. Eleven fistula patients waiting for surgical repair at the fistula treatment center of Gondar Specialized Referral Hospital were selected through typical case selection. The data were audio-taped, transcribed and translated from Amharic to English. Open Code version 4.03 was used to organize the data and identify themes for analysis.
Results: The age of the study participants ranged from 19 to 43 years. Ten of them were from rural areas. Regarding educational status, eight could not read and write. A similar number were either separated or divorced. Six of them had lived with obstetric fistula without treatment for one to five years. Five women attributed their condition to fate. The women faced challenges in role performance, marital and social relationships and economic capability. Frequent bathing, use of strips of old clothes as pads, self-isolation and hiding from being observed, wearing extra clothes as cover, increasing water intake and reducing hot drinks and fluids other than water were the ways they had devised to cope with the incontinence.
Conclusion: The study participants reported a deep sense of loss, diminished self-worth and multiple social challenges. They coped with the incontinence in various ways, some of which were ineffective and might have a continuing negative impact on women's quality of life even after corrective surgery. Developing bridging interventions for early identification and referral could reduce the period of women's suffering.
abstract_id: PUBMED:19249652
Social implications of obstetric fistula: an integrative review. Obstetric fistula is a devastating complication of obstructed labor that affects more than two million women in developing countries, with at least 75,000 new cases every year. Prolonged pressure of the infant's skull against the tissues of the birth canal leads to ischemia and tissue death. The woman is left with a hole between her vagina and bladder (vesicovaginal) or vagina and rectum (rectovaginal) or both, and has uncontrollable leakage of urine or feces or both. It is widely reported in scientific publications and the media that women with obstetric fistula suffer devastating social consequences, but these claims are rarely supported with evidence. Therefore, the true prevalence and nature of the social implications of obstetric fistula are unknown. An integrative review was undertaken to determine the current state of the science on social implications of obstetric fistula in sub-Saharan Africa.
abstract_id: PUBMED:29268711
"I am a person but I am not a person": experiences of women living with obstetric fistula in the central region of Malawi. Background: The consequences of living with obstetric fistula are multifaceted and very devastating for women, especially those living in poor resource settings. Due to uncontrollable leakages of urine and/or feces, the condition leaves women with peeling of skin on their private parts, and the wetness and smell subject them to stigmatization, ridicule, shame and social isolation. We sought to gain a deeper understanding of lived experiences of women with obstetric fistula in Malawi, in order to recommend interventions that would both prevent new cases of obstetric fistula as well as improve the quality of life for those already affected.
Methods: We conducted semi-structured interviews with 25 women with obstetric fistula at Bwaila Fistula Care Center in Lilongwe and in its surrounding districts. We interviewed twenty women at Bwaila Fistula Care Center; five additional women were identified through snowball sampling and were interviewed in their homes. We also interviewed twenty family members. To analyze the data, we used thematic analysis. Data were categorized using Nvivo 10. Goffman's theory of stigma was used to inform the data analysis.
Results: All the women in this study were living a socially restricted and disrupted life due to a fear of involuntary disclosure and embarrassment. Therefore, "anticipated" as opposed to "enacted" stigma was especially prevalent among the participants. Many lost their positive self-image due to incontinence and smell. As a way to avoid shame and embarrassment, these women avoided public gatherings; such as markets, church, funerals and weddings, thus losing part of their social identity. Participants had limited knowledge about their condition.
Conclusion: The anticipation of stigma by women in this study consequently limited their social lives. This fear of stigma might have arisen from previous knowledge of social norms concerning bowel and bladder control, which do not take into account an illness like obstetric fistula. This misconception might also have arisen from a lack of knowledge about the causes of the condition itself. There is therefore a need to create awareness and educate women and their communities about the causes of obstetric fistula, its prevention and treatment, which may help to prevent fistula as well as reduce all dimensions of stigma, and consequently increase dignity and quality of life for these women.
abstract_id: PUBMED:37746937
Impact of Beyond Fistula programming on economic, psychosocial and empowerment outcomes following female genital fistula repair: A retrospective study. Objective: To retrospectively assess changes in economic status, psychosocial status and empowerment among women who participated in Beyond Fistula reintegration programming following fistula repair.
Methods: We conducted a retrospective study among 100 Beyond Fistula program participants capturing sociodemographic characteristics, obstetric and fistula history, program participation, and our primary outcomes: economic status, psychosocial status, and empowerment via quantitative survey at two time points: before program participation and currently. Data were collected from November 2020 to July 2021 from 2013 to 2019 program participants. We compared outcomes across these two time points using paired t tests or McNemar's tests.
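As an illustrative aside (not part of the abstract above), the pre/post comparisons it describes can be sketched in Python: a paired t-test for a continuous outcome and McNemar's test for a paired binary outcome. All data, variable names and effect sizes below are hypothetical assumptions, not the study's records.

```python
# Hedged sketch of the paired pre/post comparisons described above.
# Hypothetical data only; 'esteem' and 'income' variables are assumed names.
import numpy as np
from scipy import stats
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
n = 100  # the study surveyed 100 program participants

# Continuous outcome (e.g. a self-esteem score) before and after programming
pre_esteem = rng.normal(4.0, 0.8, n)
post_esteem = pre_esteem + rng.normal(0.5, 0.6, n)
t_stat, p_val = stats.ttest_rel(pre_esteem, post_esteem)
print(f"paired t-test: t={t_stat:.2f}, p={p_val:.4f}")

# Paired binary outcome (e.g. 'has a current source of income', 0/1)
pre_income = rng.binomial(1, 0.19, n)
post_income = rng.binomial(1, 0.56, n)
# McNemar's test works on the 2x2 table of paired (pre, post) states
table = np.array([
    [np.sum((pre_income == 0) & (post_income == 0)),
     np.sum((pre_income == 0) & (post_income == 1))],
    [np.sum((pre_income == 1) & (post_income == 0)),
     np.sum((pre_income == 1) & (post_income == 1))],
])
print(mcnemar(table, exact=True))  # reports the test statistic and p-value
```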
Results: The proportion of individuals owning property (28.0% vs. 38.0%, P = 0.006), having a current source of income (19.0% vs. 56.0%, P < 0.001), and saving or investing income (11.0% vs. 37.0%, P < 0.001) increased significantly from pre- to post-programming. We also identified statistically significant increases from pre- to post-programming in self-esteem (5.0 [IQR 4.0-5.0] vs. 5.0 [IQR 5.0-5.0], P < 0.001), reintegration (53.0 [IQR 43.0-69.0] vs. 65.0 [IQR 51.0-72.0], P < 0.001) and level of input into household economic decision making (2.0 [SD 1.0] vs. 2.3 [SD 1.0], P = 0.004).
Conclusion: Beyond Fistula programming likely improved economic status, psychosocial status, and empowerment of participants. Post-surgical interventions incorporating a holistic approach can advance recovery through supporting psychosocial and economic wellbeing and should be offered to women undergoing genital fistula repair.
abstract_id: PUBMED:36841757
The psycho-social impact of obstetric fistula and available support for women residing in Nigeria: a systematic review. Background: Obstetric fistula continues to affect the social and mental wellbeing of women living in Nigeria, giving rise to poor maternal health outcomes. While the World Health Organisation (WHO) has recommended the introduction of psycho-social interventions in the management of women with obstetric fistula, psycho-social support for women living with obstetric fistula in Nigeria is limited. This systematic review aimed to synthesise the psycho-social impact of obstetric fistula on women living in Nigeria as well as the available psycho-social support for these affected women.
Methods: Following a keyword strategy, Medline, CINAHL, Google scholar, ScienceDirect, Cochrane library, PsychINFO, AMED, British Nursing database, Pubmed central, TRIP database, UK Pubmed central, socINDEX, Annual reviews, ISI Web of Science, Academic search complete, Credo reference, Sage premier and Scopus databases were searched alongside hand searching of articles. The inclusion criteria were set as articles published between 2000 and 2020, on the psychosocial consequences of obstetric fistula in Nigeria. The Critical Appraisal Skills Program (CASP) tool was used to appraise the quality of the included studies. The data was extracted and then analysed using narrative synthesis.
Results: 620 relevant citations were identified, and 8 studies were included. Women with obstetric fistula, living in Nigeria were found to be ostracised, abandoned by families and friends, stigmatised and discriminated against, which led to depression, loneliness, loss of self-esteem, self-worth and identity. Psycho-social interventions for women who experienced obstetric fistula are not widely available.
Conclusion: There is a need for the introduction of more rehabilitation and reintegration programs across the country. The psychosocial effect of obstetric fistula is significant and should be considered when developing interventions. Further, more research is needed to evaluate the sustainability of psychosocial interventions in Nigeria.
Answer: The social and economic consequences of obstetric fistula are profound and can indeed change a woman's life forever. Women with obstetric fistula often suffer from divorce or separation and perinatal child loss, as indicated by a meta-analysis (PUBMED:17727854). They may lose their jobs, become dependent on others, and lose financial support when relationships are lost (PUBMED:38243609). The condition can lead to a loss of dignity, diminished self-worth, and multiple social challenges, including challenges in role performance, marital and social relationships, and economic capability (PUBMED:31262289).
Affected women may experience social isolation, stigmatization, ridicule, and shame due to uncontrollable leakage of urine and/or feces (PUBMED:29268711). They often engage in coping mechanisms such as frequent bathing, using old clothes as pads, self-isolation, and avoiding public gatherings, which can have a continuing negative impact on their quality of life even after corrective surgery (PUBMED:31262289).
The psycho-social impact includes ostracism, abandonment by families and friends, stigma, discrimination, depression, loneliness, loss of self-esteem, self-worth, and identity (PUBMED:36841757). Despite the possibility of surgical treatment usually closing the fistula and improving physical and mental health, additional social support and counseling are crucial for successful social reintegration (PUBMED:17727854).
Caregivers of women with fistula also experience social, economic, emotional, and physical consequences, highlighting the need for programs and policies that support caregiving while mitigating stressors (PUBMED:37671506). Moreover, women who participated in Beyond Fistula reintegration programming showed significant improvements in economic status, psychosocial status, and empowerment following fistula repair (PUBMED:37746937).
In summary, obstetric fistula can have devastating social and economic consequences for women, affecting their personal identity, social relationships, and economic independence. While surgical repair can address the physical aspects of the condition, comprehensive support including psycho-social interventions is essential to address the full spectrum of consequences and facilitate reintegration into society (PUBMED:17727854; PUBMED:38243609; PUBMED:31262289; PUBMED:29268711; PUBMED:36841757; PUBMED:37671506; PUBMED:37746937). |
Instruction: The profession of public health informatics: still emerging?
Abstracts:
abstract_id: PUBMED:38422945
The public health profession in Spain: an urgent challenge to strengthen its practice. The recent health crises have highlighted the weakness of public health structures in Spain. The causes include the scarcity of economic resources and the delay in institutional modernization, compounded by weaknesses in training processes and employability. The Spanish Society of Public Health and Health Administration (SESPAS) has developed a white paper on the public health profession with the aim of contributing to strengthening professional practice. The sociodemographic characteristics of the associations federated to SESPAS were described, and the discourse of professionals was analyzed through six focus groups and 19 interviews (72 people). To agree on the conclusions and recommendations, a meeting was organized with 29 participants. The demographic and employment data of the 3467 people belonging to seven SESPAS societies show that, overall, about 60% were women and 40% were under 50 years of age. Undergraduate degrees were medicine (35.9%), nursing (17.4%) and pharmacy and veterinary medicine (10.4%). Key aspects of the meaning of public health, training, employability and career, and the institutionalization of public health were collected through interviews and focus groups. The final meeting agreed on 25 conclusions and 24 recommendations that aim to contribute to strengthening professionals and the public health profession in Spain. Some of them, related to training, employability and professional career, have been shared in a workshop at the School of Public Health of Menorca with public health officials from the Ministry of Health and some autonomous communities.
abstract_id: PUBMED:26038473
Perspectives of public health laboratories in emerging infectious diseases. The world has experienced an increased incidence and transboundary spread of emerging infectious diseases over the last four decades. We divided emerging infectious diseases into four categories, with subcategories in categories 1 and 4. The categorization was based on the nature and characteristics of pathogens or infectious agents causing the emerging infections, which are directly related to the mechanisms and patterns of infectious disease emergence. The factors or combinations of factors contributing to the emergence of these pathogens vary within each category. We also classified public health laboratories into three types based on function, namely, research, reference and analytical diagnostic laboratories, with the last category being subclassified into primary (community-based) public health and clinical (medical) analytical diagnostic laboratories. The frontline/leading and/or supportive roles to be adopted by each type of public health laboratory for optimal performance to establish the correct etiological agents causing the diseases or outbreaks vary with respect to each category of emerging infectious diseases. We emphasize the need, especially for an outbreak investigation, to establish a harmonized and coordinated national public health laboratory system that integrates different categories of public health laboratories within a country and that is closely linked to the national public health delivery system and regional and international high-end laboratories.
abstract_id: PUBMED:11189723
Public health implications of emerging zoonoses. Many new, emerging and re-emerging diseases of humans are caused by pathogens which originate from animals or products of animal origin. A wide variety of animal species, both domestic and wild, act as reservoirs for these pathogens, which may be viruses, bacteria or parasites. Given the extensive distribution of the animal species affected, the effective surveillance, prevention and control of zoonotic diseases pose a significant challenge. The authors describe the direct and indirect implications for public health of emerging zoonoses. Direct implications are defined as the consequences for human health in terms of morbidity and mortality. Indirect implications are defined as the effect of the influence of emerging zoonotic disease on two groups of people, namely: health professionals and the general public. Professional assessment of the importance of these diseases influences public health practices and structures, the identification of themes for research and allocation of resources at both national and international levels. The perception of the general public regarding the risks involved considerably influences policy-making in the health field. Extensive outbreaks of zoonotic disease are not uncommon, especially as the disease is often not recognised as zoonotic at the outset and may spread undetected for some time. However, in many instances, the direct impact on health of these new, emerging or re-emerging zoonoses has been small compared to that of other infectious diseases affecting humans. To illustrate the tremendous indirect impact of emerging zoonotic diseases on public health policy and structures and on public perception of health risks, the authors provide a number of examples, including that of the Ebola virus, avian influenza, monkeypox and bovine spongiform encephalopathy. Recent epidemics of these diseases have served as a reminder of the existence of infectious diseases and of the capacity of these diseases to occur unexpectedly in new locations and animal species. The need for greater international co-operation, better local, regional and global networks for communicable disease surveillance and pandemic planning is also illustrated by these examples. These diseases have contributed to the definition of new paradigms, especially relating to food safety policies and more generally to the protection of public health. Finally, the examples described emphasise the importance of intersectorial collaboration for disease containment, and of independence of sectorial interests and transparency when managing certain health risks.
abstract_id: PUBMED:19297243
The profession of public health informatics: still emerging? Purpose: Although public health informatics (PHI) was defined in 1995, both then and still now it is an "emerging" profession. An emergent profession lacks a base of "technical specialized knowledge." Therefore, we analyzed MEDLINE bibliographic citation records of the PHI literature to determine if a base of technical, specialized PHI literature exists, which could lead to the conclusion that PHI has emerged from its embryonic state.
Method: A MEDLINE search for PHI literature published from 1980-2006 returned 16,942 records. Record screening by two subject matter experts netted 2493 PHI records that were analyzed by the intervals of previous PHI CBMs 96-4 and 2001-2 for 1980-1995 (I(1980)) and 1996-2000 (I(1996)), respectively, and a new, third interval of 2001-2006 (I(2001)).
Results: The distribution of records was 676 (I(1980)), 839 (I(1996)) and 978 (I(2001)). Annual publication rates were 42 (I(1980)), 168 (I(1996)), and 163 (I(2001)). Cumulative publications were accelerating. A subset of 19 (2.5%) journals accounted for 730 (29.3%) of the records. The journal subset average (+/-SD) annual publication rates of 0.7+/-0.6 (I(1980)), 2.9+/-1.9 (I(1996)), and 3.1+/-2.7 (I(2001)) were different, F(3, 64)=7.12, p<.05. Only I(1980) was different (p<.05) from I(1996) or I(2001). The average (+/-SE) annual rate of increase for all journals (8.4+/-0.8 publications per year) differed from that of the subset of 19 (2.7+/-0.3), t(36)=5.74, p<.05. MeSH time to first indexing narrowed from 7.3 (+/-4.3) years to within the year the term was introduced (0.5+/-0.8 years), t(30)=7.03, p<.05.
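As a hedged illustration of the interval comparison reported in these results, the sketch below computes per-journal annual publication rates for three intervals and tests for differences with a one-way ANOVA followed by pairwise t-tests. The counts are invented placeholders for the 19 core journals, not the MEDLINE records.

```python
# Sketch: compare per-journal annual publication rates across intervals.
# Invented Poisson counts stand in for the study's MEDLINE record counts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = {"I1980": 16, "I1996": 5, "I2001": 6}   # interval lengths in years

# Hypothetical total publication counts per journal (19 journals) per interval
counts = {name: rng.poisson(lam, 19)
          for name, lam in [("I1980", 11), ("I1996", 15), ("I2001", 19)]}
rates = {name: counts[name] / years[name] for name in counts}  # per year

f_stat, p_val = stats.f_oneway(rates["I1980"], rates["I1996"], rates["I2001"])
print(f"one-way ANOVA: F={f_stat:.2f}, p={p_val:.4f}")

# Pairwise follow-ups (the abstract reports only I1980 differing)
for a, b in [("I1980", "I1996"), ("I1980", "I2001"), ("I1996", "I2001")]:
    t, p = stats.ttest_ind(rates[a], rates[b])
    print(f"{a} vs {b}: t={t:.2f}, p={p:.4f}")
```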
Conclusion: A core set of journals, the proliferation of PHI articles in varied and numerous journals, and rapid uptake of MeSH suggest PHI is acquiring professional authority and now should not be tagged as an "emerging" profession.
abstract_id: PUBMED:18378340
Emerging zoonoses: the challenge for public health and biodefense. The concept of new and emerging diseases has captured the public interest and has revitalized the public health infectious disease research community. This interest has also resulted in competition for funding and turf wars between animal health and public health scientists and public officials and, in some cases, has delayed and hindered progress toward effective prevention, control and biodefense. There is a dynamic list of outbreaks causing substantial morbidity and mortality in humans and often in the reservoir animal species. Some agents have the potential to grow into major epidemics. Many determinants influence the emergence of diseases of concern, and addressing them requires current understanding of the nature of agent persistence and spread. Additional global factors must be added to plans for prevention and control. To this complex mix has been added the potential for accidental or malicious release of agents. The nature of emerging infectious agents and their impact is largely unpredictable. Models that strive to predict the dynamics of agents may be useful but can also blind us to increasing disease risks if those risks do not match a specific model. Field investigations of early events will be critical and should drive prevention and control actions. Many disease agents have developed strategies to overcome extremes of reservoir qualities such as population size and density. Every infectious agent spreads more easily when its hosts are closer together. Zoonoses must be dealt with at the interface of human and animal health using all available information. Lessons learned from the emergence of and response to agents like West Nile virus, H5N1 avian influenza, SARS and bovine spongiform encephalopathy, the cause of new-variant Creutzfeldt-Jakob disease in humans, must be used to create better plans for response and meet the challenge for public health and biodefense.
abstract_id: PUBMED:29207450
One Health Perspectives on Emerging Public Health Threats. Antimicrobial resistance and emerging infectious diseases, including avian influenza, Ebola virus disease, and Zika virus disease have significantly affected humankind in recent years. In the premodern era, no distinction was made between animal and human medicine. However, as medical science developed, the gap between human and animal science grew deeper. Cooperation among human, animal, and environmental sciences to combat emerging public health threats has become an important issue under the One Health Initiative. Herein, we presented the history of One Health, reviewed current public health threats, and suggested opportunities for the field of public health through better understanding of the One Health paradigm.
abstract_id: PUBMED:23896977
Public health as a distinct profession: has it arrived? This article reviews the elements consistent with the definition of a "profession" in the contemporary United States and argues that public health should be considered a distinct profession, recognizing that it has a unique knowledge base and career paths independent of any other occupation or profession. The Welch-Rose Report of 1915 prescribed education for public health professionals and assumed that, although at first the majority of students would be drawn from other professions, such as medicine, nursing, and sanitary engineering, public health was on its way to becoming "a new profession." Nearly a century later, the field of public health has evolved dramatically in the direction predicted. It clearly meets the criteria for being a "profession" in that it has (1) a distinct body of knowledge, (2) an educational credential offered by schools and programs accredited by a specialized accrediting body, (3) career paths that include autonomous practice, and (4) a separate credential, Certified in Public Health (CPH), indicative of self-regulation based on the newly launched examination of the National Board of Public Health Examiners. Barriers remain that challenge independent professional status, including the breadth of the field, more than one accrediting body, wide variation in graduate school curricula, and the newness of the CPH. Nonetheless, the benefits of recognizing public health as a distinct profession are considerable, particularly to the practice and policy communities. These include independence in practice, the ability to recruit the next generation, increased influence on health policy, and infrastructure based on a workforce of strong capacity and leadership capabilities.
abstract_id: PUBMED:15702711
Emerging zoonoses and pathogens of public health significance--an overview. Emerging zoonotic diseases have assumed increasing importance in public and animal health, as the last few years have seen a steady stream of new diseases, each emerging from an unsuspected quarter and causing severe problems for animals and humans. The reasons for disease emergence are multiple, but there are two main factors--expansion of the human population and globalisation of trade. Current issues such as the increasing movement of a variety of animal species, ecological disruption, uncultivatable organisms, and terrorism, all imply that emerging zoonotic diseases will in all probability, not only continue to occur, but will increase in the rate of their emergence. The recurring nature of the crises dictates that closer integration of veterinary and medical communities is warranted, along with improved education of the general public and policy makers.
abstract_id: PUBMED:25558694
Protecting public health in the age of emerging infections. Emerging and re-emerging infections cause huge concern among public health workers and international and national bodies such as the World Health Organization (WHO) and the U.S. National Institutes of Health (NIH). Indeed, scientists around the world express the view that despite the danger, research on these emerging virulent pathogens is crucial and must continue. While most of the studies underway are targeted at improving and protecting public health, some studies bear potentially serious risks resulting from misuse. These studies are defined as dual-use research of concern (DURC), where it is not evident that the benefits outweigh the risks. The H5N1 controversy has pushed various governments to institute new policies to govern such research. We describe the regulations that govern this emerging field of research in the United States and Israel, two countries that have taken leading stands on these issues. We suggest that the existing policies are able to mitigate many of the risks that this research encapsulates, yet more work is required--especially on the global level.
abstract_id: PUBMED:21717041
A world wide public health problem: the principal re-emerging infectious diseases. The extraordinary progress in the knowledge of infectious diseases and the discovery of antibiotics and effective vaccines are among the great achievements of the nineteenth and twentieth centuries. These achievements have led to a dramatic reduction in the levels of mortality from these diseases. According to the World Health Organization, the term "re-emerging infectious diseases" refers to infectious diseases which, although well known, have not been of recent public health importance. However, climate change, migration, changes in health services, antibiotic resistance, population increase, international travel, the increase in the number of immune-depressed patients, etc. have led to the re-emergence of these diseases. Climate changes are exposing sectors of the population to inadequate fresh air, water, food and resources for survival which, in consequence, provoke increases in both internal and international migration. In this particular period, characterized by globalization, the international community has become aware that the re-emergence of these diseases poses an important risk for public health and underlines the necessity of adopting appropriate strategies for their prevention and control. The re-emerging diseases of the twenty-first century are a serious problem for public health, and even though there has been enormous progress in medical science and in the battle against infectious diseases, they are still a long way from being brought under control. A well organized monitoring system would enable the epidemiological characteristics of infectious diseases to be analyzed and the success or otherwise of preventive interventions to be precisely evaluated. For this reason, the World Health Organization and the European Union have discussed the formation of a collaborative network for the monitoring and control of re-emerging diseases and have initiated special programmes. The battle between humanity and infectious disease has a history lasting thousands of years and is destined to continue in the future, emphasizing the need to adopt combined policies for the prevention and control of re-emerging diseases.
Answer: The profession of public health informatics (PHI) has been considered an "emerging" profession for some time. However, an analysis of MEDLINE bibliographic citation records of the PHI literature published from 1980-2006 suggests that PHI may no longer be in its embryonic state. The study found a core set of journals, a proliferation of PHI articles in varied and numerous journals, and rapid uptake of MeSH terms related to PHI. These findings indicate that PHI is acquiring professional authority and suggest that it should not be tagged as an "emerging" profession anymore (PUBMED:19297243).
This conclusion is reinforced by the broader recognition of public health as a distinct profession: the field has a distinct body of knowledge, educational credentials offered by schools and programs accredited by a specialized accrediting body, career paths that include autonomous practice, and a separate credential, Certified in Public Health (CPH), indicative of self-regulation based on the examination of the National Board of Public Health Examiners. These elements are consistent with the definition of a "profession" in the contemporary United States (PUBMED:23896977).
In summary, while PHI was once an emerging profession, the evidence from literature and the establishment of professional standards and credentials suggest that it has now arrived as a distinct profession within the field of public health. |
Instruction: Transurethral microwave thermotherapy in symptomatic benign prostatic hyperplasia: a possible association between androgen status and treatment result?
Abstracts:
abstract_id: PUBMED:9294621
Transurethral microwave thermotherapy in symptomatic benign prostatic hyperplasia: a possible association between androgen status and treatment result? Background: Nothing is yet known of possible endocrine effects of transurethral microwave thermotherapy (TUMT) or of possible influence of endocrine status on the result of thermotherapy.
Methods: Serum levels of testosterone (T), SHBG, estradiol, LH, and FSH were measured in 48 men with BPH before and 2-3 months after TUMT (Prostatron, Prostasoft 2.0; Technomed International, Lyon, France). Assessment of results was based on the patients' own estimations.
Results: The treatment did not alter hormone levels. Patients who reported response after 12 months (n = 21) had significantly lower outset levels of calculated free testosterone (fT) than in the nonresponders (n = 27). In the patients aged < 70 years (n = 13), both the fT and T values were lower than in the nonresponders (n = 15). There was no age difference between responders and nonresponders.
Conclusions: TUMT did not influence hormone levels. These observations suggest that androgen status may influence the final result of treatment.
abstract_id: PUBMED:17070354
Transurethral microwave thermotherapy effectiveness in small prostates. Objectives: To dispel the misconception that patients with small prostates react differently from patients with larger prostates to cooled transurethral microwave thermotherapy. Cooled transurethral microwave thermotherapy has developed into a valid alternative for treating men with lower urinary tract symptoms due to benign prostatic hyperplasia. However, doubts remain regarding the ability of this office-based technique to treat smaller prostates.
Methods: Data on 713 men from six previous studies using cooled transurethral microwave thermotherapy devices developed by Urologix were combined for this analysis. The data were analyzed to determine whether baseline prostate size had a significant effect on the American Urological Association Symptom Index, peak flow rate, quality-of-life score, or symptom problem index. Follow-up intervals in this analysis included 6, 12, 24, 36, 48, and 60 months after therapy. Visual analog scale ratings during treatment were also assessed. General linear models and repeated measures analyses were performed.
Results: Statistical analysis showed no effect of baseline prostate size on treatment outcomes for more than 5 years. Visual analog scale measurements were also not affected by the baseline prostate size.
Conclusions: Transurethral microwave thermotherapy appears to be as efficacious in treating patients with small prostates as those with large prostates and should be offered as a treatment modality to patients with prostates of all sizes.
abstract_id: PUBMED:24102183
Transurethral microwave thermotherapy treatment of chronic urinary retention in patients unsuitable for surgery. Objective: The aim of this study was to evaluate transurethral microwave thermotherapy (TUMT) in the treatment of chronic urinary retention due to benign prostatic hyperplasia (BPH) in patients unsuitable for surgery.
Material And Methods: The study enrolled 124 patients with chronic urinary retention due to BPH. The median age was 80 years (61-92 years). Of the enrolled patients, 77 (62%) were assessed by an anaesthesiologist as being unsuitable for surgery owing to cardiac, pulmonary, neurological or other diseases. Overall, 115 patients (93%) had an indwelling catheter. The remaining nine patients (7%) performed clean intermittent self-catheterization. The treatment was performed under local anaesthesia in the outpatient department using the ProstaLund Coretherm Device. At the 6-month follow-up, the Danish version of the International Prostate Symptom Score (DAN-PSS), postvoiding residual volume and urinary peak flow were measured. Improvement in quality of life was also registered.
Results: The success of TUMT was assessed by looking at the percentage of patients relieved of their catheter and by the improvement in quality of life. Overall, 77% of patients were relieved of their catheter and 79% reported an improvement in their quality of life.
Conclusion: In this study, both the median age and the percentage of patients unsuitable for surgery were higher than in previous studies. Despite this, TUMT relieved 77% of patients of their catheter, and 79% reported an improvement in their quality of life. This study shows that TUMT is an effective treatment for patients with chronic urinary retention who are unsuitable for surgery.
abstract_id: PUBMED:16752156
Transurethral microwave thermotherapy for the treatment of BPH: still a challenger? Minimally invasive therapies for the treatment of benign prostatic hyperplasia (BPH) compete with the gold standard, transurethral resection of the prostate (TURP). Comparisons of efficacy and safety have broadened the knowledge of different treatment modalities. Quality-of-life concerns, such as unaltered sexual function, as well as cost considerations drive the market to develop techniques of lower invasiveness. Among the competitors, office-based transurethral microwave thermotherapy (TUMT) provides the broadest base of scientific data. Numerous manufacturers sell various modifications of this technology. According to different clinical studies, TUMT has proved to be an effective, safe, and durable therapy for the treatment of lower urinary tract symptoms (LUTS) secondary to BPH. However, TURP still shows steadier long-term results and is more effective in reducing obstruction as well as other LUTS.
abstract_id: PUBMED:17143105
Transurethral microwave thermotherapy: from evidence-based medicine to clinical practice. Purpose Of Review: The aim of this article is to provide new clinical data on transurethral microwave thermotherapy, evaluate it in the perspective of evidence-based guidelines and daily practice and investigate the driving forces that determine the current position of thermotherapy for the management of benign prostatic obstruction.
Recent Findings: Recent studies have provided significant evidence regarding the efficacy, safety and durability of thermotherapy. Updated evidence-based clinical guidelines on the management of patients with benign prostatic obstruction have been made available. Surveys have evaluated the acceptance of transurethral microwave thermotherapy from the urological community. In addition, several studies have made major contributions to our knowledge of the translation of evidence to daily practice.
Summary: The range of therapeutic options for benign prostatic obstruction continues to widen creating the need for clarity in selection and application of these treatments. High-quality data on transurethral microwave thermotherapy have been published and integrated into clinical guidelines. Considerations on the implementation of guidelines to clinical practice, emergence of new treatments, shift of benign prostatic obstruction therapy, economics and the increasing need to treat patients with different clinical profile during the last decade seem to affect the position of transurethral microwave thermotherapy in the armamentarium of a urological centre. Into this frame, transurethral microwave thermotherapy tailored to selective cases seems to remain an attractive option.
abstract_id: PUBMED:11342912
Long-term followup of randomized transurethral microwave thermotherapy versus transurethral prostatic resection study. Purpose: We evaluate the durable effect of high-energy transurethral microwave thermotherapy and transurethral prostatic resection for treatment of patients with lower urinary tract symptoms suggestive of bladder outflow obstruction.
Materials And Methods: Between January 1996 and March 1997, 155 patients with lower urinary tract symptoms suggestive of bladder outflow obstruction were randomized to receive transurethral microwave thermotherapy (Prostatron*; device and commercial software) (82) or undergo transurethral prostatic resection (73). Initial patient evaluation was performed according to international standards. Patients were followed annually with the International Prostate Symptom Score (I-PSS) and uroflowmetry (maximum flow rate). The Kaplan-Meier survival analysis was used to calculate the cumulative risk of re-treatment, adjusted for loss to followup.
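As an aside, the Kaplan-Meier re-treatment analysis named in these methods can be sketched with the lifelines package; the durations and event indicators below are simulated, and the 36-month cumulative risk is read off as 1 minus the survival estimate. None of this is the trial's data.

```python
# Hedged sketch of a Kaplan-Meier cumulative re-treatment risk estimate.
# Simulated follow-up; 'months_to_retreat'/'retreated' are assumed names.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(2)

# Hypothetical time to re-treatment (months) for 78 patients, censored at 36
months_to_retreat = np.minimum(rng.exponential(150.0, 78), 36.0)
retreated = months_to_retreat < 36.0   # True = re-treated, False = censored

kmf = KaplanMeierFitter()
kmf.fit(months_to_retreat, event_observed=retreated, label="TUMT")

# Cumulative risk of re-treatment at 36 months = 1 - S(36)
print(f"risk at 36 months: {1 - kmf.predict(36.0):.1%}")
```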
Results: A total of 78 patients received transurethral microwave thermotherapy and 66 underwent transurethral prostatic resection. Median followup was 33 months. In the thermotherapy group mean maximum urinary flow rate improved from 9.2 ml. per second at baseline to 15.1, 14.5 and 11.9 ml. per second at 1, 2 and 3 years, and mean I-PSS decreased from 20 to 8, 9, and 12, respectively. In the resection group the corresponding numbers for maximum urinary flow rate were 7.8, 24.5, 23.0 and 24.7 ml. per second at 1, 2 and 3 years, and for I-PSS were 20, 3, 4 and 3, respectively. At 36 months, 14 patients in the thermotherapy and 8 from the resection groups underwent re-treatment, and the cumulative risk was 19.8% (95% confidence interval 10.4% to 29.3%) and 12.9% (4.5% to 21.3%), respectively (p = 0.28).
Conclusions: Transurethral microwave thermotherapy and transurethral prostatic resection achieve durable improvement in patients with lower urinary tract symptoms suggestive of bladder outflow obstruction, while the magnitude of improvement is higher with resection. The repeat thermotherapy is based on failure of therapy whereas repeat resection is based on complications of therapy.
abstract_id: PUBMED:9598499
Long-term results of lower energy transurethral microwave thermotherapy. Purpose: We evaluate long-term results of lower energy transurethral microwave thermotherapy (Prostasoft 2.0*) and identify pretreatment characteristics that predict a favorable outcome.
Materials And Methods: Between December 1990 and December 1992, 231 patients with lower urinary tract symptoms were treated with lower energy transurethral microwave thermotherapy. Subjective and objective voiding parameters were collected from medical records and a self-administered questionnaire. Kaplan-Meier plots were constructed to assess the risk of re-treatment.
Results: Of the patients 41% underwent invasive re-treatment within 5 years of followup and 17% were re-treated with medication. The re-treatment-free period was somewhat longer in patients with a peak flow rate greater than 10 ml. per second, a Madsen score 15 or less, a post-void residual volume 100 ml. or less and age greater than 65 years at baseline. Prostate volume did not modify the outcome. No incontinence was caused by transurethral microwave thermotherapy, 8% had recurrent urinary tract infection and 8% had retrograde ejaculation. Only 1 patient had a urethral stricture after transurethral microwave thermotherapy.
Conclusions: At 5 years after transurethral microwave thermotherapy 41% of the patients received instrumental treatment. Patients with a lower Madsen score and lower residual volume, and those with higher peak flow and age were somewhat better responders to lower energy transurethral microwave thermotherapy.
abstract_id: PUBMED:9915432
Sexual function following high energy microwave thermotherapy: results of a randomized controlled study comparing transurethral microwave thermotherapy to transurethral prostatic resection. Purpose: We evaluate changes in sexual function in patients treated with high energy transurethral microwave thermotherapy compared to transurethral resection of the prostate.
Materials And Methods: A total of 147 patients randomized to undergo transurethral microwave thermotherapy or transurethral resection of the prostate were asked to complete a self-administered questionnaire evaluating sexual function before, and 3 and 12 months after treatment. The questionnaire dealt with such items as social status, libido, quality of erection, ejaculation and overall satisfaction of sexual functioning.
Results: There was a statistically significant improvement of micturition in both groups. The improvement in the transurethral prostatic resection group was significantly better than in the transurethral microwave thermotherapy group. Antegrade ejaculation occurred at 3 months following treatment in 27% of the transurethral prostatic resection group compared to 74% of the transurethral microwave thermotherapy group, and at 1 year in 37% and 67%, respectively. Significantly more patients undergoing transurethral prostatic resection (36%) had changes in sexual function compared to the transurethral microwave thermotherapy group (17%). The transurethral microwave thermotherapy group was more satisfied with their sex life: 55% of these patients graded sex as very satisfying compared to 21% in the transurethral prostatic resection group. The severity of symptoms was not correlated with sexual function in this study. In general, older patients had sexual dysfunction more often, while younger patients more frequently had pain during sexual activities.
Conclusions: Although clinically less effective, high energy transurethral microwave thermotherapy is a better therapeutic option than surgery for patients who want to preserve sexual function. In particular ejaculation is often preserved after transurethral microwave thermotherapy while there is significant deterioration following transurethral prostatic resection. In general, older patients have greater sexual dysfunction.
abstract_id: PUBMED:14622486
Application of external microwave thermotherapy in urology: past, present, and future. The excellent clinical results of transurethral microwave thermotherapy (TUMT) for the treatment of symptomatic benign prostatic hyperplasia (BPH) have given TUMT the leading position among the microwave thermotherapy modalities available for the treatment of different urologic conditions. Research in TUMT has focused on operating software, temperature monitoring, intraprostatic heat distribution, cell-kill calculations, and correlations with clinical variables. Randomized comparisons of TUMT with other established therapies for BPH, including transurethral resection, have facilitated the evaluation of the clinical outcome, durability, morbidity, and costs of the treatment. The applications of microwave thermotherapy in other urologic diseases are also presented in this review.
abstract_id: PUBMED:18210336
Transurethral microwave thermotherapy of the prostate--evaluation with MRI and analysis of parameters relevant to outcome. Objectives: To evaluate morphological changes in the hyperplastic prostate tissue following transurethral microwave thermotherapy and to investigate the dependence of the treatment outcome on structural and physiological features of the prostate.
Material And Methods: In this prospective study, 13 patients with chronic urinary retention due to benign prostatic hyperplasia (BPH) underwent Coretherm (ProstaLund, Lund, Sweden) microwave thermotherapy. Prior to the treatment and 1 week and 6 months after, the patients were examined with MRI using morphologic, contrast medium-enhanced perfusion and diffusion-weighted imaging. Such advanced MRI techniques permit an assessment of parameters that have a hypothetical influence on microwave thermotherapy (e.g. prostate blood perfusion, water content and prostate microstructure).
Results: Morphologic and perfusion MRI showed a clear prostatic tissue defect in all 13 patients after 1 week and in all 12 remaining patients at the 6-month follow-up. The mean size of the defect was 22.5 cm3 (27%) (range 3.7-47.3 cm3) at 1 week and 4.1 cm3 (1.1-10.1 cm3) at 6 months. The cell-kill volume was estimated at 20.5+/-7.4 cm3 and correlated significantly with the defect size assessed by MRI at 1 week (r=0.8; p=0.002) and 6 months (r=0.69; p=0.05). At 6 months, the mean decrease in prostate volume was 20.1 cm3 (p<0.0001). The microstructure of the prostate, expressed as the apparent diffusion coefficient, was shown to have a strong influence on the treatment process. Twelve patients were catheter-free at 6 months; the remaining patient experienced persistent obstruction and underwent transurethral resection of the prostate.
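As a brief illustration (not the study's computation), the correlation reported here is an ordinary Pearson correlation, which in Python amounts to the following; the 13 paired values are invented around the abstract's mean and SD.

```python
# Hedged sketch of the cell-kill volume vs. MRI defect size correlation.
# Invented values for n=13 patients; only the mean/SD echo the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
cell_kill = rng.normal(20.5, 7.4, 13)                  # cm3, assumed values
mri_defect = 0.9 * cell_kill + rng.normal(0, 4.0, 13)  # cm3, assumed values

r, p = stats.pearsonr(cell_kill, mri_defect)
print(f"Pearson r={r:.2f}, p={p:.3f}")
```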
Conclusions: Coretherm microwave treatment leads to significant intraprostatic necrosis, which is detectable with MRI even 6 months after the treatment. Diffusion-weighted MRI is capable of identifying structural features of the prostate that can predict the length of treatment and the amount of energy needed.
Answer: Transurethral microwave thermotherapy (TUMT) is a treatment option for symptomatic benign prostatic hyperplasia (BPH). A study investigated the possible association between androgen status and the result of TUMT treatment. The study measured serum levels of testosterone (T), SHBG, estradiol, LH, and FSH in 48 men with BPH before and 2-3 months after TUMT. The results indicated that TUMT did not alter hormone levels. However, patients who reported a response after 12 months had significantly lower outset levels of calculated free testosterone (fT) than nonresponders. In patients aged less than 70 years, both the fT and T values were lower than in nonresponders, suggesting that androgen status may influence the final result of TUMT treatment (PUBMED:9294621).
Instruction: Is there an increased lability in parents' smoking behaviour after a childbirth?
Abstracts:
abstract_id: PUBMED:8792501
Is there an increased lability in parents' smoking behaviour after a childbirth? Objective: To test our hypothesis that there is an increased lability in parents' smoking behaviour after a childbirth, and to search for demographic factors associated with lability in parents' smoking behaviour.
Design: A one month, prospective questionnaire study.
Setting: Maternal and child health centres in Oslo, Norway.
Sample: 222 families in which at least one adult was smoking were enrolled in the study. 37 families dropped out (16.7%) and 185 families completed both questionnaires.
Measurements: Changes in daily smoking, smoking quantity, and practical measures to prevent passive smoking by the children, as assessed by parental reports.
Results: Families with a child aged less than one year (infant) were more likely to make one or another positive change (quit, reduce, stop smoking indoors, stop smoking in living rooms) than families with only older children. There was a trend for families with an infant to make negative changes more often (start smoking, increase) as well. Older parents made positive changes more often than younger ones. Single parents were less likely to make positive changes.
Conclusions: The study indicates that there is an increased lability in parents' smoking behaviour after a childbirth.
abstract_id: PUBMED:16110228
Influence of socio-economic status, parents and peers on smoking behaviour of adolescents. With the aim of analysing the importance of psycho-social factors in predicting adolescents' smoking behaviour, a model of the interrelations between socio-economic status, parents', peers' and adolescents' own smoking behaviours was tested. The sample consisted of 2,616 adolescents. LISREL analyses were used to support the model; males and females were evaluated separately. Peers' smoking is the strongest predictor of adolescent smoking. Parents' smoking behaviour influences adolescents' smoking directly, but also indirectly through the parents' influence on peers' smoking behaviour. Socio-economic status influences adolescent smoking indirectly through its influence on parents' and peers' smoking behaviour. Our model is significant in both males and females and explains 42-51% of the variance in adolescent smoking behaviour. Accentuation of peers' influence on adolescents' smoking behaviour without considering the interrelations between the influence of socio-economic status, parents and peers may lead us to incorrect conclusions in research as well as in prevention.
abstract_id: PUBMED:32603997
Parenthood and smoking. Parents' smoking is harmful to infants' health. While it is well established that the fraction of mothers smoking during pregnancy is non-negligible, it is an open question how many parents actually quit smoking on account of the adverse health effects accruing to their offspring. It is also unknown for how long smoking is reduced after a first childbirth. This paper investigates these questions in a longitudinal analysis. The analyzed time period covers smoking patterns several years before childbirth and up to twenty years afterwards. Women's smoking probability drops several years before the first childbirth and remains reduced until the first child turns 18 years old. The drop is largest in the second and third trimesters of pregnancy, at around 75 percent.
abstract_id: PUBMED:30747356
Child Effects on Lability in Parental Warmth and Hostility: Moderation by Parents' Internalizing Problems. Research documents that lability in parent-child relationships-fluctuations up and down in parent-child relationships-is normative during adolescence and is associated with increased risk for negative outcomes for youth. Yet little is known about factors that predict lability in parenting. This study evaluated whether children's behaviors predicted lability in parent-child relationships. Specifically, this study tested whether youth maladjustment (delinquency, substance use, internalizing problems) in Grade 6 was associated with greater lability (e.g., more fluctuations) in parents' warmth and hostility towards their children across Grades 6-8. The study also tested whether the associations between youth maladjustment and lability in parents' warmth and hostility were moderated by parents' internalizing problems. The sample included youth and their parents in two-parent families who resided in rural communities and small towns (N = 618; 52% girls, 90% Caucasian). Findings suggest that parents' internalizing problems moderated the associations between child maladjustment and parenting lability. Among parents with high levels of internalizing problems, higher levels of youth maladjustment were associated with greater lability in parents' warmth. Among parents with low levels of internalizing problems, higher levels of youth maladjustment were associated with less lability in parents' warmth. The discussion focuses on how and why parent internalizing problems may affect parental reactivity to youth problem behavior, and on intervention implications.
abstract_id: PUBMED:8483861
The role of childbirth in smoking cessation. Background: Many women abstain from smoking during pregnancy, but relapse rates in the first year postpartum are high. The impact of childbirth on long-term abstinence from smoking is unknown for both women and men.
Methods: We assessed the impact of childbirth on long-term abstinence from smoking (minimum: 17 months, much longer in most cases) in a retrospective cohort analysis of 925 women and 1,494 men who were interviewed in 1984 to 1986 in the national baseline survey of the German Cardiovascular Prevention Study.
Results: Among women, smoking cessation rates resulting in long-term abstinence were about three times higher during the year of childbirth and the year before than in other years (adjusted rate ratio, 2.98; 95% confidence interval, 2.21-4.03). Childbirth was also associated with increased cessation rates among better educated men (adjusted rate ratio for this subgroup, 1.84; 95% confidence interval, 1.16-2.92), but not among less educated men. Nevertheless, childbirth led to long-term abstinence from smoking only in a small minority of smoking mothers and fathers.
Conclusion: Despite increased cessation rates around childbirth, more effective measures are needed to promote sustained abstinence after childbirth among both parents.
abstract_id: PUBMED:9232719
Social support and the smoking behaviour of parents with preschool children. In a study of the relationship between social support and smoking behaviour, 1046 parents attending health centres in Oslo, Norway, for well-child check-ups with their children completed a questionnaire. The prevalence of daily smoking increased with decreasing social support. However, this association did not remain significant when adjusting for demographic and household characteristics. Among smoking parents, indoor smoking at home was related to medium (OR = 1.97; CI: 1.01-3.81) and low social support (OR = 2.35; CI: 1.19-4.63) when adjusting for demographic and household characteristics. Smoking parents smoked more cigarettes per day when they had low social support. However, this association was only seen in parents with several children. In this group, smoking 10 cigarettes per day or more was strongly related to medium (OR = 5.05; CI: 1.66-15.35) and low social support (OR = 7.81; CI: 2.44-25.01).
abstract_id: PUBMED:31172297
Affective lability in offspring of parents with major depressive disorder, bipolar disorder and schizophrenia. Affective lability, defined as the propensity to experience excessive and unpredictable changes in mood, has been proposed as a potential transdiagnostic predictor of major mood and psychotic disorders. A parental diagnosis of bipolar disorder has been associated with increased affective lability in offspring. However, the association between affective lability and family history of other mood and psychotic disorders has not been examined. We measured affective lability using the self- and parent-reported Children's Affective Lability Scale in a cohort of 320 youth aged 6-17 years, including 137 offspring of a parent with major depressive disorder, 68 offspring of a parent with bipolar disorder, 24 offspring of a parent with schizophrenia, and 91 offspring of control parents. We tested differences in affective lability between groups using mixed-effects linear regression. Offspring of a parent with major depressive disorder (β = 0.46, 95% CI 0.17-0.76, p = 0.002) or bipolar disorder (β = 0.47, 95% CI 0.12-0.81, p = 0.008) had significantly higher affective lability scores than control offspring. Affective lability did not differ significantly between offspring of a parent with schizophrenia and offspring of control parents. Our results suggest that elevated affective lability during childhood is a marker of familial risk for mood disorders.
abstract_id: PUBMED:32414093
Parental Perceptions of Children's Exposure to Tobacco Smoke and Parental Smoking Behaviour. Around 40% of children are exposed to tobacco smoke, increasing their risk of poor health. Previous research has demonstrated misunderstanding among smoking parents regarding children's exposure. The parental perceptions of exposure (PPE) measure uses visual and textual vignettes to assess awareness of exposure to smoke. The study aimed to determine whether PPE is related to biochemical and reported measures of exposure in children with smoking parents. Families with at least one smoking parent and a child ≤ age 8 were recruited. In total, 82 parents completed the PPE questionnaire, which was assessed on a scale of 1-7 with higher scores denoting a broader perception of exposure. Parents provided a sample of their child's hair and a self-report of parental smoking habits. Parents who reported smoking away from home had higher PPE ratings than parents who smoke in and around the home (p = 0.026), constituting a medium effect size. PPE corresponded with home smoking frequency, with rare or no home exposure associated with higher PPE scores compared to daily or weekly exposure (p < 0.001). PPE was not significantly related to hair nicotine but was a significant explanatory factor for home smoking location. PPE was significantly associated with parental smoking behaviour, including location and frequency. High PPE was associated with lower exposure according to parental report. This implies that parental understanding of exposure affects protective behaviour and constitutes a potential target for intervention to help protect children.
abstract_id: PUBMED:27810713
Alcohol use disorders are associated with increased affective lability in bipolar disorder. Background: Affective dysregulation is a core feature of bipolar disorder (BD), and inter-episodic affect lability is associated with more severe outcomes including comorbidity. Rates of daily tobacco smoking and substance use disorders in BD are high. Knowledge regarding relationships between affective lability and abuse of the most commonly used substances such as tobacco, alcohol and cannabis in BD is limited.
Methods: We investigated whether dimensions of inter-episodic affective lability as measured with the Affective Lability Scale - short form (ALS-SF) were associated with lifetime daily tobacco use or alcohol (AUD) or cannabis use disorders (CUD) in a sample of 372 French and Norwegian patients with BD I and II.
Results: ALS-SF total score and all sub-dimensions (anxiety-depression, depression-elation and anger) were significantly associated with AUD, while only the depression-elation sub-dimension was associated with CUD, after controlling for possible confounders such as gender, age at interview, age at illness onset, BD subtype, duration of illness and other substance use disorders. Daily tobacco smoking was not significantly associated with affective lability.
Limitations: Data for recent substance use or psychiatric comorbidities such as personality or hyperkinetic disorders were not available, and could have mediated the relationships.
Conclusion: AUD is associated with several dimensions of inter-episodic affective lability in BD, while CUD is associated with increased oscillations between depression and elation only. Increased affective lability may partly explain the increased illness severity of patients with BD and AUD or CUD. Affective lability should be treated in order to prevent these comorbidities.
abstract_id: PUBMED:18284702
An investigation of the smoking behaviours of parents before, during and after the birth of their children in Taiwan. Background: Although many studies have investigated the negative effects of parental smoking on children and Taiwan has started campaigns to promote smoke-free homes, little is known about the smoking behaviours of Taiwanese parents during the childbearing period. To help fill the gap, this study investigated Taiwanese parents' smoking behaviours before, during and after the birth of their children, particularly focusing on smoking cessation during pregnancy and relapse after childbirth.
Methods: We used data from the Survey of Health Status of Women and Children, conducted by Taiwan's National Health Research Institutes in 2000. After excluding survey respondents with missing information about their smoking behaviours, our sample consisted of 3,109 women who were married at the time of interview and had at least one childbearing experience between March 1, 1995 and February 28, 1999. Data on parental smoking behaviour in the six months before pregnancy, during pregnancy, and in the first year after childbirth were extracted from the survey and analysed by descriptive statistics as well as logistic regression.
Results: Four percent of the mothers and sixty percent of the fathers smoked before the conception of their first child. The educational attainment and occupation of the parents were associated with their smoking status before the first pregnancy in the family. Over 80% of smoking mothers did not quit during pregnancy, and almost all of the smoking fathers continued tobacco use while their partners were pregnant. Over two thirds of the women who stopped smoking during their pregnancies relapsed soon after childbirth. Very few smoking men stopped tobacco use while their partners were pregnant, and over a half of those who quit started to smoke again soon after their children were born.
Conclusion: Among Taiwanese women who had childbearing experiences in the late 1990s, few smoked. Of those who smoked, few quit during pregnancy. Most of those who quit relapsed in the first year after childbirth. The smoking prevalence was high among the husbands of these Taiwanese women, and almost all of these smoking fathers continued tobacco use while their partners were pregnant. It is important to advocate the benefits of a smoke-free home to Taiwanese parents-to-be and parents with young children, especially the fathers. The government should take advantage of its free prenatal care and well-child care services to do this. In addition to educational campaigns through the media, the government can request physicians to promote smoke-free homes when they deliver prenatal care and well-child care. This could help reduce young children's health risks from their mothers' smoking during pregnancy and second-hand smoke at home.
Answer: Yes, there is evidence of increased lability in parents' smoking behavior after childbirth. A study conducted in Oslo, Norway, found that families with a child aged less than one year were more likely to make positive changes in their smoking behavior, such as quitting, reducing, or stopping smoking indoors, compared to families with only older children. However, there was also a trend for these families to make negative changes, such as starting to smoke or increasing the quantity of smoking. Older parents made positive changes more often than younger ones, and single parents were less likely to make positive changes. This indicates that there is indeed an increased lability in parents' smoking behavior following the birth of a child (PUBMED:8792501).
Additionally, research has shown that women's smoking probability drops several years before the first childbirth and remains reduced until the first child turns 18 years old. The largest drop in smoking occurs in the second and third trimester of pregnancy, by around 75 percent (PUBMED:32603997). Despite increased cessation rates around childbirth, more effective measures are needed to promote sustained abstinence from smoking among both parents, as childbirth leads to long-term abstinence from smoking only in a small minority of smoking mothers and fathers (PUBMED:8483861). |
Instruction: Does PET SUV Harmonization Affect PERCIST Response Classification?
Abstracts:
abstract_id: PUBMED:27283930
Does PET SUV Harmonization Affect PERCIST Response Classification? Pre- and posttreatment PET comparative scans should ideally be obtained with identical acquisition and processing, but this is often impractical. The degree to which differing protocols affect PERCIST classification is unclear. This study evaluates the consistency of PERCIST classification across different reconstruction algorithms and whether a proprietary software tool can harmonize SUV estimation sufficiently to provide consistent response classification.
Methods: Eighty-six patients with non-small cell lung cancer, colorectal liver metastases, or metastatic melanoma who were scanned for therapy monitoring purposes were prospectively recruited in this multicenter trial. Pre- and posttreatment PET scans were acquired in protocols compliant with the Society of Nuclear Medicine and Molecular Imaging and the European Association of Nuclear Medicine (EANM) acquisition guidelines and were reconstructed with a point spread function (PSF) or PSF + time-of-flight (TOF) for optimal tumor detection and also with standardized ordered-subset expectation maximization (OSEM) known to fulfill EANM harmonizing standards. After reconstruction, a proprietary software solution was applied to the PSF ± TOF data (PSF ± TOF.EQ) to harmonize SUVs with the OSEM values. The impact of differing reconstructions on PERCIST classification was evaluated.
Results: For the OSEMPET1/OSEMPET2 (OSEM reconstruction for pre- and posttherapeutic PET, respectively) scenario, which was taken as the reference standard, the change in SUL was -41% ± 25 and +56% ± 62 in the groups of tumors showing a decrease and an increase in 18F-FDG uptake, respectively. The use of PSF reconstruction affected classification of tumor response. For example, taking the PSF ± TOFPET1/OSEMPET2 scenario increased the apparent reduction in SUL in responding tumors (-48% ± 22) but reduced the apparent increase in SUL in progressing tumors (+37% ± 43), as compared with the OSEMPET1/OSEMPET2 scenario. As a result, variation in reconstruction methodology (PSF ± TOFPET1/OSEMPET2 or OSEMPET1/PSF ± TOFPET2) led to 13 of 86 (15%) and 17 of 86 (20%) PERCIST classification discordances, respectively. Agreement was better for these scenarios with application of the proprietary filter, with κ values of 1 and 0.95 compared with 0.79 and 0.72, respectively.
Conclusion: Reconstruction algorithm-dependent variability in PERCIST classification is a significant issue but can be overcome by harmonizing SULs using a proprietary software tool.
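To make the classification logic concrete, the sketch below derives a PERCIST-style response label from pre- and post-treatment SULpeak values. The ±30% and 0.8-unit thresholds follow published PERCIST 1.0 conventions, but the simplified rules (no new-lesion handling, no complete-response test against liver background) and all example values are illustrative assumptions, not data from this study.
```python
def percist_category(sul_pre: float, sul_post: float) -> str:
    """Classify response from peak SUL (SUV corrected for lean body mass).

    Simplified PERCIST-style rules: a >=30% (and >=0.8 unit) fall in SULpeak
    counts as partial metabolic response, a >=30% (and >=0.8 unit) rise as
    progression; new lesions and the liver-background test for complete
    response are ignored in this sketch.
    """
    delta = sul_post - sul_pre
    pct = 100.0 * delta / sul_pre
    if pct <= -30.0 and delta <= -0.8:
        return "PMR"  # partial metabolic response
    if pct >= 30.0 and delta >= 0.8:
        return "PMD"  # progressive metabolic disease
    return "SMD"      # stable metabolic disease

# The same lesion pair can change category under two reconstructions:
print(percist_category(8.0, 5.2))  # -35% -> PMR
print(percist_category(8.0, 6.1))  # -24% -> SMD (discordant with the above)
```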
abstract_id: PUBMED:28560574
EORTC PET response criteria are more influenced by reconstruction inconsistencies than PERCIST but both benefit from the EARL harmonization program. Background: This study evaluates the consistency of PET Response Criteria in Solid Tumours (PERCIST) and European Organisation for Research and Treatment of Cancer (EORTC) classification across different reconstruction algorithms and whether aligning standardized uptake values (SUVs) to the European Association of Nuclear Medicine (EANM)/EARL standards provides more consistent response classification.
Materials And Methods: Baseline (PET1) and response assessment (PET2) scans in 61 patients with non-small cell lung cancer were acquired in protocols compliant with the EANM guidelines and were reconstructed with point-spread function (PSF) or PSF + time-of-flight (TOF) reconstruction for optimal tumour detection and with a standardized ordered subset expectation maximization (OSEM) reconstruction known to fulfil EANM harmonizing standards. Patients were recruited in three centres. Following reconstruction, EQ.PET, a proprietary software solution, was applied to the PSF ± TOF data (PSF ± TOF.EQ) to harmonize SUVs to the EANM standards. The impact of differing reconstructions on PERCIST and EORTC classification was evaluated using standardized uptake values corrected for lean body mass (SUL).
Results: Using OSEMPET1/OSEMPET2 (standard scenario), responders displayed a reduction of -57.5% ± 23.4 and -63.9% ± 22.4 for SULmax and SULpeak, respectively, while progressing tumours had an increase of +63.4% ± 26.5 and +60.7% ± 19.6 for SULmax and SULpeak, respectively. The use of PSF ± TOF reconstruction impacted the classification of tumour response. For example, taking the OSEMPET1/PSF ± TOFPET2 scenario reduced the apparent reduction in SUL in responding tumours (-39.7% ± 31.3 and -55.5% ± 26.3 for SULmax and SULpeak, respectively) but increased the apparent increase in SUL in progressing tumours (+130.0% ± 50.7 and +91.1% ± 39.6 for SULmax and SULpeak, respectively). Consequently, variation in reconstruction methodology (PSF ± TOFPET1/OSEMPET2 or OSEMPET1/PSF ± TOFPET2) led, respectively, to 11/61 (18.0%) and 10/61 (16.4%) PERCIST classification discordances and to 17/61 (28.9%) and 19/61 (31.1%) EORTC classification discordances. Agreement was better for these scenarios with application of the proprietary filter, with kappa values of 1.00 and 0.95 compared to 0.75 and 0.77 for PERCIST and kappa values of 0.93 and 0.95 compared to 0.61 and 0.55 for EORTC, respectively.
Conclusion: PERCIST classification is less sensitive to reconstruction algorithm-dependent variability than EORTC classification but harmonizing SULs within the EARL program is equally effective with either.
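The κ values quoted in these abstracts measure chance-corrected agreement between two response classifications. A self-contained sketch of Cohen's kappa follows; the labels are hypothetical, and in practice a library routine such as sklearn's cohen_kappa_score would serve equally well.
```python
import numpy as np

def cohen_kappa(a, b):
    """Chance-corrected agreement between two sets of categorical labels."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    po = np.mean(a == b)                                        # observed
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)   # by chance
    return (po - pe) / (1.0 - pe)

# Hypothetical PERCIST labels from two reconstruction scenarios:
osem = ["PMR", "PMR", "SMD", "PMD", "PMR", "SMD"]
psf  = ["PMR", "SMD", "SMD", "PMD", "PMR", "SMD"]
print(round(cohen_kappa(osem, psf), 2))  # one discordant case lowers kappa
```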
abstract_id: PUBMED:30128776
Multicentre analysis of PET SUV using vendor-neutral software: the Japanese Harmonization Technology (J-Hart) study. Background: Recent developments in hardware and software for PET technologies have resulted in wide variations in basic performance. Multicentre studies require a standard imaging protocol and SUV harmonization to reduce inter- and intra-scanner variability in the SUV. The Japanese standardised uptake value (SUV) Harmonization Technology (J-Hart) study aimed to determine the applicability of vendor-neutral software on the SUV derived from positron emission tomography (PET) images. The effects of SUV harmonization were evaluated based on the reproducibility of several scanners and the repeatability of an individual scanner. Images were acquired from 12 PET scanners at nine institutions. PET images were acquired over a period of 30 min from a National Electrical Manufacturers Association (NEMA) International Electrotechnical Commission (IEC) body phantom containing six spheres of different diameters and an 18F solution with a background activity of 2.65 kBq/mL and a sphere-to-background ratio of 4. The images were reconstructed to determine parameters for harmonization and to evaluate reproducibility. PET images with 2-min acquisition × 15 contiguous frames were reconstructed to evaluate repeatability. Various Gaussian filters (GFs) with full-width at half maximum (FWHM) values ranging from 1 to 15 mm in 1-mm increments were also applied using vendor-neutral software. The SUVmax of spheres was compared with the reference range proposed by the Japanese Society of Nuclear Medicine (JSNM) and the digital reference object (DRO) of the NEMA phantom. The coefficient of variation (CV) of the SUVmax determined using 12 PET scanners (CVrepro) was measured to evaluate reproducibility. The CV of the SUVmax determined from 15 frames (CVrepeat) per PET scanner was measured to determine repeatability.
Results: Three PET scanners did not require an additional GF for harmonization, whereas the other nine required additional FWHM values of GF ranging from 5 to 9 mm. The pre- and post-harmonization CVrepro of six spheres were (means ± SD) 9.45% ± 4.69% (range, 3.83-15.3%) and 6.05% ± 3.61% (range, 2.30-10.7%), respectively. Harmonization significantly improved reproducibility of PET SUVmax (P = 0.0055). The pre- and post-harmonization CVrepeat of nine scanners were (means ± SD) 6.59% ± 1.29% (range, 5.00-8.98%) and 4.88% ± 1.64% (range, 2.65-6.72%), respectively. Harmonization also significantly improved the repeatability of PET SUVmax (P < 0.0001).
Conclusions: Harmonizing SUV using vendor-neutral software produced SUVmax for 12 scanners that fell within the JSNM reference range of a NEMA body phantom and improved SUVmax reproducibility and repeatability.
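Harmonization of the kind described above is implemented as post-reconstruction Gaussian smoothing specified by its FWHM. The sketch below shows the FWHM-to-sigma conversion and the filtering step with scipy; the volume, voxel size and FWHM are illustrative stand-ins, not the J-Hart parameters.
```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harmonize(img: np.ndarray, fwhm_mm: float, voxel_mm: float) -> np.ndarray:
    """Apply the post-reconstruction Gaussian filter used for SUV harmonization.

    sigma (in voxels) is derived from the requested FWHM via
    FWHM = 2*sqrt(2*ln 2) * sigma (about 2.355 * sigma).
    """
    sigma_vox = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm
    return gaussian_filter(img, sigma=sigma_vox)

rng = np.random.default_rng(0)
img = rng.gamma(2.0, 1.0, size=(64, 64, 64))   # stand-in for a SUV volume
smoothed = harmonize(img, fwhm_mm=7.0, voxel_mm=4.0)
print(img.max(), smoothed.max())               # SUVmax falls after filtering
```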
abstract_id: PUBMED:28623376
EANM/EARL harmonization strategies in PET quantification: from daily practice to multicentre oncological studies. Quantitative positron emission tomography/computed tomography (PET/CT) can be used as diagnostic or prognostic tools (i.e. single measurement) or for therapy monitoring (i.e. longitudinal studies) in multicentre studies. Use of quantitative parameters, such as standardized uptake values (SUVs), metabolically active tumor volumes (MATVs) or total lesion glycolysis (TLG), in a multicenter setting requires that these parameters be comparable among patients and sites, regardless of the PET/CT system used. This review describes the motivations and the methodologies for quantitative PET/CT performance harmonization with emphasis on the EANM Research Ltd. (EARL) Fluorodeoxyglucose (FDG) PET/CT accreditation program, one of the international harmonization programs aiming at using FDG PET as a quantitative imaging biomarker. In addition, future accreditation initiatives will be discussed. The validation of the EARL accreditation program to harmonize SUVs and MATVs is described in a wide range of tumor types, with focus on therapy assessment using either the European Organization for Research and Treatment of Cancer (EORTC) criteria or PET Response Criteria in Solid Tumors (PERCIST), as well as liver-based scales such as the Deauville score. Finally, this paper also presents the results of a survey across 51 EARL-accredited centers reporting how the program was implemented and its impact on daily routine, clinical trials, and the harmonization of new metrics such as MATV and heterogeneity features.
abstract_id: PUBMED:35029817
New standards for phantom image quality and SUV harmonization range for multicenter oncology PET studies. Visual interpretation for lesion detection, staging, and characterization, as well as quantitative treatment response assessment, are key roles for 18F-FDG PET in oncology. In multicenter oncology PET studies, image quality standardization and SUV harmonization are essential to obtain reliable study outcomes. Standards for image quality and SUV harmonization range should be regularly updated according to progress in scanner performance. Accordingly, the first aim of this study was to propose new image quality reference levels to ensure small lesion detectability. The second aim was to propose a new SUV harmonization range and an image noise criterion to minimize the inter-scanner and intra-scanner SUV variabilities. We collected a total of 37 patterns of images from 23 recent PET/CT scanner models using the NEMA NU2 image quality phantom. PET images with various acquisition durations of 30-300 s and 1800 s were analyzed visually and quantitatively to derive visual detectability scores of the 10-mm-diameter hot sphere, noise-equivalent count (NECphantom), 10-mm sphere contrast (QH,10 mm), background variability (N10 mm), contrast-to-noise ratio (QH,10 mm/N10 mm), image noise level (CVBG), and SUVmax and SUVpeak for hot spheres (10-37 mm diameters). We calculated a reference level for each image quality metric, so that the 10-mm sphere can be visually detected. The SUV harmonization range and the image noise criterion were proposed with consideration of overshoot due to point-spread function (PSF) reconstruction. We proposed image quality reference levels as follows: QH,10 mm/N10 mm ≥ 2.5 and CVBG ≤ 14.1%. The 10th-90th percentiles in the SUV distributions were defined as the new SUV harmonization range. CVBG ≤ 10% was proposed as the image noise criterion, because the intra-scanner SUV variability significantly depended on CVBG. We proposed new image quality reference levels to ensure small lesion detectability. A new SUV harmonization range (in which PSF reconstruction is applicable) and the image noise criterion were also proposed for minimizing the SUV variabilities. Our proposed new standards will facilitate image quality standardization and SUV harmonization of multicenter oncology PET studies. The reliability of multicenter oncology PET studies will be improved by satisfying the new standards.
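As a rough illustration of the phantom metrics above, the sketch below computes a percent sphere contrast and a background variability from ROI means, following the usual NEMA NU-2 style definitions; both the exact formulas and all ROI values here are assumptions for illustration rather than this study's procedure.
```python
import numpy as np

def sphere_contrast(c_hot, c_bkg, activity_ratio=4.0):
    """Percent contrast Q_H for a hot sphere (NEMA NU-2 style definition)."""
    return 100.0 * (c_hot / c_bkg - 1.0) / (activity_ratio - 1.0)

def background_variability(bkg_roi_means):
    """Percent background variability N = SD / mean over background ROIs."""
    m = np.asarray(bkg_roi_means, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

q10 = sphere_contrast(c_hot=5.1, c_bkg=2.6)            # hypothetical ROI means
n10 = background_variability([2.5, 2.7, 2.6, 2.4, 2.8])
print(q10 / n10)   # contrast-to-noise ratio, compared against the 2.5 level
```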
abstract_id: PUBMED:34370219
18F-FDG PET/CT for monitoring anti-PD-1 therapy in patients with non-small cell lung cancer using SUV harmonization of results obtained with various types of PET/CT scanners used at different centers. Objective: The prognostic value of treatment response in patients with non-small cell lung cancer (NSCLC) treated with immune-checkpoint inhibitors (ICIs) shown by 18F-fludeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) results obtained with multiple types of PET scanners using standardized uptake value (SUV) harmonization was evaluated.
Methods: Fifty-eight patients treated with ICIs who underwent 18F-FDG PET/CT examinations with nine types of PET scanners at six hospitals were enrolled. SUV harmonization of multiple PET scanner results was performed using the dedicated software packages "RAVAT" and "RC Tool for Harmonization". Tumor response was assessed by change in sum of harmonized SUVmax, according to the European Organization for Research and Treatment of Cancer (EORTC5) or the SUV of up to five lesions normalized to lean body mass, according to the PET Response Criteria in Solid Tumors (PERCIST5) and immunotherapy-modified PERCIST (imPERCIST5) criteria. The correlation between tumor response according to those three definitions and overall survival (OS) was evaluated and compared to known prognostic factors.
Results: One-year OS in responders and non-responders for harmonized EROTC5 was 86 and 32%, for harmonized PERCIST5 was 86 and 32%, and for harmonized imPERCIST5 was 80 and 30%, respectively (each p = 0.001). Univariate analysis showed that all response criteria remained as prognostic factors. However, there was an overlap for the categories stable metabolic disease (SMD) and progression metabolic disease (PMD) in survival curves using the PET treatment response criteria.
Conclusion: In patients with NSCLC treated with ICIs, tumor response based on the harmonized response criteria was associated with OS. PET response criteria using harmonized metabolic parameters may be difficult to routinely employ in daily practice due to overlapping SMD and PMD, although may have a supporting role for determining prognosis.
abstract_id: PUBMED:24976990
PET/CT evaluation of response to chemotherapy in non-small cell lung cancer: PET response criteria in solid tumors (PERCIST) versus response evaluation criteria in solid tumors (RECIST). Background: (18)F-FDG PET/CT is increasingly used in evaluation of treatment response for patients with non-small cell lung cancer (NSCLC). There is a need for an accurate criterion to evaluate the effect and predict the prognosis. The aim of this study is to evaluate therapeutic response in NSCLC with comparing PET response criteria in solid tumors (PERCIST) to response evaluation criteria in solid tumors (RECIST) criteria on PET/CT.
Methods: Forty-four NSCLC patients who received chemotherapy but no surgery were studied. Chemotherapeutic responses were evaluated using (18)F-FDG PET and CT according to the RECIST and PERCIST methodologies. PET/CT scans were obtained before chemotherapy and after 2 or 4-6 cycles of chemotherapy. The percentage changes of tumor longest diameters and standardized uptake value (SUV) (corrected for lean body mass, SUL) before and after treatment were compared using a paired t-test. The response was categorized into 4 levels according to RECIST and PERCIST: CR (CMR) = 1, PR (PMR) = 2, SD (SMD) = 3, PD (PMD) = 4. A Pearson chi-square test was used to compare the proportion of the four levels in RECIST and PERCIST. Finally, the relationship between progression-free survival (PFS) and clinicopathologic parameters (such as TNM staging, percentage changes in diameters and SUL, RECIST and PERCIST results, etc.) was evaluated using univariate and multivariate Cox proportional hazards regression methods.
Results: The difference of percentage changes between diameters and SUL was not significant using the paired t-test (t=-1.69, P=0.098). However, the difference was statistically significant in the 40 cases without increasing SUL (t=-3.31, P=0.002). The difference of evaluation results between RECIST and PERCIST was not significant by chi-square test (χ(2)=5.008, P=0.171). If the RECIST evaluation excluded the new lesions which could not be found or identified on CT images, the difference between RECIST and PERCIST was significant (χ(2)=11.759, P=0.007). Reduction rate of SULpeak (%), RECIST and PERCIST results were significant factors in univariate Cox analysis. However, multivariate Cox proportional hazards regression analysis demonstrated that only PERCIST was a significant factor for predicting DFS [hazard ratio (HR), 3.20; 95% CI, 1.85-5.54; P<0.001].
Conclusions: PERCIST and RECIST criteria have good consistency, and PERCIST (or PET) is more sensitive in detecting complete remission (CR) and progression. PERCIST might be a significant predictor of outcomes. The combination of PERCIST and RECIST would provide clinicians with more accurate information on therapeutic response at an earlier stage of treatment.
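The paired t-test used above compares, within each patient, the percentage change in lesion diameter with the percentage change in SUL. A minimal scipy sketch with hypothetical per-patient values:
```python
import numpy as np
from scipy import stats

# Hypothetical per-patient percentage changes after chemotherapy.
pct_change_diameter = np.array([-22., -35., -10., -40., -5., -28.])
pct_change_sul      = np.array([-30., -50., -12., -55., -8., -41.])

t, p = stats.ttest_rel(pct_change_diameter, pct_change_sul)
print(f"t = {t:.2f}, p = {p:.3f}")  # SUL tends to fall more than diameter
```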
abstract_id: PUBMED:36243656
Prognostic value of PERCIST and PET/CT metabolic parameters after neoadjuvant treatment in patients with esophageal cancer. Aim: To assess the clinical utility of PERCIST criteria and changes in [18F]FDG PET/CT quantitative parameters as prognostic factors for progression-free survival and cancer-specific survival (CSS) in patients with esophageal cancer treated by chemoradiotherapy.
Material And Methods: Fifty patients (48 men) diagnosed with esophageal cancer were retrospectively evaluated over a 7.5-year interval. PERCIST criteria were used to assess response to neoadjuvant therapy. Variations in the metabolic parameters maximum SUV (SUVmax), metabolic tumor volume (MTV) and total lesion glycolysis (TLG) between pre- and post-treatment PET/CT studies were also determined. ROC curves, Kaplan-Meier method and Cox regression model were used for the analysis of prognostic factors and survival curves.
Results: The average follow-up was 26.8 months, with 40 recurrences/progressions and 41 deaths. Survival analysis showed statistically significant differences in CSS curves for PERCIST criteria and the variation of MTV and TLG. PERCIST criteria were the only independent predictor in the multivariate analysis. Neither SUVmax nor tumor size was a predictor for any of the assessment criteria.
Conclusion: Application of PERCIST criteria as well as the change in MTV and TLG from PET/CT studies proved to be prognostic factors for CSS in patients treated for esophageal cancer in our setting. The results could help to personalize treatment.
abstract_id: PUBMED:30850971
How Often Do We Fail to Classify the Treatment Response with [18F]FDG PET/CT Acquired on Different Scanners? Data from Clinical Oncological Practice Using an Automatic Tool for SUV Harmonization. Purpose: Tumor response evaluated by 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) positron emission tomography/computed tomography (PET/CT) with standardized uptake value (SUV) is questionable when pre- and post-treatment PET/CT are acquired on different scanners. The aims of our study, performed in oncological patients who underwent pre- and post-treatment [18F]FDG PET/CT on different scanners, were (1) to evaluate whether EQ·PET, a proprietary SUV inter-exams harmonization tool, modifies the EORTC tumor response classification and (2) to assess which classification (harmonized and non-harmonized) better predicts clinical outcome.
Procedures: We retrospectively identified 95 PET pairs (pre- and post-treatment) performed on different scanners (Biograph mCT, Siemens; GEMINI GXL, Philips) in 73 oncological patients (52F; 57.8 ± 16.3 years). An 8-mm Gaussian filter was applied for the Biograph protocol to meet the EANM/EARL harmonization standard; no filter was needed for GXL. SUVmax and SUVmaxEQ of the same target lesion in the pre- and post-treatment PET/CT were noted. For each PET pair, the metabolic response classification (responder/non-responder), derived from combining the EORTC response categories, was evaluated twice (with and without harmonization). In discordant cases, the association of each metabolic response classification with final clinical response assessment and survival data (2-year disease-free survival, DFS) was assessed.
Results: On Biograph, SUVmaxEQ of all target lesions was significantly lower (p = 0.001) than SUVmax (8.5 ± 6.8 vs 12.5 ± 9.6; - 38.6 %). A discordance between the two metabolic response classifications (harmonized and non-harmonized) was found in 19/95 (20 %) PET pairs. In this subgroup (n = 19; mean follow-up, 33.9 ± 9 months), responders according to harmonized classification (n = 9) had longer DFS (47.5 months, 88.9 %) than responders (n = 10) according to non-harmonized classification (26.3 months, 50.0 %; p = 0.01). Moreover, harmonized classification showed a better association with final clinical response assessment (17/19 PET pairs).
Conclusions: The harmonized metabolic response classification is more closely associated with the final clinical response assessment and better predicts DFS than the non-harmonized classification. EQ·PET is a useful harmonization tool for evaluating metabolic tumor response across different PET/CT scanners, whether in different departments or in multicenter studies.
abstract_id: PUBMED:36858187
PERCIST and Hopkins criteria and PET/CT metabolic parameters as prognostic factors in squamous head and neck cancer. Aim: To assess the clinical utility of PERCIST and Hopkins criteria and changes in [18F]FDG PET/CT quantitative parameters as prognostic factors for progression-free survival (PFS) and cancer-specific survival (CSS) in patients with head and neck squamous cell carcinoma treated by chemoradiotherapy.
Material And Methods: Forty patients (34 men) diagnosed with head and neck squamous cell carcinoma were retrospectively assessed over an interval of 8 years. PERCIST and Hopkins criteria were used to assess response to treatment. Variations in the metabolic parameters maximum SUV (ΔSUVmax), metabolic tumor volume (ΔMTV) and total lesion glycolysis (ΔTLG) between pre- and post-treatment PET/CT studies were also determined. The Cox regression model, ROC curves and the Kaplan-Meier method were used for the analysis of prognostic factors and survival curves.
Results: The mean follow-up was 39.4 months, with 24 progressions and 22 deaths. Both PERCIST and Hopkins criteria and the three metabolic parameters were predictive factors in the univariate analysis, and only ΔSUVmax remained significant in the multivariate analysis. Survival analysis showed statistically significant differences in PFS and CSS curves for the five parameters considered.
Conclusion: Application of PERCIST and Hopkins criteria as well as ΔSUVmax, ΔMTV and ΔTLG from PET/CT studies proved to be prognostic factors for survival in patients treated for head and neck cancer in our setting. The results may help to personalize treatment.
Answer: Yes, PET SUV harmonization does affect PERCIST response classification. Studies have shown that the use of different reconstruction algorithms can impact the classification of tumor response when using PERCIST criteria. For example, the use of point spread function (PSF) reconstruction affected the classification of tumor response, leading to discordances in PERCIST classification in a significant number of cases (PUBMED:27283930). Similarly, another study found that PERCIST classification is less sensitive to reconstruction algorithm-dependent variability than EORTC classification, but harmonizing SULs within the EARL program is equally effective with either (PUBMED:28560574).
The Japanese Harmonization Technology (J-Hart) study demonstrated that harmonizing SUV using vendor-neutral software improved SUVmax reproducibility and repeatability across different PET scanners (PUBMED:30128776). The EANM/EARL harmonization strategies in PET quantification also emphasize the importance of harmonizing quantitative PET/CT performance to ensure comparability of parameters such as SUVs, MATVs, or TLG in multicenter studies (PUBMED:28623376).
Furthermore, a study on monitoring anti-PD-1 therapy in patients with non-small cell lung cancer using SUV harmonization showed that tumor response based on harmonized response criteria was associated with overall survival (PUBMED:34370219). Another study found that PERCIST might be a significant predictor of outcomes when comparing PERCIST to RECIST criteria on PET/CT (PUBMED:24976990).
In summary, PET SUV harmonization is crucial for consistent PERCIST response classification across different PET scanners and reconstruction algorithms. Harmonization ensures that quantitative measurements are comparable, which is particularly important in multicenter studies and when assessing treatment response and prognosis in oncology patients. |
Instruction: Does higher radiation dose lead to better outcome for non-operated localized esophageal squamous cell carcinoma patients who received concurrent chemoradiotherapy?
Abstracts:
abstract_id: PUBMED:27207358
Does higher radiation dose lead to better outcome for non-operated localized esophageal squamous cell carcinoma patients who received concurrent chemoradiotherapy? A population based propensity-score matched analysis. Background: The optimal radiotherapy dose for non-operated localized esophageal squamous cell carcinoma (NOL-ESCC) patients undergoing concurrent chemoradiotherapy (CCRT) is hotly debated.
Methods: We identified eligible patients diagnosed during 2008-2013 from the Taiwan Cancer Registry and constructed a propensity score matched cohort (1:1 for high dose (≥60 Gy) vs standard dose (50-50.4 Gy)) to balance observable potential confounders. We compared the hazard ratio (HR) of death between the standard and high radiotherapy dose groups during the entire follow-up period. We performed sensitivity analysis (SA) to evaluate the robustness of our finding regarding potential unobserved confounders and the index date definition.
Results: Our study population comprised 648 patients with good balance in observed covariables. The HR of death when high dose was compared to standard dose was 0.75 (95% confidence interval 0.64-0.88). Our result was sensitive to potential unobserved confounders but robust to the alternative index date definition in SA.
Conclusions: We found that higher than standard radiotherapy dose may lead to better survival for NOL-ESCC patients undergoing CCRT.
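For readers unfamiliar with the design, the sketch below shows one common way to build a 1:1 propensity-score-matched cohort: fit a logistic model for treatment assignment, then greedily match each treated patient to the nearest untreated patient within a caliper. The covariates, caliper and matching rule are illustrative assumptions, not the registry study's actual procedure.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 3))                        # toy covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # confounded assignment

# Propensity score: probability of receiving the high dose given covariates.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Greedy 1:1 nearest-neighbour matching on the score, with a caliper.
controls = {i: ps[i] for i in np.flatnonzero(treated == 0)}
pairs, caliper = [], 0.05
for i in np.flatnonzero(treated == 1):
    if not controls:
        break
    j = min(controls, key=lambda k: abs(controls[k] - ps[i]))
    if abs(controls[j] - ps[i]) <= caliper:
        pairs.append((i, j))
        del controls[j]          # each control is used at most once
print(f"{len(pairs)} matched pairs")
```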
abstract_id: PUBMED:36351568
Combined modality therapy for patients with esophageal squamous cell carcinoma: Radiation dose and survival analyses. Background: We aimed to analyze the radiation dose and compare survival among combined modality therapy using modern radiation techniques for patients with esophageal squamous cell carcinoma (ESCC).
Methods: This retrospective study included patients with clinically staged T1-4N0-3M0 ESCC from 2014 to 2018. Patients who received combined modality therapies with curative intent were enrolled. The overall survival (OS) rates among combined modality therapy were compared. The clinical variables and impacts of radiation dose on survival were analyzed by the Kaplan-Meier method and Cox regression model.
Results: Of the 259 patients, 141 (54.4%) received definitive concurrent chemoradiotherapy (DCCRT); 67 (25.9%) underwent neoadjuvant chemoradiotherapy followed by surgery (NCRT+S); 51 (19.7%) obtained surgery followed by adjuvant chemoradiotherapy (S+ACRT). Two-year OS rates of the DCCRT, NCRT+S and S+ACRT group were 48.9, 61.5 and 51.2%. In the subgroup analysis of DCCRT group, the 2-year OS of patients receiving radiation dose 55-60 Gy was 57.1%. Multivariate analyses showed that clinical stage (p = 0.004), DCCRT with 55-60 Gy (p = 0.043) and NCRT+S with pathological complete response (pCR) (p = 0.014) were significant prognostic factors for better OS. The radiation dose-survival curve demonstrated a highly positive correlation between higher radiation dose and better survival.
Conclusion: Our results suggest that NCRT+S can provide a favorable survival for patients with ESCC, especially in patients who achieved pCR. The optimal radiation dose might be 55-60 Gy for patients receiving DCCRT via modern radiation techniques. Further randomized clinical studies are required to confirm the survival benefits between NCRT+S and DCCRT with escalated dose.
abstract_id: PUBMED:37169302
High-dose versus standard-dose radiotherapy in concurrent chemoradiotherapy for inoperable esophageal cancer: A systematic review and meta-analysis. Purpose: The aim of this study was to evaluate the effectiveness and safety of high-dose (HD-RT) versus standard-dose radiotherapy (SD-RT) in concurrent chemoradiotherapy (CCRT) for inoperable esophageal cancer (EC) patients.
Methods: A systematic search of the literature was conducted by screening PubMed, Web of Science, EMBASE and Cochrane Library databases before October 7, 2022 to collect controlled clinical studies of high-dose (≥60 Gy) and standard-dose (50-50.4 Gy) radiation in CCRT for EC. For statistical analysis, a fixed-effects model was used to synthesize HR and OR if there was no significant heterogeneity among studies; otherwise, a random-effects model was employed.
Results: Ten studies with 4625 patients were included, 3667 of whom (79.3%) had esophageal squamous cell carcinoma (ESCC). The HD-RT group had no significant benefit in overall survival (OS) (HR = 0.88, 95% confidence interval [CI] = 0.74-1.05, P = 0.16) or progression-free survival (HR = 0.84, 95%CI = 0.67-1.04, P = 0.12) in the total EC population, compared with the SD-RT group. However, in the ESCC subgroup analysis, a better OS was observed in the HD-RT group compared with the SD-RT group (HR = 0.78, 95%CI = 0.70-0.88, P < 0.0001).
Conclusion: Compared with a radiation dose of 50-50.4 Gy, an increased radiation dose (≥60 Gy) did not yield a survival benefit for inoperable EC patients receiving CCRT. However, in patients with ESCC, a high radiation dose (≥60 Gy) probably improved OS.
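The fixed-effects synthesis described in the methods pools log hazard ratios weighted by inverse variance, with each study's standard error recovered from its 95% CI. A sketch with hypothetical study inputs (not the actual trial data):
```python
import numpy as np

def pooled_hr(hrs, ci_los, ci_his):
    """Inverse-variance fixed-effect pooling of hazard ratios.

    The SE of each log HR is recovered from the 95% CI:
    se = (ln(hi) - ln(lo)) / (2 * 1.96).
    """
    log_hr = np.log(hrs)
    se = (np.log(ci_his) - np.log(ci_los)) / (2 * 1.96)
    w = 1.0 / se**2
    m = np.sum(w * log_hr) / np.sum(w)      # weighted mean log HR
    se_m = np.sqrt(1.0 / np.sum(w))
    return np.exp(m), np.exp(m - 1.96 * se_m), np.exp(m + 1.96 * se_m)

# Hypothetical per-study HRs (high vs standard dose) with 95% CIs.
hr, lo, hi = pooled_hr([0.75, 0.88, 0.68],
                       [0.64, 0.74, 0.56],
                       [0.88, 1.05, 0.83])
print(f"pooled HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```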
abstract_id: PUBMED:30270099
Retrospective analysis of safety profile of high-dose concurrent chemoradiotherapy for patients with oesophageal squamous cell carcinoma. Background And Purpose: To evaluate the safety profile and efficacy of high-dose (60 Gy) concurrent chemoradiotherapy (CCRT) compared with standard-dose (50.4-54 Gy) CCRT.
Materials And Methods: Patients with oesophageal squamous cell carcinoma (OSCC) undergoing CCRT were eligible for a propensity score matched cohort (1:1 for high dose versus standard dose). Adverse events, local control (LC) and overall survival (OS) were assessed.
Results: A total of 380 patients with good balance in observed covariables were enrolled. OS and LC rates of patients receiving high-dose CCRT were significantly higher than those of patients receiving standard-dose CCRT, with the 10-year OS at 24% versus 13.3%, respectively. In contrast, there was a trend towards increased grades 2-3 acute oesophagitis toxicity among patients receiving high-dose versus standard-dose CCRT (37.4% versus 27.9%, respectively). None experienced grade 5 acute oesophagitis, and grade 4 acute toxicities were rare. Similar rates of late radiation oesophagitis, radiation pneumonitis, gastrointestinal reactions and haematological toxicities were observed between patients receiving high-dose versus standard-dose CCRT. Six patients (3.2%) receiving high-dose CCRT and two (1.1%) receiving standard-dose CCRT experienced >grade 3 leucocytopaenia, whereas none experienced >grade 3 thrombocytopaenia or anaemia. Three patients (2.3%) receiving high-dose CCRT died of infections caused by myelosuppression. Multivariate analysis showed that anaemia is a significant independent predictor of poor prognosis.
Conclusions: Compared with standard-dose CCRT, high-dose CCRT yielded more favourable local control and survival outcomes for patients with OSCC. Grades 2-3 acute oesophagitis toxicity in patients undergoing high-dose CCRT increased, whereas severe, life-threatening toxicities (>grade 3) did not.
abstract_id: PUBMED:35273471
Radiotherapy Combined With Concurrent Nedaplatin-Based Chemotherapy for Stage II-III Esophageal Squamous Cell Carcinoma. Objective: This study was conducted to explore the appropriate radical radiation dose in concurrent chemoradiotherapy (CCRT) for patients with inoperable stage II-III esophageal squamous cell carcinoma (ESCC).
Methods: This retrospective study included patients with esophageal cancer (EC) from the database of patients treated at the Affiliated Zhangjiagang Hospital of Soochow University (1/2015-12/2019). Overall survival (OS), progression-free survival (PFS), objective remission rate (ORR), first failure pattern, and toxicities were collected.
Results: 112 patients treated with intensity-modulated radiation therapy (IMRT) combined with concurrent chemotherapy of nedaplatin-based regimens were included. Fifty-eight (51.8%) and 54 (48.2%) patients received 60 (HD) and 50.4 (LD) Gy of radiotherapy, respectively. The HD group showed superior OS and a trend for longer PFS compared with the LD group (median OS: 25.5 vs 17.5 months, P = .021; median PFS: 14.0 vs 10.5 months, P = .076). There were more patients with a complete remission (CR) in the HD group than in the LD group (P = .016). The treatment-related toxicities were generally acceptable, but HD radiotherapy increased the incidence of grade ≥3 late radiotoxicity (22.4% vs 5.6%, P = .011).
Conclusion: In nedaplatin-based CCRT for stage II-III ESCC, the radiotherapy dose of 60 Gy achieved a better prognosis.
Strengths And Limitations Of This Study: A comparative study of 50.4 Gy and 60 Gy was conducted to evaluate whether 50.4 Gy can be used as a radical radiotherapy dose for inoperable stage II-III esophageal squamous cell carcinoma from a real-world perspective. The highly consistent selection criteria in our study make the analysis results highly reliable and scientific. The existing research results support that nedaplatin can be used in concurrent chemoradiotherapy for esophageal squamous cell carcinoma, and this study focuses on the discovery of a better nedaplatin-based combination regimen. The findings of this study are limited by its single-center design and limited sample size. Inevitably, recall bias may exist in this retrospective study. Surgery was not involved in the follow-up treatment after concurrent chemoradiotherapy, which may worsen the prognosis of some patients.
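Several of the abstracts above rest on Kaplan-Meier survival estimates. The sketch below is a compact numpy implementation of the product-limit estimator, run on toy follow-up data rather than any study's records:
```python
import numpy as np

def kaplan_meier(time, event):
    """Product-limit survival estimate; event=1 for death, 0 for censoring."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    order = np.argsort(time)
    time, event = time[order], event[order]
    s, curve = 1.0, []
    for t in np.unique(time[event == 1]):      # distinct event times only
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk
        curve.append((t, s))
    return curve

# Toy follow-up (months); 0 = still alive at last contact (censored).
months = [5, 8, 8, 12, 17, 21, 25, 25, 30, 34]
died   = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
for t, s in kaplan_meier(months, died):
    print(f"{t:>4.0f} mo  S(t) = {s:.2f}")
```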
abstract_id: PUBMED:32489478
Do Higher Radiation Doses with Concurrent Chemotherapy in the Definitive Treatment of Esophageal Cancer Improve Outcomes? A Meta-Analysis and Systematic Review. Background: To investigate the effects and safety profile of radiation dose escalation utilizing computerized tomography (CT) based radiotherapy techniques (including 3-Dimensional conformal radiotherapy, intensity-modulated radiotherapy and proton therapy) in the definitive treatment of patients with esophageal carcinoma (EC) with definitive concurrent chemoradiotherapy (dCCRT). Methods: All relevant studies utilizing CT-based radiation planning and comparing high-dose (≥ 60 Gy) versus standard-dose (50.4 Gy) radiation for patients with EC were analyzed in this meta-analysis. Results: Eleven studies including 4946 patients met the inclusion criteria, with 96.5% of patients diagnosed with esophageal squamous cell carcinoma (ESCC). The high-dose group demonstrated a significant improvement in local-regional failure (LRF) (OR 2.199, 95% CI 1.487-3.253; P<0.001), two-year local-regional control (LRC) (OR 0.478, 95% CI 0.309-0.740; P=0.001), two-year overall survival (OS) (HR 0.744, 95% CI 0.657-0.843; P<0.001) and five-year OS (HR 0.683, 95% CI 0.561-0.831; P<0.001) rates relative to the standard-dose group. In addition, there was no difference in grade ≥ 3 radiation-related toxicities and treatment-related deaths between the groups. Conclusion: Under the premise of controlling the rate of toxicities, doses of ≥ 60 Gy in CT-based dCCRT of ESCC patients might improve locoregional control and ultimate survival compared to the standard-dose dCCRT. While our review supports a dose-escalation approach in these patients, initial and final reports from multiple ongoing randomized trials are awaited to evaluate the effectiveness of this strategy.
abstract_id: PUBMED:27689398
Comparative effectiveness of image-guided radiotherapy for non-operated localized esophageal squamous cell carcinoma patients receiving concurrent chemoradiotherapy: A population-based propensity score matched analysis. Background: Although concurrent chemoradiotherapy (CCRT) coupled with image-guided radiotherapy (IGRT) is associated with a theoretical benefit in non-operated localized esophageal squamous cell carcinoma (NOL-ESCC) patients, there is currently no clinical evidence to support this.
Materials And Methods: Eligible patients diagnosed between 2008 and 2013 were identified in the Taiwan Cancer Registry. A propensity score-matched cohort was constructed [1:1 in groups A (with IGRT) and B (without IGRT)] to balance any observable potential confounders. The hazard ratio (HR) for mortality was compared between groups A and B during the follow-up period. Sensitivity analyses (SA) were performed to evaluate the robustness of the findings regarding the selection of confounders and a potential unobserved confounder.
Results: The study population in the primary analysis comprised 866 patients who were well balanced in terms of their co-variables. The HR for mortality when group A was compared with group B was 0.82 (95% confidence interval, 0.7-0.95). SA revealed that the result was moderately sensitive.
Conclusions: The current results provide the first clinical evidence that CCRT coupled with IGRT is associated with better overall survival when compared with CCRT without IGRT in NOL-ESCC patients. However, this study should be interpreted with caution given its non-randomized nature and the moderate sensitivity of the data. Further studies are needed to clarify this finding.
abstract_id: PUBMED:25006285
A randomized study to compare sequential chemoradiotherapy with concurrent chemoradiotherapy for unresectable locally advanced esophageal cancer. Background: Chemotherapy combined with radiotherapy can improve outcome in locally advanced esophageal cancer.
Aim: This study aimed to compare efficacy and toxicity between concurrent chemoradiotherapy (CCRT) and sequential chemoradiotherapy (SCRT) in unresectable, locally advanced, esophageal squamous cell carcinoma (ESSC).
Materials And Methods: Forty-one patients with unresectable, locally advanced ESCC were randomized into two arms. In the CCRT arm (Arm A), 17 patients received 50.4 Gy at 1.8 Gy per fraction over 5.6 weeks along with concurrent cisplatin (75 mg/m² intravenously on day 1) and 5-fluorouracil (1000 mg/m² as a continuous intravenous infusion on days 1-4), starting on the first day of irradiation and repeated after 28 days. In the SCRT arm (Arm B), 20 patients received two cycles of chemotherapy, using the same schedule, followed by radiotherapy fractionated in a similar manner. The endpoints were tumor response, acute and late toxicities, and disease-free survival.
Results: With a median follow-up of 12.5 months, the complete response rate was 82.4% in Arm A and 35% in Arm B (P = 0.003). Statistically significant differences in the frequencies of acute skin toxicity (P = 0.016), gastrointestinal toxicity (P = 0.005) and late radiation pneumonitis (P = 0.002) were found, with higher frequencies in the CCRT arm. A modest but non-significant difference was observed in median time to recurrence among complete responders in the two arms (Arm A 13 months and Arm B 15.5 months, P = 0.167), and there was also no significant difference between the Kaplan-Meier survival plots (P = 0.641) of disease-free survival.
Conclusions: Compared to sequential chemoradiotherapy, concurrent chemoradiotherapy can significantly improve the local control rate, but with a greater risk of adverse reactions.
abstract_id: PUBMED:35530530
The Radiation Dose to the Left Supraclavicular Fossa is Critical for Anastomotic Leak Following Esophagectomy - A Dosimetric Outcome Analysis. Purpose: For locally advanced esophageal cancer, definitive concurrent chemoradiotherapy (CCRT) with a radiation dose of 50-50.4 Gy/25-28 Fx is prescribed, followed by adjuvant esophagectomy for better local control or salvage treatment if locoregional recurrence occurs. However, radiation injury before surgery may delay wound healing. We performed cervical anastomosis directly inside the left supraclavicular fossa (SCF), the irradiation target for esophageal cancer. The significance of radiation injury in patients with cervical anastomotic leak (AL) remains unclear. Thus, we assessed the influence of radiation on cervical AL in patients undergoing preoperative CCRT followed by esophagectomy.
Patients And Methods: We defined the SYC zone, a portion of the region overlapping the left SCF. The radiation dose to the SYC zone was analyzed and correlated with AL in patients with locally advanced esophageal squamous cell carcinoma (ESCC) who were administered preoperative CCRT (radiation dose of 50-50.4 Gy/25-28 Fx to the primary esophageal tumor) followed by esophagectomy between October 2009 and January 2018. Receiver operating characteristic curve analysis and logistic regression were used to identify the optimal radiation factor to predict AL and the cutoff value.
Results: The optimal radiation factor to predict AL was the mean dose to the SYC zone (area under the curve (AUC)=0.642), and the cutoff point of the mean dose was 48.55 Gray (Gy). For a mean SYC zone dose ≥48.55 Gy, the AL risk was sevenfold greater than that for <48.55 Gy (OR = 7.805; 95% CI: 1.184 to 51.446; P value = 0.033).
Conclusion: Recognizing the SYC zone as an organ at risk and evaluating its radiation dose are meaningful. Keeping the mean SYC-zone dose below 48.55 Gy results in a lower cervical AL rate following esophagectomy.
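Deriving a dose cutoff from a ROC curve, as done here for the mean SYC-zone dose, is commonly implemented by maximizing the Youden index (sensitivity + specificity − 1). A sketch with simulated doses and leak outcomes (the 48.55 Gy result above comes from the study's own data, not from this simulation):
```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
leak = rng.binomial(1, 0.25, size=120)            # anastomotic leak (0/1)
dose = rng.normal(46, 3, size=120) + 3.0 * leak   # mean zone dose in Gy;
                                                  # leaks get higher doses
fpr, tpr, thresholds = roc_curve(leak, dose)
best = np.argmax(tpr - fpr)                       # Youden index maximum
print(f"AUC = {roc_auc_score(leak, dose):.3f}, "
      f"cutoff = {thresholds[best]:.2f} Gy")
```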
abstract_id: PUBMED:30591503
Outcomes of Localized Esophageal Squamous Cell Carcinoma Patients Treated With Definitive Concurrent Chemoradiotherapy Using Either Standard or High Radiotherapy Dose: A Retrospective Study Controlling for Organ at Risk Dose. Background/aim: The optimal radiotherapy dose for localized esophageal squamous cell carcinoma (ESqCC) patients treated with definitive concurrent chemo-radiotherapy (CCRT) is debated. The aim of our study was to compare patient outcomes using either standard or high radiotherapy dose.
Materials And Methods: Eligible patients diagnosed between 2011 and 2015 from the cancer registry of our Institute were identified and a propensity score (PS)-matched cohort (1:1 for high vs. standard dose) was constructed to balance observable potential confounders (including organ at risk dose). The hazard ratio (HR) of death between high and standard dose was compared.
Results: Our study population included 73 patients before and 36 patients after PS matching. The HR of death at the high dose compared to the standard dose was 0.554 (95% confidence interval (CI)=0.308-0.998, p=0.049).
Conclusion: Definitive CCRT using a high radiotherapy dose showed improved survival outcomes for localized ESqCC patients compared to standard dose.
Answer: The evidence from the provided abstracts suggests that a higher radiation dose may lead to better outcomes for non-operated localized esophageal squamous cell carcinoma (ESCC) patients who receive concurrent chemoradiotherapy (CCRT). A population-based propensity-score matched analysis found that a higher radiotherapy dose (≥60 Gy) compared to a standard dose (50-50.4 Gy) was associated with a reduced hazard ratio (HR) of death (HR = 0.75), indicating better survival for patients undergoing CCRT (PUBMED:27207358). Similarly, another study suggested that the optimal radiation dose might be 55-60 Gy for patients receiving definitive concurrent chemoradiotherapy (DCCRT) using modern radiation techniques, with a 2-year overall survival (OS) rate of 57.1% for patients receiving this dose range (PUBMED:36351568).
A systematic review and meta-analysis also supported the notion that a higher dose (≥60 Gy) improved overall survival (OS) in patients with ESCC compared to the standard dose (50-50.4 Gy) (PUBMED:37169302). Furthermore, a retrospective analysis indicated that high-dose CCRT (60 Gy) yielded more favorable local control and survival outcomes for patients with oesophageal squamous cell carcinoma (OSCC) compared to standard-dose CCRT (50.4-54 Gy), although there was a trend towards increased grades 2-3 acute oesophagitis toxicity among patients receiving the high dose (PUBMED:30270099).
Another study found that a radiotherapy dose of 60 Gy achieved a better prognosis in nedaplatin-based CCRT for stage II-III ESCC, with superior overall survival and a trend for longer progression-free survival (PFS) compared to a lower dose of 50.4 Gy (PUBMED:35273471). A meta-analysis and systematic review also concluded that doses of ≥60 Gy in CT-based dCCRT of ESCC patients might improve locoregional control and ultimate survival compared to the standard-dose dCCRT, without an increase in grade ≥3 radiation-related toxicities and treatment-related deaths (PUBMED:32489478). |
Instruction: Acetylsalicylic acid-induced biochemical changes in gastric juice: a failure of adaptation?
Abstracts:
abstract_id: PUBMED:9465503
Acetylsalicylic acid-induced biochemical changes in gastric juice: a failure of adaptation? Background: Acetylsalicylic acid (ASA) causes gastric mucosal damage, which diminishes with continued use due to adaptation.
Methods: To determine the net effect of these processes on the gastric juice, we estimated acidity, osmolality, bicarbonate concentration in nonparietal gastric juice, and calcium, potassium and sodium in 18 patients (9 men; mean age 32 years, range 20-46) with irritable bowel syndrome, before and after 600 mg of ASA taken after meals (post cibum) three times daily for 4 weeks. Osmolality was determined by an osmometer, acidity by titration, and Na+, K+ and Ca++ using a sodium-potassium-calcium analyzer; bicarbonate was derived from the two-component model of Feldman.
Results: Gastric juice K+ and Na+ increased significantly from mean (SE) 14.6 (0.5) and 197.5 (16.3) to 16.7 (0.4) and 256.8 (18.1) mEq/L, respectively. The other parameters remained unchanged.
Conclusion: After four weeks of ASA ingestion there is a dichotomy of gastric mucosal injury and adaptation, with preservation of acid secretion but continued loss of Na+ and K+.
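Pre- and post-ASA electrolyte concentrations of this kind are naturally analyzed as paired, per-patient measurements. The abstract does not state which test was used, so the paired t-test in this sketch, and the synthetic values, are assumptions for illustration:
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 18
k_pre = rng.normal(14.6, 0.5 * np.sqrt(n), n)   # mEq/L; SE*sqrt(n) gives SD
k_post = k_pre + rng.normal(2.1, 1.0, n)        # mean rise of the order seen

t, p = stats.ttest_rel(k_pre, k_post)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```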
abstract_id: PUBMED:3260568
Gastric adaptation. Studies in humans during continuous aspirin administration. To study the process of gastric mucosal adaptation to aspirin administration, 14 normal men underwent a study with continued administration of aspirin. Endoscopic assessment, biopsies, and gastric wash collections for acid, mucus, and deoxyribonucleic acid recovery were performed weekly; aspirin was continued until the endoscopy showed minimal damage. Six subjects took 650 mg of aspirin b.i.d., and 8 took 650 mg q.i.d.; adaptation and resolution took longer with the higher dose (median 4.5 wk vs. 1 wk, p < 0.01). Despite improvement in mucosal appearance, gastric microbleeding remained elevated throughout aspirin administration. In contrast, deoxyribonucleic acid recovery (a marker for cellular exfoliation and regeneration) increased significantly just before the time of resolution, when, on average, it more than doubled. As no other biochemical or histologic changes could be associated with the resolution of damage, we conclude that gastric adaptation to chronic injury may involve increased cellular regeneration.
abstract_id: PUBMED:1290058
Gastric adaptation to nonsteroidal anti-inflammatory drugs in man. Adaptation describes the phenomenon in which visible gastric mucosal injury lessens or resolves completely despite continued administration of an injurious substance such as aspirin. Adaptation occurs in man although the mechanism remains unclear. Recent evidence suggests increased cell proliferation and correction of nonsteroidal anti-inflammatory drug induced reduction in gastric blood flow as possibly being important. Gastric erosions and ulcers in chronic nonsteroidal anti-inflammatory drug users represent failed adaptation. The factors responsible for failure of adaptation are unknown but one clue is that there appears to be a dose-response effect relating anti-inflammatory dose and effectiveness of adaptation (i.e., adaptation is delayed, or less effective, when higher anti-inflammatory doses are administered). Gastric adaptation can be enhanced by co-therapy with synthetic prostaglandins but not with sucralfate or H2-receptor antagonists.
abstract_id: PUBMED:7959223
Mucosal adaptation to aspirin-induced gastric damage in humans. Studies on blood flow, gastric mucosal growth, and neutrophil activation. The gastropathy associated with the ingestion of non-steroidal anti-inflammatory drugs (NSAIDs) such as aspirin is a common side effect of this class of drugs, but the precise mechanisms by which they cause mucosal damage have not been fully explained. During continued use of an injurious substance, such as aspirin, the extent of gastric mucosal damage decreases, and this phenomenon is termed gastric adaptation. To assess the extent of mucosal damage by aspirin and subsequent adaptation, the effects of 14 days of continuous, oral administration of aspirin (2 g per day) to eight healthy male volunteers were studied. To estimate the rate of mucosal damage, gastroscopy was performed before (day 0) and on days 3, 7 and 14 of aspirin treatment. Gastric microbleeding and gastric mucosal blood flow were measured using a laser Doppler flowmeter, and mucosal biopsy specimens were taken for the estimation of tissue DNA synthesis and RNA and DNA concentration. In addition, the activation of neutrophils in peripheral blood was assessed by measuring their ability to associate with platelets. Aspirin induced acute damage mainly in the gastric corpus, reaching about 3.5 on the endoscopic Lanza score at day 3 but lessening to about 1.5 at day 14, pointing to the occurrence of gastric adaptation. Mucosal blood flow increased at day 3 by about 50% in the gastric corpus and by 88% in the antrum. The in vitro DNA synthesis and RNA concentration, an index of mucosal growth, were reduced at day 3 but then increased to reach about 150% of the initial value at the end of aspirin treatment. It is concluded that treatment with aspirin in humans induces gastric adaptation to this agent, which entails an increase in mucosal blood flow, a rise in neutrophil activation, and enhanced mucosal growth.
abstract_id: PUBMED:28976678
Gastric adaptation to aspirin and Helicobacter pylori infection in man. The relationship between Helicobacter pylori infection and aspirin (ASA)-induced gastropathy and gastric adaptation to ASA remains unclear. We compared gastric damage and adaptation after repeated exposures to ASA in the same subjects without H. pylori infection and those infected by H. pylori before and after eradication of this bacterium. Twenty-four volunteers in two groups (A and B), without H. pylori infection (group A) and with H. pylori infection (group B) before and after H. pylori eradication, were given ASA 2 g/day or placebo for 14 days. Mucosal damage was evaluated by endoscopy and gastric microbleeding; mucosal prostaglandin (PG) E2 generation and luminal transforming growth factor (TGF)α were determined on days 0, 3, 7, and 14 of the ASA course. In all subjects, ASA-induced gastric damage reached a maximum on day 3. In H. pylori-positive subjects this damage was maintained at a similar level up to the 14th day of observation. Following H. pylori eradication, the damage was significantly lessened at day 14, as revealed by both endoscopy and microbleeding, and was accompanied by increased mucosal release of TGFα. Prostaglandin E2 generation was significantly higher in H. pylori-positive subjects than after H. pylori eradication, but ASA treatment resulted in greater than 90% reduction of this generation independent of H. pylori status. Gastric adaptation to ASA is impaired in H. pylori-positive subjects but eradication of this bacterium restores this process.
abstract_id: PUBMED:9376621
Helicobacter pylori and gastric adaptation to repeated aspirin administration in humans. The gastric irritant properties of nonsteroidal anti-inflammatory drugs (NSAIDs) are well established but the pathogenic mechanisms by which these agents damage the mucosa or delay its repair are poorly understood. The phenomenon of gastric adaptation after repeated exposures to ASA is well documented but the involvement of Helicobacter pylori (H. pylori) in NSAID-induced gastropathy and adaptation has not been elucidated. The aim of this study was 1) to compare the gastric damage in response to repeated exposures to ASA in the same subjects before and after eradication of H. pylori and 2) to examine the morphological and functional changes of gastric mucosa during the 14-day treatment with ASA in H. pylori-infected subjects before and after eradication of this bacterium. Eight healthy volunteers (age 19-28) with H. pylori infection were given ASA 1 g twice daily for 14 days before and after H. pylori eradication. Mucosal damage was evaluated by endoscopy before and at 3, 7, and 14 days of ASA administration using a modified Lanza score. During endoscopy, mucosal biopsies were obtained for determination of DNA synthesis, by measuring 3H-thymidine incorporation into DNA. Prior to each endoscopy, gastric microbleeding was determined in three consecutive gastric washings. Three months after successful eradication of H. pylori, confirmed by 13C-urea breath test and mucosal rapid urease test, the same subjects again received a 14-day treatment with ASA and underwent the same examinations as prior to the therapy. In all subjects, ASA administration induced acute gastric damage, with the endoscopic Lanza score reaching a maximum at day 3. In H. pylori-positive subjects, this damage was maintained at a similar level up to day 14, whereas in H. pylori-eradicated subjects, this damage was lessened at day 14 by about 60-75%. Gastric microbleeding also reached its maximum at day 3 of ASA treatment, being significantly higher in H. pylori-eradicated subjects than in those with H. pylori infection. This microbleeding decreased to almost normal values by the end of the study in all H. pylori-negative subjects but remained significantly elevated in H. pylori-infected subjects. DNA synthesis before and following ASA administration was significantly higher in subjects after H. pylori eradication than in those with H. pylori infection. Moreover, this DNA synthesis showed a significant increase at day 7 of ASA administration only in H. pylori-eradicated subjects. We conclude that: 1) gastric adaptation to ASA is impaired in H. pylori-positive subjects but eradication of H. pylori restores this adaptation, 2) the DNA synthesis and possibly also mucosal cell turnover in response to ASA are suppressed in H. pylori infection and this can be reversed by eradication of H. pylori.
abstract_id: PUBMED:8565764
Gastric mucosal adaptation to diclofenac injury. Adaptation occurs to the gastric injury produced by nonsteroidal antiinflammatory drugs during continued dosing. The aim of this study was to identify characteristics of this phenomenon that might help in the search for underlying mechanisms. The time frame for onset and offset of adaptation to diclofenac (damage assessed planimetrically) was examined in rats. Adaptation to oral diclofenac took three to five days to develop and persisted for up to five days after the last dose. It was also demonstrable after subcutaneous dosing or when injury was measured by a change in mucosal potential difference. Diclofenac-adapted rats were protected against injury induced by subsequent exposure to ethanol, indomethacin, aspirin, or piroxicam, indicating that adaptation is not specific to injury by the adapting agent. This cross-adaptation was dose-dependent and characterized histologically by a reduction in deep damage. In conclusion, gastric adaptation to diclofenac is mediated by mechanisms that take several days to develop and to be lost. The route of administration appears to be unimportant, but the development of both adaptation and cross-adaptation is influenced by dosage size.
abstract_id: PUBMED:7516882
Role of neutrophils and mucosal blood flow in gastric adaptation to aspirin. Gastric mucosa adapts to the ulcerogenic action of aspirin but the mechanism of this phenomenon is unknown. In this study, acute gastric lesions were produced by single or repeated oral administration of acidified aspirin in rats with intact or deactivated (by capsaicin) sensory nerves and with intact or suppressed nitric oxide (NO) synthase. A single oral dose of aspirin produced a dose-dependent increase in the area of gastric lesions, accompanied by a significant increase in blood neutrophils, neutrophil infiltration into the mucosa, leukotriene B4 formation and almost complete suppression of prostaglandin synthesis. After repeated administration of aspirin, the mucosal damage progressively declined and this was accompanied by a significant augmentation in gastric blood flow. In addition, a reduction in blood neutrophil count, mucosal neutrophil infiltration and leukotriene B4 release was observed during this adaptation of the stomach to repeated aspirin insults. Capsaicin denervation of sensory nerves aggravated the damage induced by the first exposure of the stomach to aspirin and caused a significant reduction in gastric blood flow, but with repeated aspirin administration, gastric adaptation to this agent and a rise in gastric blood flow were observed. Pretreatment with NG-nitro-L-arginine (L-NNA), a specific inhibitor of NO synthase, eliminated the hyperemic response to repeated aspirin insults but failed to affect the adaptation to aspirin. We conclude that the rat stomach adapts readily to repeated aspirin insults despite sustained inhibition of prostaglandin biosynthesis and this adaptation appears to be mediated by a significant increase in gastric blood flow and a reduction in neutrophil activation and leukotriene B4 release.
abstract_id: PUBMED:7840196
Adaptation of rat gastric mucosa to aspirin requires mucosal contact. Adaptation of the gastric mucosa to repeated administration of aspirin is a well-documented phenomenon, but the underlying mechanism is not fully understood. In this study, we tested the hypothesis that adaptation of the rat stomach to chronic aspirin administration required contact between the aspirin and the gastric mucosa. Rats were orally treated twice daily with either aspirin (100 mg/kg) or the vehicle. After various periods of treatment (≤20 days), the rats were given a higher dose of aspirin (250 mg/kg po), and the extent of gastric damage was assessed 3 h later. Rats receiving chronic aspirin demonstrated the development, in a time-dependent manner, of resistance to the damaging effects of aspirin. Chronic aspirin administration also significantly decreased the susceptibility of the rat stomach to damage induced by indomethacin or naproxen. The adaptation phenomenon was associated with a parallel increase in inflammatory infiltration of the mucosa, as measured by tissue myeloperoxidase activity and histology. Prostaglandin synthesis was markedly suppressed (> 80%) in all rats treated with aspirin. Gastric mucosal ornithine decarboxylase activity was not affected by chronic aspirin administration. If aspirin was administered subcutaneously or intrajejunally for 20 days, neither adaptation nor inflammation of the gastric mucosa was observed. These studies demonstrate that the rat stomach adapts to chronic oral administration of aspirin, but not to aspirin administration via other routes. Adaptation of the gastric mucosa occurred in parallel to infiltration of granulocytes. Whether these two phenomena are mechanistically or causally linked is not yet clear.
abstract_id: PUBMED:8658032
Gastric adaptation to aspirin and stress enhances gastric mucosal resistance against the damage by strong irritants. Background: Gastric mucosal adaptation to injury induced by repeated application of aspirin (ASA) or stress is a well-documented phenomenon, but it is not known whether such adaptation affects the mucosal tolerance to other strong irritants.
Methods: In this study gastric adaptation was induced by repeated daily administration of acidified ASA for 4 consecutive days (series A) or by 3.5 h of water immersion and restraint stress (WRS) applied every other day for up to 8 days (series B). When the adaptation to ASA or WRS was fully developed, rats of series A and B were challenged with strong irritants such as 100% ethanol, 200 mM acidified taurocholate (TC), or 25% NaCl for 1 h or with WRS for 3.5 h.
Results: ASA or WRS applied once produced numerous gastric lesions and deep histologic necrosis accompanied by a decrease in gastric blood flow. With repeated application of ASA or stress, mucosal adaptation to ASA and WRS developed; the area of gastric lesions was reduced by 86% and 56%, respectively, and this was accompanied by a marked decrease of superficial and deep necrosis, an increase in gastric blood flow (GBF), and enhancement of mucosal regeneration. An increase in mucosal and luminal contents of epidermal growth factor (EGF) and in mucosal expression of EGF receptors was also observed in the mucosa adapted to ASA or stress. In rats adapted to ASA or stress and then challenged with 100% ethanol, 200 mM TC, 25% NaCl, stress or ASA, the areas of macroscopic gastric lesions and deep histologic necrosis were remarkably reduced as compared with those in non-adapted vehicle-treated rats. This was also accompanied by a significant decrease in GBF, a marked increase of mucosal and luminal contents of EGF and expression of its receptors, and enhanced mucosal cell proliferation.
Conclusions: Gastric adaptation to ASA or stress enhances mucosal resistance to the injury induced by strong irritants, and this appears to be mediated by mucosal regeneration, probably resulting from increased luminal and mucosal contents of EGF and excessive expression of its receptors.
Answer: The study of acetylsalicylic acid (ASA)-induced biochemical changes in gastric juice suggests that there may be a failure of adaptation in some aspects of gastric mucosal response. While gastric mucosal damage caused by ASA diminishes with continued use due to adaptation, there is evidence that not all changes adapt over time. Specifically, after four weeks of ASA ingestion, there was a significant increase in gastric juice potassium (K+) and sodium (Na+), indicating continued loss of these electrolytes despite the preservation of acid secretion (PUBMED:9465503). This dichotomy suggests that while some adaptation occurs, it may not be complete, and certain parameters, such as electrolyte loss, may persist.
Other studies have shown that gastric adaptation to chronic injury from ASA may involve increased cellular regeneration (PUBMED:3260568), and that adaptation is impaired in the presence of Helicobacter pylori infection but can be restored following the eradication of the bacterium (PUBMED:9376621; PUBMED:28976678). Additionally, adaptation can be enhanced by co-therapy with synthetic prostaglandins but not with sucralfate or H2-receptor antagonists (PUBMED:1290058).
Furthermore, the process of adaptation has been associated with increased mucosal blood flow, neutrophil activation, and mucosal growth (PUBMED:7959223), and it has been suggested that adaptation to ASA requires mucosal contact (PUBMED:7840196). The adaptation phenomenon also appears to enhance mucosal resistance to injury induced by other strong irritants, mediated by mucosal regeneration and increased expression of epidermal growth factor (EGF) and its receptors (PUBMED:8658032).
In conclusion, while there is evidence of gastric mucosal adaptation to ASA, the observed biochemical changes in gastric juice, specifically the continued loss of Na+ and K+, suggest that the adaptation may not be complete or uniform across all physiological parameters. |
Instruction: Are caregiving responsibilities associated with non-attendance at breast screening?
Abstracts:
abstract_id: PUBMED:21129196
Are caregiving responsibilities associated with non-attendance at breast screening? Background: Previous research showed that deprived individuals are less likely to attend breast screening and those providing intense amounts of informal care tend to be more deprived than non-caregivers. The aim of this study was to examine the relationship between informal caregiving and uptake of breast screening and to determine if socio-economic gradients in screening attendance were explained by caregiving responsibilities.
Methods: A database of breast screening histories was linked to the Northern Ireland Longitudinal Study, which links information from census, vital events and health registration datasets. The cohort included women aged 47 - 64 at the time of the census eligible for breast screening in a three-year follow-up period. Cohort attributes were recorded at the Census. Multivariate logistic regression was used to examine the relationship between informal caregiving and uptake of screening using STATA version 10.
Results: 37,211 women were invited for breast screening, of whom 27,909 (75%) attended; 23.9% of the cohort were caregivers. Caregivers providing <20 hours of care/week were more affluent, while those providing >50 hours/week were more deprived than non-caregivers. Deprived women were significantly less likely to attend breast screening; however, this was not explained by caregiving responsibilities as caregivers were as likely as non-caregivers to attend (odds ratio 0.97; 95% confidence interval 0.88-1.06).
Conclusions: While those providing the most significant amounts of care tended to be more deprived, caregiving responsibilities themselves did not explain the known socio-economic gradients in breast screening attendance. More work is required to identify why more deprived women are less likely to attend breast screening.
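To make the analysis above concrete, the sketch below shows how an adjusted odds ratio with a 95% confidence interval is typically obtained from a multivariate logistic regression. It is a minimal illustration on simulated data, not the study's dataset; the variable names, prevalences, and effect sizes are all assumptions.

```python
# Minimal sketch of the logistic-regression analysis described above,
# using simulated data; variable names and effect sizes are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
caregiver = rng.binomial(1, 0.24, n)      # ~24% caregivers, as in the cohort
deprivation = rng.integers(1, 6, n)       # hypothetical deprivation quintile (1-5)

# Simulate attendance: deprivation lowers attendance, caregiving has no effect
logit_p = 1.6 - 0.25 * deprivation + 0.0 * caregiver
attended = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(pd.DataFrame({"caregiver": caregiver,
                                  "deprivation": deprivation}))
fit = sm.Logit(attended, X).fit(disp=0)

# Adjusted odds ratios with 95% confidence intervals
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)  # caregiver OR should be close to 1 (no independent effect)
```

An odds ratio near 1 with a confidence interval spanning 1, as reported above (0.97; 0.88-1.06), is exactly the pattern indicating no independent association between caregiving and attendance.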
abstract_id: PUBMED:35137088
Factors determining non-attendance in breast cancer screening among women in the Netherlands: a national study. Breast cancer is one of the most common types of cancer among women. National mammography screening programs can detect breast cancer early, but attendance rates have been decreasing in the Netherlands over the past decade. Non-attendees reported that overdiagnosis, the risk of false-negative results, x-ray exposure and mammography pain could be barriers to attendance, but it is not clear whether these disadvantages explain non-attendance and in which situations they are considered barriers. We conducted a national survey among 1227 Dutch women who did not attend mammography screening appointments in 2016. Logistic regression models were used to identify factors that influenced the likelihood of the abovementioned disadvantages leading to non-attendance. The results showed that the doctor's opinion increased the likelihood of the risk of false-negative results being perceived as a reason for non-attendance. Moreover, opportunistic screening increased the likelihood that the risk of false-negative results, overdiagnosis, and x-ray exposure would lead to non-attendance. Women with lower education levels were less likely to consider overdiagnosis and x-ray exposure as reasons for non-attendance, while women who had not undergone mammography screening before were more likely to reject the screening invitation because of concerns about x-ray exposure and mammography pain. These findings indicate how we can address the specific concerns of different groups of women in the Netherlands to encourage them to attend potentially life-saving breast screening appointments. Screening organizations could provide accurate and unbiased information on the effectiveness of mammography screening to GPs, putting them in a better position to advise their patients.
abstract_id: PUBMED:27663642
Breast cancer screening attendance in two Swiss regions dominated by opportunistic or organized screening. Background: In Switzerland, the French-speaking region has an organized breast cancer (BC) screening program; in the German-speaking region, only opportunistic screening until recently had been offered. We evaluated factors associated with attendance to breast cancer screening in these two regions.
Methods: We analyzed the data of 50-69 year-old women (n = 2769) from the Swiss Health Survey 2012. Factors of interest included education level, place of residence, nationality, marital status, smoking history, alcohol consumption, physical activity, diet, self-perceived health, history of chronic diseases and mental distress, visits to medical doctors and cervical and colorectal cancer screening. Outcome measures were dichotomized into ≤2 years since most recent mammography versus >2 years or never.
Results: In the German- and French-speaking regions, mammography attendance within the last two years was 34.9 % and 77.8 %, respectively. In the French region, moderate alcohol consumption (adjusted OR 2.01, 95 % CI 1.28-3.15) increased screening attendance. Compared to those with no visit to a physician during the previous year, women in both regions with such visits attended BC screening significantly more often (1-5 visits vs. no visit: German: adjusted OR 3.96, 95 % CI 2.58-6.09; French: OR 7.25, 95 % CI 4.04-13.01). Non-attendance at cervical screening had a negative effect in both the German (adjusted OR 0.44, 95 % CI 0.25-0.79) and the French region (adjusted OR 0.57, 95 % CI 0.35-0.91). The same was true for colorectal cancer screening (German: adjusted OR 0.66, 95 % CI 0.52-0.84; French: OR 0.52, 95 % CI 0.33-0.83). No other factor was associated with BC screening and none of the tests of interaction comparing the two regions revealed statistically significant results.
Conclusion: The effect of socio-demographic characteristics, lifestyle, health factors and screening behavior other than mammography on non-attendance at BC screening did not differ between the two regions with mainly opportunistic and organized screening, respectively, and did not explain the large differences in attendance between regions. Other potential explanations such as public promotion of attendance for BC screening, physicians' recommendations regarding mammography participation or women's beliefs should be further investigated.
abstract_id: PUBMED:33808101
Organized Breast and Cervical Cancer Screening: Attendance and Determinants in Southern Italy. The aims of this study were to evaluate attendance at breast and cervical cancer screening and the related determinants in a low-attendance area. A cross-sectional study was conducted among mothers of students attending secondary schools and university courses in the Campania region, Southern Italy. Only 49.7% of the eligible women reported having undergone mammography in the previous two years. Unemployed women, those unsatisfied with their health status, those with a family history of breast cancer, and those who had visited a physician in the previous 12 months were significantly more likely to have undergone mammography in the previous two years within an organized screening program. Attendance at cervical cancer screening within the previous three years was reported by 56.1% of women. Having less than a graduation degree, being a smoker, and having visited a physician in the previous 12 months were significant predictors of having had a Pap-smear in the previous three years in an organized screening program. In this study, very low attendance was found at both breast and cervical cancer organized screening programs. A strong commitment to their promotion is urgently needed, also to reduce inequalities in attendance among disadvantaged women.
abstract_id: PUBMED:32222788
Having caregiving responsibilities affects management of fragility fractures and bone health. In this secondary analysis of six qualitative studies, we found that approximately one-quarter of individuals with fragility fracture were serving as informal caregivers. The caregiving role appeared to be a cause of the fracture for some and was prioritized over bone health, acting as a barrier to bone health management.
Introduction: Among fragility fracture patients serving as informal caregivers, our objective was to examine how caregiving responsibilities were associated with, and possibly impacted by, the fracture experience and the resulting management of bone health.
Methods: We conducted a secondary analysis (amplified analysis) of six qualitative studies to understand caregiver responsibilities and the relationship between these responsibilities and patients' management of the fracture and bone health. The primary studies and the secondary analysis were conducted from a phenomenological approach. Eligible individuals in the primary studies were English-speaking men and women who were 45+ years old recruited from three settings (local, provincial, and national).
Results: Without being prompted to talk about their experience of caregiving, 33 of 145 (23%) individuals reported they were providing care to a family member or friend at the time of their fracture or during recovery post-fracture. The experience of having caregiving responsibilities was related to the fracture and bone health in two ways: (1) the caregiving role appeared to be a cause of the fracture in some participants and (2) caregiving was prioritized over participants' own bone health and was a barrier to bone health management.
Conclusion: Fragility fracture is associated with, and potentially leads to an impairment of, an important social role in patients providing physical and emotional support and supervision for dependents as caregivers. Further, an important cause of fragility fracture can occur in the act of caregiving.
abstract_id: PUBMED:29059006
Lower attendance rates in immigrant versus non-immigrant women in the Norwegian Breast Cancer Screening Programme. Objective: The Norwegian Breast Cancer Screening Programme invites women aged 50-69 to biennial mammographic screening. Although 84% of invited women have attended at least once, attendance rates vary across the country. We investigated attendance rates among various immigrant groups compared with non-immigrants in the programme.
Methods: There were 4,053,691 invitations sent to 885,979 women between 1996 and 2015. Using individual level population-based data from the Cancer Registry and Statistics Norway, we examined percent attendance and calculated incidence rate ratios, comparing immigrants with non-immigrants, using Poisson regression, following women's first invitation to the programme and for ever having attended.
Results: Immigrant women had lower attendance rates than the rest of the population, both following the first invitation (53.1% versus 76.1%) and for ever having attended (66.9% versus 86.4%). Differences in attendance rates between non-immigrant and immigrant women were less pronounced, but still present, when adjusted for sociodemographic factors. We also identified differences in attendance between immigrant groups. Attendance increased with duration of residency in Norway. A subgroup analysis of migrants' daughters showed that 70.0% attended following the first invitation, while 82.3% had ever attended.
Conclusions: Immigrant women had lower breast cancer screening attendance rates. The rationale for immigrant women's non-attendance needs to be explored through further studies targeting women from various birth countries and regions.
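Attendance comparisons of the kind reported above are often summarized as incidence rate ratios from a Poisson regression, with the number of invitations entering as the exposure. The sketch below is a minimal illustration on simulated data; the variable names, prevalences, and rates are assumptions, not values from the registry study.

```python
# Minimal sketch of an incidence-rate-ratio analysis of screening attendance
# via Poisson regression; all data and names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 10000
immigrant = rng.binomial(1, 0.15, n)      # hypothetical group indicator
invitations = rng.integers(1, 11, n)      # invitations received (exposure)

# Simulate attended examinations: immigrants attend at a lower rate
rate = np.exp(-0.4 - 0.35 * immigrant)
attendances = rng.poisson(rate * invitations)

X = sm.add_constant(pd.DataFrame({"immigrant": immigrant}))
fit = sm.GLM(attendances, X,
             family=sm.families.Poisson(),
             exposure=invitations).fit()

irr = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
irr.columns = ["IRR", "2.5%", "97.5%"]
print(irr)  # immigrant IRR < 1: fewer attendances per invitation
```

Unlike the odds-ratio sketch earlier, the exposure term here accounts for women receiving different numbers of invitations over the study period.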
abstract_id: PUBMED:36429866
Caregiving Responsibilities and Mental Health Outcomes in Young Adult Carers during the COVID-19 Pandemic: A Longitudinal Study. This study investigated caregiving responsibilities and associated mental health outcomes in young adult carers during the COVID-19 pandemic and had three aims: (1) to investigate differences in caregiving responsibilities across two groups of young adult carers (parental illness context vs. ill non-parent family member context) relative to non-carers, (2) to identify COVID-19/lockdown correlates of caregiving responsibilities, and (3) to examine the longitudinal associations between caregiving responsibilities and mental health outcomes. Of the 1048 Italians aged 18-29 (Mage = 24.48, SDage = 2.80; 74.33% female) who consented to complete online surveys at Time 1, 813 reported no ill family member (non-carers). Young adult carers included 162 with an ill parent and 73 with an ill non-parent family member. The study included 3 time points: 740 participants completed Time 2 assessment (Mage = 24.35, SDage = 2.81; 76.76% female), while 279 completed Time 3 assessment (Mage = 24.78, SDage = 2.72; 79.93% female). Key variables measured were 13 COVID-19/lockdown factors at Times 1 and 2, caregiving responsibilities at Time 2, and mental health outcomes at Time 3 (fear of COVID-19, anxiety, depression, wellbeing). Two COVID-19/lockdown factors were significantly correlated with higher caregiving responsibilities: insufficient home space, and greater time spent working and learning from home. As predicted, young adult carers reported higher caregiving responsibilities than non-carers, and this effect was greater in young adults caring for an ill parent compared to young adults caring for an ill non-parent family member. As expected, irrespective of family health status, caregiving responsibilities were longitudinally related to poorer mental health outcomes, operationalised as higher fear of COVID-19, anxiety, and depression, and lower wellbeing. Elevated young adult caregiving is an emerging significant public health issue that should be addressed through a multipronged approach that includes education about young adult carer needs for personnel across all relevant sectors and flexible care plans for ill family members that include a 'whole family' biopsychosocial approach.
abstract_id: PUBMED:24317356
Attendance of the fourth (2008-2009) screening round of the Hungarian organized, nationwide breast cancer screening program Introduction: Organised, nationwide screening for breast cancer with mammography, targeting women aged 45 to 65 years with a 2-year screening interval, started in Hungary in January 2002.
Aim: The aim of this study is to analyze the attendance rate of nationwide breast screening programme for the 2008-2009 years.
Method: The data derive from the database of the National Health Insurance Fund Administration. The proportion of women aged 45-65 years who had either a screening mammography or a diagnostic mammography in the 4th screening round of the programme was calculated.
Results: In the years 2000-2001, 7.6% of the women had an opportunistic screening mammography, while in 2008-2009, 31.2% of the target population had a screening mammography within the organized programme. During the same periods 20.2% (2000-2001) and 20.4% (2008-2009) of women had a diagnostic mammography. Thus the total (screening and diagnostic) coverage of mammography increased from 26.6% (2000-2001) to 50.1% (2008-2009). The attendance rate did not change between 2002 and 2009.
Conclusions: In order to decrease the mortality due to breast cancer, the attendance rate of mammography screening programme should be increased. Orv. Hetil., 154(50), 1975-1983.
abstract_id: PUBMED:37691575
Early performance measures following regular versus irregular screening attendance in the population-based screening program for breast cancer in Norway. Objective: Irregular attendance in breast cancer screening has been associated with higher breast cancer mortality compared to regular attendance. Early performance measures of a screening program following regular versus irregular screening attendance have been less studied. We aimed to investigate early performance measures following regular versus irregular screening attendance.
Methods: We used information from 3,302,396 screening examinations from the Cancer Registry of Norway. Examinations were classified as regular or irregular. Regular was defined as an examination 2 years ± 6 months after the prior examination, and irregular examination >2 years and 6 months after prior examination. Performance measures included recall, biopsy, screen-detected and interval cancer, positive predictive values, and histopathological tumor characteristics.
Results: The recall rate was 2.4% (72,429/3,070,068) for regular and 3.5% (8217/232,328) for irregular examinations. The biopsy rate was 1.0% (29,197/3,070,068) for regular and 1.7% (3825/232,328) for irregular examinations, while the rate of screen-detected cancers was 0.51% (15,664/3,070,068) and 0.86% (2003/232,328), respectively. The adjusted odds ratio was 1.53 (95% CI: 1.49-1.56) for recall, 1.73 (95% CI: 1.68-1.80) for biopsy, and 1.68 (95% CI: 1.60-1.76) for screen-detected cancer after irregular examinations compared to regular examinations. The proportion of lymph node-positive tumors was 20.1% (2553/12,719) for regular and 25.6% (426/1662) for irregular examinations.
Conclusion: Irregular attendance was linked to higher rates of recall, needle biopsies, and cancer detection. Cancers detected after irregular examinations had less favorable histopathological tumor characteristics compared to cancers detected after regular examinations. Women should be encouraged to attend screening when invited to avoid delays in diagnosis.
abstract_id: PUBMED:31744058
Predictors and Trend in Attendance for Breast Cancer Screening in Lithuania, 2006-2014. In Lithuania, a Nationwide Breast Cancer (BC) Screening Program was launched in 2005, offering mammography for women aged 50 to 69 years, every other year. This study aimed to determine the trend in attendance for mammography screening during 2006-2014 and to identify the factors that are predictive of participation in it. The study sample consisted of 1941 women aged 50-64 years, who participated in five cross-sectional biennial postal surveys of Lithuanian Health Behavior Monitoring, carried out in independent national random samples. Attendance for screening was identified if women reported having had a mammogram within the last two years. The proportion of women attending the screening increased continuously from 20.0% in 2006 to 65.8% in 2014. Attendance for BC screening was associated with participation in cervical cancer screening. A higher level of education, living in a city, frequent contact with a doctor, and healthy behaviors (fresh-vegetable consumption, physical activity, and absence of alcohol abuse) were associated with higher participation rates in BC screening. To increase BC screening uptake and to reduce inequalities in attendance, new strategies for the organized BC screening program, such as systematic personal invitations, are required in Lithuania.
Answer: Caregiving responsibilities have been examined in relation to attendance at breast screening, but the findings suggest that caregiving itself does not directly explain non-attendance. A study in Northern Ireland found that while caregivers providing more than 50 hours of care per week tended to be more deprived than non-caregivers, caregiving responsibilities did not account for the socio-economic gradients in breast screening attendance. Deprived women were less likely to attend breast screening, but this was not explained by caregiving responsibilities, as caregivers were as likely as non-caregivers to attend (PUBMED:21129196).
In contrast, a secondary analysis of six qualitative studies found that caregiving responsibilities could act as a barrier to bone health management, which could imply that caregiving might also impact attendance at breast screening, although this was not the primary focus of the study (PUBMED:32222788).
Other factors, such as socio-demographic characteristics, lifestyle, health factors, and screening behavior other than mammography, did not differ significantly between regions with mainly opportunistic and organized screening in Switzerland and did not explain the large differences in attendance between regions (PUBMED:27663642).
In Southern Italy, factors such as employment status, health satisfaction, family history of breast cancer, and recent physician visits were associated with higher likelihoods of having undergone mammography within an organized screening program (PUBMED:33808101).
Overall, while caregiving responsibilities may impact an individual's capacity to manage their health, including attending breast screening, the research suggests that other factors, such as socio-economic status, health beliefs, and access to healthcare, play a more significant role in explaining non-attendance at breast screening (PUBMED:21129196; PUBMED:27663642; PUBMED:33808101). |
Instruction: Can the effect of transepithelial corneal collagen cross-linking be improved by increasing the duration of topical riboflavin application?
Abstracts:
abstract_id: PUBMED:24874297
Can the effect of transepithelial corneal collagen cross-linking be improved by increasing the duration of topical riboflavin application? An in vivo confocal microscopy study. Objective: To evaluate the effect of transepithelial corneal collagen cross-linking (CXL) with prolonged riboflavin application by in vivo confocal microscopy and to compare this effect with that of standard CXL with complete epithelial debridement.
Methods: In eyes with progressive keratoconus, the CXL procedure was performed either with the standard technique or with the transepithelial technique after prolonged riboflavin drop application for 2 hr. Patients were evaluated with in vivo confocal microscopic examination preoperatively and at postoperative months 1 and 6.
Results: The depth of the CXL effect was similar in both groups (i.e., 380.86 ± 103.23 μm in the standard CXL group and 342.2 ± 68.6 μm in the transepithelial CXL group) (P=0.4). The endothelial cell counts and morphological parameters (i.e., pleomorphism and polymegathism) were not significantly affected in either group (P>0.05 for all). In the standard CXL group, in vivo confocal microscopy revealed anterior stromal acellular hyperreflective honeycomb edema with posteriorly gradually decreasing reflectivity and increasing number of keratocytes and some sheets of longitudinally aligned filamentary deposits. The keratocytes were seen to repopulate in the posterior-to-anterior direction. In the transepithelial CXL group, although the depth of the CXL effect was similar, less pronounced keratocyte damage, extracellular matrix hyperreflectivity, and sheets of filamentary deposits at the posterior stroma were observed.
Conclusions: Transepithelial CXL with prolonged peroperative riboflavin application can achieve similar depth of effect in the stroma with less pronounced confocal microscopic changes as compared with the standard CXL with complete epithelial debridement.
abstract_id: PUBMED:23848196
Transepithelial corneal collagen cross-linking by iontophoresis of riboflavin. Purpose: To evaluate the effectiveness of transepithelial cornea impregnation with riboflavin 0.1% by iontophoresis for collagen cross-linking.
Material And Methods: Transepithelial collagen cross-linking by iontophoresis of riboflavin was performed in a series of 22 eyes of 19 patients with progressive keratoconus stage I-II of the Amsler classification. The riboflavin solution was administered by iontophoresis for 10 min in total, after which standard surface UVA irradiation (370 nm, 3 mW/cm2) was performed at a 5-cm distance for 30 min.
Results: The riboflavin/UVA treatment resulted in a decrease in the average keratometry level from 46.47 ± 1.03 to 44.12 ± 1.12 D 1 year after the procedure. Corneal astigmatism decreased from 3.44 ± 0.48 to 2.95 ± 0.23 D. Uncorrected distance visual acuity improved from 0.61 ± 0.44 to 0.48 ± 0.41 (logMAR). Preoperative and postoperative endothelial cell density remained unchanged at 2765 ± 21.15 cells/mm2.
Conclusion: Transepithelial collagen cross-linking by iontophoresis might become an effective method for riboflavin impregnation of the corneal stroma reducing the duration of the procedure and being more comfortable for the patients. Further long-term studies are necessary to complete the evaluation of the efficacy and risk spectrum of the modified cross-linking technique.
abstract_id: PUBMED:28114577
Biomechanical Strengthening of the Human Cornea Induced by Nanoplatform-Based Transepithelial Riboflavin/UV-A Corneal Cross-Linking. Purpose: The purpose of this study was to investigate the biomechanical stiffening effect induced by nanoplatform-based transepithelial riboflavin/UV-A cross-linking protocol using atomic force microscopy (AFM).
Methods: Twelve eye bank donor human sclerocorneal tissues were investigated using a commercial atomic force microscope operated in force spectroscopy mode. Four specimens underwent transepithelial corneal cross-linking using a hypotonic solution of 0.1% riboflavin with biodegradable polymeric nanoparticles of 2-hydroxypropyl-β-cyclodextrin plus enhancers (trometamol and ethylenediaminetetraacetic acid) and UV-A irradiation with a 10 mW/cm2 device for 9 minutes. After treatment, the corneal epithelium was removed using the Amoils brush, and the Young's modulus of the most anterior stroma was quantified as a function of scan rate by AFM. The results were compared with those collected from four specimens that underwent conventional riboflavin/UV-A corneal cross-linking and four untreated specimens.
Results: The average Young's modulus of the most anterior stroma after the nanoplatform-based transepithelial and conventional riboflavin/UV-A corneal cross-linking treatments was 2.5 times (P < 0.001) and 1.7 times (P < 0.001) greater than untreated controls respectively. The anterior stromal stiffness was significantly different between the two corneal cross-linking procedures (P < 0.001). The indentation depth decreased after corneal cross-linking treatments, ranging from an average of 2.4 ± 0.3 μm in untreated samples to an average of 1.2 ± 0.1 μm and 1.8 ± 0.1 μm after nanoplatform-based transepithelial and conventional cross-linking, respectively.
Conclusions: The present nanotechnology-based transepithelial riboflavin/UV-A corneal cross-linking was effective to improve the biomechanical strength of the most anterior stroma of the human cornea.
abstract_id: PUBMED:27304600
Transepithelial Corneal Cross-linking Using an Enhanced Riboflavin Solution. Purpose: To assess the efficacy of a modified high concentration riboflavin solution containing benzalkonium chloride 0.01% for transepithelial corneal cross-linking (CXL).
Methods: In this prospective, interventional multicenter cohort study, 26 eyes of 26 patients with documented progressive keratoconus who underwent transepithelial CXL were included. Follow-up at 6 and 12 months postoperatively included slit-lamp examination, uncorrected and corrected distance visual acuity (logMAR), maximum keratometry (Kmax), and corneal pachymetry (corneal thinnest point) as determined by Scheimpflug imaging. Statistical analysis was performed using repeated measures analysis of variance and the Friedman test for parametric and non-parametric data, respectively. P values less than .05 were considered significant.
Results: Kmax did not change significantly at postoperative months 6 and 12. The corneal thinnest point did not change postoperatively over 12 months. Uncorrected and corrected distance visual acuity did not change postoperatively. Progression (defined by an increase in Kmax greater than 1.00 diopter) occurred in 46% of eyes at 12 months. Corneal epithelial defects were observed in 46% of the patients and marked punctate corneal epitheliopathy/loose epithelium in 23% of the patients in the immediate postoperative period. No corneal infection, sterile infiltrates, or haze were observed.
Conclusions: Transepithelial CXL with an enhanced riboflavin solution did not effectively halt progression of keratoconus. Significant epithelium damage was evident in the immediate postoperative period. [J Refract Surg. 2016;32(6):372-377.].
abstract_id: PUBMED:26011972
Clinical observation of transepithelial corneal collagen cross-linking by lontophoresis of riboflavin in treatment of keratoconus. Purpose: To evaluate the efficacy and safety of transepithelial collagen cross-linking by iontophoretic delivery of riboflavin in treatment of progressive keratoconus.
Methods: Eleven patients (15 eyes) with progressive keratoconus were enrolled. After a 0.1% riboflavin-distilled water solution was delivered via transepithelial iontophoresis for 5 min with a 1 mA current, ultraviolet radiation (370 nm, 3 mW/cm2) was performed at a 1.5 cm distance for 30 min. The follow-up was 6 months in all eyes. The uncorrected visual acuity, corrected visual acuity, endothelial cell count, corneal thickness, intraocular pressure, corneal curvature, corneal topography, OCT and corneal opacity before and 6 months after surgery were analyzed.
Results: At 6 months postoperatively, mean uncorrected visual acuity and corrected visual acuity changed from 0.36 to 0.30 and from 0.42 to 0.57 without statistical significance. The mean value of each index of corneal curvature declined without statistical significance. The Kmax value decreased from 60.91 to 59.91, and the astigmatism declined from 3.86 to 3.19. Central corneal thickness decreased from 460.93 μm to 455.40 μm, and thinnest corneal thickness declined from 450.87 μm to 440.60 μm with no statistical significance. Intraocular pressure was significantly elevated from 10.85 mmHg to 12.62 mmHg. Endothelial cell count did not change significantly. No corneal haze occurred. Mean depth of the corneal demarcation line was 288.46 μm at 1 month postoperatively.
Conclusion: Transepithelial corneal collagen cross-linking by iontophoresis is effective and safe in the treatment of progressive keratoconus, and yields stable clinical outcomes during 6-month follow up. However, long-term follow up is urgently required.
abstract_id: PUBMED:25402571
Impact of corneal cross-linking on topical drug penetration in humans. Purpose: To analyse the influence of corneal cross-linking (CXL) with ultraviolet-A (UV-A) and riboflavin on drug permeability in human subjects.
Methods: Keratoconus patients (n = 23; mean age 26.9 ± 5.8 years) undergoing a standard CXL procedure with UV-A (5.4 J/cm2, 30 min) and riboflavin in one eye were included in the study. The pupillary diameter, measured before and every 3 min for 30 min after the topical application of one drop of 2% pilocarpine, was used as an indirect measure of the corneal permeability. The pupillary diameter was measured with an infrared pupillometer device before (baseline) and 4 months after CXL.
Results: Prior to pilocarpine application, no significant difference in the pupillary diameter was detected before CXL and 4 months later. The mean decrease in the pupillary diameter after the application of pilocarpine was similar at baseline and the 4-month follow-up visit: mean decreases of 3.9 and 3.7 mm were observed 30 min after pilocarpine application, respectively (p > 0.05).
Conclusions: No significant influence of CXL on the corneal penetration of topically applied pilocarpine was observed in this clinical study.
abstract_id: PUBMED:28264131
Ex Vivo Study of Transepithelial Corneal Cross-linking. Purpose: To perform in vitro assessment of different techniques of transepithelial corneal cross-linking (CXL) and to compare the results to deepithelialized CXL.
Methods: Transepithelial CXL was performed after pre-treatment with or without penetration enhancers (gum cellulose, 0.44% sodium chloride, and 0.01% benzalkonium chloride) for 15 or 60 minutes. Deepithelialized corneas underwent CXL after pretreatment with riboflavin for 15 minutes, according to the Dresden protocol. All corneas were incubated in 0.3% collagenase A solution and the time to total dissolution was measured. Corneas were also imaged with confocal microscopy to evaluate the corneal epithelium, subbasal nerve plexus, and depth of stromal keratocyte nuclei as a means of measuring the depth of collagen CXL.
Results: Deepithelialized CXL corneas with 15 minutes of pretreatment dissolved after 15.4 ± 3.1 hours, significantly longer (P = .001) than deepithelialized untreated corneas (8.5 ± 0.6 hours). Transepithelial CXL corneas with 15 minutes of pretreatment with or without penetration enhancers dissolved after 8.3 ± 2.1 and 7.4 ± 1.6 hours, respectively. A longer pretreatment of 60 minutes with penetration enhancers resulted in greater resistance to degradation of the transepithelial CXL corneas (14.6 ± 2.2 hours), which was similar to deepithelialized CXL corneas. The results of the biological assay correlated well with the imaging results obtained by confocal microscopy.
Conclusions: Corneas treated by transepithelial CXL with an extended pretreatment time of 60 minutes and penetration enhancers exhibited similar characteristics as corneas treated by the deepithelialized CXL approach. By confocal imaging, the transepithelial approach with extended pretreatment time demonstrated evidence of epithelial damage, which may have improved the treatment effect of this group. [J Refract Surg. 2017;33(3):171-177.].
abstract_id: PUBMED:32879776
Topical Corneal Cross-Linking Solution Delivered Via Corneal Reservoir in Dutch-Belted Rabbits. Purpose: A topical corneal cross-linking solution that can be used as an adjunct or replacement to standard photochemical cross-linking (UV-riboflavin) methods remain an attractive possibility. Optimal concentration and delivery method for such topical corneal stabilization in the living rabbit eye were developed.
Methods: A series of experiments were carried out using Dutch-belted rabbits (3 months old, weighing 1.0-1.5 kg) and topical cross-linking solutions of sodium hydroxymethylglycinate (10-250 mM) delivered via corneal reservoir. The application regimen included a one-time 30-minute application (10-40 mM sodium hydroxymethylglycinate) as well as a once-per-week 5-minute application (250 mM sodium hydroxymethylglycinate) for 7 weeks. Animals were evaluated serially for changes in intraocular pressure (IOP), pachymetry, epithelial integrity, and endothelial cell counts. Keratocyte changes were identified using intravital laser scanning confocal microscopy. Post mortem efficacy was evaluated by mechanical inflation testing.
Results: Overall, there were very few differences observed between treated right eyes and control left eyes with respect to intraocular pressure, pachymetry, and endothelial cell counts, although the 30-minute cross-linking techniques did cause transient increases in thickness that resolved within 7 days. Epithelial damage was noted in all of the 30-minute applications and fully resolved within 72 hours. Keratocyte changes were significant, showing a wound healing pattern similar to that after riboflavin UVA photochemical cross-linking in rabbits and humans. Surprisingly, post mortem inflation testing showed that the lower concentration of 20 mM delivered over 30 minutes produced the most profound stiffening/strengthening effect.
Conclusions: Topical cross-linking conditions that are safe and can increase corneal stiffness/strength in the living rabbit eye have been identified.
Translational Relevance: A topical corneal cross-linking solution delivered via corneal reservoir is shown to be both safe and effective at increasing tissue strength in living rabbit eyes and could now be tested in patients suffering from keratoconus and other conditions marked by corneal tissue weakness.
abstract_id: PUBMED:36167218
Repeated application of riboflavin during corneal cross-linking does not improve the biomechanical stiffening effect ex vivo. Purpose: To evaluate whether repeated application of riboflavin during corneal cross-linking (CXL) has an impact on the corneal biomechanical strength in ex-vivo porcine corneas.
Design: Laboratory investigation.
Methods: Sixty-six porcine corneas with intact epithelium were divided into three groups and analyzed. All corneas were pre-soaked with an iso-osmolar solution of 0.1% riboflavin in a phosphate-buffered saline (PBS) solution ("riboflavin solution"). Then, the corneas in Groups 1 and 2 were irradiated with a standard epi-off CXL (S-CXL) UV-A irradiation protocol (3 mW/cm2 for 30 min), while the corneas in Group 3 were not irradiated and served as controls. During irradiation, Group 1 (CXL-PBS-Ribo) received repeated riboflavin solution application while corneas in Group 2 (CXL-PBS) received only repeated iso-osmolar PBS solution. Immediately after the procedure, 5-mm wide corneal strips were prepared, and the elastic modulus was calculated to characterize biomechanical properties.
Results: Significant differences in stress-strain extensiometry were found between two cross-linked groups with control group (P = 0.005 and 0.002, respectively). No significant difference was observed in the normalized stiffening effect between Groups 1 and 2 (P = 0.715).
Conclusions: The repeated application of riboflavin solution during UV-A irradiation does not affect the corneal biomechanical properties achieved with standard epi-off CXL. Repeated riboflavin application during irradiation may therefore be omitted without altering the biomechanical stiffening induced by the procedure.
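The stress-strain extensiometry used in the study above reduces to simple arithmetic: engineering stress is force divided by cross-sectional area, engineering strain is elongation divided by initial length, and the elastic (Young's) modulus is the slope of the stress-strain curve. The sketch below illustrates this on hypothetical readings; the specimen geometry and force values are assumptions chosen only to mirror a 5-mm-wide corneal strip test.

```python
# Sketch of deriving an elastic (Young's) modulus from strip extensiometry data.
# Geometry and force-displacement values are hypothetical illustrations.
import numpy as np

width_m = 5e-3            # strip width: 5 mm, as in the abstract above
thickness_m = 0.9e-3      # assumed porcine corneal thickness: ~0.9 mm
length_m = 10e-3          # assumed gauge length: 10 mm
area_m2 = width_m * thickness_m

# Hypothetical force (N) and elongation (m) readings from the extensiometer
force_n = np.array([0.00, 0.05, 0.12, 0.22, 0.35, 0.52])
elongation_m = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0]) * 1e-3

stress_pa = force_n / area_m2        # engineering stress
strain = elongation_m / length_m     # engineering strain

# Elastic modulus = slope of a linear fit over the (quasi-linear) strain range
modulus_pa, _intercept = np.polyfit(strain, stress_pa, 1)
print(f"Elastic modulus ~ {modulus_pa / 1e6:.2f} MPa")
```

Comparing this slope between cross-linked and control strips, as the study does, quantifies the stiffening effect of the treatment.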
abstract_id: PUBMED:31099830
Riboflavin Concentrations at the Endothelium During Corneal Cross-Linking in Humans. Purpose: To determine the riboflavin concentration in the posterior corneal stroma, Descemet's membrane, and endothelium prior to UV irradiation in corneal cross-linking (CXL) in humans.
Methods: Five human deepithelialized cadaver corneas were mounted into artificial anterior chambers. After the establishment of stable physiological hydration, two-photon imaging with a certified multiphoton tomograph was used to determine fluorescence intensity and second harmonic generation signals from collagen throughout each cornea by optical sectioning, with a step size of 2.5 μm. Afterward, 0.1% riboflavin solution was applied to the anterior corneal surface, similar to the standard CXL protocol. To determine the absolute riboflavin concentration immediately before UV irradiation, corneas were measured by two-photon imaging just at the end of the riboflavin imbibition and after riboflavin saturation.
Results: The topical application of 0.1% riboflavin results in a riboflavin concentration that decreases to 0.035% in the posterior stroma. Inside Descemet's membrane and endothelium, the concentration drops further to only approximately 0.015% at the endothelial level. Local riboflavin distribution indicates a predominantly paracellular passive diffusion of riboflavin into the anterior chamber.
Conclusion: The experimentally determined riboflavin concentration of 0.015% at the endothelium shows a substantial discrepancy of a factor of 1.7 to the previously theoretically calculated 0.025%. A lower riboflavin concentration at the endothelium may enable higher radiant exposures and further improve the efficacy of CXL.
Answer: Yes, the effect of transepithelial corneal collagen cross-linking (CXL) can be improved by increasing the duration of topical riboflavin application. A study found that transepithelial CXL with prolonged peroperative riboflavin application can achieve a similar depth of effect in the stroma with less pronounced confocal microscopic changes compared to the standard CXL with complete epithelial debridement (PUBMED:24874297). Additionally, another study demonstrated that corneas treated by transepithelial CXL with an extended pretreatment time of 60 minutes and penetration enhancers exhibited similar characteristics as corneas treated by the deepithelialized CXL approach (PUBMED:28264131). This suggests that increasing the duration of riboflavin application, especially when combined with penetration enhancers, can improve the effectiveness of transepithelial CXL. |
Instruction: Is it getting better?
Abstracts:
abstract_id: PUBMED:35080002
Getting Lost in People With Dementia: A Scoping Review Background: Many people with dementia suffer from getting lost, which not only impacts their daily lives but also affects their caregivers and the general public. The concept of getting lost in dementia has not been clarified in the literature.
Purpose: This scoping review was designed to provide a deeper understanding of the overall phenomenon of getting lost in people with dementia, with the results intended to provide caregivers with more complete information and to inform research and practice related to getting lost in dementia.
Methods: A systematic review method was used, and articles were retrieved from electronic databases including PubMed, Embase, Airiti Library, and the Cochrane Library, as well as from gray literature. Specific keywords, MeSH terms, and Emtree terms were used to search for articles on dementia and getting lost. A total of 10,523 articles published from 2011-2020 that matched the search criteria were extracted. After screening the topics and removing duplicates, 64 articles were selected for further analysis. These articles were classified and integrated based on the six-step literature review method proposed by Arksey and O'Malley.
Results: The key findings of the review included: (1) The concept of getting lost in dementia is diverse and inseparable from wandering; (2) More than half of the assessment tools related to getting lost in dementia include the concept of wandering; (3) The factors identified as affecting getting lost in dementia include the patient's personal traits, disease factors, care factors, and environmental factors; (4) Getting lost in dementia negatively affects patients as well as their caregivers and the general public; (5) Most of the articles in this review were quantitative studies and were conducted in Western countries.
Conclusions / Implications For Practice: The scoping review approach may assist care providers to fully understand the phenomenon of getting lost in dementia, clarify its causes and consequences, and identify the limitations in the literature. The findings may be referenced in the creation of healthcare policies promoting related preventive measures and care plans as well as used to guide future academic research.
abstract_id: PUBMED:23919046
Getting to zero: Possibility or propaganda? The world is now in the fourth decade of a pandemic that has united all nations more than any other calamity or policy. The numbers relating to HIV are falling consistently. Unfortunately, the funding is also decreasing. In the current uncertain economic environment, the Joint United Nations Programme on HIV and AIDS (UNAIDS) has set a very ambitious target of reducing HIV to zero by 2015. There are strategies that are good and cost-effective and, if used appropriately, will give remarkable results. No new innovations related to HIV have recently been discovered. More molecular-level studies are needed, in addition to strengthening the existing strategies. We need money for all these activities, and it should not stop coming. The paper reviews the success of the HIV program in India and also foresees the challenges lying ahead of us in "getting to zero."
abstract_id: PUBMED:32848482
Time Perspectives and Delay of Gratification - The Role of Psychological Distance Toward the Future and Perceived Possibility of Getting a Future Reward. Purpose: This study investigated how an individual's time perspective of the present and the future affects the delay of gratification, using construal level theory. In addition, the mechanisms through which time perspective influences the delay of gratification were examined via the mediating roles of psychological distance and the perceived possibility of getting a future reward.
Participants And Methods: One hundred twenty university students completed the Korean version of the Swedish Zimbardo Time Perspective Inventory (S-ZTPI) and performed a Temporal Discounting task to aid in the evaluation of their ability to delay gratification. Their psychological distance to the future and perceived possibility of getting a future reward were measured using the visual analogue scale.
Results: The results showed that as the Present-Hedonistic and Future-Negative perspectives (from among the six time perspectives) increased, the ability to delay gratification decreased. On the other hand, as the Future-Positive time perspective increased, the ability to delay gratification increased. Only the psychological distance for 9 months was associated with time perspective, and the mediation effect was not significant. The Present-Hedonistic time perspective negatively predicted the perceived possibility of getting a future reward and the delay of gratification. The perceived possibility of getting a future reward fully mediated the relation between the Future-Negative time perspective and the delay of gratification.
Conclusion: These findings suggest that problems involving the delay of gratification (such as smoking, addiction, and binge eating behavior) are more likely to occur in people who have high Present-Hedonistic and Future-Negative time perspectives, because these time perspectives lead to a lower perceived possibility of getting a future reward.
abstract_id: PUBMED:35464564
Implementing the "Getting It Right First Time" (GIRFT) Report Recommendations: The Results of Introducing a Shoulder and Elbow Multidisciplinary Team. Objective In this study, we aimed to analyse the impact of implementing the "Getting It Right First Time" (GIRFT) recommendations in our shoulder and elbow unit, which included the introduction of a shoulder and elbow multidisciplinary team (MDT) meeting for all patients being considered for surgery. Methods A retrospective patient case-note review was undertaken to assess the impact of replacing the pre-admission clinic with an MDT meeting. We analysed how many of the proposed management plans were changed as a result of this new MDT, as well as the associated cost savings. Results Of note, 118/148 patients who attended the MDT had a provisional operative plan; 24/118 (20%) had their plan changed to non-operative management, 13/118 (11%) had a change of operation, and 6/118 (5%) were recommended further investigations or tertiary referral. This reduced theatre time required by 47 hours, an estimated saving of over £51,000. Significantly, 20/24 patients who had their plan changed from operative to non-operative still had not had an operation after a median follow-up of 39 months. Conclusion The introduction of a shoulder and elbow MDT for all patients being considered for an operation has improved decision-making, allowed optimisation of non-operative management, and helped prevent patients from having unnecessary operations. This has led to a better patient experience and a more efficient service delivery, which is associated with cost savings.
abstract_id: PUBMED:38025968
Introduction. Making work better. From the premise that better work makes for better societies, the challenge, taken up in the introduction to this special issue of Transfer: European Review of Labour and Research, is to explore what makes work better, or worse, and how it can be improved. As a wide variety of experiments shape our economies and communities for the future, a key challenge is to engage in shared learning about these processes in order to stimulate a dialogue between the aspiration for better work and the conditions likely to hinder or facilitate making work better. It is an invitation to move from narrow conceptions of job quality to a broader lens of how world-of-work actors strategise, innovate and incorporate uncertainty into their search for sustainable solutions for better work. Key themes include: why work needs to be better (but is often worse); why better work makes for better societies; how work can be made better; the role of institutions in achieving better work; and, finally, how union strategies are essential to processes of experimentation to make work better.
abstract_id: PUBMED:27713724
Paranoia as an Antecedent and Consequence of Getting Ahead in Organizations: Time-Lagged Effects Between Paranoid Cognitions, Self-Monitoring, and Changes in Span of Control. A 6-month, time-lagged online survey among 441 employees in diverse industries was conducted to investigate the role paranoia plays as an antecedent and as a consequence of advancement in organizations. The background of the study is the argument that it requires active social sense-making and behavioral adaptability to advance in organizations. The present paper thus explores the extent to which employees' paranoid cognitions-representative of a heightened albeit suspicious sense-making and behavioral adaptability-link with their advancement in organizations (operationalized as changes in afforded span of control), both as an antecedent and an outcome. Following the strategy to illuminate the process by interaction analysis, both conditions (antecedent and outcome) are examined in interaction with employees' self-monitoring, which is considered representative of a heightened but healthy sense-making and behavioral adaptability. Results support the expected interference interaction between paranoid cognitions and self-monitoring in that each can to some degree compensate for the other in explaining employees' organizational advancement. Reversely, changes in span of control also affected paranoid cognitions. In particular, low self-monitors, i.e., those low in adaptive sense-making, reacted with heightened paranoid cognitions when demoted. In effect, the present study is thus the first to empirically support that paranoid cognitions can be a consequence but also a prerequisite for getting ahead in organizations. Practical advice should, however, be suspended until it is better understood whether and under what circumstances paranoia may relate not only to personally getting ahead but also to an increased effectiveness for the benefit of the organization.
abstract_id: PUBMED:33432254
"Getting out from Intimate Partner Violence: Dynamics and Processes. A Qualitative Analysis of Female and Male Victims' Narratives". In the 1970s intimate partner violence became recognized as a major societal problem in Europe. The study of the processes that enable victims to emerge from this violence is still topical. Even more so when it concerns male victims, who remain an under-studied population. This article examines the processes involved in bringing an end to intimate partner violence, including female and male victims. This qualitative study examines the intra- and inter-subjective changes underlying the processes of ending IPV in victims by using a narrative approach. Semi-structured interviews including the use of qualitative life calendars were conducted with 21 victims, 18 women and 3 men. The thematic analysis highlighted eight stages of a process of getting out from intimate partner violence. From the change in perception to the post-separation, victims' trajectories contain similar stages nuanced by individual and environmental specificities for both female and male. Getting out from intimate partner violence involves a sequence of changes in the perception of self, partner, couple and violence that allows for cognitive and relational transitions.
abstract_id: PUBMED:34218461
Is redo mitral mortality getting better or getting worse? Zubarevich et al. present the 30-day and 1-year outcomes of redo mitral valve replacement in 58 high-risk patients. The authors conclude that careful patient selection and risk stratification provide acceptable surgical results in this cohort. This series reminds us that increased use of bioprostheses, increased use of mitral replacement instead of repair, and an aging population drive the volume of high-risk redo mitral replacement. It remains to be seen whether redo mitral mortality is getting better or worse, but the risk and the patients will be with us for some time.
abstract_id: PUBMED:32150909
Difficulties in Getting to Sleep and their Association with Emotional and Behavioural Problems in Adolescents: Does the Sleeping Duration Influence this Association? Sleep problems are common in adolescence with a negative impact on the mental health and functioning of adolescents. However, the roles of different sleep problems in relation to emotional and behavioural problems (EBPs), classified according to the 10th version of the International Classification of Diseases as emotional, conduct, hyperactivity and social functioning disorders, are not clear. The first aim of the study was to investigate the association between difficulties in getting to sleep and EBPs in adolescents. The second aim was to explore the role of sleep duration in this association. We used data from the Health Behaviour in School-aged Children (HBSC) study conducted in 2018 in Slovakia. Presented are results for specific age groups of 13-year-old (N = 1909) and 15-year-old (N = 1293) adolescents. Subjective measures of sleep variables were used. Binary logistic regression models adjusted for age and gender were used to assess associations between difficulties in getting to sleep, sleep duration and EBPs measured using the Strengths and Difficulties Questionnaire. Modification of the association between difficulties in getting to sleep and EBPs by sleep duration was also explored. We found that difficulties in getting to sleep at least once a week as well as insufficient sleep (less than 8 h) increased the probability of EBPs. Interactions of sleep duration with difficulties in getting to sleep on EBPs were found to be non-significant. The results suggest that caregivers and clinicians should screen and intervene for both sleep quality and quantity problems in adolescents as they might indicate and promote EBPs.
abstract_id: PUBMED:33580469
"Getting better at getting better": How Sharing Mental Health Stories Can Shape Young People's Wellbeing. Sharing lived-experience of mental ill-health is increasingly recognised as a promising youth mental health intervention. batyr is a preventative mental health organization that trains young people to share their story of mental ill-health safely through a speaker development program. This qualitative study aimed to understand how the role of batyr speakers shapes individual wellbeing, help-seeking behaviour, and stigma as well as how young people talk about mental health within their social circles. Data was collected using in-depth semi-structured interviews with 18 batyr speakers (age 18-35 years). Thematic analysis revealed five themes: Getting better at getting better, Growing towards self-acceptance, Breaking the wall by talking about mental health, Increasing connectedness and Reaching out for support. The participants reported overall improved wellbeing, as a result of increased confidence, greater self-awareness, and an increased ability to talk safely about their mental illness. Findings suggested that the program led to a reduction of self-stigma and stigma.
Answer: Based on the provided abstracts, the question "Is it getting better?" can be interpreted in various contexts related to different subjects. Here are some insights:
1. In the context of dementia and the phenomenon of getting lost, the scoping review aimed to provide a deeper understanding and assist care providers in fully understanding the phenomenon, its causes, and consequences. The review may help in creating healthcare policies and care plans to prevent and manage getting lost in dementia, which implies an effort towards improvement (PUBMED:35080002).
2. Regarding HIV, the paper reviewed the success of the HIV program in India and the challenges ahead in "getting to zero." While the numbers related to HIV are falling, funding is decreasing, and no new innovations have been discovered recently. The ambitious target of reducing HIV to zero by 2015 by UNAIDS suggests a goal of improvement, but the paper also highlights the need for more resources and strategies (PUBMED:23919046).
3. In the study of time perspectives and delay of gratification, the findings suggest that certain time perspectives (Present-Hedonistic and Future-Negative) are associated with a lower perceived possibility of getting a future reward and a decreased ability to delay gratification. This implies that for individuals with these time perspectives, it may not be getting better in terms of delaying gratification (PUBMED:32848482).
4. The implementation of the "Getting It Right First Time" (GIRFT) recommendations in a shoulder and elbow unit, including the introduction of a multidisciplinary team (MDT), led to improved decision-making, optimization of non-operative management, prevention of unnecessary operations, and cost savings. This indicates that in this context, it is getting better (PUBMED:35464564).
5. The special issue of Transfer: European Review of Labour and Research discusses making work better and the various factors that can improve or worsen work conditions. It suggests that there is an aspiration and potential for improvement in the quality of work (PUBMED:38025968).
6. The study on paranoia in organizations found that paranoid cognitions can be both an antecedent and a consequence of getting ahead in organizations. The implications for whether it is getting better in terms of organizational advancement are not clear and require further understanding (PUBMED:27713724).
7. In the context of intimate partner violence (IPV), the qualitative study highlighted eight stages in the process of getting out from the violence, involving changes in the perception of self, partner, couple and violence that allow cognitive and relational transitions; for victims who move through these stages, circumstances can get better (PUBMED:33432254).
8. For redo mitral valve replacement, the commentary notes that careful patient selection and risk stratification provide acceptable surgical results in high-risk patients, but it explicitly remains to be seen whether redo mitral mortality is getting better or worse (PUBMED:34218461).
9. Concerning adolescent sleep, difficulties in getting to sleep at least once a week and insufficient sleep (less than 8 hours) both increased the probability of emotional and behavioural problems, suggesting that without screening and intervention for both sleep quality and quantity, things are unlikely to get better for affected adolescents (PUBMED:32150909).
10. Finally, the study of the batyr speaker development program found that sharing lived experience of mental ill-health improved wellbeing, confidence, self-awareness and connectedness while reducing stigma, indicating that young people can get "better at getting better" (PUBMED:33580469).
In summary, whether things are getting better depends on the context: structured interventions and programs show measurable improvement, while other areas still face persistent challenges.
Instruction: Can pulsatile cardiopulmonary bypass prevent perioperative renal dysfunction during myocardial revascularization in elderly patients?
Abstracts:
abstract_id: PUBMED:19287182
Can pulsatile cardiopulmonary bypass prevent perioperative renal dysfunction during myocardial revascularization in elderly patients? Backgrounds/aims: We recently demonstrated that pulsatile cardiopulmonary bypass (CPB) versus standard linear CPB is associated with better perioperative renal function. Since older subjects have a higher risk of acute renal failure, we have extended our study to evaluate the specific impact of pulsatile CPB on the perioperative renal function in elderly patients.
Methods: We enrolled 50 patients with normal preoperative renal function: they were stratified by age (65-75 vs. 50-64 years) and randomized to nonpulsatile (group A) or pulsatile CPB (group B). Twenty-seven patients aged ≥50 and <65 years were randomized to group A (n = 12) or to group B (n = 15), and 23 patients aged ≥65 and ≤75 years to group A (n = 13) or to group B (n = 10). Glomerular filtration rate (GFR), daily diuresis, lactatemia and other parameters were measured during the pre- and perioperative period.
Results: The percent perioperative decrease in GFR was greater in group A than in group B (p < 0.001), without differences between older and younger patients. Consistent with this, perioperative plasma lactate levels were higher in group A than in group B (p < 0.001), both in older and younger patients. No difference was observed for 24 h urine output and blood urea nitrogen.
Conclusions: Pulsatile CPB preserves renal function better than standard CPB even in patients older than 65. CPB could be adopted as the procedure of choice in this subgroup of patients.
abstract_id: PUBMED:12538132
Aiming towards complete myocardial revascularization without cardiopulmonary bypass: a systematic approach. Background: Coronary artery bypass grafting (CABG) has become the surgical procedure of choice for symptomatic coronary artery disease. However, the use of traditional cardiopulmonary bypass (CPB) techniques represents an invasive therapeutic system with immediate and long-term complications. Off-pump myocardial revascularization has emerged as an attractive alternative that offers improvements in early outcomes and avoidance of the recognized adverse effects of CPB. A major criticism of this procedure has been a perceived inability to accomplish complete revascularization of the heart. In this report, we describe a surgical technique we have used in a series of patients that has allowed complete myocardial revascularization.
Methods: Combinations of intraoperative techniques were employed, including (1) right pleural-pericardial window, (2) deep pericardial sutures, (3) right heart displacement, (4) intermittent hypotensive anesthesia, (5) multimodality brain monitoring, and (6) coronary shunting. Following surgery, coronary artery grafts performed were statistically compared to each coronary artery's vascular territory to show that all territories were equally treatable with the combination of techniques.
Results: There were 734 coronary artery grafts performed in 200 consecutive patients (mean of 3.7 grafts/patient), and 533 compromised vascular territories were revascularized (mean of 1.38 grafts for each diseased vessel). Eight patients had one-vessel disease, 51 had two-vessel disease and 141 had three-vessel disease. The left anterior descending coronary artery (LAD) was compromised in 192 patients, the circumflex in 171 and the right coronary artery in 170 patients. The overall 30-day estimated hospital mortality was 5.5%; the observed was 4.0% (8 of 200). Postoperative complications included pulmonary insufficiency in 6 patients (3.0%), reoperation for bleeding in 3 patients (1.5%), cerebrovascular accident in 3 patients (1.5%), renal dysfunction in 2 patients (1.0%), perioperative myocardial infarction in 8 patients (4.0%), cardiac arrest in 2 patients (1.0%), low cardiac output in 5 patients (2.5%), and deep sternal infection in 2 patients (1.0%).
Conclusions: Use of intermittent hypotensive anesthesia in conjunction with multimodality brain monitoring, right heart displacement, deep pericardial sutures, coronary shunting and epicardial compression stabilization facilitates complete revascularization of the myocardium.
abstract_id: PUBMED:12803260
Pattern of renal dysfunction associated with myocardial revascularization surgery and cardiopulmonary bypass. Background And Objective: A variable incidence rate of renal dysfunction (3-35%) after cardiac surgery with cardiopulmonary bypass has been reported. The aim was to define the typical pattern of renal dysfunction that follows coronary surgery with cardiopulmonary bypass using albumin, immunoglobulin (IgG), alpha1-microglobulin and beta-glucosaminidase (beta-NAG) excretion as indicators.
Methods: Twenty patients with preoperative normal renal function, defined by plasma creatinine, creatinine clearance, fractional excretion of sodium and renal excretion of proteins, undergoing elective myocardial revascularization surgery with cardiopulmonary bypass, were prospectively studied. Variables recorded were demographic and haemodynamic variables, duration of cardiopulmonary bypass and aortic clamping, intra- and postoperative urine output, plasma creatinine concentration, creatinine clearance and excretion of sodium, albumin, IgG, beta-glucosaminidase (beta-NAG), and alpha1-microglobulin. Measurements were made preoperatively, immediately before and then during and immediately after cardiopulmonary bypass, and again at 1, 24, 72 h, 7 and 40 days following surgery.
Results: Albumin and IgG excretion rose significantly during cardiopulmonary bypass (P < 0.05), remaining at these levels at 24 h postoperatively. An increase in alpha1-microglobulin and beta-NAG concentrations was observed during cardiopulmonary bypass (P < 0.05); these levels were maintained until the seventh postoperative day and remained elevated in some patients at the 40th postoperative day. This correlated with preoperative diabetes mellitus (P < 0.001), low cardiac output after cardiopulmonary bypass (P < 0.001) and the duration of stay in the intensive care unit (P < 0.001).
Conclusions: The pattern of renal dysfunction after cardiopulmonary bypass for myocardial revascularization is characterized by temporary dysfunction at both the glomerular and tubular levels, with onset within 24 h of surgery and lasting between 24 h and 40 days, respectively, following surgery.
abstract_id: PUBMED:9454527
Renal dysfunction after myocardial revascularization: risk factors, adverse outcomes, and hospital resource utilization. The Multicenter Study of Perioperative Ischemia Research Group. Background: Acute changes in renal function after elective coronary bypass surgery are incompletely characterized and represent a challenging clinical problem.
Objective: To determine the incidence and characteristics of postoperative renal dysfunction and failure, perioperative predictors of dysfunction, and the effect of renal dysfunction and failure on in-hospital resource utilization and patient disposition after discharge.
Design: Prospective, observational, multicenter study.
Setting: 24 university hospitals.
Patients: 2222 patients having myocardial revascularization with or without concurrent valvular surgery.
Measurements: Prospective histories, physical examinations, and electrocardiographic and laboratory studies. The main outcome measure was renal dysfunction (defined as a postoperative serum creatinine level ≥177 μmol/L with a preoperative-to-postoperative increase ≥62 μmol/L).
Results: 171 patients (7.7%) had postoperative renal dysfunction; 30 of these (1.4% overall) had oliguric renal failure that required dialysis. In-hospital mortality, length of stay in the intensive care unit, and hospitalization were significantly increased in patients who had renal failure and those who had renal dysfunction compared with those who had neither (mortality: 63%, 19%, and 0.9%; intensive care unit stay: 14.9 days, 6.5 days, and 3.1 days; hospitalization: 28.8 days, 18.2 days, and 10.6 days, respectively). Patients with renal dysfunction were three times as likely to be discharged to an extended-care facility. Multivariable analysis identified five independent preoperative predictors of renal dysfunction: age 70 to 79 years (relative risk [RR], 1.6 [95% CI, 1.1 to 2.3]) or age 80 to 95 years (RR, 3.5 [CI, 1.9 to 6.3]); congestive heart failure (RR, 1.8 [CI, 1.3 to 2.6]); previous myocardial revascularization (RR, 1.8 [CI, 1.2 to 2.7]); type 1 diabetes mellitus (RR, 1.8 [CI, 1.1 to 3.0]) or preoperative serum glucose levels exceeding 16.6 mmol/L (RR, 3.7 [CI, 1.7 to 7.8]); and preoperative serum creatinine levels of 124 to 177 μmol/L (RR, 2.3 [CI, 1.6 to 3.4]). Independent perioperative factors that exacerbated risk were cardiopulmonary bypass lasting 3 or more hours and three measures of ventricular dysfunction.
Conclusions: Many patients having elective myocardial revascularization develop postoperative renal dysfunction and failure, which are associated with prolonged intensive care unit and hospital stays, significant increases in mortality, and greater need for specialized long-term care. Resources should be redirected to mitigate renal injury in high-risk patients.
abstract_id: PUBMED:19229429
Risk factors in septuagenarian or elderly patients undergoing coronary artery bypass grafting and/or valve operations. Objectives: The number of septuagenarian or older patients needing heart surgery has increased throughout the world. The objective of this study is to describe the characteristics of this group of patients and to determine the risk factors for operative morbidity.
Methods: We reviewed the medical records of 783 patients who underwent heart valve surgery, myocardial revascularization or both between 2002 and 2007. The patients were divided into a 'control group' (<70 years) and a 'septuagenarian group' (70 years old or more).
Results: One hundred ninety-seven patients were at least 70 years old (mean age 74.1 ± 3.9) and 61% were male. In the control group the mean age was 52.1 ± 11.7 and 54% were male. In the septuagenarian group, the proportion of patients suffering from peripheral vascular disease (9% versus 5%, P=0.019), carotid artery obstruction (5% versus 2%, P=0.026) and unstable angina (17% versus 9%, P=0.018) was significantly higher. In both groups coronary artery bypass surgery prevailed. In the septuagenarian group 41% of the patients had at least one morbid event, versus 22% of the patients in the control group (P<0.001). Postoperative bleeding, pulmonary complications, mediastinitis, need of vasopressors, renal dysfunction and strokes were significantly more frequent in the septuagenarian group. Mortality was also higher in the septuagenarians (19% versus 8.5%, P<0.001). Logistic regression revealed that COPD (OR: 8.6), EF < 35% (OR: 7.1), non-elective operation (OR: 17.2) and cardiopulmonary bypass time >120 min (OR: 3.4) were predictive of hospital mortality in septuagenarian or older patients.
Conclusions: The hospital mortality of septuagenarian or older patients is significantly higher than that of younger patients.
abstract_id: PUBMED:25865900
A Meta-Analysis of Renal Function After Adult Cardiac Surgery With Pulsatile Perfusion. The aim of this meta-analysis was to determine whether pulsatile perfusion during cardiac surgery has a lesser effect on renal dysfunction than nonpulsatile perfusion after cardiac surgery in randomized controlled trials. MEDLINE, EMBASE, and the Cochrane Central Register of Controlled Trials were used to identify available articles published before April 25, 2014. Meta-analysis was conducted to determine the effects of pulsatile perfusion on postoperative renal functions, as determined by creatinine clearance (CrCl), serum creatinine (Cr), urinary neutrophil gelatinase-associated lipocalin (NGAL), and the incidences of acute renal insufficiency (ARI) and acute renal failure (ARF). Nine studies involving 674 patients that received pulsatile perfusion and 698 patients that received nonpulsatile perfusion during cardiopulmonary bypass (CPB) were considered in the meta-analysis. Stratified analysis was performed according to effective pulsatility or unclear pulsatility of the pulsatile perfusion method in the presence of heterogeneity. NGAL levels were not significantly different between the pulsatile and nonpulsatile groups. However, patients in the pulsatile group had a significantly higher CrCl and lower Cr levels when the analysis was restricted to studies on effective pulsatile flow (P < 0.00001, respectively). The incidence of ARI was significantly lower in the pulsatile group (P < 0.00001), but incidences of ARF were similar. In conclusion, the meta-analysis suggests that the use of pulsatile flow during CPB results in better postoperative renal function.
abstract_id: PUBMED:6612640
Pulsatile cardiopulmonary bypass for patients with renal insufficiency. Pulsatile cardiopulmonary bypass has been shown to preserve renal function and could therefore have considerable clinical value in patients undergoing cardiac surgery with preoperative renal insufficiency, by protecting them from further postoperative renal deterioration. Our three-year experience with pulsatile bypass in 29 patients with a preoperative serum creatinine concentration over 1.7 mg/100 ml (mean 2.9, range 1.8-6.1 mg/100 ml; >150 μmol/l, mean 256, range 159-539 μmol/l) supports this premise. There were no renal deaths in the perioperative period and only two patients had irreversible postoperative deterioration in renal function; one died on day 3 of low-output syndrome and the other had rapidly progressive nephrosclerosis and died of that disease one year later. Postoperative oliguria occurred in the patient with low cardiac output and in only one other. This experience contrasts with our previous experience and that reported by others with non-pulsatile bypass in patients with renal insufficiency. We suggest that pulsatile bypass should be considered for cardiac surgery in patients with preoperative renal dysfunction.
abstract_id: PUBMED:26911799
The effect of pulsatile cardiopulmonary bypass on the need for haemofiltration in patients with renal dysfunction undergoing cardiac surgery. Objectives: The aim of our study was to investigate the effects of pulsatile cardiopulmonary bypass (CPB) on renal function and the need for haemofiltration in patients with preoperative renal impairment undergoing cardiac surgery.
Methods: Clinical data were collected prospectively for patients undergoing cardiac surgery with pulsatile CPB (Group A, n=66) and compared to matched patients with standard non-pulsatile CPB (Group B, n=66). Patients included in the study had mild renal impairment and at least moderate risk from surgery as defined by logistic EuroSCORE. Emergency operations were excluded.
Results: Patients in Groups A and B had similar age (71 ± 10 versus 70 ± 10 years), sex distribution, mean preoperative renal function (creatinine clearance 63.9 ± 28 versus 67.7 ± 27.3 ml/min) and overall risk profile as predicted by the logistic EuroSCORE (8 ± 8.3 versus 11.05 ± 13.3, p=0.122). Intraoperative variables were comparable with respect to bypass and cross-clamp times (96 ± 37 minutes and 64 ± 28 minutes versus 103 ± 40 minutes and 70 ± 33 minutes in Groups A and B, respectively). A smaller proportion of patients in Group A (4.5% versus 15%, p=0.076) required haemofiltration in the postoperative period. Postoperative mortality was low in both groups (Group A 1.54% versus Group B 3.03%, p=1.00).
Conclusion: Within the limitations imposed by retrospective analyses, our study demonstrates that pulsatile CPB may confer a reno-protective effect in higher-risk patients with pre-existing mild renal dysfunction undergoing cardiac surgery.
abstract_id: PUBMED:21693564
A targeted metabolic protocol with D-ribose for off-pump coronary artery bypass procedures: a retrospective analysis. Objectives: Coronary revascularization using cardiopulmonary bypass is an effective surgical procedure for ischemic coronary artery disease. Complications associated with cardiopulmonary bypass have included cerebral vascular accidents, neurocognitive disorders, renal dysfunction, and acute systemic inflammatory responses. Within the last two decades off-pump coronary artery bypass has emerged as an approach to reduce the incidence of these complications, as well as shorten hospital stays and recovery times. Many patients with coronary artery disease have insulin resistance and altered energy metabolism, which can exacerbate around the time of coronary revascularization. D-ribose has been shown to enhance the recovery of high-energy phosphates following myocardial ischemia. We hypothesized that patient outcomes could improve using a perioperative metabolic protocol with D-ribose.
Methods: A perioperative metabolic protocol was used in 366 patients undergoing off-pump coronary artery bypass during 2004-2008. D-ribose was added in 308 of these 366 patients. Data were collected prospectively as part of the Society of Thoracic Surgeons database and retrospectively analyzed.
Results: D-ribose patients were generally similar to those who did not receive D-ribose. Among the entire group of patients, there was one death, two patients suffered strokes, and renal failure requiring dialysis occurred in two patients postoperatively. D-ribose patients enjoyed a greater improvement in cardiac index postrevascularization compared with non-D-ribose patients (37% vs. 17%, respectively, p < 0.001).
Conclusions: This metabolic protocol was associated with very low mortality and morbidity with a significant early postoperative improvement in cardiac index using D-ribose supplementation. These preliminary results support a prospective randomized trial using this protocol and D-ribose.
abstract_id: PUBMED:12974921
Beating heart ischemic mitral valve repair and coronary revascularization in patients with impaired left ventricular function. Objective: The aim of this study is to evaluate in a cohort of patients with impaired left ventricular (LV) function and ischemic mitral valve regurgitation (MVR), the effects of on-pump/beating heart versus conventional surgery in terms of postoperative mortality and morbidity and LV function improvement.
Materials And Methods: Between January 1993 and February 2001, 91 patients with LVEF between 17% and 35% and chronic ischemic MVR (grade III-IV) underwent MV repair in concomitance with coronary artery bypass grafting (CABG). Sixty-one patients (Group I) underwent cardiac surgery with cardioplegic arrest, and 30 patients (Group II) underwent beating heart combined surgery. Aortic valve insufficiency was considered a contraindication for the on-pump/beating heart procedure. Mean age in Group I was 64.4 ± 7 years and in Group II, 65 ± 6 years (p = 0.69).
Results: The in-hospital mortality in Group I was 8 (13%) patients versus 2 (7%) patients in Group II (p > 0.1). The cardiopulmonary bypass (CPB) time was significantly higher in Group I (p < 0.001). In Groups I and II, respectively (p > 0.1), 2.5 ± 1 and 2.7 ± 0.8 grafts per patient were employed. Perioperative complications were identified in 37 (60.7%) patients in Group I versus 10 (33%) patients in Group II (p = 0.025). Prolonged inotropic support of greater than 24 hours was needed in 48 (78.7%) patients (Group I) versus 15 (50%) patients (Group II) (p = 0.008). Postoperative IABP and low cardiac output incidence were significantly higher in Group I, p = 0.03 and p = 0.027, respectively. Postoperative bleeding greater than 1000 mL was identified in 24 patients (39.4%) in Group I versus 5 (16.7%) in Group II (p = 0.033). Renal dysfunction incidence was 65.6% (40 patients) in Group I versus 36.7% (11 patients) in Group II (p = 0.013). The echocardiographic examination within six postoperative months revealed a significant improvement of MV regurgitation fraction, LV function, and reduced dimensions in both groups. The postoperative RF was significantly lower in Group II patients, 12 ± 6% versus 16 ± 5.6% in Group I (p = 0.001). The 1-, 2- and 3-year actuarial survival including all deaths was 91.3%, 84.2%, and 70% in Group I and 93.3%, 87.1%, and 75% in Group II (p = ns). NYHA FC improved significantly in all patients from both groups.
Conclusion: We conclude that patients with impaired LV function and ischemic MVR may undergo combined surgery with acceptable mortality and morbidity. On-pump/beating heart MV repair simultaneous with CABG offers an acceptable postoperative outcome in selected patients.
Answer: Yes, pulsatile cardiopulmonary bypass (CPB) can prevent perioperative renal dysfunction during myocardial revascularization in elderly patients. A study demonstrated that pulsatile CPB preserves renal function better than standard CPB even in patients older than 65, suggesting that it could be adopted as the procedure of choice in this subgroup of patients (PUBMED:19287182). Additionally, a meta-analysis indicated that pulsatile perfusion during cardiac surgery has a lesser effect on renal dysfunction than nonpulsatile perfusion after cardiac surgery, with patients in the pulsatile group having significantly higher creatinine clearance and lower serum creatinine levels, as well as a significantly lower incidence of acute renal insufficiency (PUBMED:25865900). Furthermore, another study found that pulsatile CPB may confer a reno-protective effect in higher-risk patients with pre-existing mild renal dysfunction undergoing cardiac surgery (PUBMED:26911799). This is supported by earlier findings that pulsatile CPB in patients with preoperative renal insufficiency protected them from further postoperative renal deterioration (PUBMED:6612640).
Instruction: Superficial acral fibromyxoma: a clinicopathological study of new 41 cases from the U.K.: should myxoma (NOS) and fibroma (NOS) continue as part of 21st-century reporting?
Abstracts:
abstract_id: PUBMED:18764846
Superficial acral fibromyxoma: a clinicopathological study of new 41 cases from the U.K.: should myxoma (NOS) and fibroma (NOS) continue as part of 21st-century reporting? Background: Superficial acral fibromyxoma (SAF) remains poorly recognized by general pathologists and dermatopathologists, partly attributable to its relatively uncommon occurrence and recent documentation.
Objectives: To examine a series of SAF and document the U.K. experience with this new entity.
Methods: We reviewed 771 tumours reported between 1970 and 2006 in seven different U.K. hospitals and coded as myxoma, not otherwise specified (NOS), fibroma (NOS) or dermatofibroma (NOS) presenting at acral sites. Forty-one cases of SAF were studied.
Results: The patients comprised 27 men and 14 women, age range 19-91 years (mean 50, median 47), presenting with a solitary mass or nodule with a mean size of 1.92 cm. The common clinical sites were the toes (n=29) and fingers (n=11) as well as the palm (n=1), with more than 75% of cases close to or involving the nail bed. All cases presented with a painless mass except for four cases where pain was the presenting complaint. A history of trauma was reported in only two cases. Histologically, all cases presented as a proliferation of spindle-shaped and/or stellate cells with a storiform and fascicular pattern embedded in a fibromyxoid/collagenous stroma with conspicuous mast cells. Multinucleated cells were observed (n=22), as were an increased number of blood vessels in the stroma and extravasation of red blood cells (n=4). The characteristic immunophenotype was CD34+, CD99+/-, epithelial membrane antigen+ focally/-, S100-, desmin-, smooth muscle actin-, HMB45- and cytokeratin-.
Conclusions: We describe a large series of 41 cases of SAF showing that it is a distinct entity with typical clinical, histological and immunohistochemical features. Follow-up was available only in 12 patients, precluding a firm comment on recurrence. However, complete excision and follow-up review is recommended.
abstract_id: PUBMED:11244935
The clinicopathological study on 41 cases of cardiac tumors Objective: To analyze the relation between the pathological changes and the prognosis of cardiac tumors through the clinicopathological study on 41 cases of cardiac tumors.
Methods: The study was carried out using routine and special histochemical stains.
Results: 39 (95.1%) tumors were benign, including myxoma, fibroma, and rhabdomyoma, while 2 (4.9%) tumors were malignant, including neurofibrosarcoma and malignant mesothelioma. In the myxoma group, 75% of patients were female and 91.7% of tumors were located in the left atrium.
Conclusions: The results of the clinicopathological study showed that cardiac tumors are quite different from tumors at other sites: even benign ones can cause fatal hemodynamic disturbance, hence early diagnosis and early operation are necessary. The prognosis of the malignant tumors is the worst.
abstract_id: PUBMED:27825811
Superficial acral fibromyxoma: clinicopathological, immunohistochemical, and molecular study of 11 cases highlighting frequent Rb1 loss/deletions. Superficial acral fibromyxoma (SAF) is an uncommon benign dermal mesenchymal lesion of adults with predilection for acral sites, in particular the nail region. To date, less than 300 cases have been reported. SAFs consistently express CD34, but other diagnostic markers or specific genetic alterations have not been established yet. We describe 11 SAFs occurring in 7 men and 4 women aged 37 to 86years (median, 48 years). Mean size was 6mm (range, 4-20mm). Affected sites were fingers (n=5), toes (n=3), heel (n=1), calf (n=1), and unspecified digit (n=1). None of 10 patients with available follow-up (2-60months; median, 24months) developed recurrence. Histology showed relatively hypocellular vaguely lobulated nodules composed of bland-looking spindled or stellate fibroblast-like cells arranged into storiform or loose fascicles within a variably myxoid, fibromyxoid, or collagenous vascularized stroma. Immunohistochemistry showed expression of CD34 (9/10) and focal weak reactivity for epithelial membrane antigen (2/11). None of the lesions expressed protein S100 (0/11), MUC4 (0/11), or STAT6 (0/11). Loss of Rb1 immunoexpression was observed in 9 (90%) of 10 cases. All 7 cases with successful RB1 fluorescence in situ hybridization testing showed RB1 gene deletions, which was variably associated with co-loss of the corresponding 13q12 signal (monosomy at the 13q region). To our knowledge, this is the first study investigating the expression status of the tumor suppressor Rb1 in SAF by immunohistochemistry and fluorescence in situ hybridization. Our results showed frequent Rb1 deficiency as a possible driver molecular event in SAF (seen in 90% of cases) indicating relationship of SAF to the RB1-deleted tumor family.
abstract_id: PUBMED:15981933
Superficial angiomyxoma: report of four cases, including two subungual tumors. We report four cases of superficial angiomyxoma, including two cutaneous tumors and two subungual tumors. Histological analysis revealed a recently described tumor, the so-called superficial angiomyxoma. This is a lobulated, poorly circumscribed, paucicellular myxoid tumor containing numerous small blood vessels surrounded by a mixed inflammatory cell infiltrate with notable neutrophils. These tumors are positive for CD34. The differential diagnosis includes myxoid neurothecoma, myxoid neurofibroma and, for ungual tumors, superficial acral fibromyxoma.
abstract_id: PUBMED:22367301
Digital fibromyxoma (superficial acral fibromyxoma): a detailed characterization of 124 cases. Digital fibromyxoma (first described by Fetsch and colleagues as superficial acral fibromyxoma) is a distinctive soft tissue tumor with a predilection for the subungual or periungual region of the hands and feet. This report details the histologic, immunophenotypic, and clinical findings in 124 cases of digital fibromyxoma. The study group included 70 male and 54 female patients (1.3:1, M:F), ranging in age from 4 to 86 years (mean, 48 y; median, 49 y). Mean tumor size was 1.7 cm (range, 0.5 to 5 cm; median, 1.5 cm). Nearly half of the patients (41%) presented with a painful mass. Tumors arose on the hands (52%) or feet (45%), with rare tumors arising on the ankle or leg. Most tumors occurred on the digits (94% of hand tumors, 82% of foot tumors), with the majority growing in close proximity to the nail (97% on fingers, 96% on toes). Histologically, 80% of cases were poorly marginated; 70% infiltrated the dermal collagen, 27% infiltrated fat, and 3% invaded bone. In cases in which imaging studies were available, bone involvement by an erosive or lytic lesion was more frequent (9/25, 36%). All tumors were composed of spindle-shaped or stellate-shaped cells with palely eosinophilic cytoplasm and a random or loosely fascicular growth pattern. The tumor cells were separated by dense hyaline collagen alternating with myxoid stroma. Most (86%) of the tumors showed alternating areas of fibrous and myxoid stroma, 11% showed predominantly fibrous stroma, and 3% had predominantly myxoid stroma. Increased mast cells were noted in 88% of tumors. All tumors comprised cells with minimal atypia, occasionally showing scattered larger cells with so-called "degenerative change." Mitotic figures were infrequent, and all tumors lacked necrosis, pleomorphism, or neural/perineural infiltration. Multinucleate stromal cells were occasionally seen. Tumor cells were reactive for CD34 in 42/61 cases (69%), with rare tumors showing focal reactivity for EMA (3/40, 7.5%), smooth muscle actin (5/42, 12%), and desmin (1/18, 6%). All tumors were negative for S100 (0/66), MUC4 (0/11), GFAP (0/10), AE1/AE3 (0/4), Cam5.2 (0/2), PanK (0/2), Claudin (0/4), and NFP (0/3). Follow-up in 47 cases ranged from 1 to 252 months (mean, 35 mo). Ten tumors (24%) recurred locally (all near the nail unit of the fingers or toes) after a mean interval of 27 months. One tumor recurred twice. All recurrent tumors had positive margins on initial biopsy or subsequent excision and no other clinical or pathologic features correlated with recurrence/persistence. To date, no tumor has metastasized. Finally, sequencing of 8 digital fibromyxomas failed to reveal mutations in exon 8 or 9 of GNAS1, in contrast to intramuscular or cellular myxoma.
abstract_id: PUBMED:34889853
Superficial Angiomyxomas Frequently Demonstrate Loss of Protein Kinase A Regulatory Subunit 1 Alpha Expression: Immunohistochemical Analysis of 29 Cases and Cutaneous Myxoid Neoplasms With Histopathologic Overlap. Superficial angiomyxomas (SAMs) are benign cutaneous tumors that arise de novo and in the setting of the Carney complex (CC), an autosomal dominant disease with several cutaneous manifestations including lentigines and pigmented epithelioid melanocytomas. Although most SAM do not pose a diagnostic challenge, a subset can demonstrate histopathologic overlap with other myxoid tumors that arise in the skin and subcutis. Traditional immunohistochemical markers are of limited utility when discriminating SAM from histopathologic mimics. Since protein kinase A regulatory subunit 1 alpha (PRKAR1A) genetic alterations underlie most CC cases, we investigated whether SAM demonstrate loss of PRKAR1A protein expression by immunohistochemistry. In our series, 29 SAM, 26 myxofibrosarcoma, 5 myxoid dermatofibrosarcoma protuberans, 11 superficial acral fibromyxomas, and 18 digital mucous cysts were characterized. Of the 29 SAM examined in this study, 1 was associated with documented CC in a 5-year-old girl. SAM tended to arise in adults (mean 49.7 y; range: 5 to 87 y). Loss of PRKAR1A was seen in 55.2% of cases (16/29) and had a male predilection (87.5%, 12/16). PRKAR1A-inactivated SAM demonstrated significant nuclear enlargement (100%, 16/16 vs. 23.1%, 3/13), multinucleation (81.3%, 13/16 vs. 23.1%, 3/13), and presence of neutrophils (43.8%, 7/16 vs. 0%, 0/13). In contrast, PRKAR1A was retained in all cases of myxofibrosarcoma (100%, 26/26), myxoid dermatofibrosarcoma protuberans (100%, 5/5), superficial acral fibromyxomas (100%, 11/11), and digital mucous cyst (100%, 18/18). Taken together, PRKAR1A loss by immunohistochemistry can be used as an adjunctive assay to support the diagnosis of SAM given the high specificity of this staining pattern compared with histopathologic mimics.
abstract_id: PUBMED:11486169
Superficial acral fibromyxoma: a clinicopathologic and immunohistochemical analysis of 37 cases of a distinctive soft tissue tumor with a predilection for the fingers and toes. This report describes the clinicopathologic features and immunohistochemical findings identified in 37 cases of a distinctive soft tissue tumor that has a predilection for the hands and feet. The study group included 25 male and 12 female subjects ranging in age from 14 to 72 (mean, 43; median, 46) years. The patients presented with solitary masses 0.6 to 5.0 cm (mean, 1.75 cm) that were present from 3 months to 30 years (median duration, approximately 3 years) before surgical intervention and involved the toes (n = 20), fingers (n = 13), and palm (n = 4). Twenty of the cases were documented to involve the nail region. Histologically, the tumors were typically located in the dermis or subcutis and composed of spindled and stellate-shaped cells with random, loose storiform, and fascicular growth patterns. The lesional cells were embedded in myxoid or collagenous matrix, often with mildly to moderately accentuated vasculature and increased numbers of mast cells. There was generally slight to mild nuclear atypia; only 3 cases had more substantial atypia. Mitotic figures were infrequent. Occasional multinucleated stromal cells were noted in 19 cases. The process showed immunoreactivity for CD34 (21 of 23 cases), epithelial membrane antigen (18 of 25 cases), and CD99 (11 of 13 cases). No immunoreactivity was detected for actins, desmin, keratins, or HMB-45, and only 1 of 23 tumors had weak reactivity for S100 protein. The surgical specimens consisted of biopsy or partial resection specimens (n = 4), local excisions (n = 29), and amputated or partially amputated digits (n = 4). Detailed follow-up, available for 18 patients (mean follow-up interval, 10.1 years), revealed 1 recurrence after local excision and 2 instances of persistent or progressive disease after partial excision. A differential diagnosis of fibrous histiocytoma, dermatofibrosarcoma protuberans, acquired (digital) fibrokeratoma, sclerosing perineurioma, cutaneous myxoma (superficial angiomyxoma), and acral myxoinflammatory fibroblastic sarcoma is discussed.
abstract_id: PUBMED:9255279
Superficial angiomyxoma of the right inguinal region: report of a case. We report a rare case of superficial angiomyxoma of the right inguinal region in a 67-year-old man. The tumor, measuring 4.5 x 4.0 x 3.0 cm, had a finger-like shape, was composed of a well-circumscribed conglomerate of multiple myxomatous nodules and was located partially in the dermis and partially in the subcutaneous tissue. Microscopically, in contrast to previously reported cases, the tumor was composed mainly of oval plump stromal cells with amphophilic cytoplasm. Spindle-shaped stromal cells were scattered throughout the tumor. The tumor border was not infiltrative and was well defined by thick hyalinized collagen bundles. Neither hyperchromasia nor pleomorphism was apparent. No mitotic figures were detected in the specimens prepared. Small to medium-sized blood vessels showed a scattered distribution, but large vessels, seen frequently in aggressive angiomyxoma, were absent. Moreover, no plexiform capillary pattern was evident. These findings were diagnostic of superficial angiomyxoma. Although this disease entity has been considered to include cutaneous focal mucinosis, follicular fibroma, trichofolliculoma and trichogenic adnexal tumor, we propose that these tumors should be excluded.
abstract_id: PUBMED:17885669
Apolipoprotein D in CD34-positive and CD34-negative cutaneous neoplasms: a useful marker in differentiating superficial acral fibromyxoma from dermatofibrosarcoma protuberans. More recent techniques to characterize the genetic profile of soft-tissue tumors include the use of gene arrays. Using this technique, Apolipoprotein D (Apo D), a 33-kDa glycoprotein component of high-density lipoprotein, has been found to be highly expressed in dermatofibrosarcoma protuberans. To corroborate these results, we sought to ascertain the utility of Apo D by investigating its sensitivity and specificity in a variety of CD34-positive and CD34-negative cutaneous neoplasms including superficial acral fibromyxoma, sclerotic fibromas, and cellular dermatofibromas. Of interest, we found absence of Apo D expression in all four cases of superficial acral fibromyxoma. Of the remaining CD34-positive lesions, Apo D expression was noted in 35/36 (97%) cases of dermatofibrosarcoma protuberans, 3/5 (60%) giant-cell fibroblastomas, 4/4 (100%) sclerotic fibromas, 8/8 (100%) neurofibromas, and 1/1 (100%) solitary fibrous tumor. Of the CD34-negative lesions, Apo D expression was noted in 2/22 (9%) regular dermatofibroma, 23/45 (51%) cellular dermatofibroma, 10/10 (100%) malignant fibrous histiocytoma, 9/10 (90%) atypical fibroxanthoma, 7/8 (86%) cellular neurothekeoma, 9/9 (100%) malignant melanoma, 8/8 (100%) melanocytic nevi, 0/2 superficial angiomyxoma, 0/15 fibromatosis, 0/1 nodular fasciitis, and 1/2 (50%) desmoplastic fibroblastomas. In summary, our findings indicate that Apo D expression is not specific to dermatofibrosarcoma protuberans. Its principal use as an immunohistochemical adjunct lies in its utility in differentiating superficial acral fibromyxoma from dermatofibrosarcoma protuberans. Although strong positive staining of Apo D in a markedly atypical fibrohistiocytic lesion is suggestive of atypical fibroxanthoma and/or malignant fibrous histiocytoma, further studies with the inclusion of other atypical spindled cell neoplasms are required to conclusively prove the same.
abstract_id: PUBMED:26838773
Mesenchymal Lesions of the Lower Genital Tract. A variety of site-specific mesenchymal lesions of the lower genital tract may occur, which are often diagnostically challenging because of their morphologic overlap. Some of this morphologic homogeneity presumably stems from a potentially shared origin from the specialized superficial stroma of the distal female genital tract. Despite shared appearances, morphologic distinction remains the gold standard for diagnosis of most of these lesions, as immunoperoxidase staining is often not helpful in their distinction.
Answer: The study of 41 new cases of superficial acral fibromyxoma (SAF) from the U.K. suggests that SAF is a distinct entity with typical clinical, histological, and immunohistochemical features (PUBMED:18764846). The study indicates that general pathologists and dermatopathologists may not fully recognize SAF due to its relative rarity and recent documentation. The cases reviewed presented as a proliferation of spindle-shaped and/or stellate cells with a storiform and fascicular pattern embedded in a fibromyxoid/collagenous stroma with conspicuous mast cells, and the characteristic immunophenotype was CD34+, CD99+/-, epithelial membrane antigen+ focally/-, S100-, desmin-, smooth muscle actin-, HMB45- and cytokeratin- (PUBMED:18764846).
Given the distinct clinical and pathological features of SAF, the question arises whether myxoma (NOS) and fibroma (NOS) should continue as part of 21st-century reporting. The implication is that, with the increasing recognition and characterization of SAF, it may be more appropriate to classify and report these lesions specifically as SAF rather than under the generic, non-specific labels of myxoma (NOS) and fibroma (NOS). This would allow for more accurate diagnosis, treatment, and prognostication for patients with this condition.
The study also highlights the need for complete excision and follow-up review due to the limited follow-up available in the study, which precluded a firm comment on recurrence (PUBMED:18764846). This further supports the notion that specific recognition and reporting of SAF are crucial for proper patient management. |
Instruction: Are social supports in late midlife a cause or a result of successful physical ageing?
Abstracts:
abstract_id: PUBMED:9794023
Are social supports in late midlife a cause or a result of successful physical ageing? Background: Many studies have noted a strong association between poor social support and premature mortality. A limitation of such studies has been their failure to control adequately for confounders that damage both social supports and physical health.
Methods: A 50-year prospective multivariate study of 223 men was used to examine the possible causal relationships between social supports and health. Alcohol abuse, prior physical health and mental illness prior to age 50 were controlled. Relative social supports were quantified over the period from age 50 to 70.
Results: Adequacy of social supports from age 50 to 70 was powerfully correlated with physical health at age 70 (P < 0.001). However, such social supports were also powerfully predicted by alcohol abuse (P < 0.001), smoking (P < 0.001) and indicators of major depressive disorder (P < 0.01) assessed at age 50. When prior smoking, depression and alcohol abuse were controlled, then the association of physical health with social supports was very much attenuated. Some facets of social support like religion and confidantes were unassociated with health even at a univariate level. Surprisingly, in this sample friends seemed more important for sustained physical health than closeness to spouse and to children.
Conclusions: While social supports undoubtedly play a significant role in maintaining physical well-being in late life, much of the association between poor social supports and mortality may be mediated by alcoholism, smoking and pre-morbid psychopathology.
abstract_id: PUBMED:23746413
Social connectedness and predictors of successful ageing. Objectives: As populations age it is important to minimize the time people live in a less than successful state of ageing. Our aim was to identify predictors of successful ageing.
Study Design: At baseline (1990-1994), demographic, anthropometric, health, social connectedness and behavioural data were collected for 41,514 men and women participating in the Melbourne Collaborative Cohort Study. Only those born in Australia, New Zealand and the UK were included in this analysis. At follow-up in 2003-2007, data on health conditions, physical disability and psychological stress were collected and used to define successful ageing. A total of 5512 eligible participants with full data, aged 70 and over, were included in this longitudinal analysis.
Outcome Measures: Successful ageing at follow-up was defined as being aged 70 years or over with an absence of diabetes, heart attack, coronary artery bypass graft surgery, angioplasty, stroke, cancer, impairment, and perceived major difficulty with physical functioning, together with a low risk of psychological distress.
Results: A body mass index in the healthy range, low waist/hip ratio, not smoking, being physically active, and not having arthritis, asthma, hypertension, or gallstones were associated prospectively with successful ageing. There was no evidence for an association of social connectedness with successful ageing.
Conclusions: A healthy lifestyle and maintenance of healthy weight, but not social connectedness, may improve the chances of ageing successfully by our definition. Social connectedness may be related to a perception of ageing well, but it does not appear to help avoid the usual conditions associated with ageing.
abstract_id: PUBMED:34082996
Men in/and crisis: The cultural narrative of men's midlife crises. Focusing on cultural narratives about men's midlife crises, this article explores the more subtle forms that medicalization takes by broadening and re-orientating the concept of successful ageing away from strictly political, medical and/or sociological discussions of health and ageing and towards cultural representations of masculinity, optimization and the handling of a personal crisis. Using two examples, the British comedy Swimming with Men (2018) and the novel Doppler (2014) by Erlend Loe, the article discusses the entanglement of masculinity, crisis and ageing, and in doing so argues that cultural narratives about men's midlife crises do more than merely comment on already existing understandings of ageing and should in fact be understood as important components in the ongoing medicalization of middle-aged masculinities.
abstract_id: PUBMED:26635373
Successful ageing for psychiatrists. Objective: This paper aims to explore the concept and determinants of successful ageing as they apply to psychiatrists as a group, and as they can be applied specifically to individuals.
Conclusions: Successful ageing is a heterogeneous, inclusive concept that is subjectively defined. No longer constrained by the notion of "super-ageing", successful ageing can still be achieved in the face of physical and/or mental illness. Accordingly, it remains within the reach of most of us. It can, and should be, person-specific and individually defined, specific to one's bio-psycho-social and occupational circumstances, and importantly, reserves. Successful professional ageing is predicated upon insight into signature strengths, with selection of realistic goal setting and substitution of new goals, given the dynamic nature of these constructs as we age. Other essential elements are generativity and self-care. Given that insight is key, taking a regular stock or inventory of our reserves across bio-psycho-social domains might be helpful. Importantly, for successful ageing, this needs to be suitably matched to the professional task and load. This lends itself to a renewable personal ageing plan, which should be systemically adopted with routine expectations of self-care and professional responsibility.
abstract_id: PUBMED:32097986
An investigation of the relationship between ageing in place and successful ageing in elderly individuals. Background: With the increase in longevity in the world, successful ageing has become an important issue. This study aims to investigate the relationship between ageing in place and successful ageing in elderlies.
Methods: This study, which utilised a descriptive and relational-screening model, was conducted with the participation of 370 individuals aged 65 and over who were registered in Family Health Centres in a city centre located in the eastern part of Turkey.
Results: The participating elderly individuals' Successful Ageing Scale mean score was 54.16 ± 11.32, and their Ageing in Place Scale mean score was 54.24 ± 12.88. While there was a positive, statistically significant relationship between the Successful Ageing Scale total score, the Ageing in Place Scale total score, and living in the same environment, there was a negative, significant relationship between age and the Successful Ageing Scale total score.
Conclusion: Elderly individuals' successful ageing processes are affected positively by a longer duration of living in the same environment and by greater satisfaction with the place they live in. Successful ageing is negatively affected by increasing age. It is recommended that elderly people's living environments should not be changed and that their social support networks should be strengthened as much as possible so they can have a successful ageing process.
abstract_id: PUBMED:36416380
The impact of social capital on successful ageing of empty nesters: A cross-sectional study. Aim: To explore the impact of social capital on successful ageing among empty nesters in China.
Design: A cross-sectional study.
Methods: The data for this study came from the survey of the China Health and Retirement Longitudinal Study (CHARLS) in 2018. Overall, 6098 empty nesters aged 60 years and over were included. Successful ageing was defined according to Rowe and Kahn's model. Social capital includes social trust, social support, reciprocity and social networks. Multivariable logistic regression and a classification and regression tree model were applied to estimate the impact of social capital on successful ageing. For this study, we followed the Reporting of Studies Conducted Using Observational Routinely Collected Health Data (RECORD) reporting guidelines, an extension of Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines.
Results: The successful ageing rate of empty nesters in China was 9.2%. Empty nesters who had a higher level of reciprocal behaviour and caregiving support across several dimensions of social capital, and who were members of organizations in their social networks, had higher odds of achieving successful ageing. We also observed interactions with social capital associated with successful ageing, suggesting that special attention should be given to empty nesters who are less educated, have no caregiving support, live in rural areas, have worse self-rated health, are older, do not have reciprocal behaviours and are unmarried.
Conclusions: The results of this study show that social capital, especially in terms of reciprocity, caregiving support and organizational membership in a social network, can contribute to the achievement of successful ageing among empty nesters.
Impact: This study is the first to confirm the impact of social capital on the successful ageing of empty nesters, and it provides new ideas for state, community and health care workers to address ageing issues.
No Patient Or Public Contribution: Because this study used data from a public database, all data were collected by survey agency personnel, so this section is not applicable to this study.
abstract_id: PUBMED:37047895
Social Networks, New Technologies, and Wellbeing-An Interview Study on Factors Influencing Older Adults' Successful Ageing. Many factors are considered vital in supporting successful ageing and older adults' wellbeing. Whilst evidence exists around facilitating and hindering factors in the general use of various forms of institutional and family support and personal development-oriented education and/or new technologies, evidence is limited with regard to older people's motivations, expectations, and experiences surrounding ageing. Hence, in this study, the author used a qualitative explanatory method to interpret the factors influencing seniors' successful ageing. The author's focus was on how seniors experience ageing. The second issue was how they have been organizing life in old age. The third point concerned their expectations towards ageing now and in the future. Thirteen older adults (60+) were interviewed nationwide using a semi-structured scenario tool. Their objective was to give rich descriptions of their experiences of ageing. The interviews revealed the older adults' own experiences and enabled an understanding of their motivations, perceptions, moderators, and expectations around successful ageing. Based on the analysis of the qualitative data, the author developed three main themes, each with its own sub-themes: 1. Life satisfaction (transitioning to retirement, using coping strategies in adaptation to negative changes, reaching personal goals, leading a meaningful life); 2. Supportive environments (being independent but using temporary assistance from relatives and/or people close to oneself, living with family members (e.g., husband or wife, children, grandchildren), having access to the health care system); 3. Social integration (social relations, social engagement, independence in using technological advancements). The main categories that emerged from the three themes were social networks, new technologies, and wellbeing. To analyze these issues, the author used a sociological approach. The theoretical explorations were embedded mainly in two methods: criticism of writing and the analytical-comparative method.
abstract_id: PUBMED:28804348
Prevalence and correlates of successful ageing: a comparative study between China and South Korea. Successful ageing is often defined as a later life with less disease and disease-related disability, a high level of cognitive and physical function, and an active lifestyle. Few studies have compared successful ageing across different societies in a non-Western social context. This study aims to compare the prevalence and correlates of successful ageing between China and South Korea. The data come from the Chinese Longitudinal Healthy Longevity Survey (CLHLS) and the Korean Longitudinal Study of Ageing (KLoSA). A total of 19,346 community-dwelling elders over 65 years were included, 15,191 from China and 4,155 from Korea. A multidimensional construct of successful ageing was used, with the criteria of no major comorbidity, being free of disability, good mental health, engaging in social or productive activity, and satisfaction with life. Correlates of successful ageing included demographics (gender, age, and rural/urban residence), socioeconomic features (financial status, education, and spousal accompaniment), and health behaviours (smoking, alcohol-drinking, and exercising). The results showed that 18.6% of the older adults in China were successful agers, compared with 25.2% in Korea. When gender and age were adjusted, older adults were 51% less likely to be successful agers in China than in Korea (p < 0.001). The association patterns between successful ageing and its correlates are similar between China and Korea. However, before the socioeconomic variables were controlled, rural residence was negatively related to successful ageing in China, whereas this was not the case in Korea. The gender gap in successful ageing was mostly explained by socioeconomic features and health behaviours in Korea, but not in China. In both countries, good financial condition was highly associated with successful ageing. The study suggests that advancement of the public health system could better control progression of non-communicable diseases among old people and thus promote successful ageing.
abstract_id: PUBMED:34030546
Life-course trajectories of working conditions and successful ageing. Aims: As populations are ageing worldwide, it is important to identify strategies to promote successful ageing. We investigate how working conditions throughout working life are associated with successful ageing in later life.
Methods: Data from two nationally representative longitudinal Swedish surveys were linked (n=674). In 1991, respondents were asked about their first occupation, occupations at ages 25, 30, 35, 40, 45 and 50 years and their last recorded occupation. Occupations were matched with job exposure matrices to measure working conditions at each of these time points. Random effects growth curve models were used to calculate intra-individual trajectories of working conditions. Successful ageing, operationalised using an index including social and leisure activity, cognitive and physical function and the absence of diseases, was measured at follow-up in 2014 (age 70 years and older). Multivariable ordered logistic regressions were used to assess the association between trajectories of working conditions and successful ageing.
Results: Intellectually stimulating work, that is, work of substantive complexity, at the beginning of one's career, followed by an accumulation of more intellectually stimulating work throughout working life, was associated with higher levels of successful ageing. In contrast, a history of stressful, hazardous or physically demanding work was associated with lower levels of successful ageing.
Conclusions: Promoting a healthy workplace, by supporting intellectually stimulating work and reducing physically demanding and stressful jobs, may contribute to successful ageing after retirement. In particular, it appears that interventions early in one's employment career could have positive, long-term effects.
abstract_id: PUBMED:36900720
The Impact of Caregiving on Successful Ageing of Informal Carers: A Qualitative Study among Respiratory Patients' Caregivers. Providing intensive informal caregiving can overburden caregivers, possibly impacting successful ageing factors such as physical and mental health and social life. This article aimed to investigate how informal caregivers experience the impact of providing care for chronic respiratory patients on the process of caregiver ageing. A qualitative exploratory study was performed using semi-structured interviews. The sample comprised 15 informal caregivers who provided intensive care for patients with chronic respiratory failure for more than six months. They were recruited while accompanying the patients on their examination for chronic respiratory failure at the Special Hospital for Pulmonary Disease in Zagreb between January 2020 and November 2020. Semi-structured interviews were used with informal caregivers, and interview transcripts were analysed by the inductive thematic analysis method. Similar codes were organised into categories, and categories were grouped into themes. Two themes were identified in the domain of physical health, relating to informal caregiving activities and inadequate treatment of caregivers' difficulties; three themes were in the domain of mental health, relating to satisfaction with the recipient of care and feelings; and two themes were in the domain of social life, relating to social isolation and social support. Informal caregivers of patients with chronic respiratory failure experience a negative impact on the factors that contribute to their successful ageing. The results of our research suggest that caregivers need support in maintaining their own health and social inclusion.
Answer: The relationship between social supports in late midlife and successful physical ageing is complex and may be bidirectional. On one hand, social supports from age 50 to 70 have been found to be powerfully correlated with physical health at age 70, suggesting that social supports could play a significant role in maintaining physical well-being in late life (PUBMED:9794023). However, this same study also found that the association between social supports and physical health was greatly attenuated when controlling for prior smoking, depression, and alcohol abuse. This indicates that while social supports are correlated with physical health, much of this association may be mediated by other factors such as alcoholism, smoking, and pre-morbid psychopathology, rather than social supports being a direct cause of successful ageing (PUBMED:9794023).
In contrast, another study found no evidence for an association of social connectedness with successful ageing, suggesting that social connectedness may not directly help avoid the usual conditions associated with ageing (PUBMED:23746413). This study defined successful ageing as the absence of certain health conditions and disabilities, as well as a low risk of psychological distress, and found that a healthy lifestyle and maintenance of healthy weight were more predictive of successful ageing than social connectedness.
Furthermore, social capital, which includes elements such as social trust, support, reciprocity, and networks, has been shown to contribute to successful ageing among empty nesters, indicating that social factors can indeed have a positive impact on ageing outcomes (PUBMED:36416380). This suggests that while social supports may not directly cause successful physical ageing, they can be an important contributing factor.
In summary, social supports in late midlife may not be a direct cause of successful physical ageing, but they are associated with it. The relationship is likely influenced by a range of other factors, including health behaviors and pre-existing conditions, and social supports may still play a significant role in the broader context of successful ageing (PUBMED:9794023; PUBMED:23746413; PUBMED:36416380). |
Instruction: Does type of disability and participation in rehabilitation affect satisfaction of stroke survivors?
Abstracts:
abstract_id: PUBMED:26123856
Does type of disability and participation in rehabilitation affect satisfaction of stroke survivors? Results from the 2013 Behavioral Risk Factor Surveillance System (BRFSS). Background: Studies show that stroke survivors typically have lower life satisfaction than persons who have not been diagnosed with stroke.
Objective: To determine if significant differences in life satisfaction exist between stroke survivors with and without functional limitations and whether specific functional limitations, as well as participation in outpatient rehabilitation affect the odds of reported life satisfaction for stroke survivors.
Method: Chi-square analysis was used to examine data from the 2013 BRFSS to determine the relationship of functional limitations, as well as participation in rehabilitation services, to life satisfaction for stroke survivors. Logistic regression analysis was used to determine which variables increased the odds of reported poor life satisfaction.
Results: Stroke survivors experiencing difficulty with cognition, depression and IADLs showed significantly lower life satisfaction than those who did not experience these functional limitations. Survivors exhibiting activity limitations had almost twice the odds of reporting poor life satisfaction and those experiencing limitations in cognition and IADLs had 2.88 times and 1.81 times the odds as others without these limitations of reporting poor life satisfaction, respectively. Participation in outpatient rehabilitation reduced the odds of reporting of poor life satisfaction by approximately one half.
Conclusions: Rehabilitation focused on addressing these functional limitations would increase life satisfaction for persons diagnosed with stroke. Future research on specific types of cognitive and daily living limitations would assist policy makers and referral sources in making appropriate referrals to rehabilitation.
abstract_id: PUBMED:37696200
An application of Organismic Integration Theory to enhance basic psychological needs satisfaction and motivation for rehabilitation in older stroke survivors: A randomized controlled trial study. Stroke survivors may experience disability and need long-term post-stroke rehabilitation to maintain optimal functioning. However, rehabilitation may not be performed sufficiently owing to a lack of motivation. This randomized controlled trial aimed to investigate the effectiveness of the Organismic Integration Theory (OIT)-based program for enhancing basic psychological needs satisfaction and motivation for rehabilitation in older stroke survivors. Participants were 38 older stroke survivors randomly assigned to an experimental group (n = 19) receiving the OIT-based program and a control group (n = 19) receiving standard care. Data were collected at baseline, and at 1, 4, and 12 weeks after the program ended. Data analysis showed significantly higher levels of basic psychological needs satisfaction and motivation for rehabilitation in participants receiving the OIT-based program than in those receiving standard care. The findings support the effectiveness of the OIT-based program in enhancing basic psychological needs satisfaction and motivation for home rehabilitation of older stroke survivors.
abstract_id: PUBMED:37781842
Stroke survivors' long-term participation in paid employment. Background: Knowledge on long-term participation is scarce for patients with paid employment at the time of stroke.
Objective: Describe the characteristics and the course of participation (paid employment and overall participation) in patients who did and did not remain in paid employment.
Methods: Patients with paid employment at the time of stroke completed questions on work up to 30 months after starting rehabilitation, and the Utrecht Scale for Evaluation of Rehabilitation-Participation (USER-P, Frequency, Restrictions and Satisfaction scales) up to 24 months. Baseline characteristics of patients with and without paid employment at 30 months were compared using Fisher's Exact Tests and Mann-Whitney U Tests. USER-P scores over time were analysed using Linear Mixed Models.
Results: Of the 170 included patients (median age 54.2 years, interquartile range 11.2; 40% women), 50.6% reported paid employment at 30 months. Those returning to work reported more working hours at baseline, better quality of life and communication, and were more often self-employed and in office jobs. The USER-P scores did not change statistically significantly over time.
Conclusion: About half of the stroke patients remained in paid employment. Optimizing interventions for returning to work and achieving meaningful participation outside of employment seem desirable.
abstract_id: PUBMED:36138370
Association between participation self-efficacy and participation in stroke survivors. Background: Most stroke survivors face restrictions in functional disability and social participation, which can impede their recovery and community reintegration. Participation self-efficacy refers to survivors' confidence in using strategies to manage participation in areas including community living and work engagement. This study aimed to assess the association between participation self-efficacy and participation among stroke survivors.
Methods: This study adopted a cross-sectional correlational design with a convenience sample of 336 stroke survivors recruited from five hospitals in China. Participation self-efficacy was measured using the Chinese version of the Participation Strategies Self-Efficacy Scale (PS-SES-C) and participation measured using the Chinese version of the Reintegration to Normal Living Index (RNLI-C). The association between participation self-efficacy and participation was examined using multiple regression analysis with adjustment for potential confounders.
Results: Participants had a mean age of 69.9 ± 11.5 years, with most (81.6%) having an ischaemic stroke, and more than half (61.6%) a first-ever stroke. After adjustment for potential confounders, every 10-point increase in the PS-SES-C total score was significantly associated with an average 1.3-point increase in the RNLI-C total score (B = 1.313, SE = 0.196, p < 0.001).
Conclusions: This study demonstrates that participation self-efficacy is significantly associated with participation among Chinese community-dwelling survivors of a mild or moderate stroke. This suggests that rehabilitation programmes for stroke survivors may be more effective if they incorporate participation-focused strategies designed to enhance self-efficacy.
abstract_id: PUBMED:23658564
Perceived and experienced restrictions in participation and autonomy among adult survivors of stroke in Ghana. Background: Many stroke survivors do not participate in everyday life activities.
Objective: To assess the perceived and experienced restrictions in participation and autonomy among adult stroke survivors in Ghana.
Method: The "Impact on Participation and Autonomy Questionnaire" (IPAQ) instrument was administered in a survey of 200 adult stroke survivors to assess perceived restrictions in participation and autonomy, followed by in-depth interviews with a sub-sample on the restrictions they experienced in participation.
Results: Perceived restrictions in participation were most prevalent in the domains of education and training (3.46±0.79), paid or voluntary work (2.68±0.89), helping and supporting other people (2.20±0.82), and mobility (2.12±0.79). There were significant differences in two domains between survivors who received physiotherapy and those who received traditional rehabilitation. Over half of the survivors also perceived they would encounter severe problems in participation in the domains of paid or voluntary work, mobility, and education and training. The sub-sample of stroke survivors (n=7) mostly experienced restrictions in participation and autonomy in going outside the house, working, and in fulfilling family roles.
Conclusion: If these perceptions and experiences are not addressed during rehabilitation, they could further inhibit the full participation and social integration of stroke survivors.
abstract_id: PUBMED:26728302
Impact of depression following a stroke on the participation component of the International Classification of Functioning, Disability and Health. Purpose: To assess the impact of post-stroke depression on the participation component of the International Classification of Functioning, Disability and Health (ICF).
Method: Thirty-five stroke survivors with chronic hemiparesis were divided into two groups: those with and without depression. The Geriatric Depression Scale (GDS) was used for the analysis of depressive symptoms. Participation was analysed using the Stroke Specific Quality of Life scale. The Mann-Whitney test was used to compare the participation scores between the two groups. Spearman's correlation coefficients were calculated to determine the strength of the association between the assessment tools. Simple linear regression was used to determine the impact of depression on participation. An alpha risk of 0.05 was considered indicative of statistical significance.
Results: The group with depression had low participation scores (p = 0.04). A statistically significant negative correlation of moderate magnitude was found between depression and participation (r = -0.6; p = 0.04). The linear regression model demonstrated that depression is a moderate predictor of participation (r(2) = 0.51; p = 0.001).
Conclusions: Depression is a moderate predictor of participation among stroke survivors, explaining 51% of the decline in this aspect. Thus, depression should be diagnosed, monitored and treated to ensure a better prognosis regarding social participation following a stroke. Implications for Rehabilitation: Individuals with post-stroke depression experience a lower degree of social participation. Depression explains 51% of the decline in participation following a stroke. The present findings can serve as a basis to assist healthcare professionals involved in the rehabilitation of stroke survivors and can assist in the establishment of adequate treatment plans in stroke rehabilitation.
abstract_id: PUBMED:33879705
Determinants of life satisfaction among stroke survivors 1 year post stroke. Abstract: Stroke is a leading cause of death and severe long-term disability worldwide. The consequences of stroke, aside from diminished survival, have a significant impact on an individual's ability to maintain self-autonomy and life satisfaction (LS). Thus, this study aimed to assess LS and other specific domains of LS in stroke survivors following their first-ever stroke, and to describe the relationship using socio-demographic and stroke-related variables. This study recruited 376 stroke survivors (244 men and 132 women, mean age: 57 years) 1 year following stroke. Data on participants' LS (measured using the Life Satisfaction Questionnaire [LiSat-11]), socio-demographics, and stroke-related variables were collected. Univariate analysis showed that LS and the 10 specific domains were not associated with the patients' gender or stroke type; however, age at onset, marital status, and vocational situation were significantly associated with some domains in LiSat-11 (Spearman's rho = 0.42-0.87; all P < 0.05). Logistic regression revealed that verbal and cognitive dysfunction were the most negative predictors of LS (odds ratio 4.1 and 3.7, respectively). LS is negatively affected in stroke survivors 1 year post onset. The results indicate that recovering social engagement is a positive predictor of higher LS in stroke survivors. More importantly, the findings revealed that cognitive and verbal dysfunctions were the most prominent negative predictors of the overall gross level of LS. Multidisciplinary rehabilitation for stroke survivors is therefore critical.
abstract_id: PUBMED:31669298
Participation Restrictions and Satisfaction With Participation in Partners of Patients With Stroke. Objective: To investigate participation restrictions and satisfaction with participation in partners of patients with stroke.
Design: Cross-sectional study.
Setting: Five rehabilitation centers and 3 hospitals in The Netherlands.
Participants: A consecutive sample of 54 partners of patients with stroke. The patients were participating in a multicenter randomized controlled trial.
Interventions: Not applicable.
Main Outcome Measures: Participation restrictions as a result of the patient's stroke and satisfaction with participation measured with the Utrecht Scale for Evaluation of Rehabilitation-Participation.
Results: The number of participation restrictions differed between partners of patients with stroke. The median number of participation restrictions experienced was 2 for the 11 activities assessed. Most participation restrictions were reported regarding paid work, unpaid work, or education, relationship with partner (ie, patient), and going out. Partners were least satisfied regarding going out, sports or other physical exercise, and day trips and other outdoor activities. The participation restrictions and satisfaction with participation were significantly correlated (ρ=0.65; P<.001), although this relation between participation restrictions and satisfaction with participation differed for the various activities. Differences between satisfied partners with participation restrictions and dissatisfied partners concerned anxiety (U=93.0; P=.026), depression (U=81.5, P=.010), and the number of restrictions experienced (U=50.0; P<.001).
Conclusions: There is great variety in restrictions experienced by partners regarding different activities and in their satisfaction with these activities. Specific assessment is therefore important when supporting partners of patients with stroke.
abstract_id: PUBMED:23340071
Participation in the chronic phase of stroke. Background: Participation is a multidimensional concept, consisting of an objective and a subjective dimension. Many studies have focused on determinants of only 1 dimension of participation post stroke.
Objective: To describe participation (both objective and subjective) and to determine how physical and cognitive independence and subjective complaints (pain, fatigue, and mood) influence participation in community-dwelling stroke survivors in the Netherlands.
Methods: The Utrecht Scale for Evaluation of Rehabilitation (USER) measures physical and cognitive independence and subjective complaints. USER-Participation measures 3 dimensions of participation: frequency (objective perspective), restrictions (subjective perspective), and satisfaction (subjective perspective). Spearman correlations and backward linear regression analyses were used to analyze associations between the 3 USER-Participation scores with demographics, stroke characteristics, physical and cognitive independence, and subjective complaints.
Results: Of the 111 participants, 48.5% returned to work post stroke, but mostly for only 1 to 16 hours a week. Experienced participation restrictions were most prevalent in physical exercise, chores in/around the house, housekeeping, and outdoor activities. On average, participants were relatively satisfied with their participation, but dissatisfaction occurred in cognition, activities outdoors, and work/housekeeping. Regression analysis revealed that objective participation was determined by physical and cognitive independence, age, and education, whereas subjective participation was determined by physical and cognitive independence, fatigue, and mood.
Conclusions: Most participants experienced participation problems, despite relatively good physical recovery. In addition to physical and cognitive factors, subjective complaints of persons with stroke should be addressed in the rehabilitation program.
abstract_id: PUBMED:26889797
Participation in leisure activities after stroke: A survey of community-residing stroke survivors in Nigeria. Background: Leisure provides pleasure and relaxation, and has health benefits even after a stressful and life-changing event such as a stroke.
Objective: This study examined leisure participation among a sample of community-residing stroke survivors in Nigeria.
Methods: Fifty-five stroke survivors undergoing rehabilitation were consecutively recruited from two government hospitals in Northern Nigeria. Data on pre- and post-stroke participation, and socio-demographic (age, sex, marital, employment, and educational status) and clinical (level of disability, post-stroke duration, stroke type and side of hemiplegia/hemiparesis) attributes of the stroke survivors were obtained. Leisure participation was assessed in four domains of recreational, social, cognitive, and productive/creative activities. Associations between leisure participation and the socio-demographic and clinical variables were examined using bivariate analysis.
Results: Mean (SD) age of the stroke survivors was 53.55 (14.39) years. Prevalence of leisure participation was 89.1%. Participation in specific leisure domains however varied thus: social (83.6%), cognitive (60%), recreational (41.8%), productive/creative activities (30.9%). Significant associations were observed between participation in cognitive, productive/creative, and recreational leisure activities, and specific socio-demographic and clinical attributes.
Conclusions: Leisure participation was high in a general sense but marginal in recreational and productive/creative activities. The observed socio-demographic and clinical associations with post-stroke leisure participation may assist in providing effective leisure rehabilitation strategies.
Answer: Yes, the type of disability and participation in rehabilitation do affect the satisfaction of stroke survivors. According to the study results from the 2013 Behavioral Risk Factor Surveillance System (BRFSS), stroke survivors experiencing difficulty with cognition, depression, and instrumental activities of daily living (IADLs) showed significantly lower life satisfaction than those who did not experience these functional limitations. Survivors with activity limitations had almost twice the odds of reporting poor life satisfaction, and those experiencing limitations in cognition and IADLs had 2.88 times and 1.81 times the odds, respectively, of reporting poor life satisfaction compared to others without these limitations. Participation in outpatient rehabilitation was found to reduce the odds of reporting poor life satisfaction by approximately one half (PUBMED:26123856).
Furthermore, an application of Organismic Integration Theory (OIT) to enhance basic psychological needs satisfaction and motivation for rehabilitation in older stroke survivors through a randomized controlled trial showed that participants receiving the OIT-based program had significantly higher levels of basic psychological satisfaction and motivation for rehabilitation than those receiving standard care (PUBMED:37696200).
Additionally, the study on stroke survivors' long-term participation in paid employment indicated that about half of the stroke patients remained in paid employment and that optimizing interventions for returning to work and achieving meaningful participation outside of employment is desirable (PUBMED:37781842).
Moreover, the association between participation self-efficacy and participation in stroke survivors demonstrated that participation self-efficacy is significantly associated with participation among Chinese community-dwelling survivors of a mild or moderate stroke, suggesting that rehabilitation programs incorporating participation-focused strategies designed to enhance self-efficacy may be more effective (PUBMED:36138370).
In summary, the type of disability, particularly cognitive and daily living limitations, as well as depression, significantly affect life satisfaction among stroke survivors. Participation in rehabilitation, especially programs that enhance psychological needs satisfaction and self-efficacy, can positively influence satisfaction levels in this population. |
Instruction: Adult Crohn disease: can ileoscopy replace small bowel radiology?
Abstracts:
abstract_id: PUBMED:8645954
The role of small bowel radiology in the diagnosis and management of Crohn's disease. A total of 50 children with Crohn's disease were examined by barium follow-through and colonoscopy with ileoscopy, to determine the value of small bowel radiology. Of these children, 40 (80%) had evidence of small bowel Crohn's disease on ileoscopy and/or barium follow-through. Twenty-two (44%) had disease confined to the terminal ileum. Radiology diagnosed disease proximal to the terminal ileum in 18 cases (36%), including 5 children in whom the terminal ileum was normal. Ileoscopy was not possible in nine patients (18%), six of whom had small bowel disease on barium follow-through. Colonic involvement, demonstrated in 34 (68%), was the sole site of disease in 6 (12%). Fifteen (30%) children had surgery, which in six (12%) was determined by the radiological findings of complicated small bowel disease. As the terminal ileum may be uninvolved in the presence of proximal ileal disease, normal ileoscopy does not exclude small bowel Crohn's disease. Small bowel radiology remains necessary to assess the full extent of Crohn's disease in children.
abstract_id: PUBMED:15352909
Diagnostic accuracy of faecal calprotectin estimation in prediction of abnormal small bowel radiology. Background: Patients being investigated for symptoms of abdominal pain, diarrhoea and/or weight loss often undergo small bowel radiology as part of their diagnostic workup, mainly to exclude inflammatory bowel disease.
Aim: To assess and compare the utility of a single faecal calprotectin estimation to barium follow through as well as conventional inflammatory markers such as erythrocyte sedimentation rate and C-reactive protein in exclusion of intestinal inflammation.
Methods: Seventy-three consecutive cases undergoing barium follow through for investigation of symptoms of diarrhoea and/or abdominal pain, with or without weight loss, were studied. The control group comprised 25 cases with known active Crohn's disease (positive controls), 26 normal healthy volunteers (negative controls) and 25 cases of irritable bowel syndrome diagnosed by Rome II criteria. Symptoms, erythrocyte sedimentation rate and C-reactive protein were recorded at recruitment, and a single stool sample was assayed for calprotectin within 7 days prior to or after barium follow through.
Results: The median calprotectin value in the active Crohn's group, irritable bowel syndrome group and normal volunteers was 227 microg/g of stool, 19 and 10 microg/g respectively (P < 0.0001). A faecal calprotectin above a cut-off value of 60 microg/g was able to predict all nine cases with an abnormal barium follow through as well as all six cases with a normal barium follow through but with organic intestinal disease. The negative predictive value of a single calprotectin result below 60 microg/g of stool was 100% compared with 91% each for erythrocyte sedimentation rate > 10 mm and C-reactive protein > 6 mg/L and 84% for a combination of erythrocyte sedimentation rate and C-reactive protein in predicting absence of organic intestinal disease.
Conclusion: A single stool calprotectin value < 60 microg/g of stool obviates the need for further barium radiology of the small bowel, is more accurate than measurement of erythrocyte sedimentation rate or C-reactive protein and effectively excludes Crohn's disease or non-functional gastrointestinal disease.
abstract_id: PUBMED:38188070
Small Bowel Carcinoma in the Setting of Inflammatory Bowel Disease. Small bowel carcinomas are rare in the general population, but the incidence is increasing. Patients with inflammatory bowel diseases (IBDs) are at significantly higher risk of small bowel adenocarcinomas than their non-IBD counterparts, with Crohn's patients having at least a 12-fold increased risk and ulcerative colitis patients with a more controversial and modest 2-fold increased risk compared with the general population. IBD patients with small bowel carcinomas present with nonspecific symptoms that overlap with typical IBD symptoms, and this results in difficulty making a preoperative diagnosis. Cross-sectional imaging is rarely diagnostic, and most cancers are found incidentally at the time of surgery performed for an IBD indication. As such, most small bowel carcinomas are found at advanced stages and carry a poor prognosis. Oncologic surgical resection is the treatment of choice for patients with locoregional disease with little evidence available to guide adjuvant therapy. Patients with metastatic disease are treated with systemic chemotherapy, and surgery is reserved for palliation in this population. Prognosis is poor with few long-term survivors reported.
abstract_id: PUBMED:31832494
Evaluation of Small-Bowel Patency in Crohn's Disease: Prospective Study with a Patency Capsule and Computed Tomography. Background And Purpose: Patency capsule (PC) examination is usually performed - previously to capsule endoscopy - to evaluate small-bowel patency in patients with established Crohn's disease (CD). The reported PC retention rate is significantly higher than expected. Our aims were to assess small-bowel patency, to determine the precise location of the retained PC in patients with CD, and to determine the false positive rate of evaluation with a radiofrequency identification tag (RFIT) scanner.
Methods: This is a prospective single-center study including CD patients with a clinical indication for small-bowel capsule endoscopy. PillCam® PC examination was performed on all patients to assess small-bowel patency. In all patients with a positive identification of the PC using an RFIT scanner 30 h after ingestion, an abdominal CT was performed to identify its precise location.
Results: Fifty-four patients were included. The PC retention rate, according to evaluation with the RFIT scanner, was 20% (in 11 patients) 30 h after ingestion. These patients were then submitted to abdominal CT, which revealed that there was small-bowel retention in 5 cases (9%). Higher CRP levels, penetrating disease, and a history of abdominal surgery were associated with an increased risk of PC retention (p = 0.007, p = 0.011, and p = 0.033, respectively). On multivariate analysis, there was an independent association between small-bowel PC retention and CRP levels >5 mg/dL (OR = 15.5; p = 0.03).
Discussion: The small-bowel PC retention rate (9%) was considerably lower than those found in previous reports. Our results show that, with this protocol, the false-positive cases of RFIT scans or plain abdominal X-rays may be avoided. This may contribute to more extensive application of capsule endoscopy without the risk of small-bowel retention.
abstract_id: PUBMED:31205659
Optimising the use of small bowel endoscopy: a practical guide. The wireless nature of capsule endoscopy offers patients the least invasive option for small bowel investigation. It is now the first-line test for suspected small bowel bleeding. Furthermore meta-analyses suggest that capsule endoscopy outperforms small bowel imaging for small bowel tumours and is equivalent to CT enterography and magnetic resonance enterography for small bowel Crohn's disease. A positive capsule endoscopy lends a higher diagnostic yield with device-assisted enteroscopy. Device-assisted enteroscopy allows for the application of therapeutics to bleeding points, obtain histology of lesions seen, tattoo lesions for surgical resection or undertake polypectomy. It is however mainly reserved for therapeutics due to its invasive nature. Device-assisted enteroscopy has largely replaced intraoperative enteroscopy. The use of both modalities is discussed in detail for each indication. Current available guidelines are compared to provide a concise review.
abstract_id: PUBMED:32990071
Imaging of the small bowel: a review of current practice. This article summarises radiological imaging of the small bowel, with an emphasis on Crohn's disease. Different imaging techniques are discussed, including the advantages and disadvantages of each modality, and radiological findings for common small bowel pathologies are described, supplemented with pictorial examples.
abstract_id: PUBMED:15677908
Small intestine contrast ultrasonography: an alternative to radiology in the assessment of small bowel disease. Background: Radiology and transabdominal ultrasonography (TUS) are used in the evaluation of the small bowel; however, the former technique is limited by radiation exposure, and the latter by its inability to visualize the entire small bowel.
Aim: To evaluate the diagnostic accuracy of small intestine contrast ultrasonography (SICUS) to assess the presence, number, site, and extension of small bowel lesions.
Subjects And Methods: TUS, SICUS, and small bowel follow-through (SBFT) were performed in 148 consecutive patients (78 women; age range, 12 to 89 yr), 91 with undiagnosed conditions, and 57 with previously diagnosed Crohn's disease (CD).
Results: In the undiagnosed patients, the sensitivity and specificity of TUS and SICUS were 57% and 100%, and 94.3% and 98%, respectively. In the CD patients, the sensitivity of TUS and SICUS was 87.3% and 98%, respectively. In comparison with SBFT, the extension of lesions was correctly assessed with SICUS and greatly underestimated with TUS. The concordance index between SBFT and SICUS for the number and site of lesions was 1 and 1 (P < 0.001), respectively, in undiagnosed patients, and 0.81 and 0.83 (P < 0.001), respectively, in CD patients. Between SBFT and TUS, the concordance index was 0.28 and 0.27 (not significant), respectively, in undiagnosed patients, and 0.28 and 0.31 (not significant), respectively, in CD patients.
Conclusions: The diagnostic accuracy of SICUS is comparable to that of a radiologic examination, and is superior to that of TUS in detecting the presence, number, extension, and sites of small bowel lesions. These findings support the use of noninvasive SICUS for an initial investigation when small bowel disease is suspected and in the follow-up of CD patients.
abstract_id: PUBMED:22844550
Adenocarcinoma of the small bowel in a patient with occlusive Crohn's disease. A 40-year-old male, diagnosed with mild Crohn's disease (CD) 11 years ago but with no prior abdominal surgeries, was diagnosed with a small bowel stricture, due to ongoing abdominal pain and intolerance of enteral diet, and referred for surgical treatment. Exploratory laparoscopy revealed a white solid mass causing a near total jejunal obstruction with significant proximal dilatation. An adjacent small node was sampled for frozen biopsy, revealing a lymph node infiltrated with adenocarcinoma. Laparoscopic assisted small bowel resection and appendectomy were carried out. Final pathological results supported the initial report of diffuse small bowel adenocarcinoma. In conclusion, once a small bowel stricture associated with CD is suspected, rapid action should be considered to avoid late diagnosis of a neoplasia.
abstract_id: PUBMED:23983481
Clinical applications of small bowel capsule endoscopy. Video capsule endoscopy has revolutionized our ability to visualize the entire small bowel mucosa. This modality is established as a valuable tool for the diagnosis of obscure gastrointestinal bleeding, Crohn's disease, small bowel tumors, and other conditions involving the small bowel mucosa. This review includes an overview of the current and potential future clinical applications of small bowel video endoscopy.
abstract_id: PUBMED:27757521
The first joint ESGAR/ ESPR consensus statement on the technical performance of cross-sectional small bowel and colonic imaging. Objectives: To develop guidelines describing a standardised approach to patient preparation and acquisition protocols for magnetic resonance imaging (MRI), computed tomography (CT) and ultrasound (US) of the small bowel and colon, with an emphasis on imaging inflammatory bowel disease.
Methods: An expert consensus committee of 13 members from the European Society of Gastrointestinal and Abdominal Radiology (ESGAR) and European Society of Paediatric Radiology (ESPR) undertook a six-stage modified Delphi process, including a detailed literature review, to create a series of consensus statements concerning patient preparation, imaging hardware and image acquisition protocols.
Results: One hundred and fifty-seven statements were scored for agreement by the panel, of which 129 (82%) achieved immediate consensus, with a further 19 (12%) achieving consensus after appropriate modification. Nine (6%) statements were rejected as consensus could not be reached.
Key Points: • Cross-sectional imaging is increasingly used to evaluate the bowel • Image quality is paramount to achieving high diagnostic accuracy • Guidelines concerning patient preparation and image acquisition protocols are provided.
Answer: No, ileoscopy cannot replace small bowel radiology in the diagnosis and management of Crohn's disease. While ileoscopy is a valuable tool for assessing the terminal ileum, it may not always be possible to perform, and it does not provide a complete assessment of the small bowel. In a study of 50 children with Crohn's disease, ileoscopy was not possible in 18% of cases, and radiology identified small bowel disease proximal to the terminal ileum in 36% of cases, including some where the terminal ileum appeared normal on ileoscopy (PUBMED:8645954). This indicates that normal ileoscopy does not exclude small bowel Crohn's disease, and small bowel radiology remains necessary to assess the full extent of the disease.
Furthermore, other imaging modalities such as capsule endoscopy, computed tomography (CT), and magnetic resonance imaging (MRI) have been shown to be valuable in the evaluation of small bowel Crohn's disease. Capsule endoscopy is considered the first-line test for suspected small bowel bleeding and is equivalent to CT enterography and magnetic resonance enterography for small bowel Crohn's disease (PUBMED:31205659). Additionally, cross-sectional imaging techniques like CT and MRI are increasingly used to evaluate the bowel, and guidelines have been developed to standardize patient preparation and acquisition protocols for these imaging modalities (PUBMED:27757521).
In summary, while ileoscopy is an important diagnostic tool, it cannot replace small bowel radiology, which includes a range of imaging techniques that provide a more comprehensive evaluation of the small bowel in patients with Crohn's disease. |
Instruction: Symptomatic maxillary sinus retention cysts: should they be removed?
Abstracts:
abstract_id: PUBMED:20715088
Symptomatic maxillary sinus retention cysts: should they be removed? Objectives/hypothesis: Recently, endoscopic sinus surgery (ESS) has become the surgical procedure of choice for removing retention cysts from the maxillary sinus. The aim of our study was to determine the relationship between symptomatic relief and ESS with or without endoscopic excision of maxillary cysts.
Study Design: Prospective, randomized study.
Methods: Inclusion criteria were symptomatic maxillary cysts filling at least 50% of the sinus space. We conducted a prospective, randomized study comprising 80 patients. Of the patients, 41 underwent endoscopic ethmoidectomy, middle meatus antrostomy, and excision of the cysts (group A); and 39 underwent ethmoidectomy and antrostomy without cyst detachment (group B). During follow-up an attempt was made to correlate symptomatic failure with type of surgery, computed tomography (CT) score, cyst size, and ratio of cyst size/antral size.
Results: Symptomatic failure occurred in nine cases: four in group A and five in group B. There was no relationship between success rates and type of surgery, CT score, cyst size, or ratio of cyst size/antral size.
Conclusions: Endoscopic ethmoidectomy and middle meatus antrostomy without cyst detachment yielded outcomes similar to those of cyst extirpation through the antrostomy. Treatment should be aimed at restoring ventilation and drainage of the dependent maxillary sinus.
abstract_id: PUBMED:2614239
Symptomatic mucosal cysts of the maxillary sinus: antroscopic treatment. Antroscopy has been shown to have a role in the diagnosis and treatment of antral disease. Four cases of 'non-secreting' benign antral cysts, each presenting with facial pain, were successfully treated using antroscopic techniques. The aetiology and management of these lesions is reviewed with the recommendation that antroscopic removal is the treatment of choice for all symptomatic cases, and for asymptomatic cases in which the diagnosis is in doubt.
abstract_id: PUBMED:35725950
Surgical outcomes between two endoscopic approaches for maxillary cysts. Objective: To compare recurrence rates and symptomatic relief in symptomatic maxillary sinus Retention Cysts (RCs) between Middle Meatus Antrostomy (MMA) alone and Inferior Meatus Antrostomy (IMA) with basal mucosa electrocoagulation.
Methods: Patients with symptomatic unilateral maxillary RCs were randomly allocated to MMA (n=54) and IMA combined with mucosa electrocoagulation (n=53) groups. Symptomatic relief, cyst recurrence, and closure of the antrostomy opening were compared at 12-months postoperatively.
Results: Symptomatic failure occurred in 13 (12.1%) patients, including 9 (16.7%) MMA and 4 (7.5%) IMA patients; this difference was not statistically significant (p=0.251). Postoperative cyst recurrence occurred in 16 (29.7%) and 1 (1.9%) patient in the MMA and IMA groups, respectively (p<0.0001). Closure of the opening occurred in 7 (13.0%) and 17 (32.1%) patients in the MMA and IMA groups, respectively (p=0.032). However, there were no significant pairwise correlations between closure of the opening and symptomatic failure or cyst recurrence.
Conclusion: IMA combined with basal mucosa electrocoagulation and MMA alone provided similar symptomatic relief for symptomatic maxillary RCs, but IMA had shorter operation times and lower postoperative recurrence rates of RCs.
Level Of Evidence: Level 1b.
abstract_id: PUBMED:32621999
Surgical approach of ectopic maxillary third molar avulsion: Systematic review and meta-analysis. Ectopic maxillary third molars (EMTM) are extracted mainly by the Caldwell-Luc technique but also by nasal endoscopy. There is currently no consensus on the treatment of this eruption, and its management is heterogeneous and multidisciplinary. Two literature searches were performed with no time restrictions via PubMed. In the first, we used the keywords "ectopic AND third molar" and in the second the keywords "dentigerous cyst AND ectopic third molar". For both searches, epidemiological, symptomatic, radiological and surgical data were recorded. Overall, 33 eligible articles were identified, involving 39 cases of EMTM. 79% of patients were symptomatic. 87% of the teeth were associated with a dental cyst. In only 13% of cases was the location of the tooth in the sinus specified in the three planes of space. Surgery was performed in 77% of patients by the Caldwell-Luc technique, by nasal endoscopy in 10% and by the Le Fort I approach in 3%. The indications for avulsion of EMTM are symptomatic patients or asymptomatic patients with an associated cyst. The intra-sinusal location of the tooth is not a factor in the choice of technique used, which depends rather on the individual skills of the surgeon. Although for a trained operator the Le Fort I osteotomy is an easy procedure, its value in the treatment of EMTM is limited owing to the rare but potentially severe complications involved.
abstract_id: PUBMED:10864731
Mucus retention cyst of the maxillary sinus: the endoscopic approach. Objective: To present our experience of endoscopic surgery for symptomatic mucus retention cyst of the maxillary sinus.
Design: Retrospective study.
Setting: Teaching hospital, Israel.
Patients: 60 patients with 65 symptomatic cysts of the maxillary sinus who were operated on endoscopically. Only patients with large cysts that filled at least 50% of the sinus space were included.
Intervention: A rigid nasal endoscope was used in all cases; most of the cysts were removed through the natural sinus ostium.
Results: Cysts recurred in only two patients during the first postoperative year. There were no complications from the procedure.
Conclusion: The endoscopic approach to the treatment of maxillary sinus cyst is associated with a low rate of recurrence (3% in this study) and no complications, and we recommend it as the surgical procedure of choice.
abstract_id: PUBMED:8029622
Osteoplastic endonasal approach to the maxillary sinus. A new surgical approach for removal of isolated maxillary sinus pathology, mainly of symptomatic maxillary sinus cysts, is presented. It is based on the principles of osteoplastic sinus surgery and uses the transnasal approach. It allows a safe removal under direct vision or endoscopic control with standard surgical instrumentation. The normal maxillary ostium and healthy ethmoidal cells are not sacrificed and thus the lymphatic pathways as well as the mucociliary transport are not endangered.
abstract_id: PUBMED:30257548
Principal Clinical Factors Predicting Therapeutic Outcomes After Surgical Drainage of Postoperative Cheek Cysts: Experience From a Single Center. Objectives: Postoperative cheek cyst (POCC) is a late postoperative complication of radical maxillary sinus surgery including the Caldwell-Luc (C-L) operation. The present study aimed to evaluate the therapeutic outcomes of surgical treatment for POCC and to assess the clinical factors correlated to these outcomes.
Methods: This study included 57 patients (67 nostrils) diagnosed with POCC who underwent surgical drainage. The medical records of the patients were retrospectively reviewed for radiological findings, treatment modalities, residual symptoms, and recurrences.
Results: In total, 30 patients were male and 27 were female, with a mean age of 55 years; on average, patients were diagnosed with POCC 28.2 years after radical surgery. Endonasal endoscopic marsupialization was performed via inferior meatal antrostomy and, if possible, middle meatal antrostomy was performed at the same time. In patients with cysts that were difficult to reach using an endonasal endoscopic approach, additional open C-L approaches were performed. The median follow-up period was 19.4 months. Overall, adequate drainage and symptomatic relief were achieved in 91% (61/67) of cases. The recurrence rate was significantly higher in patients who had anterolateral POCC. Failure to achieve symptomatic relief was correlated with a smaller cyst and the use of the open C-L approach for drainage.
Conclusion: The location and size of the cyst as well as the use of the open surgical approach were important factors in predicting the therapeutic outcome of POCC. The time point of treatment and surgical approaches should be based on the above-mentioned findings.
abstract_id: PUBMED:3178550
Destructive cysts of the maxillary sinus affecting the orbit. Symptomatic maxillary sinus cysts are diagnosed less frequently than similar cysts of the frontal and ethmoidal sinuses and are rarely reported in the ophthalmic literature. Patients with cysts of the maxillary sinus may present to the ophthalmologist with proptosis, enophthalmos, diplopia, ptosis, epiphora, and, rarely, decreased visual acuity. Four patients with maxillary sinus mucoceles are presented; one of these patients had a concurrent retention cyst in the orbit. Clinical history, radiologic findings, and histopathologic mechanisms are discussed. Mucocele is a recognized complication of the Caldwell-Luc procedure and midface trauma. Blockage of the sinus ostia was the cause previously proposed to explain antral mucocele development. Clinical and histopathologic features may support more than one single mechanism for the pathogenesis of maxillary sinus cysts. Maxillary sinus mucocele or retention cysts should be considered in the differential diagnosis of exophthalmos or enophthalmos following blowout fracture of the orbital floor.
abstract_id: PUBMED:22040674
Pneumosinus dilatans, pneumocoele or air cyst? A case report and literature review. Background: Pathological paranasal sinus expansion secondary to air is uncommon. However, this condition may be symptomatic or cosmetically apparent, requiring surgical intervention. Various terms have been used to describe this condition, and nomenclature is controversial.
Method: An 18-year-old man presented with right facial pain, and was subsequently found to have pneumosinus dilatans of the maxillary sinus. A search was conducted of the PubMed, Medline and Embase databases, using the key words 'pneumosinus dilatans', 'pneumoc(o)ele', 'pneumatoc(o)ele' and 'maxillary sinus'. Articles were also hand-searched. Relevant articles published in English were reviewed.
Results: The literature review identified 36 cases involving the maxillary sinus (including the present case), with 19 cases reported as pneumosinus dilatans, 12 as pneumocoeles, two as pneumatocoeles, two as pneumosinus dilatans multiplex and one as an air cyst. However, based on the strict classification described by Urken et al., the majority of these cases should be reclassified as pneumocoeles.
Conclusion: Whilst the nomenclature of this pathology is confusing, distinctions of terminology do not alter the management. Hence, we support the use of the term 'air cyst', to incorporate all these lesions.
abstract_id: PUBMED:37884395
Surgical ciliated cysts of the mandible: A systematic review of case reports. Background: Surgical ciliated cysts occur mainly in the maxillae after radical maxillary sinus surgery; they rarely develop in the mandible. This study aims to gather information on all the characteristics of patients suffering from mandibular surgical ciliated cysts. This article also reports two new cases.
Methods: PubMed, Google Scholar and the International Clinical Trials Registry Platform were explored until 13 December 2022 for articles regarding mandibular surgical ciliated cysts. Maxillary ciliated cysts were excluded.
Results: Fourteen original articles were included, for a total of 16 cases. Maxillofacial surgeries are the leading aetiology (94.4%). Surgical ciliated cysts of the mandibular region show a 1.25:1 male-to-female ratio with a protracted time to diagnosis (range: 2-56 years). Most patients are symptomatic (77.8%). Typically, radiology shows a radiolucent lesion (88.9%) and histology describes pseudostratified ciliated columnar epithelium. Enucleation has always been described as the treatment.
Discussion: All included studies were case reports, which represent a low level of evidence. Mandibular surgical ciliated cysts should be considered in patients presenting with a mandibular swelling or radiolucent lesion and a history of maxillofacial surgery. Meticulous surgical techniques can aid in the prevention of this lesion.
Answer: The question of whether symptomatic maxillary sinus retention cysts should be removed has been addressed in several studies, whose findings suggest that removal can be beneficial.
A prospective, randomized study comprising 80 patients with symptomatic maxillary cysts filling at least 50% of the sinus space found that endoscopic ethmoidectomy and middle meatus antrostomy without cyst detachment yielded similar outcomes to cyst extirpation through the antrostomy. The study concluded that treatment should aim to restore ventilation and drainage of the dependent maxillary sinus, suggesting that cyst removal is not always necessary for symptomatic relief (PUBMED:20715088).
Another study reported that antroscopic techniques successfully treated four cases of symptomatic 'non-secreting' benign antral cysts presenting with facial pain. The study recommended antroscopic removal as the treatment of choice for all symptomatic cases (PUBMED:2614239).
A comparison of surgical outcomes between Middle Meatus Antrostomy (MMA) alone and Inferior Meatus Antrostomy (IMA) with basal mucosa electrocoagulation for symptomatic maxillary sinus Retention Cysts (RCs) showed that both approaches provided similar symptomatic relief. However, IMA had shorter operation times and lower postoperative recurrence rates of RCs (PUBMED:35725950).
A retrospective study of 60 patients with 65 symptomatic cysts of the maxillary sinus operated on endoscopically reported a low rate of recurrence and no complications, recommending the endoscopic approach as the surgical procedure of choice (PUBMED:10864731).
In summary, the literature suggests that symptomatic maxillary sinus retention cysts can be removed for symptomatic relief, and various endoscopic approaches have been shown to be effective with low recurrence rates and minimal complications. However, the decision to remove such cysts should be individualized based on the patient's symptoms, cyst characteristics, and the surgeon's expertise. |
Instruction: Is there a role for concomitant pelvic floor repair in patients with sphincter defects in the treatment of fecal incontinence?
Abstracts:
abstract_id: PUBMED:16075237
Is there a role for concomitant pelvic floor repair in patients with sphincter defects in the treatment of fecal incontinence? Background And Aims: More than half of all patients who undergo overlapping anal sphincter repair for fecal incontinence develop recurrent symptoms. Many have associated pelvic floor disorders that are not surgically addressed during sphincter repair. We evaluate the outcomes of combined overlapping anal sphincteroplasty and pelvic floor repair (PFR) vs. anterior sphincteroplasty alone in patients with concomitant sphincter and pelvic floor defects.
Patients And Methods: We reviewed all patients with concomitant defects who underwent surgery between February 1998 and August 2001. Patients were assessed preoperatively by anorectal manometry, pudendal nerve terminal motor latency, and endoanal ultrasound. The degree of continence was assessed both preoperatively and postoperatively using the Cleveland Clinic Florida fecal incontinence score. Postoperative success was defined as a score of ≤5, whereas postoperative quality of life was assessed by a standardized questionnaire.
Results: Twenty-eight patients (mean age 52.3 years) underwent overlapping anal sphincteroplasty. The mean follow-up was 33.8 months. Cleveland Clinic Florida scores postoperatively showed a significant improvement from preoperative values (14.2 vs 5.1, p<0.001). Seventeen patients (61%) underwent concomitant PFR with sphincteroplasty. Three patients (27%) without PFR and one patient (6%) with PFR underwent repeat sphincter repair due to worsening symptoms (p=0.15). Two patients with PFR and one patient without PFR ultimately had an ostomy due to a failed repair (p=0.66). Comparing patients with and without PFR, there was a trend toward higher success rates (71% vs. 45%) when pelvic prolapse issues were addressed during sphincter repair.
Conclusion: Concomitant evaluation and repair of pelvic floor prolapse may be a clinically significant component of a successful anal sphincteroplasty for fecal incontinence but warrant further prospective evaluation.
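The abstract reports the 71% vs. 45% success-rate difference only as a trend, without an accompanying p-value. As a quick illustration of why such a gap can still fall short of significance in a 28-patient series, the sketch below runs Fisher's exact test on counts reconstructed from the reported group sizes and percentages (12/17 successes with PFR, 5/11 without); the counts are an assumption, not source data.

```python
# Hypothetical significance check of the PFR vs. no-PFR success comparison.
# Counts are reconstructed from the abstract's percentages (assumption):
#   with PFR:    12 successes, 5 failures  (71%)
#   without PFR:  5 successes, 6 failures  (45%)
from scipy.stats import fisher_exact

table = [[12, 5],
         [5, 6]]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```

With only 28 patients, even a 26-percentage-point difference does not reach conventional significance, which is consistent with the authors' call for further prospective evaluation.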
abstract_id: PUBMED:37350333
Sonographic postpartum anal sphincter defects and the association with pelvic floor pain and dyspareunia. Introduction: Pelvic floor pain and dyspareunia are both important entities of postpartum pelvic pain, often concomitant and associated with perineal tears during vaginal delivery. The association between postpartum sonographic anal sphincter defects, pelvic floor pain, and dyspareunia has not been fully established. We aimed to determine the prevalence of postpartum anal sphincter defects using three-dimensional endoanal ultrasonography (3D-EAUS) and evaluate their association with symptoms of pelvic floor pain and dyspareunia.
Material And Methods: This prospective cohort study followed 239 primiparas from birth to 12 months post delivery. Anal sphincters were assessed with 3D-EAUS 3 months postpartum, and self-reported pelvic floor function data were obtained using a web-based questionnaire distributed 1 year after delivery. Descriptive statistics were compared between the patients with and without sonographic defects, and the association between sonographic sphincter defects and outcomes were analyzed using logistic regression.
Results: At 3 months postpartum, 48/239 (20%) patients had anal sphincter defects on 3D-EAUS, of which 43 (18%) were not clinically diagnosed with obstetric anal sphincter injury at the time of delivery. Patients with sonographic defects had higher fetal weight than those without defects, and a perineal height <2 cm before suturing was a risk factor for defects (odds ratio [OR], 6.9). Patients with sonographic defects had a higher frequency of dyspareunia (OR, 2.4) and pelvic floor pain (OR, 2.3) than those without defects.
Conclusions: Our results suggest an association between postpartum sonographic anal sphincter defects, pelvic floor pain, and dyspareunia. A perineal height <2 cm, measured by bidigital palpation immediately postdelivery, was a risk factor for a sonographic anal sphincter defect. We suggest offering pelvic floor sonography around 3 months postpartum to high-risk women to optimize the diagnosis and treatment of perineal tears, and including a perineal height <2 cm prior to primary repair as a proposed indication for postpartum follow-up sonography.
abstract_id: PUBMED:7813338
Randomized trial of internal anal sphincter plication with pelvic floor repair for neuropathic fecal incontinence. Purpose: This study was designed to examine the role of adjuvant internal anal sphincter plication in women with neuropathic fecal incontinence undergoing pelvic floor repair.
Methods: We completed a randomized trial with symptomatic and physiologic assessment before and after surgery.
Results: There was no symptomatic advantage of adding internal sphincter plication; the mean improvement in functional score was 3.61 +/- 1.82 (standard deviation; P < 0.01) following pelvic floor repair alone compared with 2.80 +/- 1.66 (standard deviation; P < 0.01) when adjuvant internal anal sphincter plication was added. The addition of internal sphincter plication was associated with a significant fall in maximum anal resting and squeezing pressures (P < 0.01).
Conclusions: Addition of internal sphincter plication is not advised in women with neuropathic fecal incontinence treated by pelvic floor repair.
abstract_id: PUBMED:18415899
External anal sphincter repair using the overlapping technique in patients with anal incontinence and concomitant pudendal nerve damage. Background: No single surgical technique has so far emerged as the optimal approach to treat defects of the anal sphincter in patients with postpartum fecal incontinence. Our approach is to repair the external sphincter using the overlapping technique to optimize morphological and clinical outcome. The results were correlated with preoperatively determined pudendal nerve function.
Methods: Thirty-five patients were followed up for three years after repair of the external anal sphincter. The patients had grade 2 (n = 29) or grade 3 (n = 6) fecal incontinence. Nineteen (54%) patients had a concomitant defect of the internal anal sphincter and 28 (80%) had abnormal pelvic floor EMG findings. Before surgery, all patients underwent conservative treatment with biofeedback and electrostimulation. The muscle ends were overlapped with Vicryl 4-0 sutures. A standardized protocol was used for perioperative management in all patients.
Results: Of the 35 patients who underwent overlapping repair of the external anal sphincter, 32 (91%) had a satisfactory result at 3-year follow-up based on sonomorphological criteria. These 32 patients were continent for solid and liquid stools. Six of the 35 patients (17%) continued to have flatus incontinence. Two (6%) patients were improved and one patient (3%) had unchanged incontinence. Pudendal nerve damage had no effect on the outcome of surgery.
Conclusions: Our findings at 3-year follow-up show good results for the overlapping repair of the external anal sphincter in terms of morphology and clinical symptoms. This outcome depends on adequate preoperative pelvic floor conditioning, optimal perioperative management, and use of a standardized operative technique. Surgical repair of the morphological defect is recommended even in patients with pudendal nerve damage.
abstract_id: PUBMED:11960226
Imaging of the posterior pelvic floor. Disorders of the posterior pelvic floor are relatively common. The role of imaging in this field is increasing, especially in constipation, prolapse and anal incontinence, and currently imaging is an integral part of the investigation of these pelvic floor disorders. Evacuation proctography provides both structural and functional information for rectal voiding and prolapse. Dynamic MRI may be a valuable alternative as the pelvic floor muscles are visualised, and it is currently under evaluation. Endoluminal imaging is important in the management of anal incontinence. Both endosonography and endoanal MRI can be used for detection of anal sphincter defects. Endoanal MRI has the advantage of simultaneously evaluating external sphincter atrophy, which is an important predictive factor for the outcome of sphincter repair. Many aspects of constipation and prolapse remain incompletely understood and treatment is partly empirical; however, imaging has a central role in management to place patients into treatment-defined groups.
abstract_id: PUBMED:31813034
Anal sphincter imaging: better done at rest or on pelvic floor muscle contraction? Introduction And Hypothesis: Exo-anal ultrasound imaging of the anal sphincter is usually undertaken on pelvic floor muscle contraction (PFMC) as this seems to enhance tissue discrimination. Some women are unable to achieve a satisfactory PFMC, and in this situation, the sphincter is assessed at rest. We aimed to determine whether sphincter imaging at rest is inferior to imaging on PFMC.
Methods: We analysed 441 women in this retrospective study. All underwent a standardised interview, including St Mark's incontinence score, clinical examination and 4D trans-labial ultrasound (TLUS). On analysing volume data, tomographic imaging was used to obtain a standardised set of slices at rest and on PFMC to evaluate external anal sphincter (EAS) and internal anal sphincter (IAS) trauma as described previously.
Results: When assessments obtained from volumes acquired at rest and on PFMC were tested against measures of anal incontinence (AI), all associations between the diagnosis of significant anal sphincter defects and AI were no stronger when imaging was performed on PFMC. On cross-tabulation, the percentage agreement for significant defects of the EAS and IAS at rest and on PFMC was 96.5% and 98.9% respectively, if discrepancy by one slice was allowed.
Conclusions: Exo-anal tomographic imaging of sphincter defects at rest seems sufficiently valid for clinical use and may not be inferior to sphincter assessment on pelvic floor muscle contraction.
abstract_id: PUBMED:20087089
Anterior sphincteroplasty for fecal incontinence: is the outcome compromised in patients with associated pelvic floor injury? Introduction: It has been shown that vaginal delivery may result not only in sphincter defects, but also in pelvic floor injury. However, the influence of this type of injury on the etiology of fecal incontinence and its treatment is unknown. The present study aimed to assess the prevalence of pelvic floor injury in patients who underwent anterior sphincteroplasty for the treatment of fecal incontinence and to determine the impact of this type of injury on the outcome of the procedure.
Methods: Women who underwent anterior sphincteroplasty in the past were invited to participate in the present study. With transperineal ultrasound, which has been developed recently, pelvic floor integrity was examined in 70 of 117 patients (60%). Follow-up was obtained from a standardized questionnaire.
Results: The median time period between anterior sphincteroplasty and the current assessment was 106 (range, 15-211) months. Pelvic floor injury was diagnosed in 43 patients (61%). Despite the prior sphincteroplasty, an external anal sphincter defect was found in 20 patients (29%). Outcome did not differ between patients with and without pelvic floor injury, or between patients with and without an adequate repair. However, patients with an adequate repair and an intact pelvic floor did have a better outcome than patients with one or both abnormalities.
Conclusion: The majority of female patients with incontinence who were eligible for anterior sphincteroplasty have concomitant pelvic floor injury. Based on the present study, it seems unlikely that this type of injury itself has an impact on the outcome of anterior sphincteroplasty.
abstract_id: PUBMED:33536850
Biofeedback for Pelvic Floor Disorders. Defecatory disorders can include structural, neurological, and functional disorders in addition to concomitant symptoms of fecal incontinence, functional anorectal pain, and pelvic floor dyssynergia. These disorders greatly affect quality of life and healthcare costs. Treatment for pelvic floor disorders can include medications, botulinum toxin, surgery, physical therapy, and biofeedback. Pelvic floor muscle training for pelvic floor disorders aims to enhance strength, speed, and/or endurance or coordination of voluntary anal sphincter and pelvic floor muscle contractions. Biofeedback therapy builds on physical therapy by incorporating the use of equipment to record or amplify activities of the body and feed the information back to the patients. Biofeedback has demonstrated efficacy in the treatment of chronic constipation with dyssynergic defecation, fecal incontinence, and low anterior resection syndrome. Evidence for the use of biofeedback in levator ani syndrome is conflicting. In comparing biofeedback to pelvic floor muscle training alone, studies suggest that biofeedback is superior therapy.
abstract_id: PUBMED:27613623
Residual defects after repair of obstetric anal sphincter injuries and pelvic floor muscle strength are related to anal incontinence symptoms. Introduction And Hypothesis: The aim was to analyze the effect of residual anal sphincter (AS) defects and pelvic floor muscle (PFM) strength on anal incontinence (AI) in patients with a history of obstetric AS injuries (OASIS).
Methods: From September 2012 to February 2015, an observational study was conducted on a cohort of females who underwent repair of OASIS intrapartum. The degree of OASIS was scored intrapartum according to Sultan's classification. Participants were assessed at 6 months postpartum. Incontinence symptoms were evaluated using Wexner's score and PFM strength using the Modified Oxford Scale (MOS). 3D-endoanal ultrasound was performed to classify AS defects according to Starck's system. Correlation between Sultan's and Starck's classifications was calculated using Cohen's kappa and Spearman's rho (Rs) test. The impact of residual AS defects and PFM strength on AI was analyzed using a multiple regression model.
Results: A total of 95 women were included in the study. Good correlation (κ = 0.72) was found between Sultan's and Starck's classifications. Significant positive correlation was observed between Wexner's score and both Sultan's (p = 0.023, Rs = 0.212) and Starck's (p < 0.001, Rs = 0.777) scores. The extent of the residual AS defect was the most relevant factor correlating with AI symptoms. In patients with severe AS injuries, higher MOS values were associated with lower Wexner's scores.
Conclusions: The degree of AS tear measured intrapartum was the most important factor related to AI after primary repair of OASIS. PFM strength was associated with lower incontinence symptoms in the postpartum period.
abstract_id: PUBMED:24502361
Long-term function and morphology of the anal sphincters and the pelvic floor after primary repair of obstetric anal sphincter injury. Aim: More than 50% of women experience deteriorating continence over time following primary repair of obstetric anal sphincter injuries. The objectives of this study were to assess the function and morphology of the anal sphincters and pelvic floor in women with long-term faecal incontinence after sphincter repair (primary end-point) and to evaluate their correlation with severity of incontinence (secondary end-point).
Method: The participants in this prospective study were recruited from a cohort of all women who underwent reconstruction of a third- or fourth-degree obstetric sphincter injury (cases) from January 1976 to November 1991. The women who delivered immediately before and after each case, without sustaining obstetric damage, were included as controls. Cases and controls were stratified into three categories: (i) continent; (ii) minor incontinence; and (iii) severe incontinence. The function and morphology of the anal sphincters and pelvic floor were evaluated by MRI, three-dimensional endoanal ultrasonography and anorectal physiology tests.
Results: Fifty-nine women (29 cases/30 controls; mean age 51/53 years; mean follow-up 23.7/24.1 years, respectively) were assessed. Morphologically, cases had a significantly shorter anterior external anal sphincter length compared with controls when evaluated by three-dimensional endoanal ultrasonography (8.6 vs 10.2 mm; P = 0.03). Functionally, cases with severe incontinence had a significantly shorter anterior sphincter length compared with cases with minor incontinence (7.7 vs 10.4 mm; P = 0.04). No correlation could be found between anal pressures and severity of incontinence in the case group.
Conclusions: Cases had a significantly shorter anterior external anal sphincter length. Functionally, anterior sphincter length correlated with increased severity of incontinence.
Answer: Yes, there appears to be a role for concomitant pelvic floor repair (PFR) in patients with sphincter defects in the treatment of fecal incontinence. A study evaluating the outcomes of combined overlapping anal sphincteroplasty and PFR versus anterior sphincteroplasty alone in patients with concomitant sphincter and pelvic floor defects found that addressing pelvic prolapse issues during sphincter repair showed a trend toward higher success rates (71% vs. 45%) (PUBMED:16075237). This suggests that concomitant evaluation and repair of pelvic floor prolapse may be a clinically significant component of successful anal sphincteroplasty for fecal incontinence, although further prospective evaluation is warranted.
Additionally, another study indicated that the majority of female patients with incontinence who were eligible for anterior sphincteroplasty have concomitant pelvic floor injury. However, the study concluded that it seems unlikely that this type of injury itself has an impact on the outcome of anterior sphincteroplasty (PUBMED:20087089). This suggests that while pelvic floor injury is common, its presence does not necessarily compromise the outcome of sphincteroplasty for fecal incontinence.
Moreover, a randomized trial found no symptomatic advantage of adding internal sphincter plication in women with neuropathic fecal incontinence undergoing pelvic floor repair (PUBMED:7813338). This indicates that the addition of certain types of pelvic floor repair may not always be beneficial and should be considered on a case-by-case basis.
In summary, while there is evidence to support the role of concomitant pelvic floor repair in the treatment of fecal incontinence in patients with sphincter defects, the specific type of pelvic floor repair and its impact on treatment outcomes may vary. Further research is needed to fully establish the benefits and indications for concomitant pelvic floor repair in this patient population. |
Instruction: The recent reversal of the growth trend in MRI: a harbinger of the future?
Abstracts:
abstract_id: PUBMED:38298651
Enhancing stock market trend reversal prediction using feature-enriched neural networks. According to several previous studies, neural network-based stock price predictors perform better for plunging patterns of stock prices than for normal stock price patterns. Focusing on this issue, this study proposes a novel method that uses a neural network-based stock price predictor to predict the upward trend-reversal of the plunging market itself. To achieve more consistent prediction results for plunging patterns, newly designed input features are added to improve the performance of traditionally used neural network-based predictors. The statistics of the prediction scores for past plunging markets are analyzed, and the results are used to predict the upward trend-reversal in the plunging market that occurred during the test period. We demonstrate the superiority of the proposed method through the simulation results of 3-year trading on KOSDAQ, a representative stock market in South Korea.
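As a rough illustration of the pipeline described above — engineer plunge-sensitive input features, train a neural predictor of next-day returns, then flag upward reversals from its prediction scores — here is a minimal, self-contained sketch. The feature definitions, network size, drawdown threshold, and synthetic price series are all illustrative assumptions; the paper's actual features and architecture are not reproduced.

```python
# Illustrative trend-reversal scorer (not the paper's method).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def make_features(prices, window=10):
    """Per-day features: last return, drawdown from the rolling max, volatility."""
    feats, targets = [], []
    for t in range(window, len(prices) - 1):
        w = prices[t - window:t + 1]
        ret = w[-1] / w[-2] - 1.0                 # most recent daily return
        drawdown = w[-1] / w.max() - 1.0          # depth of the local plunge
        vol = np.std(np.diff(w) / w[:-1])         # local volatility
        feats.append([ret, drawdown, vol])
        targets.append(prices[t + 1] / prices[t] - 1.0)  # next-day return
    return np.array(feats), np.array(targets)

prices = np.cumprod(1 + rng.normal(0.0003, 0.02, 1500))  # synthetic series
X, y = make_features(prices)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:-250], y[:-250])                    # hold out the last ~year

scores = model.predict(X[-250:])
# Flag a candidate upward trend-reversal only in a plunging regime
# (drawdown below -10%) when the predicted next-day return turns positive.
plunging = X[-250:, 1] < -0.10
reversal_days = np.where(plunging & (scores > 0))[0]
print(f"{reversal_days.size} candidate reversal days in the test window")
```

The design point mirrored from the abstract is that reversal detection is restricted to the plunging regime rather than applied indiscriminately to the whole series.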
abstract_id: PUBMED:23833515
Recent trends and future of pharmaceutical packaging technology. The pharmaceutical packaging market is constantly advancing and has experienced annual growth of at least five percent per annum in the past few years. The market is now reckoned to be worth over $20 billion a year. As with most other packaged goods, pharmaceuticals need reliable and speedy packaging solutions that deliver a combination of product protection, quality, tamper evidence, patient comfort and security. Constant innovations in the pharmaceuticals themselves, such as blow fill seal (BFS) vials, anti-counterfeit measures, plasma impulse chemical vapor deposition (PICVD) coating technology, snap-off ampoules, unit dose vials, two-in-one prefilled vial designs, prefilled syringes and child-resistant packs, have a direct impact on the packaging. The review details several of the recent pharmaceutical packaging trends that are impacting the packaging industry, and offers some predictions for the future.
abstract_id: PUBMED:30103391
Future Trend Forecast by Empirical Wavelet Transform and Autoregressive Moving Average. In engineering and technical fields, a large number of sensors are applied to monitor a complex system. A special class of signals is often captured by these sensors. Although the signals often have indirect or indistinct relationships among them, they simultaneously reflect the operating states of the whole system. Using these signals, field engineers can evaluate the operational states, and even predict future behaviors, of the monitored system. A novel method for forecasting the future operational trend of a complex system is proposed in this paper. It is based on empirical wavelet transform (EWT) and autoregressive moving average (ARMA) techniques. Firstly, empirical wavelet transform is used to extract the significant mode from each recorded signal, which reflects one aspect of the operating system. Secondly, the system states are represented by the indicator function obtained from the normalized and weighted significant modes. Finally, the future trend is forecast by the parametric ARMA model. The effectiveness and practicality of the proposed method are verified by a set of numerical experiments.
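A compressed sketch of this three-step pipeline follows. Because EWT is not part of the standard scientific-Python stack, a simple moving-average mode extraction stands in for it, and the ARMA(2, 1) order is an illustrative assumption; only the extract-normalize-forecast structure mirrors the abstract.

```python
# Stand-in for the EWT+ARMA trend forecast (illustrative, single signal).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
t = np.arange(400)
# Synthetic sensor signal: slow operating trend, oscillation, and noise.
signal = 0.01 * t + np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.3, t.size)

# Step 1 (stand-in for EWT): extract the significant low-frequency mode.
window = 25
mode = np.convolve(signal, np.ones(window) / window, mode="valid")

# Step 2: normalize the mode to form the state indicator.
indicator = (mode - mode.mean()) / mode.std()

# Step 3: fit ARMA(2, 1) -- ARIMA with d = 0 -- and forecast the trend.
result = ARIMA(indicator, order=(2, 0, 1)).fit()
forecast = result.forecast(steps=30)
print("first 5 forecast values:", np.round(forecast[:5], 3))
```

In the method as described, the mode would come from an EWT of each sensor signal, and the indicator would combine the normalized, weighted modes of all sensors before ARMA fitting.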
abstract_id: PUBMED:32317392
Soil Microbial Biogeography in a Changing World: Recent Advances and Future Perspectives. Soil microbial communities are fundamental to maintaining key soil processes associated with litter decomposition, nutrient cycling, and plant productivity and are thus integral to human well-being. Recent technological advances have exponentially increased our knowledge concerning the global ecological distributions of microbial communities across space and time and have provided evidence for their contribution to ecosystem functions. However, major knowledge gaps in soil biogeography remain to be addressed over the coming years as technology and research questions continue to evolve. In this minireview, we describe recent advances and future directions in the study of soil microbial biogeography and discuss the need for a clearer concept of microbial species, projections of soil microbial distributions toward future global change scenarios, and the importance of embracing culture and isolation approaches to determine microbial functional profiles. This knowledge will be critical to better predict ecosystem functions in a changing world.
abstract_id: PUBMED:25452342
Regulatory focus affects predictions of the future. This research investigated how regulatory focus might influence trend-reversal predictions. We hypothesized that compared with promotion focus, prevention focus hinders sense of control, which in turn predicts more trend-reversal developments. Studies 1 and 3 revealed that participants expected trend-reversal developments to be more likely to occur when they focused on prevention than when they focused on promotion. Study 2 extended the findings by including a control condition, and revealed that participants expected trend-reversal developments to be more likely to occur in the prevention condition than in the promotion and control conditions. Studies 4 and 5 revealed that participants' chronic prevention focus predicted a low sense of control (Study 4), and that promotion focus predicted a high sense of control (Studies 4 and 5). Furthermore, participants with a high sense of control expected trend-reversal developments to be less likely to occur. Thus, the results provided converging evidence for the hypothesis.
abstract_id: PUBMED:28725528
Battery-Supercapacitor Hybrid Devices: Recent Progress and Future Prospects. The design and fabrication of electrochemical energy storage systems with both high energy and power densities as well as long cycling life are of great importance. As one of these systems, the battery-supercapacitor hybrid device (BSH) is typically constructed with a high-capacity battery-type electrode and a high-rate capacitive electrode, and it has attracted enormous attention due to its potential applications in future electric vehicles, smart electric grids, and even miniaturized electronic/optoelectronic devices. With proper design, BSHs can provide unique advantages such as high performance, low cost, safety, and environmental friendliness. This review first addresses the fundamental scientific principles, structures, and possible classifications of BSHs, and then reviews recent advances in various existing and emerging BSHs such as Li-/Na-ion BSHs, acidic/alkaline BSHs, BSHs with redox electrolytes, and BSHs with pseudocapacitive electrodes, with a focus on materials and electrochemical performance. Furthermore, recent progress in BSH devices with specific functionalities such as flexibility and transparency is highlighted. Finally, future development trends and directions as well as challenges are discussed; in particular, two conceptual BSHs with an aqueous high-voltage window and an integrated 3D electrode/electrolyte architecture are proposed.
abstract_id: PUBMED:37596564
Key drivers of reversal of trend in childhood anaemia in India: evidence from Indian demographic and health surveys, 2016-21. Aim: Recent National Family Health Survey results portray striking improvements in most population and health indicators, including fertility, family planning, maternal and child health, gender treatment, household environments, and health insurance coverage under the Pradhan Mantri Jan Arogya Yojana (PM-JAY), with all-India resonance. However, the prevalence of any anaemia (< 11 g/dl) among children under age five has exhibited a reversed trajectory in recent years. Therefore, the present study explores key drivers of the reversal of the trend in the prevalence of childhood anaemia between 2015 and 2021.
Methods: Data from four rounds of the National Family Health Survey (NFHS) were used to show the overall trend of anaemia among children. However, for the analysis of key drivers of the reversal of the trend in childhood anaemia, only the two most recent rounds (NFHS-4 and NFHS-5) were used. Descriptive, bivariate, and multivariable analyses and the Fairlie decomposition model were used to explore the drivers of the reversal of the trend in childhood anaemia.
Results: During the past two decades, India saw a decline in the prevalence of childhood anaemia (NFHS-2 to NFHS-4). However, a reversal of this trend was observed recently. The prevalence of anaemia among children aged 6-59 months increased from 59 percent in NFHS-4 to 67 percent in NFHS-5. In addition, the prevalence of mild anaemia increased from 23.3 percent in NFHS-2 to 28.7 percent in NFHS-5. The prevalence of moderate and severe anaemia declined considerably from NFHS-2 (40 percent and 4.1 percent) to NFHS-4 (28.7 percent and 1.6 percent), but increased again in NFHS-5 (36.3 percent and 2.2 percent). Among other factors, mothers' educational attainment, anaemia status and socio-economic status emerge as the key drivers of the change in the prevalence of childhood anaemia.
Conclusion: These findings may have vital implications for the ongoing Anaemia Mukt Bharat Programme, one of the government's dream projects in India.
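For context on the decomposition named in the methods above: the Fairlie approach extends the Oaxaca-Blinder decomposition to nonlinear (e.g., logit) models, splitting the gap in a binary outcome — here, anaemia prevalence between survey rounds — into a part explained by covariates and an unexplained part. A standard statement of the two-group form, sketched from the general methods literature rather than from this paper, is:

\[
\bar{Y}^{A}-\bar{Y}^{B}
=\underbrace{\Bigl[\tfrac{1}{N^{A}}\sum_{i=1}^{N^{A}}F\bigl(X_{i}^{A}\hat{\beta}^{A}\bigr)-\tfrac{1}{N^{B}}\sum_{i=1}^{N^{B}}F\bigl(X_{i}^{B}\hat{\beta}^{A}\bigr)\Bigr]}_{\text{explained by covariate differences}}
+\underbrace{\Bigl[\tfrac{1}{N^{B}}\sum_{i=1}^{N^{B}}F\bigl(X_{i}^{B}\hat{\beta}^{A}\bigr)-\tfrac{1}{N^{B}}\sum_{i=1}^{N^{B}}F\bigl(X_{i}^{B}\hat{\beta}^{B}\bigr)\Bigr]}_{\text{unexplained}}
\]

where F is the logistic cumulative distribution function, X_i the covariate vector (e.g., mother's education, anaemia status, socio-economic status), and the beta-hats are the coefficients fitted within each group/round.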
abstract_id: PUBMED:23542023
The recent reversal of the growth trend in MRI: a harbinger of the future? Purpose: Diagnostic imaging services have been repeatedly targeted as a source of excess health care expenditure. In particular, MRI is considered a high-tech and high-cost imaging service that saw rapid increases in utilization in the early 2000s. However, the most recent trends in the utilization of MR are not known. The aim of this study was to quantify trends in MR utilization overall and by body system from 1998 to 2010 in the Medicare population.
Methods: Medicare Part B data sets were obtained for 1998 to 2010 for all MR examinations performed in the Medicare population. Using Current Procedural Terminology codes, the total volume and utilization rates of all MR examinations were tabulated for each year of the study period. MR volume was then categorized by body system.
Results: The utilization rate of MR examinations in the Medicare population was 73 per 1,000 beneficiaries in 1998, increased to a peak of 189 in 2008, and decreased to 183 in 2010. The compound annual growth rate from 1998 to 2008 was 10%. The utilization rate in 2010 represents a decrease of 3.1% from the 2009 utilization rate. The most frequently imaged body section in every year was the head, which accounted for 2,404,250 examinations in 2010, 37.3% of all MR examinations in that year.
Conclusions: The overall MRI utilization rate sharply increased from 1998 until 2008 but then decreased in each of the next 2 years. A similar trend was noted for MR examinations performed in most body sections. These trends are likely to be the result of a number of possible causative factors.
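The 10% compound annual growth rate quoted in the results can be verified directly from the reported utilization rates — a worked check using only figures given in the abstract:

\[
\text{CAGR}_{1998\to 2008}=\left(\frac{189}{73}\right)^{1/10}-1\approx 0.10=10\%.
\]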
abstract_id: PUBMED:27457672
Impact of future urban growth on regional climate changes in the Seoul Metropolitan Area, Korea. The influence of changes in future urban growth (e.g., land use changes) on the future climate variability in the Seoul metropolitan area (SMA), Korea was evaluated using the WRF model and an urban growth model (SLEUTH). The land use changes in the study area were simulated using the SLEUTH model under three different urban growth scenarios: (1) current development trends scenario (SC 1), (2) managed development scenario (SC 2) and (3) ecological development scenario (SC 3). The maximum difference in the ratio of urban growth between SC 1 and SC 3 (SC 1 - SC 3) over 50 years (2000-2050) was approximately 6.72%, leading to the largest differences (0.01°C and 0.03 m s(-1), respectively) in the mean air temperature at 2 m (T2) and wind speed at 10 m (WS10). From WRF-SLEUTH modeling, the effects of future urban growth (or future land use changes) in the SMA are expected to result in increases in the spatial mean T2 and WS10 of up to 1.15°C and 0.03 m s(-1), respectively, possibly due to thermal circulation caused by the thermal differences between urban and rural regions.
abstract_id: PUBMED:32255948
Trend Prediction for Cesarean Deliveries Based on Robson Classification System at a Tertiary Referral Unit of North India. Background: World Health Organization proposed use of Robson Classification as a global standard for assessing, maintaining and comparing Cesarean section (CS) rates. This paper aimed to examine CS trend at a tertiary center according to Robson Ten-Group Classification System (TGCS) over three-year period (2015-2017) and to predict future Cesarean trends.
Methods: This prospective observational study was conducted at a tertiary teaching institute and included 81,784 females who delivered at this hospital over a three-year period (2015-2017). The data compilation was done according to Robson TGCS. The main outcome measures were overall annual CS rates, Robson group-wise CS rates, and future overall and Robson group-wise CS trends. These parameters were calculated, trend analysis was done, and the trend over the coming 3 years was predicted.
Results: There were 81,784 deliveries (62,336 vaginal and 19,448 Cesarean deliveries) over the study period. The year-wise CS rate was 22.4%, 23.5% and 25.5%, respectively. The largest contribution was by group 5, followed by group 2 and group 1. Based on the 3-year data, it was predicted that the CS rate would increase by 0.905% annually over the coming 3 years. In groups 3, 4, 6, 7 and 8, the predicted trend value showed an annual increase of 0.65%, 0.05%, 0.05%, 0.05% and 0.10%, respectively; in groups 1, 2, 5, 9 and 10, it showed an annual decrease of 0.45%, 0.05%, 1.50%, 0.50% and 0.05%, respectively.
Conclusion: Increasing CS rate trend was seen over last 3 years with a predicted rise of 0.905% per year. Robson groups 5, 2 and 1 were at present major contributors; however, the trend analysis predicted a decreasing trend. Trend analysis predicted annual increment in groups 3, 4, 6, 7 and 8 over next 3 years, thereby suggesting need to focus on these groups as well.
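Taking the reported annual increment at face value, the predicted overall trajectory is a simple linear extrapolation from the 2017 rate (a worked illustration assuming the linear trend model implied by the abstract):

\[
\text{CS rate}_{2017+k}\approx 25.5\%+0.905\%\times k,\qquad\text{e.g., CS rate}_{2020}\approx 25.5\%+3\times 0.905\%\approx 28.2\%.
\]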
Answer: The recent reversal of the growth trend in MRI utilization, as observed in the Medicare population from 1998 to 2010, may indeed be indicative of future trends in the use of this diagnostic imaging service. The study found that after a period of rapid increases in MRI utilization, peaking in 2008, there was a subsequent decrease in the following two years. The overall MRI utilization rate increased from 73 per 1,000 beneficiaries in 1998 to 189 in 2008, before decreasing to 183 in 2010. This represents a compound annual growth rate of 10% from 1998 to 2008, followed by a decrease of 3.1% from the 2009 utilization rate. The most frequently imaged body section was the head, accounting for 37.3% of all MR examinations in 2010. The observed trends suggest that a variety of factors may be contributing to this shift in MRI utilization, and these trends could continue into the future, potentially affecting the demand and provision of MRI services (PUBMED:23542023). |
Instruction: Two-year follow-up of splenic radiofrequency ablation in patients with cirrhotic hypersplenism: does increased hepatic arterial flow induce liver regeneration?
Abstracts:
abstract_id: PUBMED:18374048
Two-year follow-up of splenic radiofrequency ablation in patients with cirrhotic hypersplenism: does increased hepatic arterial flow induce liver regeneration? Background: Hepatocyte hypoxia may be a mechanism determining abnormal tissue oxygenation and dysfunction of the cirrhotic liver. Since the introduction of radiofrequency ablation (RFA) for patients with cirrhotic hypersplenism, we observed a phenomenon of visible hepatic regeneration. This study aims to investigate the potential mechanism of RFA-induced liver regeneration, and the 2-year outcomes of splenic RFA.
Methods: Forty patients who underwent splenic RFA for cirrhotic hypersplenism were followed for 24 months. Before and after RFA procedures, portal hemodynamics and liver and spleen volumes were measured by Doppler ultrasonography and computed tomography volumetry. Liver function tests and blood counts were also determined.
Results: The splenic and portal venous flows decreased, but hepatic arterial flow (HAF) increased dramatically after the RFA procedure. Liver volumes at 3 months post-RFA increased compared to the baseline volumes (872 +/- 107 vs. 821 +/- 99 cm(3), P = .031). A correlation was found between the maximum absolute changes in liver volume (Δliver volume) and in HAF (ΔHAF) in Child-Pugh class A/B patients (r = 0.60; P < .001). Leukocyte and platelet counts, as well as liver function, improved substantially during the 2-year follow-up. Patients with ≥40% of spleen volume ablated had better improvement of thrombocytopenia. No death or severe complications occurred.
Conclusions: RFA for cirrhotic hypersplenism is safe and efficacious. The increase in HAF as a result of splenic RFA may improve liver function and induce liver regeneration in cirrhotics, but further studies are necessary to clarify the underlying mechanisms.
abstract_id: PUBMED:28145148
Preliminary experimental study on splenic hemodynamics of radiofrequency ablation for the spleen. Purpose: To test the splenic blood flow change after radiofrequency ablation (RFA) of the spleen in a porcine experimental model.
Material And Methods: Six pigs underwent RFA of the spleen via laparotomy. During the RFA procedure, clamping of the splenic artery (in one pig) and of both the splenic artery and vein (in one pig) was also performed. Blood flow in both the splenic artery (SA) and the splenic vein (SV) was measured with a flow-wire before and after RFA of the spleen.
Results: Ablated splenic lesions estimated to cover ∼50% of the splenic area were created in all pigs. Resected specimens revealed not only coagulation necrosis but also congestion of the spleen. For SA hemodynamics, maximum peak velocity (MPV) changed from 37 ± 7 to 24 ± 8 cm/s (normal), from 11 to 10 cm/s (clamping of the SA), and from 12 to 7.5 cm/s (clamping of both SA/SV), respectively. For SV hemodynamics, MPV changed from 15 ± 5 to 13 ± 4 cm/s (normal), from 17 to 15 cm/s (clamping of the SA), and from 17 to 26 cm/s (clamping of both SA/SV), respectively.
Conclusions: RFA of the spleen could induce coagulation necrosis and reduce the splenic arterial blood flow.
abstract_id: PUBMED:15862259
Radiofrequency ablation for hypersplenism in patients with liver cirrhosis: a pilot study. Radiofrequency ablation is a relatively new technique used for local ablation of unresectable tumors. We investigated the feasibility and efficacy of radiofrequency ablation for hypersplenism and its effect on liver function in patients with liver cirrhosis and portal hypertension. Nine consecutive patients with hypersplenism due to cirrhotic portal hypertension underwent radiofrequency ablation in enlarged spleens. The ablation was performed either intraoperatively or percutaneously. Patients were followed up for over 12 months. After treatment, between 20% and 43% of spleen volume was ablated, and spleen volume increased by 4%-10.2%. White blood cell count, platelet count, liver function, and hepatic artery blood flow showed significant improvement after 1-year follow-up. Splenic vein and portal vein blood flow were significantly reduced. Only minor complications, including hydrothorax (three of nine patients) and mild abdominal pain (four of nine patients), were observed. No mortality or other morbidity occurred. Radiofrequency ablation is a safe, effective, and minimally invasive approach for the management of splenomegaly and hypersplenism in patients with liver cirrhosis and portal hypertension. Increased hepatic artery blood flow may be responsible for sustained improvement of liver condition. Radiofrequency ablation may be used as a bridging therapy for cirrhotic patients awaiting liver transplantation.
abstract_id: PUBMED:16029544
Radiofrequency ablation for hypersplenism due to portal hypertension: clinical study Objective: To investigate the feasibility, efficacy and clinical prospects of radiofrequency ablation (RFA) for hypersplenism in patients with liver cirrhosis and portal hypertension.
Methods: The laboratory and radiologic data over a one-year period of patients who underwent splenic RFA were analyzed.
Results: Nine patients who underwent splenic RFA were closely followed up for over 1 year. During hospitalization, no procedure-related complications occurred; only minor complications, including hydrothorax (3/9 patients) and mild abdominal pain (4/9 patients), were observed. After treatment, an average of 30.7% (20%-43%) of spleen volume was ablated, and the platelet count reached a peak on the 14th post-procedure day. White blood cell and platelet counts, liver function, and hepatic artery blood flow showed significant improvement compared with pre-RFA values. Hyperplasia/regeneration also occurred in the cirrhotic liver after splenic RFA.
Conclusion: Radiofrequency ablation is a safe, effective and minimally invasive approach for the management of hypersplenism in patients with liver cirrhosis and portal hypertension. Increased hepatic artery blood flow can contribute to significant improvement of liver function and may potentially stimulate regeneration in the cirrhotic liver.
abstract_id: PUBMED:26034376
Radiofrequency ablation for treatment of hypersplenism: A feasible therapeutic option. We present a case of a patient with hypersplenism secondary to portal hypertension due to hepato-splenic schistosomiasis, which was accompanied by severe and refractory thrombocytopenia. We performed spleen ablation and measured the total spleen and ablated volumes with contrast-enhanced computed tomography and volumetry. No major complications occurred, thrombocytopenia was resolved, and platelet levels remained stable, which allowed for early treatment of the patient's underlying disease. Previous work has shown that splenic radiofrequency ablation is an attractive alternative treatment for hypersplenism induced by liver cirrhosis. We aimed to contribute to the currently sparse literature evaluating the role of radiofrequency ablation (RFA) in the management of hypersplenism. We conclude that splenic RFA appears to be a viable and promising option for the treatment of hypersplenism.
abstract_id: PUBMED:33281164
Short-term Effects of Hepatic Arterial Buffer Responses Induced by Partial Splenic Embolization on the Hepatic Function of Patients with Cirrhosis According to the Child-Pugh Classification. Objective This study primarily aimed to investigate the short-term effects of partial splenic embolization (PSE) on the Child-Pugh score and identify predictive factors for changes in the score caused by PSE. The secondary aim was to analyze changes in various parameters at one month postoperatively using these identified factors. Methods Between September 2007 and December 2019, 118 patients with cirrhosis and hypersplenism underwent PSE at our hospital. Testing was conducted preoperatively and at one month after PSE. Results Overall, the Child-Pugh score was not significantly changed postoperatively. The Child-Pugh score before PSE was identified as the strongest independent predictor of ameliorated and deteriorated Child-Pugh scores after PSE. Higher pretreatment Child-Pugh scores were correlated with higher posttreatment amelioration rates of the score. A significant decrease in the portal vein diameter and a significant increase in the common hepatic artery diameter were evident at the same level postoperatively in 64 patients with Child-Pugh class A (group A) and in 54 patients with Child-Pugh class B or C (group B/C) preoperatively. According to Murray's Law, PSE resulted in decreased portal venous flow and increased hepatic arterial flow, suggesting a hepatic arterial buffer response (HABR) induced by the procedure. Despite equivalent splenic infarction rates and similar posttreatment changes in hepatic hemodynamics, PSE significantly increased the Child-Pugh score of group A; however, the procedure significantly decreased the score of group B/C. Conclusion Considering original portal venous-hepatic arterial hemodynamics, PSE is expected to produce HABR-mediated hepatic functional improvements in cirrhosis patients with Child-Pugh class B/C.
abstract_id: PUBMED:23539400
Radiofrequency ablation of splenic tumors: a case series. Radiofrequency ablation (RFA) for treatment of splenic tumors has rarely been reported. Here we describe our experience of performing RFA in three patients with solitary metastatic (n=2) and benign (n=1) tumors of the spleen. Two patients also had underlying cirrhotic hypersplenism. A 53-year-old male with solitary splenic metastasis from hepatocellular carcinoma underwent laparoscopic RFA of the splenic tumor. Another patient, a 61-year-old female with intraabdominal recurrence, focal splenic metastasis from colon cancer and cirrhotic hypersplenism, underwent cytoreductive surgery and RFA of the splenic tumors. In the third patient, a 32-year-old man with severe hypersplenism, splenic artery steal syndrome and a solitary splenic hemangioma, laparoscopic RFA of the splenic tumor was performed. All three patients recovered uneventfully. The concurrent hypersplenism of the latter two patients improved significantly. The results indicate that RFA of splenic tumors is feasible and safe, and could be evaluated as an alternative to splenectomy in selected patients with solitary splenic tumors.
abstract_id: PUBMED:23798315
Large splenic volume may be a useful predictor for partial splenic embolization-induced liver functional improvement in cirrhotic patients. Background: Partial splenic embolization (PSE) for cirrhotic patients has been reported not only to achieve an improvement in thrombocytopenia and portal hypertension, but also to induce PSE-associated fringe benefits such as liver functional improvement. The purpose of this study was to clarify the predictive marker of liver functional improvement resulting from PSE in cirrhotic patients.
Methods: From April 1999 to January 2009, 83 cirrhotic patients with hypersplenism-induced thrombocytopenia (platelet count <10 × 10(4)/μl) underwent PSE. Of them, 71 patients with follow-up for more than one year after PSE were retrospectively investigated.
Results: In liver tissues after PSE, proliferating cell nuclear antigen (PCNA)-positive hepatocytes were remarkably increased, suggesting that PSE induced a liver regenerative response. Indeed, serum albumin and cholinesterase levels increased to 104 ± 14% and 130 ± 65% of the pretreatment level, respectively, at one year after PSE. In a multiple linear regression analysis, preoperative splenic volume was extracted as the predictive factor for the improvement in cholinesterase level after PSE. Cirrhotic patients with a preoperative splenic volume >600 ml obtained significantly higher serum albumin and cholinesterase levels at one year after PSE compared to those with a volume of less than 600 ml (P-values were 0.029 for both).
Conclusion: A large preoperative splenic volume was a useful predictive marker of effective PSE-induced liver functional improvement.
abstract_id: PUBMED:36345736
Open Radiofrequency Ablation Combined with Splenectomy and Pericardial Devascularization vs. Liver Transplantation for Hepatocellular Carcinoma Patients with Portal Hypertension and Hypersplenism: A Case-Matched Comparative Study. Aim: To compare the short- and long-term treatment outcomes of open radiofrequency ablation combined with splenectomy and pericardial devascularization versus liver transplantation for hepatocellular carcinoma patients with portal hypertension and hypersplenism.
Methods: During the study period, the treatment outcomes of consecutive HCC patients with portal hypertension and hypersplenism who underwent open radiofrequency ablation, splenectomy and pericardial devascularization (the study group) were compared with the treatment outcomes of a case-matched control group of HCC patients who underwent liver transplantation.
Results: The study group consisted of 32 patients, and the control group comprised 32 patients selected from 155 patients who were case-matched by tumor size, age, gender, MELD score, tumor location, TNM classification, degree of splenomegaly and Child-Pugh staging. Baseline data on preoperative laboratory tests and tumor characteristics were comparable between the two groups. The mean follow-up was 43.2 ± 5.3 months and 44.9 ± 5.8 months for the study and control groups, respectively. Although the disease-free survival rates of the control group were better than those of the study group (P < 0.001), there was no significant difference in cumulative overall survival time or the incidence of portal vein thrombosis between the two groups (P = 0.670 and 0.083, respectively). Compared with the control group, the study group had significantly less intraoperative blood loss and lower incidences of postoperative pleural effusion and pneumonia (all P < 0.05).
Conclusion: Open radiofrequency ablation, splenectomy and pericardial devascularization for small HCCs with portal hypertension and hypersplenism can be an alternative therapy for a subset of carefully selected patients, given the shortage of liver donors.
abstract_id: PUBMED:37885290
Relationship Between Perioperative Factors and Splenic Artery Steal Syndrome After Orthotopic Liver Transplant: A Retrospective Clinical Study. Objectives: After orthotopic liver transplant, ischemia of the biliary tract and graft loss may occur due to impaired hepatic arterial blood flow. This combination of hypersplenism and impaired hepatic arterial blood flow is defined as splenic artery steal syndrome. The aim of this study was to investigate the relationship between perioperative factors and splenic artery steal syndrome in orthotopic liver transplant patients.
Materials And Methods: Forty-five patients who underwent orthotopic liver transplant between 2014 and 2022 were included in the study. The data for the patients were obtained from the hospital database, including the intraoperative anesthesiology and postoperative intensive care records.
Results: Eleven patients were diagnosed with splenic artery steal syndrome. Patients with splenic artery steal syndrome had a higher need for intraoperative vasopressor agents (P = .016) and exhibited lower intraoperative urine output (P = .031). In the postoperative intensive care follow-up, patients with splenic artery steal syndrome had higher levels of C-reactive protein during the first 48 hours (P = .030).
Conclusions: Intraoperative administration of vasopressor drugs, low urine output, and early postoperative high C-reactive protein levels were associated with the development of splenic artery steal syndrome in patients undergoing orthotopic liver transplant. Future studies should focus on investigating biomarkers associated with systemic hypoperfusion that may contribute to the development of splenic artery steal syndrome.
Answer: The two-year follow-up study of splenic radiofrequency ablation (RFA) in patients with cirrhotic hypersplenism aimed to investigate whether increased hepatic arterial flow (HAF) could induce liver regeneration. The study involved 40 patients who underwent splenic RFA and were followed for 24 months. Measurements of portal hemodynamics, liver and spleen volumes, liver function tests, and blood counts were taken before and after the RFA procedures. The results showed that after RFA, there was a decrease in splenic and portal venous flows, but a dramatic increase in HAF. Liver volumes at 3 months post-RFA were increased compared to baseline volumes (872 ± 107 vs. 821 ± 99 cm³, P = .031). A correlation was found between the maximum absolute values of liver volumes and that of HAF in Child-Pugh class A/B patients (r = 0.60; P < .001). Additionally, leukocyte and platelet counts, as well as liver function, improved substantially during the 2-year follow-up. Patients with ≥ 40% of spleen volume ablated had better improvement of thrombocytopenia. No death or severe complications occurred. The study concluded that RFA for cirrhotic hypersplenism is safe and efficacious, and the increase in HAF as a result of splenic RFA may improve liver function and induce liver regeneration in cirrhotics. However, further studies are necessary to clarify the underlying mechanisms (PUBMED:18374048).
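As a rough illustration of the kind of statistics reported in this answer (a paired pre/post liver-volume comparison and a Pearson correlation with HAF change), the Python sketch below runs the same two tests on simulated placeholder data. All values, sample sizes, and effect sizes are hypothetical assumptions; this is not the study's analysis code.

```python
# Sketch: paired comparison of liver volumes and correlation with hepatic
# arterial flow (HAF) change, mirroring the analyses reported above.
# The arrays below are simulated placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 40  # patients followed after splenic RFA

vol_pre = rng.normal(821, 99, n)            # baseline liver volume (cm^3)
vol_3mo = vol_pre + rng.normal(50, 60, n)   # volume at 3 months post-RFA

# Paired t-test: did liver volume change after RFA?
t, p = stats.ttest_rel(vol_3mo, vol_pre)
print(f"paired t = {t:.2f}, p = {p:.3f}")

# Correlation between volume change and a hypothetical % increase in HAF
haf_change = rng.normal(30, 10, n)
r, p_r = stats.pearsonr(vol_3mo - vol_pre, haf_change)
print(f"Pearson r = {r:.2f}, p = {p_r:.3f}")
```

The paired test is the natural choice here because each patient serves as their own control, which removes between-patient variability from the comparison.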
Instruction: Is there an optimal hemoglobin level for patients with glioblastoma multiforme?
Abstracts:
abstract_id: PUBMED:15701272
Is there an optimal hemoglobin level for patients with glioblastoma multiforme? Purpose: The purpose of this study was to assess the relationship between hemoglobin levels and survival for patients treated with radiation therapy for glioblastoma multiforme.
Methods/materials: Between 1992 and 2001, 89 patients with newly diagnosed glioblastoma multiforme were treated with a minimum of 50 Gy of radiation therapy. The primary study endpoint was overall survival. The independent variables analyzed included peak hemoglobin level, age, sex, extent of surgery, and duration of therapy. The peak hemoglobin level was the highest hemoglobin value obtained within 1 week before the initiation of radiation therapy or at some point during radiation therapy. The peak hemoglobin level was dichotomized (values less than or equal to versus greater than) at each of the following cutoffs: 11.0, 11.5, 12.0, 12.5, 13.0, 13.5, and 14.0 g/dL.
Results: On univariate analysis, age (< or = 50 years of age) and surgical treatment (resection) were significant for increased survival at 1 year. When univariate analysis was performed on the stratification of the peak hemoglobin, levels greater than 11.0, 13.5, and 14.0 g/dL reached statistical significance for increased survival. Multivariate analysis was then performed on models composed of the hemoglobin levels that reached significance, and the other independent variables were investigated. In all models, both age and the peak hemoglobin level tested were prognostic for survival. However, for the hemoglobin level of 11.0 g/dL, an interaction was detected between hemoglobin and age.
Conclusion: We found that increasing hemoglobin levels may have prognostic implications and could thus influence clinical outcome. We will seek to verify our results in larger cohorts.
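The cutoff-stratified survival comparison described in this abstract can be illustrated with a short sketch: for each candidate hemoglobin cutoff, patients are split into two groups and compared with a log-rank test. The data below are simulated placeholders and the `lifelines` package is an assumed dependency; this is only a schematic of the univariate step, not the study's code.

```python
# Sketch: split patients at each candidate hemoglobin cutoff and compare
# survival between strata with a log-rank test. All data are simulated.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 89
hb = rng.normal(12.5, 1.5, n)              # peak hemoglobin (g/dL)
time = rng.exponential(12 + (hb - 11), n)  # survival in months (toy model)
event = rng.random(n) < 0.8                # True = death observed

for cutoff in [11.0, 11.5, 12.0, 12.5, 13.0, 13.5, 14.0]:
    lo, hi = hb <= cutoff, hb > cutoff
    if lo.sum() < 5 or hi.sum() < 5:
        continue  # skip cutoffs that leave a stratum nearly empty
    res = logrank_test(time[lo], time[hi],
                       event_observed_A=event[lo],
                       event_observed_B=event[hi])
    print(f"cutoff {cutoff:>4} g/dL: log-rank p = {res.p_value:.3f}")
```

Note that testing many cutoffs on the same data inflates the chance of a spuriously significant split, which is one reason such findings need verification in independent cohorts.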
abstract_id: PUBMED:23110493
Elderly patients affected by glioblastoma treated with radiotherapy: the role of serum hemoglobin level. Objective: To investigate the role of serum hemoglobin level for elderly patients with glioblastoma treated with radiotherapy (RT).
Methods: Patients older than 65 years with glioblastoma, who underwent surgical resection/biopsy and RT, were evaluated. Total doses were 30 or 60 Gy:30 Gy in 10 or 5 fractions (palliative approach) and 60 Gy in 30 fractions (standard approach). In the standard approach, temozolomide was administered concomitantly and adjuvantly to RT. Before starting and weekly during RT, serum hemoglobin level was assessed for all patients. Recursive partitioning analysis (RPA) was used to classify patients.
Results: From 2005 to 2011, 45 patients (median age 71 years) were treated in our institution. A hemoglobin level of less than 12 g/dL was found in 11 patients. Median progression-free survival (PFS) and overall survival (OS) were 8 and 13 months, respectively. Only RPA class and extent of surgery correlated with PFS (p = .002, p = .04, respectively). RPA class, surgery, and RT dose affected OS (p = .003, p = .02, p = .03, respectively), whereas age (<70 vs. ≥70 years) and hemoglobin level (<12 vs. ≥12 g/dL) did not influence outcome (p = 0.2, p = 0.5, respectively).
Conclusion: Our data suggest that extent of surgery and RPA class remain independent prognostic factors, whereas anemia did not adversely affect prognosis in elderly glioblastoma patients.
abstract_id: PUBMED:14628127
Prognostic impact of hemoglobin level prior to radiotherapy on survival in patients with glioblastoma. Purpose: To evaluate prognostic factors in patients with glioblastoma treated with postoperative or primary radiotherapy.
Patients And Methods: From 1989 to 2000, a total of 100 patients underwent irradiation as part of their initial treatment for glioblastoma. All patients had undergone surgery or biopsy followed by conventional external-beam radiotherapy. 85 patients who received the planned dose of irradiation (60 Gy in 30 fractions) were analyzed for the influence of prognostic factors. 73/85 (86%) of patients were given postoperative irradiation, while 12/85 (14%) of patients were primarily treated with radiotherapy after biopsy.
Results: The median overall survival was 10.1 months (range, 3.7-49.8 months); the 1- and 2-year survival rates were 41% and 5%, respectively. Univariate analysis revealed age ≤ 55 years (p < 0.001), pre-radiotherapy hemoglobin (Hb) level > 12 g/dl (p = 0.009), and pre-radiotherapy dose of dexamethasone ≤ 2 mg/day (p = 0.005) to be associated with prolonged survival. At multivariate analysis, younger age (p < 0.001), higher Hb level (p = 0.002), lower dose of dexamethasone (p = 0.026), and a hemispheric tumor location (p = 0.019) were identified as independent prognostic factors for longer survival. The median survival for patients with an Hb level > 12 g/dl was 12.1 months compared to 7.9 months for those with a lower Hb level. Contingency-table statistics showed no significant differences for the two Hb groups in the distribution of other prognostic factors.
Conclusion: The results indicate that lower Hb level prior to radiotherapy for glioblastoma can adversely influence prognosis. This finding deserves further evaluation.
abstract_id: PUBMED:22127356
Prognostic impact of hemoglobin level and other factors in patients with high-grade gliomas treated with postoperative radiochemotherapy and sequential chemotherapy based on temozolomide: a 10-year experience at a single institution. Background And Purpose: To evaluate the influence of serum hemoglobin level prior to radiotherapy and other prognostic factors on survival in patients with high-grade gliomas.
Material And Methods: From 2001 to 2010, we retrospectively evaluated a total of 48 patients with malignant glioma treated with surgery and postoperative radiochemotherapy with temozolomide. A total of 37 of 48 patients received sequential temozolomide. Hemoglobin levels were assayed before radiotherapy in all patients. The Kaplan-Meier method was applied to estimate overall survival, while the log-rank test was applied to evaluate the differences in survival probability between prognostic subgroups.
Results: Results were assessed in 43 patients. The median overall survival time was 18 months (95% confidence interval: 12-40 months). The 1- and 2-year survival rates were 62.2% and 36.3%, respectively. The prognostic factors analyzed were gender, age, extent of surgery, performance status before and after radiotherapy, sequential chemotherapy, hemoglobin level, and methylation of the O-6-methylguanine-DNA methyltransferase gene (MGMT). In univariate analysis, the variables significantly related to survival were performance status before and after radiotherapy, sequential chemotherapy, and hemoglobin level. The median overall survival in patients with a hemoglobin level ≤ 12 g/dl was 12 months versus 23 months in patients with a hemoglobin level > 12 g/dl. The 1- and 2-year survival rates were 46.7% and 20.0%, respectively, for patients with a hemoglobin level ≤ 12 g/dl and 69.6% and 45.7%, respectively, for patients with a hemoglobin level > 12 g/dl.
Conclusion: Our results confirm the impact of well-known prognostic factors on survival. In this research, it was found that a low hemoglobin level before radiotherapy can adversely influence the prognosis of patients with malignant gliomas.
abstract_id: PUBMED:32426265
Hemoglobin Levels and Red Blood Cells Distribution Width Highlights Glioblastoma Patients Subgroup With Improved Median Overall Survival. Glioblastoma multiforme (GBM) is known for its dismal prognosis, though its dependence on patients' readily available RBC parameters is not fully established. In this work, 170 GBM patients, diagnosed and treated in Soroka University Medical Center (SUMC) over the last 12 years, were retrospectively inspected for the dependence of their survival on pre-operative RBC parameters. Besides KPS and tumor resection supplemented by oncological treatment, age under 70 (HR = 0.4, 95% CI 0.24-0.65, p = 0.00073), low hemoglobin level (HR = 1.79, 95% CI 1.06-2.99, p = 0.031), and Red Cell Distribution Width (RDW) < 14% (HR = 0.57, 95% CI 0.37-0.88, p = 0.018) were found to be prognostic of patients' overall survival in multivariate analysis, accounting for a false discovery rate of < 5% due to multiple hypothesis testing. According to these results, a stratification tree was constructed, in which a favorable route highlighted a subgroup of nearly 30% of the cohort's patients whose median overall survival was 21.1 months (95% CI 16.2-27.2), higher than the roughly 15-month median overall survival associated with the standard first-line chemoradiation regimen. The beneficial or detrimental effect of RBC parameters on GBM prognosis and its possible causes is discussed.
abstract_id: PUBMED:36325374
On optimal temozolomide scheduling for slowly growing glioblastomas. Background: Temozolomide (TMZ) is an oral alkylating agent active against gliomas with a favorable toxicity profile. It is part of the standard of care in the management of glioblastoma (GBM), and is commonly used in low-grade gliomas (LGG). In-silico mathematical models can potentially be used to personalize treatments and to accelerate the discovery of optimal drug delivery schemes.
Methods: Agent-based mathematical models fed with either mouse or patient data were developed for the in-silico studies. The experimental test beds used to confirm the results were: mouse glioma models obtained by retroviral expression of EGFR-wt/EGFR-vIII in primary progenitors from p16/p19 ko mice and grown in-vitro and in-vivo in orthotopic allografts, and human GBM U251 cells immobilized in alginate microfibers. The patient data used to parametrize the model were obtained from the TCGA/TCIA databases and the TOG clinical study.
Results: Slow-growth "virtual" murine GBMs benefited from increasing TMZ dose separation in-silico. In line with the simulation results, improved survival, reduced toxicity, lower expression of resistance factors, and reduction of the tumor mesenchymal component were observed in experimental models subject to long-cycle treatment, particularly in slowly growing tumors. Tissue analysis after long-cycle TMZ treatments revealed epigenetically driven changes in tumor phenotype, which could explain the reduction in GBM growth speed. In-silico trials provided support for implementation methods in human patients.
Conclusions: In-silico simulations and in-vitro and in-vivo studies show that TMZ administration schedules with increased time between doses may reduce toxicity, delay the appearance of resistance and lead to survival benefits mediated by changes in the tumor phenotype in slowly-growing GBMs.
abstract_id: PUBMED:36252189
Relationship of carbohydrate metabolism indicators during adjuvant radiotherapy and survival in patients with glioblastoma. Despite improvements in treatment methods, survival of patients with glioblastoma remains low. Glioblastoma is the most common brain tumor.
Objective: To study carbohydrate metabolism in patients with glioblastoma during adjuvant external beam radiation therapy and its impact on survival.
Material And Methods: The study included 66 patients with glioblastoma (Karnofsky score ≥80%) who underwent hypofractionated adjuvant external beam radiation therapy (dose per fraction 2.5-3 Gy). Patients received dexamethasone 4-8 mg daily throughout the entire course of irradiation.
Results: A high level of glycated hemoglobin (HbA1c) was observed in 33.3% of patients with glioblastoma undergoing irradiation. Cumulative survival was 17 months (95% CI 13.7-20.3). Two indicators had a significant negative impact on cumulative survival: age of patients (HR 1.04; 95% CI 1.01-1.08; p=0.02) and level of HbA1c (HR 1.94; 95% CI 1.23-3.06; p=0.005). Cumulative survival was significantly (p=0.022) higher in patients younger than 53 years compared to older patients (18 months and 14 months, respectively). Cumulative survival was 20 months among patients whose HbA1c did not exceed the upper reference value (<5.8%); survival was higher (p=0.017) in these patients compared to patients with HbA1c ≥5.8% (13 months). According to a multivariate Cox regression model, high HbA1c was the only significant negative predictor of cumulative survival (HR 3.35; 95% CI 1.14-9.81; p=0.027).
Conclusion: High HbA1c is an unfavorable predictor of cumulative survival in patients with glioblastoma and a Karnofsky score ≥80% undergoing adjuvant hypofractionated irradiation, regardless of their age.
abstract_id: PUBMED:24270851
Adult, embryonic and fetal hemoglobin are expressed in human glioblastoma cells. Hemoglobin is a hemoprotein, produced mainly in erythrocytes circulating in the blood. However, non-erythroid hemoglobins have previously been reported in other cell types, including human and rodent neurons of embryonic and adult brain, but not in astrocytes or oligodendrocytes. Human glioblastoma multiforme (GBM) is the most aggressive tumor among gliomas. However, despite extensive basic and clinical research on GBM cells, little is known about the glial defence mechanisms that allow these cells to survive and resist various types of treatment. We have shown previously that the newest members of the vertebrate globin family, neuroglobin (Ngb) and cytoglobin (Cygb), are expressed in human GBM cells. In this study, we sought to determine whether hemoglobin is also expressed in GBM cells. Conventional RT-PCR, DNA sequencing, western blot analysis, mass spectrometry and fluorescence microscopy were used to investigate globin expression in GBM cell lines (M006x, M059J, M059K, M010b, U87R and U87T) that have unique characteristics in terms of tumor invasion and response to radiotherapy and hypoxia. The data showed that α, β, γ, δ, ζ and ε globins are expressed in all tested GBM cell lines. To our knowledge, we are the first to report expression of fetal, embryonic and adult hemoglobin in GBM cells under normal physiological conditions, which may suggest an as-yet-undefined function of these hemoglobins. Together with our previous reports on globin (Ngb and Cygb) expression in GBM cells, the expression of different hemoglobins may constitute part of a series of active defence mechanisms that help these cells resist various types of treatment, including chemotherapy and radiotherapy.
abstract_id: PUBMED:25839399
Optimal Timing of Whole-Brain Radiation Therapy Following Craniotomy for Cerebral Malignancies. Background: For patients with cerebral metastases that are limited in number, surgical resection followed by whole-brain radiation therapy is the standard of care. In addition, for high-grade gliomas, maximal surgical resection followed by local radiotherapy is considered the optimal treatment. Radiation is known to impair wound healing, including healing of surgical incisions. Radiotherapy shortly after surgical resection would be expected to minimize the opportunity for tumor regrowth or progression. Owing to these competing interests, the purpose of this study was to shed light on the optimal timing of radiotherapy after surgical resection of brain metastasis or high-grade gliomas.
Methods: A review of the literature was conducted on the following topics: radiation and wound healing, corticosteroid use and wound healing, radiotherapy for tumor control for cerebral metastases and high-grade gliomas, and whole-brain radiation therapy or focal radiotherapy after craniotomy with focus on the timing of radiotherapy after surgery.
Results: In animal models, wound integrity and healing were less impaired by radiotherapy administered 1 week after surgery. In humans, this interval would be expected to be significantly longer, on the order of several weeks.
Conclusions: Given the limited literature, insufficient conclusions can be drawn. However, animal data suggest a period of at least 1 week (but it is likely several weeks in humans) is necessary for reconstitution of wound strength before initiation of radiation therapy. A randomized prospective study is recommended to understand better the effect of the timing of radiation therapy following surgical intervention for brain metastasis or high-grade gliomas.
abstract_id: PUBMED:28516344
Defining optimal cutoff value of MGMT promoter methylation by ROC analysis for clinical setting in glioblastoma patients. Resistance to temozolomide (TMZ) chemotherapy poses a significant challenge in the treatment of glioblastoma (GBM). Hypermethylation of the O6-methylguanine-DNA methyltransferase (MGMT) promoter is thought to play a critical role in this resistance. Pyrosequencing (PSQ) has been shown to be accurate and robust for MGMT promoter methylation testing. The unresolved issue is the determination of a cut-off value for dichotomization of quantitative MGMT PSQ results into "MGMT methylated" and "MGMT unmethylated" patient subgroups as a basis for further treatment decisions. In this study, receiver operating characteristic (ROC) curve analysis was used to identify an optimal cutoff of MGMT promoter methylation by testing the mean percentage of methylation of 4 CpG islands (76-79) within MGMT exon 1. The area under the ROC curve (AUC) as well as the best cutoff to classify methylation status were calculated. The positive likelihood ratio (LR+) was chosen as the diagnostic parameter for defining an optimal cut-off. We also analyzed whether the mean percentage of methylation at the investigated CpG islands could be regarded as a prognostic marker. ROC analysis showed that the optimal threshold was 12.5% (sensitivity: 60.87%; specificity: 76%), corresponding to the largest LR+ of 2.54. This 12.5% cutoff for MGMT promoter methylation was confirmed using a validation set. According to the cutoff value, MGMT promoter methylation was found in 58.3% of GBM. The mean methylation level of the investigated CpG sites strongly correlated with overall survival (OS): GBM patients with a high level of methylation survived longer than those with a low level of methylation (log-rank test, P = 0.017). In conclusion, ROC curve analysis enables selection of the best cutoff for discriminating MGMT promoter methylation status, and LR+ can be used as a key criterion for evaluating the cutoff. The promoter methylation level of MGMT by PSQ had prognostic value in GBM patients.
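A minimal sketch of the cutoff-selection idea above: walk the ROC curve and pick the threshold that maximizes the positive likelihood ratio, LR+ = sensitivity / (1 - specificity). The labels and methylation percentages below are simulated placeholders and scikit-learn is an assumed dependency; the numbers will not reproduce the study's 12.5% cutoff.

```python
# Sketch: choose a methylation cutoff by maximizing LR+ along the ROC curve.
# All data are simulated placeholders, not pyrosequencing results.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(2)
responder = rng.random(100) < 0.4                    # True = favorable outcome
meth = np.where(responder,
                rng.normal(20, 8, 100),              # % methylation if responder
                rng.normal(8, 6, 100)).clip(0, 100)

fpr, tpr, thresholds = roc_curve(responder, meth)
print(f"AUC = {auc(fpr, tpr):.2f}")

with np.errstate(divide="ignore", invalid="ignore"):
    lr_plus = np.where(fpr > 0, tpr / fpr, 0.0)      # LR+ = sens / (1 - spec)
best = lr_plus.argmax()
print(f"best cutoff = {thresholds[best]:.1f}% "
      f"(sens {tpr[best]:.2f}, spec {1 - fpr[best]:.2f}, LR+ {lr_plus[best]:.2f})")
```

Maximizing LR+ favors thresholds with few false positives, which suits a rule meant to justify treatment decisions; other criteria (e.g., Youden's J) would trade sensitivity and specificity differently.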
Answer: The relationship between hemoglobin levels and survival in patients with glioblastoma multiforme (GBM) has been investigated in several studies, but the results are not entirely consistent, and thus an "optimal" hemoglobin level is not definitively established.
One study found that increasing hemoglobin levels may have prognostic implications for patients treated with radiation therapy for GBM. On univariate analysis, peak hemoglobin levels greater than 11.0, 13.5, and 14.0 g/dL were significant for increased survival. Multivariate analysis confirmed that both age and peak hemoglobin level were prognostic for survival (PUBMED:15701272).
Another study focused on elderly patients with GBM treated with radiotherapy and found that hemoglobin level did not influence the outcome. In this study, neither age nor hemoglobin level affected overall survival, suggesting that anemia did not adversely affect prognosis in elderly GBM patients (PUBMED:23110493).
A different study indicated that a pre-radiotherapy hemoglobin level greater than 12 g/dL was associated with prolonged survival, identifying it as an independent prognostic factor for longer survival in patients with GBM (PUBMED:14628127).
Further research confirmed that a low hemoglobin level before radiotherapy can adversely influence the prognosis of patients with malignant gliomas. In this study, patients with a hemoglobin level ≤ 12 g/dL had a median overall survival of 12 months, compared to 23 months for those with a hemoglobin level > 12 g/dL (PUBMED:22127356).
An additional study highlighted that low hemoglobin level and Red Cell Distribution Width (RDW) < 14% were prognostic of patients' overall survival in multivariate analysis, identifying a subgroup of GBM patients with improved median overall survival (PUBMED:32426265).
In summary, while some studies suggest that higher hemoglobin levels (>12 g/dL) are associated with improved survival in GBM patients, other studies do not find a significant impact of hemoglobin levels on patient outcomes. Therefore, it is difficult to define an "optimal" hemoglobin level for GBM patients, and further research is needed to clarify the prognostic value of hemoglobin in this context.
Instruction: Does fingernail polish affect pulse oximeter readings?
Abstracts:
abstract_id: PUBMED:17064901
Does fingernail polish affect pulse oximeter readings? Introduction: Results from previous studies evaluating the effect of nail polish on oxygen saturation (SpO2) determined by pulse oximeter monitors are inconsistent. Establishing the effect of nail polish on SpO2 is relevant to clinical practice, since removing nail polish requires clinical time and supplies.
Objective: The objective of this study was to determine if fingernail polish affects SpO(2) as measured by two different pulse oximeter machines.
Methods: Absorption spectra of 10 nail polish colors were obtained by spectrophotometry. Twenty-seven healthy volunteers with SpO2 ≥ 95% participated. Using the Nellcor N20 and N595 pulse oximeters, the mean SpO2 was measured on each of 10 nails with and without nail polish and using a side-to-side configuration. Means were compared using paired t-tests.
Results: Mean SpO2 had a statistically significant decrease with brown and blue nail polish using both machines (p<0.05) but this was not clinically significant (<1% difference). Using the side-to-side configuration, the N595 oximeter had a statistically significant decrease in mean SpO2 with red nail polish but again this was not clinically significant.
Conclusion: Fingernail polish does not cause a clinically significant change in pulse oximeter readings in healthy people.
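The paired with/without-polish comparison used in this study can be sketched as follows: each color's readings are compared against the unpolished baseline on the same fingers with a paired t-test. All SpO2 values and per-color effects below are simulated assumptions, not study data.

```python
# Sketch: per-color paired comparison of SpO2 with and without nail polish.
# Values are simulated; the offsets dict encodes hypothetical color effects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
colors = ["red", "brown", "blue", "black", "pink"]
n = 27  # volunteers

baseline = rng.normal(98, 1, (len(colors), n))       # SpO2 without polish
offsets = {"brown": -0.5, "blue": -0.4}              # hypothetical mean shifts
for i, color in enumerate(colors):
    polished = baseline[i] + offsets.get(color, 0.0) + rng.normal(0, 0.5, n)
    t, p = stats.ttest_rel(polished, baseline[i])
    flag = "significant" if p < 0.05 else "n.s."
    print(f"{color:>5}: mean diff {np.mean(polished - baseline[i]):+.2f}% "
          f"(p = {p:.3f}, {flag})")
```

This design also makes the study's distinction visible: with n = 27 paired readings, a sub-1% shift can be statistically significant while remaining far too small to matter clinically.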
abstract_id: PUBMED:24054173
The effect of nail polish on pulse oximetry readings. Introduction: Pulse oximeters utilise the pulsatile nature of arterial blood flow to distinguish it from venous flow and estimate oxygen saturation in arterial blood. Pulse oximetry is primarily used in hospital wards, emergency rooms, intensive care units, operating rooms and home care.
Aim: The objective of this study is to determine whether the use of nail polish of various colours have an effect on oximeter readings of oxygen saturation value.
Method: The sample group of this study comprised 40 healthy women. In the first phase of the study, readings were taken on the left- and right-hand fingers, with no nail polish, to determine any differences in oxygen saturation values. In the second phase of the study, 10 different colours of nail polish, namely dark red, yellow, dark blue, green, purple, brown, white, metallic, black and pink, of the same brand were applied. Readings were recorded once the oxygen saturation values on the screen became stable. Number and percentage distributions, along with the Wilcoxon signed-rank and Friedman tests, were used in the analysis of data.
Conclusion: Only red nail polish did not yield statistically significant changes in readings. We conclude that the other nail polish colours tested cause a clinically significant change in pulse oximeter readings in healthy volunteers.
abstract_id: PUBMED:37185113
Impact of Fingernail Polish on Pulse Oximetry Measurements: A Systematic Review. Background: The effect of application of fingernail polish on SpO2 measurement remains unclear. We conducted this systematic review to ascertain the impact of fingernail polish on SpO2 measurement.
Methods: We queried PubMed, Embase, and CINAHL databases for publications indexed through December 2022. We included studies providing paired SpO2 data from fingertips without and after nail polish application or reporting the number of subjects whose SpO2 could not be measured due to fingernail polish. We used random effects modeling to summarize standardized mean differences (SMDs) and corresponding 95% CI for different nail polish colors from comparative studies.
Results: We retrieved 122 studies and included 21 publications, mostly performed on healthy volunteers. Of these, 17 (81.0%) studies had a low risk of bias. We summarized mean SMD for 10 nail polish colors (black, blue, brown, green, orange, pink, purple, red, white, and yellow) from 25 paired data sets on SpO2 across 20 studies. We found small (likely clinically insignificant) but statistically significant differences in mean SpO2 when fingers were coated with black, blue, brown, or purple nail polish (SMD -0.57, -0.47, -0.33, and -0.25, respectively; 95% CI -0.86 to -0.29, -0.84 to -0.10, -0.59 to -0.07, and -0.48 to -0.02, respectively). Only one of 4 studies reported a high proportion of unsuccessful oximeter readings from fingers painted with black (88.0%) or brown (36.0%) nail polish.
Conclusions: Although fingernail polish of some colors can marginally reduce SpO2 reading or occasionally impede SpO2 measurement, the variability is clinically insignificant.
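A compact sketch of the random-effects pooling step behind such a review, using the DerSimonian-Laird estimator to combine per-study standardized mean differences (SMDs). The five SMDs and variances below are hypothetical placeholders, not the review's extracted data.

```python
# Sketch: DerSimonian-Laird random-effects pooling of SMDs.
# Per-study effect sizes and variances are hypothetical placeholders.
import numpy as np

d = np.array([-0.60, -0.45, -0.55, -0.70, -0.40])   # per-study SMDs
v = np.array([0.04, 0.06, 0.05, 0.08, 0.07])        # per-study variances

w = 1 / v                                           # fixed-effect weights
d_fixed = (w * d).sum() / w.sum()
Q = (w * (d - d_fixed) ** 2).sum()                  # Cochran's Q heterogeneity
k = len(d)
tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - (w**2).sum() / w.sum()))

w_star = 1 / (v + tau2)                             # random-effects weights
d_pooled = (w_star * d).sum() / w_star.sum()
se = np.sqrt(1 / w_star.sum())
lo, hi = d_pooled - 1.96 * se, d_pooled + 1.96 * se
print(f"pooled SMD = {d_pooled:.2f} (95% CI {lo:.2f} to {hi:.2f}), "
      f"tau^2 = {tau2:.3f}")
```

The between-study variance tau^2 widens the confidence interval relative to a fixed-effect model, which is why random effects are preferred when studies differ in oximeters, populations, and protocols, as they do here.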
abstract_id: PUBMED:24799789
The effect of subareolar isosulfan blue injection on pulse oximeter readings. Besides several side effects, including anaphylaxis, blue dyes are also known to cause false pulse oximeter readings. We aimed to examine the effects of subareolar isosulfan blue injection on pulse oximeter (SpO2) readings. The study group included 27 patients undergoing SLNB using both radiocolloid and isosulfan blue; another 27 patients constituted the control group. Pulse oximeter readings were compared, and an SpO2 decline ≥4% was defined as significant. All but one (96.2%) of the patients in the study group showed SpO2 declines, compared to only one patient in the control group. The median ± interquartile range (IQR) SpO2 decrease was 3.0 ± 4.0% in the study group and 0.0 ± 1.0% in the control group (p < 0.001). There were significant (≥4%) SpO2 decreases in 13 (48.1%) patients in the study group. Statistically significant differences were noted between the two groups in all recordings between 15 and 180 min (p < 0.001). The initial time of SpO2 fall and the time to the lowest SpO2 recording were 10.0 ± 10.0 and 40.0 ± 30.0 min, respectively. With subareolar injection, the frequency of false readings is comparable with intraparenchymal injection and higher than with intradermal injection. The time to peak SpO2 fall and the recovery period are delayed with the subareolar technique.
abstract_id: PUBMED:27815366
Effect of lingual gauze swab placement on pulse oximeter readings in anaesthetised dogs and cats. This study aimed to evaluate the effect of lingual gauze swab placement on pulse oximeter readings in anaesthetised dogs and cats. Following anaesthetic induction, the following pulse oximeter probe configurations were performed: no gauze swab (control), placement of a gauze swab between the tongue and the probe, placement of different thicknesses of gauze swab, placement of red cotton fabric, placement of a sheet of white paper and placement of the probe and gauze swab on different locations on the tongue. Oxygen saturation (SpO2) and peripheral perfusion index (PI) were recorded. Placement of a gauze swab between the pulse oximeter probe and the tongue in anaesthetised dogs and cats resulted in significantly higher SpO2 values compared with the control group. In dogs, PI values were significantly higher than the control in all groups except the quarter thickness swab group. In cats, PI was significantly higher in the double thickness swab and white paper groups compared with the control. Cats had significantly higher SpO2 and lower PI values than dogs. The authors propose that increased contact pressure is responsible for significantly higher SpO2 and PI readings with the use of a lingual gauze swab resulting from changes in transmural pressure and arterial compliance.
abstract_id: PUBMED:28921136
Epidural anesthesia affects pulse oximeter readings and response time. We investigated the effects of epidural anesthesia on pulse oximeter readings (SpO2) and response time because this type of anesthesia causes significant changes in microcirculation at measurement sites. Twenty patients were divided into lumbar epidural (L-EPI; n=10) and cervical epidural (C-EPI; n=10) groups. SpO2 and skin blood flow (SBF) were measured simultaneously at the finger and toe by pulse oximeter and laser Doppler flowmeter, respectively. Data were collected for 1 min before and after epidural anesthesia, and the response time was calculated from the difference between the finger and toe using the breath-holding method. Epidural anesthesia increased SBF in the blocked area and decreased it in the nonblocked area in both groups (P<0.01 for both). In the L-EPI group, SpO2 increased at the finger (P<0.05) and decreased at the toe (P<0.05). In the C-EPI group, SpO2 at both the finger and toe was decreased by the anesthesia. ΔSpO2 (SpO2 at the finger minus SpO2 at the toe) increased in the L-EPI group (P<0.05) and decreased in the C-EPI group (P<0.01). The difference in response time became larger in the C-EPI group and smaller or reversed in the L-EPI group after anesthesia. The difference in response time and SBF were significantly correlated (r=0.71; P<0.05). These results indicate that epidural anesthesia lowered SpO2 and shortened the response time through vasodilation in the blocked area, and caused the opposite reactions in the nonblocked area through compensatory vasoconstriction.
abstract_id: PUBMED:30854571
The effects of gel-based manicure on pulse oximetry. Introduction: Pulse oximetry is the standard monitoring technique of functional oxygen saturation (SpO2). As the use of fingernail polish has been described to alter SpO2 readings, its removal is commonly recommended prior to measurement. Gel-based manicures have gained popularity in recent years due to their attractiveness and longevity. However, the removal of gel nail polish requires a specialised procedure. Valuable time and resources can be saved if removal can be avoided. To our knowledge, there are no available studies on the effect of gel-based manicures on pulse oximetry readings. Hence, we evaluated the effect with two oximeters, using different technology and wavelength combinations.
Methods: 17 healthy female adult volunteers were recruited for this single-blind randomised controlled trial. Subjects with hypothermia, hypotension, poor plethysmographic waveform and nail pathology were excluded. Colours tested were: black, purple, navy blue, green, light blue, white, yellow, orange, pink and red. Pulse oximetry was measured at 15- and 30-second intervals using two different pulse oximeters, the Philips M1191BL and Masimo SET®. Means were compared using paired t-tests.
Results: Using the Masimo oximeter, light blue (ΔM = 0.97% ± 0.96%; p = 0.001) and orange (ΔM = 0.76 ± 1.17%; p = 0.016) gel nail polish resulted in a statistically significant increase from baseline SpO2 readings. With the Philips oximeter, the limits of agreement ranged from 2% for pink to 17% for black, indicating imprecision.
Conclusion: Gel-based manicures can result in overestimations of actual readings, delaying detection of hypoxaemia. Gel nail polish should be routinely removed or an alternative monitoring technique sought.
abstract_id: PUBMED:33926803
The Effect of Nail Polish and Henna on the Measures of Pulse Oximeters in Healthy Persons. Purpose: The aim of the study was to determine the effects of nail polish and henna on pulse oximetry measurements in healthy individuals.
Methods: The study was designed as a quasi-experimental, cross-sectional study. The population consisted of 682 women studying in a university's nursing department in the Mediterranean region during the academic year of 2016 to 2017. The sample consisted of 103 female students who agreed to participate in the study and met the inclusion criteria. The data were collected using a personal information form prepared in light of the literature. A single layer of nail polish of the same brand was applied: white on the thumb, red on the ring finger, and black on the little finger of the left hand, while henna was applied on the index finger of the left hand. The middle finger served as the control. A portable Nellcor (N-65) pulse oximeter was used for oxygen saturation measurements. The data were analyzed using means, SD, and paired-samples t tests.
Findings: There was no statistically significant difference between oxygen saturation measurements of fingers with henna and red nail polish and the control finger (P > .05). However, oxygen saturation levels of fingers with black and white nail polish were lower than the control group's levels, and the difference was statistically significant (P < .05).
Conclusions: The results demonstrated that white and black nail polish had an impact on oxygen saturation measurements, whereas henna and red nail polish had no effect on the measurements. Based on these findings, nurses may be advised to remove patients' nail polish before measuring oxygen saturation using the finger. In addition, conducting new studies investigating the effects of nail polish, henna, and false nails, which are increasingly used today, on SpO2 values, is suggested.
abstract_id: PUBMED:25083036
Evaluation of efficacy of a pulse oximeter to assess pulp vitality. Background: To evaluate the efficacy of pulse oximeter as a pulp vitality tester.
Materials And Methods: The sample group consisted of 60 patients aged 15 to 30 years with normal maxillary anterior teeth. Thirty nonvital teeth with complete endodontic fillings were tested as the control group. Systemic oxygen saturation values from the patients' fingers served as the control sample for comparison with pulp oxygen saturation values. Readings were recorded on the index finger first; teeth were then evaluated by placing the sensor onto the tooth.
Results: Nonvital teeth recorded oxygen saturation values of 0%. The mean value for central incisor was 85.11 (SD ± 2.07), for lateral incisors 80.21 (SD ± 2.03) and for canines 89.55 (SD ± 1.09). Their control values (patient's index fingers) averaged 95.88% (SD ± 0.66). Pearson's correlation analysis showed a correlation of 0.11 for central incisors, 0.19 for lateral incisors and 0.12 for canines.
Conclusion: This study confirms that the pulse oximeter is effective equipment for pulp vitality testing. Pulse oximetry reflects the actual vascular status of the pulp, in contrast to contemporary methods of evaluating pulp vitality.
abstract_id: PUBMED:17426236
Hemoglobin Bassett produces low pulse oximeter and co-oximeter readings. Variant hemoglobins can have altered oxygen affinity and can produce changes in oximeter readings. We present a case of hemoglobin Bassett, a possible cause of low pulse oximeter and co-oximeter readings in a 63-year-old woman.
Answer: The impact of fingernail polish on pulse oximeter readings has been studied with varying results. Some studies have found that certain colors of nail polish can cause a statistically significant decrease in oxygen saturation (SpO2) readings, but these changes are often not clinically significant. For instance, one study found that brown and blue nail polish resulted in a statistically significant decrease in mean SpO2 using two different pulse oximeter machines, but the difference was less than 1% and thus not clinically significant (PUBMED:17064901). Another study concluded that different nail polish colors, except for red, cause a clinically significant change in pulse oximeter readings in healthy volunteers (PUBMED:24054173).
A systematic review that included 21 publications mostly performed on healthy volunteers found small but statistically significant differences in mean SpO2 when fingers were coated with black, blue, brown, or purple nail polish. However, these differences were likely clinically insignificant, and only one of four studies reported a high proportion of unsuccessful oximeter readings from fingers painted with black (88.0%) or brown (36.0%) nail polish (PUBMED:37185113).
Other studies have investigated the effects of different substances and conditions on pulse oximetry. For example, gel-based manicures were found to result in overestimations of actual readings with certain colors, suggesting that gel nail polish should be removed or an alternative monitoring technique sought (PUBMED:30854571). Another study indicated that white and black nail polish impacted oxygen saturation measurements, while henna and red nail polish did not (PUBMED:33926803).
In summary, while some studies suggest that fingernail polish can affect pulse oximeter readings, the clinical significance of these effects appears to be minimal in healthy individuals. However, caution may be warranted with certain colors, and in clinical practice, it may be advisable to remove nail polish or use an alternative site for monitoring if accurate SpO2 readings are critical.
Instruction: Geographic variations in breast cancer mortality: do higher rates imply elevated incidence or poorer survival?
Abstracts:
abstract_id: PUBMED:9518983
Geographic variations in breast cancer mortality: do higher rates imply elevated incidence or poorer survival? Objectives: Mortality rates from breast cancer are approximately 25% higher for women in the northeastern United States than for women in the South or West. This study examined the hypothesis that the elevation is due to decreased survival rather than increased incidence.
Methods: Data on breast cancer incidence, treatment, and mortality were reviewed.
Results: The elevated mortality in the Northeast is apparent only in older women. For women aged 65 years and older, breast cancer mortality is 26% higher in New England than in the South, while incidence is only 3% higher. Breast cancer mortality for older women by state correlates poorly with incidence (r = 0.28).
Conclusions: Those seeking to explain the excess breast cancer mortality in the Northeast should assess survival and should examine differences in cancer control practices that affect survival.
abstract_id: PUBMED:28616736
Geographic variations in female breast cancer incidence in relation to ambient air emissions of polycyclic aromatic hydrocarbons. A significant geographic variation of breast cancer incidence exists, with incidence rates being much higher in industrialized regions. The objective of the current study was to assess the role of environmental factors such as exposure to ambient air pollution, specifically carcinogenic polycyclic aromatic hydrocarbons (PAHs) that may be playing in the geographic variations in breast cancer incidence. Female breast cancer incidence and ambient air emissions of PAHs were examined in the northeastern and southeastern regions of the USA by analyzing data from the Surveillance, Epidemiology, and End Results (SEER) Program and the State Cancer Profiles of the National Cancer Institute and from the Environmental Protection Agency. Linear regression analysis was conducted to evaluate the association between PAH emissions and breast cancer incidence in unadjusted and adjusted models. Significantly higher age-adjusted incidence rates of female breast cancer were seen in northeastern SEER regions, when compared to southeastern regions, during the years of 2000-2012. After adjusting for potential confounders, emission densities of total PAHs and four carcinogenic individual PAHs (benzo[a]pyrene, dibenz[a,h]anthracene, naphthalene, and benzo[b]fluoranthene) showed a significantly positive association with annual incidence rates of breast cancer, with a β of 0.85 (p = 0.004), 58.37 (p = 0.010), 628.56 (p = 0.002), 0.44 (p = 0.041), and 77.68 (p = 0.002), respectively, among the northeastern and southeastern states. This study suggests a potential relationship between ambient air emissions of carcinogenic PAHs and geographic variations of female breast cancer incidence in the northeastern and southeastern US. Further investigations are needed to explore these interactions and elucidate the role of PAHs in regional variations of breast cancer incidence.
abstract_id: PUBMED:31620366
Assessing and Explaining Geographic Variations in Mammography Screening Participation and Breast Cancer Incidence. Investigating geographic variations in mammography screening participation and breast cancer incidence helps improve prevention strategies to reduce the burden of breast cancer. This study examined the suitability of health insurance claims data for assessing and explaining geographic variations in mammography screening participation and breast cancer incidence at the district level. Based on screening unit data (1,181,212 mammography screening events), cancer registry data (13,241 incident breast cancer cases) and claims data (147,325 mammography screening events; 1,778 incident breast cancer cases), screening unit and claims-based standardized participation ratios (SPR) of mammography screening as well as cancer registry and claims-based standardized incidence ratios (SIR) of breast cancer between 2011 and 2014 were estimated for the 46 districts of the German federal state of Lower Saxony. Bland-Altman analyses were performed to benchmark claims-based SPR and SIR against screening unit and cancer registry data. Determinants of district-level variations were investigated at the individual and contextual level using claims-based multilevel logistic regression analysis. In claims and benchmark data, SPR showed considerable variations and SIR hardly any. Claims-based estimates were between 0.13 below and 0.14 above (SPR), and between 0.36 below and 0.36 above (SIR) the benchmark. Given the limited suitability of health insurance claims data for assessing geographic variations in breast cancer incidence, only mammography screening participation was investigated in the multilevel analysis. At the individual level, 10 of 31 Elixhauser comorbidities were negatively and 11 positively associated with mammography screening participation. Age and comorbidities did not contribute to the explanation of geographic variations. At the contextual level, unemployment rate was negatively and the proportion of employees with an academic degree positively associated with mammography screening participation. Unemployment, income, education, foreign population and type of district explained 58.5% of geographic variations. Future studies should combine health insurance claims data with individual data on socioeconomic characteristics, lifestyle factors, psychological factors, quality of life and health literacy as well as contextual data on socioeconomic characteristics and accessibility of mammography screening. This would allow a comprehensive investigation of geographic variations in mammography screening participation and help to further improve prevention strategies for reducing the burden of breast cancer.
abstract_id: PUBMED:12023271
Geographic variations in breast cancer survival among older women: implications for quality of breast cancer care. Background: Breast cancer care, such as utilization of screening procedures and types of treatment received, varies substantially by geographic region of the United States. However, little is known about variations in survival with breast cancer.
Methods: We examined breast cancer incidence, survival, and mortality in the 66 health service areas covered by the Surveillance, Epidemiology, and End Results (SEER) program for women aged 65 and older at diagnosis. Incidence and survival data were derived from SEER, while breast cancer mortality data were from Vital Statistics data.
Results: There was considerable variation in breast cancer survival among the 66 health service areas (chi2 = 202.7, p < .0001). There was also significant variation in incidence and mortality from breast cancer. In a partial correlation weighted for the size of the health service area, both incidence (r = .812) and percent 5-year survival (r = -.587) correlated with mortality. In a Poisson regression analysis, the combination of variation in incidence and variation in survival explained 90.9% of the variation in mortality.
Conclusions: There is considerable geographic variation in survival from breast cancer among older women, and this contributes to variation in breast cancer mortality. Geographic variations in breast cancer mortality should diminish as the quality of breast cancer care becomes more standardized.
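The Poisson regression described above, with area-level deaths modeled as a function of incidence and survival and population as the exposure, can be sketched as follows. All area-level data are simulated placeholders and `statsmodels` is an assumed dependency; this is a schematic, not the study's model.

```python
# Sketch: Poisson regression of area-level breast cancer deaths on
# incidence and 5-year survival, with population as exposure.
# All 66 "health service areas" below are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
areas = 66
pop = rng.integers(50_000, 500_000, areas).astype(float)   # women aged 65+
incidence = rng.normal(4.0, 0.5, areas)    # cases per 1,000 per year
survival5 = rng.normal(0.75, 0.05, areas)  # 5-year survival proportion
rate = 0.0005 * incidence * (1.5 - survival5)              # toy true rate
deaths = rng.poisson(rate * pop)

X = sm.add_constant(np.column_stack([incidence, survival5]))
model = sm.GLM(deaths, X, family=sm.families.Poisson(), exposure=pop).fit()
print(model.summary())
```

Using population as an exposure (an offset on the log scale) lets the coefficients describe death rates rather than raw counts, so areas of very different sizes remain comparable.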
abstract_id: PUBMED:33490183
Trends in cancer incidence and mortality rates in the United States from 1975 to 2016. Background: Cancer is the second leading cause of death in the United States (US). The goal of this study was to characterize the trends in cancer incidence and mortality in the US from 1975 to 2016.
Methods: In this study, we analyzed 4,711,958 cancer cases and 21,489,462 cancer deaths from the Surveillance, Epidemiology and End Results (SEER) database. Cancer incidence and mortality were assessed according to sex, race, and age group. Cancer survival rates between 2010 and 2016 were also examined.
Results: The continuous decline in the overall cancer mortality rate from the early 1990s has resulted in overall decreases of 33.6% and 23.6% in the cancer mortality rates of males and females, respectively. In males, the top three leading cancers and causes of cancer death from 1975 to 2016 were prostate, lung and bronchial, and colon and rectal cancers, while in females, the top three leading cancers and causes of cancer death from 1979 to 2016 were breast, lung and bronchial, and colon and rectal cancers. The 5-year relative survival rates of males and females for all cancers combined, diagnosed from 2010-2016, were 68.5% and 70.1%, respectively. The overall cancer incidence and mortality were higher in males than females from 1975-2016. Also, black people had higher mortality and shorter survival rates for all cancers combined compared with white people (in both sexes).
Conclusions: This study presents a comprehensive overview of cancer incidence and mortality in the US over the past 42 years. Such information can provide a scientific basis for cancer prevention and control.
abstract_id: PUBMED:23149312
Cancer incidence and patient survival rates among the residents in the Pudong New Area of Shanghai between 2002 and 2006. With the growing threat of malignancy to health, it is necessary to analyze cancer incidence and patient survival rates among the residents in Pudong New Area of Shanghai to formulate better cancer prevention strategies. A total of 43,613 cancer patients diagnosed between 2002 and 2006 were recruited from the Pudong New Area Cancer Registry. The incidence, observed survival rate, and relative survival rate of patients grouped by sex, age, geographic area, and TNM stage were calculated using the Kaplan-Meier, life table, and Ederer II methods, respectively. Between 2002 and 2006, cancer incidence in Pudong New Area was 349.99 per 100,000 person-years, and the 10 most frequently diseased sites were the lung, stomach, colon and rectum, liver, breast, esophagus, pancreas, brain and central nervous system, thyroid, and bladder. For patients with cancers of the colon and rectum, breast, thyroid, brain and central nervous system, and bladder, the 5-year relative survival rate was greater than 40%, whereas patients with cancers of the liver and pancreas had a 5-year relative survival rate of less than 10%. The 1-year to 5-year survival rates for patients grouped by sex, age, geographic area, and TNM stage differed significantly (all P < 0.001). Our results indicate that cancer incidence and patient survival in Pudong New Area vary by tumor type, sex, age, geographic area, and TNM stage.
abstract_id: PUBMED:38329248
Cancers: incidence and survival in metropolitan France. Incidence and survival rates are key indicators for cancer surveillance. They also help to drive cancer control programs and public health policies. Focusing on the main cancer localisations, this paper describes the latest incidence (2023) and survival (2018) rates, as well as their evolution since 1990 in metropolitan France. In 2023, the number of new cases of cancer was estimated to be 433,136, of which 57% occurred in men. Considering both sexes, the most frequent cancers are breast cancer (61,214 new cases), prostate cancer (59,885 new cases) and lung cancer (52,777 new cases). Although the "all cancer" incidence rate has remained quite stable for 33 years in men, it has been rising by almost 1% per year in women. Regarding survival, the standardized net survival (SNS) at 5 years shows great disparities among tumor sites, and it is overall higher in women. Cancers with the best prognosis are thyroid cancer (SNS at 5 years: 96%), prostate cancer (93%), skin melanoma (93%) and uterine cancer (74%). On the contrary, a few tumor locations, including the pancreas (SNS at 5 years of 11%), the liver (18%) and the lung (20%), are still associated with a poor prognosis, even if survival rates have increased in most cancer locations since 1990.
abstract_id: PUBMED:3012964
Geographic variations of breast carcinoma incidence in Sweden. Are the differences real? The validity of the reported geographic variations of breast carcinoma incidence in Sweden was assessed by examination of two possible sources of bias: non-notification to the Cancer Registry of diagnosed carcinoma cases and 'biologically benign' breast carcinoma, i.e. with a low disease-specific lethality, e.g. detected accidentally at autopsy. No significant geographic differences in registration deficit were found, even though non-notification tended to be slightly higher for older patients in low-incidence areas. Autopsy cases were estimated to account for less than one per cent of all cases and tended to be more frequent in high-incidence areas, but the regional differences were generally small and not significant. An analysis of the relationship between 10-year relative survival and age-standardized incidence in 27 different regions revealed no significant correlation, whereas there was a significant positive correlation between age-standardized incidence and mortality. These findings indicate that non-lethal breast carcinoma cases do not explain the variations in incidence. In conclusion, no evidence was found suggesting that the geographic differences were artifactual. Registration deficit and autopsy cases, however, may have slightly increased the variations among elderly women.
abstract_id: PUBMED:30464592
The fluctuating incidence, improved survival of patients with breast cancer, and disparities by age, race, and socioeconomic status by decade, 1981-2010. Purpose: Breast cancer is the most commonly diagnosed cancer and the leading cause of cancer-related deaths among women worldwide. However, the data on breast cancer incidence and survival over a long period, especially the dynamic changes in the role of race and socioeconomic status (SES), are scant.
Materials And Methods: To evaluate treatment outcomes of patients with breast cancer over the past 3 decades, data from the Surveillance, Epidemiology, and End Results (SEER) registries were used to assess survival. Period analysis was used to analyze incidence and survival trends; survival was evaluated by relative survival rates (RSRs) and Kaplan-Meier analyses. The HRs for age, race, stage, and SES were assessed by Cox regression.
Results: A total of 433,366 patients diagnosed with breast cancer between 1981 and 2010 were identified from the original nine SEER registries. The incidences of breast cancer in each decade were 107.1 per 100,000, 117.5 per 100,000, and 109.8 per 100,000. The 10-year RSRs improved each decade, from 70.8% to 81.5% to 85.6% (P<0.0001). The lower survival in the black and high-poverty groups was confirmed by Kaplan-Meier analyses and RSRs. Furthermore, Cox regression analyses demonstrated that age, race, SES, and stage are independent risk factors for patients with breast cancer in each decade.
Conclusion: The current data demonstrated a fluctuating incidence trend with improving survival rates of patients with breast cancer over the past 3 decades. In addition, the survival disparity exists among different races, ages, SESs, and stages.
abstract_id: PUBMED:12115569
Geographic differences in invasive and in situ breast cancer incidence according to precise geographic coordinates, Connecticut, 1991-95. To evaluate geographical variation of invasive and in situ breast cancer incidence rates using precise geographical coordinates for place of residence at diagnosis, latitude-longitude coordinates pertaining to 10,601 invasive and 1,814 in situ breast cancers for Connecticut women, 1991-95, were linked to US Census information on the 2,905 State census block groups. A spatial scan statistic was used to detect geographic excess or deficits in incidence and test the statistical significance of results, without prior assumptions about the size or location of such areas. The age adjusted invasive cancer incidence rate was 165.3/100,000 women/year. The spatial scan statistic identified 3 places with significantly low incidence rates and 4 places where rates were significantly high. The most probable location of low incidence was rural northeastern Connecticut where risk of disease, relative to elsewhere around the state, was 0.70 (p = 0.0001); the most probable place of elevated incidence was north central Connecticut where a relative risk of 1.34 (p = 0.002) was observed. Incidence of in situ disease was estimated to be significantly high for north central Connecticut (RR = 1.84; p = 0.0001). Geographic differences of invasive and in situ breast cancer incidence were observed. Examining cancer events at the lowest available level of data aggregation is beneficial in highlighting localized rate variations. Such information may enable public health officials to target additional resources for promoting breast cancer screening to specific locations.
Answer: The geographic variations in breast cancer mortality do not necessarily imply elevated incidence. For instance, in the northeastern United States, breast cancer mortality is higher, especially among older women, but this is not solely due to increased incidence. In New England, breast cancer mortality for women aged 65 and older is 26% higher than in the South, while the incidence is only 3% higher. This suggests that factors other than incidence, such as survival rates, may play a significant role in the observed mortality differences (PUBMED:9518983).
Indeed, survival rates vary geographically and contribute to differences in mortality. For example, there is considerable variation in breast cancer survival among health service areas covered by the SEER program for women aged 65 and older, which contributes to the variation in mortality. The combination of variation in incidence and survival explains a significant portion of the variation in mortality (PUBMED:12023271).
Environmental factors may also contribute to geographic variations in breast cancer incidence. A study assessing the role of exposure to ambient air pollution, specifically carcinogenic polycyclic aromatic hydrocarbons (PAHs), found a significant association between PAH emissions and annual incidence rates of breast cancer in the northeastern and southeastern regions of the USA (PUBMED:28616736).
Furthermore, disparities in mammography screening participation and socioeconomic factors such as unemployment rate, income, education, and the proportion of employees with an academic degree have been associated with geographic variations in mammography screening participation, which could indirectly affect breast cancer incidence and survival (PUBMED:31620366).
In summary, higher breast cancer mortality rates in certain geographic regions are influenced by a combination of factors, including but not limited to incidence rates. Survival rates, environmental factors, screening practices, and socioeconomic conditions also play crucial roles in shaping these geographic variations.
Instruction: Diagnosis of posttraumatic pulmonary embolism: is chest computed tomographic angiography acceptable?
Abstracts:
abstract_id: PUBMED:22992603
Dynamic computed tomographic pulmonary angiography as a problem-solving tool in indeterminate computed tomographic angiography for pulmonary embolism. Objective: Computed tomographic pulmonary angiography may be indeterminate in regions of slow arterial flow because of underlying lung disease. In this case, dynamic computed tomographic angiography of the pulmonary vasculature (dynamic CTPA) was used to confirm flow variation within the pulmonary arteries in regions of pulmonary fibrosis and to exclude pulmonary embolism.
Conclusions: Dynamic CTPA successfully demonstrates flow variation within the pulmonary arteries and may be a useful adjunct to exclude pulmonary embolism in CTPA cases with questionable arterial filling defects.
abstract_id: PUBMED:11155414
Diagnosis of pulmonary embolism with use of computed tomographic angiography. Pulmonary embolism (PE) is a common diagnostic problem, particularly in hospitalized patients. It remains a frequent cause of unexpected deaths. Traditionally, the diagnostic work-up for suspected PE has centered on the use of ventilation-perfusion (V-P) radionuclide lung scanning. However, V-P scanning does not provide adequate confirmation or exclusion of the diagnosis in the majority of patients who undergo this test. Although published guidelines advise further diagnostic testing after nondiagnostic V-P scans, clinicians infrequently perform such testing, and management decisions are commonly based on clinical judgment. In recent years, there has been an increasing interest in the use of computed tomographic (CT) angiography in the diagnostic evaluation of patients with suspected PE. Although there are unresolved issues regarding its sensitivity in detecting small peripheral emboli, CT angiography is more accurate than V-P scanning in the diagnosis of PE and yields other intrathoracic diagnoses. Herein we summarize the problems with the traditional approach centered on the use of V-P scanning in the diagnosis of PE and propose an alternative diagnostic strategy based primarily on the use of CT angiography.
abstract_id: PUBMED:19546768
Diagnosing pulmonary embolism in pregnancy using computed-tomographic angiography or ventilation-perfusion. Objective: To estimate the rate of nondiagnosis for patients who initially undergo computed-tomographic angiography compared with those who undergo ventilation-perfusion imaging to diagnose pulmonary embolism in pregnancy.
Methods: This was a retrospective cohort study of all women consecutively evaluated from 2001-2006 for clinical suspicion of pulmonary embolism who were pregnant or 6 weeks postpartum and underwent at least computed-tomographic angiography or ventilation-perfusion scan. Charts were abstracted for history, clinical presentation, examination, imaging, and pregnancy and maternal outcomes. Women who underwent computed-tomographic angiography for initial diagnosis were compared with women who underwent ventilation-perfusion. Primary outcome was defined as a nondiagnostic study: nondiagnostic for pulmonary embolism in the computed-tomographic angiography group, or "low or intermediate probability" in the ventilation-perfusion group. Univariable, bivariable, and multivariable analyses were performed.
Results: Of 304 women with a clinical suspicion of pulmonary embolism, initial diagnosis was sought by computed-tomographic angiography in 108 (35.1%) and by ventilation-perfusion in 196 (64.9%) women. Women who underwent computed-tomographic angiography tended to have a slightly higher rate of nondiagnostic study (17.0% compared with 13.2%, P=.38). Examining the subgroup of women with a normal chest X-ray, computed-tomographic angiography was much more likely to yield a nondiagnostic result than ventilation-perfusion, even after adjusting for relevant confounding effects (30.0% compared with 5.6%, adjusted odds ratio 5.4, 95% confidence interval 1.4-20.1, P<.01).
Conclusion: Pregnant or postpartum women with clinical suspicion of a pulmonary embolism and a normal chest X-ray are more likely to have a diagnostic study from a ventilation-perfusion scan compared with a computed-tomographic angiography. Evidence supports computed-tomographic angiography as a better initial test than ventilation-perfusion in patients with an abnormal chest X-ray.
Level Of Evidence: II.
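The normal-chest-X-ray subgroup comparison above reduces to a 2x2 contrast of nondiagnostic rates, for which a crude odds ratio with a Woolf-type confidence interval can be computed as sketched below. The cell counts are invented, since the abstract does not give subgroup denominators, and the published OR of 5.4 was additionally adjusted for confounders in a multivariable model.

```python
# Crude odds ratio with a Woolf-type 95% CI for "nondiagnostic study" by
# initial test. Cell counts are hypothetical; the published OR of 5.4 came
# from a multivariable model adjusting for confounders.
import numpy as np

a, b = 12, 28   # CTA arm: nondiagnostic / diagnostic (hypothetical)
c, d = 5, 85    # V/Q arm: nondiagnostic / diagnostic (hypothetical)

or_hat = (a * d) / (b * c)
se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo, hi = np.exp(np.log(or_hat) + np.array([-1.96, 1.96]) * se_log)
print(f"OR = {or_hat:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```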
abstract_id: PUBMED:19465853
Computerized tomographic pulmonary angiography versus ventilation perfusion lung scanning for the diagnosis of pulmonary embolism. Purpose Of Review: The purpose of this review is to focus on recent research that has addressed the relative merits of computed tomographic pulmonary angiography (CTPA) and ventilation perfusion (V/Q) scanning for the diagnosis of pulmonary embolism.
Recent Findings: Computed tomographic pulmonary angiography is the most sensitive test for the diagnosis of pulmonary embolism and its use has been associated with a rising incidence of the condition. Diagnostic algorithms using either CTPA or V/Q scanning have proven to be comparably safe to exclude the diagnosis of pulmonary embolism. Negative multidetector CTPA study results essentially ruled out the diagnosis of pulmonary embolism without the need to routinely exclude the presence of deep vein thrombosis. Use of multidetector CTPA was associated with significant radiation exposure that potentially increases the risk of secondary malignancies. This is particularly a concern for young women given the risk of breast cancer. Single-photon emission computed tomography (SPECT) V/Q and modified diagnostic criteria for V/Q scan interpretation increased diagnostic accuracy compared with conventional V/Q scanning and offer nuclear medicine modalities that are alternatives to CTPA in at least some patients with suspected pulmonary embolism, at a fraction of the risk of radiation exposure. Excluding low-risk patients for pulmonary embolism, as defined by clinical scoring systems and D-dimer testing, would enhance the yield of diagnostic testing.
Summary: Computed tomographic pulmonary angiography is the most reliable test for diagnosis of pulmonary embolism. However, diagnostic algorithms using V/Q scanning are safe and may be preferred in some patient populations.
abstract_id: PUBMED:36368975
A deep learning approach for automated diagnosis of pulmonary embolism on computed tomographic pulmonary angiography. Background: Computed tomographic pulmonary angiography (CTPA) is the diagnostic standard for confirming pulmonary embolism (PE). Since PE is a life-threatening condition, early diagnosis and treatment are critical to avoid PE-associated morbidity and mortality. However, PE remains subject to misdiagnosis.
Methods: We retrospectively identified 251 CTPAs performed at a tertiary care hospital between January 2018 and January 2021. The scans were classified as positive (n = 55) and negative (n = 196) for PE based on the annotations made by board-certified radiologists. A fully anonymized CT slice served as input for the detection of PE by a 2D segmentation model comprising a U-Net architecture with an Xception encoder. The diagnostic performance of the model was calculated at both the scan and the slice levels.
Results: The model correctly identified 44 out of 55 scans as positive for PE and 146 out of 196 scans as negative for PE, with a sensitivity of 0.80 [95% CI 0.68, 0.89], a specificity of 0.74 [95% CI 0.68, 0.80], and an accuracy of 0.76 [95% CI 0.70, 0.81]. At the slice level, the model correctly marked 4817 out of 5183 embolus-containing slices as positive, with a specificity of 0.89 [95% CI 0.88, 0.89], a sensitivity of 0.93 [95% CI 0.92, 0.94], and an accuracy of 0.89 [95% CI 0.887, 0.890]. The model also achieved an AUROC of 0.85 [0.78, 0.90] and 0.94 [0.936, 0.941] at the scan level and slice level, respectively, for the detection of PE.
Conclusion: The development of an AI model and its use for the identification of pulmonary embolism will support healthcare workers by reducing the rate of missed findings and minimizing the time required to screen the scans.
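The scan-level metrics quoted above can be recomputed directly from the reported counts (44/55 true positives, 146/196 true negatives). The sketch below does so with Wilson score intervals; the paper does not state which CI method it used, so Wilson is an assumption here.

```python
# Recomputing the scan-level metrics from the reported counts, with Wilson
# 95% intervals (the paper's CI method is not stated, so Wilson is assumed).
from statsmodels.stats.proportion import proportion_confint

tp, fn = 44, 11    # PE scans: detected / missed
tn, fp = 146, 50   # non-PE scans: correctly cleared / false alarms

def report(name, k, n):
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    print(f"{name}: {k / n:.2f} [95% CI {lo:.2f}, {hi:.2f}]")

report("sensitivity", tp, tp + fn)                # ~0.80
report("specificity", tn, tn + fp)                # ~0.74
report("accuracy", tp + tn, tp + fn + tn + fp)    # ~0.76
```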
abstract_id: PUBMED:12634525
Diagnosis of posttraumatic pulmonary embolism: is chest computed tomographic angiography acceptable? Background: Pulmonary angiography (PA-gram) has long been the accepted criterion standard for diagnosing pulmonary embolism (PE). Computed tomographic angiography has recently been advocated as an equivalent alternative to PA-gram. CT angiography is known to be insensitive for peripheral (segmental and subsegmental) emboli. We have previously found that a significant number of posttraumatic PEs occur early. We therefore hypothesized that because of the fragmentation of these early (soft) clots, posttraumatic PEs would be found disproportionately in the lung periphery.
Methods: Trauma patients with PE confirmed by PA-gram were identified from our trauma database and medical records. PA-grams and reports were re-reviewed and the location of all emboli was documented.
Results: We identified 45 patients, with an average age of 46 +/- 19 years; two thirds of the patients were men and 82% had a blunt mechanism of injury. Patients had PE diagnosed between days 0 and 57. Overall, PE was confined to segmental or smaller vessels in 27 (60%) patients and to subsegmental vessels in 7 (16%) patients. Twelve patients (27%) had a PE within the first 4 days. Furthermore, 32 patients (71%) had unilateral clot and 22 patients (48.9%) had clot confined to one region.
Conclusion: PE frequently occurs soon after injury. The majority of PEs after trauma are found peripherally (in segmental or subsegmental vessels). Right or left main pulmonary artery emboli are likely to be found only later in a trauma patient's course. Any diagnostic study used to diagnose pulmonary embolism in trauma patients must have sufficient resolution capacity to reliably detect segmental and subsegmental clot. A diagnostic modality such as CT scanning that is insensitive to peripheral emboli may miss a significant number of posttraumatic PEs.
abstract_id: PUBMED:29749576
Clinical outcomes after magnetic resonance angiography (MRA) versus computed tomographic angiography (CTA) for pulmonary embolism evaluation. Purpose: To compare patient outcomes following magnetic resonance angiography (MRA) versus computed tomographic angiography (CTA) ordered for suspected pulmonary embolism (PE).
Methods: In this IRB-approved, single-center, retrospective, case-control study, we reviewed the medical records of all patients evaluated for PE with MRA during a 5-year period along with age- and sex-matched controls evaluated with CTA. Only the first instance of PE evaluation during the study period was included. After application of our exclusion criteria to both study arms, the analysis included 1173 subjects. The primary endpoint was major adverse PE-related event (MAPE), which we defined as major bleeding, venous thromboembolism, or death during the 6 months following the index imaging test (MRA or CTA), obtained through medical record review. Logistic regression, chi-square test for independence, and Fisher's exact test were used with a p < 0.05 threshold.
Results: The overall 6-month MAPE rate following MRA (5.4%) was lower than following CTA (13.6%, p < 0.01). Amongst outpatients, the MAPE rate was lower for MRA (3.7%) than for CTA (8.0%, p = 0.01). Accounting for age, sex, referral source, BMI, and Wells' score, patients who underwent MRA were less likely to suffer MAPE than those who underwent CTA, with an odds ratio of 0.44 [0.24, 0.80]. Technical success rate did not differ significantly between MRA (92.6%) and CTA (90.5%) groups (p = 0.41).
Conclusion: Within the inherent limitations of a retrospective case-controlled analysis, we observed that the rate of MAPE was lower (more favorable) for patients following pulmonary MRA for the primary evaluation of suspected PE than following CTA.
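A hedged sketch of the kind of covariate-adjusted comparison described above: logistic regression of the 6-month MAPE outcome on imaging modality plus the listed confounders, using the statsmodels formula interface. The data frame, column names, and coding are all hypothetical and do not reproduce the study's records or its OR of 0.44.

```python
# Hypothetical logistic regression of 6-month MAPE on modality plus the
# covariates named in the abstract. Simulated data for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1173
df = pd.DataFrame({
    "mape": rng.integers(0, 2, n),      # 1 = major adverse PE-related event
    "mra": rng.integers(0, 2, n),       # 1 = MRA, 0 = CTA
    "age": rng.integers(18, 90, n),
    "female": rng.integers(0, 2, n),
    "bmi": rng.normal(28, 5, n),
    "wells": rng.integers(0, 9, n),     # Wells' score
})

model = smf.logit("mape ~ mra + age + female + bmi + wells", data=df).fit(disp=0)
print(np.exp(model.params["mra"]))           # adjusted odds ratio for MRA
print(np.exp(model.conf_int().loc["mra"]))   # its 95% CI
```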
abstract_id: PUBMED:31206454
Smoke: How to Differentiate Flow-related Artifacts From Pathology on Thoracic Computed Tomographic Angiography. Nonuniform contrast opacification of vasculature is frequently encountered on thoracic computed tomographic angiography. The purpose of this pictorial essay is to discuss the appearance of, and the factors underlying, mixing artifacts, which we term "smoke." We provide an approach to distinguishing it from pathology, including pulmonary embolism, aortic dissection, and thrombus. Smoke results from a combination of technical factors, abnormal physiology, or inflow of unopacified blood. Smoke produces ill-defined filling defects that may be confidently diagnosed in many cases if these fundamentals are applied.
abstract_id: PUBMED:11480332
Combined computed tomographic pulmonary angiography and venography for evaluation of pulmonary embolism and lower extremity deep venous thrombosis: report of two cases. Pulmonary embolism (PE) and deep venous thrombosis (DVT) are major causes of morbidity and mortality, which can be reduced with accurate diagnosis and proper treatment. More than 90% of PEs originate in lower-extremity DVT. Currently, evaluation of PEs and lower-extremity DVT requires 2 separate tests (ventilation-perfusion scan, computed tomographic pulmonary angiography (CTPA), or pulmonary angiography for PE and sonography, computed tomographic venography (CTV), conventional venography, or magnetic resonance venography for DVT). Combined computed tomographic pulmonary angiography and venography (CTPAV) is a new diagnostic technique that combines CTPA and CTV into a single study for the screening of PE and subdiaphragmatic DVT. CTPAV is a modified CTPA study that evaluates the subdiaphragmatic deep vein system at the time of CTPA, without additional venipuncture or contrast medium. It is easy to perform, fairly easy to interpret, readily available, and requires no invasive procedure. We present 2 cases of multiple PE and lower-extremity DVT in which CTPAV was used.
abstract_id: PUBMED:26941814
Qualitative evaluation of pulmonary CT angiography findings in pregnant and postpartum women with suspected pulmonary thromboembolism. Background: Considering the importance of using more appropriate imaging technique for accurate diagnosis of pulmonary thromboembolism (PTE) with less side effects, we aimed to evaluate the quality of pulmonary 64-multidetector computed tomographic (MDCT) angiography in pregnant and postpartum women with suspected PTE in Isfahan.
Materials And Methods: In this descriptive study, radiological findings of pregnant and postpartum women with suspected PTE who underwent pulmonary 64-MDCT angiography were evaluated. Prevalence of PTE in pregnant and postpartum women, mean of pulmonary arteries density for right and left pulmonary arteries, and their lobar and segmental branches, diagnostic quality of the pulmonary arteries density and their scoring, frequency of diagnostic and nondiagnostic images, mean of radiation dose and mean of bolus time, and the correlation between the quality of the vascular density with the peak density of the pulmonary artery were determined.
Results: In this study, 44 pregnant and postpartum women with suspected PTE were selected. The overall prevalence of PTE was 9.1% (4/44). PTE was diagnosed in 1 (3.7%) pregnant and 3 (17.5%) postpartum women (P = 0.14). Mean density of the pulmonary trunk was 278.81 ± 108.16 Hounsfield units (HU) and 308.41 ± 59.30 HU in pregnant and postpartum women, respectively. Mean bolus timing, kilovoltage peak (kVp), tube current, and dose length product (DLP) were 12.53 ± 2.36 s, 105.22 ± 45.71 kVp, 382.9 ± 173.5 mA, and 317.98 ± 78.92 mGy/cm, respectively. The rate of nondiagnostic images was 4.5%.
Conclusion: Our findings indicated that pulmonary 64-MDCT angiography is an appropriate imaging method for diagnosing PTE in pregnant and postpartum women with suspected PTE. The use of a fast CT system (64-MDCT) together with a high flow rate, a high contrast medium concentration, and a low kVp seems to explain the appropriate image quality obtained relative to conventional computed tomographic pulmonary angiography (CTPA) protocols.
Answer: The diagnosis of posttraumatic pulmonary embolism (PE) presents unique challenges, and the acceptability of chest computed tomographic angiography (CTA) for this purpose has been a subject of investigation. A study examining the diagnosis of PE in trauma patients found that pulmonary embolism frequently occurs soon after injury and is often found peripherally, in segmental or subsegmental vessels. The study highlighted that a significant number of posttraumatic PEs occur early and may be found disproportionately in the lung periphery. Given that CTA is known to be insensitive for peripheral emboli, the study concluded that CTA may miss a significant number of posttraumatic PEs, suggesting that any diagnostic study used to diagnose PE in trauma patients must have sufficient resolution capacity to reliably detect segmental and subsegmental clot (PUBMED:12634525).
In contrast, CTA has been advocated as an equivalent alternative to pulmonary angiography (PA-gram), which has long been the accepted criterion standard for diagnosing PE. However, the insensitivity of CTA for peripheral emboli is a concern, especially in the context of posttraumatic PEs, which may be found in segmental or subsegmental vessels (PUBMED:12634525).
Overall, while CTA is a widely used and reliable test for the diagnosis of PE, its limitations in detecting peripheral emboli, particularly in the context of posttraumatic cases, suggest that it may not always be acceptable as a standalone diagnostic tool for posttraumatic PE. Additional diagnostic modalities or approaches may be necessary to ensure accurate diagnosis in such cases.
Instruction: Do logbooks influence recall of physical activity in validation studies?
Abstracts:
abstract_id: PUBMED:15235322
Do logbooks influence recall of physical activity in validation studies? Purpose: To examine whether physical activity logbooks influence estimates of validity of 7-d recall physical activity questionnaires.
Methods: A convenience sample of 551 adults aged 18-75 yr wore an MTI accelerometer for seven consecutive days and were then randomly administered two of four 7-d recall physical activity questionnaires that varied in length and format (Active Australia Survey (AAS), long and short International Physical Activity Questionnaires (IPAQ-L and IPAQ-S), and Behavioral Risk Factor Surveillance System (BRFSS)). A subsample of 75% concurrently completed a physical activity logbook.
Results: Correlations (rho) between self-reported and measured duration of moderate- and vigorous-intensity activity and total activity were similar among participants who received a logbook and those who did not for each of the four instruments. There was also no interaction between assessment method (survey, accelerometer) and the assignment of a logbook. For the IPAQ-L, however, variability in the difference between accelerometer data and responses to the vigorous items was smaller among those assigned a logbook (F = 4.128, df = 260, P = 0.043). Overall, there were no differences in percent agreement or kappa for participation in sufficient levels of physical activity according to receipt of a logbook for any of the surveys.
Conclusion: The process of self-monitoring through completion of a logbook does not appear to influence estimates of validity for brief or long questionnaires with global questions. Whereas the magnitude of error in accuracy of recall of particular types of activity may be reduced by completion of a logbook that is similar in structure to the survey being validated, this does not appear to influence overall estimates of validity.
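The two agreement statistics used throughout this validation literature, Spearman's rho for continuous minutes and kappa (with percent agreement) for the binary "sufficiently active" classification, can be computed as in the sketch below. The arrays and the 150 min/week cut-point are illustrative assumptions, not the study's exact data or threshold.

```python
# Spearman's rho for continuous minutes and kappa (plus percent agreement)
# for the binary "sufficiently active" classification, on invented data.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
accel = rng.gamma(2.0, 60, 200)                               # measured minutes/week
survey = np.clip(accel * rng.normal(1.1, 0.4, 200), 0, None)  # noisy self-report

rho, p = spearmanr(survey, accel)
print(f"rho = {rho:.2f}, p = {p:.3g}")

# 150 min/week is a common "sufficient activity" convention; the study's
# exact cut-point is not restated here.
kappa = cohen_kappa_score(survey >= 150, accel >= 150)
agree = np.mean((survey >= 150) == (accel >= 150))
print(f"kappa = {kappa:.2f}, percent agreement = {agree:.0%}")
```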
abstract_id: PUBMED:20864762
Validation of an historical physical activity recall tool in postpartum women. Background: Physical activity (PA) is an important component of a healthy pregnancy and postpartum period. Since prospective PA monitoring throughout gestation is difficult, a valid PA recall tool would be of significant benefit to researchers. The purpose of this study was to evaluate the ability of women to recall their physical activity performed during pregnancy and postpartum, 6 years later.
Methods: Thirty women participated in an historical PA recall study. Pregnancy PA was monitored carefully via assisted physical activity diary (PAD) 6 years before the current investigation. A Modifiable Activity Questionnaire (MAQ) was used to assess current and past pregnancy PA. The MAQ was administered for each time period in the order of most distant past to most current. Leisure time energy expenditure values (kcal/kg/day) calculated from the PAD and the MAQ were compared.
Results: MAQ energy expenditure values showed good positive relationships with PAD measures at 20 weeks gestation (r = .57; P < .01), 32 weeks gestation (r = .85; P < .01), and 12 weeks postpartum (r = .86; P < .01). Correlations found were similar to those from previous PA recall and MAQ validation studies using nonpregnant populations.
Conclusions: The MAQ is an appropriate tool to assess pregnancy and postpartum PA in women 6 years postpartum.
abstract_id: PUBMED:10412965
Reliability of recall of physical activity in the distant past. Substantial data exist supporting the role of physical activity in the etiology of several chronic diseases. Many chronic diseases begin developing 20-30 years before they become clinically evident. Since researchers often must rely on recall to characterize the long term habits of study participants, the accuracy of recall of physical activity is an important methodological issue in etiologic studies. The purpose of this study was to examine the quality of recall of physical activity in the distant past in a cohort of western New York residents followed since 1960. Paired t tests and intraclass correlation coefficients (ICCs) were used to compare "original" (1960) and "recalled" (1992-1996) reports of weekday (occupational) and free-day (leisure time) physical activity. Results showed that the recalled reports underestimated past weekday activities when overall activity was examined; estimates closer to the originals were found when levels of activity were examined. Recall was best for weekday light (ICC = 0.43) and weekday moderate (ICC = 0.45) activity in both sexes and free-day hard activity in females (ICC = 0.45). Most participants underestimated past free-day activity, but males overestimated free-day hard activity. Correlations for free-day activity were highest for summer sports in females (ICC = 0.29) and winter sports in both sexes (ICC = 0.39) and were low for walking and "other activity." Considering the length of time between the original interviews and the recall interviews, the correlations found here are remarkable and close to those found in other studies where recall intervals were 10 years or less.
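The ICCs reported in that study come from comparing paired measurements on the same subjects; a one-way random-effects ICC(1,1) can be computed directly from the ANOVA mean squares, as sketched below on simulated original-versus-recalled scores (all values invented).

```python
# One-way random-effects ICC(1,1) from ANOVA mean squares, on simulated
# paired scores (original 1960 report vs. 1990s recall).
import numpy as np

rng = np.random.default_rng(3)
n = 120
truth = rng.normal(50, 10, n)
original = truth + rng.normal(0, 5, n)
recalled = truth + rng.normal(-4, 9, n)   # underestimation plus extra noise

x = np.column_stack([original, recalled])  # n subjects x k=2 occasions
k = x.shape[1]
msb = k * np.sum((x.mean(axis=1) - x.mean()) ** 2) / (n - 1)             # between
msw = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))   # within
icc = (msb - msw) / (msb + (k - 1) * msw)
print(f"ICC(1,1) = {icc:.2f}")
```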
abstract_id: PUBMED:24767807
A validation study concerning the effects of interview content, retention interval, and grade on children's recall accuracy for dietary intake and/or physical activity. Background: Practitioners and researchers are interested in assessing children's dietary intake and physical activity together to maximize resources and minimize subject burden.
Objective: Our aim was to investigate differences in dietary and/or physical activity recall accuracy by content (diet only; physical activity only; diet and physical activity), retention interval (same-day recalls in the afternoon; previous-day recalls in the morning), and grade (third; fifth).
Design: Children (n=144; 66% African American, 13% white, 12% Hispanic, 9% other; 50% girls) from four schools were randomly selected for interviews about one of three contents. Each content group was equally divided by retention interval, each equally divided by grade, each equally divided by sex. Information concerning diet and physical activity at school was validated with school-provided breakfast and lunch observations, and accelerometry, respectively. Dietary accuracy measures were food-item omission and intrusion rates, and kilocalorie correspondence rate and inflation ratio. Physical activity accuracy measures were absolute and arithmetic differences for moderate to vigorous physical activity minutes.
Statistical Analyses Performed: For each accuracy measure, linear models determined effects of content, retention interval, grade, and their two-way and three-way interactions; ethnicity and sex were control variables.
Results: Content was significant within four interactions: intrusion rate (content×retention-interval×grade; P=0.0004), correspondence rate (content×grade; P=0.0004), inflation ratio (content×grade; P=0.0104), and arithmetic difference (content×retention-interval×grade; P=0.0070). Retention interval was significant for correspondence rate (P=0.0004), inflation ratio (P=0.0014), and three interactions: omission rate (retention-interval×grade; P=0.0095), intrusion rate, and arithmetic difference (both already mentioned). Grade was significant for absolute difference (P=0.0233) and the five interactions already mentioned. Content effects depended on other factors. Grade effects were mixed. Dietary accuracy was better with the same-day than the previous-day retention interval.
Conclusions: Results do not support integrating dietary intake and physical activity in children's recalls, but do support using shorter rather than longer retention intervals to yield more accurate dietary recalls. Additional validation studies need to clarify age effects and identify evidence-based practices to improve children's accuracy for recalling dietary intake and/or physical activity.
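The linear models described above, an accuracy measure regressed on three factors, their interactions, and control variables, can be specified compactly with a model formula, as in this sketch on simulated data (all factor levels and the outcome are invented):

```python
# A factorial linear model with all two- and three-way interactions, plus
# sex and ethnicity as controls. Simulated data; levels and outcome invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(4)
n = 144
df = pd.DataFrame({
    "omission_rate": rng.beta(2, 5, n),
    "content": rng.choice(["diet", "pa", "diet_pa"], n),
    "interval": rng.choice(["same_day", "prev_day"], n),
    "grade": rng.choice(["third", "fifth"], n),
    "sex": rng.choice(["girl", "boy"], n),
    "ethnicity": rng.choice(["aa", "white", "hispanic", "other"], n),
})

# '*' expands to main effects plus all interactions among the three factors
m = smf.ols("omission_rate ~ content * interval * grade + sex + ethnicity",
            data=df).fit()
print(anova_lm(m, typ=2))   # F-test for each main effect and interaction
```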
abstract_id: PUBMED:11164131
Validation of the Stanford 7-day recall to assess habitual physical activity. Purpose: The ability of the Stanford 7-Day Recall (7-DR), a well known instrument for surveying work and leisure-time physical activity (PA) in epidemiologic studies, to assess levels of habitual PA in men and women was evaluated.
Methods: The 7-DR was administered twice, one month apart. Its accuracy was studied in 77 men and women, aged 20-59 years, by its repeatability and comparison of both administrations of the 7-DR with: fourteen 48-hour physical activity records; fourteen 48-hour Caltrac accelerometer readings; peak oxygen uptake (VO(2) peak) determinations; and percent body fat. These criteria measures were obtained over a year's duration.
Results: One-month repeatability correlation coefficients for 7-DR total activity were r = 0.60 and r = 0.36 for men and women, respectively. Comparison of corresponding indices of activity between the 7-DR and the PA record indicated: 1) a closer relationship in men for total (r = 0.58 for the visit 10 7-DR and 0.66 for the visit 11 7-DR, p ≤ 0.01) and very hard (r = 0.44 and 0.60, p ≤ 0.05) activity than in women (r = 0.32 and 0.33, p ≤ 0.05, and r = 0.21, ns, and 0.43, p ≤ 0.01, respectively); and 2) in general, lower and less consistent associations for hard, moderate, and light activity. Total PA by the 7-DR was significantly associated with Caltrac readings in men only. 7-DR results were more consistently related to VO(2) peak in men than in women, but were significantly related to percent body fat in women only.
Conclusions: The ability of the 7-DR to assess habitual PA was greater for more vigorous than for lower intensity PA.
abstract_id: PUBMED:1912047
Validation of a three-month physical activity recall questionnaire with a seven-day food intake and physical activity diary. We assessed the validity of a three-month physical activity questionnaire. The validation instrument was a seven-day self-report diary of physical activity and food intake, given to 113 randomly selected persons. We obtained Spearman correlations of 0.60, 0.48, and 0.91 and kappa scores of 0.36, 0.23, and 0.62 from the physical activity recall and diary for moderate, vigorous, and total activity. We conclude that the three-month recall questionnaire reasonably reflects activity in this community-based sample.
abstract_id: PUBMED:23423997
An evaluation of questionnaires assessing physical activity levels in youth populations. The aim of this study was to review and organize, according to recall-time-based criteria, questionnaires created and validated to assess the level of physical activity in children and adolescents, with the intention of enabling their proper understanding and subsequent use by nurses and health care professionals. The questionnaires' degree of reliability and validity was the main feature taken into account in judging their quality. Thirty-eight papers were retrieved and analyzed, 31 of which were aimed at designing and validating a questionnaire intended for physical activity (PA) level assessment in youth populations (4- to 19-year-olds). The most widely used questionnaires were those whose recall period spans one to seven days. In general, all questionnaires were characterized by the use of a pen-and-paper format and scarce utilization of new technologies. Based upon validity and reliability criteria, nurses assessing PA level in children and adolescents should use the "Children's Leisure Activities Study Survey" and the "Flemish Physical Activity Computer Questionnaire", respectively. There is a need for the validation of these tools in other languages and cultures.
abstract_id: PUBMED:12471306
The reliability and validity of the Adolescent Physical Activity Recall Questionnaire. Purpose: This study assessed the test-retest reliability and validity of the Adolescent Physical Activity Recall Questionnaire (APARQ) among 13- and 15-yr-old Australians.
Methods: Two studies were conducted using the same instrument. Self-reported participation in organized and nonorganized physical activity was summarized into four measures: a three-category measure of activity, a two-category measure, and estimated energy expenditure expressed as a continuous variable and as quintiles. The reliability study (N = 226) assessed strength of agreement for all measures between responses to two administrations of the questionnaire. The validity study (N = 2026) assessed the relationship between the APARQ and performance on the Multistage Fitness Test (MFT).
Results: Reliability study: for the three-category measure, percent agreement ranged from 67% to 83% and weighted kappa ranged from 0.33 to 0.71. For the two-category measure, percent agreement ranged from 76% to 90% and kappa ranged from 0.25 to 0.74. For energy expenditure expressed as a continuous variable, the intraclass correlation coefficients were generally greater than 0.6 for grade 10 students, but most were below 0.5 for grade 8 students. Validity study: for the three-category measure, mean laps were higher in the adequately and vigorously active categories than in the inactive category for girls, but only the mean laps in the vigorously active and inactive categories were significantly different for boys. For the two-category measure, mean laps were higher in the active category than in the inactive category for all groups. Correlations between energy expenditure and MFT laps were 0.15, 0.21, 0.14, and 0.39 for grade 8 boys, grade 8 girls, grade 10 boys, and grade 10 girls, respectively.
Conclusion: The APARQ has acceptable to good reliability and acceptable validity, but further validation using other methods and in other population groups is required.
abstract_id: PUBMED:27623360
Revisiting the International Physical Activity Questionnaire (IPAQ): Assessing physical activity among individuals with schizophrenia. Background: Individuals with schizophrenia tend to have low levels of physical activity (PA) which contributes to high rates of physical comorbidities. Valid and reliable methods of assessing PA are essential for advancing health research. Ten years after initial validation of the Short-Form International Physical Activity Questionnaire (IPAQ), this study expands on the initial validation study by examining retest reliability over a 4-week period, assessing validity with a larger sample, and comparing validity of the IPAQ to a 24-hour recall alternative.
Methods: Participants completed the IPAQ at baseline and 4weeks later, along with a 24-hour PA recall at week 4. At week 3 participants wore waist accelerometers for 7days. Spearman's correlation coefficients and Bland-Altman plots were calculated based on weekly minutes of moderate to vigorous PA (MVPA).
Results: Test-retest reliability for the self-administered IPAQ was ρ=0.47, p<0.001 for MVPA. Correlation between IPAQ assessment and accelerometer-determined MVPA was ρ=0.30, p=0.003. The 24-hour recall correlated significantly with MVPA on the previous day (ρ=0.27, p=0.012). A Bland-Altman plot indicated that the IPAQ-SF underreported MVPA by 119.2 min (72%) on average compared with accelerometry (95% limits of agreement -1017.1 to 778.7 min, -292% to 147%).
Conclusion: Compared to previous IPAQ validation work in this population, criterion validity was similar, but reliability was lower over a 4-week period. MVPA criterion validity of the 24-hour recall was comparable to the 7-day self-report IPAQ. Findings further support that the IPAQ is a suitable assessment tool for epidemiological studies. Objective measures of physical activity are recommended for intervention assessment.
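The Bland-Altman quantities cited above (mean bias and 95% limits of agreement) are straightforward to compute, as in this sketch on toy weekly MVPA data (all values invented):

```python
# Bland-Altman bias and 95% limits of agreement on toy weekly MVPA minutes.
import numpy as np

rng = np.random.default_rng(5)
accel = rng.gamma(2.5, 80, 60)                                # accelerometer
ipaq = np.clip(accel * 0.4 + rng.normal(0, 90, 60), 0, None)  # under-reporting

diff = ipaq - accel
bias = diff.mean()
loa = bias + np.array([-1.96, 1.96]) * diff.std(ddof=1)
print(f"bias = {bias:.1f} min/week, 95% LoA = ({loa[0]:.1f}, {loa[1]:.1f})")
# Plotted, diff goes on the y-axis against the pairwise mean on the x-axis.
```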
abstract_id: PUBMED:18359191
Indications of recall bias found in a retrospective study of physical activity and myocardial infarction. Objective: To evaluate the presence of recall bias in retrospective studies of physical activity and myocardial infarction.
Study Design And Setting: In 2005, seventy-eight cases who had suffered from a myocardial infarction and 243 control subjects, who had previously participated in the Swedish WOLF (Work, Lipids, and Fibrinogen) study, were asked about their physical activity level during 1990-2005. The answers about recalled past leisure time, occupational, and household physical activity level were compared with physical activity level as reported at the baseline examination of the WOLF study in 1992-1998.
Results: The proportion who recalled the same activity level as originally reported ranged from 69% to 96% (cases) and 69% to 89% (controls), and the kappa values ranged from 0.30 to 0.91 (cases) and 0.46 to 0.59 (controls), with the exception of perceived physical workload in household work, which showed low agreement between the originally stated and later recalled activity levels. Some differences were found between cases and controls regarding recall of past occupational activity, indicating the presence of recall bias in this domain of physical activity.
Conclusion: We cannot rule out recall bias when using retrospectively recalled information about occupational physical activity in studies of physical activity and myocardial infarction.
Answer: Based on the provided abstracts, the influence of logbooks on the recall of physical activity in validation studies appears to be minimal. A study examining whether physical activity logbooks affect the validity of 7-day recall physical activity questionnaires found that self-monitoring through the completion of a logbook does not seem to influence estimates of validity for brief or long questionnaires with global questions. Although the magnitude of error in the accuracy of recall of particular types of activity may be reduced by the completion of a logbook that is similar in structure to the survey being validated, this does not appear to influence overall estimates of validity (PUBMED:15235322).
In summary, the evidence suggests that the process of self-monitoring through logbooks does not significantly impact the validity of physical activity recall in the context of the studies examined.
Instruction: Are all health plans created equal?
Abstracts:
abstract_id: PUBMED:9302244
Are all health plans created equal? The physician's view. Context: The health care market is demanding increasing amounts of information regarding quality of care in health plans. Physicians are a potentially important but infrequently used source of such information.
Objective: To assess physicians' views on health plan practices that promote or impede delivery of high-quality care in health plans and to compare ratings between plans.
Setting: Minneapolis-St Paul, Minn.
Participants: One hundred physicians in each of 3 health plans. Each physician rated 1 health plan.
Main Outcome Measures: Likert-type items that assessed health plan practices that promote or impede delivery of high-quality care.
Results: A total of 249 physicians (84%) completed the survey. Fewer than 20% of all physicians gave plans the highest rating (excellent or strongly agree) for health plan practices that promote delivery of high-quality care (such as providing continuing medical education for physicians, identifying patients needing preventive care, and providing physicians feedback about practice patterns). Barriers to delivering high-quality care related to sufficiency of time to spend with patients, covered benefits and copayment structure, and utilization management practices. Ratings differed across health plans. For example, the percentage of physicians indicating that they would recommend the plan they rated to their own family was 64% for plan 1, 92% for plan 2, and 24% for plan 3 (P<.001 for all comparisons).
Conclusions: Physician surveys can highlight strengths and weaknesses in health plans, and their ratings differ across plans. Physician ratings of health plan practices that promote or impede delivery of high-quality care may be useful to consumers and purchasers of health care as a tool to evaluate health plans and promote quality improvement.
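The plan-to-plan comparison of recommendation rates above (64% vs 92% vs 24%) is a test of proportions across three groups. The sketch below reproduces that kind of test with a chi-square statistic on a 2x3 table; the roughly equal split of the 249 respondents into ~83 raters per plan is an assumption, not a figure from the abstract.

```python
# Chi-square test of "would recommend" proportions across the three plans,
# assuming ~83 raters per plan (249 respondents in total).
import numpy as np
from scipy.stats import chi2_contingency

raters = np.array([83, 83, 83])                    # assumed per-plan counts
yes = np.round(raters * np.array([0.64, 0.92, 0.24])).astype(int)
table = np.vstack([yes, raters - yes])             # rows: recommend yes / no
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```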
abstract_id: PUBMED:10152752
ERISA and health plans. This Issue Brief is designed to provide a basic understanding of the relationship of the Employee Retirement Income Security Act of 1974 (ERISA) to health plans. It is based, in part, on an Employee Benefit Research Institute-Education and Research Fund (EBRI-ERF) educational briefing held in March 1995. This report includes a section by Peter Schmidt of Arnold & Porter; a section about multiemployer plans written by Judy Mazo of The Segal Company; and a section about ERISA and state health reform written by Kala Ladenheim of the Intergovernmental Health Policy Project. Starting in the late 1980s, three trends converged to make ERISA a critical factor in state health reforms: increasingly comprehensive state health policy experimentation; changes in the makeup of the insurance market (including the rise in self-insurance and the growth of managed care); and increasingly expansive interpretations of ERISA by federal courts. The changing interpretations of ERISA's relationship to three categories of state health initiatives--insurance mandates, medical high-risk pools, and uncompensated care pools--illustrate how these forces are playing out today. ERISA does have a very broad preemptive effect. Federal statutes do not need to say anything about preemption in order to preempt state law. For example, if there is a direct conflict, it would be quite clear under the Supremacy Clause [of the U.S. Constitution] that ERISA, or any federal statute, would preempt a directly conflicting state statute. States can indirectly regulate health care plans that provide benefits through insurance contracts by establishing the terms of the contract. And they also raise money by imposing premium taxes. But they cannot do the same with respect to self-funded plans. That is one of the factors that has caused a great rise in the number of self-funded plans. State regulation [of employee benefits] can create three kinds of problems: cost of taxes, fees, or other charges; cost of dealing with substantive, possibly inconsistent, benefit standards; and cost of identifying, understanding, and complying with the regulations themselves.
abstract_id: PUBMED:9304906
The Highmark Pegasus Project report: characteristics of highly successful health plans. The Highmark Blue Cross/Blue Shield Pegasus Project was created in the fall of 1996 to benchmark best practices at health plans around the United States through extensive interviews, literature searches, and other measures. Characteristics of highly successful health plans across a number of major categories are summarized in the final recommendations presented by this report.
abstract_id: PUBMED:31419862
Price Shopping in Consumer-Directed Health Plans. We use health insurance claims data from 63 large employers to estimate the extent of price shopping for nine common outpatient services in consumer-directed health plans (CDHPs) compared to traditional health plans. The main measures of price shopping include (1) the total price paid on the claim, (2) the share of claims from low- and high-cost providers, and (3) the savings from price shopping relative to choosing prices randomly. All analyses control for individual- and zip-code-level demographics and plan characteristics. We also estimate differences in price shopping within CDHPs depending on expected health care costs and whether the service was bought before or after reaching the deductible. For eight out of nine services analyzed, prices paid by CDHP and traditional plan enrollees did not differ significantly; CDHP enrollees paid 2.3% less for office visits. Similarly, office visits were the only service for which CDHP enrollment resulted in a significantly larger share of claims from low-cost providers and greater savings from price shopping relative to traditional plans. There was also no evidence that, within CDHP plans, consumers with lower expected medical expenses exhibited more price shopping or that consumers exhibited more price shopping before reaching the deductible.
abstract_id: PUBMED:11534218
The Pregnancy Discrimination Act: employer health insurance plans must cover prescription contraceptives. The Equal Employment Opportunity Commission (EEOC), which recently took the position that employer health plans are required, in many instances, to cover prescription contraceptives, has issued guidelines to assist employers in complying with the law prohibiting discrimination on the basis of sex and pregnancy. Employers should review these guidelines carefully in relation to their health care plans.
abstract_id: PUBMED:33936182
Planning for health equity in the Americas: an analysis of national health plans. There is growing recognition that health and well-being improvements have not been shared across populations in the Americas. This article analyzes 32 national health sector policies, strategies, and plans across 10 different areas of health equity to understand, from one perspective, how equity is being addressed in the region. It finds significant variation in the substance and structure of how the health plans handle the issue. Nearly all countries explicitly include health equity as a clear goal, and most address the social determinants of health. Participatory processes documented in the development of these plans range from none to extensive and robust. Substantive equity-focused policies, such as those to improve physical accessibility of health care and increase affordable access to medicines, are included in many plans, though no country includes all aspects examined. Countries identify marginalized populations in their plans, though only a quarter specifically identify Afro-descendants and more than half do not address Indigenous people, including countries with large Indigenous populations. Four include attention to migrants. Despite health equity goals and data on baseline inequities, fewer than half of countries include time-bound targets on reducing absolute or relative health inequalities. Clear accountability mechanisms such as education, reporting, or rights-enforcement mechanisms in plans are rare. The nearly unanimous commitment across countries of the Americas to equity in health provides an important opportunity. Learning from the most robust equity-focused plans could provide a road map for efforts to translate broad goals into time-bound targets and eventually to increasing equity.
abstract_id: PUBMED:10345788
Features of employment-based health plans. This Issue Brief focuses on changes to the health care financing and delivery system as implemented by employers. It discusses health plan costs, cost sharing, plan funding, health care delivery systems, services covered under various health plan types, coverage limitations, and retiree health coverage. National health expenditures are estimated at $1.035 trillion, representing 13.6 percent of Gross Domestic Product in 1996, up from $699.5 billion and 12.2 percent in 1990. Rising health care spending is also evident at the employer level: In 1996, employer spending on private health insurance totaled $262.7 billion, up from $61.0 billion in 1980. Business health spending as a percentage of total compensation increased from 3.7 percent in 1980 to a high of 6.6 percent in 1993, and declined to 5.9 percent in 1996. Employment-based health plans are the most common source of health insurance coverage among the nonelderly population in the United States, providing coverage to nearly two-thirds of those under age 65. Despite the growth of many cost-sharing provisions, individuals are paying a smaller percentage of total health care costs. In 1960, 69 percent of private health care expenditures were paid out of pocket. Between 1993 and 1996, only 37 percent of private health expenditures were paid out of pocket. One of the most significant developments of the 1980s, which has continued throughout the 1990s, is the growth of managed care plans. As recently as 1994, traditional indemnity plans were the most commonly offered type of employment-based health plan. As fewer employers offered traditional indemnity plans, participation in these plans declined and participation in managed care plans increased. In 1997, 15 percent of employees participating in a health plan were enrolled in an indemnity plan, compared with 52 percent in 1992. Since 1993, employment-based health benefit cost inflation has been virtually nonexistent. Employers have kept cost increases low by using managed care and making other changes. Workers have been shifted to, have been induced to choose, or have voluntarily selected managed care health plans. Preferred provider organization (PPO) and point-of-service (POS) plans have experienced relatively strong gains in enrollment. Employers have also increased the use of utilization review for active workers, and cut back on health benefits for retirees. These changes are in stark contrast to the pre-1993 period, which saw even faster change, with rising health care costs and increasing deductibles and coinsurance for workers in non-HMOs.
abstract_id: PUBMED:31681482
The state of strategic plans for the health workforce in Africa. Many African countries have a shortage of health workers. As a response, in 2012, the Ministers of Health in the WHO African Region endorsed a Regional Road Map for Scaling Up the Health Workforce from 2012 to 2025. One of the key milestones of the roadmap was the development of national strategic plans by 2014. It is important to assess the extent to which the strategic plans that countries developed conformed with the WHO Roadmap. We examine the strategic plans for human resources for health (HRH) of sub-Saharan African countries in 2015 and assess the extent to which they take into consideration the WHO African Region's Roadmap for HRH. A questionnaire seeking data on human resources for health policies and plans was sent to 47 Member States, and the responses from the 43 countries that returned the questionnaires were analysed. Only 72% had a national plan of action for attaining the HRH target. This did not meet the 2015 target of the WHO Regional Office for Africa's Roadmap. The plans that were available addressed the six areas of the roadmap. Despite all their efforts, countries will need further support to comprehensively implement the six strategic areas to maintain the health workers required for universal health coverage.
abstract_id: PUBMED:27468383
An emerging trend of equal authorship credit in major public health journals. Background: This study aimed to identify the longitudinal trends and characteristics of the practice of explicitly giving equal credit to multiple authors of publications in public health journals. Manual searches were conducted to identify original research articles, published in five public health journals with the highest IFs according to the "2012 JCR Science Edition" between January 1, 2004 and December 31, 2013, which awarded equal credit to multiple authors (Epidemiologic Reviews, Environmental Health Perspectives, the International Journal of Epidemiology, Epidemiology, and the Annual Review of Public Health). The Instructions to Authors in the five journals were also examined with regard to information about giving equal credit to multiple authors.
Findings: Statistically significant increasing trends in annual prevalence were noted for Environmental Health Perspectives and the International Journal of Epidemiology, but not for Epidemiology (r = 0.753, P = 0.012; r = 0.894, P < 0.001; and r = 0.522, P = 0.122, respectively). The first two authors listed in the by-line received equal credit in the majority of articles, but this practice was also extended to authors in nearly every position on the by-line in some publications. The authors given equal credit in articles appearing in Environmental Health Perspectives, International Journal of Epidemiology, and Epidemiology were primarily from European and North American countries. Finally, none of the journals provided specific guidance regarding this practice in their Instructions to Authors.
Conclusions: An emerging trend of giving equal credit to multiple authors is observed in the public health journals. This practice should be better addressed in the guidance provided by journals to authors.
abstract_id: PUBMED:2220862
Employer management of employee health plans. Current trends in employer management of health-care plans are summarized, and the role of managed care is emphasized. Employers realize that employees perceive health-care plans as an important benefit and that attractive benefit plans will help in recruiting and retaining employees. At the same time, companies are faced with the need to find new ways in which to curtail the rising cost of health-benefit plans, which often constitute their largest and most uncontrollable budget item. Such techniques as more effective claims administration, plan design changes, plan financing arrangements, education of employees, and increased employee cost sharing are among the cost-containment actions that employers are implementing. Managed-care networks including health maintenance organizations are often viewed as the ultimate cost-containment model; however, the effectiveness of these evolving arrangements must be rigorously evaluated on the basis of appropriate performance criteria and documentation of achievement. Faced with cost-containment pressures, employers are taking a more assertive and informed role in health-plan purchasing decisions. The future course of health-care delivery will depend in large part on the response of corporate management to the challenge of ensuring that their employees receive quality health care at controllable costs.
Answer: No, all health plans are not created equal. Physicians' views on health plan practices indicate that there are significant differences in how health plans promote or impede the delivery of high-quality care. A survey of physicians in Minneapolis-St Paul, Minnesota, revealed that fewer than 20% of physicians gave the highest rating for health plan practices that promote high-quality care. The survey also highlighted barriers to delivering high-quality care, such as insufficient time with patients, covered benefits and copayment structures, and utilization management practices. Moreover, physicians' willingness to recommend the health plan they rated to their own family varied significantly across the three health plans evaluated in the study, with percentages ranging from 24% to 92% (PUBMED:9302244).
Additionally, the Highmark Blue Cross/Blue Shield Pegasus Project report identified characteristics of highly successful health plans, suggesting that there are benchmarks and best practices that can differentiate health plans in terms of their success (PUBMED:9304906).
Furthermore, the Employee Retirement Income Security Act of 1974 (ERISA) has a broad preemptive effect on state health reforms, which can affect how health plans are regulated and the extent to which they can be shaped by state initiatives. This can lead to variations in health plans based on whether they are self-funded or provide benefits through insurance contracts (PUBMED:10152752).
In the context of consumer-directed health plans (CDHPs), research has shown that for most outpatient services, prices paid by CDHP enrollees did not differ significantly from those in traditional health plans, indicating that the type of health plan may not necessarily lead to price shopping or cost savings (PUBMED:31419862).
Overall, the evidence suggests that health plans can vary widely in terms of quality, practices, and outcomes, and they are not created equal.
Instruction: Are strict isolation policies based on susceptibility testing actually effective in the prevention of the nosocomial spread of multi-drug-resistant gram-negative rods?
Abstracts:
abstract_id: PUBMED:24969125
Are strict isolation policies based on susceptibility testing actually effective in the prevention of the nosocomial spread of multi-drug-resistant gram-negative rods? Background: The emergence of multi-drug-resistant gram-negative rods (MDR-GNRs) has become a worldwide problem. To limit the emergence of MDR-GNRs, a tertiary care cancer center in Japan implemented a policy that requires the pre-emptive isolation of patients with organisms that have the potential to be MDR-GNRs.
Methods: A retrospective analysis was performed. All gram-negative bacillus isolates categorized as intermediate or resistant to at least 2 classes of antimicrobials were subjected to contact precautions. The incidence of patients with MDR-GNRs was analyzed.
Results: There was no difference between the preintervention and intervention time periods in the detection rate of nonfermenting MDR-GNR species (0.15 per 10,000 vs 0.35 per 10,000 patient-days, P = .08). There was an increase in the detection rate of multi-drug-resistant Enterobacteriaceae (0.19 per 10,000 vs 0.56 per 10,000 patient-days, P = .007), which was prominent for extended-spectrum β-lactamase (ESBL)-producing organisms (0.19 per 10,000 vs 0.50 per 10,000 patient-days, P = .02).
Conclusions: Our intervention kept the emergence of multi-drug-resistant non-glucose-fermenting gram-negative bacilli to a small number, but it failed to prevent an increase in ESBL producers. Policies, such as active detection and isolation, are warranted to decrease the incidence of these bacilli.
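As a note on the rate convention quoted in these results, the sketch below shows how an incidence density per 10,000 patient-days is computed from raw counts; the case counts and patient-day denominators are hypothetical values chosen only so the output reproduces the 0.15 and 0.35 figures quoted above, not data from the study.

```python
# Minimal sketch of incidence density per 10,000 patient-days.
# The counts and denominators below are hypothetical illustrations,
# not figures reported in PUBMED:24969125.

def rate_per_10k_patient_days(cases: int, patient_days: int) -> float:
    """Cases divided by total patient-days, scaled to 10,000 patient-days."""
    return cases / patient_days * 10_000

# Hypothetical pre-intervention and intervention periods of equal length.
pre_rate = rate_per_10k_patient_days(cases=3, patient_days=200_000)
post_rate = rate_per_10k_patient_days(cases=7, patient_days=200_000)

print(f"pre-intervention: {pre_rate:.2f} per 10,000 patient-days")   # 0.15
print(f"intervention:     {post_rate:.2f} per 10,000 patient-days")  # 0.35
```

Whether such a difference is statistically meaningful is then assessed with a rate-comparison test, which is what the P values in the abstract report.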
abstract_id: PUBMED:15003666
Distribution of multi-resistant Gram-negative versus Gram-positive bacteria in the hospital inanimate environment. We prospectively studied the difference in detection rates of multi-resistant Gram-positive and multi-resistant Gram-negative bacteria in the inanimate environment of patients harbouring these organisms. Up to 20 different locations around 190 patients were surveyed. Fifty-four patients were infected or colonized with methicillin-resistant Staphylococcus aureus (MRSA) or vancomycin-resistant enterococci (VRE) and 136 with multi-resistant Gram-negative bacteria. The environmental detection rate for MRSA or VRE was 24.7% (174/705 samples) compared with 4.9% (89/1827 samples) for multi-resistant Gram-negative bacteria (P<0.001). Gram-positive bacteria were isolated more frequently than Gram-negatives from the hands of patients (P<0.001) and hospital personnel (P=0.1145). Environmental contamination did not differ between the intensive care units (ICUs) and the general wards (GWs), which is noteworthy because our ICUs are routinely disinfected twice a day, whereas GWs are cleaned just once a day with detergent. Current guidelines for the prevention of spread of multi-resistant bacteria in the hospital setting do not distinguish between Gram-positive and Gram-negative isolates. Our results suggest that the inanimate environment serves as a secondary source for MRSA and VRE, but less so for Gram-negative bacteria. Thus, strict contact isolation in a single room with complete barrier precautions is recommended for MRSA or VRE; however, for multi-resistant Gram-negative bacteria, contact isolation with barrier precautions for close contact but without a single room seems sufficient. This benefits not only the patients, but also the hospital by removing some of the strain placed on already over-stretched resources.
abstract_id: PUBMED:33425500
Frequency of Extensively Drug-Resistant Gram-Negative Pathogens in a Tertiary Care Hospital in Pakistan. Background: Gram-negative bacteria are frequently involved in nosocomial infections. These bacteria have a particular tendency to develop antibiotic resistance and may become extensively drug-resistant (XDR). This study aimed to detect the prevalence of XDR Gram-negative bacteria in a tertiary care hospital in Pakistan. Materials and methods: Clinical samples were obtained from patients admitted to different inpatient wards and sent for microbial analysis and culture. Antibiotic susceptibility testing of isolates was performed by the disk diffusion method to detect XDR strains. Results: Antibiotic susceptibility patterns of a total of 673 clinical samples were studied. Of all bacterial isolates, 64% were extensively drug-resistant. Klebsiella pneumoniae had the highest percentage of XDR isolates (68.4%), followed by Pseudomonas aeruginosa (67.6%) and Escherichia coli (56.1%). Most XDR pathogens were isolated from the burn unit (87.7%), followed by the intensive care unit (69.2%) and surgical unit (68.9%). Conclusions: The rate of extensive drug resistance is alarmingly high, which calls for strict surveillance and control measures to prevent the development of further resistance. Proper sanitation and rational prescription of antibiotics should be ensured.
abstract_id: PUBMED:26435462
Controversies in guidelines for the control of multidrug-resistant Gram-negative bacteria in EU countries. The various guidelines that are available for multidrug-resistant Gram-negative bacteria are useful, and contain broad areas of agreement. However, there are also important areas of controversy between the guidelines in terms of the details of applying contact precautions, single-room isolation and active surveillance cultures, differences in the approach to environmental cleaning and disinfection, and whether or not to perform staff and patient cohorting, healthcare worker screening or patient decolonization. The evidence-base is extremely limited and further research is urgently required to inform an evidence-based approach to multidrug-resistant Gram-negative bacteria prevention and control.
abstract_id: PUBMED:23577496
Emerging issues in the management of infections caused by multi-drug-resistant, gram-negative bacilli. Background: The past decade has witnessed the continued emergence and spread of multidrug resistance in gram-negative bacilli. Infections caused by multi-drug-resistant, gram-negative bacilli lead, in many instances, to increased morbidity and mortality, prolonged hospital stays, and the use of broad-spectrum antibiotics.
Methods: Recent literature from 1990 to the present is reviewed in order to put into perspective the effects of increasing incidences of multi-drug-resistant gram-negative bacilli on patient care.
Results: Factors important in the emergence and spread of multi-drug-resistant gram-negative bacilli include increasing severity of illness in hospitalized patients, poor attention to infection control practices by healthcare personnel, and the large, often indiscriminate use of broad-spectrum antimicrobial agents. Unlike earlier iterations, there is no steady stream of newer antimicrobial agents in development to address the problem. The only broad-spectrum antimicrobial agent with activity against multi-resistant, gram-negative bacilli and with potential to be licensed in the foreseeable future is tigecycline. Tigecycline, the first member of a novel class of antimicrobials, the glycylcyclines, is a structural derivative of minocycline, with potent activity against most gram-positive, gram-negative (excepting Pseudomonas aeruginosa and Proteus spp.) and anaerobic species. Phase 3 trials indicate that tigecycline is effective for treating both complicated skin and skin structure infections, and intra-abdominal infections in hospitalized patients.
Conclusion: Tigecycline promises to be an important addition to our monotherapy armamentarium, complementing essential efforts to promote compliance with good infection control measures and rational use of currently available antimicrobial agents.
abstract_id: PUBMED:30615958
Can guidelines for the control of multi-drug-resistant Gram-negative organisms be put into practice? A national survey of guideline compliance and comparison of available guidelines. Multi-drug-resistant Gram-negative organisms (MDRGNO) are an emerging global threat, reflected in the increasing incidence of infections in Ireland and elsewhere. The response to this threat has been the development of Infection Prevention and Control (IPC) guidelines. A survey of IPC teams in Ireland was undertaken to assess compliance with national guidelines. To place these survey results in context, IPC guidelines from the Irish Health Protection Surveillance Centre (HPSC) are compared with guidelines from Healthcare Infection Society (HIS), European Society of Clinical Microbiology and Infectious Diseases (ESCMID) and Centre for Disease Control (CDC). Thirty-three percent of hospitals responded across a range of hospital types. The results highlight the variability in implementation of guidelines across Ireland, as well as the variability between guidelines internationally. Respondents are less than 90% compliant with the majority of MDRGNO screening guidelines. Hospitals have variable access to isolation facilities with an average of 29% single rooms available (range 2.6-100%), resulting in some patients with MDRGNO not being isolated. Broad variability in application of guidance on personal protective equipment was demonstrated. This survey gives an insight into the real-life applicability of HPSC guidelines. Survey results are placed in context with a comparison of five MDRGNO IPC guidelines. Although core tenets of IPC are standard across guidelines, research into which practices are efficient in reducing MDRGNO transmission while being cost-effective would be worthwhile.
abstract_id: PUBMED:30002821
Variability in contact precautions to control the nosocomial spread of multi-drug resistant organisms in the endemic setting: a multinational cross-sectional survey. Background: Definitions and practices regarding use of contact precautions and isolation to prevent the spread of gram-positive and gram-negative multidrug-resistant organisms (MDRO) are not uniform.
Methods: We conducted an on-site survey during the European Congress on Clinical Microbiology and Infectious Diseases 2014 to assess specific details on contact precaution and implementation barriers.
Results: Attendants from 32 European (EU) and 24 non-EU countries participated (n = 213). Among EU respondents, adherence to contact precautions and isolation was high for methicillin-resistant Staphylococcus aureus (MRSA), carbapenem-resistant Enterobacteriaceae, and MDR A. baumannii (84.7%, 85.7%, and 80%, respectively), whereas only 68% of EU respondents considered any contact precaution measures for extended-spectrum-beta-lactamase (ESBL) producing non-E. coli. Between 30 and 45% of all EU and non-EU respondents did not require health-care workers (HCW) to wear gowns and gloves at all times when entering the room of a patient in contact isolation. Between 10 and 20% of respondents did not consider any rooming specifications or isolation for gram-positive MDRO, and up to 30% of respondents abstained from such interventions for gram-negative MDRO, especially non-E. coli ESBL. Understaffing and lack of sufficient isolation rooms were the most commonly encountered barriers amongst EU and non-EU respondents.
Conclusion: The effectiveness of contact precautions and isolation is difficult to assess due to great variation in components of the specific measures and mixed levels of implementation. The lack of uniform positive effects of contact isolation to prevent transmission may be explained by the variability of interpretation of this term. Indications for contact isolation require a global definition and further sound studies.
abstract_id: PUBMED:18078118
Imipenem resistance in Gram-negative rods and imipenem consumption between 1999 and 2005. Multidrug-resistant Gram-negative rods are increasingly isolated from clinical specimens, especially from hospitalized patients. The aim of this study was to evaluate the prevalence of imipenem-resistant strains of Gram-negative rods isolated at the Dr. A. Jurasz University Hospital in Bydgoszcz between 1999 and 2005, and imipenem consumption in this period. Out of 109,614 isolated microorganisms, Gram-negative rods accounted for 28.5%, of which 637 strains (2.0%) were resistant to imipenem. These strains were isolated mostly from patients hospitalized in intensive care and rehabilitation clinics. Among imipenem-resistant strains, Pseudomonas aeruginosa prevailed (88.9%). P. aeruginosa strains were sensitive to colistin, 45.5% of them to aztreonam, and 44.0% to ceftazidime. The imipenem consumption in the successive years of the study was 805.00; 1201.25; 940.00; 1390.00; 1660.00; 1341.25; and 1841.25 DDD, respectively, and closely paralleled the increasing isolation of imipenem-resistant Gram-negative rods.
abstract_id: PUBMED:24728736
Intrathecal/intraventricular colistin in external ventricular device-related infections by multi-drug resistant Gram negative bacteria: case reports and review. We report three cases of external ventricular device-related infections caused by multidrug-resistant Gram-negative rods and treated successfully with intraventricular colistin. The intrathecal or intraventricular use of colistin has been reported in more than 100 cases without any consensus on the dosage, duration, and type (monotherapy or combination therapy) of treatment. Based on our comprehensive review of the relevant literature relating to both clinical and pharmacokinetic data, we conclude that the intrathecal/intraventricular administration of colistin is a safe and effective option to treat central nervous system infections caused by multidrug-resistant Gram-negative bacteria.
abstract_id: PUBMED:12861084
Multi-resistant Gram-negative bacilli: from epidemics to endemics. Purpose Of Review: Infections due to multi-drug resistant Gram-negative bacilli represent a worrying situation for the management of hospitalized patients. In addition, these bacteria are increasingly involved in epidemics throughout the world. This review focuses on recent data that may help to understand the emergence and dissemination of multi-drug resistant bacilli and the current trend from epidemic to endemic situations.
Recent Findings: Well-established clones enhance their resistance phenotype by acquiring new resistance genes via gene-capture genetic units (plasmids, transposons or integrons), thus facilitating the co-selective process under different antimicrobial selective pressures and therefore the long-term persistence of organisms in selective environments. Not only are resistant bacterial clones selected, but so are the genetic structures carrying their resistance genes. Therefore, the current epidemiology of multi-drug resistant bacilli is focused not only on bacterial clones but also on any kind of resistance-gene capture unit. In this scenario, a multiclonal population structure of bacterial organisms corresponds to a collection of different strains sharing resistance genes carried by horizontally transferred genetic structures. As different strains tend to prefer different environments, this concept helps to explain why the epidemiology of multi-drug resistant Gram-negative bacilli is moving from epidemics to endemics.
Summary: The emergence and spread of multi-drug resistant bacilli in the nosocomial setting should be understood in terms of a complex interplay of bacterial clonality, resistance genes and genetic structures promoting rapid dissemination of antimicrobial resistance. Intervention strategies in the forthcoming scenario should identify existing epidemic and/or endemic situations involving clonal organisms or resistance genes carried by epidemic gene capture units.
Answer: Strict isolation policies based on susceptibility testing have been implemented in various healthcare settings to prevent the nosocomial spread of multi-drug-resistant gram-negative rods (MDR-GNRs), but their effectiveness is subject to debate and appears to vary.
A study conducted at a tertiary care cancer center in Japan found that while their intervention kept the emergence of multi-drug-resistant non-glucose-fermenting gram-negative bacilli to a small number, it failed to prevent an increase in extended-spectrum β-lactamase (ESBL) producers (PUBMED:24969125). This suggests that while strict isolation policies may have some impact, they may not be universally effective against all types of MDR-GNRs.
Another study highlighted that the inanimate environment serves as a secondary source for MRSA and VRE, but less so for Gram-negative bacteria. On that basis, its authors recommended strict contact isolation in a single room with complete barrier precautions for MRSA or VRE, whereas for multi-resistant Gram-negative bacteria, contact isolation with barrier precautions for close contact but without a single room seemed sufficient (PUBMED:15003666).
The variability in the implementation of guidelines and the lack of uniform positive effects of contact isolation to prevent transmission may be explained by the variability of interpretation of this term (PUBMED:30002821). Additionally, a national survey in Ireland showed variability in the implementation of guidelines across hospitals, with less than 90% compliance with the majority of MDR-GNR screening guidelines (PUBMED:30615958).
Furthermore, the evidence base for the effectiveness of these policies is extremely limited, and further research is urgently required to inform an evidence-based approach to MDR-GNR prevention and control (PUBMED:26435462).
In summary, while strict isolation policies based on susceptibility testing are a component of the strategy to prevent the spread of MDR-GNRs, their effectiveness is not absolute and may vary depending on the specific organism and healthcare setting. There is a need for more research to determine the most effective and efficient practices for controlling the spread of these organisms.
Instruction: Are school employees role models of healthful eating?
Abstracts:
abstract_id: PUBMED:27782781
Co-creating healthful eating behaviors with very young children: The impact of information overload on primary caregivers. Primary caregivers of very young children are subject to excessive and often disparate information regarding the instilling of healthful eating behaviors. Our study focuses on the integration of the operant resources of primary caregivers (i.e., their knowledge and modeling skills) and those of their very young children (i.e., their self-regulation of energy intake and food preferences) to co-create healthful eating behaviors as a measure to curb overweight and obesity in adulthood. Our two-stage qualitative study makes original contributions demonstrating that primary caregivers' efforts to co-create healthful eating behaviors with their very young children are adversely affected by information overload.
abstract_id: PUBMED:15127066
Coordinated school health program and dietetics professionals: partners in promoting healthful eating. Although research indicates that school meal programs contribute to improved academic performance and healthier eating behaviors for students who participate, fewer than 60% of students choose the National School Lunch Program or School Breakfast Program. School meal programs have a difficult time competing with foods that are marketed to young people through sophisticated advertising campaigns. Youth's preferences for fast foods, soft drinks, and salty snacks; mixed messages sent by school personnel; school food preparation and serving space limitations; inadequate meal periods; and lack of education standards for school foodservice directors challenge school meal programs as well. A coordinated school health program offers a framework for meeting these challenges and provides children and adolescents with the knowledge and skills necessary for healthful eating. This article identifies challenges facing school foodservice directors in delivering healthful meals and acquaints dietetics professionals with the coordinated school health program to be used as a tool for addressing unhealthful weight gain and promoting healthful eating.
abstract_id: PUBMED:19699834
Are school employees role models of healthful eating? Dietary intake results from the ACTION worksite wellness trial. Background: Little is known about the dietary intake of school employees, a key target group for improving school nutrition.
Objective: To investigate selected dietary variables and weight status among elementary school personnel.
Design: Cross-sectional, descriptive study.
Subjects/setting: Elementary school employees (n=373) from 22 schools in a suburban parish (county) of southeastern Louisiana were randomly selected for evaluation at baseline of ACTION, a school-based worksite wellness trial.
Methods: Two 24-hour dietary recalls were administered on nonconsecutive days by registered dietitians using the Nutrition Data System for Research. Height and weight were measured by trained examiners and body mass index calculated as kg/m².
Statistical Analyses Performed: Descriptive analyses characterized energy, macronutrient, fiber, and MyPyramid food group consumption. Inferential statistics (t tests, analysis of variance, chi-square tests) were used to examine differences in intake and compliance with recommendations by demographic and weight status categories.
Results: Approximately 31% and 40% of the sample were overweight and obese, respectively, with higher obesity rates than state and national estimates. Mean daily energy intake among women was 1,862±492 kcal and among men was 2,668±796 kcal. Obese employees consumed more energy (+288 kcal, P<0.001) and more energy from fat (P<0.001) than those who were normal weight. Approximately 45% of the sample exceeded dietary fat recommendations. On average, only 9% had fiber intakes at or above their Adequate Intake, which is consistent with the finding that more than 25% of employees did not eat fruit, 58% did not eat dark-green vegetables, and 45% did not eat whole grains on the recalled days. Only 7% of employees met the MyPyramid recommendations for fruits or vegetables, and 14% of the sample met those for milk and dairy foods.
Conclusions: These results suggest that greater attention be directed to understanding and improving the diets of school employees given their high rates of overweight and obesity, poor diets, and important role in student health.
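For context on the anthropometric measure used in this and several other abstracts, the sketch below shows the BMI calculation (weight in kilograms divided by height in metres squared) together with the conventional adult weight-status cutoffs; the example measurement is hypothetical, not a data point from the ACTION trial.

```python
# Minimal sketch: BMI (kg/m^2) and the conventional adult weight-status bands.
# The example height and weight are hypothetical, not ACTION trial data.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def weight_status(value: float) -> str:
    """Standard adult BMI categories (WHO cutoffs)."""
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal weight"
    if value < 30.0:
        return "overweight"  # roughly 31% of the ACTION sample fell in this band
    return "obese"           # roughly 40% of the sample was at or above 30

example = bmi(weight_kg=82.0, height_m=1.65)  # hypothetical employee
print(f"BMI {example:.1f}: {weight_status(example)}")  # BMI 30.1: obese
```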
abstract_id: PUBMED:28728759
School Nurses' Experiences and Perceptions of Healthy Eating School Environments. School nurses provide health promotion and health services within schools, as healthy children have a greater potential for optimal learning. One of the school nurses' roles is encouraging healthy eating and increasing the availability of fruits and vegetables in the school. The purpose of this study was to explore and describe school nurses' perceptions of their role in promoting increased fruit and vegetable consumption in the school setting. One avenue to increased availability of fruits and vegetables in schools is Farm to School programs mandated by the Federal government to improve the health of school children. School nurses are optimally positioned to work with Farm to School programs to promote healthy eating. A secondary aim was to explore school nurses' knowledge, experiences and/or perceptions of the Farm to School program to promote fruit and vegetable consumption in the school setting. Three themes emerged from the focus groups: If There Were More of Me, I Could Do More; Food Environment in Schools; School Nurses Promote Health. School nurses reported that they addressed health issues more broadly in their roles as educator, collaborator, advocate, and model of healthy behaviors. Most of the participants knew of Farm to School programs, but only two school nurses worked in schools that participated in the program. Consequently, the participants reported having little or no experience with the Farm to School programs.
abstract_id: PUBMED:23797808
Parent conversations about healthful eating and weight: associations with adolescent disordered eating behaviors. Importance: The prevalence of weight-related problems in adolescents is high. Parents of adolescents may wonder whether talking about eating habits and weight is useful or detrimental.
Objective: To examine the associations between parent conversations about healthful eating and weight and adolescent disordered eating behaviors.
Design: Cross-sectional analysis using data from 2 linked multilevel population-based studies.
Setting: Anthropometric assessments and surveys completed at school by adolescents and surveys completed at home by parents in 2009-2010.
Participants: Socioeconomically and racially/ethnically diverse sample (81% ethnic minority; 60% low income) of adolescents from Eating and Activity in Teens 2010 (EAT 2010) (n = 2793; mean age, 14.4 years) and parents from Project Families and Eating and Activity in Teens (Project F-EAT) (n = 3709; mean age, 42.3 years). Exposure: Parent conversations about healthful eating and weight/size.
Main Outcomes And Measures: Adolescent dieting, unhealthy weight-control behaviors, and binge eating.
Results: Mothers and fathers who engaged in weight-related conversations had adolescents who were more likely to diet, use unhealthy weight-control behaviors, and engage in binge eating. Overweight or obese adolescents whose mothers engaged in conversations that were focused only on healthful eating behaviors were less likely to diet and use unhealthy weight-control behaviors. Additionally, subanalyses with adolescents with data from 2 parents showed that when both parents engaged in healthful eating conversations, their overweight or obese adolescent children were less likely to diet and use unhealthy weight-control behaviors.
Conclusions And Relevance: Parent conversations focused on weight/size are associated with increased risk for adolescent disordered eating behaviors, whereas conversations focused on healthful eating are protective against disordered eating behaviors.
abstract_id: PUBMED:16833096
Eating disorders prevention: does school have a role to play? Eating disorders are widespread among adolescents in Western countries. Some prevention programs have been developed. They target knowledge about eating disorders, eating attitudes and behaviour, media literacy, and self-esteem. Unfortunately, most eating disorder prevention programs have not demonstrated long-term efficacy in decreasing eating pathology. Another strategy to prevent eating disorders would be to integrate obesity into the objectives of these programs. Such interventions would help to reach a greater number of adolescents and to avoid the side effects of both preventive efforts.
abstract_id: PUBMED:24735212
A case study of middle school food policy and persisting barriers to healthful eating. Decreasing access to competitive foods in schools has produced only modest effects on adolescents' eating patterns. This qualitative case study investigated persistent barriers to healthful eating among students attending an ethnically diverse middle school in a working-class urban neighborhood that had banned on campus competitive food sales. Participant observations, semi-structured interviews and document reviews were conducted. Unappealing school lunches and easily accessible unhealthful foods, combined with peer and family influences, increased the appeal of unhealthy foods. Areas for further inquiry into strategies to improve urban middle school students' school and neighborhood food environments are discussed.
abstract_id: PUBMED:29770461
Psychometric properties and factor structure of the adapted Self-Regulation Questionnaire assessing autonomous and controlled motivation for healthful eating among youth with type 1 diabetes and their parents. Background: The purpose of this cross-sectional study was to examine the psychometric properties of 2 adapted Self-Regulation Questionnaire (SRQ) measures assessing the motivation internalization of youth with type 1 diabetes for healthful eating and of their parents for providing healthy meals for the family.
Methods: External validity of the adapted SRQ was evaluated with respect to healthy eating attitudes (healthful eating self-efficacy, barriers, and outcome expectations) assessed by questionnaire, diet quality (Healthy Eating Index-2005 [HEI-2005]; Nutrient-Rich Foods Index 9.3 [NRF9.3]; Whole Plant Food Density [WPFD]) assessed by 3-day food records, and body mass index assessed by measured height and weight in youth with type 1 diabetes (N = 136; age 12.3 ± 2.5 years) and their parents.
Results: Exploratory factor analysis with varimax rotation yielded a 2-factor structure with the expected autonomous and controlled motivation factors for both youth and parents. Internal consistencies of subscales were acceptable (α = .66-.84). Youth autonomous and controlled motivation were positively correlated overall (r = 0.30, p < .001); however, in analyses stratified by age (<13 vs. ≥13 years), the correlation was not significant for youth ≥13 years. Autonomous motivation was significantly associated (p < .001) with greater self-efficacy (youth: r = 0.39, parent: r = 0.36), positive outcome expectations (youth: r = 0.30, parent: r = 0.35), and fewer barriers to healthful eating (youth: r = -0.36, parent: r = -0.32). Controlled motivation was positively correlated with negative outcome expectations for parents (r = 0.29, p < .01) and both positive (r = 0.28, p < .01) and negative (r = 0.34, p < .001) outcome expectations for youth. Autonomous motivation was positively associated (p < .05) with diet quality indicators for parents (NRF9.3 r = 0.22; WPFD r = 0.24; HEI-2005 r = 0.22) and youth ≥13 years (NRF9.3 r = 0.26) but not youth <13 years. Among parents, but not youth, body mass index was associated negatively with autonomous motivation (r = -.33, p < .001) and positively with controlled motivation (r = .27, p < .01).
Conclusions: Findings provide initial support for the SRQ in this population and suggest potential developmental differences in the role of motivation on healthful eating among children, adolescents, and adults.
abstract_id: PUBMED:38279694
Eat healthy, feel better: Are differences in employees' longitudinal healthy-eating trajectories reflected in better psychological well-being? Eating healthily in terms of fruit and vegetable consumption has beneficial effects for employees and their organisations. Yet, we know little about how employees' eating behaviour develops over longer periods of time (trajectories) or about how subgroups of employees in these trajectories differ (trajectory classes). Gaining such insights is critical to understand how employees address healthy eating recommendations over time as well as to develop individualised interventions that also consider the development of healthy eating (i.e. improvement versus impairment beyond mean levels). We analysed panel data (Longitudinal Internet Studies for the Social Sciences) from 1054 employees by means of growth mixture modelling. Our analyses revealed three relevant classes of healthy-eating trajectories: a favourable trajectory class, an unfavourable trajectory class and a strongly improving trajectory class. Furthermore, unfavourable healthy-eating trajectories were especially critical with respect to impaired psychological well-being. Specifically, we found robust results for impaired positive and negative affect, but not for self-esteem, in the unfavourable trajectory class. We discuss limitations and implications of these findings, thereby encouraging research and practice to further consider such fine-grained approaches (i.e. focusing on subgroups within a larger population) when addressing healthy-eating promotion over time.
abstract_id: PUBMED:22118997
Engaging youth in learning about healthful eating and active living: an evaluation of Educational Theater Programs. Objective: To compare knowledge gains and knowledge retention of healthful eating and active living behaviors in elementary school children participating in Educational Theatre Programs (ETP).
Methods: The study sample included 47 schools (2,915 third- or fourth-grade students) in 8 Kaiser Permanente regions. Children's knowledge of 4 healthful behaviors was measured using a brief survey before and immediately after performances, followed by a post-delay survey 3 weeks later.
Results: Statistically significant increases in knowledge pre/post for individual topics (P < .01). The percentage of children who answered all 4 questions correctly increased from 17% to 63% immediately after ETP performances (P < .01). Knowledge was retained over the short term; the proportion of students who correctly identified recommended healthful eating/active living practices had declined only slightly 3 weeks later.
Conclusions And Implications: The ETP significantly improved participating students' knowledge of healthful eating and active living concepts. Theater programs should be considered for inclusion in elementary school health curricula.
Answer: The available abstracts suggest that school employees may not consistently be role models of healthful eating. According to the study reported in abstract PUBMED:19699834, approximately 31% and 40% of the sample of elementary school employees were overweight and obese, respectively, with higher obesity rates than state and national estimates. The study also found that obese employees consumed more energy and more energy from fat than those who were normal weight, and a significant portion of the sample exceeded dietary fat recommendations. Additionally, a large percentage of employees did not consume recommended amounts of fruits, vegetables, and whole grains. These findings indicate that school employees' dietary habits may not align with healthful eating guidelines, which could impact their ability to serve as positive role models for students in terms of nutrition.
Furthermore, abstract PUBMED:28728759 highlights that school nurses, who are in a position to promote healthy eating in schools, reported that they addressed broader health issues and were aware of programs like Farm to School, but only a few actually worked in schools that participated in such programs. This suggests that while school nurses may have the potential to be role models and advocates for healthful eating, there may be limitations in their capacity to do so due to program participation and other factors.
In summary, while school employees, including school nurses, have the potential to be role models for healthful eating, the evidence suggests that there are challenges and barriers that may prevent them from consistently demonstrating healthful eating behaviors themselves. This could limit their effectiveness as role models for students in promoting healthy dietary habits. |